PromptBase
Generate Test Cases to Test GPU Platform

3 views · Tested · Tips · 4 examples
Test cases to test a GPU machine
$4.99
Get prompt
After purchasing, you will gain access to the prompt file which you can use with GPT or the app builder. You'll receive 20 free generation credits with this purchase. By purchasing this prompt, you agree to our terms of service.
1 week ago

Prompt Details

Model
Chat - GPT-4o (gpt-4o)
Token size
95 ($0.00475 / call)
Example input
[Sub-category]: Data Cleaning [Category]: Data Preprocessing & Manipulation
Example output
Certainly! Here are 20 targeted test cases to help assess the stability and performance of your GPU virtualization system, focusing on data cleaning and preprocessing tasks commonly encountered in data science workflows.

| **Description** | **Expected Outcome** |
|-----------------|----------------------|
| **1. Load large CSV file onto GPU memory** | Data is successfully loaded into GPU memory without errors or memory leaks; GPU memory usage remains stable. |
| **2. Perform null value imputation on GPU for large dataset** | Null values are accurately filled without system crashes or memory issues; GPU utilization is efficient. |
| **3. Execute one-hot encoding on a high-dimensional categorical dataset** | Encoding completes within expected time limits; GPU memory is optimally used, and no errors arise. |
| **4. Scale numerical features using Min-Max scaling on GPU** | Scaling operations execute smoothly with minimal latency; the GPU handles boundary values correctly without overflows. |
| **5. Standardize dataset with mixed feature types (categorical, numerical)** | The system can differentiate and apply correct transformations; execution is stable, and resource use remains balanced. |
| **6. Apply text preprocessing (tokenization) with GPU acceleration** | Text tokenization completes with no errors, and GPU memory usage scales according to dataset size. |
| **7. Batch process image resizing and augmentation on GPU** | Images are resized and augmented without GPU memory fragmentation or processing bottlenecks. |
| **8. Execute batch normalization for a large dataset on GPU** | GPU handles batch normalization without drift or numerical instability; resource allocation is efficient. |
| **9. Drop rows with specific conditions (e.g., rows with missing values)** | Operation completes without issues, and GPU memory is released post-operation; timing and memory metrics are within expected bounds. |
| **10. Merge two large datasets using common keys on GPU** | Merging completes efficiently; no issues arise from data transfer between CPU and GPU, and memory is managed correctly. |
| **11. Conduct string parsing and cleaning in a large text dataset** | String operations are processed without GPU crashes, and memory usage remains stable; results are accurate with no data corruption. |
| **12. Encode labels in a classification dataset using label encoding** | Labels are encoded accurately with no unexpected errors; GPU utilization aligns with data size. |
| **13. Filter rows based on specific column values on GPU** | Rows are filtered accurately; no delays in data retrieval, and GPU performance remains stable. |
| **14. Calculate descriptive statistics (mean, median, mode) on a large dataset** | Statistics are calculated without overloading the GPU, and results match expected values. |
| **15. Handle outliers with GPU-accelerated clipping for numerical columns** | Outliers are clipped without affecting other data points; memory is managed well, and no errors occur. |
| **16. Execute date-time parsing and conversion on a timestamped dataset** | Date-time parsing completes correctly without crashes, with minimal latency; memory usage is consistent. |
| **17. Concatenate multiple datasets with different dimensions on GPU** | Concatenation handles dimensional mismatches gracefully; operation completes with no crashes or memory issues. |
| **18. Perform PCA (Principal Component Analysis) for dimensionality reduction** | PCA completes without GPU overloading, and the results are consistent with CPU-based computations. |
| **19. Apply data binning on continuous variables using GPU resources** | Binning is accurate, with no memory leaks; GPU usage scales with data size without performance degradation. |
| **20. Remove duplicate records in a large dataset** | Duplicates are identified and removed without causing memory bottlenecks; execution is stable, and GPU utilization is efficient. |

These test cases aim to cover a variety of data manipulation tasks with a focus on ensuring efficient GPU-CPU interactions, reliable CUDA execution, and proper resource management under virtualized conditions. Each test case should help identify potential vulnerabilities and areas for optimization in the virtualization setup.
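The output above describes each test only at a behavioral level. As a rough illustration of how the first two cases might be automated, here is a minimal sketch assuming a RAPIDS environment (cuDF and CuPy) on the virtualized GPU; the library choice, file path, and 256 MiB leak tolerance are assumptions, not part of the prompt or its output.

```python
# Sketch of test cases 1 and 2: CSV load into GPU memory and null imputation.
# Assumes cuDF/CuPy are installed and RMM uses its default (non-pooling) allocator,
# so freed dataframes return memory to the driver.
import cudf
import cupy as cp


def gpu_memory_used_bytes() -> int:
    """Bytes of GPU memory currently in use on the active device."""
    free, total = cp.cuda.runtime.memGetInfo()
    return total - free


def test_csv_load_and_null_imputation(csv_path: str = "large_dataset.csv") -> None:
    baseline = gpu_memory_used_bytes()

    # Test 1: load a large CSV file directly into GPU memory.
    df = cudf.read_csv(csv_path)
    assert len(df) > 0, "dataset loaded empty"

    # Test 2: impute nulls in numeric columns with the column mean.
    numeric_cols = [c for c in df.columns if df[c].dtype.kind in "if"]
    for col in numeric_cols:
        # Cast to float64 so integer columns can accept a fractional mean.
        df[col] = df[col].astype("float64").fillna(df[col].mean())
    remaining = int(df[numeric_cols].isnull().sum().sum())
    assert remaining == 0, f"{remaining} nulls remain after imputation"

    # Drop the dataframe and confirm memory returns close to the baseline;
    # the 256 MiB slack is an arbitrary placeholder tolerance.
    del df
    leak = gpu_memory_used_bytes() - baseline
    assert leak < 256 * 1024 * 1024, f"possible GPU memory leak: {leak} bytes"
```

Run under pytest on the target GPU instance; comparable checks can be written for the remaining cases by swapping in the relevant cuDF operation and keeping the same memory bookkeeping.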