🤖 GPT

Custom Python Code Interpreter

28 views · Tested · Tips · 4 examples
This prompt is designed for anyone who wants a sophisticated Python script that functions as a personal code interpreter. You input code and receive detailed, step-by-step explanations of its output, with clear comments that break down each part of the process. The script not only helps you understand your code but also identifies common errors and offers actionable suggestions for improvement, and it includes advanced features like logging.
$5.99
After purchasing, you will gain access to the prompt file which you can use with GPT or the app builder. You'll receive 20 free generation credits with this purchase. By purchasing this prompt, you agree to our terms of service.
Over 1 month ago

Prompt Details

Model
Chat - GPT-4o (gpt-4o)
Token size
133 ($0.00665 / call)
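(Taking the listed figures at face value and assuming a flat per-token rate, the math works out to $0.00665 / 133 tokens = $0.00005 per token.)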
Example input
[code_type]: machine learning
[error_type]: data shape mismatches
[specific_area]: feature scaling and normalization
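The purchased prompt file itself is not shown in this listing, so the template below is a hypothetical stand-in. It is only a minimal sketch of how these three placeholders might be substituted and sent to the listed model (gpt-4o) with the official OpenAI Python SDK; the template wording and variable names are assumptions, not the actual prompt.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-in for the purchased prompt template.
template = (
    "Act as a custom Python code interpreter for {code_type} code. "
    "Explain the output step by step, and focus on {error_type}, "
    "especially around {specific_area}."
)

# Fill the placeholders with the example input values from this listing.
prompt = template.format(
    code_type="machine learning",
    error_type="data shape mismatches",
    specific_area="feature scaling and normalization",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```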
Example output
Certainly! Below is a Python script that acts as a machine learning interpreter. This script executes machine learning code, provides step-by-step explanations, and offers error handling for common issues like data shape mismatches. It also includes optional logging and performance monitoring.

```python
import logging
import time

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Enable logging functionality (Optional)
logging.basicConfig(filename='ml_interpreter.log', level=logging.INFO,
                    format='%(asctime)s - %(levelname)s - %(message)s')


class MLInterpreter:
    def __init__(self, log=True, performance_monitoring=True):
        self.log = log
        self.performance_monitoring = performance_monitoring

    def log_message(self, message):
        """Logs a message if logging is enabled."""
        if self.log:
            logging.info(message)
        print(message)

    def monitor_performance(self, start_time, description):
        """Monitors and logs performance time if enabled."""
        if self.performance_monitoring:
            elapsed_time = time.time() - start_time
            self.log_message(f"{description} took {elapsed_time:.4f} seconds")

    def execute_code(self, code: str):
        """Executes machine learning code and provides detailed explanations."""
        # Start performance monitoring
        start_time = time.time()
        try:
            # Dynamically execute the code provided
            exec(code, globals())
            self.monitor_performance(start_time, "Code execution")
            self.log_message("Code executed successfully.")
        except Exception as e:  # Catching general exceptions
            self.log_message(f"Error: {str(e)}")
            self.suggest_fixes(e)

    def suggest_fixes(self, error):
        """Provides suggestions for common errors."""
        error_message = str(error).lower()
        if 'shape' in error_message:
            self.log_message("Suggestion: There seems to be a shape mismatch. "
                             "Check the dimensions of your input data (X) and "
                             "target labels (y). They must align.")
        elif 'scaling' in error_message or 'normalization' in error_message:
            self.log_message("Suggestion: Consider applying feature scaling or "
                             "normalization. You can use StandardScaler or "
                             "MinMaxScaler from sklearn to preprocess your features.")
        else:
            self.log_message("Suggestion: Please check your code for common "
                             "issues or syntax errors.")

    def feature_scaling(self, X, method='standard'):
        """Applies feature scaling based on the specified method."""
        start_time = time.time()
        if method == 'standard':
            self.log_message("Applying StandardScaler for feature scaling.")
            scaler = StandardScaler()
        elif method == 'minmax':
            self.log_message("Applying MinMaxScaler for feature scaling.")
            scaler = MinMaxScaler()
        else:
            self.log_message("Invalid scaling method provided. Defaulting to StandardScaler.")
            scaler = StandardScaler()
        X_scaled = scaler.fit_transform(X)
        self.monitor_performance(start_time, "Feature scaling")
        return X_scaled

    def handle_data_split(self, X, y, test_size=0.2, random_state=None):
        """Handles the splitting of data into training and testing sets."""
        start_time = time.time()
        self.log_message(f"Splitting data with test size = {test_size} "
                         f"and random state = {random_state}.")
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=test_size, random_state=random_state)
        self.monitor_performance(start_time, "Data splitting")
        return X_train, X_test, y_train, y_test

    def evaluate_model(self, model, X_test, y_test):
        """Evaluates the model and provides accuracy."""
        start_time = time.time()
        self.log_message("Evaluating the model on the test set.")
        predictions = model.predict(X_test)
        accuracy = accuracy_score(y_test, predictions)
        self.monitor_performance(start_time, "Model evaluation")
        self.log_message(f"Model accuracy: {accuracy:.4f}")
        return accuracy


# Example usage:
if __name__ == "__main__":
    interpreter = MLInterpreter()

    # Sample machine learning code
    code = """
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Load dataset
iris = load_iris()
X = iris.data
y = iris.target

# Feature Scaling
X_scaled = interpreter.feature_scaling(X, method='standard')

# Splitting data
X_train, X_test, y_train, y_test = interpreter.handle_data_split(
    X_scaled, y, test_size=0.3, random_state=42)

# Training the model
model = LogisticRegression()
model.fit(X_train, y_train)

# Evaluate the model
interpreter.evaluate_model(model, X_test, y_test)
"""
    interpreter.execute_code(code)
```

### Features of the Script:
1. **Dynamic Code Execution**: The `execute_code` method allows users to pass in any machine learning code as a string. The code is executed, and explanations are logged and printed.
2. **Error Handling**: The script captures common errors like shape mismatches and offers suggestions to fix them. For example, it recommends checking input data shapes or applying feature scaling.
3. **Feature Scaling**: A utility function `feature_scaling` is included to handle feature scaling with options for `StandardScaler` and `MinMaxScaler`.
4. **Data Splitting**: The `handle_data_split` method uses `train_test_split` to divide data into training and test sets, with logging to track the process.
5. **Model Evaluation**: The `evaluate_model` method calculates and logs the accuracy of the model on the test data.
6. **Logging**: The script has optional logging functionality to keep track of the execution process in a file.
7. **Performance Monitoring**: The script can monitor the time taken for each major operation (e.g., feature scaling, data splitting) and log the duration.

### Customization:
- **Logging**: You can enable or disable logging by setting the `log` parameter in the `MLInterpreter` class.
- **Performance Monitoring**: You can control performance monitoring by setting the `performance_monitoring` parameter.
- **Feature Scaling**: The method for scaling features can be customized (e.g., `standard` or `minmax`).

This script should make it easier to understand and debug machine learning code by providing step-by-step explanations and guidance.