Prompt Details
Model
Chat - (o3)
Token size
129 ($0.01677 / call)
Example input
[PROJECT_DESCRIPTION]: a sentiment analysis tool that analyzes customer feedback from text reviews and categorizes them as positive, negative, or neutral with confidence scores
[TECH_STACK]: Python with scikit-learn or a pre-trained NLP model
Example output
# ================================================
# sentiment-analyzer (Python 3.12)
# ================================================
# Goal: Classify free‑form customer feedback as Positive / Negative / Neutral with confidence.
# Approach: Use Hugging Face transformers pipeline with a domain‑tuned RoBERTa model.
# ================================================
# Folder layout (all files defined below):
# ├─ requirements.txt
# ├─ README.md
# └─ src
#    ├─ config.py   ← runtime settings
#    ├─ model.py    ← wraps the HF pipeline
#    ├─ cli.py      ← command‑line batch analyzer
#    ├─ service.py  ← optional FastAPI microservice
#    └─ utils.py    ← helper for CSV/JSON I/O
# ================================================
# ================================================
# requirements.txt
# ================================================
transformers==4.40.1
torch>=2.1.2 # see pytorch.org for CPU-only or CUDA-specific wheels
pandas==2.2.2
fastapi==0.110.2
uvicorn==0.29.0
scikit-learn==1.4.2 # not strictly needed but handy for metrics
python-dotenv==1.0.1
rich==13.7.1
# ================================================
# README.md (excerpt)
# ================================================
# 1. Install & run CLI
# python -m venv .venv && source .venv/bin/activate
# pip install -r requirements.txt
# python -m src.cli --input reviews.csv --text-column comment
#
# 2. Start REST API
# uvicorn src.service:app --reload --port 8000
#
# 3. Example cURL
# curl -X POST http://localhost:8000/analyze -d '{"text":"Loved the product!"}' -H "Content-Type: application/json"
# ================================================
# src/config.py
# ================================================
from pathlib import Path
import os

from dotenv import load_dotenv

BASE_DIR = Path(__file__).resolve().parent
load_dotenv(BASE_DIR.parent / '.env', override=True)

MODEL_NAME = os.getenv(
    'HF_MODEL', 'cardiffnlp/twitter-roberta-base-sentiment'  # 3‑class POS/NEU/NEG
)
DEVICE = 0 if os.getenv('GPU', 'false').lower() == 'true' else -1  # -1 ⇒ CPU
BATCH_SIZE = int(os.getenv('BATCH_SIZE', 32))
# ================================================
# src/model.py
# ================================================
"""Centralised model loader so the weights are loaded once per process."""
from functools import lru_cache

from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

from .config import BATCH_SIZE, DEVICE, MODEL_NAME


@lru_cache
def get_pipeline():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
    return pipeline('sentiment-analysis', model=model, tokenizer=tokenizer, device=DEVICE)


def predict(texts: list[str]):
    """Return a list of dicts: {label: str, score: float}."""
    pipe = get_pipeline()
    return pipe(texts, truncation=True, batch_size=BATCH_SIZE)
# ================================================
# src/utils.py
# ================================================
import os

import pandas as pd
from rich.console import Console
from rich.table import Table

console = Console()


def load_texts(path: str | os.PathLike, column: str = 'text') -> tuple[list[str], pd.DataFrame]:
    df = pd.read_csv(path)
    if column not in df.columns:
        raise ValueError(f"Column '{column}' not found in CSV")
    return df[column].astype(str).tolist(), df


def render_preview(preds: list[dict], texts: list[str], k: int = 10):
    table = Table("Text", "Label", "Confidence", title="Sample Sentiment Preview")
    for t, p in zip(texts[:k], preds[:k]):
        table.add_row(t[:60] + ('…' if len(t) > 60 else ''), p['label'], f"{p['score']:.2f}")
    console.print(table)
# ================================================
# src/cli.py
# ================================================
"""Run sentiment analysis on a CSV and append predictions."""
import argparse

from .model import predict
from .utils import load_texts, render_preview
parser = argparse.ArgumentParser(description="Batch sentiment analyzer")
parser.add_argument('--input', required=True, help='CSV file with text column')
parser.add_argument('--text-column', default='text', help='Column name containing review text')
parser.add_argument('--output', default='output.csv', help='Destination CSV (default: output.csv)')
args = parser.parse_args()
texts, df = load_texts(args.input, args.text_column)
preds = predict(texts)
labels = [p['label'] for p in preds]
scores = [p['score'] for p in preds]
df['sentiment'] = labels
df['confidence'] = scores
df.to_csv(args.output, index=False)
render_preview(preds, texts)
print(f"✅ Analysis complete – results saved to {args.output}")
# ================================================
# src/service.py
# ================================================
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

from .model import predict

app = FastAPI(title="Sentiment Analyzer API")


class TextIn(BaseModel):
    text: str


class PredictionOut(BaseModel):
    label: str
    score: float


@app.post('/analyze', response_model=PredictionOut)
async def analyze(payload: TextIn):
    try:
        res = predict([payload.text])[0]
        return {'label': res['label'], 'score': res['score']}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
---
A turnkey **Python “sentiment-analyzer” codebase**—with CLI and optional REST API—is now in the canvas. It uses a RoBERTa model fine-tuned on 3-class sentiment data, giving **Positive / Neutral / Negative** plus confidence scores.
### External stack
| Dependency | Purpose |
|------------|---------|
| **transformers + torch** | Load/run the pretrained model |
| **pandas** | Batch CSV ingestion & export |
| **rich** | Pretty CLI table preview |
| **fastapi + uvicorn** | Lightweight HTTP endpoint |
| **python-dotenv** | Override model/device settings via `.env` |
| **scikit-learn** | (Optional) evaluation utilities |
All pinned in **requirements.txt**.
### Quick-start (CLI)
```bash
git clone <repo>
cd sentiment-analyzer
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
# run on your reviews CSV (needs a “comment” column)
python -m src.cli --input reviews.csv --text-column comment --output scored.csv
```
You’ll get a rich preview in-terminal and a new CSV with **sentiment** and **confidence** columns.
### Run as an API
```bash
uvicorn src.service:app --reload --port 8000
```
POST JSON like:
```json
{ "text": "The support team solved my issue in minutes—fantastic!" }
```
and receive a response of this shape (the exact label strings depend on the configured checkpoint):
```json
{ "label": "POS", "score": 0.97 }
```
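For programmatic access, a minimal stdlib client might look like the following (a sketch: `analyze` and `build_payload` are hypothetical helpers, and the URL assumes the uvicorn command above is running locally):

```python
import json
import urllib.request


def build_payload(text: str) -> bytes:
    """Serialize the JSON body the /analyze endpoint expects."""
    return json.dumps({"text": text}).encode("utf-8")


def analyze(text: str, url: str = "http://localhost:8000/analyze") -> dict:
    """POST one review to the running service and return the parsed prediction."""
    req = urllib.request.Request(
        url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Using `urllib` keeps the client dependency-free; swap in `requests` or `httpx` if they are already part of your stack.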
### Config tweaks
* **Model**: set `HF_MODEL` in `.env` to any Hugging Face sentiment checkpoint (e.g. `nlptown/bert-base-multilingual-uncased-sentiment`, which grades 1–5 stars instead of three classes).
* **GPU**: if CUDA is available, set `GPU=true` to run on device 0.
* **Batch size**: override `BATCH_SIZE` in `.env` for memory tuning.
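Putting those three overrides together, a hypothetical `.env` (values are illustrative only) might read:

```ini
# .env — read by src/config.py via python-dotenv
HF_MODEL=cardiffnlp/twitter-roberta-base-sentiment-latest
GPU=true
BATCH_SIZE=16
```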
### Deployment ideas
| Target | Steps |
|--------|-------|
| **Docker** | `FROM python:3.12-slim`, copy files, `pip install`, expose 8000, `CMD uvicorn src.service:app --host 0.0.0.0 --port 8000` |
| **AWS Lambda** | Package with `serverless` + `transformers[torch]`; use a container image, since the torch dependency easily exceeds the 250 MB zip limit. |
| **Streamlit** | Swap FastAPI for Streamlit to build an interactive demo UI in minutes. |
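The Docker row above can be sketched as a Dockerfile (untested; image size depends heavily on which torch wheel is resolved):

```dockerfile
# Minimal image for the FastAPI service (CPU inference)
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ ./src/
EXPOSE 8000
CMD ["uvicorn", "src.service:app", "--host", "0.0.0.0", "--port", "8000"]
```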
Open the canvas to inspect, customize, or extend—happy to add batch metrics, HTML reporting, or multilingual support if that would help!
This template helps users leverage AI-powered vibe coding to transform natural language descriptions into functional code. It enables outcome-focused development where users can express what they want to build without needing to understand every implementation detail.
