PromptBase

Prompt Details

Model
(claude-4-6-sonnet)
Token size
2,218
Example input
[PRODUCT_DESCRIPTION]: "Customer support chatbot for a SaaS product that answers questions about our features, handles common troubleshooting, and escalates complex issues to human agents"
Example output
Now I'll run the full analysis and produce the system design document.

---

**Internal Analysis (Steps 1–4)**

**STEP 1 — PRODUCT DECOMPOSITION**
- Core AI capability: Retrieve + Generate (answer from a knowledge base, classify intent for routing)
- Interaction pattern: Real-time conversational, embedded in a SaaS product
- Data: Product docs, feature FAQs, troubleshooting guides, ticket history — relatively static, internal, low PII surface unless users describe their account
- Scale: B2B SaaS tool → moderate load, spiky (business hours), not consumer-scale initially
- "Wrong" looks like: confidently wrong answers about features, missed escalation triggers, hallucinated workarounds

**STEP 2 — PATTERN: RAG + lightweight intent classification**
Reasons it beats alternatives: (1) Pure prompting fails — product docs won't fit in context and drift with every release; (2) Fine-tuning unjustified — the knowledge problem (docs) is retrieval, not style; (3) Agent overkill — no external actions needed beyond routing to a human

**STEP 3 — RISKS**
- Technical: Retrieval quality (stale embeddings, bad chunking = confidently wrong answers)
- Cost: LLM call per message; no caching = linear cost growth
- UX: Hallucinated feature explanations — hardest to catch, most damaging to trust
- Compliance: Any user-described account data in conversation logs

**STEP 4 — IRREVERSIBLE FLAGS**
- Embedding model choice (re-embedding the entire KB is expensive to redo)
- Vector DB vendor lock-in (schema migration is painful)
- Whether to store conversation history (GDPR/data retention retroactive change is hard)

Now the full document:

The architecture document above covers all 10 sections, grounded specifically in your support chatbot use case. A few things worth calling out at the top level:

**The most important call in this design** is the rejection of fine-tuning.
Teams instinctively reach for it when their chatbot gives wrong answers — but for a product that answers questions about docs, the problem is almost always retrieval quality, not generation quality. Spending six weeks collecting and labeling fine-tuning data before you've validated that your chunking strategy actually returns the right chunks is a very common and very expensive mistake.

**The highest practical risk** before you write a line of production code is committing to an embedding model without benchmarking. Run 200 real support queries (or realistic simulations of them) through two candidate embedding models and measure retrieval precision@3. That benchmark will inform every subsequent architectural decision: chunk size, hybrid search weighting, and whether you need a reranker at all.

**Phase 1's validation gate** — retrieval precision on your golden QA set before building Phase 2 — is non-negotiable. If retrieval doesn't work, the rest of the system is just a way to route bad chunks into an LLM faster.
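The precision@3 benchmark described above can be sketched in a few lines. This is a minimal illustration, not part of the prompt's output: `embed`, `corpus`, and `golden_set` are hypothetical stand-ins for whichever embedding model, chunked knowledge base, and labeled query set you assemble.

```python
import math

def precision_at_k(retrieved_ids, relevant_ids, k=3):
    """Fraction of the top-k retrieved chunk ids that are in the relevant set."""
    top_k = retrieved_ids[:k]
    if not top_k:
        return 0.0
    return sum(1 for cid in top_k if cid in relevant_ids) / len(top_k)

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def benchmark(embed, corpus, golden_set, k=3):
    """Mean precision@k for one candidate embedding model.

    embed:      callable text -> vector (the model under test)
    corpus:     {chunk_id: chunk_text} for the knowledge base
    golden_set: iterable of (query, set_of_relevant_chunk_ids)
    Brute-force similarity search is fine at 200-query benchmark scale.
    """
    chunk_vecs = {cid: embed(text) for cid, text in corpus.items()}
    scores = []
    for query, relevant_ids in golden_set:
        qv = embed(query)
        ranked = sorted(chunk_vecs,
                        key=lambda cid: cosine(qv, chunk_vecs[cid]),
                        reverse=True)
        scores.append(precision_at_k(ranked, relevant_ids, k))
    return sum(scores) / len(scores)
```

Run `benchmark` once per candidate model over the same golden set and compare the two averages; whichever model wins also fixes your re-embedding cost baseline, which is exactly the irreversible flag noted in Step 4.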
🌀 Claude

ML System Designers

Instant access
Usage rights: Commercial use
Money-back guarantee
Stop guessing your AI stack. This prompt turns a simple product idea into a complete, production-grade AI system architecture — the kind senior engineers actually use to make build decisions. Built for founders, AI engineers, and consultants who need clarity fast — without spending weeks debating tools, patterns, or infrastructure.
Updated 2 weeks ago