PromptBase

Prompt Details

Model
(claude-4-6-sonnet)
Token size
3,378
Example input
[INCOMING_REQUEST]: “Create a 1,500-word SEO article on AI tools for startups, optimized for ranking and including tactical examples.”

[AVAILABLE_AGENTS_OR_WORKFLOWS]:
• Research Agent (keyword + competitor analysis)
• Outline Builder Agent
• Draft Writer Agent
• SEO Optimizer Agent
• Evaluator Loop Agent

[ROUTING_PRIORITY]: quality-first
Example output
# ROUTER-PRIME — ROUTING ANALYSIS

---

## PHASE 1 — REQUEST INTELLIGENCE

**COMPLEXITY ASSESSMENT:**

| Dimension | Assessment |
|:---|:---|
| Task type | HYBRID — Generative (article creation) + Analytical (SEO/keyword logic) + Constrained Transformative (optimization pass) |
| Domain specificity | MULTI-DOMAIN — Content writing + SEO mechanics + AI tools landscape |
| Output precision | CONSTRAINED — 1,500 words, SEO-optimized, tactical examples required |
| Verification need | REQUIRES-REVIEW — SEO quality is not self-evident; keyword density, structure, and ranking signals need evaluation |
| Effort estimate | COMPLEX — Multiple sequential dependencies, full pipeline needed |

**OVERALL COMPLEXITY SCORE: COMPLEX**

**HIDDEN REQUIREMENTS DETECTED:**

- **Hidden Req. 1**: "SEO-optimized" implies keyword research must precede writing — not just sprinkling terms in post-hoc. Impact: Research Agent must run *before* Outline Builder, not in parallel.
- **Hidden Req. 2**: "Ranking" implies competitive gap analysis (what's already ranking for this topic). Impact: Research Agent scope must include SERP competitor analysis, not just keyword volume.
- **Hidden Req. 3**: "Tactical examples" require current, specific AI tool names/use cases. Impact: Research Agent must surface real tools (e.g., Notion AI, Jasper, Runway) — generic examples will hurt credibility and dwell time.
- **Hidden Req. 4**: The quality-first priority means the draft cannot go directly to delivery — an evaluator pass is mandatory before output.
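The five-dimension matrix above could be modeled as a small data structure. A minimal sketch, assuming hypothetical names (`ComplexityAssessment`, `overall_score`) that are not part of the prompt itself:

```python
from dataclasses import dataclass

# Illustrative sketch of the five-dimension complexity matrix.
# All names and the scoring heuristic are assumptions for this example.

@dataclass
class ComplexityAssessment:
    task_type: str           # e.g. "HYBRID"
    domain_specificity: str  # e.g. "MULTI-DOMAIN"
    output_precision: str    # e.g. "CONSTRAINED"
    verification_need: str   # e.g. "REQUIRES-REVIEW"
    effort_estimate: str     # e.g. "COMPLEX"

    def overall_score(self) -> str:
        # Quality-first heuristic: a COMPLEX effort estimate or a
        # REQUIRES-REVIEW verification need escalates the whole request.
        signals = (self.effort_estimate == "COMPLEX",
                   self.verification_need == "REQUIRES-REVIEW")
        return "COMPLEX" if any(signals) else "SIMPLE"

request = ComplexityAssessment("HYBRID", "MULTI-DOMAIN", "CONSTRAINED",
                               "REQUIRES-REVIEW", "COMPLEX")
print(request.overall_score())  # COMPLEX
```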
**P1: PASS**

---

## PHASE 2 — AGENT CAPABILITY MAPPING

**Research Agent**
- Best suited for: Keyword discovery, competitor content analysis, SERP gap identification
- Limitations: Cannot write prose; produces structured data, not narrative
- Output type: Keyword list, competitor gaps, target search intent
- Verification: Human-checkable data output
- **Match score: HIGH** — this request's SEO requirements are entirely dependent on what this agent surfaces first

**Outline Builder Agent**
- Best suited for: Structuring content hierarchies (H1/H2/H3), mapping keywords to sections, sequencing narrative flow
- Limitations: Cannot validate SEO scoring; produces structure, not substance
- Output type: Annotated outline with section-level keyword targets
- Verification: Reviewable before writing begins
- **Match score: HIGH** — a 1,500-word constrained article with SEO intent requires deliberate section architecture, not improvised structure

**Draft Writer Agent**
- Best suited for: Long-form prose generation from a defined outline and keyword brief
- Limitations: Without outline + keyword inputs, tends toward generic output; cannot self-optimize for SEO
- Output type: Full draft article
- Verification: Requires SEO pass before delivery
- **Match score: HIGH** — but only when fed Research + Outline outputs; LOW if run cold

**SEO Optimizer Agent**
- Best suited for: Post-draft optimization — keyword density tuning, meta description, internal link suggestions, readability scoring, title tag variants
- Limitations: Cannot generate original content; works on existing draft only
- Output type: Revised draft with SEO annotations
- Verification: Can produce quantitative scoring (e.g., Flesch score, keyword coverage %)
- **Match score: HIGH** — essential for the "optimized for ranking" requirement

**Evaluator Loop Agent**
- Best suited for: Quality gate enforcement — scoring output against defined criteria and looping back to the appropriate agent if thresholds aren't met
- Limitations: Adds latency; needs explicit success criteria defined upfront to function well
- Output type: Pass/fail verdict + specific revision instructions if failing
- Verification: Self-verifying by design
- **Match score: HIGH** — quality-first priority makes this non-negotiable

**CAPABILITY GAP DETECTION:**

- **Gap 1**: No agent is explicitly scoped for *tactical example validation* — verifying that named AI tools are current, accurately described, and genuinely relevant to startups. Mitigation: Brief the Research Agent explicitly to surface 5–8 specific tools with startup use cases; the Evaluator Loop Agent should include a criterion checking example specificity.

**P2: PASS**

---

## PHASE 3 — ROUTING DECISION

**SELECTED PATTERN: SEQUENTIAL PIPELINE → EVALUATOR LOOP (Hybrid Pattern 2 + 4)**

The request has clear stage dependencies (you cannot outline before researching; you cannot optimize before drafting), which mandates sequential execution. The quality-first priority and REQUIRES-REVIEW verification need mandate an Evaluator Loop at the end rather than direct delivery.
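The Phase 3 decision rule above can be sketched as a small selection function. This is an illustrative assumption about how the logic might be coded, not the prompt's actual mechanism; the function and pattern names are invented for the example:

```python
# Hypothetical sketch of the routing decision: strict stage
# dependencies force sequential execution, and a quality-first
# priority (or a REQUIRES-REVIEW need) appends an evaluator gate.

def select_pattern(has_stage_dependencies: bool,
                   needs_review: bool,
                   priority: str) -> str:
    if not has_stage_dependencies:
        # Independent subtasks could fan out in parallel.
        return "PARALLEL_AGENTS"
    if needs_review or priority == "quality-first":
        return "SEQUENTIAL_PIPELINE + EVALUATOR_LOOP"
    return "SEQUENTIAL_PIPELINE"

print(select_pattern(True, True, "quality-first"))
# SEQUENTIAL_PIPELINE + EVALUATOR_LOOP
```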
```
REQUEST:           1,500-word SEO article, AI tools for startups
COMPLEXITY:        COMPLEX
SELECTED PATTERN:  Sequential Pipeline → Evaluator Loop
PRIMARY AGENTS:    Research → Outline → Writer → SEO Optimizer
GATE AGENT:        Evaluator Loop (final quality enforcement)
EXECUTION ORDER:   5 stages, each gated before progression
PRIORITY APPLIED:  quality-first — no stage skipped, no direct
                   delivery without Evaluator pass
CONFIDENCE:        HIGH
RISK FLAGS:        Gap in tactical example validation (see Gap 1);
                   Evaluator criteria must be defined before run
```

---

## PHASE 4 — EXECUTION HANDOFF

**HANDOFF INSTRUCTION:** *Route this request using a Sequential Pipeline terminating in an Evaluator Loop.*

---

**STEP 1 — Research Agent**
- **Input**: Target topic: "AI tools for startups." Scope: (a) primary + secondary keyword clusters, (b) top 5 SERP competitors for primary keyword, (c) content gaps in existing ranking articles, (d) 6–8 specific AI tools used by early-stage startups with a concrete use case per tool.
- **Expected output**: Keyword brief (primary KW + 4–6 secondary KWs with intent labels), competitor gap list, curated tool list with a one-line use case each.
- **Gate**: Does output include a primary keyword with clear search intent? Are at least 5 specific tools named with startup context? → If YES, proceed. If NO, re-run with a narrower scope prompt.

---

**STEP 2 — Outline Builder Agent**
- **Input**: Keyword brief + competitor gap list + tool list from Step 1. Target: 1,500 words, introduction + 4–5 H2 sections + conclusion, each section tagged with target keyword(s).
- **Expected output**: Annotated outline — section titles, target keyword per section, 1-sentence content instruction per section, suggested placement for tactical examples.
- **Gate**: Does every H2 map to at least one keyword from the brief? Are tactical examples placed in at least 2 sections? → If YES, proceed.

---

**STEP 3 — Draft Writer Agent**
- **Input**: Full annotated outline from Step 2 + keyword brief from Step 1. Instruction: write to 1,500 words (±100), use a conversational-professional tone, embed tactical examples as named tools with specific use cases, avoid keyword stuffing.
- **Expected output**: Complete draft article, section headers intact, word count confirmed.
- **Gate**: Is word count 1,400–1,600? Are all H2 sections from the outline present? Are at least 4 specific tools named with context? → If YES, proceed.

---

**STEP 4 — SEO Optimizer Agent**
- **Input**: Full draft from Step 3 + keyword brief from Step 1. Tasks: (a) verify primary keyword appears in the title, first 100 words, and at least 2 H2s, (b) optimize meta description (150–160 chars), (c) flag passive voice or readability issues above Flesch grade 10, (d) suggest 2 internal link anchor opportunities.
- **Expected output**: Revised draft with inline SEO annotations, meta description, readability score, keyword coverage summary.
- **Gate**: Does the primary keyword appear in all required positions? Is the meta description within length? Readability score acceptable? → If YES, proceed to Evaluator.

---

**STEP 5 — Evaluator Loop Agent**
- **Input**: Optimized draft + original request requirements. Evaluate against: (1) word count 1,400–1,600 ✓/✗, (2) primary keyword correctly placed ✓/✗, (3) minimum 4 tactical tool examples with startup context ✓/✗, (4) logical narrative flow ✓/✗, (5) meta description present and valid ✓/✗.
- **Expected output**: PASS → deliver final article. FAIL → return to the specific failed stage with a targeted revision instruction (e.g., "Return to Draft Writer: tactical examples in Section 3 are generic — replace with specific tool names").
- **Gate**: All 5 criteria scored PASS → deliver. Any FAIL → re-route to the responsible agent only (not a full pipeline restart).
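The gated five-step flow above can be sketched as a generic loop: each stage runs, its gate predicate is checked, and a stage whose gate fails is retried before the pipeline aborts. Everything here (the `run_pipeline` helper, the toy agents and gates) is an illustrative assumption, not the product's implementation:

```python
# Minimal sketch of a gated sequential pipeline. Agents take a shared
# context dict and return an updated one; gates inspect the context.

def run_pipeline(stages, max_retries=2):
    """stages: list of (name, agent_fn, gate_fn) tuples."""
    context = {}
    for name, agent, gate in stages:
        for _attempt in range(max_retries + 1):
            context = agent(context)
            if gate(context):
                break  # gate passed: proceed to the next stage
        else:
            # Gate never passed within the retry budget: escalate.
            raise RuntimeError(f"Gate failed at stage: {name}")
    return context

# Toy agents standing in for Research / Draft Writer stages.
stages = [
    ("research", lambda c: {**c, "keywords": ["ai tools"]},
     lambda c: bool(c.get("keywords"))),
    ("draft", lambda c: {**c, "words": 1500},
     lambda c: 1400 <= c["words"] <= 1600),  # word-count gate
]
print(run_pipeline(stages))
```

The `for`/`else` idiom keeps the retry budget per stage, matching the "re-run with a narrower scope" behavior of Step 1's gate rather than restarting the whole pipeline.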
---

**IF any gate fails mid-pipeline** → return to the immediately preceding agent with the specific failure note. Do not restart from Step 1 unless the keyword brief itself is found to be flawed.

**IF complexity escalates mid-execution** (e.g., the competitive landscape requires deeper research than anticipated) → pause at the Step 1 gate, flag to a human with the specific gap identified, and request expanded scope approval before continuing.

---

## PHASE 5 — ROUTING AUDIT LOG

| Field | Entry |
|:---|:---|
| Request type | HYBRID Generative + Analytical, CONSTRAINED output |
| Pattern selected | Sequential Pipeline + Evaluator Loop — because stage dependencies are strict and the quality-first priority demands a formal gate before delivery |
| Alternatives considered | Parallel Agents (Research + Outline simultaneously) — rejected because outline quality degrades without research inputs; Single Agent — rejected as under-powered for the SEO constraint |
| Key trade-off | Speed sacrificed for quality — a 5-stage pipeline is slower than a 2-step draft+optimize flow, but the quality-first priority makes this the correct call |
| Confidence driver | High — every agent maps clearly to a stage; no ambiguous handoffs |
| Re-routing trigger | If the Research Agent returns low-confidence keyword data (e.g., unclear search intent), pause and escalate before the outline stage — a bad brief poisons every downstream step |

---

**Routing complete. Pipeline is ready to execute. Begin at Step 1 — Research Agent.**
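The Step 5 verdict and its targeted re-routing rule could look like the sketch below: score the five criteria and, on failure, name the single responsible stage to re-run rather than restarting the pipeline. Criterion names and the criterion-to-stage mapping are assumptions made for this example:

```python
# Hypothetical evaluator verdict: each criterion maps to the agent
# responsible for fixing it, so a FAIL re-routes to one stage only.

CRITERIA_TO_STAGE = {
    "word_count": "draft_writer",
    "keyword_placement": "seo_optimizer",
    "tactical_examples": "draft_writer",
    "narrative_flow": "draft_writer",
    "meta_description": "seo_optimizer",
}

def evaluate(results: dict) -> dict:
    """results: criterion name -> bool (True = criterion passed)."""
    failed = [c for c, ok in results.items() if not ok]
    if not failed:
        return {"verdict": "PASS", "reroute_to": None}
    # Re-route only to the agent responsible for the first failure.
    return {"verdict": "FAIL", "reroute_to": CRITERIA_TO_STAGE[failed[0]]}

print(evaluate({"word_count": True, "keyword_placement": False,
                "tactical_examples": True, "narrative_flow": True,
                "meta_description": True}))
# {'verdict': 'FAIL', 'reroute_to': 'seo_optimizer'}
```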
🌀 Claude

Routing Orchestrator

Instant access
Usage rights: Commercial use
Money-back guarantee
Intelligent AI Agent Traffic Controller

Routing Orchestrator™ acts as a strategic traffic controller for AI systems. It does not use basic IF/ELSE logic. It:

✓ Assesses real request complexity (5-dimension matrix)
✓ Detects hidden requirements
✓ Maps request needs to actual agent capabilities
✓ Identifies capability gaps before execution
✓ Selects the optimal routing pattern (single, sequential, parallel, evaluator loop, hybrid, or escalate)
✓ Produces a ready-to-run execution handoff
Added over 1 month ago