Prompt Details
**Model:** claude-4-5-sonnet
**Token size:** 2,980
Example input
[CURRENT_WORKFLOW]:
Task: Refactor authentication module
Observed turn pattern:
- Turn 1–3: Agent reads multiple files (auth.py, user.py, config.py)
- Turn 4–7: Agent proposes Solution A (JWT redesign)
- Turn 8–11: Agent partially implements Solution A
- Turn 12–14: Agent doubts approach, proposes Solution B (session-based)
- Turn 15–19: Rewrites parts of module
- Turn 20–24: Re-reads edited files after each change
- Turn 25–29: Runs tests repeatedly after small edits
- Turn 30–34: Switches back to JWT approach
- Turn 35–38: More edits + test reruns
- Turn 39–41: Final solution
- Turn 42–44: Extra refinements not requested
[OPTIMIZATION_GOALS]:
- Reduce session cost
- Reduce turn count
- Keep reliability identical
- Prevent agent from bouncing between solutions
Example output
## Session Optimization Report
### Current Workflow Map
**TURN SEQUENCE:**
- Turn 1–3: Context loading (read auth.py, user.py, config.py)
- Turn 4–7: Solution A proposed (JWT redesign)
- Turn 8–11: Partial implementation of Solution A
- Turn 12–14: Abandons Solution A, proposes Solution B (session-based)
- Turn 15–19: Rewrites module for Solution B
- Turn 20–24: Re-reads edited files after each modification
- Turn 25–29: Runs tests after every small change
- Turn 30–34: Reverts to JWT approach (Solution A)
- Turn 35–38: More edits + repeated test runs
- Turn 39–41: Completes implementation
- Turn 42–44: Unsolicited refinements
**TOOL CALL MAP:**
| Tool | Count | Turns | Pattern |
|------|-------|-------|---------|
| `view` (file read) | 15+ | 1-3, 20-24, scattered | Hot path: repeated reads of same 3 files |
| `str_replace` (edit) | 12+ | 8-11, 15-19, 30-38 | Sequential single edits |
| `bash_tool` (tests) | 8+ | 25-29, 35-38 | After every 1-2 edits |
**DECISION POINTS:**
- Turn 7: Commits to JWT approach
- Turn 12: **Critical branch** — abandons JWT for session-based
- Turn 30: **Critical branch** — reverts to JWT
- Turn 42: Continues past completion
**FAILURE MODES:**
- Turns 12–34: 22-turn exploration spiral (51% of session)
- Turns 20–38: Verification loop clusters
**COMPLETION PROFILE:**
- Actual completion: Turn 41 (solution working)
- Session end: Turn 44 (3 unnecessary turns)
- Optimal completion: ~15 turns (see redesign)
---
### Anti-Patterns Detected
| Pattern | Evidence | Turns Wasted | Cost Impact | Fix Effort | Confidence |
|---------|----------|-------------|-------------|-----------|------------|
| **AP1: Redundant Read** | auth.py read 5x (turns 1,12,20,24,30), user.py read 4x (turns 2,21,31,35), config.py read 3x (turns 3,13,32) | **8 turns** | $0.64 per session @ $0.08/turn | QUICK | HIGH |
| **AP2: Exploration Spiral** | JWT→session→JWT flip (turns 12–34). Agent abandons 70% complete work, rewrites, then reverts. No plan commitment. | **22 turns** | $1.76 per session | MODERATE | HIGH |
| **AP3: Over-Verification** | Tests run 8x (turns 25-29, 35-38). File re-reads after each edit (turns 20-24). Verify-edit-verify loop instead of edit-batch-verify-once. | **7 turns** | $0.56 per session | QUICK | HIGH |
| **AP4: Serial Single-Tool** | Initial file reads (turns 1-3) serial when parallelizable. Could batch into 1 turn. | **2 turns** | $0.16 per session | QUICK | HIGH |
| **AP7: Missing Exit Condition** | Continues refining (turns 42-44) after task completion at turn 41. No explicit "TASK COMPLETE" declaration. | **3 turns** | $0.24 per session | QUICK | MEDIUM |
| **AP9: No-Plan Execution** | Starts coding JWT at turn 8 without exploring alternatives. No upfront architecture decision. Leads to turn 12 pivot. | **Indirect: 15 turns** (causes AP2) | $1.20 per session | QUICK | HIGH |
**✓ Clean:** AP5 (minimal history echo), AP6 (single model tier appropriate), AP8 (just-in-time loading used), AP10 (responses appropriately sized)
**Total Wasted Turns:** 42 of 44 turns (95% waste rate)
**Direct waste:** 20 turns (AP1+AP3+AP4+AP7)
**Indirect waste:** 22 turns (AP2, caused by AP9)
**Total Wasted Cost:** $3.36 of $3.52 session cost (95% avoidable)
**Calculation:** 42 wasted turns × $0.08/turn = $3.36
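The totals above reduce to simple arithmetic. A quick sketch, using the per-AP turn counts from the table and the report's own blended $0.08/turn estimate:

```python
# Wasted-turn totals, taken directly from the anti-pattern table above
direct = {"AP1": 8, "AP3": 7, "AP4": 2, "AP7": 3}  # directly wasted turns
indirect = {"AP2": 22}                             # exploration spiral caused by AP9

wasted_turns = sum(direct.values()) + sum(indirect.values())
cost_per_turn = 0.08                               # blended estimate from the report
wasted_cost = wasted_turns * cost_per_turn

print(wasted_turns, round(wasted_cost, 2))         # 42 turns, $3.36
```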
---
### Optimized Session Architecture
**PHASE A — UNDERSTAND (2 turns)**
**Turn 1:** Agent reads task requirements, asks clarifying question:
*"Refactor authentication module — should I maintain JWT approach, switch to session-based, or evaluate both and recommend?"*
**Turn 2:** User responds. Agent confirms scope and success criteria.
**Anti-pattern fixes integrated:**
- **AP9 fix:** Clarify architectural constraint before any implementation
- **AP4 fix:** If file reading needed, batch all 3 files in turn 2
**Exit condition:** Task fully understood, architectural direction set
---
**PHASE B — PLAN (1 turn)**
**Turn 3:** Agent states:
- Chosen approach: [JWT with security improvements]
- Files to modify: auth.py (lines X-Y), user.py (function Z)
- Verification plan: Run test suite once after all edits complete
- Estimated turns: 6 execution + 2 verify = 8 total
**Anti-pattern fixes integrated:**
- **AP2 fix:** Commit to approach. If it fails during execution, pivot explicitly — don't drift
- **AP1 fix:** "I have cached auth.py (327 lines), user.py (189 lines), config.py (64 lines) from turn 2. Will reference cached versions."
**Exit condition:** Plan stated, no execution started yet
---
**PHASE C — EXECUTE (6-8 turns)**
**Turns 4–11:** Implementation with strict rules:
- **AP1 enforcement:** Reference "cached version from turn 2" instead of re-reading files
- **AP2 enforcement:** Follow the plan. If JWT approach fails (tests break), declare failure and pivot once — don't drift
- **AP3 enforcement:** Do NOT run tests after each edit. Make all changes, then verify in Phase D
- **AP4 enforcement:** Group independent edits where possible (e.g., auth.py lines 45-67 + lines 120-135 in one turn if logically separate)
**Typical execution flow:**
- Turns 4-5: Core auth.py refactor
- Turns 6-7: user.py integration updates
- Turns 8-9: config.py updates
- Turns 10-11: Final integration touchups
**Exit condition:** All planned changes implemented
---
**PHASE D — VERIFY (2 turns)**
**Turn 12:** Run full test suite once. If tests fail:
**Turn 13:** Fix failures, re-run tests. (If failures persist, extend by 1-2 turns max)
**Anti-pattern fixes integrated:**
- **AP3 fix:** Verify ONCE at end of execution phase, not after every edit
**Exit condition:** Tests pass, implementation meets requirements
---
**PHASE E — COMPLETE (1 turn)**
**Turn 14:** Agent declares:
```
TASK COMPLETE
Deliverables:
- auth.py refactored (JWT security hardened)
- user.py integration updated
- config.py secrets management improved
- Test suite: 47/47 passing
Changes summary: [3-sentence summary]
```
**Anti-pattern fixes integrated:**
- **AP7 fix:** Explicit completion declaration. No "anything else?" or unsolicited refinements.
**Exit condition:** Task closed
---
**OPTIMIZED TURN BUDGET:**
| Phase | Turns | Notes |
|-------|-------|-------|
| Understand | 2 | Clarify approach upfront |
| Plan | 1 | Commit to strategy |
| Execute | 6-8 | Focused implementation |
| Verify | 2 | Single verification cycle |
| Complete | 1 | Explicit closure |
| **TOTAL** | **12-14 turns** | vs. 44 current |
---
### Model Routing Plan
**ROUTING ANALYSIS:**
| Phase | Task Complexity | Model | Rationale | Cost Factor |
|-------|----------------|-------|-----------|-------------|
| **A: Understand** | Moderate | **Sonnet 3.5** | Needs to parse requirements, ask intelligent clarifying questions | 1x |
| **B: Plan** | Moderate-High | **Sonnet 3.5** | Architectural decision (JWT vs session), needs reasoning | 1x |
| **C: Execute** | Moderate | **Sonnet 3.5** | Code refactoring across multiple files, needs context awareness | 1x |
| **D: Verify** | Simple | **Haiku** | Run tests, read test output, fix obvious failures | 0.1x |
| **E: Complete** | Simple | **Haiku** | Format summary, list deliverables | 0.1x |
**COST COMPARISON:**
**Current (single model - Sonnet 3.5):**
- 44 turns × $0.08/turn = **$3.52**
**Optimized (single model - Sonnet 3.5):**
- 14 turns × $0.08/turn = **$1.12**
**Optimized (with routing):**
- Phase A-C: 11 turns × $0.08 = $0.88
- Phase D-E: 3 turns × $0.008 (Haiku) = $0.024
- **Total: $0.90**
**Routing savings:** $0.22 per session (20% additional reduction beyond turn optimization)
**RECOMMENDATION:** Use routing for Phase D-E. Verification is deterministic (run tests, check output), doesn't require Sonnet's reasoning. Phase E is pure formatting.
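The routing table can be expressed as a small lookup. This is a sketch only; the phase names, model labels, and per-turn costs are the report's own estimates, not measured prices:

```python
# Per-phase routing: (model, estimated cost per turn), figures from the table above
ROUTES = {
    "understand": ("sonnet", 0.08),
    "plan":       ("sonnet", 0.08),
    "execute":    ("sonnet", 0.08),
    "verify":     ("haiku",  0.008),
    "complete":   ("haiku",  0.008),
}

def session_cost(turns_by_phase):
    """Sum each phase's turn count times that phase's routed per-turn cost."""
    return sum(n * ROUTES[phase][1] for phase, n in turns_by_phase.items())

# Optimized budget from the turn-budget table (upper bound of Execute)
budget = {"understand": 2, "plan": 1, "execute": 8, "verify": 2, "complete": 1}
print(round(session_cost(budget), 2))  # ≈ $0.90 with routing
```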
---
### Impact Projection
| Metric | Current | Optimized | Improvement |
|--------|---------|-----------|-------------|
| **Average turns** | 44 | 14 | **-30 turns (-68%)** |
| **Anti-patterns eliminated** | 6 found | 6 fixed | AP1, AP2, AP3, AP4, AP7, AP9 |
| **Tokens per session (est.)** | ~185K | ~57K | **-128K (-69%)** |
| **Cost per session** | $3.52 | $0.90 | **-$2.62 (-74%)** |
| **Monthly savings (10 sessions/day)** | — | $786/month | 300 sessions × $2.62 |
**CALCULATION DETAILS:**
**Turn reduction:**
- Redundant reads eliminated: -8 turns (AP1)
- Exploration spiral eliminated: -22 turns (AP2)
- Over-verification eliminated: -7 turns (AP3)
- Serial tool calls batched: -2 turns (AP4)
- Exit condition added: -3 turns (AP7)
- Planning phase prevents spiral: AP9's 15 indirect turns are a subset of AP2's 22, not counted twice
- Overhead: +2 turns (explicit Understand/Plan phases)
- **Net: 44 → 14 turns** (reductions overlap with turns that also carried productive work, so they don't subtract linearly)
**Token reduction:**
- Current: ~4.2K tokens/turn × 44 = 185K tokens
- Optimized: ~4.1K tokens/turn × 14 = 57K tokens
- (Slightly lower per-turn due to less history echo)
**Cost calculation:**
- Sonnet input: $3/MTok, output: $15/MTok
- Estimated split: 60% input, 40% output
- Average turn cost: $0.08 (blended)
- Current: 44 × $0.08 = $3.52
- Optimized (no routing): 14 × $0.08 = $1.12
- Optimized (with routing): $0.90 (see routing table)
**Monthly projection:**
- 10 auth refactorings/day × 30 days = 300 sessions/month
- Savings per session: $2.62
- **Monthly savings: $786**
- **Annual savings: $9,432**
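The projection is straightforward to reproduce from the figures above:

```python
# Monthly/annual projection using the report's session counts and costs
sessions_per_month = 10 * 30            # 10 refactorings/day × 30 days
savings_per_session = 3.52 - 0.90      # current cost minus optimized-with-routing cost

monthly = sessions_per_month * savings_per_session
annual = monthly * 12
print(round(monthly), round(annual))    # 786, 9432
```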
---
### Implementation Guide
**Priority-ordered steps:**
#### 1. Add Planning Phase [EFFORT: QUICK]
**Change:** Modify system prompt to require explicit planning before execution.
```
Before any code changes, state your plan:
- Chosen approach and rationale
- Files to modify (with line ranges)
- Verification strategy
- Estimated turns to complete
Do not begin execution until plan is stated.
```
**Impact:** Prevents AP2 (exploration spiral) and AP9 (no-plan execution).
**Savings:** ~22 turns per session.
**Effort:** 15 minutes to update prompt.
---
#### 2. Cache File Contents [EFFORT: QUICK]
**Change:** Add instruction to reference cached file contents.
```
When you read a file, note "CACHED: filename.py (N lines)".
In subsequent turns, reference the cached version instead of re-reading.
Only re-read if the file has been modified.
```
**Impact:** Prevents AP1 (redundant reads).
**Savings:** ~8 turns per session.
**Effort:** 10 minutes to update prompt + teach agent pattern.
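If the agent framework allows a tool-side fix rather than a prompt-side one, the same pattern can be enforced in code. A minimal sketch (class and field names are illustrative, not part of any framework): a read-through cache keyed on the file's mtime, so a file is only re-read from disk after it changes.

```python
import os

class FileCache:
    """Minimal read-through cache: hit disk only when a file's mtime changes (AP1 fix sketch)."""

    def __init__(self):
        self._entries = {}   # path -> (mtime_ns, contents)
        self.disk_reads = 0  # instrumentation: count of actual disk reads

    def read(self, path):
        mtime = os.stat(path).st_mtime_ns
        cached = self._entries.get(path)
        if cached is not None and cached[0] == mtime:
            return cached[1]             # cache hit: no disk read
        with open(path) as f:            # miss or stale: re-read and refresh entry
            contents = f.read()
        self.disk_reads += 1
        self._entries[path] = (mtime, contents)
        return contents
```

The `disk_reads` counter doubles as a success metric: it directly measures the "redundant file reads" target from the metrics table.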
---
#### 3. Define Verification Checkpoints [EFFORT: QUICK]
**Change:** Add explicit verification rules.
```
Verification rules:
- Do NOT run tests after each edit
- Make all planned changes, THEN run tests once
- If tests fail, fix and re-run (max 2 verification cycles)
```
**Impact:** Prevents AP3 (over-verification).
**Savings:** ~7 turns per session.
**Effort:** 10 minutes to update prompt.
---
#### 4. Batch Independent Tool Calls [EFFORT: QUICK]
**Change:** Instruct agent to batch file reads.
```
When you need to read multiple files:
- List all files needed
- Read them in a single turn using multiple view calls
Example: view auth.py, view user.py, view config.py (3 calls, 1 turn)
```
**Impact:** Prevents AP4 (serial single-tool).
**Savings:** ~2 turns per session.
**Effort:** 10 minutes to update prompt + provide example.
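Where the harness supports it, batching can also be done in code rather than by instruction. A sketch, assuming plain filesystem reads (the helper name is hypothetical): read all requested files concurrently and return them as one result, i.e. one "turn" instead of three.

```python
from concurrent.futures import ThreadPoolExecutor

def batch_read(paths):
    """Read several files as one batch — one turn — instead of one read per turn (AP4 fix sketch)."""
    def _read(path):
        with open(path) as f:
            return path, f.read()
    with ThreadPoolExecutor(max_workers=max(len(paths), 1)) as pool:
        return dict(pool.map(_read, paths))
```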
---
#### 5. Add Explicit Completion Declaration [EFFORT: QUICK]
**Change:** Require "TASK COMPLETE" statement.
```
When the task is complete:
- State "TASK COMPLETE" explicitly
- List deliverables
- Summarize changes in 3 sentences
- Do NOT ask "anything else?" or continue refining unprompted
```
**Impact:** Prevents AP7 (missing exit condition).
**Savings:** ~3 turns per session.
**Effort:** 10 minutes to update prompt.
---
#### 6. Implement Model Routing [EFFORT: MODERATE]
**Change:** Route verification and completion to Haiku.
**Architecture:**
- Phase A-C: Sonnet 3.5 (requires reasoning)
- Phase D-E: Haiku (deterministic tasks)
**Implementation:**
- Detect phase completion signals ("All changes implemented" → switch to Haiku)
- OR: User manually switches model for verify/complete steps
- OR: Use multi-agent orchestrator (agentic framework required)
**Impact:** Additional 20% cost reduction beyond turn optimization.
**Savings:** $0.22 per session.
**Effort:** 2-4 hours if using orchestrator, or manual model switching (5 seconds per session).
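The "detect phase completion signals" option above can be sketched as a simple string check. The marker phrases here are hypothetical; in practice they should match whatever completion language the planning prompt requires the agent to emit:

```python
# Hypothetical completion markers that trigger the Sonnet → Haiku handoff
HANDOFF_SIGNALS = ("all changes implemented", "entering verification")

def route_next_turn(agent_message, current_model="sonnet"):
    """Downgrade to the cheaper model once the agent signals execution is done."""
    text = agent_message.lower()
    if any(signal in text for signal in HANDOFF_SIGNALS):
        return "haiku"
    return current_model
```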
---
#### 7. Add Workflow Guardrails [EFFORT: MODERATE]
**Change:** Implement anti-drift safeguards.
```
If you start to doubt your approach mid-execution:
- STOP execution
- State: "Plan may be failing. Evidence: [specific test/error]"
- Propose pivot with rationale
- Wait for user confirmation before pivoting
Do NOT silently switch approaches.
```
**Impact:** Prevents AP2 from recurring (safety net if planning fails).
**Savings:** Prevents 15-20 turn spirals in edge cases.
**Effort:** 1 hour to design + test guardrail logic.
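The guardrail's core check is countable: how many times has the agent changed approach? A sketch, assuming the harness records a per-turn approach label (the example session's history would be jwt → session → jwt, i.e. two switches):

```python
def approach_switches(history):
    """Count approach changes, e.g. ['jwt', 'session', 'jwt'] → 2 switches."""
    return sum(1 for a, b in zip(history, history[1:]) if a != b)

def drift_detected(history, max_switches=1):
    """AP2 safety net: flag the session once the agent pivots more than allowed."""
    return approach_switches(history) > max_switches
```

One explicit, user-confirmed pivot stays allowed (`max_switches=1`); a second switch — the jwt → session → jwt pattern from the current workflow — trips the guardrail.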
---
### Total Implementation Effort
- **Quick wins (steps 1-5):** 1 hour total → 74% cost reduction
- **Moderate additions (steps 6-7):** 3-5 hours → additional 5-10% reliability + cost gains
**Recommended sequence:**
1. Implement steps 1-5 first (fastest ROI)
2. Test on 10 sessions, measure turn reduction
3. Add step 6 (routing) if cost targets not met
4. Add step 7 (guardrails) if reliability concerns emerge
---
### Risk Assessment
**QUALITY SAFEGUARDS:**
- Planning phase may add 1-2 turns overhead (acceptable trade for 22-turn spiral prevention)
- Reduced verification does NOT mean zero verification — still runs full test suite once
- Cached files re-read if modified (no stale data risk)
- Guardrails prevent silent approach switches
**FAILURE MODES:**
- If plan is wrong, agent may waste 6-8 execution turns before discovering in verify phase
- **Mitigation:** Guardrail allows mid-execution pivot with user confirmation
- If tests reveal fundamental approach failure, may need replanning (+3-5 turns)
- **Still better than current:** 14 + 5 = 19 turns vs. 44 current
**RELIABILITY IMPACT:** **Zero degradation expected**. Verification still occurs, just batched efficiently.
---
### Success Metrics
**Track these metrics to validate optimization:**
| Metric | Target | Measurement |
|--------|--------|-------------|
| Average turns per auth refactor | ≤15 | Log turn counts per session |
| Exploration spirals | 0 per 10 sessions | Flag any approach reversals |
| Redundant file reads | ≤1 per session | Count duplicate view calls |
| Test runs per session | ≤3 | Count bash_tool test invocations |
| Cost per session | ≤$1.00 | Measure token consumption |
**If targets not met:** Review session logs for unaddressed anti-patterns or new failure modes.
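These metrics can be computed from a session log. A sketch under an assumed minimal schema — each event is a `(turn, tool, target)` tuple; real harness logs will differ:

```python
from collections import Counter

def session_metrics(events, cost_per_turn=0.08):
    """Compute the tracked metrics from a minimal (turn, tool, target) event log."""
    turns = max(turn for turn, _, _ in events)
    reads = Counter(target for _, tool, target in events if tool == "view")
    redundant_reads = sum(count - 1 for count in reads.values())   # duplicate view calls
    test_runs = sum(1 for _, tool, _ in events if tool == "bash_tool")
    return {
        "turns": turns,
        "redundant_reads": redundant_reads,
        "test_runs": test_runs,
        "cost": round(turns * cost_per_turn, 2),
    }
```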
This prompt analyzes your agent's workflow against 10 named anti-patterns, redesigns the session flow into a phased architecture, adds model routing for subtasks, and defines clear exit conditions. It's the architecture-level optimization that token-level fixes can't reach.
