Prompt Details
Model
(claude-4-6-sonnet)
Token size
1,109
Example input
[SKILL_CONCEPT]: auditing cold email sequences for deliverability, reply rate structure, and follow-up logic
[USER_TYPE]: B2B sales development rep
[DOMAIN]: outbound sales
Example output
End-of-Build Report
Quality Gate Score: 66/70
| Dimension | Score | Notes |
|---|---|---|
| SKILL.md structure & completeness | 10/10 | All required sections present, under token limit |
| context.md domain depth | 10/10 | Tables, decision framework, 2 named edge cases |
| examples.md realism | 9/10 | Both examples use real ICP language; reasoning blocks explicit |
| audit-checklist.md scoring rigor | 10/10 | All 4 score tiers defined per dimension; fix priority ranked |
| README.md deployment clarity | 9/10 | All required sections present; activation phrases are SDR-natural |
| Progressive disclosure compliance | 9/10 | Load conditions on all reference files; clean L0/L1/L2 separation |
| Anti-reconstruction compliance | 9/10 | No methodology terminology exposed in user-facing output |
Persona Selected: Analytical — outbound sales audit work requires precision scoring, benchmark references, and specific rewrite directives rather than creative or editorial framing. SDRs need decisions, not suggestions.
Reasoning Tier: CCoT (Causal Chain of Thought) — the audit is a multi-step workflow with defined intermediate outputs (deliverability scan → structure check → logic check → score → prioritized findings). Each step produces a concrete result that feeds the next, making CCoT the appropriate pattern over simpler SCoT.
Token Count SKILL.md: ~965 tokens (765 words × ~1.26 tokens/word average). Well under the 1,200-token limit.
Progressive Disclosure:
L0 (always loaded): SKILL.md, scripts/audit-checklist.md
L1 (on principle questions): references/context.md
L2 (on complex/edge cases): references/examples.md
Output Mode: Full — build required complete scoring logic, two realistic worked examples with reasoning blocks, rerun protocol, and a 5-step complex sequence example covering an edge case (breakup email).
SKILL.md
Cold Email Sequence Auditor
Audits B2B cold email sequences for deliverability risk, reply-rate structure, and follow-up logic; activate when an SDR submits a sequence draft, live sequence, or reply-rate problem for diagnosis.
Purpose
SDRs routinely build sequences that fail silently — landing in spam due to link overload, bouncing off spam filters from trigger words, losing prospects between touches because the follow-up logic restates rather than reframes, or collapsing reply rates because every email leads with the sender's company name instead of the prospect's problem. This skill prevents the pattern where a technically functional sequence still underperforms because no single review pass covers deliverability hygiene, message architecture, and follow-up cadence logic simultaneously.
When to Use
You have written a new cold email sequence and want a pre-launch deliverability and structure check.
Your sequence is live but reply rates have dropped below 3% and you need a root-cause diagnosis.
You are inheriting a sequence from a teammate and need a fast audit before taking it over.
You are A/B testing subject lines and want to verify both variants clear spam filter heuristics.
You have added a new step to an existing sequence and need the full sequence re-evaluated for follow-up logic gaps.
Your email domain is new or recently warmed and you need deliverability-specific guidance before sending volume.
You are building a persona-specific sequence (e.g., VP Engineering vs. CFO) and need message-fit validation per step.
You received a bounce or spam complaint spike and need to identify which sequence element triggered it.
SCOPE — this skill does NOT handle:
Configuring or diagnosing your ESP or sequencing tool (Outreach, Salesloft, Apollo, etc.)
Writing net-new sequence copy from scratch without an existing draft as input
CRM data hygiene or list segmentation recommendations
Legal compliance review (CAN-SPAM, GDPR opt-out requirements)
Deliverability infrastructure (DNS records, DMARC, SPF, DKIM setup)
Process
1. Receive sequence input. Accept all email steps as provided — subject lines, bodies, send-day offsets, and any personalization tokens shown.
2. Map sequence architecture. Identify total step count, send-day spread, and touch type per step (cold open, value add, breakup, etc.).
3. Run deliverability scan. Flag spam-trigger words, link count per email, image-to-text ratio issues, all-caps usage, and excessive punctuation.
4. Evaluate subject line set. Score each subject line for open-rate structure: curiosity gap, personalization signal, length (≤50 characters), and spam-flag risk.
5. Audit email body architecture. Check each email for: opening line quality (prospect-first vs. sender-first), value prop clarity, single CTA presence, and word count (target 50–125 words for cold touches).
6. Assess follow-up logic. Confirm each step reframes rather than restates — a new angle, new evidence, or new CTA must appear in every follow-up; flag any step that simply re-asks the same question.
7. Check cadence spacing. Evaluate inter-step timing against reply-rate benchmarks: Day 1, Day 3–4, Day 7, Day 14 is the baseline; flag gaps over 10 days or touches under 2 days apart.
8. Identify persona-message fit. Confirm language, pain points, and proof assets match the stated ICP; flag generic language that applies to any prospect.
9. Score sequence on audit checklist. Apply the 7-dimension scoring framework from scripts/audit-checklist.md; record per-dimension scores.
10. Produce prioritized findings. List issues by severity: Critical (blocks deliverability or kills reply rate) → High (structural weakness) → Low (optimization opportunity).
11. Write rewrite recommendations. For every Critical and High finding, provide a specific rewrite or fix, not a general suggestion.
12. Deliver final audit report. Format as: Sequence Summary → Scores → Critical Findings → High Findings → Rewrite Recommendations → Optional Low-Priority Notes.
Validation
Every email step in the input has been evaluated — no steps skipped.
Deliverability scan covers all five vectors: trigger words, links, images, caps, punctuation.
Every follow-up step has been checked for reframe vs. restate failure.
Each recommendation maps to a specific email step number, not the sequence generally.
Audit checklist score has been calculated and the total reported.
No fix recommendation is phrased as "consider" — all are action directives.
Reference Files
| File | Purpose | Load when |
|---|---|---|
| references/context.md | Domain rules, deliverability benchmarks, follow-up logic frameworks | Question involves outbound principles or score rationale |
| references/examples.md | Two fully worked sequence audits with reasoning | Request is complex, sequence is 5+ steps, or edge case is present |
| scripts/audit-checklist.md | 7-dimension scoring gate with fix priorities | Every audit run — load by default at Step 9 |
Context · MD
# Context: Cold Email Sequence Auditing
*Load condition: activate when the audit requires explaining a scoring decision, the SDR asks why a rule applies, or the skill needs to reference outbound benchmarks to justify a finding.*
---
## Architecture Rationale
Most cold email audits fail because they treat deliverability, reply-rate structure, and follow-up logic as separate checklists reviewed by different people at different times — or not at all. SDRs who only check for spam words miss the structural problem that their follow-up step 3 is a word-for-word restatement of step 1 with "just following up" prepended. SDRs who focus only on copy quality miss that a sequence with three tracked links per email is suppressing inbox placement before a single human decision is made. This skill prevents the failure state where a sequence looks polished in a doc review but dies in execution because no one ran a unified pass across all three dimensions before launch.
---
## Domain Rules
### 1. Deliverability Hygiene Standards
| Signal | Safe Zone | Warning | Critical |
|---|---|---|---|
| Links per email | 0–1 | 2 | 3+ |
| Word count | 50–125 | 126–175 | 176+ or <30 |
| Images | 0 | 1 (logo only) | 2+ |
| Spam-trigger words | 0 | 1 non-opener | 1 in subject line |
| ALL CAPS usage | 0 words | 1 word | 2+ words |
| Exclamation marks | 0 | 1 | 2+ |
**Common spam-trigger words in B2B outbound (non-exhaustive):** free, guarantee, no obligation, limited time, act now, click here, buy now, winner, congratulations, risk-free, 100%, special promotion, earn money, increase sales, boost revenue (in subject lines specifically).
**Domain warm-up rule:** New domains sending >50 emails/day in week 1 face inbox suppression regardless of copy quality. Flag this as Critical when domain age is provided.
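The safe/warning/critical zones in the table above are mechanical enough to sketch in code. A minimal Python illustration, assuming the signal counts have already been extracted from the email; the function name and dict shape are assumptions, not part of the skill spec:

```python
def classify_signals(counts: dict[str, int]) -> dict[str, str]:
    """Map each deliverability signal count to Safe/Warning/Critical
    per the hygiene table. Word count and spam-trigger words are
    handled separately: their safe zones are a range and a word list
    rather than a simple per-email ceiling."""
    # signal: (max count still Safe, max count still Warning);
    # anything above the Warning ceiling is Critical.
    zones = {
        "links": (1, 2),
        "images": (0, 1),
        "caps_words": (0, 1),
        "exclamations": (0, 1),
    }
    out = {}
    for signal, n in counts.items():
        safe, warn = zones[signal]
        out[signal] = "Safe" if n <= safe else "Warning" if n <= warn else "Critical"
    return out
```

A step with three tracked links and one exclamation mark would come back `{"links": "Critical", "exclamations": "Warning"}`, matching the table's thresholds.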
### 2. Reply-Rate Structural Framework
**The PACT test — apply to every email body:**
- **P**roblem-first: Does the opening line name the prospect's problem before mentioning the sender?
- **A**ngle-specific: Is this angle unique to this step, or repeated from a prior touch?
- **C**TA-single: Is there exactly one ask, and is it low-friction (15-minute call, not "let's explore a partnership")?
- **T**okens-active: Are personalization tokens used and populated (not left as `{{company}}` placeholders)?
**Benchmark reply rates by sequence position:**
| Step | Industry Median | Top Quartile |
|---|---|---|
| Step 1 (cold open) | 2–4% | 6–8% |
| Step 2 (follow-up) | 1–3% | 4–6% |
| Step 3 | 0.5–2% | 2–4% |
| Step 4+ | <1% | 1–2% |
A sequence with cumulative reply rate below 3% after 4 touches is a structural problem, not a volume problem.
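The 3% structural-problem rule can be expressed directly. A sketch under the assumption that cumulative reply rate means total unique replies across all touches divided by prospects contacted; the function name is illustrative:

```python
def is_structural_problem(replies_per_step: list[int], prospects: int) -> bool:
    """Cumulative reply rate below 3% after 4+ touches indicates a
    structural problem, not a volume problem."""
    if len(replies_per_step) < 4 or prospects == 0:
        return False  # rule only applies after 4 touches
    cumulative = sum(replies_per_step) / prospects
    return cumulative < 0.03
```

For example, 8 total replies across 4 touches to 400 prospects is a 2% cumulative rate and trips the rule; 23 replies does not.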
### 3. Follow-Up Logic Decision Framework
Every follow-up step must clear at least one of these three gates:
| Gate | Description | Example |
|---|---|---|
| New Angle | Introduces a pain point, use case, or buyer persona not mentioned in prior steps | Step 1: efficiency angle → Step 2: risk/cost angle |
| New Evidence | Adds a case study, stat, or social proof not present in earlier steps | Step 3: "We helped [similar company] reduce X by Y%" |
| New CTA | Changes the ask type, friction level, or format | Step 1: 15-min call → Step 4: "Worth a quick reply?" |
**Fail state:** A follow-up that restates the same problem, references the same proof, and makes the same ask is not a follow-up — it is a duplicate. Flag as Critical.
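The three-gate test reduces to a disjunction per follow-up step. A sketch assuming the auditor has already labeled each step's angle, evidence, and CTA type; the `Step` dataclass and its field values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Step:
    angle: str     # e.g. "efficiency", "risk/cost"
    evidence: str  # e.g. "acme-case-study", or "" if none
    cta: str       # e.g. "15-min call", "quick reply"

def clears_gate(step: Step, prior: list[Step]) -> bool:
    """A follow-up passes if it introduces a new angle, new evidence,
    or a new CTA relative to every prior step; otherwise it is a
    duplicate and should be flagged Critical."""
    new_angle = all(step.angle != p.angle for p in prior)
    new_evidence = bool(step.evidence) and all(step.evidence != p.evidence for p in prior)
    new_cta = all(step.cta != p.cta for p in prior)
    return new_angle or new_evidence or new_cta
```

A step that repeats the angle, proof, and ask of step 1 fails all three checks; changing any one of the three is enough to pass.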
---
## Edge Case Handling
### Edge Case 1: The "Breakup Email" as Final Step
Some sequences end with a breakup email ("I won't reach out again unless…"). These emails are structurally distinct and should NOT be evaluated against standard reply-rate body rules. A breakup email is scored only on: (a) does it create genuine perceived loss? (b) is the CTA the lowest possible friction (reply yes/no)? (c) is the tone professional rather than passive-aggressive? Do not penalize a breakup email for being short (<50 words) — brevity is a feature here.
**Protocol:** When the final step is identified as a breakup email, note this in the audit summary and apply breakup-specific criteria only to that step.
### Edge Case 2: Hyper-Personalized First Steps with No Template Structure
Some SDRs submit step 1 as a fully custom, one-off email written for a specific account rather than a reusable template. Deliverability rules still apply in full. Reply-rate structure still applies. However, the PACT test's "Tokens-active" criterion is N/A — there are no tokens by design. Note this in the findings. Do not penalize for missing tokens; instead, flag whether the personalization is substantive (references something specific to the company) or surface-level (only uses the prospect's first name).
Examples · MD
# Examples: Cold Email Sequence Audits
*Load condition: activate on complex requests (5+ step sequences), edge cases, or when the SDR asks to see the reasoning behind a specific finding.*
---
## Example 1: 4-Step SaaS SDR Sequence — VP of Engineering ICP
### Input
**Step 1 — Day 1**
Subject: Quick question about {{company}}'s deployment pipeline
Body: Hi {{first_name}}, I noticed {{company}} recently expanded your engineering team on LinkedIn — congrats on the growth. Most VP Engs I talk to at companies your size tell me their deployment pipeline becomes a bottleneck right around 30–50 engineers. Is that something you're running into? Happy to show you how [Company] helped Acme Corp cut deploy time by 40%. Worth a 15-minute call this week?
**Step 2 — Day 4**
Subject: Re: Quick question about {{company}}'s deployment pipeline
Body: Hi {{first_name}}, just following up on my last note. Wanted to make sure this didn't get buried. We've helped companies like yours cut deployment time significantly. Worth a quick chat?
**Step 3 — Day 9**
Subject: Deployment bottlenecks at {{company}}
Body: Hi {{first_name}}, I know you're busy so I'll be brief. We work with fast-growing engineering teams to eliminate deployment friction. Click here to see our ROI calculator: [link1]. You can also check our case studies here: [link2] and our product page here: [link3]. Let me know if you want to connect!
**Step 4 — Day 18**
Subject: Closing the loop
Body: Hi {{first_name}}, I don't want to keep cluttering your inbox. If the timing isn't right, totally understand. But if deployment speed ever becomes a priority, I'd love to reconnect. Feel free to book time here: [booking link].
---
### Reasoning Block
**Step 1 analysis:** Opens with a prospect-specific observation (LinkedIn expansion) → passes Problem-first. Single CTA (15-min call) → passes. One link implied but not shown → passes deliverability. Word count: ~85 words → passes. Subject line 49 characters → passes. One concern: "congrats on the growth" is a common opener SDR tools auto-generate from LinkedIn signals; flag as surface-level personalization, not substantive. Overall: Strong step with one Low finding.
**Step 2 analysis:** "Just following up on my last note" → fails follow-up logic gate. No new angle (same deployment problem), no new evidence (same Acme reference implied), no new CTA (same ask type). "Didn't get buried" is filler. Word count: ~40 words → below 50 threshold. This step adds zero new information and re-asks the same question. Critical failure on follow-up logic.
**Step 3 analysis:** Three tracked links in one email → Critical deliverability failure. "Click here" is a known spam trigger → Critical deliverability failure. "I know you're busy so I'll be brief" is a worn opener that signals mass outreach. The step does introduce product features (ROI calculator, case studies) which could be a new angle, but the deliverability issues override any structural value. Exclamation mark → warning. Word count: ~65 words with links → borderline.
**Step 4 analysis:** Structurally a breakup email → apply breakup criteria only. "Closing the loop" subject line is overused but functional. Creates mild loss ("I don't want to keep cluttering your inbox"). CTA is a booking link → slightly high-friction for a breakup; a "reply yes/no" would be lower friction. Tone is professional. Passes breakup criteria with one Low finding.
**Cadence spacing:** Day 1 → Day 4 (3 days, acceptable) → Day 9 (5 days, acceptable) → Day 18 (9 days, acceptable). Cadence passes.
---
### Output
**SEQUENCE AUDIT REPORT**
ICP: VP Engineering | Steps: 4 | Total send spread: Day 1–18
**Score: 44/70** — Below threshold. Rerun after addressing Critical findings.
| Dimension | Score | Notes |
|---|---|---|
| Deliverability | 4/10 | Step 3 has 3 links + "click here" trigger word |
| Subject Lines | 7/10 | Steps 1 and 3 solid; Step 2 "Re:" pattern overused |
| Opening Lines | 6/10 | Step 1 strong; Steps 2–3 generic |
| Follow-Up Logic | 3/10 | Step 2 is a full restate with no new angle, evidence, or CTA |
| CTA Quality | 7/10 | Steps 1 and 4 appropriate; Step 3 buried under link overload |
| Cadence Spacing | 10/10 | Timing is within benchmarks throughout |
| Persona-Message Fit | 7/10 | VP Eng language present in Step 1; fades by Step 3 |
**Critical Findings**
1. **Step 3 — 3 tracked links:** Remove all links except one. If the ROI calculator is highest-value, use that. Case studies and product page links belong in a sales conversation, not a cold sequence.
2. **Step 3 — "click here":** Replace with the destination name. "See how Acme reduced deploy time" instead of "Click here."
3. **Step 2 — Full restate:** Rewrite entirely. Introduce a new angle (e.g., cost of slow deploys on release velocity, or a second case study from a different company type). Change the CTA to something lower-friction than a call — "Is this even on your radar for this quarter?" is enough.
**High Findings**
4. **Step 1 — Surface personalization:** "Congrats on the growth" reads as automated. Replace with a specific observation: a recent blog post, a job posting for a specific role, or a product launch. One sentence of genuine research outperforms five lines of generic copy.
**Low-Priority Notes**
5. Step 4 CTA is a booking link — for a breakup email, "Reply with a yes or no — worth 10 seconds" drives more responses than asking someone to open a calendar.
---
**Change Log**
- Step 2 flagged Critical (restate failure) — most common SDR sequence error
- Step 3 flagged Critical (2 deliverability vectors) — link overload is the #1 inbox placement killer in cold sequences
- Step 1 downgraded from "Strong" to "High finding" after personalization quality check
- Cadence passed without changes
---
## Example 2: 5-Step Cybersecurity Vendor Sequence — CISO ICP
### Input
**Step 1 — Day 1**
Subject: {{company}}'s security posture after the SolarWinds aftermath
Body: Hi {{first_name}}, supply chain attacks have changed how CISOs think about vendor risk. Most of the security leaders I speak with at {{industry}} companies are still reconciling their third-party exposure after 2020. Have you completed a full vendor risk re-assessment since then? [Company] helps security teams run continuous third-party risk monitoring without adding headcount. 15 minutes to walk through how?
**Step 2 — Day 3**
Subject: One question on third-party risk
Body: {{first_name}}, quick question: when a new vendor requests access to your environment, how long does the current review process take? We typically see it running 3–6 weeks at companies your size — and that's a blocker when the business wants to move fast. Worth a short call to compare notes?
**Step 3 — Day 8**
Subject: Case study: how [Similar Company] cut vendor review time by 60%
Body: {{first_name}}, I promised I'd keep it short. [Similar Company] — a {{company_size}}-person fintech — reduced their average vendor review cycle from 28 days to 11 days using [Company]'s continuous monitoring. Full case study here: [link]. Does that timeline resonate with what you're dealing with?
**Step 4 — Day 15**
Subject: Different angle — your team's bandwidth
Body: {{first_name}}, I've been focused on process speed but maybe that's not the real constraint. If your security team is stretched on headcount, adding a new monitoring workflow might sound like more work, not less. Our customers typically see 70% reduction in analyst hours on vendor reviews within 90 days. Is bandwidth actually the bigger problem here?
**Step 5 — Day 25**
Subject: Last note from me
Body: {{first_name}}, I've reached out a few times without hearing back — I'll take that as a sign the timing isn't right. If vendor risk monitoring ever moves up your priority list, I'm one reply away. No pitch, just a conversation.
---
### Reasoning Block
**Step 1:** Opens with a market event (SolarWinds) → relevant to CISO ICP, not generic. Problem-first structure passes. Word count: ~70 words. Single CTA. No links. Strong opener.
**Step 2:** Introduces a completely new angle (process speed / days-to-review) not present in Step 1 → passes follow-up logic gate (New Angle). The opener is a genuine question that invites a reply even without a call. CTA is conversational ("compare notes") → well-calibrated for a CISO who will not book a call from a cold email easily.
**Step 3:** New Evidence gate → case study with specific numbers (28 days → 11 days) and a named company archetype. One link. Word count: ~55 words. Subject line flags the case study explicitly → sets correct expectation. This is clean.
**Step 4:** Pivots to a new pain point (bandwidth, not process speed) → strong New Angle execution. The self-aware opener ("I've been focused on process speed but maybe that's not the real constraint") demonstrates prospect-thinking rather than sender-thinking. The 90-day ROI claim is specific. No links. Excellent structural step.
**Step 5:** Breakup email → apply breakup criteria. "Last note from me" subject line is clean and honest. No booking link — relies on a plain reply, lowest possible friction. Tone is professional and non-guilt-tripping. Passes all breakup criteria.
**Cadence spacing:** Day 1 → Day 3 (2 days — tight but acceptable for early sequence) → Day 8 (5 days) → Day 15 (7 days) → Day 25 (10 days). The Day 1–Day 3 gap is on the lower boundary; flag as Low.
---
### Output
**SEQUENCE AUDIT REPORT**
ICP: CISO | Steps: 5 | Total send spread: Day 1–25
**Score: 65/70** — Passes threshold. Minor optimizations available; sequence is cleared for launch.
| Dimension | Score | Notes |
|---|---|---|
| Deliverability | 9/10 | One link total across 5 steps; all word counts in range |
| Subject Lines | 9/10 | Step 3 and 4 especially strong; Step 5 "Last note" is clean |
| Opening Lines | 10/10 | Every step opens with prospect context, not sender context |
| Follow-Up Logic | 10/10 | Each step clears a distinct gate: new angle, new evidence, angle shift, breakup |
| CTA Quality | 9/10 | CTAs properly calibrated for CISO risk-aversion; Step 2 "compare notes" is excellent |
| Cadence Spacing | 8/10 | Day 1–Day 3 is slightly compressed; consider pushing to Day 4 |
| Persona-Message Fit | 10/10 | SolarWinds reference, vendor risk language, headcount sensitivity all CISO-specific |
**Critical Findings**
None.
**High Findings**
None.
**Low-Priority Notes**
1. Step 1 → Step 2 gap (Day 1 to Day 3) is 48 hours. With CISO-level prospects, consider Day 4–5 to allow the first email to land and be seen before the follow-up arrives.
2. Step 3 names "{{company_size}}-person fintech" — confirm this token is populated in your sequencing tool or replace with a hardcoded descriptor.
**Change Log**
- No rewrites required
- One cadence adjustment recommended (Low)
- One token hygiene flag (Low)
- Sequence approved for launch as-is with optional minor fixes
Audit checklist · MD
# Audit Checklist: Cold Email Sequence Quality Gate
*Apply this checklist at Step 9 of every audit. Report dimension scores in the final output.*
---
## Score Table
| Dimension | Max | Your Score | Notes |
|---|---|---|---|
| Deliverability | 10 | | |
| Subject Lines | 10 | | |
| Opening Lines | 10 | | |
| Follow-Up Logic | 10 | | |
| CTA Quality | 10 | | |
| Cadence Spacing | 10 | | |
| Persona-Message Fit | 10 | | |
| **TOTAL** | **70** | | |
**Threshold:** 56/70 required to approve sequence for launch without mandatory revisions.
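Summing the table and applying the 56/70 threshold can be sketched directly; the helper function below is illustrative, not part of the checklist:

```python
THRESHOLD = 56  # out of 70

def gate(scores: dict[str, int]) -> str:
    """Sum the seven dimension scores (each 0-10) and compare against
    the 56/70 launch threshold."""
    assert len(scores) == 7 and all(0 <= s <= 10 for s in scores.values())
    total = sum(scores.values())
    verdict = "approve" if total >= THRESHOLD else "revise"
    return f"{total}/70 - {verdict}"
```

Example 1's audit (4, 7, 6, 3, 7, 10, 7) sums to 44 and returns a revise verdict; Example 2's 65 clears the gate.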
---
## Scoring Guide
### 1. Deliverability (10 pts)
- **10:** Zero spam-trigger words across all steps; max 1 link per email; zero images except optional logo; no ALL CAPS; no exclamation marks; all word counts 50–125.
- **7:** One minor trigger word in body (not subject line); OR word count 1–2 steps outside range; OR one step with 2 links.
- **4:** Trigger word in a subject line; OR any step with 3+ links; OR multiple word count violations; OR images beyond a single logo.
- **0:** Multiple trigger words in subject lines; OR 4+ links in any step; OR "click here" present; OR sequence is flagged by common spam heuristics on 3+ vectors simultaneously.
### 2. Subject Lines (10 pts)
- **10:** All subject lines ≤50 characters; each is unique; at least one uses a curiosity gap or named specificity; none repeat a prior step's subject line; no spam triggers.
- **7:** One subject line over 50 characters; OR one subject line is generic but not harmful; OR "Re:" threading used once with clear intent.
- **4:** Two or more subject lines are generic (e.g., "Following up," "Checking in," "Touching base"); OR one subject line contains a trigger word; OR subject lines across steps are nearly identical.
- **0:** Majority of subject lines are generic follow-up phrases; OR subject line contains a high-risk trigger word; OR subject lines do not vary in structure or approach.
### 3. Opening Lines (10 pts)
- **10:** Every step opens with the prospect's context, problem, or an observation about their world — not the sender's company name or product.
- **7:** Most steps open prospect-first; one step leads with sender context but pivots quickly.
- **4:** Half or more of the steps open with "I" or the sender's company name; OR openings are generic ("Hope this finds you well," "I wanted to reach out").
- **0:** Every step opens with the sender's name, company, or product; no prospect-first language present anywhere.
### 4. Follow-Up Logic (10 pts)
- **10:** Every follow-up step clears at least one gate: New Angle, New Evidence, or New CTA — confirmed for each step individually.
- **7:** All but one step clears a gate; the exception adds minor variation (slightly different wording on same pain point) but doesn't fully restate.
- **4:** One step is a clear restate (same problem, same proof, same ask); remaining steps have adequate variation.
- **0:** Two or more steps are restates; OR the sequence is essentially the same email sent multiple times with slight wording changes.
### 5. CTA Quality (10 pts)
- **10:** Every step has exactly one CTA; CTAs are calibrated to sequence position (lower friction in later steps and breakup); no step asks for more than one action.
- **7:** One step has two CTAs (e.g., "call or reply"); OR one CTA is slightly high-friction for its position but not disqualifying.
- **4:** Multiple steps have competing CTAs; OR the primary ask in multiple steps is "let's set up a demo" without building to it; OR a breakup email uses a high-friction booking link.
- **0:** No clear CTA in one or more steps; OR every step asks for a full demo/meeting with no warm-up logic; OR CTAs are buried at the bottom of long paragraphs.
### 6. Cadence Spacing (10 pts)
- **10:** Spacing follows or improves on baseline (Day 1, 3–4, 7–9, 14–16, 21–25 for a 5-step); no gap exceeds 10 days; no consecutive steps are fewer than 2 days apart.
- **7:** One gap is slightly outside baseline (e.g., Day 1 → Day 2, or a 12-day gap) but sequence is otherwise well-spaced.
- **4:** Two spacing violations; OR sequence is front-loaded (3 steps in first 5 days); OR a gap of 15+ days exists mid-sequence.
- **0:** Cadence is random or shows no pattern; OR multiple steps are 1 day apart; OR a gap of 20+ days mid-sequence likely causes prospect to forget prior contact.
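The spacing rules above reduce to a single pass over consecutive send-day offsets. A minimal sketch; the flag wording is illustrative:

```python
def cadence_flags(send_days: list[int]) -> list[str]:
    """Check inter-step gaps against the cadence rules: flag any gap
    over 10 days and any consecutive touches fewer than 2 days apart."""
    flags = []
    for i in range(1, len(send_days)):
        gap = send_days[i] - send_days[i - 1]
        if gap > 10:
            flags.append(f"step {i + 1}: {gap}-day gap exceeds 10 days")
        elif gap < 2:
            flags.append(f"step {i + 1}: only {gap} day(s) after prior touch")
    return flags
```

Example 1's schedule (Day 1, 4, 9, 18) produces no flags; a Day 1, 2, 15 schedule would flag both a compressed touch and an oversized gap.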
### 7. Persona-Message Fit (10 pts)
- **10:** Pain points, proof assets, and vocabulary are specific to the stated ICP title and industry; sequence would not work unmodified for a different persona.
- **7:** Most language is persona-specific; one or two sentences are generic enough to apply to any prospect.
- **4:** Core problem statement is correct but proof assets (case studies, stats) are not ICP-specific; language could apply to any B2B buyer.
- **0:** No persona-specific language anywhere; sequence reads as a generic template; named ICP could be swapped for any other without changing the copy.
---
## Fix Priorities (if score < 56)
Address dimensions in this order:
1. **Deliverability** — A sequence that can't reach the inbox cannot be fixed by better copy.
2. **Follow-Up Logic** — Restate failures are the most common structural cause of declining reply rates across a sequence.
3. **Opening Lines** — Prospect-first openers have the highest per-word impact on reply rate.
4. **Subject Lines** — Subject lines control open rate; a strong body cannot recover from a weak or spammy subject.
5. **CTA Quality** — A confused or high-friction ask is the final step where reply rate dies.
6. **Persona-Message Fit** — Generic sequences underperform even with correct cadence and structure.
7. **Cadence Spacing** — Address last; spacing rarely drives single-digit reply rates on its own.
---
## Rerun Protocol (score < 56)
1. Note which dimensions scored 4 or below — these are mandatory rewrites, not suggestions.
2. For each failing dimension, apply the specific rewrite recommendation from the audit findings.
3. Do not rerun the full audit on a partially revised sequence — revise all failing dimensions before resubmitting.
4. After revisions, resubmit the complete updated sequence (all steps, all subject lines) for a clean re-audit.
5. A rerun audit should be scored independently — do not carry forward scores from the prior run.
6. If Deliverability scores 4 or below on rerun, pause sequence launch until that dimension reaches 7 minimum before proceeding.
Readme · MD
# Cold Email Sequence Auditor — Deployment Guide
---
## Installation
```
cold-email-audit-skill/
├── SKILL.md                   ← Primary skill file (load first)
├── README.md                  ← This file
├── references/
│   ├── context.md             ← Domain rules and benchmarks
│   └── examples.md            ← Worked audit examples
└── scripts/
    └── audit-checklist.md     ← 7-dimension scoring gate
```
Place the entire `cold-email-audit-skill/` folder in your Claude skills directory. No additional dependencies required.
---
## Activation Phrases
Use any of these to activate the skill:
1. "Audit this cold email sequence for deliverability and reply rate issues."
2. "My sequence reply rate has dropped to 2% — can you diagnose what's wrong?"
3. "Run a full audit on these 4 emails before I launch them in Outreach."
4. "Check my follow-up logic — I think steps 3 and 4 might be too similar."
5. "Score this sequence and tell me what to fix before I take it live."
6. "I inherited this sequence from a teammate — audit it before I start sending."
7. "I wrote a breakup email as my final step — does it work, and does the full sequence hold up?"
8. "Subject lines keep getting flagged as spam — can you audit the whole sequence?"
---
## What to Include When Activating
For the most accurate and actionable audit, provide:
- **All sequence steps** — subject line + body text for each step, in order.
- **Send-day offsets** — the day number each step sends (Day 1, Day 4, Day 9, etc.).
- **ICP or persona** — the title and industry of your target prospect (e.g., "VP of Operations, logistics companies, 200–1000 employees").
- **Context if available** — domain age, current reply rate, known issues, or specific concerns you want prioritized.
---
## Output Modes
| Mode | Triggers | Delivers |
|---|---|---|
| **Lite** | Single-step email submitted; SDR asks for a quick check on one element (e.g., subject line only) | Targeted findings on the requested element only; no full score table; 3–5 bullet findings max |
| **Standard** | 2–5 step sequence submitted with ICP stated | Full audit report: score table, Critical and High findings, specific rewrite recommendations for failing dimensions |
| **Full** | 5+ step sequence; complex ICP; edge cases present (e.g., breakup email, new domain, inherited sequence); OR score < 56 on Standard run | Standard output plus: step-by-step reasoning block, worked rewrite examples for Critical findings, rerun protocol if below threshold |
---
## Quick Reference Card
```
INPUT: All sequence steps (subject + body), send-day offsets, ICP/persona description
OUTPUT: Score table (7 dimensions / 70 pts), Critical findings, rewrite directives, approval status
SCOPE: Deliverability hygiene, reply-rate structure, follow-up logic, cadence spacing, persona fit
LIMIT: Does not configure ESPs, write net-new sequences, review legal compliance, or set up DNS/DMARC
```
Vague assistant prompts break under real workloads — wrong structure, no validation, nothing to load at runtime. This prompt generates a complete 5-file skill bundle: activation logic, domain rules, worked examples, a self-scoring quality gate, and deployment docs — all tailored to your role and use case.
🔹 5-file bundle, ready to deploy
✅ Token-aware architecture built in
🔹 Self-scoring quality gate included
✅ Worked examples with full reasoning
🔹 Fits any role, domain, or workflow