PromptBase

Prompt Details

Model
(claude-4-7-opus)
Token size
1,819
Example input
[STACK_SUMMARY]: Next.js 14 App Router with TypeScript, Prisma, and Postgres on Supabase. Deployed on Vercel with Stripe for billing, Clerk for auth, Resend for transactional email, and Upstash Redis for rate limiting. No background job system; everything runs inside API routes.

[CRITICAL_USER_FLOWS]: (1) user signup and onboarding through Clerk and Supabase user creation; (2) Stripe checkout and subscription activation through checkout sessions and a webhook handler; (3) core dashboard data load reading from 4 Supabase tables; (4) project creation flow with an API write and Stripe usage metering; (5) transactional email delivery for password resets and invoice receipts through Resend.

[INCIDENT_HISTORY]: Three months ago the Stripe webhook receiver broke for 6 hours because the endpoint URL was accidentally changed during a refactor, and no customers got paid feature access until we noticed. Two months ago Vercel deployed a broken build because a TypeScript error was ignored, and production crashed for 25 minutes until we rolled back manually. Three weeks ago Supabase had a 40-minute regional outage that took down the entire dashboard, and we had no status page to update customers.

[MONITORING_SETUP]: Sentry for error tracking with email alerts on new issues. Vercel Analytics for traffic visibility. No uptime monitoring configured. No database metric alerts set up. No public status page. Founder manually checks the Supabase dashboard after each deploy.

[BUSINESS_PROFILE]: $9,000 MRR with 280 paying customers across Free, Pro ($19), and Team ($49) tiers. Around 1,200 total users. Solo founder is the only engineer. Business-hours traffic, 9am–6pm US timezones, weekdays. Customers use the dashboard for daily project management work and cannot tolerate multi-hour outages.
Example output
### πŸ—ΊοΈ Failure Surface Overview Based on your stack (Next.js on Vercel, Supabase Postgres, Stripe, Clerk, Resend, Upstash) and your incident history, these are the failure modes most likely to take you down: 1. **Stripe webhook handler silently broken** β€” Checkout completes but customers don't get paid features. Happens every 6–12 months per SaaS at your stage, especially after refactors. You've already hit this once. 2. **Bad Vercel deploy reaches production** β€” TypeScript errors ignored, env var missing, or runtime crash on boot. Happens 2–4x per year for solo founders shipping fast. You've hit this once. 3. **Supabase regional outage or degraded Postgres** β€” Dashboard fully down. Happens 1–2x per year on Supabase's free/Pro tiers. You've hit this once. 4. **Supabase connection pool exhaustion** β€” Dashboard slow or 500-ing, but Supabase itself is "up." Very common as you cross ~500 concurrent users, and Next.js serverless + Prisma is famously prone to this. 5. **Clerk auth outage or misconfiguration** β€” Nobody can sign in or sign up. Clerk has had 2–3 notable incidents per year. Happens maybe once per year per customer. 6. **Resend delivery failure** β€” Password resets and receipts stop sending. Customers can't log in after reset, finance gets noisy. Happens 1–2x per year. 7. **Upstash Redis outage or rate-limit misfire** β€” Either rate limiter fails open (abuse risk) or fails closed (locks out legitimate users). Rare but disruptive. 8. **Runaway API route / slow query chewing compute** β€” One bad query under load spikes Vercel function duration and Supabase CPU. Common as you add features without query review. The top 5 runbooks below cover #1–#5, which collectively represent ~90% of the outage-minutes you're likely to face in the next 12 months. 
---

### 🎯 Top 5 Runbooks

---

#### **Runbook 1 β€” Stripe Webhooks Failing / Customers Not Getting Paid Access**

**Severity: SEV1** (revenue-impacting, silent)

**Trigger Conditions**
- Customer email: "I paid but I'm still on Free" or "my upgrade didn't go through"
- Sentry error on the `/api/webhooks/stripe` route
- Stripe Dashboard β†’ Developers β†’ Webhooks shows failed delivery attempts (red), or 0 recent successful events during a period when checkouts are happening
- Your own test purchase doesn't unlock Pro features

**Diagnosis Steps**
1. Open Stripe Dashboard β†’ Developers β†’ Webhooks β†’ click your production endpoint. Check the "Recent events" tab.
   - **Good:** green 2xx responses, recent timestamps matching your checkout volume.
   - **Bad:** 4xx/5xx responses, or no events when you expect them.
2. Click any failed event β†’ "View response." Read the actual error body. Common culprits:
   - `400 Webhook signature verification failed` β†’ `STRIPE_WEBHOOK_SECRET` mismatch.
   - `404` β†’ endpoint URL is wrong (this is what bit you 3 months ago).
   - `500` β†’ your handler is crashing. Check Sentry.
3. Confirm the endpoint URL in Stripe matches your current route. It should be `https://<yourdomain>/api/webhooks/stripe` (or wherever your handler lives).
4. In Vercel β†’ your project β†’ Settings β†’ Environment Variables, confirm `STRIPE_WEBHOOK_SECRET` exists for Production and matches the "Signing secret" shown in the Stripe Dashboard for that endpoint.
5. In the Supabase SQL editor, run the query for the affected customer:
   ```sql
   SELECT id, email, stripe_customer_id, subscription_status, plan, updated_at
   FROM users
   WHERE email = 'customer@example.com';
   ```
   If `subscription_status` is stale but Stripe shows them active, you have a webhook delivery gap.

**Recovery Actions**
1. **If the URL is wrong:** In Stripe Dashboard β†’ Webhooks β†’ edit endpoint β†’ fix URL β†’ save.
2. **If the secret is wrong:** Copy the signing secret from Stripe β†’ Vercel Env Vars β†’ update `STRIPE_WEBHOOK_SECRET` for Production β†’ redeploy (env var changes require a redeploy on Vercel).
3. **If the handler is crashing:** Check Sentry, identify the bug, fix it, deploy. If you can't fix it in under 10 minutes, revert the last deploy: Vercel Dashboard β†’ Deployments β†’ find the last known-good deployment β†’ three dots β†’ "Promote to Production."
4. **Replay failed events:** Stripe Dashboard β†’ Webhooks β†’ click the endpoint β†’ for each failed event, click "Resend." Replay the last 24–48 hours. Stripe retains events for 30 days.
5. **Manually reconcile affected customers** who can't wait for the replay. In Stripe, find their `customer.subscription.created` or `checkout.session.completed` event; manually update your `users` table:
   ```sql
   UPDATE users
   SET subscription_status = 'active', plan = 'pro', stripe_subscription_id = 'sub_XXX'
   WHERE stripe_customer_id = 'cus_XXX';
   ```

**Decision Tree**
- **If the replay succeeds but some customers still report issues:** they hit the gap during an event type you don't handle. Search Stripe events for their `customer_id`, check the event types, and expand your handler to cover `customer.subscription.updated` and `invoice.paid` if missing.
- **If Stripe webhooks are delivering 2xx but customers still aren't upgraded:** the handler is silently swallowing errors. Check that your handler actually writes to Supabase and doesn't just `return 200` on unknown event types. Add Sentry `captureException` inside every `catch`.

**Expected Time to Recovery:** 15–45 min to restore delivery; 1–3 hours to fully reconcile affected customers, depending on how many.

**Communication Checkpoints**
- **T+0:** Post to the status page: "Investigating delayed subscription activation."
- **T+15 min:** If not resolved, email affected customers directly (you can query them from Stripe by recent `checkout.session.completed` events in the incident window).
- **T+resolution:** Status page update + a personal apology email to affected customers confirming their access is now active. Consider a 1-month credit for customers who waited >1 hour.

---

#### **Runbook 2 β€” Bad Vercel Deploy Crashing Production**

**Severity: SEV1** (full outage) or **SEV2** (degraded)

**Trigger Conditions**
- Sentry alert floods within 1–5 minutes of a deploy
- Customer reports: "site is down," "blank page," "500 error"
- Vercel Dashboard β†’ Deployments shows the latest build as "Ready" but errors are spiking in Analytics
- `/` returns 500 or a white screen in your own browser

**Diagnosis Steps**
1. **First action: roll back. Diagnose after.** Open Vercel Dashboard β†’ your project β†’ Deployments.
2. Find the current production deployment (top of the list, "Current" badge).
3. Scroll to the most recent deployment from before the incident that has a green checkmark β€” your last known-good build.
4. Before rolling back, glance at Sentry to confirm errors started with the latest deploy (timestamps should match the deploy time).

**Recovery Actions**
1. In the Vercel Deployments list, hover over the last known-good deployment β†’ click the three-dot menu β†’ **"Promote to Production."** Confirm.
2. Production traffic swaps to that deployment within 10–30 seconds. Hard-refresh your site to confirm.
3. Verify key flows manually: load the dashboard, trigger a login, visit the Stripe checkout page.
4. Once stable, investigate the broken deploy in a local branch. Do not re-deploy until the root cause is identified and a test exists to prevent recurrence.
5. If the broken deploy introduced a DB migration via Prisma, check whether the migration ran:
   ```bash
   npx prisma migrate status
   ```
   If the schema changed and the rolled-back app doesn't match it, you may need to manually reverse the migration in the Supabase SQL editor before the rollback is fully stable.
**Decision Tree**
- **If the rollback also fails (rare, but happens when a shared resource was mutated):** the issue is not in the code but in a dependent system. Skip to the relevant runbook (Stripe, Supabase, Clerk).
- **If the bad deploy ran a Prisma migration that the old code can't handle:** you have a forward-only problem. Options: (a) write a compatibility shim in a hotfix and deploy forward, or (b) manually reverse the migration in the Supabase SQL editor using the `down` equivalent, then roll back. Prefer the forward fix unless the migration is actively corrupting data.

**Expected Time to Recovery:** 3–8 min for a pure code rollback; 20–40 min if a migration is involved.

**Communication Checkpoints**
- **T+0 (at rollback):** Status page: "Investigating errors on the dashboard β€” rollback in progress."
- **T+5 min (post-rollback):** Status page: "Service restored. Monitoring."
- **T+30 min:** Status page: "Resolved." No customer email needed unless the outage exceeded 15 min.

---

#### **Runbook 3 β€” Supabase Regional Outage or Postgres Down**

**Severity: SEV1** (full outage)

**Trigger Conditions**
- Sentry: flood of `PrismaClientInitializationError`, `ECONNREFUSED`, or `Can't reach database server`
- Dashboard load fails for all users
- Supabase Dashboard shows the project status as degraded, or status.supabase.com shows an incident
- You can't connect via `psql` or the Supabase SQL editor

**Diagnosis Steps**
1. Open https://status.supabase.com β€” check for active incidents in your region.
2. Open Supabase Dashboard β†’ your project β†’ Database β†’ check the project status indicator.
3. Try the Supabase SQL editor with a trivial query: `SELECT 1;`
   - **Good:** returns `1` instantly.
   - **Bad:** times out or errors.
4. Check Supabase Dashboard β†’ Database β†’ Connection Pooling. Confirm the pooler is up.
5. Verify your app's `DATABASE_URL` points to the connection pooler (port 6543 for the Supabase transaction pooler), not the direct connection (port 5432). Serverless on Vercel should always use the pooler.

**Recovery Actions**
- **If it's a confirmed Supabase outage (status page is red):** you cannot fix this. Your job is communication and patience.
  1. Post to the status page immediately: "Dashboard unavailable due to a Supabase incident. Following their status page."
  2. Tweet: "We're down due to a Supabase outage. Following https://status.supabase.com. No action needed from you; your data is safe."
  3. Subscribe to Supabase's status page for updates.
  4. Do **not** attempt to fail over, restore from backup, or point at a different project. These actions cause more damage than the outage itself.
- **If Supabase is green but you can't connect:**
  1. Check Supabase Dashboard β†’ Database β†’ Pooler β†’ active connections. If near max (default 200 for Pro), see Runbook 4.
  2. Test connectivity from your own machine with the same `DATABASE_URL`. If it works locally but not on Vercel, the issue is between Vercel and Supabase β€” check Supabase's allowed IPs (should be `0.0.0.0/0` for Vercel serverless).

**Decision Tree**
- **If the outage exceeds 30 min:** put up a read-only static marketing page on the root domain by deploying a minimal Next.js page that doesn't touch the database. Use Vercel's "Instant Rollback" to a previously deployed static version, or deploy a holding page.
- **If data appears corrupted after recovery:** restore from Supabase Point-in-Time Recovery (available on the Pro plan). Supabase Dashboard β†’ Database β†’ Backups β†’ PITR β†’ pick a timestamp just before the incident. This creates a new database; you must then swap `DATABASE_URL`.

**Expected Time to Recovery:** Out of your hands for Supabase incidents; Supabase historically restores in 30–90 min. Recovery from your own misconfiguration: 15–30 min.

**Communication Checkpoints**
- **T+5 min:** Status page posted, tweet sent.
- **T+30 min:** Status page update with Supabase's ETA if they've posted one.
- **T+60 min:** Direct email to all paying customers acknowledging the extended outage.
- **T+resolution:** Status page resolved, email to customers, post-mortem within 48 hours.

---

#### **Runbook 4 β€” Supabase Connection Pool Exhaustion**

**Severity: SEV2** (degraded β€” dashboard slow or timing out, but not fully down)

**Trigger Conditions**
- Sentry: `PrismaClientKnownRequestError` with "timed out fetching a new connection from the connection pool" or "too many connections for role"
- Dashboard intermittently slow; some users OK, others getting 500s
- Vercel function durations trending up in Analytics
- Supabase Dashboard β†’ Database β†’ Pooler shows active connections near max

**Diagnosis Steps**
1. In Supabase Dashboard β†’ Database β†’ Roles, check your app role's connection count vs. the max.
2. In the Supabase SQL editor, run:
   ```sql
   SELECT count(*), state
   FROM pg_stat_activity
   GROUP BY state;
   ```
   - **Good:** <50 active, most idle.
   - **Bad:** hundreds active, or many `idle in transaction`.
3. Identify the top queries holding connections:
   ```sql
   SELECT pid, now() - query_start AS duration, state, query
   FROM pg_stat_activity
   WHERE state != 'idle'
   ORDER BY duration DESC
   LIMIT 20;
   ```
4. Check your `DATABASE_URL` β€” it should point to the **transaction pooler** (`...pooler.supabase.com:6543`) for Vercel serverless, not the direct connection (`...supabase.co:5432`). Direct connections from serverless will blow up your pool.

**Recovery Actions**
1. **Immediate relief β€” kill long-running queries:**
   ```sql
   SELECT pg_terminate_backend(pid)
   FROM pg_stat_activity
   WHERE state = 'idle in transaction'
     AND now() - query_start > interval '5 minutes';
   ```
2. **If `DATABASE_URL` is wrong (using the direct connection):** swap to the pooler URL in Vercel env vars and redeploy.
3. **If Prisma isn't reusing connections:** confirm you have a single `PrismaClient` instance (the standard `globalThis.prisma` singleton pattern). Creating a new client per request will exhaust the pool instantly.
4. **Reduce the Prisma connection limit per instance:** add `?connection_limit=1&pool_timeout=20` to `DATABASE_URL` for serverless. Serverless functions should hold 1 connection each, not 10.
5. **Upgrade your Supabase plan** if you've outgrown the current pool size. The Pro plan allows larger pools.

**Decision Tree**
- **If killing idle transactions doesn't help and the pool stays saturated:** you have a runaway query pattern. Check recent deploys for new queries without indexes. Run `EXPLAIN ANALYZE` on suspect queries; add indexes via a migration.
- **If the problem recurs within hours:** implement rate limiting on the offending route using your existing Upstash Redis. Cap at, say, 30 req/min per user on expensive endpoints.

**Expected Time to Recovery:** 10–25 min for immediate relief. The root-cause fix (query optimization, pooler config) may take a few hours.

**Communication Checkpoints**
- **T+0:** Status page: "Investigating elevated response times on the dashboard."
- **T+15 min:** If not resolved, update: "Identified a database capacity issue, mitigating."
- **T+resolution:** "Resolved." No customer email unless the impact exceeded 30 min.

---

#### **Runbook 5 β€” Clerk Auth Outage (Users Cannot Sign In)**

**Severity: SEV1** (full user-facing block, no new revenue)

**Trigger Conditions**
- Sentry: errors from `@clerk/nextjs` middleware or `auth()` helpers
- Customer reports: "can't log in," "stuck on loading," "Clerk error"
- Sign-in page shows the Clerk error UI or spins indefinitely
- status.clerk.com shows an incident

**Diagnosis Steps**
1. Open https://status.clerk.com β€” check for active incidents.
2. Try to sign in yourself in an incognito window. Note the exact failure mode.
3. Check Clerk Dashboard β†’ your app β†’ Logs β†’ look for recent errors or spikes.
4. In Vercel env vars, confirm `CLERK_SECRET_KEY` and `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` are set for Production and match the keys in Clerk Dashboard β†’ API Keys.
5. Verify your Clerk domain config (Clerk Dashboard β†’ Domains) matches your production domain exactly.

**Recovery Actions**
- **If it's a Clerk-side outage (status page red):** you cannot fix this.
  1. Status page: "Login/signup temporarily unavailable due to our auth provider. Existing logged-in sessions are unaffected."
  2. Tweet the same.
  3. **Keep existing sessions alive** β€” do not deploy anything that invalidates sessions or re-runs `auth()` unnecessarily. Users already signed in can keep working if your app is resilient.
- **If it's a misconfiguration on your side:**
  1. If the keys are wrong, fix them in Vercel env vars β†’ redeploy.
  2. If the domain is wrong, fix it in Clerk Dashboard β†’ save β†’ no redeploy needed.
  3. If the middleware is throwing, roll back the last deploy (Runbook 2).

**Decision Tree**
- **If the Clerk outage extends beyond 1 hour:** there is no practical workaround. Auth providers are single points of failure by design. Focus entirely on communication.
- **If it's your config but you can't identify which key is wrong:** regenerate a fresh set of keys in Clerk Dashboard, update Vercel, and redeploy. This rules out mismatched-key scenarios.

**Expected Time to Recovery:** Clerk incidents: 15–120 min, out of your hands. Config issues: 5–15 min.

**Communication Checkpoints**
- **T+5 min:** Status page + tweet.
- **T+30 min:** Email all customers acknowledging the login issues and linking to the status page.
- **T+resolution:** Status page resolved + a follow-up email.

---

### πŸ’¬ Communication Templates

**1. Status page β€” ongoing incident**

> **[INVESTIGATING] β€” {SHORT_DESCRIPTION}**
> Posted {TIME} UTC
>
> We're investigating reports of {SYMPTOM, e.g., "errors loading the dashboard"}. Our team is actively working on it. Next update by {TIME + 20 min} UTC.

**2. Status page β€” resolution**

> **[RESOLVED] β€” {SHORT_DESCRIPTION}**
> Resolved {TIME} UTC. Duration: {X} minutes.
>
> The issue was caused by {ONE SENTENCE CAUSE}. Full service is restored. We'll publish a post-mortem within 48 hours. Thank you for your patience.

**3. Customer apology email β€” significant impact (>30 min outage)**

> Subject: We had an outage today β€” what happened and what we're doing
>
> Hi {FIRST_NAME},
>
> From {START_TIME} to {END_TIME} UTC today, {PRODUCT} was {down / degraded / unable to process subscriptions}. If this affected your work, I'm sorry.
>
> The cause was {ONE SENTENCE}. It's fully resolved. I'll share a detailed post-mortem in the next 48 hours.
>
> If this directly impacted you, reply to this email and I'll personally make it right.
>
> β€” {FOUNDER_NAME}, Founder

**4. Short post for X / Twitter during an incident**

> We're currently experiencing {SYMPTOM}. Investigating now β€” updates at {STATUS_PAGE_URL}. Sorry for the disruption.

**5. Internal note for contractors / collaborators (Slack / email)**

> 🚨 Incident in progress β€” {START_TIME} UTC
> Symptom: {ONE LINE}
> Suspected cause: {ONE LINE or "unknown"}
> Runbook: {RUNBOOK_NAME}
> Current action: {WHAT YOU'RE DOING}
> I'll ping you if I need hands. Don't deploy anything until resolved.

---

### πŸ“‹ Post-Mortem Template

**When to write a full post-mortem:**
- Any SEV1 incident
- Any incident lasting >30 min
- Any incident affecting >10% of paying customers
- Any incident involving data loss or billing errors

**When a lightweight incident note is enough:**
- <10 min outage caught and rolled back
- No customer reports
- No data impact

---

**Post-Mortem β€” {INCIDENT NAME}**

**Date:** {DATE}
**Duration:** {START} β†’ {END} UTC ({X} minutes)
**Severity:** SEV1 / SEV2 / SEV3
**Author:** {FOUNDER_NAME}

**Summary (2–3 sentences)**
What happened, who was affected, and how it was resolved.
**Timeline (UTC)**
- HH:MM β€” First signal (alert / customer report)
- HH:MM β€” Investigation began
- HH:MM β€” Root cause identified
- HH:MM β€” Fix deployed / mitigation applied
- HH:MM β€” Service fully restored
- HH:MM β€” Status page marked resolved

**Impact**
- Customers affected: {NUMBER or %}
- Revenue impact: {$ if measurable}
- Data impact: {none / describe}
- Support load: {number of tickets}

**Root Cause**
The underlying technical cause. Not "the deploy broke" β€” *why* did the deploy break? What allowed it to reach production?

**Contributing Factors**
Anything that made the incident worse or slower to resolve: missing monitoring, an unclear runbook, tool failure, human error while tired.

**What Went Well**
Specific things that limited the damage. Reinforce these.

**What Went Wrong**
An honest list. No blame β€” you're the only engineer.

**Action Items** (each with owner = you, and a due date)
- [ ] {ACTION} β€” by {DATE}
- [ ] {ACTION} β€” by {DATE}

**Lessons Learned**
The one or two sentences you want your future self to remember in 6 months.

---

### πŸ”” Monitoring Gap Analysis

Given your stack, and given that you've already been burned by silent Stripe failures, bad deploys, and Supabase outages, here's what to add:

**Uptime monitoring (you have none):** Set up Better Stack (free tier) or UptimeRobot to ping `https://yourdomain.com/api/health` every minute. Create the health endpoint to return 200 only if it can query Supabase (`SELECT 1`). Alert via email + SMS when 2 consecutive checks fail. This is your #1 gap.

**Stripe webhook delivery alert:** In Stripe Dashboard β†’ Developers β†’ Webhooks β†’ your endpoint β†’ "Notifications," enable email alerts for failed webhook deliveries. Also create a Sentry alert on any error in routes matching `/api/webhooks/stripe`.

**Supabase connection pool:** Supabase Dashboard β†’ Reports β†’ Database β†’ set up an alert (a Pro plan feature) when active connections exceed 80% of the max. If on the Free tier, poll the `pg_stat_activity` count hourly via a cron-triggered Vercel function and alert via Resend if it exceeds 80%.

**Supabase database CPU:** Alert when CPU > 80% for 5 minutes. Same Reports panel.

**Vercel deploy health:** In Vercel β†’ Settings β†’ Git, enable production-branch deploy notifications to email. Pair this with a Sentry "regression" alert that fires when error volume on the `production` environment increases >3x baseline in a 5-minute window post-deploy.

**Clerk auth failures:** Sentry alert on any error from `@clerk/nextjs` with volume > 10 in 5 min.

**Resend delivery failures:** Resend Dashboard β†’ Webhooks β†’ configure `email.bounced` and `email.delivery_delayed` webhooks to your own `/api/webhooks/resend` endpoint, which logs to Sentry. Alert if the bounce rate exceeds 5%.

**Public status page:** Set up a free status page at Better Stack or Instatus, connected to your uptime monitor so it updates automatically. Subdomain: `status.yourdomain.com`. This addresses the gap from your Supabase incident three weeks ago.

**Slow API route alert:** Vercel β†’ Analytics β†’ set a Web Vitals alert, or use Sentry Performance to alert when p95 duration on any API route exceeds 3 seconds for 5 min.
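The `/api/health` endpoint recommended above can be kept framework-free, so the same logic backs a Next.js route handler and is easy to test. A minimal sketch, assuming a hypothetical `pingDatabase` callback that wraps your real `SELECT 1` query:

```typescript
// Sketch of the /api/health logic: 200 only if the database answers.
// `pingDatabase` is a hypothetical stand-in for a Prisma `SELECT 1`;
// the real version would be async.

type Health = { status: number; body: { ok: boolean; checkedAt: string } };

function healthCheck(pingDatabase: () => boolean): Health {
  let ok = false;
  try {
    ok = pingDatabase(); // e.g. the result of `SELECT 1` against Supabase
  } catch {
    ok = false;          // a throwing ping counts as down
  }
  return {
    status: ok ? 200 : 503, // 503 so the uptime monitor registers a failure
    body: { ok, checkedAt: new Date().toISOString() },
  };
}
```

In an App Router route you would wrap this in `export async function GET()` and return `NextResponse.json(body, { status })`, with the ping implemented as an awaited Prisma raw query.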
---

### βœ… Pre-Incident Preparation Checklist

- [ ] Set up a Better Stack uptime monitor with SMS + email alerts on the `/api/health` endpoint
- [ ] Publish a public status page at `status.yourdomain.com` and link it from your app footer
- [ ] Add a `/api/health` route that verifies Supabase connectivity with `SELECT 1`
- [ ] Save a one-page "break glass" doc with your Vercel rollback steps, Supabase connection string, Stripe dashboard link, and Clerk dashboard link β€” pin it in your phone notes
- [ ] Verify Supabase Point-in-Time Recovery is enabled (Pro plan) and test a restore to a throwaway project once
- [ ] Enable Stripe webhook failure email notifications in the Stripe Dashboard
- [ ] Confirm `DATABASE_URL` uses the Supabase transaction pooler (port 6543) with `connection_limit=1`
- [ ] Export a list of all paying customer emails to a CSV saved locally, for mass email during an outage when your dashboard may be down
- [ ] Add a pre-deploy GitHub Actions check (`tsc --noEmit` + `prisma validate`) to prevent the ignored-TypeScript-error scenario from recurring
- [ ] Draft all 5 communication templates above into Notion or a text file, one-click ready

---

### πŸ’‘ Founder's Summary

This playbook covers the 5 incident types most likely to hit a solo SaaS founder on Next.js + Vercel + Supabase + Stripe + Clerk: silent Stripe webhook failures, bad deploys, Supabase outages, connection pool exhaustion, and Clerk auth incidents. **Runbook 2 (bad Vercel deploy) is the one most likely to save you first** β€” it happens often, and a calm 5-minute rollback is infinitely better than a panicked 25-minute debug session. **The single most important preparation step this week is setting up an uptime monitor + public status page** (the first two checklist items); without uptime monitoring you are blind to outages happening right now, and without a status page your customers have nowhere to look during the next Supabase incident.
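For reference, Runbook 4's fix for connection churn β€” a single `PrismaClient` held on `globalThis` β€” looks roughly like this generic sketch. The client class is stubbed here (the real `PrismaClient` needs a live database); with Prisma you would write `globalThis.prisma ?? new PrismaClient()`:

```typescript
// Generic sketch of the `globalThis` singleton pattern Runbook 4 relies on.
// `FakeClient` is a hypothetical stand-in for PrismaClient.

class FakeClient {
  // lets us observe whether a second instance was ever created
  readonly id = Math.random();
}

// Dev-mode hot reload re-evaluates modules; stashing the client on
// globalThis keeps one instance (and one connection pool) per process.
const globalStore = globalThis as unknown as { __client?: FakeClient };

function getClient(): FakeClient {
  if (!globalStore.__client) {
    globalStore.__client = new FakeClient();
  }
  return globalStore.__client;
}
```

Every import site calls `getClient()` instead of constructing its own client, which is what keeps serverless invocations from exhausting the pool.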
πŸŒ€ Claude

Incident Response Playbook Builder

Instant access
Usage rights: Commercial use
Money-back guarantee
By purchasing this prompt, you agree to our terms of service
Tested Β· Guide Β· 4 examples Β· Free credits
A senior-SRE style incident response playbook builder for solo SaaS founders without an on-call team. 🎯 Perfect for: indie hackers, Series A founders, bootstrapped SaaS, mobile app operators. βš™οΈ 5 variables: stack, critical flows, incident history, monitoring, business profile πŸ”₯ Stack-specific runbooks with actual commands πŸ’Ž Works with Vercel, AWS, Fly, Railway, Supabase πŸ’Ό Commercial use allowed β€” no attribution needed. One prompt. A complete disaster-recovery system for your SaaS.
Added 2 weeks ago