PromptBase

Prompt Details

Model
(claude-4-7-opus)
Token size
1,562
Example input
[CURRENT_SCHEMA]:

```prisma
model User {
  id               String   @id @default(cuid())
  email            String   @unique
  plan             String   @default("free")
  stripeCustomerId String   @unique
  createdAt        DateTime @default(now())
  projects         Project[]
}

model Project {
  id        String   @id @default(cuid())
  name      String
  userId    String
  user      User     @relation(fields: [userId], references: [id])
  createdAt DateTime @default(now())
}
```

[DESIRED_CHANGE]: Introduce organizations. Every user should belong to an org, starting with a personal org per user, and every project should be owned by an org instead of a user. Launching team accounts next month and need multi-tenancy in place first.

[DATABASE_SETUP]: Postgres 15 on Supabase. Single primary, no read replicas. Prisma 5.x as ORM.

[PRODUCTION_CONSTRAINTS]: Around 180000 rows in the users table and 720000 rows in the projects table. Peak traffic 9am to 6pm EST weekdays. Can tolerate up to 30 seconds of degraded performance but no full downtime. Willing to migrate over 2 to 3 weeks.

[APPLICATION_CONTEXT]: Next.js 14 App Router with Prisma and TypeScript, deployed on Vercel. Around 60 API routes read or write to the projects table. No dedicated test coverage for migrations.
Example output
### 🚦 Risk Assessment: RED

Introducing a new mandatory tenancy boundary (`organizationId`) on a 720k-row `Project` table that is touched by ~60 API routes, with no read replica and no migration test coverage, requires a multi-phase rollout — a single-shot `ALTER TABLE` here will either lock writes during peak hours or leave the app in a half-migrated state where some routes still scope by `userId` and others by `orgId`.

### 📋 What This Migration Actually Does

Creates a new `Organization` table, a new `Membership` join table (User ↔ Organization with a role), and adds a nullable `organizationId String` column to `Project` (and optionally `User.defaultOrganizationId`). Backfills one personal `Organization` row per existing `User` (~180k rows), one `Membership` per user, and sets `Project.organizationId` on all ~720k projects to the owning user's personal org. Finally adds a `NOT NULL` constraint and foreign key `Project.organizationId → Organization.id`, creates a supporting index on `Project(organizationId)`, and (in the contract phase) drops `Project.userId` after all 60 API routes are migrated to scope by org.

### ⚠️ Key Risks

1. **A naive `ALTER TABLE projects ADD COLUMN organizationId ... NOT NULL` will rewrite the entire 720k-row table and hold an `ACCESS EXCLUSIVE` lock.** In Postgres 15, adding a column with a volatile default forces a full table rewrite, and a `NOT NULL` column with no default fails outright on a non-empty table; every one of your ~60 API routes that touches `projects` will block on the lock. Production consequence: writes and reads on `projects` stall for the duration of the rewrite (likely 30s–several minutes on Supabase's shared infra), which exceeds your 30-second degradation tolerance. Affected operation: `ALTER TABLE projects`.

2. **Foreign key addition with immediate validation scans the full `projects` table under a `SHARE ROW EXCLUSIVE` lock.** `ALTER TABLE projects ADD CONSTRAINT ... FOREIGN KEY (organizationId) REFERENCES organizations(id)` without `NOT VALID` will validate all 720k rows synchronously and block concurrent writes to `projects`. Production consequence: write latency spike during peak hours, possible Vercel function timeouts cascading into 500s on project-create and project-update endpoints.

3. **Application inconsistency across 60 routes during the switch.** With no migration test coverage and 60 routes reading/writing `projects` via `userId`, a partial code deploy (some routes scoped by org, some by user) will either leak projects across orgs (a tenancy breach — the worst possible bug for a multi-tenant SaaS) or return empty results. Production consequence: silent data exposure between accounts, which is effectively unrecoverable from a trust standpoint even if fixed quickly. Affected operation: every `prisma.project.findMany/findUnique/create/update` call.

### 🗺️ Migration Plan

Five phases over ~2.5 weeks. Each phase deploys independently and is safe to pause between.

---

#### **Phase A — Expand: add Organization, Membership, and nullable `organizationId` on Project**

*Goal: introduce new schema with zero impact on existing reads/writes.*
*Duration: Day 1, ~15 min of DB work + deploy. Hold 24–48h before Phase B.*

**Prisma schema change** (`schema.prisma`):

```prisma
model Organization {
  id         String   @id @default(cuid())
  name       String
  slug       String   @unique
  isPersonal Boolean  @default(true)
  createdAt  DateTime @default(now())
  members    Membership[]
  projects   Project[]
}

model Membership {
  id             String   @id @default(cuid())
  userId         String
  organizationId String
  role           String   @default("owner") // owner | admin | member
  createdAt      DateTime @default(now())
  user           User         @relation(fields: [userId], references: [id])
  organization   Organization @relation(fields: [organizationId], references: [id])

  @@unique([userId, organizationId])
  @@index([organizationId])
}

model User {
  // ...existing fields...
  memberships           Membership[]
  defaultOrganizationId String? // nullable for now
}

model Project {
  // ...existing fields...
  organizationId String?       // NULLABLE in this phase
  organization   Organization? @relation(fields: [organizationId], references: [id])

  @@index([organizationId])
}
```

**Forward SQL** (what `prisma migrate deploy` will emit — review before running):

```sql
-- Postgres 15
BEGIN;

CREATE TABLE "Organization" (
  "id"         TEXT PRIMARY KEY,
  "name"       TEXT NOT NULL,
  "slug"       TEXT NOT NULL UNIQUE,
  "isPersonal" BOOLEAN NOT NULL DEFAULT true,
  "createdAt"  TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE "Membership" (
  "id"             TEXT PRIMARY KEY,
  "userId"         TEXT NOT NULL REFERENCES "User"("id"),
  "organizationId" TEXT NOT NULL REFERENCES "Organization"("id"),
  "role"           TEXT NOT NULL DEFAULT 'owner',
  "createdAt"      TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE UNIQUE INDEX "Membership_userId_organizationId_key"
  ON "Membership"("userId", "organizationId");
CREATE INDEX "Membership_organizationId_idx" ON "Membership"("organizationId");

ALTER TABLE "User" ADD COLUMN "defaultOrganizationId" TEXT;

-- CRITICAL: nullable, no default — this is a metadata-only change in PG 11+,
-- no table rewrite, takes milliseconds.
ALTER TABLE "Project" ADD COLUMN "organizationId" TEXT;

COMMIT;

-- Index created OUTSIDE the transaction, CONCURRENTLY, to avoid locking Project writes.
CREATE INDEX CONCURRENTLY "Project_organizationId_idx" ON "Project"("organizationId");
```

**Rollback SQL:**

```sql
DROP INDEX CONCURRENTLY IF EXISTS "Project_organizationId_idx";

BEGIN;
ALTER TABLE "Project" DROP COLUMN IF EXISTS "organizationId";
ALTER TABLE "User" DROP COLUMN IF EXISTS "defaultOrganizationId";
DROP TABLE IF EXISTS "Membership";
DROP TABLE IF EXISTS "Organization";
COMMIT;
```

**Application changes in Phase A:** none. The new columns exist but no code path reads or writes them yet. Deploy the Prisma client regeneration so the types are available for Phase B.
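One hardening step not spelled out in the phases above, but standard practice for online DDL on Postgres: set a short `lock_timeout` before the Phase A `ALTER TABLE`, so that if the statement queues behind a long-running query it aborts quickly instead of making every other query on `Project` queue behind it. The values here are illustrative, not prescribed by the plan.

```sql
-- Suggested wrapper for the Phase A ALTER; timeout values are illustrative.
-- If this aborts with a lock_timeout error, nothing changed — retry in a quieter moment.
SET lock_timeout = '5s';
SET statement_timeout = '30s';

BEGIN;
ALTER TABLE "Project" ADD COLUMN "organizationId" TEXT;
COMMIT;
```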
---

#### **Phase B — Dual-write: create personal org on signup, dual-write new projects**

*Goal: all NEW data going forward has correct org linkage; old data still orphaned.*
*Duration: Day 2, ~30 min of code work + deploy. Hold 48h before Phase C.*

**No SQL in this phase.** All changes are application-level.

**Forward SQL:** none.
**Rollback SQL:** none (application-only rollback via Vercel deploy revert).

**Application changes** (TypeScript, Next.js App Router + Prisma):

```typescript
// lib/orgs.ts — new helper
import { prisma } from "@/lib/prisma";
import { createId } from "@paralleldrive/cuid2"; // or use cuid

export async function createPersonalOrg(userId: string, userEmail: string) {
  return prisma.$transaction(async (tx) => {
    const org = await tx.organization.create({
      data: {
        name: `${userEmail.split("@")[0]}'s workspace`,
        slug: `personal-${createId()}`,
        isPersonal: true,
      },
    });
    await tx.membership.create({
      data: { userId, organizationId: org.id, role: "owner" },
    });
    await tx.user.update({
      where: { id: userId },
      data: { defaultOrganizationId: org.id },
    });
    return org;
  });
}

// In your signup handler (e.g. app/api/auth/register/route.ts):
const user = await prisma.user.create({ data: { email, ... } });
await createPersonalOrg(user.id, user.email);

// In EVERY project-create route — dual-write:
const user = await prisma.user.findUnique({
  where: { id: session.userId },
  select: { defaultOrganizationId: true },
});
const project = await prisma.project.create({
  data: {
    name,
    userId: session.userId,                      // still required (NOT NULL)
    organizationId: user.defaultOrganizationId,  // NEW — dual-write
  },
});
```

**All reads still use `userId`.** Do not change any read paths yet.

---

#### **Phase C — Backfill existing users and projects**

*Goal: every existing user has a personal org and membership; every existing project has `organizationId` set.*
*Duration: ~2–4 hours of script runtime across evenings/weekend. Hold 24h after completion before Phase D.*

**Run from a Node script on your machine or a one-off Vercel function** — NOT an API route. Use batching to avoid exhausting Supabase's connection pool (Supabase's default pooler has tight limits).

**Forward SQL / script** (`scripts/backfill-orgs.ts`):

```typescript
// Run OUTSIDE business hours. Batches of 500, sleep 200ms between batches.
import { PrismaClient } from "@prisma/client";
import { createId } from "@paralleldrive/cuid2";

const prisma = new PrismaClient();
const BATCH = 500;
const SLEEP_MS = 200;

async function backfillUsers() {
  while (true) {
    const users = await prisma.user.findMany({
      where: { defaultOrganizationId: null },
      take: BATCH,
      select: { id: true, email: true },
    });
    if (users.length === 0) break;
    for (const u of users) {
      await prisma.$transaction(async (tx) => {
        const org = await tx.organization.create({
          data: {
            name: `${u.email.split("@")[0]}'s workspace`,
            slug: `personal-${createId()}`,
            isPersonal: true,
          },
        });
        await tx.membership.create({
          data: { userId: u.id, organizationId: org.id, role: "owner" },
        });
        await tx.user.update({
          where: { id: u.id },
          data: { defaultOrganizationId: org.id },
        });
      });
    }
    console.log(`Processed ${users.length} users`);
    await new Promise((r) => setTimeout(r, SLEEP_MS));
  }
}

async function backfillProjects() {
  while (true) {
    // Pull projects that still have null organizationId, join to user's default org
    const rows = await prisma.$queryRaw<{ id: string; orgId: string }[]>`
      SELECT p."id" AS id, u."defaultOrganizationId" AS "orgId"
      FROM "Project" p
      JOIN "User" u ON u."id" = p."userId"
      WHERE p."organizationId" IS NULL
        AND u."defaultOrganizationId" IS NOT NULL
      LIMIT ${BATCH}
    `;
    if (rows.length === 0) break;
    await prisma.$transaction(
      rows.map((r) =>
        prisma.project.update({
          where: { id: r.id },
          data: { organizationId: r.orgId },
        })
      )
    );
    console.log(`Processed ${rows.length} projects`);
    await new Promise((r) => setTimeout(r, SLEEP_MS));
  }
}

(async () => {
  await backfillUsers();
  await backfillProjects();
  await prisma.$disconnect();
})();
```

**Rollback SQL** (if the backfill produced bad data and you need to redo it):

```sql
-- Reverses ONLY the backfill, not the schema. Safe because no reads depend on these columns yet.
BEGIN;
UPDATE "Project" SET "organizationId" = NULL;
DELETE FROM "Membership"
  WHERE "organizationId" IN (SELECT "id" FROM "Organization" WHERE "isPersonal" = true);
UPDATE "User" SET "defaultOrganizationId" = NULL;
DELETE FROM "Organization" WHERE "isPersonal" = true;
COMMIT;
```

**Application changes:** none. Dual-writes from Phase B continue handling new rows in parallel with the backfill.

---

#### **Phase D — Switch reads to `organizationId`; add `NOT NULL` and FK**

*Goal: all 60 API routes scope by org; enforce DB-level integrity.*
*Duration: 3–5 days of code work, then ~10 min of SQL. This is the riskiest phase — do it on a weekend morning.*

**Forward SQL** (run AFTER all 60 routes are deployed and verified):

```sql
-- Step 1: Add FK as NOT VALID — instant, no table scan, takes only a brief metadata lock.
ALTER TABLE "Project"
  ADD CONSTRAINT "Project_organizationId_fkey"
  FOREIGN KEY ("organizationId") REFERENCES "Organization"("id") NOT VALID;

-- Step 2: Validate the FK — uses a SHARE UPDATE EXCLUSIVE lock (allows concurrent reads AND writes).
ALTER TABLE "Project" VALIDATE CONSTRAINT "Project_organizationId_fkey";

-- Step 3: Enforce NOT NULL. In PG 12+, add a CHECK constraint first (validated without a long lock),
-- then promote to NOT NULL — this avoids the full-table scan under ACCESS EXCLUSIVE.
ALTER TABLE "Project"
  ADD CONSTRAINT "Project_organizationId_not_null"
  CHECK ("organizationId" IS NOT NULL) NOT VALID;
ALTER TABLE "Project" VALIDATE CONSTRAINT "Project_organizationId_not_null";

-- Now the NOT NULL promotion is fast because PG uses the validated CHECK as proof.
ALTER TABLE "Project" ALTER COLUMN "organizationId" SET NOT NULL;

-- Optional: drop the redundant CHECK now that NOT NULL is in place.
ALTER TABLE "Project" DROP CONSTRAINT "Project_organizationId_not_null";
```

**Rollback SQL:**

```sql
ALTER TABLE "Project" ALTER COLUMN "organizationId" DROP NOT NULL;
ALTER TABLE "Project" DROP CONSTRAINT IF EXISTS "Project_organizationId_fkey";
-- Application rollback: revert the Vercel deploy to the previous version (still reads by userId).
```

**Application changes** — update all 60 routes. Example pattern:

```typescript
// BEFORE
const projects = await prisma.project.findMany({
  where: { userId: session.userId },
});

// AFTER — scope by org membership
const memberships = await prisma.membership.findMany({
  where: { userId: session.userId },
  select: { organizationId: true },
});
const orgIds = memberships.map((m) => m.organizationId);
const projects = await prisma.project.findMany({
  where: { organizationId: { in: orgIds } },
});

// For project creation, REQUIRE organizationId explicitly
// (use the active org from session/cookie, not defaultOrganizationId blindly)
const project = await prisma.project.create({
  data: {
    name,
    userId: session.userId,
    organizationId: session.activeOrgId,
  },
});
```

**Put this behind a feature flag (e.g. `READ_BY_ORG=true`) so you can flip per-route and revert instantly without a redeploy.** Add an `activeOrgId` to your session. Audit every single one of the 60 routes with grep: `grep -rn "userId:" app/api/ | grep -i project`.

---

#### **Phase E — Contract: drop `Project.userId`**

*Goal: remove the legacy column once no code reads it.*
*Duration: Day ~18–21. Only after 7+ days of stable operation on org-scoped reads.*

**Forward SQL:**

```sql
-- Verify zero references first (see Data Integrity Checks).
ALTER TABLE "Project" DROP COLUMN "userId";
```

**Rollback SQL:**

```sql
-- Rollback requires re-creating the column AND re-backfilling from Membership.
-- This is slow and painful — that's why you wait 7+ days before running Phase E.
ALTER TABLE "Project" ADD COLUMN "userId" TEXT;
UPDATE "Project" p
SET "userId" = (
  SELECT m."userId" FROM "Membership" m
  WHERE m."organizationId" = p."organizationId" AND m."role" = 'owner'
  LIMIT 1
);
ALTER TABLE "Project" ALTER COLUMN "userId" SET NOT NULL;
ALTER TABLE "Project"
  ADD CONSTRAINT "Project_userId_fkey"
  FOREIGN KEY ("userId") REFERENCES "User"("id") NOT VALID;
ALTER TABLE "Project" VALIDATE CONSTRAINT "Project_userId_fkey";
```

**Application changes:** remove `userId` from the `Project` model in `schema.prisma`, run `prisma generate`, deploy. Audit for any remaining `project.userId` references — this should return zero: `grep -rn "\.userId" app/ | grep -i project`.

### 🔍 Data Integrity Checks

**Before Phase C (backfill):**

```sql
SELECT COUNT(*) FROM "User" WHERE "defaultOrganizationId" IS NULL;
-- Expected: ~180000 (all users, pre-backfill)

SELECT COUNT(*) FROM "Project" WHERE "organizationId" IS NULL;
-- Expected: ~720000 minus however many were created during Phase B dual-write
```

**After Phase C (backfill):**

```sql
SELECT COUNT(*) FROM "User" WHERE "defaultOrganizationId" IS NULL;
-- Expected: 0

SELECT COUNT(*) FROM "Project" WHERE "organizationId" IS NULL;
-- Expected: 0

-- Every user has exactly one personal org
SELECT COUNT(*)
FROM "User" u
LEFT JOIN "Membership" m ON m."userId" = u."id"
LEFT JOIN "Organization" o ON o."id" = m."organizationId" AND o."isPersonal" = true
WHERE o."id" IS NULL;
-- Expected: 0

-- No orphaned projects (org matches owner's membership)
SELECT COUNT(*)
FROM "Project" p
WHERE NOT EXISTS (
  SELECT 1 FROM "Membership" m
  WHERE m."userId" = p."userId" AND m."organizationId" = p."organizationId"
);
-- Expected: 0
```

**Before Phase D SQL (FK + NOT NULL):**

```sql
SELECT COUNT(*) FROM "Project" WHERE "organizationId" IS NULL;
-- Expected: 0 — if not 0, STOP and rerun the backfill
```

**Before Phase E (drop userId):**

```sql
-- Confirm no code is still writing divergent values
SELECT COUNT(*)
FROM "Project" p
JOIN "Membership" m ON m."organizationId" = p."organizationId"
WHERE m."userId" != p."userId";
-- Expected: 0 (or explained by legitimate multi-member orgs once team accounts launch)
```

### 📊 Monitoring Plan

Watch these in the Supabase dashboard + Vercel analytics during each Phase A, D, and E SQL execution:

- **p95 API latency on `/api/projects/*` routes**: pause the rollout if it exceeds 2× baseline (your baseline is likely 100–300ms; pause at >600ms sustained for 60s).
- **Supabase active connections**: pause the backfill script if active connections exceed 60% of your plan's limit (e.g. pause at >36 on a 60-connection plan).
- **`pg_stat_activity` queries waiting on a lock**: run `SELECT COUNT(*) FROM pg_stat_activity WHERE wait_event_type = 'Lock';` every 15s during Phase D SQL. Pause if >5 for more than 30s.
- **5xx rate on Vercel**: pause the rollout if the 5xx rate exceeds 0.5% over any 1-minute window (normal is likely <0.1%).
- **Backfill script throughput**: expect ~2,500 projects/minute at a 500-row batch with a 200ms sleep. If throughput drops below 500/min, something is contended — stop and investigate.
- **Lock duration on the `projects` table** during Phase A index creation: `CREATE INDEX CONCURRENTLY` should not hold an exclusive lock, but if `pg_stat_activity` shows your `ALTER TABLE ADD COLUMN` running >2 seconds, kill it and investigate.
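The per-route flag referenced in Phase D can be as simple as an env-driven helper. A minimal sketch — the `readByOrg` name and the `READ_BY_ORG_<ROUTEKEY>` override convention are illustrative, not prescribed by the plan above:

```typescript
// Minimal env-driven flag: a global READ_BY_ORG switch plus optional
// per-route overrides (READ_BY_ORG_<ROUTEKEY>), so routes can be flipped
// individually and reverted via config rather than touching 60 call sites.
type Env = Record<string, string | undefined>;

function readByOrg(routeKey: string, env: Env): boolean {
  // Global kill switch wins: READ_BY_ORG=false reverts every route at once.
  if (env.READ_BY_ORG === "false") return false;
  // Per-route override, e.g. READ_BY_ORG_PROJECTS_LIST=true
  const perRoute = env[`READ_BY_ORG_${routeKey.toUpperCase()}`];
  if (perRoute !== undefined) return perRoute === "true";
  // Fall back to the global default.
  return env.READ_BY_ORG === "true";
}
```

In a route handler you would branch on `readByOrg("projects_list", process.env)` to choose between the `userId` and `organizationId` query paths.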
### ✅ Pre-Flight Checklist

- [ ] Supabase point-in-time recovery verified (test a restore to a branch DB)
- [ ] Staging database seeded with production-sized data (180k users, 720k projects) and a full migration dry-run completed end-to-end
- [ ] Feature flag `READ_BY_ORG` wired up per-route for the Phase D rollout
- [ ] All 60 API routes enumerated in a tracking doc with a checkbox per route
- [ ] Phase A and Phase D SQL windows scheduled for Saturday 6–8 AM EST (lowest traffic)
- [ ] Backfill script tested on staging against 720k projects, measured runtime recorded
- [ ] Status page / user communication drafted in case of degradation
- [ ] `pg_stat_activity` monitoring query saved and ready to paste
- [ ] Rollback SQL for each phase saved in a runbook, tested on staging
- [ ] Second pair of eyes available on Slack/phone during Phase A and Phase D SQL execution

### 🔄 Rollback Procedure

**During Phase A SQL (table lock exceeds 10s):** Cancel the query (`SELECT pg_cancel_backend(pid)` from `pg_stat_activity`). The `ADD COLUMN` is metadata-only in PG 15 for nullable columns, so this should not happen — but if it does, the transaction rolls back cleanly. No app impact.

**During Phase B/C (dual-write or backfill):** Revert the Vercel deploy to remove the dual-write code. Run the Phase C rollback SQL to clean up partial backfill data. The schema columns can stay (they're nullable and unused).

**During Phase D SQL:** If FK validation hangs or the NOT NULL promotion fails, cancel with `pg_cancel_backend`. The `NOT VALID` approach ensures no rows are locked during the ADD, so the table remains fully available. If validation fails because a NULL row is discovered, run `UPDATE "Project" SET "organizationId" = (SELECT "defaultOrganizationId" FROM "User" WHERE "id" = "Project"."userId") WHERE "organizationId" IS NULL;` and retry validation.

**During Phase D application rollout:** Flip the `READ_BY_ORG` feature flag to `false` globally. All routes revert to reading by `userId`. The dual-write continues, so no data is lost.

**Point of no return: Phase E (`DROP COLUMN userId`).** Once this ships, rollback requires the painful re-backfill SQL above, which loses any user→project mappings that existed outside the primary owner relationship. Do not run Phase E until you have 7+ consecutive days of zero errors on org-scoped reads and you are comfortable making this one-way-door decision.

### 💡 Founder's Summary

Over 2.5 weeks you'll add an `Organization` table and link every project to an org, running it as five separate deploys — never a single big-bang migration — so your 720k-row projects table is never locked for more than a few seconds. Run the Phase A and Phase D SQL on Saturday mornings at 6 AM EST with the `pg_stat_activity` query open in one tab and Vercel logs in another; the dangerous moments are Phase D (when 60 API routes switch from `userId` to `organizationId` scoping) and Phase E (when you drop `userId` for good, which is irreversible). Watch for any project showing up in the wrong account during the Phase D rollout — that's a tenancy leak, and you pull the `READ_BY_ORG` feature flag immediately, no debugging first.
🌀 Claude

Migration Safety Planner

Instant access
Usage rights: Commercial use
Money-back guarantee
A senior-database-engineer style migration planner for solo SaaS founders running Postgres, MySQL, or MongoDB. Designed to help you plan schema changes safely before touching production — surfacing risk factors around locks, backfills, foreign keys, and rollout sequencing. You get a risk assessment, phased migration plan, forward and rollback SQL, data integrity checks, monitoring thresholds, and a pre-flight checklist tailored to your stack. Part of the Ship-Safely System.
Added 2 weeks ago