PromptBase

Prompt Details

Model: claude-4-6-sonnet
Token size: 2,140
Example input
[APPLICATION]: CoreBank Loans — retail loan servicing [MIGRATION_CONTEXT]: Oracle 11g on-prem → Snowflake AWS, 2.4TB, customer/accounts/transactions [CRITICALITY]: regulated [CONSTRAINTS]: 6-hour cutover Sat night, freeze 72h prior, team of 8, Tricentis Tosca + dbt tests, **reporting audiences: steering committee + auditors + data stewards + engineering leads**
Example output
Here's your complete regulated test plan for CoreBank Loans. A few things worth flagging about how the plan was shaped by your specific inputs:

# Data Migration Test Plan — CoreBank Loans

**Version:** 1.0 (Draft) | **Migration:** Oracle 11g on-prem → Snowflake AWS | **Criticality:** Regulated

---

## 1. INTRODUCTION

### 1.1 Purpose

This plan governs the end-to-end quality assurance program for the migration of CoreBank Loans — the retail loan servicing platform — from Oracle 11g on-premises to Snowflake on AWS. It defines testing scope, approach, entry/exit criteria, and stakeholder reporting obligations required to satisfy regulatory evidence standards throughout the migration lifecycle. All test activities must produce auditable, immutable artifacts sufficient for sign-off by compliance and external audit bodies.

### 1.2 Objectives

1. Validate 100% row-count reconciliation across all three data domains (customer, accounts, transactions) between Oracle source and Snowflake target before cutover sign-off.
2. Achieve ≤0.01% variance on financial transaction checksums (sum of principal, interest, fee columns) across all migrated records.
3. Execute 100% of P1 and P2 test cases in SIT and UAT cycles, with zero P1 defects open at the UAT exit gate.
4. Complete the Dress Rehearsal cutover within the 6-hour Saturday window with full rollback capability demonstrated and documented.
5. Achieve ≥95% automated reconciliation coverage via dbt tests across all domain-level validation rules.
6. Retain full traceability — requirement → test case → execution result → defect → resolution → sign-off — for all 2.4TB of migrated data across all audit evidence packages.
7. Obtain written UAT sign-off from Business SME leads and data stewards for each domain (customer, accounts, transactions) ≥5 business days before production cutover.

---

## 2. TESTING SCOPE

### 2.1 In Scope

**Data Domains**
- Customer master data (PII, KYC attributes, relationship hierarchies)
- Loan accounts (origination details, balances, product codes, statuses)
- Transaction history (payments, disbursements, adjustments, fee events)

**Reconciliation Types**
- Row count: source Oracle vs target Snowflake per table, per domain
- Checksum / aggregate: SUM of financial columns (principal_balance, interest_accrued, fee_amount) per account and in aggregate
- Business-key sampling: {TO CONFIRM: sampling % — recommended 10% stratified by loan type and vintage}
- Semantic equivalence: data type mapping fidelity (Oracle DATE → Snowflake TIMESTAMP_NTZ, NUMBER precision preservation)
- Referential integrity: all FK relationships preserved (customer → account → transaction)

**Application Modules**
- Loan servicing read queries (account lookup, statement generation, payment history)
- Batch processing pipelines post-cutover (end-of-day interest accrual, payment allocation)
- Reporting layer queries (regulatory reports, arrears reporting)

**Interfaces & Integrations**
- {TO CONFIRM: upstream origination system feed — name and protocol}
- {TO CONFIRM: downstream GL/payment rail integrations — names and protocols}
- Snowflake external stage / S3 connectivity from the Oracle export pipeline

**Reports**
- Regulatory report parity: output from Snowflake vs Oracle baseline for an identical as-of date
- {TO CONFIRM: specific report names and schedules required for regulatory submission}

### 2.2 Out of Scope

| Item | Rationale |
|------|-----------|
| Oracle 11g application code changes | Source system is being decommissioned; no functional changes in scope |
| Snowflake infrastructure penetration testing | Governed by separate cloud security assurance program |
| New downstream features dependent on Snowflake capabilities | Post-migration enhancement work; separate release |
| Data from archived loan cohorts closed before {TO CONFIRM: archive cutoff date} | {TO CONFIRM: confirm whether closed/archived loans are in migration scope} |

---

## 3. TESTING APPROACH

### 3.1 System Testing

**Connectivity & Configuration**
- Validate Snowflake VPC endpoint connectivity from all consuming application servers; latency baseline vs Oracle on-prem target: {TO CONFIRM: acceptable p95 latency — recommended ≤200ms for read queries}.
- Validate AWS IAM roles, Snowflake RBAC, and network policy rules against a least-privilege model.
- Confirm S3 staging bucket encryption (SSE-KMS), access logging enabled, and retention policy applied.

**Performance Baselines**
- Execute the top-20 highest-frequency queries (extracted from Oracle AWR) against Snowflake and assert response time ≤ Oracle baseline × 1.2 (20% tolerance).
- Validate the end-of-day batch window completes within {TO CONFIRM: batch SLA — e.g., ≤4 hours} on Snowflake.
- Warehouse sizing validation: confirm {TO CONFIRM: Snowflake warehouse size} is right-sized under peak load simulation of {TO CONFIRM: concurrent user/query count}.

**Reconciliation Strategy (per cycle)**

| Check | Scope | Tool | Pass Threshold |
|-------|-------|------|----------------|
| Row count | All tables | dbt test + SQL | 100% match |
| Financial checksum | All monetary columns | dbt test | ≤0.01% variance |
| Business-key spot check | {TO CONFIRM: % — recommended 10%} stratified sample | Tosca + SQL | 100% match on sampled set |
| Null / type deviation | All NOT NULL constraints | dbt schema test | 0 violations |
| Referential integrity | Customer→Account→Transaction FK chain | dbt relationship test | 0 orphaned records |

**Security & Failover**
- Validate column-level masking on PII fields (name, DOB, NI/SSN equivalent) for non-production Snowflake environments.
- Test Snowflake failover to the Business Continuity region: {TO CONFIRM: RPO/RTO requirements}.

### 3.2 Application Functionality Testing

**Critical User Journeys**
1. Loan account lookup by customer ID — assert data completeness and field-level parity vs Oracle.
2. Payment history retrieval for a loan account — full transaction list reconciled to the Oracle baseline.
3. Statement generation for a given period — output identical to the Oracle-generated statement (amount, date, reference).
4. Arrears/delinquency status query — assert status codes and aging buckets are correctly preserved.
5. Regulatory report generation — output validated against the Oracle baseline report for the same as-of date.

**Role-Based Access**
- Validate Snowflake RBAC roles mapped from Oracle schema privileges: read-only analyst, loan operations, DBA, audit read-only.
- Negative tests: assert unauthorized roles cannot access PII columns or execute DML.

**Integration Points**
- {TO CONFIRM: origination system} → Snowflake feed: assert the first inbound record post-cutover lands correctly within {TO CONFIRM: latency SLA}.
- Report parity: column-by-column diff of regulatory report output (Snowflake vs Oracle) for {TO CONFIRM: report names}. Zero tolerance for financial value discrepancy.

**Regression**
- All unchanged loan servicing features executed via the Tosca regression pack: {TO CONFIRM: existing Tosca test count — if none, estimate build effort}.

---

## 4. TEST DESIGN & EXECUTION

### 4.1 Test Case / Scenario Development

| Layer | Design Technique | Coverage Requirement |
|-------|-----------------|----------------------|
| Data validation (reconciliation) | Equivalence partitioning by domain + boundary analysis on financial values | 100% tables, 100% monetary columns |
| Business rules (interest calc, fee logic) | Decision tables mapping product type × event type → expected outcome | 100% rule variants documented in mapping spec |
| End-to-end journeys | Scenario-based (user role × critical journey) | All 5 critical journeys × all roles |
| Regression | Risk-based selection from existing Tosca suite | 100% P1 functional areas, ≥80% P2 |
| Performance | Load profile from Oracle AWR top-20 + batch window | Defined thresholds in §3.1 |

**Traceability:** Every migration requirement and data mapping rule traces to ≥1 test case in the Traceability Requirements Matrix (TRM). The TRM is reviewed and signed by the Test Manager before SIT execution begins.

### 4.2 Test Management
- **Tools:** Tricentis Tosca (functional + regression automation); dbt tests (reconciliation); {TO CONFIRM: defect management tool — e.g., Jira, Azure DevOps}.
- **Naming convention:** `[DOMAIN]-[TYPE]-[SEQ]`, e.g. `ACCT-REC-001`, `TXN-FUNC-012`.
- **Version control:** All Tosca workspaces and dbt models version-controlled in Git; branch-per-cycle strategy.
- **Peer review gate:** Every test case peer-reviewed by a second team member and signed off by the Test Lead before moving to the execution queue.
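The reconciliation pass thresholds above (100% row-count match, ≤0.01% checksum variance) can be encoded as executable checks. A minimal Python sketch, assuming per-table counts and financial sums have already been fetched from Oracle and Snowflake into plain Python values — the table names and figures are illustrative, not CoreBank data:

```python
from decimal import Decimal

def reconcile_row_counts(source_counts, target_counts):
    """Return tables whose source/target row counts differ (threshold: 100% match)."""
    return [(t, s, target_counts.get(t))
            for t, s in source_counts.items()
            if target_counts.get(t) != s]

def checksum_variance_pct(source_sum, target_sum):
    """Relative variance between financial column sums, as a percentage."""
    if source_sum == 0:
        return Decimal("0") if target_sum == 0 else Decimal("100")
    return abs(target_sum - source_sum) / abs(source_sum) * 100

# Illustrative run: one table drifts by 10 rows, one checksum drifts by 5 cents.
src = {"customer": 1_200_000, "accounts": 3_400_000, "transactions": 98_000_000}
tgt = {"customer": 1_200_000, "accounts": 3_400_000, "transactions": 97_999_990}
print(reconcile_row_counts(src, tgt))  # flags only the transactions table
print(checksum_variance_pct(Decimal("1000000.00"),
                            Decimal("1000000.05")) <= Decimal("0.01"))  # True
```

In the plan itself these checks live in dbt tests; the sketch only shows the threshold logic that those tests would assert.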
### 4.3 Test Execution

**Cycles:**

| Cycle | Purpose | Entry Gate | Pass Criterion |
|-------|---------|-----------|----------------|
| SIT | System integration & reconciliation | Data load complete, environment verified | 100% P1 executed, ≤5 P2 open |
| UAT | Business sign-off on critical journeys | SIT exit criteria met | 100% P1 + P2 passed, SME sign-off obtained |
| Performance | Throughput & latency validation | UAT complete, prod-sized dataset loaded | All thresholds in §3.1 met |
| Dress Rehearsal | Full cutover simulation within 6h window | All prior cycles exited | Cutover completes in ≤6h, rollback tested |
| Cutover Validation | Production go/no-go checks | Cutover complete | 100% reconciliation checks passed in prod |

- **Daily standup:** 09:00 — execution progress, blocker review, environment status.
- **Blocker escalation SLA:** P1 defects escalated to the Test Manager within 1 hour of logging; engineering lead notified within 2 hours.
- **Pass/fail logging:** All results logged in {TO CONFIRM: defect tool} within 2 hours of execution.

### 4.4 Test Data
- **Source extraction:** Full masked export from Oracle 11g via {TO CONFIRM: extraction tool — e.g., Oracle Data Pump + custom anonymization scripts, or Informatica}.
- **Masking / anonymization:** PII fields (name, DOB, contact details, account identifiers) pseudonymized for DEV and SIT environments using {TO CONFIRM: masking tool}. Production-equivalent data used only in Pre-Prod and Cutover environments under controlled access.
- **Golden dataset:** A curated set of {TO CONFIRM: recommended ≥500} loan accounts spanning all product types, statuses (active, closed, arrears, written-off), and vintages — used as the fixed regression baseline across all cycles.
- **Refresh frequency:** SIT data refreshed at the start of each SIT sub-cycle; UAT data refreshed once from the latest Oracle snapshot at UAT kickoff.
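The pseudonymization requirement for DEV/SIT data can be met with deterministic keyed hashing — an assumption here, since the actual masking tool is a {TO CONFIRM} item. Determinism matters because the same input must always yield the same token, which keeps joins between masked tables intact:

```python
import hmac
import hashlib

# Placeholder key — in practice this would come from a secrets store and
# differ per environment. The key and the approach are illustrative only.
MASKING_KEY = b"non-prod-masking-key"

def pseudonymize(value: str) -> str:
    """Deterministic, irreversible token for a PII value (HMAC-SHA256, truncated)."""
    return hmac.new(MASKING_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# Same input -> same token (joins preserved); different input -> different token.
assert pseudonymize("Jane Doe") == pseudonymize("Jane Doe")
assert pseudonymize("Jane Doe") != pseudonymize("John Doe")
```

Truncating the digest keeps tokens column-friendly; whether 16 hex characters is enough to avoid collisions at CoreBank's volumes would need to be confirmed against the domain cardinalities.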
- **Volume per cycle:** SIT — {TO CONFIRM: recommended 10–15% of 2.4TB (~240–360GB) representative sample}; UAT — same or full volume; Dress Rehearsal — full 2.4TB.

---

## 5. TEST ENVIRONMENT

| Environment | Purpose | Data Refresh | Owner | Availability Window |
|-------------|---------|-------------|-------|---------------------|
| DEV | Unit testing, dbt model dev | On-demand, masked | Engineering Lead | Business hours |
| SIT | System integration + reconciliation testing | Per sub-cycle (masked) | Test Lead | 07:00–22:00 Mon–Fri |
| UAT | Business UAT, report parity | Once at UAT kickoff (masked) | Test Manager | 07:00–20:00 Mon–Fri |
| Pre-Prod (Mirror) | Performance testing, Dress Rehearsal | Full 2.4TB production clone | DBA / Test Manager | By schedule (24h window for DR) |
| Prod-Cutover | Saturday cutover validation | Live production data | Test Manager + DBA | Saturday cutover window only |

- **Environment parity:** Pre-Prod Snowflake warehouse configuration must match Production exactly (size, clustering keys, network policies).
- **Freeze:** No environment changes to SIT or above during the 72-hour pre-cutover freeze.

---

## 6. TEST RESOURCES

### 6.1 Project Testing Team Resources

| Role | Count | Allocation | Responsibilities |
|------|-------|-----------|-----------------|
| Test Manager | 1 | 100% | Plan ownership, stakeholder reporting, exit gate sign-off, regulatory evidence packaging |
| Test Lead | 1 | 100% | Day-to-day execution coordination, defect triage, environment management |
| Data Validation Engineer | 2 | 100% | dbt test authoring, reconciliation query execution, variance root-cause analysis |
| Functional Tester (Tosca) | 2 | 100% | Tosca script execution, UAT scenario execution, defect logging |
| Business SME | {TO CONFIRM: count} | 50% | UAT sign-off, domain acceptance, business rule validation |
| DBA / Migration Engineer | 1 | 75% | Data load execution, Oracle extraction, Snowflake environment support, rollback execution |
| **Total** | **8 core** | | |

*Note: SME count is in addition to the 8 core team members; confirm allocation with the business.*

### 6.2 Automation Functional Testing

| Automation Scope | Tool | Target Coverage | Justification |
|-----------------|------|----------------|---------------|
| Reconciliation (row count, checksum, RI) | dbt tests | 100% of defined rules | Zero-touch, repeatable across all cycles; audit-ready output |
| Regression (loan servicing journeys) | Tricentis Tosca | ≥80% of P1+P2 scenarios | Reduces manual effort in repeated SIT/DR cycles |
| Smoke post-cutover | Tosca (subset) | 100% of 10 critical smoke checks | Executed within the first 30 minutes of the cutover window |
| Performance query replay | SQL scripts against Snowflake | Top-20 AWR queries | Baseline comparison vs Oracle |

- **ROI:** Automating reconciliation via dbt eliminates ~{TO CONFIRM: estimated manual hours} of analyst time per cycle; with {TO CONFIRM: number of cycles}, full payback within the SIT phase.

---

## 7. TEST SCHEDULE

| Phase | Start | End | Duration | Gate / Dependency |
|-------|-------|-----|----------|------------------|
| Test Preparation (cases, data, env) | {TO CONFIRM} | {TO CONFIRM} | {TO CONFIRM: recommended 3 weeks} | Mapping spec baselined; environments provisioned |
| SIT Cycle 1 | {TO CONFIRM} | {TO CONFIRM} | {TO CONFIRM: recommended 2 weeks} | DEV sign-off; masked data loaded |
| SIT Cycle 2 (defect retest) | {TO CONFIRM} | {TO CONFIRM} | {TO CONFIRM: recommended 1 week} | SIT C1 exit criteria met |
| UAT | {TO CONFIRM} | {TO CONFIRM} | {TO CONFIRM: recommended 2 weeks} | SIT exit gate signed; SMEs available |
| Performance Testing | {TO CONFIRM} | {TO CONFIRM} | {TO CONFIRM: recommended 1 week} | UAT complete; Pre-Prod loaded at full 2.4TB |
| **72h Data Freeze** | {TO CONFIRM: Thurs prior to cutover} | Sat cutover | 72 hours | No source changes after freeze |
| Dress Rehearsal | {TO CONFIRM: Sat prior to go-live} | Same day | ≤6 hours | All prior phases exited; rollback procedure signed off |
| Production Cutover | Go-live Saturday | +6 hours | ≤6 hours | DR passed; steering go/no-go obtained |
| Post-Go-Live Hypercare | Go-live +1 day | Go-live +{TO CONFIRM: recommended 10 business days} | {TO CONFIRM} | Monitoring thresholds defined; on-call rota confirmed |

---

## 8. TEST ENTRY & EXIT CRITERIA

| Phase | Entry Criteria | Exit Criteria |
|-------|---------------|---------------|
| SIT | Environments provisioned and verified; masked data loaded; all test cases peer-reviewed; dbt models deployed | 100% of P1 test cases executed and passed; ≤5 P2 defects open (each with workaround); row-count reconciliation 100%; financial checksum variance ≤0.01%; zero open P1 defects |
| UAT | SIT exit criteria met and signed off by Test Manager; SMEs confirmed available; UAT data refreshed | 100% of P1 + P2 UAT scenarios passed; all 5 critical user journeys signed off by SME; data steward domain acceptance obtained per domain; zero P1 defects open |
| Performance | UAT exited; Pre-Prod loaded with full 2.4TB; warehouse configuration matches production | All top-20 queries within threshold (≤ Oracle baseline × 1.2); batch window within SLA; no Snowflake warehouse credit anomalies |
| Dress Rehearsal | All above phases exited; rollback runbook approved; cutover team confirmed and briefed | Full migration and validation completed within the 6-hour window; rollback successfully demonstrated; cutover go/no-go checklist signed by Test Manager + DBA |
| Cutover Validation | Prod cutover complete | 100% reconciliation checks passed in production; smoke Tosca suite 100% pass; zero P1 defects; sign-off obtained from Test Manager within the cutover window |
| Hypercare Exit | Cutover validated | Zero P1 production incidents attributable to migration for {TO CONFIRM: recommended 10 business days}; all monitoring thresholds within normal range |

---

## 9. ASSUMPTIONS, DEPENDENCIES & CONSTRAINTS

**Assumptions**
- Oracle 11g source data quality is at an acceptable baseline; any pre-existing data quality issues are documented and accepted before the migration baseline is cut.
  *(Risk: unquantified — schedule a source data profiling sprint before Test Prep.)*
- The data mapping specification will be baselined and signed off before SIT Cycle 1 begins; any post-baseline changes trigger formal change control and a regression impact assessment.
- The Snowflake environment will be provisioned and network-accessible by {TO CONFIRM: date} to allow a minimum of {TO CONFIRM: recommended 5 days} environment verification before SIT.

**Dependencies**
- The Oracle AWR report (top-20 queries) must be provided by the DBA before Performance test design is complete.
- Business SMEs must be available for ≥50% of the UAT duration; any SME unavailability of >3 consecutive business days triggers a schedule review.
- Tosca workspace connectivity to Snowflake must be confirmed and licensed before SIT execution begins.
- {TO CONFIRM: upstream origination system team} must provide an integration test instance for interface validation in SIT.

**Constraints**
- **Cutover window:** Hard 6-hour limit on Saturday night. If validation checks are not passed within 5.5 hours, rollback is initiated automatically per the rollback runbook. No exceptions without steering committee approval.
- **72-hour freeze:** No changes to Oracle source data or Snowflake target configuration from the freeze point through cutover completion.
- **Team size:** 8 core members. Any resource loss of >1 person during SIT or UAT triggers an immediate schedule impact assessment and escalation to the Test Manager.
- **Regulatory artifact retention:** All test artifacts must be retained for {TO CONFIRM: regulatory retention period — e.g., 7 years for SOX/FCA-regulated environments}.

---

## 10. REPORTING METRICS

### 10.1 Operational Reporting (All Cycles)

**Daily**
- Test execution progress: cases executed / total planned (%)
- Defect arrival rate and closure rate (count per day)
- Open P1 blocker count
- Environment availability (uptime %)
- Reconciliation run status (pass / fail / pending per domain)

**Weekly**
- Requirements coverage %
- Automation execution coverage % (dbt + Tosca)
- Reconciliation variance trend per domain (chart: variance % by cycle)
- Schedule variance (planned vs actual test cases completed)
- Defect aging (count by severity × age band)

**Phase-End**
- Defect density per module (defects / test cases executed)
- Test leakage to next phase (defects found in UAT that should have been caught in SIT)
- Exit criteria scorecard (pass/fail per criterion with evidence reference)

---

### 10.2 Stakeholder Reporting

| Audience | Cadence | Format | Key Metrics | Decision Rights |
|----------|---------|--------|-------------|-----------------|
| Steering Committee | Weekly + pre-cutover go/no-go | 1-page RAG dashboard (PDF) | Schedule RAG, Risk RAG, Scope RAG, reconciliation status, P1 defect count, cutover readiness indicator | Phase gate approval, cutover go/no-go, scope change authority |
| Auditors | Per phase exit + cutover | Immutable evidence package (PDF + signed artifacts) | TRM completeness %, execution logs with timestamps, defect resolution chain, sign-off register, reconciliation variance reports | Audit acceptance; regulatory filing trigger |
| Data Stewards | Per cycle (SIT + UAT + Cutover) | Reconciliation report per domain (Excel/PDF) | Row-count variance, checksum variance, RI failures, lineage gaps, data quality score per domain | Domain-level data acceptance sign-off |
| Engineering Leads | Daily | Defect heatmap + dbt test results (dashboard in {TO CONFIRM: tool — e.g., Grafana, Jira board}) | Defect density per module, root cause distribution (mapping / transform / config / source data), dbt test pass/fail count, fix turnaround SLA adherence | Code freeze decisions, rework prioritization, dbt model change approvals |

---

### 10.3 Regulatory Reporting

*(Criticality = Regulated — section applies in full)*

- **Evidence retention:** All test artifacts (cases, execution results, defect records, sign-offs, reconciliation reports) retained per the {TO CONFIRM: regulatory framework — confirm applicable regulations, e.g., FCA SYSC, SOX, PCI-DSS, GDPR} retention schedule. Minimum recommended: 7 years.
- **Traceability matrix (TRM):** Every data mapping requirement → test case → execution result → defect (if raised) → resolution → sign-off. The TRM must be complete and auditor-accessible at each phase exit.
- **Audit trail format:** Immutable, timestamped execution log for every test run — capturing executor name, environment, execution timestamp, inputs, and outcome. Stored in {TO CONFIRM: immutable log store — e.g., S3 with object lock, or test management tool audit log}.
- **Sign-off chain:** Test Lead → Test Manager → QA Director → Business Owner → Compliance Officer → {TO CONFIRM: regulator notification trigger and deadline, if applicable}.
- **Reconciliation evidence:** Signed variance reports per domain (customer, accounts, transactions) at the exit of each phase — SIT, UAT, Dress Rehearsal, and Cutover. Reports must include row counts, checksum values, variance %, and sign-off signature.
- **PII handling evidence:** Documented proof of data masking applied in all non-production environments, with the masking tool and configuration version recorded.
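The "immutable, timestamped execution log" requirement is normally met with storage-level controls (e.g. S3 object lock, itself a {TO CONFIRM} item). A hash chain is one complementary, tamper-evident structure: each record embeds the hash of the previous one, so any retroactive edit breaks the chain. A minimal sketch, not a production design:

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(chain, executor, test_id, outcome):
    """Append an execution record that embeds the hash of the previous record."""
    entry = {
        "executor": executor,
        "test_id": test_id,
        "outcome": outcome,
        "ts": datetime.now(timezone.utc).isoformat(),
        "prev_hash": chain[-1]["hash"] if chain else GENESIS,
    }
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)

def verify_chain(chain):
    """Check every link; a real verifier would also recompute each entry's hash."""
    prev = GENESIS
    for entry in chain:
        if entry["prev_hash"] != prev:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "j.tester", "ACCT-REC-001", "PASS")
append_entry(log, "j.tester", "TXN-REC-004", "FAIL")
print(verify_chain(log))  # True
```

The executor name, test IDs, and field set mirror the audit-trail fields listed above; anything beyond those is illustrative.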
---

## APPENDIX A — DEFECT MANAGEMENT

### Severity Definitions

| Severity | Definition | Migration-Specific Examples | Resolution SLA |
|----------|-----------|----------------------------|----------------|
| P1 — Critical | Data loss, corruption, or regulatory breach; blocks sign-off | Financial checksum variance >0.01%; missing transaction records; PII exposed in non-prod; row count mismatch >0%; rollback trigger condition | Triage within 1h; fix or workaround within 4h; same-day Test Manager escalation |
| P2 — High | Significant functional failure with no workaround; blocks phase exit | Incorrect interest calculation on a product type; referential integrity failure on a loan cohort; report output mismatch on a regulatory report | Triage within 2h; fix within 24h |
| P3 — Medium | Functional failure with acceptable workaround; does not block phase | UI display issue on migrated data view; non-critical report column format difference; minor performance degradation on non-critical query | Fix within 3 business days; reviewed at phase exit |
| P4 — Low | Cosmetic or minor inconvenience; no business impact | Label/field name casing difference; non-functional log message; minor UI alignment | Scheduled for post-go-live; does not gate release |

### Defect Workflow States

`New → Triaged → In Progress → Fixed → Retest → Closed / Rejected / Deferred`

### Root Cause Categories
- Source data issue (pre-existing quality defect in Oracle)
- Data mapping defect (incorrect field mapping specification)
- Transformation logic error (dbt model or ETL logic fault)
- Target configuration issue (Snowflake schema, type, constraint)
- Test data issue (masking or golden dataset error)
- Environment issue (connectivity, permissions, refresh failure)

### Triage Cadence
- P1: Immediate — emergency Slack/Teams bridge opened within 30 minutes of logging.
- P2: Daily standup (09:00) + ad hoc within 2 hours of logging.
- P3/P4: Daily standup review.
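The defect workflow states imply a transition table that a defect tool can enforce. A small sketch; where the Rejected/Deferred branch points sit is an assumption, since the workflow's arrow notation lists them only as terminal states:

```python
# Allowed transitions derived from the defect workflow states above.
# Rejected/Deferred are modelled as exits from Triaged and Retest — an
# assumption, not stated in the workflow string itself.
TRANSITIONS = {
    "New": {"Triaged"},
    "Triaged": {"In Progress", "Rejected", "Deferred"},
    "In Progress": {"Fixed"},
    "Fixed": {"Retest"},
    "Retest": {"Closed", "Rejected", "Deferred", "In Progress"},  # reopen on failed retest
}

def can_transition(current: str, target: str) -> bool:
    """True if the workflow permits moving a defect from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())

print(can_transition("New", "Triaged"))  # True
print(can_transition("New", "Closed"))   # False
```

Encoding the workflow this way lets the defect tool reject out-of-order moves (e.g. closing a defect that was never retested) instead of relying on triage discipline alone.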
---

## Clarifications Needed

**Section 2.1**
- Business-key sampling %: recommended 10% stratified — confirm with Test Manager and Data Stewards.
- Upstream origination system name and integration protocol.
- Downstream GL/payment rail integration names and protocols.
- Archive cutoff date for closed loan cohorts — confirm whether in migration scope.
- Specific regulatory report names and submission schedule.

**Section 3.1**
- Acceptable p95 query latency target (recommended ≤200ms).
- Batch SLA for end-of-day processing on Snowflake.
- Snowflake warehouse size for production.
- Peak concurrent user/query count for load simulation.
- RPO/RTO requirements for Snowflake failover.

**Section 4.3**
- Defect management tool (Jira, Azure DevOps, or other).
- Existing Tosca test case count, or confirmation that the Tosca suite must be built from scratch.

**Section 4.4**
- Oracle extraction tool (Data Pump, Informatica, other).
- Masking/anonymization tool for non-production environments.
- Recommended golden dataset size: ≥500 loan accounts — confirm with SMEs.
- SIT data volume (recommended 10–15% of 2.4TB).

**Section 6.2**
- Estimated manual reconciliation hours per cycle to complete the ROI calculation.
- Total number of planned cycles for the ROI payback calculation.

**Section 7**
- All start/end dates for every phase (Test Prep through Hypercare).
- Hypercare duration (recommended 10 business days).

**Section 9**
- Snowflake environment provisioning date.
- Origination system integration test instance availability.
- Regulatory retention period (recommend confirming applicable frameworks: FCA SYSC, SOX, PCI-DSS, GDPR).

**Section 10.3**
- Applicable regulatory frameworks (FCA, SOX, PCI-DSS, GDPR — one or more).
- Immutable log store technology (S3 with object lock, test management tool audit log, other).
- Regulator notification trigger and deadline (if applicable).
- Sign-off chain: confirm QA Director and Compliance Officer names/roles.
**Cutover window drove hard constraints throughout.** The 6-hour Saturday limit is embedded in the Dress Rehearsal exit criteria (5.5h trigger for automatic rollback), the scope of the post-cutover smoke suite (Tosca, ≤30 minutes), and the Pre-Prod environment requirements (must be a full 2.4TB production mirror). The 72h freeze cascades into environment constraints in §5 and dependency statements in §9.

**Four reporting audiences produce four structurally different outputs in §10.2.** Auditors get immutable evidence packages at phase exits — not dashboards. Data stewards get domain-scoped reconciliation reports with sign-off rights per domain. Engineering leads get a daily defect heatmap tied to dbt test results. The steering committee gets the RAG dashboard with a hard go/no-go indicator added before cutover.

**Regulated criticality activates §10.3 in full** — including the sign-off chain up to Compliance Officer, the TRM requirement (requirement → test case → result → defect → resolution → sign-off), and the PII masking evidence requirement. The regulatory framework (FCA, SOX, etc.) is deliberately left as a `{TO CONFIRM}` — don't let the plan assume the wrong framework.

**dbt + Tosca are embedded by design, not bolted on.** dbt owns the reconciliation layer (row count, checksum, RI, null/type checks) at 100% rule coverage. Tosca owns functional regression, critical journeys, and the post-cutover smoke suite. Both produce audit-ready outputs.

**18 `{TO CONFIRM}` items** are consolidated in the Clarifications Needed appendix — the most critical ones to resolve before Test Prep begins are the regulatory framework, the defect tool, the extraction/masking toolchain, and all phase dates.
🌀 Claude

Data Migration Test Plan Generator

Instant access · Commercial use · Money-back guarantee
Generate a complete, audit-ready data migration test plan for any application — ✅ Full 10-section test plan in one run (Introduction → Defect Management Appendix) ✅ Covers system + functional + reconciliation testing for migrations ✅ Includes entry/exit criteria, RACI, and metrics — not just an outline ✅ Adapts to legacy→cloud, on-prem→SaaS, ETL re-platform, or app modernization
Added 2 days ago