PromptBase

Prompt Details

Model
(claude-4-6-sonnet)
Token size
1,893
Example input
[DESIGN_DESCRIPTION]: "Consultant proposes 3 Fabric workspaces (Dev/Test/Prod), one Lakehouse per environment, migrate all ADF pipelines to Fabric Data Pipelines, move Databricks notebooks to Fabric Spark notebooks. F64 capacity for prod, F8 for dev/test. No mention of Purview or governance." [WORKLOAD_CONTEXT]: "~150 ADF pipelines, 40 Databricks notebooks, 20 Power BI datasets refreshing daily, ~2TB data in ADLS Gen2" [COMPLIANCE_CONTEXT]: "GDPR applies, data residency must stay UK"
Example output
```json
{
  "review_summary": {
    "overall_design_risk": "Major gaps",
    "critical_blockers": 3,
    "gaps_requiring_clarification": 7,
    "design_strengths": {
      "1": "Environment separation across Dev/Test/Prod workspaces follows sound promotion hygiene",
      "2": "F64 prod capacity is a reasonable starting point for 150-pipeline + 20-dataset workload",
      "3": "Direct notebook migration path (Databricks → Fabric Spark) is architecturally valid"
    }
  },
  "findings_by_dimension": {
    "fabric_architecture_fit": {
      "finding_1": {
        "finding": "Single Lakehouse per environment — no Warehouse in the design",
        "severity": "Blocker",
        "detail": "The design specifies one Lakehouse per environment for all workloads, but 20 Power BI datasets refreshing daily strongly suggests SQL-based semantic models sitting on top. Fabric Lakehouses expose a SQL endpoint, but it is read-only and unoptimised for concurrent BI queries. If any of those 20 datasets are hitting complex joins or require materialized views, a Fabric Warehouse should be in this design. The consultant has not justified why Lakehouse-only is sufficient for the BI layer."
      },
      "finding_2": {
        "finding": "OneLake strategy not defined — no mention of Shortcuts or cross-workspace data sharing",
        "severity": "Gap",
        "detail": "The design places 2TB in ADLS Gen2 today. There is no mention of whether the migration creates Shortcuts pointing to existing ADLS storage during transition, or whether all data is physically moved into OneLake upfront. These are fundamentally different migration paths with very different timelines and cost implications. The design is silent on this."
      },
      "finding_3": {
        "finding": "No workspace capacity assignment model described below F64/F8 split",
        "severity": "Gap",
        "detail": "Three workspaces are proposed but the design does not state whether Dev, Test, and Prod each have dedicated capacity assignments or share. Sharing F8 across Dev and Test means a heavy test run can starve dev notebooks — this needs a defined capacity isolation strategy."
      }
    },
    "migration_gaps": {
      "finding_1": {
        "finding": "Delta Live Tables assessment absent",
        "severity": "Blocker",
        "detail": "The design states 40 Databricks notebooks will move to Fabric Spark notebooks. This assumes all notebooks are standard PySpark or SQL. If any of those 40 use Delta Live Tables (DLT), there is no direct equivalent in Fabric. DLT pipelines require a full redesign — not a migration. The design makes no mention of auditing notebooks for DLT dependency. This must be established before any timeline is credible."
      },
      "finding_2": {
        "finding": "ADF Integration Runtime equivalents not addressed",
        "severity": "Gap",
        "detail": "150 ADF pipelines implies Integration Runtimes — likely a mix of Azure IR and possibly Self-Hosted IR for on-premises sources. Fabric Data Pipelines use an On-Premises Data Gateway for equivalent connectivity. The design does not confirm whether gateway infrastructure has been sized, licensed, or planned. If any pipelines connect to on-premises SQL Server, SAP, or file shares, this is a day-one blocker."
      },
      "finding_3": {
        "finding": "Custom ADF connectors and linked services not inventoried",
        "severity": "Gap",
        "detail": "ADF supports 90+ connectors. Fabric Data Pipelines support a subset. The design does not include an inventory of which linked services are in use across 150 pipelines. Until that inventory exists, the assumption that all pipelines migrate cleanly is unverified."
      },
      "finding_4": {
        "finding": "Databricks Unity Catalog governance mapping not addressed",
        "severity": "Gap",
        "detail": "If the existing Databricks environment uses Unity Catalog for table-level access control and lineage, none of that transfers to Fabric OneLake automatically. The design is silent on how Unity Catalog policies map to Fabric workspace permissions and item-level access. This is not a minor gap for a GDPR environment."
      }
    },
    "capacity_cost": {
      "finding_1": {
        "finding": "F8 for Dev/Test is potentially undersized given 40 Spark notebooks",
        "severity": "Gap",
        "detail": "F8 capacity provides limited Spark compute. If Dev and Test both run Spark notebooks concurrently — which is normal during active migration — contention will cause queuing and failed jobs. The design does not show any modelling of concurrent Spark sessions against F8 limits."
      },
      "finding_2": {
        "finding": "No Databricks vs Fabric cost delta provided",
        "severity": "Gap",
        "detail": "The design states F64 for Prod without a comparison against current Databricks DBU spend and ADF activity costs. For a 150-pipeline, 2TB environment, this comparison is a basic deliverable. Without it, there is no business case validation."
      },
      "finding_3": {
        "finding": "Pause/resume strategy for F8 Dev/Test not mentioned",
        "severity": "Observation",
        "detail": "F8 capacity running 24/7 for Dev/Test environments adds unnecessary cost. Fabric supports capacity pause/resume. The design should include an idle schedule for non-production capacity. Minor, but a visible cost optimisation the consultant should have included."
      }
    },
    "governance_security": {
      "finding_1": {
        "finding": "No Microsoft Purview integration — GDPR applies",
        "severity": "Blocker",
        "detail": "The design explicitly has no mention of Purview or governance. This is a critical blocker given GDPR applies to this environment. Data classification, sensitivity labels, lineage tracking, and audit logging are not optional under GDPR. Purview integration with Fabric OneLake provides scanning, classification, and lineage. Without it, the organisation cannot demonstrate data inventory compliance. This cannot be deferred post-migration."
      },
      "finding_2": {
        "finding": "UK data residency not confirmed at Fabric capacity level",
        "severity": "Blocker",
        "detail": "The compliance context requires data to remain UK-resident. Fabric capacity must be provisioned in the UK South or UK West Azure region, and OneLake data residency must be verified at tenant and capacity level. The design makes no mention of region selection or tenant-level data residency configuration. This is a contractual and regulatory requirement — it must appear explicitly in the architecture."
      },
      "finding_3": {
        "finding": "Row-level security not addressed for Power BI datasets",
        "severity": "Gap",
        "detail": "With 20 Power BI datasets and GDPR in scope, row-level security (RLS) on semantic models is likely required. The design does not confirm whether existing RLS rules from Power BI Premium will be reimplemented in Fabric semantic models. This is a material gap for any dataset containing personal data."
      },
      "finding_4": {
        "finding": "Workspace-level vs item-level permission model not defined",
        "severity": "Gap",
        "detail": "Three workspaces are described but there is no permission model — who has Viewer, Contributor, Member, or Admin roles per workspace? Who can access Lakehouses directly via SQL endpoint? In a GDPR context, access design is not optional."
      }
    },
    "migration_risk_sequencing": {
      "finding_1": {
        "finding": "No parallel run strategy described",
        "severity": "Gap",
        "detail": "The design proposes migrating 150 ADF pipelines and 40 notebooks but does not describe a period where old and new run simultaneously for validation. For a 2TB production environment with daily Power BI refreshes, a cutover without parallel validation is high risk."
      },
      "finding_2": {
        "finding": "No rollback plan",
        "severity": "Gap",
        "detail": "If Fabric deployment fails post-cutover, the design provides no path to revert. ADF pipelines and Databricks workspaces should remain operational until Fabric is validated. The design is silent on this."
      },
      "finding_3": {
        "finding": "Data validation methodology not specified",
        "severity": "Gap",
        "detail": "Moving 2TB from ADLS Gen2 into OneLake requires row-count, hash, or statistical parity checks between source and destination. The design describes no validation approach. Without this, the organisation cannot confirm migration completeness."
      },
      "finding_4": {
        "finding": "Fabric skills gap not addressed",
        "severity": "Observation",
        "detail": "Fabric is not ADF and not Databricks. The workspace model, OneLake concepts, and Fabric Pipeline authoring differ meaningfully from both. The design assumes existing team skills transfer directly. No upskilling or enablement plan is mentioned."
      }
    }
  },
  "questions_for_your_consultant": {
    "q1": {
      "question": "Which of the 40 Databricks notebooks use Delta Live Tables, and how are those being replaced in Fabric?",
      "why_it_matters": "DLT has no direct equivalent in Fabric. Any notebook using DLT requires a redesign, not a migration — this affects timeline and cost.",
      "red_flag_if_they_cant_answer": "They haven't audited the notebooks. The migration scope is unknown."
    },
    "q2": {
      "question": "How is UK data residency enforced at the Fabric tenant and capacity level?",
      "why_it_matters": "GDPR and your stated data residency requirement make this non-negotiable. Region selection must be explicit in the architecture.",
      "red_flag_if_they_cant_answer": "They haven't confirmed tenant-level configuration. You may be at compliance risk from day one."
    },
    "q3": {
      "question": "Why is there no Purview integration in this design, and what is the plan for GDPR data classification and lineage?",
      "why_it_matters": "Without Purview, you have no automated data inventory, no sensitivity label inheritance, and no lineage — all relevant to GDPR accountability obligations.",
      "red_flag_if_they_cant_answer": "Governance has been scoped out to reduce cost. This is a risk transfer to you, not a cost saving."
    },
    "q4": {
      "question": "Have all 150 ADF linked services been inventoried against Fabric Data Pipeline connector support?",
      "why_it_matters": "Fabric does not support every ADF connector. An unsupported connector means a pipeline cannot migrate without custom development.",
      "red_flag_if_they_cant_answer": "They are assuming compatibility without verification. You may discover blockers mid-migration."
    },
    "q5": {
      "question": "What is the parallel run strategy, and how long will ADF and Fabric run simultaneously before cutover?",
      "why_it_matters": "Without parallel validation, a failed Fabric deployment means production outage with no fast recovery path.",
      "red_flag_if_they_cant_answer": "The migration plan is a one-way cutover with no safety net."
    },
    "q6": {
      "question": "What is the Databricks vs Fabric cost comparison for this workload, and how was F64 sized?",
      "why_it_matters": "F64 is a significant spend commitment. Without workload modelling, you may be oversized or — worse — undersized for peak loads.",
      "red_flag_if_they_cant_answer": "The SKU was estimated, not modelled. You have no basis to hold the consultant to a cost outcome."
    },
    "q7": {
      "question": "Which Fabric features in this design are currently in preview, and what is the production readiness timeline?",
      "why_it_matters": "Several Fabric capabilities are still in preview as of mid-2025. Building a production architecture on preview features carries SLA and stability risk.",
      "red_flag_if_they_cant_answer": "They may be proposing a design that is not yet production-ready."
    }
  },
  "recommended_next_steps": {
    "1": "Require the consultant to deliver a Databricks notebook audit — specifically identifying any Delta Live Tables usage — before the migration timeline is finalised",
    "2": "Demand a written confirmation of UK region capacity provisioning and OneLake data residency configuration as a prerequisite to contract sign-off",
    "3": "Insist Purview integration is scoped into the migration — not deferred — given GDPR applies. Request a governance design document covering classification, labels, lineage, and audit logging",
    "4": "Request a full ADF linked service inventory mapped against Fabric connector support, with any gaps flagged with proposed alternatives",
    "5": "Ask for a revised design that includes a Fabric Warehouse for the Power BI semantic model layer and justifies the Lakehouse-only decision if they push back",
    "6": "Do not approve a cutover plan that does not include a defined parallel run period and explicit rollback procedure"
  }
}
```
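The data-validation finding in the example output calls for row-count, hash, or statistical parity checks between source and destination. A minimal, engine-agnostic sketch of the idea in plain Python (the function names and in-memory row lists are illustrative assumptions, not part of the prompt's output; in practice the rows would come from Spark or SQL endpoint queries):

```python
import hashlib

def parity_fingerprint(rows):
    """Row count plus an order-insensitive content hash.

    `rows` is any iterable of tuples. Per-row SHA-256 digests are summed
    modulo 2**256, so the fingerprint does not depend on the order in
    which source and destination engines happen to return rows.
    """
    count = 0
    combined = 0
    for row in rows:
        # Canonicalise: None -> empty string, fields joined on a separator
        # unlikely to appear in data (ASCII unit separator).
        canonical = "\x1f".join("" if v is None else str(v) for v in row)
        digest = hashlib.sha256(canonical.encode("utf-8")).digest()
        combined = (combined + int.from_bytes(digest, "big")) % (1 << 256)
        count += 1
    return count, combined

def check_parity(source_rows, dest_rows):
    """True only when both row count and content fingerprint match."""
    return parity_fingerprint(source_rows) == parity_fingerprint(dest_rows)

if __name__ == "__main__":
    src = [(1, "alice"), (2, "bob")]
    dst = [(2, "bob"), (1, "alice")]          # same rows, different order
    print(check_parity(src, dst))             # → True
    print(check_parity(src, [(1, "alice")]))  # → False (row missing)
```

A statistical variant would additionally compare per-column null counts, min/max, and distinct counts; the count-plus-hash check above is the cheapest test that catches both dropped and altered rows.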
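The pause/resume observation above notes that non-production Fabric capacity should not run 24/7. Fabric capacities are Azure Resource Manager resources, and suspend/resume is exposed as an ARM POST action. A hedged sketch of building that request URL — the `Microsoft.Fabric/capacities` provider path and the `api-version` value here are assumptions to verify against current Azure REST documentation, and authentication (a bearer token on the POST) is omitted:

```python
def capacity_action_url(subscription_id, resource_group, capacity_name,
                        action, api_version="2023-11-01"):
    """Build the ARM URL for a Fabric capacity suspend/resume action.

    NOTE: provider path and api-version are assumptions; check the
    current Azure REST API reference before relying on them.
    """
    if action not in ("suspend", "resume"):
        raise ValueError("action must be 'suspend' or 'resume'")
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Fabric/capacities"
        f"/{capacity_name}/{action}?api-version={api_version}"
    )

if __name__ == "__main__":
    # Hypothetical resource names for illustration only.
    print(capacity_action_url("<subscription-id>", "rg-fabric-dev",
                              "f8devtest", "suspend"))
```

An idle schedule would wire this into a timer-triggered job (e.g. suspend at 19:00, resume at 07:00 on weekdays) so the F8 Dev/Test capacity only bills during working hours.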

MS Fabric Migration Design Reviewer

Your consultant handed you an MS Fabric migration design. Now what? This prompt stress-tests their architecture before you commit — checking for gaps, risks, and assumptions they haven't told you about.

✅ Reviews Fabric-specific design decisions: Lakehouses, Warehouses, Pipelines, Notebooks
✅ Flags Databricks/ADF patterns that don't translate cleanly to Fabric
✅ Identifies cost, performance, and governance risks in the proposed design
Added 1 week ago