Migrating Power BI Premium to Microsoft Fabric: What to Expect
April 18, 2026

Power BI Premium per capacity (P-SKU) is being retired. Microsoft announced the end-of-life timeline in March 2024, and every organization running on P1, P2, or P3 capacity will need to move to a Fabric F-SKU before their agreement expires. The migration itself is simpler than most admins expect: workspace reassignment, not data migration. But the licensing model, compute behavior, and cost structure all shift in ways that require planning.

This guide covers the full scope of what changes, what stays the same, and where teams typically run into trouble.

P-SKU Retirement Timeline and Deadlines

Microsoft stopped selling Power BI Premium capacity to new customers on July 1, 2024. Existing customers without an Enterprise Agreement lost the ability to renew after February 1, 2025. For organizations on an EA, renewals continue until the EA term ends, but no later than January 1, 2028.

The distinction between EA and non-EA customers matters here. Non-EA customers face a hard end-of-life date of January 1, 2026. EA customers have more runway, but “more runway” still means a fixed deadline. Once the EA expires, P-SKUs cannot be renewed.

After a P-SKU subscription ends, Microsoft provides a grace period with three stages. For the first 30 days, organizations receive free Power BI Premium capacity matching their previous P-SKU size. After 30 days, throttling kicks in: all new interactive operations are delayed by 20 seconds. At the 90-day mark, the entire capacity may be frozen, and access to Power BI data could be lost. Capacities left in this state will eventually be removed by Microsoft.
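
The three grace-period stages can be sketched as a small lookup using the 30- and 90-day thresholds stated above:

```python
def grace_period_stage(days_since_expiry: int) -> str:
    """Map days since P-SKU expiry to the grace-period stage described above."""
    if days_since_expiry < 0:
        return "active"          # subscription has not yet ended
    if days_since_expiry < 30:
        return "free capacity"   # free Premium capacity matching the old P-SKU
    if days_since_expiry < 90:
        return "throttled"       # new interactive operations delayed by 20 seconds
    return "frozen"              # capacity may be frozen; data access can be lost

print(grace_period_stage(10))   # free capacity
print(grace_period_stage(45))   # throttled
print(grace_period_stage(120))  # frozen
```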

One exception: sovereign cloud customers are not impacted, since Microsoft Fabric is not yet available in sovereign cloud environments.

How F-SKU Capacity Maps to Legacy P-SKUs

The mapping between old and new SKUs is straightforward at the top end. F64 equals P1. F128 equals P2. F256 equals P3. Each F-SKU is defined in Capacity Units (CUs) rather than v-cores, but the performance equivalence holds for Power BI workloads running at the same tier.
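
The equivalences can be expressed as a simple lookup table. The CU counts follow directly from the F-SKU names (an F-SKU's number is its CU count); the v-core figures are the legacy P-SKU sizes, stated here from general Premium documentation rather than this article:

```python
# Legacy P-SKU -> equivalent F-SKU, with Capacity Units (CUs).
# The v-core figures (8/16/32) come from general Premium documentation,
# not this article.
P_TO_F = {
    "P1": {"f_sku": "F64",  "capacity_units": 64,  "legacy_v_cores": 8},
    "P2": {"f_sku": "F128", "capacity_units": 128, "legacy_v_cores": 16},
    "P3": {"f_sku": "F256", "capacity_units": 256, "legacy_v_cores": 32},
}

def equivalent_f_sku(p_sku: str) -> str:
    return P_TO_F[p_sku]["f_sku"]

print(equivalent_f_sku("P2"))  # F128
```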

Below F64, things get more complicated. F2 through F32 have no P-SKU equivalent. They exist for smaller Fabric deployments or development environments, but they lack a critical feature that P-SKU customers take for granted: free viewer access. Any F-SKU below F64 requires every report consumer to hold a Power BI Pro license at $10 per user per month. For organizations with hundreds of viewers, choosing F32 to “save money” on capacity can end up costing more than F64 once per-user licensing is factored in.

F64 and above include the equivalent of Pro licensing for viewers. Report consumers need only a free Microsoft Fabric license. Content creators and publishers still require Power BI Pro.
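
To see when a sub-F64 tier stops saving money, compare the capacity savings against per-viewer Pro licensing. A minimal sketch: the F64 reserved rate and $10/user Pro price are the figures used in this article, while the F32 price is an assumption (half the F64 reserved rate); creators need Pro under both tiers, so they cancel out of the comparison:

```python
F64_RESERVED = 5003.0            # $/month, one-year reserved (figure from this article)
F32_RESERVED = F64_RESERVED / 2  # assumption: F32 priced at half of F64
PRO_PER_USER = 10.0              # $/user/month Pro price used in this article

def monthly_cost(viewers: int, sku: str) -> float:
    """Total monthly cost: capacity plus viewer licensing where required."""
    if sku == "F64":
        return F64_RESERVED      # viewers covered by free Fabric licenses
    return F32_RESERVED + viewers * PRO_PER_USER  # every viewer needs Pro

# Viewer count at which F64 becomes cheaper than F32 plus Pro licenses.
break_even = (F64_RESERVED - F32_RESERVED) / PRO_PER_USER
print(round(break_even))  # ~250 viewers
```

Under these assumptions, an organization with a few hundred viewers already spends more on F32 plus Pro licenses than it would on F64 alone.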

Cost Comparison: P1 vs. F64 Pricing

A P1 capacity cost $4,995 per month. The F64 equivalent, purchased as a one-year reserved instance, costs approximately $5,003 per month. That is near-parity pricing, roughly $96 more per year.

Pay-as-you-go (PAYG) pricing is a different story. F64 on PAYG runs approximately $8,410 per month, about 68% more than the legacy P1 price. PAYG does allow pausing capacity when it is not in use, so organizations with predictable off-hours can reduce actual spend. But for 24/7 workloads, PAYG is significantly more expensive than reserved.

The reserved instance discount applies across all F-SKU tiers at roughly 40% savings over PAYG. A reasonable rule of thumb: if your capacity runs more than 60% of the time, reserved pricing pays for itself.
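
The 60% rule of thumb falls straight out of the two prices quoted above, assuming PAYG cost scales linearly with the fraction of time the capacity is running:

```python
F64_RESERVED = 5003.0  # $/month, one-year reserved instance
F64_PAYG     = 8410.0  # $/month if run 24/7 on pay-as-you-go

# Reserved becomes the better deal once utilization exceeds this fraction.
break_even_utilization = F64_RESERVED / F64_PAYG
print(f"{break_even_utilization:.0%}")  # ~59%, hence the ~60% rule of thumb

def payg_cost(utilization: float) -> float:
    """PAYG spend at a given fraction of always-on usage (linear assumption)."""
    return F64_PAYG * utilization

print(round(payg_cost(0.50)))  # half-time PAYG undercuts reserved
```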

One additional cost factor that P-SKU customers did not face is OneLake storage. Fabric bills storage separately at approximately $0.023 per GB per month. For organizations storing terabytes in OneLake, this is a meaningful new line item, even though compute pricing stayed flat.
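
OneLake storage is easy to estimate from the approximate per-GB rate above:

```python
ONELAKE_PER_GB_MONTH = 0.023  # approximate $/GB/month figure from this article

def onelake_monthly_cost(terabytes: float) -> float:
    """Monthly OneLake storage bill for a given volume (1 TB = 1024 GB)."""
    return terabytes * 1024 * ONELAKE_PER_GB_MONTH

# 10 TB in OneLake adds roughly $236/month on top of compute.
print(round(onelake_monthly_cost(10), 2))
```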

Fabric capacity spend also qualifies for Microsoft Azure Consumption Commitment (MACC). P-SKUs did not. For organizations with existing Azure commitments, this can offset the transition cost.

For a detailed comparison of licensing tiers and where Fabric pricing fits into the broader Power BI cost picture, see Power BI Pro vs. Premium vs. PPU vs. Fabric in 2026.

What Changes After Migrating to Fabric F-SKU

Moving from P-SKU to F-SKU is not just a billing switch. Several architectural and operational aspects change.

Shared CU Pool Replaces Dedicated v-Cores

P-SKUs allocated dedicated v-cores exclusively to Power BI. F-SKUs use a shared pool of Capacity Units consumed by every Fabric workload: Power BI, Data Factory, Synapse Data Engineering, Synapse Data Warehousing, Real-Time Intelligence, and more. A heavy Spark notebook running in the same capacity will compete for CUs with Power BI report queries. This makes workload scheduling important in a way it never was under P-SKU, where Power BI had the compute pool to itself.

Smoothing and Throttling Replace Hard Limits

Fabric introduces a resource management model built on smoothing rather than hard capacity limits. Background operations like semantic model refreshes have their CU consumption spread across a 24-hour window. Interactive operations like report queries are smoothed over roughly five minutes. This sounds generous until the capacity gets overloaded.

Throttling kicks in at escalating thresholds. Per Microsoft’s published policy, once roughly 10 minutes of future capacity has been consumed, interactive queries are delayed; at 60 minutes of overage, they are rejected outright. When 24 hours’ worth of future capacity is consumed, background jobs are rejected as well. Sustained overconsumption can trigger a full capacity freeze.
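
The escalation can be sketched as a classifier over accumulated overage. The 60-minute and 24-hour thresholds are described in this section; the 10-minute interactive-delay stage is taken from Microsoft's published throttling policy:

```python
def throttling_stage(overage_minutes: float) -> str:
    """Classify Fabric throttling by minutes of future capacity consumed.

    10 min  -> interactive queries delayed (per Microsoft's published policy)
    60 min  -> interactive queries rejected
    24 hrs  -> background jobs rejected as well
    """
    if overage_minutes < 10:
        return "overage protection"      # no throttling yet
    if overage_minutes < 60:
        return "interactive delay"       # interactive queries slowed
    if overage_minutes < 24 * 60:
        return "interactive rejection"   # interactive queries rejected
    return "background rejection"        # background jobs rejected too

print(throttling_stage(5))     # overage protection
print(throttling_stage(90))    # interactive rejection
print(throttling_stage(2000))  # background rejection
```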

Full Fabric Platform Unlocks

F-SKU capacities provide access to the complete Fabric platform: OneLake, Data Factory pipelines, Spark-based data engineering, SQL-based data warehousing, real-time streaming analytics, and Copilot AI features (F64 and above). P-SKU customers were limited to Power BI workloads. For organizations already considering a broader data platform strategy, this is a meaningful expansion at no additional compute cost.

For a broader overview of how Fabric capacity, F-SKUs, and Direct Lake fit together, see Microsoft Fabric for Power BI Users.

Reports, Models, and Gateways That Survive the Migration

The migration from P-SKU to F-SKU preserves nearly everything that matters to day-to-day Power BI operations. Semantic models, reports, dashboards, and paginated reports continue to function without modification. Row-Level Security configurations persist. Deployment pipelines remain intact. Dataflows Gen2 carry over. Workspace IDs do not change, so external tools, APIs, and bookmarked URLs keep working.

Data gateway configurations also survive the migration. On-premises and VNet gateways remain in place. Organizations pulling data from SAP, Salesforce, or other enterprise sources through a gateway do not need to reconfigure those connections. Metrica’s Power BI Connector for SAP, for example, continues to function through its existing gateway setup without any changes to authentication or query configuration.

Data stays in its original region, preserving data sovereignty and compliance certifications. Microsoft has stated that once workspaces are assigned to a Fabric capacity with equivalent capability (such as F64 for P1), they should behave identically to how they performed under the legacy capacity.

Power BI Pro and Premium Per User licenses are also unaffected by the P-SKU retirement. They continue to work exactly as before.

Workspace Reassignment: How the Migration Works

The actual migration is a workspace reassignment operation. No data moves. Reports and models stay where they are. Only the underlying capacity assignment changes.

For a small number of workspaces, the manual approach works fine. Open workspace Settings, navigate to License Info, and switch from Power BI Premium to Microsoft Fabric capacity. Select the target F-SKU. The workspace reassigns in seconds. The Admin Portal also supports bulk assignment: you can reassign all workspaces for a specific user group, select individual workspaces by name, or assign the entire organization’s workspaces to a capacity at once.

For larger migrations, Microsoft provides an automated notebook tool through the Semantic Link Labs repository on GitHub. The tool creates F-SKUs in the same region as existing P-SKUs, copies capacity admins, and reassigns all workspaces. It handles single capacities, lists of capacities, or all capacities at once.
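
Between the Admin Portal and the full notebook, the same reassignment can also be scripted against the Power BI REST API's AssignToCapacity endpoint. A minimal sketch using only the standard library; the GUIDs are placeholders, and obtaining an Azure AD access token with the required scope is left out:

```python
import json
import urllib.request

API = "https://api.powerbi.com/v1.0/myorg"

def build_assign_request(workspace_id: str, capacity_id: str,
                         token: str) -> urllib.request.Request:
    """Build a Groups/AssignToCapacity call; sending it is left to the caller."""
    return urllib.request.Request(
        url=f"{API}/groups/{workspace_id}/AssignToCapacity",
        data=json.dumps({"capacityId": capacity_id}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Placeholder GUIDs; send with urllib.request.urlopen(req) once a real
# token and IDs are in place.
req = build_assign_request("<workspace-guid>", "<f-sku-capacity-guid>", "<token>")
print(req.full_url)
```

Looping this over a list of workspace IDs gives a lightweight bulk migration without the full notebook tooling.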

There are limitations. The notebook creates PAYG F-SKUs by default. Converting to reserved instances is a separate manual step. Capacity-level settings like disaster recovery, notification preferences, and delegated tenant overrides are not transferred automatically. XMLA endpoint connections in external tools may need re-authentication. Organizations should plan the migration for a low-activity window, since all active jobs (refreshes, queries) are canceled during workspace reassignment. Scheduled jobs resume once migration completes.

Direct Lake as a Post-Migration Optimization

Migration to F-SKU does not require converting existing Import mode models to Direct Lake. Every Import and DirectQuery model continues to work as-is on Fabric capacity. Direct Lake is an optional optimization step, and it involves significant effort.

Direct Lake is a storage mode that reads data directly from Delta tables in OneLake without importing it into the VertiPaq engine. The result is near-Import query performance without the memory overhead and refresh latency of full data imports. For organizations with large datasets and frequent refresh requirements, Direct Lake can meaningfully reduce both refresh times and CU consumption.

The conversion process is nontrivial. All Power Query (M) transformations in the semantic model must be recreated in the Lakehouse or Warehouse layer using Spark, T-SQL, or Dataflow Gen2. Data must be stored as Delta tables with V-Order optimization for best performance. Calculated columns, unsupported in Direct Lake semantic models, must be pushed to the transformation layer. Data types must match exactly between the semantic model and the underlying Delta tables.

Poorly optimized DAX queries that performed adequately under Import mode may surface performance problems under Direct Lake, since the data retrieval pattern differs. Organizations should audit and optimize their data models before or alongside any Direct Lake conversion.

For most teams, the pragmatic approach is to complete the P-to-F capacity migration first, stabilize operations, and then evaluate Direct Lake for specific high-value models where refresh frequency or dataset size justifies the conversion effort.

Ten Common Pitfalls in the P-SKU to F-SKU Migration

Even though the core migration is straightforward, several recurring issues catch teams off guard.

Throttling from shared CU contention. Under P-SKU, Power BI had dedicated compute. Under F-SKU, a heavy Spark job or Data Factory pipeline can starve Power BI queries. Organizations need to monitor CU consumption across workloads and stagger resource-intensive jobs outside business hours. A single user can consume over 200% of capacity, degrading performance for everyone else.

Power BI Report Server (PBIRS) key availability. P-SKU included a PBIRS license key. On F-SKU, PBIRS is only included with F64 reserved instance purchases. Organizations on PAYG or smaller F-SKUs lose PBIRS access unless they have a separate SQL Server Enterprise license with Software Assurance.

PAYG cost overruns. Without reserved instances or autoscale limits, F-SKU PAYG costs can exceed expectations by 2x or more. The mitigation is straightforward: review Capacity Metrics, stagger dataset refresh schedules, implement incremental refresh for large datasets, and convert to reserved pricing once utilization patterns are understood.

Below-F64 licensing surprise. Choosing F32 or smaller to reduce capacity costs eliminates free viewer access. Every report consumer needs a Pro license. For organizations with many viewers, the per-user costs quickly exceed the difference between F32 and F64 capacity pricing.

Dataflows failing after reassignment. Dataflows using deprecated connectors or legacy settings may break on F-SKU. Recreating them in Dataflows Gen2, updating connectors, and verifying gateway connectivity resolves most failures.

XMLA endpoint re-authentication. External tools like Tabular Editor and ALM Toolkit may lose connectivity after the capacity change. Connection strings in automation scripts need updating, and firewall rules should be verified.

Pausing capacity disrupts all access. When a PAYG F-SKU is paused, everything stops. Imported datasets, reports, dashboards: all go dark. Organizations with global teams or around-the-clock access requirements cannot pause capacity without user impact. OneLake storage billing continues even while compute is paused.
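
The pausing trade-off can be put in rough numbers. The hourly rate below is derived from the ~$8,410/month PAYG figure quoted earlier, assuming a 730-hour billing month; remember that OneLake storage keeps billing regardless:

```python
F64_PAYG_MONTH = 8410.0          # $/month PAYG figure from this article
HOURLY = F64_PAYG_MONTH / 730.0  # assumption: 730-hour billing month

def monthly_compute(hours_running_per_day: float) -> float:
    """PAYG compute for a 30-day month; pausing stops compute billing only."""
    return HOURLY * hours_running_per_day * 30

# Business hours only (12h/day) cuts compute roughly in half,
# but every paused hour is a dark capacity for users elsewhere.
print(round(monthly_compute(12)))
print(round(monthly_compute(24)))
```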

Cross-region workspace restrictions. Workspaces containing Fabric artifacts cannot move to a capacity in a different geography. Large semantic models in “large data format” must be converted before any region transfer. Provision the F-SKU capacity in the correct region from the start.

Embedded analytics compatibility. F-SKUs support both “embed for your customers” and “embed for your organization” scenarios. Legacy A-SKU capacities only supported customer-facing embedding. Organizations with embedded analytics should verify their specific pattern is compatible.

Capacity settings not auto-migrated. The automated migration notebook does not transfer disaster recovery configurations, notification settings, or delegated tenant overrides. These must be manually reconfigured in the Fabric Admin Portal after migration.

Planning the Migration Sequence

A practical migration sequence for most organizations follows this order. First, audit current P-SKU utilization using the Premium Capacity Metrics app. Understand peak CU consumption, refresh schedules, and which workspaces drive the most load. Second, register the Microsoft.Fabric resource provider in Azure and verify Fabric quota availability in the target region.

Third, provision the F-SKU capacity. Match P1 to F64, P2 to F128, P3 to F256. Start with reserved instances if utilization is predictable. Fourth, run the migration during a maintenance window using either the Admin Portal bulk assignment or the automated notebook tool. Fifth, validate that all reports, refreshes, gateways, and XMLA connections function correctly.

Sixth, monitor CU consumption closely for the first two weeks. Fabric’s smoothing and throttling behavior differs from P-SKU, and workloads that ran comfortably on dedicated v-cores may need adjustment under shared CU allocation.

Direct Lake conversion, if planned, should come last, after the capacity migration is stable and operations are running normally on F-SKU.

Microsoft documents the full migration automation process, including the Semantic Link Labs notebook and REST API options, in their official Fabric migration blog post.

Frequently Asked Questions About the P-SKU to Fabric Migration

These are the questions that come up most often when teams are evaluating or planning the move from Power BI Premium to Fabric.

Can I migrate back from F-SKU to P-SKU?

No. The migration is one-way. P-SKUs are being retired, so there is no active P-SKU to revert to once the switch is made.

Do I need to rebuild my Power BI reports after migration?

No. Reports, semantic models, dashboards, paginated reports, and RLS configurations all carry over unchanged. Workspace IDs remain the same. External tools and bookmarked URLs continue to work.

Is F64 more expensive than P1?

With a one-year reserved instance, F64 costs approximately $5,003 per month versus $4,995 for P1. The difference is negligible. PAYG pricing at roughly $8,410 per month is significantly higher, but allows pausing and scaling.

What happens if I do nothing and let my P-SKU expire?

Microsoft provides 30 days of free capacity, then begins throttling interactive operations with a 20-second delay. At 90 days, the capacity may be frozen and data access lost. Eventually, Microsoft removes the capacity entirely.

Does the migration require downtime?

Active jobs are canceled during workspace reassignment. Scheduled jobs resume once migration completes. Microsoft recommends performing the migration when users and jobs are not active to minimize disruption.