Microsoft Fabric for Power BI Users
March 26, 2026

Microsoft Fabric for Power BI Users: Direct Lake, OneLake & F-SKUs

Power BI hasn’t gone away, but the platform it runs on has changed substantially. Microsoft Fabric is the new home for Power BI workloads, and the migration from Premium P-SKUs to Fabric F-SKUs is now mandatory for existing customers. For many Power BI teams, the transition raises practical questions that official announcements don’t always answer clearly: Does Direct Lake replace Import mode? What does OneLake actually do for a report author? And how does the F64 licensing threshold affect cost planning for a 300-person organization? This guide covers those questions directly, without the marketing gloss.

What Microsoft Fabric Changes for Power BI Users

Microsoft Fabric is a consolidation play. It brings Power BI, Azure Synapse Analytics, and Azure Data Factory under one platform, with a shared storage layer and a unified capacity model. For a Power BI practitioner, the most visible change isn’t in report authoring. It’s in how data gets stored, licensed, and processed underneath the reports.

Fabric as a Platform Shift

Before Fabric, Power BI Premium sat as a relatively self-contained product: capacity purchased through Microsoft, data stored wherever your pipelines pointed, licensing billed per user or per capacity. Fabric replaces that model with something closer to a cloud platform subscription. Capacity is provisioned through Azure in the form of F-SKUs. Data flows into OneLake. Workloads (Power BI, Data Engineering, Data Warehouse, Real-Time Intelligence, Data Science) all share the same compute and the same storage tier.

For a Power BI analyst or developer, day-to-day work inside Power BI Desktop and the Power BI service looks largely the same. Reports, semantic models, dashboards, scorecards, and paginated reports all continue to function. What changes is what sits beneath them, and the options available for how semantic models connect to data.

What Stays the Same for Power BI Users

Import mode and DirectQuery mode are both fully supported inside Fabric. Existing semantic models migrated from Power BI Premium workspaces do not need to be rebuilt. Scheduled refreshes continue to work, though they may need reconfiguring after workspaces are reassigned to a new Fabric capacity. DAX, measures, relationships, row-level security, and Power Query transformations all remain intact. The Power BI service UI is now embedded within the Fabric portal, but reports look the same to end users.

A few things did change at the product level since Fabric’s launch. Default semantic models were decoupled from lakehouses and warehouses in October 2025 and now exist as independent items. Metric Sets were deprecated. Azure Maps replaced Bing Maps in paginated reports. Translytical task flows became generally available in March 2026, allowing actions to be performed directly within reports rather than just read from them. But for most Power BI users, the core workflow is unchanged.

OneLake: The Unified Storage Layer Behind Fabric

Every Fabric tenant gets OneLake automatically. There is no provisioning step, no separate subscription, and no option to skip it. It is the mandatory storage foundation for all Fabric workloads, including Power BI semantic models that use Direct Lake mode.

How OneLake Works

OneLake is built on Azure Data Lake Storage Gen2 and exposes the same APIs and SDKs. All data stored in Fabric (lakehouses, warehouses, databases) lands in OneLake automatically in Delta Parquet format. Delta Parquet is a columnar format that supports ACID transactions through log-based versioning. Each write operation creates new Parquet files; the Delta log tracks which files are current for a given version. Multiple compute engines within Fabric can read the same data: T-SQL from a warehouse, Spark from a notebook, and the Analysis Services engine from a Power BI semantic model.

This shared format eliminates the data duplication that was common in older architectures, where data engineers would move data into a staging layer, then a warehouse, and Power BI would import a third copy for in-memory processing. With OneLake, data engineers write to lakehouses once, and Power BI can read from the same Delta tables using Direct Lake mode.

OneLake also supports shortcuts, references to external storage (ADLS Gen2, Amazon S3, Dataverse) that appear as local paths without physically moving the data. Snowflake interoperability through Iceberg table sharing reached general availability in February 2026.
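Shortcuts can also be created programmatically. The sketch below builds the request for an ADLS Gen2 shortcut via the OneLake shortcuts REST API — a hedged sketch: the endpoint and payload shape follow Microsoft's documented API at the time of writing and should be verified against current docs, and all IDs are placeholders.

```python
import json

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_shortcut_request(workspace_id: str, item_id: str, name: str,
                           account_url: str, subpath: str, connection_id: str):
    """Build the URL and body for creating an ADLS Gen2 shortcut in a
    lakehouse. Payload shape follows the OneLake shortcuts REST API;
    verify field names against current Microsoft documentation."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/items/{item_id}/shortcuts"
    body = {
        "path": "Tables",   # create the shortcut under the Tables folder
        "name": name,
        "target": {
            "adlsGen2": {
                "location": account_url,
                "subpath": subpath,
                "connectionId": connection_id,
            }
        },
    }
    return url, body

url, body = build_shortcut_request(
    "<workspace-guid>", "<lakehouse-guid>", "sales_external",
    "https://contoso.dfs.core.windows.net", "/landing/sales", "<connection-guid>")
print(url)
print(json.dumps(body, indent=2))
```

The request would then be POSTed with a bearer token; the sketch stops at payload construction so nothing is sent without credentials.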

What OneLake Means for Power BI Data Storage

For Power BI report authors, OneLake is mostly invisible during normal report development. The significance appears when data architecture decisions are made. A semantic model using Direct Lake mode reads directly from OneLake Delta tables without importing data into a separate in-memory store. That means no extract-and-load step between the data lake and the semantic model. The tradeoff is that Direct Lake has specific requirements for how Delta tables are structured and maintained, and there are per-SKU guardrails that constrain how large each table can be.

Direct Lake Mode: How It Compares to Import and DirectQuery

Direct Lake is the third storage mode option in Power BI, sitting alongside Import and DirectQuery. Understanding it precisely matters, because the name suggests something simpler than what it actually is.

How Direct Lake Processes Queries

Direct Lake reads column data from Delta Parquet files in OneLake directly into the VertiPaq in-memory engine, the same engine that powers Import mode. Unlike Import mode, there is no separate data refresh that extracts and stores a full copy of the data. Instead, Direct Lake uses a process called framing.

Framing is a lightweight operation triggered when the semantic model refreshes. Rather than pulling all data, framing reads the Delta table’s metadata log, identifies which Parquet files are current, and sets a version timestamp. Subsequent DAX queries then load column data from those Parquet files into memory on demand, using that timestamp as the baseline. Framing takes seconds rather than the minutes or hours that a full Import refresh can require for large models. The actual column data is loaded into memory lazily, on the first query that touches a given column, and stays warm until evicted by capacity pressure or a new framing cycle.
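A toy replay of the Delta log illustrates the idea. This is conceptual Python, not Fabric's implementation; real logs live as JSON commit files under the table's `_delta_log` directory, and the file names here are invented for illustration.

```python
# Conceptual sketch of what framing does: replay the Delta log to find
# the set of Parquet files that make up the current table version.
# Real Delta logs are JSON files under _delta_log/; this uses in-memory
# commits purely for illustration.

def frame(delta_log):
    """Return (version, active_files): the latest version number and the
    Parquet files current at that version. No column data is read here --
    that happens lazily, on the first query that touches a column."""
    active = set()
    version = -1
    for version, commit in enumerate(delta_log):
        for action in commit:
            if "add" in action:
                active.add(action["add"]["path"])
            elif "remove" in action:
                active.discard(action["remove"]["path"])
    return version, active

log = [
    [{"add": {"path": "part-000.parquet"}}],                   # v0: initial load
    [{"add": {"path": "part-001.parquet"}}],                   # v1: append
    [{"remove": {"path": "part-000.parquet"}},                 # v2: compaction
     {"add": {"path": "part-002.parquet"}}],
]
version, files = frame(log)
print(version, sorted(files))   # 2 ['part-001.parquet', 'part-002.parquet']
```

The point of the sketch: only metadata is touched, which is why framing completes in seconds regardless of table size.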

Because queries run through VertiPaq rather than being pushed to a source database, Direct Lake query performance is close to Import mode. Benchmark testing (FourMoo, January 2026) recorded median query durations of roughly 174 ms for Import, 247 ms for Direct Lake, and 724 ms for DirectQuery. Direct Lake’s overhead relative to Import is modest and typically acceptable for production reporting.

Direct Lake on OneLake vs. Direct Lake on SQL Endpoints

There are two variants of Direct Lake, and the distinction is significant for how failures are handled.

Direct Lake on SQL endpoints is the original version, generally available since November 2023. It routes through the SQL analytics endpoint of a lakehouse or warehouse. When guardrails are exceeded (for example, a table has too many rows for the current SKU), or when row-level security is applied at the SQL layer, queries fall back automatically to DirectQuery mode. This fallback is transparent to the user but carries the full performance penalty of DirectQuery. The fallback behavior can be controlled by setting the `DirectLakeBehavior` property on the semantic model.
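As an illustration, the setting is a property of the semantic model definition. In TMSL/model.bim form it would look roughly like the fragment below; the property name and the values `automatic` (default, fallback allowed), `directLakeOnly` (fail rather than fall back), and `directQueryOnly` follow the Tabular Object Model at the time of writing — verify against current documentation before relying on them.

```json
{
  "model": {
    "directLakeBehavior": "directLakeOnly"
  }
}
```

Setting `directLakeOnly` turns the SQL-endpoint variant's silent fallback into an explicit failure, which makes guardrail breaches visible instead of merely slow.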

Direct Lake on OneLake is the newer variant, reaching general availability in March 2026 with desktop support added in March 2025. It reads directly from Delta tables without going through a SQL endpoint. There is no DirectQuery fallback: if guardrails are exceeded, queries fail. This is a stricter mode that requires accurate SKU sizing but eliminates the hidden performance degradation that fallback introduces. Microsoft’s documentation recommends designing solutions to avoid fallback in either variant, treating it as a failure condition rather than a safety net.

Performance Comparison with Import and DirectQuery

For production workloads where data is managed in a Fabric lakehouse and the Delta tables are well-maintained, Direct Lake delivers performance that is competitive with Import. The practical difference most users notice is data freshness: Import mode reflects data as of the last full refresh, which may be hours old, while Direct Lake reflects the last framing operation (which can be scheduled at high frequency or triggered programmatically). DirectQuery always reflects current data but pays a latency cost per query.

Direct Lake does not support composite models, calculated columns, calculated tables, complex Delta column types, or auto date/time intelligence. It does support calculation groups, what-if parameters, and field parameters. Teams with complex semantic models built for Import mode may need to adjust their models before Direct Lake is viable.

For a detailed look at how Power BI connects to data sources across different modes, the Power BI data connections guide on the Metrica blog covers the architecture in more depth.

F-SKU Licensing: Replacing P-SKUs and What It Costs

The licensing change is arguably the most disruptive part of the Fabric transition for existing Power BI Premium customers. P-SKU sales officially ended in July 2024. Existing customers can continue using their current agreement but must migrate to F-SKUs when that agreement expires, with a 30-day grace period provided after the P-SKU lapses.

P-SKU to F-SKU Equivalencies and Migration Timeline

The compute equivalencies are straightforward:

  • P1 = F64
  • P2 = F128
  • P3 = F256

The compute capacity is matched, but the pricing model changes. P-SKUs were sold through Microsoft’s enterprise agreements and billed annually at roughly $4,500 per month for a P1. F-SKUs are purchased through Azure and billed on an hourly basis with reserved pricing options. A 1-year reserved F64 runs approximately $5,003 per month. That is modestly higher than P1 in raw capacity cost, but F64 includes access to all Fabric workloads (Data Engineering, Data Factory, Data Science, Real-Time Intelligence, Databases), not just Power BI.

The 30-day grace period after P-SKU expiration is meant to allow workspace reassignment, but the migration requires advance planning. Microsoft also recommends purchasing the F-SKU before the P-SKU expires to avoid overlap gaps.

The F64 Threshold and Pro License Requirements

F64 is the most consequential number in Fabric licensing for Power BI teams. Below F64 (F2 through F32), every user who consumes Power BI content needs a Power BI Pro license at $14 per user per month. At F64 and above, users with a viewer role can access content with a free Fabric license, meaning no per-user charge for read-only consumers.

For organizations with large numbers of report viewers, this threshold can dominate total cost of ownership. An organization with 300 Power BI viewers on F32 capacity pays approximately $2,501 per month for capacity plus $4,200 for Pro licenses ($14 x 300), totaling $6,701. The same organization on F64 pays $5,003 per month with no Pro licenses required for viewers, saving roughly $1,698 per month. Using these figures, the break-even point falls at approximately 179 viewers: the $2,502 capacity difference divided by $14 per license.
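The arithmetic is easy to sanity-check. A minimal sketch using the reserved prices quoted in this article — viewer count is the only variable, since creators need Pro licenses at either tier:

```python
# Back-of-envelope licensing comparison using the 1-year reserved prices
# quoted in this article. Only viewer licenses differ between the tiers:
# creators need Pro either way, viewers are free at F64 and above.

F32_CAPACITY = 2501   # USD/month
F64_CAPACITY = 5003   # USD/month
PRO_LICENSE = 14      # USD/user/month

def monthly_cost(viewers: int, sku: str) -> int:
    if sku == "F32":
        return F32_CAPACITY + PRO_LICENSE * viewers   # every viewer needs Pro
    if sku == "F64":
        return F64_CAPACITY                           # viewers ride free
    raise ValueError(sku)

print(monthly_cost(300, "F32"))   # 6701
print(monthly_cost(300, "F64"))   # 5003

# Break-even viewer count: where F32's license burden erases its
# capacity savings.
break_even = (F64_CAPACITY - F32_CAPACITY) / PRO_LICENSE
print(round(break_even))          # 179
```

Swapping in your own viewer count and current pricing makes the F32-versus-F64 decision a one-line calculation rather than a guess.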

Content creators who publish reports still require Pro or Premium Per User licenses regardless of F-SKU size. The free-consumption threshold applies to viewers only.

Pricing Breakdown

For reference, reserved 1-year pricing in USD:

  • F2: $156/month
  • F8: $625/month
  • F32: $2,501/month
  • F64: $5,003/month
  • F128: $10,005/month
  • F256: $20,011/month

OneLake storage is billed separately at $0.023 per GB per month. Reserved pricing typically offers around 41% savings compared to pay-as-you-go rates. A detailed breakdown of Power BI licensing tiers and how they compare to Fabric is covered in the Power BI Pro vs. Premium vs. Fabric 2026 pricing guide on the Metrica blog.

Migrating from Power BI Premium to Fabric

Migration from Power BI Premium to Fabric is operationally straightforward in most cases. The complexity concentrates in a few specific scenarios.

Workspace Reassignment Process

The core migration step is reassigning workspaces from a Premium capacity to a Fabric capacity. This can be done through the Power BI Admin Portal by changing the workspace’s License Info setting from a P-SKU capacity to an F-SKU capacity. For environments with hundreds or thousands of workspaces, Microsoft provides an automated migration tool built on the Fabric REST APIs that significantly reduces the manual effort.
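For scripted reassignment of individual workspaces, the relevant call is the Power BI REST API's AssignToCapacity endpoint. A hedged sketch: the IDs are placeholders, and the token must carry the appropriate admin or workspace permissions.

```python
import json
import urllib.request

POWER_BI_API = "https://api.powerbi.com/v1.0/myorg"

def assign_workspace_to_capacity(token: str, workspace_id: str,
                                 capacity_id: str) -> None:
    """Move one workspace onto the given F-SKU capacity using the
    Groups - AssignToCapacity endpoint of the Power BI REST API."""
    url = f"{POWER_BI_API}/groups/{workspace_id}/AssignToCapacity"
    req = urllib.request.Request(
        url,
        data=json.dumps({"capacityId": capacity_id}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req):   # raises on a non-2xx response
        pass

# Example call (not executed here; requires a valid AAD token):
# assign_workspace_to_capacity(token, "<workspace-guid>", "<f64-capacity-guid>")
```

Looping this over an audited workspace inventory is essentially what the automated migration tooling does at scale.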

Before reassigning, teams should audit their workspaces for content that may behave differently under Fabric. Semantic models with Large Storage Format enabled, incremental refresh policies, or external data connections to on-premises sources all warrant individual review.

Cross-Region and Large Storage Format Complications

Two scenarios generate the most migration friction. First, Fabric items cannot be moved across Azure regions. If a Power BI Premium capacity was provisioned in one Azure region and the target Fabric capacity is in another, the content cannot be reassigned directly. Data residency requirements that were met by the P-SKU region must be re-evaluated against available Fabric regions.

Second, semantic models using Large Storage Format in Power BI Premium do not migrate cleanly to Fabric. Large Storage Format was introduced to handle models exceeding the standard 10 GB limit under Premium, but it is not natively compatible with certain Fabric features. Models in this format may require rebuild or optimization before migration completes successfully.

Scheduled Refreshes and Semantic Model Changes

After workspace reassignment, scheduled refresh configurations may need to be re-established. Credentials for data sources need to be re-entered in some cases, particularly for connections using gateway-based authentication. The gateway itself should be verified against Fabric connectivity requirements.

Semantic models that were using default datasets tied to lakehouses or warehouses should be reviewed in light of the October 2025 decoupling, where these became independent items. This affects how refresh and dependency chains are managed post-migration.

Common Challenges in the Fabric Transition

The transition introduces a set of friction points that are distinct from the typical Power BI implementation challenges. They aren’t insurmountable, but they’re worth knowing in advance.

Direct Lake Guardrails and Fallback Behavior

Each F-SKU has guardrails that determine the maximum data volume a Direct Lake semantic model can handle without falling back to DirectQuery. The guardrails cover Parquet file counts per table, row group counts, rows per table, and total on-disk data size. An F64, for example, supports up to 1.5 billion rows per table. Exceeding the guardrail for maximum model size causes all queries to fall back to DirectQuery, not just the query that hit the limit. Teams that don’t monitor table sizes actively can encounter unexpected performance degradation without a clear error message.
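A monitoring script can flag tables approaching the limits before fallback or failure occurs. In this sketch, only the F64 row limit comes from the figures above; limits for other SKUs should be filled in from Microsoft's current guardrail documentation, and the row counts would come from your own lakehouse metadata.

```python
# Minimal guardrail check. Only the F64 figure (1.5 billion rows per
# table) is taken from the text above; other SKUs' limits are left to
# be filled in from Microsoft's current documentation.

ROWS_PER_TABLE_LIMIT = {
    "F64": 1_500_000_000,
    # "F128": ..., "F256": ...   # look up current documented values
}

def check_tables(sku: str, table_rows: dict) -> list:
    """Return names of tables exceeding the SKU's row guardrail --
    candidates for DirectQuery fallback (SQL-endpoint variant) or
    outright query failure (OneLake variant)."""
    limit = ROWS_PER_TABLE_LIMIT[sku]
    return [name for name, rows in table_rows.items() if rows > limit]

tables = {"fact_sales": 2_100_000_000, "dim_customer": 12_000_000}
print(check_tables("F64", tables))   # ['fact_sales']
```

Running a check like this on a schedule gives the "clear error message" that the fallback behavior itself does not provide.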

For Direct Lake on OneLake, exceeding guardrails does not trigger a fallback. Queries fail outright. This makes table size management non-optional for teams using the newer variant.

Licensing Confusion Below and Above F64

The F64 threshold introduces a decision that many Power BI admins never had to make under the Premium model, where a P1 already included free viewer access. Below F64, the per-user license cost is easy to underestimate during budget planning. Organizations that select F32 to save on capacity cost sometimes find that the Pro license burden for large user populations makes the total cost higher than F64 would have been.

The distinction between creators and viewers matters more under Fabric licensing than under Premium. Mapping user populations accurately before committing to a SKU tier is essential, not a secondary concern.

Feature Gaps in Direct Lake Mode

Direct Lake does not support everything that Import mode does. The absence of calculated columns and calculated tables is the most commonly encountered limitation for teams with complex DAX models. Calculated columns in Import mode pre-compute values during refresh and store them in memory. In Direct Lake, they do not exist in the model definition; the equivalent logic must be pushed upstream into the lakehouse using notebooks or pipelines before framing occurs. Teams with large calculated column libraries face significant refactoring work.
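For instance, a margin column defined in DAX would instead be materialized in the lakehouse before framing. Sketched here as a Spark SQL step you might run in a Fabric notebook, with illustrative table and column names:

```sql
-- Illustrative lakehouse-side replacement for a DAX calculated column.
-- Materialize the derived column into the Delta table before framing,
-- so Direct Lake sees it as a plain, physical column.
CREATE OR REPLACE TABLE sales_enriched AS
SELECT
    s.*,
    s.revenue - s.cost AS margin   -- formerly a DAX calculated column
FROM sales s;
```

The tradeoff is that the logic now runs at pipeline time rather than at refresh time, so it must be incorporated into the lakehouse's orchestration schedule.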

The lack of auto date/time intelligence is a related issue. Many report authors rely on Power BI’s automatic date table generation without realizing it. In Direct Lake mode, date intelligence tables must be built explicitly in the lakehouse or as separate dimension tables.
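Generating such a dimension is straightforward. A minimal plain-Python sketch — in a real Fabric setup this logic would run in a notebook and be written to a Delta table, and the column set here is illustrative:

```python
from datetime import date, timedelta

def build_date_dimension(start: date, end: date) -> list:
    """Generate one row per calendar day -- the explicit replacement for
    Power BI's auto date/time tables, which Direct Lake doesn't provide."""
    rows = []
    d = start
    while d <= end:
        rows.append({
            "date": d.isoformat(),
            "year": d.year,
            "quarter": (d.month - 1) // 3 + 1,
            "month": d.month,
            "day_of_week": d.isoweekday(),   # 1 = Monday
        })
        d += timedelta(days=1)
    return rows

dim = build_date_dimension(date(2026, 1, 1), date(2026, 12, 31))
print(len(dim))           # 365
print(dim[0]["quarter"])  # 1
```

Once written to the lakehouse as a Delta table, the dimension is related to fact tables in the semantic model exactly as any other dimension would be.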

Organizational Readiness

Fabric exposes data engineers, data scientists, and Power BI developers to the same platform. For teams where these roles were previously in separate tooling environments, that unification creates both opportunity and friction. Power BI teams that have never worked with notebooks, lakehouses, or Delta tables suddenly find those concepts relevant to their data model performance. Planning for cross-functional knowledge transfer before the migration, not after, reduces the discovery period.

Best Practices for Power BI Users Moving to Fabric

Moving to Fabric is not simply a licensing change. Getting the most from the platform requires deliberate decisions at each stage.

Start with an Assessment

Before migrating any workspace, build an inventory. Catalog semantic model sizes, refresh frequencies, user counts per workspace, data source types, and whether any models use Large Storage Format or cross-region connections. The inventory reveals which workspaces migrate cleanly and which require remediation. Microsoft’s Power BI REST APIs and the Admin Portal’s usage metrics provide most of this data programmatically.
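A starting point for that inventory, sketched against the Power BI admin API's groups endpoint — tenant admin permissions and a valid token are assumed, and response field names should be verified against current documentation:

```python
import json
import urllib.request

ADMIN_API = "https://api.powerbi.com/v1.0/myorg/admin"

def fetch_workspaces(token: str, top: int = 5000) -> dict:
    """Pull the workspace inventory, with semantic models expanded, via
    the Power BI admin API (requires tenant admin permissions)."""
    url = f"{ADMIN_API}/groups?$top={top}&$expand=datasets"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize(payload: dict) -> list:
    """Reduce the response to (workspace name, semantic-model count)
    pairs -- a seed for the migration inventory spreadsheet."""
    return [(ws.get("name", "?"), len(ws.get("datasets", [])))
            for ws in payload.get("value", [])]

# Sample response shape, for illustration without a live tenant:
sample = {"value": [{"name": "Finance", "datasets": [{"id": "a"}, {"id": "b"}]},
                    {"name": "Ops", "datasets": []}]}
print(summarize(sample))   # [('Finance', 2), ('Ops', 0)]
```

Refresh schedules, data source types, and storage format flags can be layered onto the same inventory from the corresponding admin endpoints.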

Evaluate Direct Lake Readiness

Not every semantic model should be migrated to Direct Lake. Import mode remains valid and well-supported within Fabric. The decision to adopt Direct Lake should be based on: whether data is already managed in a Fabric lakehouse as Delta tables, whether the current Import refresh duration is a business problem, whether the model relies on features that Direct Lake does not support, and whether the target SKU’s guardrails are comfortably above current data volumes. Teams that check all four boxes are good candidates. Teams with heavy DAX calculated column usage or non-Delta data sources should stay on Import until the migration path is clearer.

Right-Size Your F-SKU

The SKU selection decision is consequential because it determines both performance limits and the Pro license threshold. Start by calculating the total monthly cost at F32 (capacity plus Pro licenses for all active viewers) versus F64 (capacity only). For most organizations with roughly 180 or more active report viewers, F64 is the more cost-effective choice despite the higher capacity price. Factor in Fabric burst capacity behavior: F-SKUs allow temporary bursting beyond purchased capacity, which can reduce the need to overprovision.

Plan for Governance

Fabric’s expanded access surface (lakehouses, notebooks, pipelines all visible to workspace members) requires updated governance policies. OneLake data access, sensitivity labels, and workspace role assignments should be reviewed before Fabric is opened broadly. Power BI content governance that was previously managed through the Premium Admin Portal needs to be extended to Fabric workloads as well.

Fabric Workloads Beyond Power BI

Power BI is one of seven workload types in Fabric, and understanding the others is useful even for teams focused primarily on reporting.

Data Engineering and Data Factory

Data Engineering provides Spark notebooks and lakehouses for transforming data into the Delta tables that Power BI’s Direct Lake mode reads from. Data Factory provides 170-plus connectors, Dataflow Gen2, Copy Jobs, and orchestration pipelines. For Power BI teams that previously managed Power Query transformations in Dataflows v1, Dataflow Gen2 is the successor within Fabric with better performance and broader connector support.

Data Science and Real-Time Intelligence

Data Science workloads handle ML model training, experiment tracking, and semantic link (the connection between Python and Power BI semantic models). Copilot’s DAX query generation in Power BI also draws on these capabilities. Real-Time Intelligence covers eventstreams, KQL databases, and Data Activator for alert-driven automation. These workloads are most relevant for Power BI teams building live dashboards on streaming data or incorporating predictive outputs into reports.

How Fabric Affects Power BI Connectivity to SAP and Salesforce

For Power BI users whose reports depend on SAP or Salesforce data, Fabric introduces new integration patterns that are worth understanding alongside the existing connector-based approaches.

SAP Connectivity in Fabric

SAP connectivity in Fabric has expanded significantly. The SAP HANA connector, SAP Table connector, and SAP BW OpenHub connector are all available in Fabric pipelines and Copy Jobs for large-scale data movement. Dataflow Gen2 supports SAP BW and SAP HANA for self-service extraction scenarios.

More significantly, mirroring for SAP via SAP Datasphere reached general availability in March 2026. The mirroring approach combines SAP Datasphere’s replication flows with Fabric’s mirroring engine: Datasphere handles initial snapshot and change data capture from SAP S/4HANA, SAP ECC, SAP BW/4HANA, or other SAP sources, then lands the data into Azure Data Lake Storage Gen2. Fabric mirroring picks it up from there, continuously merging changes into OneLake. The result is near-real-time SAP data in Delta Parquet format, accessible by Power BI through Direct Lake mode without a traditional ETL pipeline.

Bi-directional zero-copy sharing between SAP Business Data Cloud and Microsoft Fabric also became available, allowing SAP data products to appear in OneLake without physical data movement.

For Power BI teams currently using the Power BI Connector for SAP to connect semantic models directly to SAP systems, the Fabric mirroring path represents a different architectural choice: landing SAP data into OneLake first and using Direct Lake or Import mode from there, rather than live DirectQuery into SAP. The right approach depends on latency requirements, data volume, and whether the SAP landscape includes Datasphere.

Salesforce Data in Fabric

Salesforce connectivity in Fabric runs through Dataflow Gen2, which supports the Salesforce objects connector for extraction into lakehouses. The Modern Evaluator in Fabric Data Factory, updated in February 2026 to support 80-plus connectors, includes Salesforce. Authentication uses OAuth 2.0 (organizational account), and the connector supports both Copy activity in pipelines and full or append loads via Copy Jobs.

The practical use case for most Power BI teams is extracting Salesforce data into a Fabric lakehouse on a scheduled basis, then building semantic models on top of those Delta tables using Import or Direct Lake mode. This approach provides better performance than live DirectQuery against Salesforce and allows data from multiple sources to be joined at the lakehouse layer. We covered the full range of Salesforce-to-Power BI integration patterns, including when a dedicated connector makes more sense than a lakehouse staging approach, in the Power BI Salesforce integration methods comparison on the Metrica blog.

FAQ: Microsoft Fabric for Power BI Users

Does moving to Fabric break existing Power BI reports?

No. Existing Power BI reports, dashboards, and semantic models continue to function after workspace migration to Fabric capacity. Import mode and DirectQuery mode are fully supported. The report authoring experience in Power BI Desktop is unchanged. Some configurations, particularly scheduled refresh credentials and Large Storage Format models, may need attention during migration, but the reports themselves do not break.

Is Direct Lake mode better than Import mode?

Not unconditionally. Direct Lake eliminates the need for full data refresh cycles and is well-suited for large Delta tables managed in a Fabric lakehouse. Import mode remains the better choice when the semantic model uses calculated columns, calculated tables, or features that Direct Lake doesn’t support, or when the source data isn’t in a Fabric lakehouse. For models where those constraints don’t apply, Direct Lake’s near-Import performance combined with faster framing cycles is a genuine advantage.

What happens if I stay on Import mode in Fabric?

Nothing changes. Import mode works exactly as it did in Power BI Premium. Fabric doesn’t require any semantic model to use Direct Lake. Teams can continue using Import mode indefinitely, with the same scheduled refresh behavior and in-memory processing they had before.

Do all my users need Pro licenses after moving to F64?

No. At F64 and above, users with a viewer role on a workspace can access Power BI content using only a free Fabric license. Content creators who publish, edit, or manage Power BI items still need a Pro or Premium Per User license. Below F64, all consumers require Pro licenses.

What is framing in Direct Lake, and how is it different from refresh?

Framing is a lightweight operation that reads the Delta table metadata log and sets a version timestamp for the semantic model. It takes seconds rather than minutes because no data is moved. After framing, column data is loaded into memory on demand when queries run. A traditional Import refresh, by contrast, extracts all data from the source, processes it, and stores it in memory during the refresh window. Framing gives Direct Lake its freshness advantage without the overhead of a full reload.

Can Direct Lake use data outside of OneLake?

Yes, via OneLake shortcuts. A shortcut creates a reference to external storage (ADLS Gen2, Amazon S3, Dataverse) that appears as a local OneLake path. Direct Lake on SQL endpoints can query tables backed by shortcuts, making it possible to use Direct Lake mode on data that lives outside Fabric without physically copying it to OneLake.

How long do P-SKU customers have to migrate?

P-SKU renewals ended in July 2024. Existing customers continue on their current agreement until it expires, at which point migration to F-SKUs is required. Microsoft provides a 30-day grace period after P-SKU expiration to complete workspace reassignment. Given the planning involved, particularly for cross-region scenarios and Large Storage Format models, starting the assessment process well before the renewal date is strongly advisable.