I have a mirrored database set up in a trial Fabric workspace. The data is coming from an Azure SQL database. Works great. I have a lakehouse in the same workspace and I created a shortcut to the mirrored database in the lakehouse. So all of the mirrored tables now show up under the Files list in the lakehouse. I'm trying to do a very simple read of the data using a notebook and this code:
Hi @dave_fackler,
This is supported, but you need the correct path and the right Fabric/OneLake permissions.
The error text mentions "Storage Blob Data Contributor," but with Fabric OneLake, access is governed by Fabric item/OneLake permissions rather than Azure Storage RBAC. For mirrored databases, make sure your account has Read all OneLake data on the mirrored database (Workspace -> the mirrored database -> Share). That permission explicitly allows accessing the mirrored data files from Spark/OneLake Explorer. Docs: Share your mirrored database & permissions
Mirroring lands your source tables as Delta files in OneLake. You can explore/query them with Spark in notebooks by creating a Lakehouse shortcut to the mirrored database/tables and reading from it. Docs: Explore mirrored data with notebooks - Mirroring overview
# Example: read a mirrored Delta table through a Lakehouse shortcut (ABFS path)
path = "abfss://<workspaceName>@onelake.dfs.fabric.microsoft.com/<lakehouseName>.lakehouse/Files/<shortcutName>/<schema>/<tableName>"
df = spark.read.format("delta").load(path)
display(df)

ABFS path format example (from Microsoft docs): abfss://myWorkspace@onelake.dfs.fabric.microsoft.com/myLakehouse.lakehouse/Files/. Docs: OneLake ABFSS path example
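The long ABFS URI is easy to mistype, so it can help to assemble it from its parts. Below is a hypothetical helper (build_abfss_path is not a Fabric API, just plain string formatting) that follows the path format above:

```python
# Hypothetical helper: assemble a OneLake ABFS path from its parts.
# Plain string formatting only; not part of any Fabric library.
def build_abfss_path(workspace: str, lakehouse: str, *segments: str) -> str:
    base = f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/{lakehouse}.lakehouse/Files"
    return "/".join([base, *segments])

path = build_abfss_path("myWorkspace", "myLakehouse", "myShortcut", "dbo", "MyTable")
print(path)
# abfss://myWorkspace@onelake.dfs.fabric.microsoft.com/myLakehouse.lakehouse/Files/myShortcut/dbo/MyTable
```

You can then pass the resulting string straight to spark.read.format("delta").load(path).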
Tip: ensure the path targets the actual table folder that contains a _delta_log directory (that's the Delta table root). Higher-level folders (schema/database only) won't load as Delta tables. Docs: Delta read-by-path points to the folder containing _delta_log - Delta logs live under _delta_log
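A quick way to sanity-check the target is to list the folder's children and look for _delta_log. In a Fabric notebook you would typically list the abfss path (e.g. with mssparkutils.fs.ls) and apply the same check; the sketch below uses a plain local directory with os.listdir purely to illustrate the idea:

```python
import os
import tempfile

def looks_like_delta_root(folder: str) -> bool:
    """Return True if the folder directly contains a _delta_log child.

    Illustrative only: lists a local path with os.listdir. In a Fabric
    notebook you would list the abfss path instead and look for the
    same _delta_log entry.
    """
    return "_delta_log" in os.listdir(folder)

# Demo with a throwaway local directory standing in for the table root:
with tempfile.TemporaryDirectory() as d:
    os.mkdir(os.path.join(d, "_delta_log"))
    print(looks_like_delta_root(d))  # True
```

If the check is False for the path you built, you are probably pointing at the schema or database folder one level too high.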
If the target is Delta, create the shortcut in the Lakehouse Tables section (not Files). Fabric auto-discovers it as a table so you can read it like any managed table via Spark/SQL/semantic model.
# Read a table-shortcut registered under Tables
df = spark.read.table("dbo.MyShortcutTable") # Spark
# or
spark.sql("SELECT * FROM dbo.MyShortcutTable").show() # SQL

Docs: OneLake shortcuts (Tables vs Files, auto-recognition) - Lakehouse & Delta tables (auto-discovery incl. shortcuts)
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, please mark this as the solution.
Hi @dave_fackler ,
Thank you for reaching out to Microsoft Fabric Community.
Thank you @tayloramy for the prompt response.
I wanted to check if you have had the opportunity to review the information provided and resolve the issue. Please let us know if you need any further assistance. We are happy to help.
Thank you.
Using the abfss path worked. Thank you for the assistance!
Hi @dave_fackler,
Glad I could help. If the problem is resolved, kindly mark the post as the solution so that other forum users can easily find it in the future.
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, please mark this as the solution.