Hey
I have a problem with using NotebookUtils. My design has two workspaces with the following items:
-BRONZE WORKSPACE-
Notebook: Orchestrator - default lakehouse is Bronze
Notebook: Process Bronze - default lakehouse is Bronze
Lakehouse: Bronze
-SILVER WORKSPACE-
Notebook: Process Silver - default lakehouse is Silver
Lakehouse: Silver
From Orchestrator I run these two notebooks, but when Process Silver executes, its default lakehouse seems to be inherited from the calling notebook: when I print the tables in the default lakehouse from the Silver notebook, it shows the Bronze tables.
I didn't know that .run keeps the same default lakehouse, but I can see it makes sense. The default lakehouse is set at Spark session start (you can parameterise it, though).
.run doesn't create a new Spark session; it reuses the existing one ("The notebook being referenced runs on the Spark pool of the notebook that calls this function."), from here:
https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities
What we do is explicitly use the ABFSS path rather than default lakehouses. (We also separate the notebooks/pipelines into a completely separate workspace, so we have to use ABFSS paths to specify lakehouses.)
So spark.read.format('delta').load('abfss://<silverworkspace>@onelake.dfs.fabric.microsoft.com/<silverlakehouse>/Tables/...')
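To keep those explicit paths consistent across notebooks, the ABFSS approach above can be sketched as a small helper that builds the OneLake URI. This is a minimal sketch; the workspace, lakehouse, and table names used below are placeholders, not the poster's actual items:

```python
def onelake_table_path(workspace: str, lakehouse: str, table: str) -> str:
    """Build the ABFSS URI for a Delta table in a Fabric lakehouse.

    Using an explicit path like this means the read is independent of
    whatever default lakehouse the inherited Spark session happens to have.
    """
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}/Tables/{table}"
    )


# Placeholder names for illustration only:
silver_customers = onelake_table_path("SilverWorkspace", "SilverLakehouse", "customers")
print(silver_customers)

# In the Silver notebook you would then load the table explicitly, e.g.:
# df = spark.read.format("delta").load(silver_customers)
```

This way, even though notebookutils.notebook.run reuses the caller's Spark session (and therefore its default lakehouse), every read and write still targets the intended lakehouse.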