mmilosanovic
Advocate IV

Some methods in notebooks do not work when executed from Data Pipelines deployed via SPN

Hi,

 

I have already created a ticket with Microsoft (2504161420001430) and also opened the issue here; it also seems to be related to this issue here.

 

`notebookutils.runtime.context.get("currentWorkspaceName")` works when executed directly from the notebook, but does not work when executed from a Data Pipeline that was deployed to another workspace using a Service Principal via the Fabric Core APIs (Create Item - https://learn.microsoft.com/en-us/rest/api/fabric/core/items/create-item?tabs=HTTP).

If I reopen the notebook with my personal account and rerun it from the Data Pipeline, then it works. This implies that the issue comes from the service principal somehow losing permissions/token and not being able to read the current workspace name (where it is running) from the notebookutils runtime context.

What I noticed is that this issue is also present for other built-in methods in Fabric, for example `notebookutils.lakehouse.get()` and `synapsesql()`. Details can be found here: https://github.com/microsoft/fabric-cicd/issues/202#issuecomment-2797384465. The same approach is followed in the fabric-cicd library, which uses SPN auth + the Fabric Core Create Item APIs (as already mentioned and referenced above).
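For anyone reproducing this, here is a minimal sketch of the deployment path described above: acquiring a token for the SPN and calling the Create Item API. IDs are placeholders and the item definition payload is omitted, so this only illustrates the auth flow, not the full fabric-cicd logic:

import requests
from azure.identity import ClientSecretCredential

# SPN credentials (placeholders)
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<spn-client-id>",
    client_secret="<spn-client-secret>",
)
token = credential.get_token("https://api.fabric.microsoft.com/.default").token

workspace_id = "<target-workspace-id>"
url = f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/items"

# Minimal Create Item body; a real deployment would include the "definition"
# parts with the base64-encoded notebook content.
body = {
    "displayName": "MyNotebook",
    "type": "Notebook",
}

response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
# Items created this way end up owned by the SPN, which is the context in which
# notebookutils.runtime.context.get("currentWorkspaceName") later fails.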

 

Did anyone else experience the same/similar issues?

 

Best regards,

Milos

37 REPLIES
SparkedMy401
Regular Visitor

Yes, I'm having the same issue. I'm using the fabric-cicd library.

annhwallinger
Helper I

We are also having the same issue.

deepakagarwal
Regular Visitor

Having the same issue.

v-tsaipranay
Community Support

Hi @mmilosanovic ,

Thanks for sharing the details and for raising a support ticket.

 

The issue likely stems from the notebook’s runtime context not being fully initialized when triggered via a Service Principal (SPN) using the Fabric Core APIs. This affects methods like notebookutils.runtime.context.get() and others that rely on workspace-level context.

 

In the meantime, please try the following steps:

  • Ensure the SPN has Contributor or higher role at the workspace level.

  • Verify that the workspace is explicitly included in the SPN’s access scope.

  • Test execution using a user-assigned managed identity if supported.

  • As a workaround, consider passing the required context (e.g., workspace name) as parameters to the notebook via the pipeline (see the sketch after this list).
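For the last bullet, here is a minimal sketch of what that fallback could look like inside the notebook; the parameter name workspace_name_param is hypothetical and would be supplied by the pipeline's notebook activity:

# Parameters cell: the value is supplied from the pipeline's notebook activity,
# so the notebook does not depend on the runtime context when run under SPN.
# The parameter name is hypothetical.
workspace_name_param = ""

# Prefer the runtime context when it works (interactive/user runs) and fall
# back to the pipeline-supplied parameter when it does not (SPN runs).
try:
    workspace_name = notebookutils.runtime.context.get("currentWorkspaceName")
except Exception:
    workspace_name = None

workspace_name = workspace_name or workspace_name_param
print(f"Running in workspace: {workspace_name}")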

Please continue monitoring your support case with Microsoft and share any updates or guidance as they become available, so that other community members with similar problems can resolve them faster.

 

Thank you.

g3kuser
Helper I

We have the same issue when using a workspace identity as the executing user. The cluster returns errors just from import statements in the notebook and when applying the %run magic command. Even the notebookutils.lakehouse.getWithProperties method fails with a 403 error. We were able to execute all of our code artefacts successfully with an SPN as the executing user (we generated our own SPN and added a secret to it), whereas deploying the same items using a workspace identity and running through it failed completely. We even created a secret for the workspace identity and tried to use it by initializing the credential class, still with no success.

[Screenshot of the cluster error attached]

 

Thank you for the detail in your response. You said you were able to run as SPN. Do you know if you were able to run these simple lines?

 

import json
import sempy.fabric as fabric
from notebookutils import mssparkutils

# Instantiate the client
client = fabric.FabricRestClient()

# Resolve the workspace ID from the workspace name (placeholder)
workspaceId = fabric.resolve_workspace_id("<workspace name here>")

It works by explicitly creating a Service Principal token implementation and passing that to FabricRestClient. Here is a blog that helped me with this implementation:

 

https://fabric.guru/using-service-principal-authentication-with-fabricrestclient
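Roughly, the pattern from that blog looks like the sketch below. The credentials are placeholders, and the exact token-provider interface FabricRestClient expects may differ slightly between sempy versions, so treat this as an outline and check the blog post for the working implementation:

import sempy.fabric as fabric
from azure.identity import ClientSecretCredential

# SPN credentials (placeholders)
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<spn-client-id>",
    client_secret="<spn-client-secret>",
)

def spn_token_provider(audience: str = "pbi") -> str:
    # Return an access token for the Fabric/Power BI API acquired with the SPN.
    # sempy passes an audience hint; here we only handle the Fabric API scope.
    return credential.get_token("https://api.fabric.microsoft.com/.default").token

# Explicitly pass the provider so FabricRestClient calls run as the SPN rather
# than relying on the ambient notebook identity (which breaks under SPN runs).
client = fabric.FabricRestClient(token_provider=spn_token_provider)
response = client.get("/v1/workspaces")
print(response.status_code)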

annhwallinger
Helper I

We have also raised a support request for the same issue.

v-tsaipranay
Community Support

Hi @mmilosanovic ,

 

Could you please confirm if the issue has been resolved after raising a support case? If a solution has been found, it would be greatly appreciated if you could share your insights with the community. This would be helpful for other members who may encounter similar issues.

 

Thank you for your understanding and assistance.

Hi @v-tsaipranay, not resolved yet. Still going back and forth with MS support. The last thing I got is this: "I have consulted with the notebook team, and they have confirmed that it is a known issue."

I have asked if there is a workaround and got the following feedback:

  • "It has been noted that there are issues with executing notebooks via Service Principal (SPN) authentication, particularly with certain functions such as notebookutils. The internal team is actively investigating this matter to identify any limitations or configuration issues that may be causing these failures, and they will share any updates they have over the link.
  • Also, I've informed Santhiya that by 20th May, Pipeline product team are releasing connection experience where users can create SPN connection via it."

 

So I guess we all monitor the progress here: https://github.com/microsoft/fabric-cicd/issues/248, and also keep an eye out for the connection experience updates based on the second bullet above.

 

AlijH
Advocate I

My team is also experiencing the same issue, but when running notebooks called via the API, e.g. making a request like this:

https://api.fabric.microsoft.com/v1/workspaces/<workspace_id>/items/<notebook_id>/jobs/instances?jobType=RunNotebook

from a service principal (in our case, the managed identity of an ADF factory).

I imagine the Fabric Pipeline is doing the same API call under the hood.
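For reference, here is a minimal sketch of that call in Python (placeholder IDs; in our case the token actually comes from the ADF managed identity, so this is just an illustration of the request):

import requests
from azure.identity import DefaultAzureCredential

# DefaultAzureCredential resolves to a managed identity when one is available;
# IDs below are placeholders.
token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token

workspace_id = "<workspace_id>"
notebook_id = "<notebook_id>"
url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
    f"/items/{notebook_id}/jobs/instances?jobType=RunNotebook"
)

# executionData can also carry notebook parameters, which is one way to hand the
# notebook its context explicitly instead of relying on notebookutils.
response = requests.post(
    url,
    json={"executionData": {"parameters": {}}},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()  # the job instances API returns 202 Accepted when the run is queued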

As notebookutils.runtime.context.get("currentWorkspaceId", "") is working and we only have two workspaces at the moment (dev and prd), as a temporary workaround we have hardcoded environment-specific values, keyed on the workspace ID, into the custom package that we use to manage ELT.

Still deciding whether or not we want to move these hardcoded environment-specific values to a JSON file managed via CI/CD, or swap back to notebookutils once the current issues are resolved. It would be nice to have fewer moving parts, but equally nice to have less reliance on MS-managed bits and bobs that haven't proved completely reliable yet.
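A trimmed-down sketch of that temporary workaround, with made-up workspace IDs and settings (the real values live in our custom package):

# notebookutils.runtime.context.get("currentWorkspaceId") still works under SPN,
# so environment-specific settings are keyed on the workspace ID.
# Workspace IDs and settings below are placeholders.
ENV_CONFIG = {
    "11111111-1111-1111-1111-111111111111": {"env": "dev", "storage_account": "stdatadev"},
    "22222222-2222-2222-2222-222222222222": {"env": "prd", "storage_account": "stdataprd"},
}

workspace_id = notebookutils.runtime.context.get("currentWorkspaceId", "")
config = ENV_CONFIG.get(workspace_id)
if config is None:
    raise ValueError(f"No environment config registered for workspace '{workspace_id}'")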

We have also observed the following log output when running a notebook via Service Principal that imports papermill (we have a development workflow where we can build out data modelling locally and run it via local Spark installs before using CI/CD to push it up to Fabric; we use papermill when running locally in place of notebookutils.notebook calls). This one is easier to tidy up; we just had to make sure we don't import papermill when running in a remote context, but it might help whoever is looking into this.

Failed to fetch cluster details
Traceback (most recent call last):
  File "/home/trusted-service-user/cluster-env/clonedenv/lib/python3.11/site-packages/synapse/ml/fabric/service_discovery.py", line 110, in get_mlflow_shared_host
    raise Exception(
Exception: Fetch cluster details returns 401:b''

Fetch cluster details returns 401:b''
Traceback (most recent call last):
  File "/home/trusted-service-user/cluster-env/clonedenv/lib/python3.11/site-packages/synapse/ml/fabric/service_discovery.py", line 152, in set_envs
    set_fabric_env_config(builder.fetch_fabric_client_param(with_tokens=False))
  File "/home/trusted-service-user/cluster-env/clonedenv/lib/python3.11/site-packages/synapse/ml/fabric/service_discovery.py", line 72, in fetch_fabric_client_param
    shared_host = get_fabric_context().get("trident.aiskill.shared_host") or self.get_mlflow_shared_host(pbienv)
  File "/home/trusted-service-user/cluster-env/clonedenv/lib/python3.11/site-packages/synapse/ml/fabric/service_discovery.py", line 110, in get_mlflow_shared_host
    raise Exception(
Exception: Fetch cluster details returns 401:b''
## Not In PBI Synapse Platform ##
mmilosanovic
Advocate IV

Just to update everyone based on the most recent discussion with Microsoft support on the support case: they have pointed me to the Fabric Community forums (so here, where we already started the discussion and reported this issue) and to the Ideas section, so I also created an idea referencing this thread, pointing out that this is a known issue, and asking for a fix and/or a workaround.

Please vote here: https://community.fabric.microsoft.com/t5/Fabric-Ideas/Enable-notebookutils-and-other-methods-to-wor...

Hi @mmilosanovic ,

Thank you for the update and for continuing to raise visibility across the support channels and community.

 

As acknowledged internally, this behavior is a known limitation when executing notebooks via Service Principal authentication, where certain methods like notebookutils may not function as expected due to context initialization issues. The product team is actively reviewing the scenario, and we anticipate future improvements to address these limitations.

 

Your engagement through the Ideas forum is appreciated and will help with prioritization. We’ll continue to monitor this and share any updates or guidance as they become available.

 

Also, please review the information provided by @gaya3krishnan86, which might help you.

 

If this post helps, then please give us Kudos and consider accepting it as a solution to help other members find it more quickly.

 

Thank you.

Hi everyone,

 

I just got feedback from support that the Notebook PG team has fixed the issue. I tested again and it seems to work now. I only tested `notebookutils.runtime.context.get("currentWorkspaceName")`, so I am not sure whether this also fixes all the notebookutils methods or the synapsesql() methods.

 

Can someone else also give it a try and share findings here?

 

Best regards,

Milos

Hi @mmilosanovic ,

 

Thank you for following up and sharing that the issue was resolved through Microsoft Support. We're glad to hear that your concern has been addressed successfully.

Please mark your answer as accepted. Your input is valuable, and sharing the outcome helps others in the community facing similar issues.

If you have any further questions or need assistance in the future, please don’t hesitate to create a new post on the Microsoft Fabric Community Forum; we’re always here to help.

 

Thank you.

 

Hi @v-tsaipranay, this is still not resolved and ongoing with MS support, specifically around other methods (the issue seems to be the same, with authorization) like synapsesql().

Hi @mmilosanovic ,

 

Thank you for your confirmation and patience. The Microsoft team is actively looking into the issue, including related methods like synapseSQL(), which seem to face the same authorization behavior.

 

Thank you.

synapsesql() still doesn't work, and per Microsoft yesterday it is not supported with SPN at this time, with no date for when it will be, so I guess it is not a real tool yet.

gronnerup
Advocate I

You're absolutely right - this is a known issue when running notebooks in the context of a Service Principal, especially when using notebookutils.runtime.context or mssparkutils.env. I ran into the same thing recently and did a deep dive into how execution context really works in Fabric.

If you're interested, I wrote up my findings (including this bug and a workaround) in this blog post:
Who's Calling? Understanding Execution Context in Microsoft Fabric

Your workaround was so helpful, thank you. I don't suppose you have a workaround for synapsesql()?

v-tsaipranay
Community Support

Hi @mmilosanovic ,

 

Thank you for your patience and understanding. Our CSS engineers are actively working on the issue, and since it is currently in progress, we expect it to be resolved soon. We appreciate your cooperation and will keep you informed with any updates.

 

Thank you.

Can you elaborate on 'soon' for those of us trying to control project schedules? This is a BUG, not an IDEA. I don't see a workaround for several of the calls, including the synapsesql() function.

gaya3krishnan86
Frequent Visitor

I have noticed today that under SPN ownership it can no longer run spark.sql and returns an MWC token error, whereas most of the other library methods from sempy and notebookutils are working; even though they give some cluster issues, the process continues to run successfully.

v-tsaipranay
Community Support

Hi @mmilosanovic ,

 

Could you please confirm if the issue has been resolved through the support ticket with Microsoft?

If the issue has been resolved, we kindly request you to share the resolution or key insights here to help others in the community. If we don’t hear back, we’ll go ahead and close this thread.

Should you need further assistance in the future, we encourage you to reach out via the Microsoft Fabric Community Forum and create a new thread. We’ll be happy to help.

 

Thank you.

It has not been resolved.

Not resolved yet; support is going back and forth with the Notebook team and I am awaiting feedback.

Hi everyone,

 

I have recently identified the same issue when invoking a notebook that contains the `notebookutils.notebook.runMultiple()` method via the execute item APIs with an SPN. I have opened another support case and asked if it can be prioritized together with the one I still have open, because it is related to the same issue. Support case ID: 2507011420003367

 

Here is the error message:

[Screenshot of the runMultiple method error from the MS support ticket]
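For context, a minimal sketch of the kind of call that fails in this scenario (the child notebook names are placeholders):

# notebookutils is available by default in Fabric notebooks.
# Runs the listed child notebooks from the parent notebook; this works when the
# parent is run interactively, but fails when the parent is triggered via the
# execute item APIs under SPN, as described above.
results = notebookutils.notebook.runMultiple(["Load_Customers", "Load_Orders"])
print(results)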

 

 

Hi @mmilosanovic ,

 

Thank you for sharing the update and tracker ID. This case has already been escalated to the PG team, who are currently working on it. We appreciate your patience and they will resolve the issue as soon as possible.

Hi, it seems that there was an issue that was misleading. More details from MS support are available below:

[Screenshot of the clarification from MS support attached]

 

Hi @mmilosanovic 

I have an ingestion process using pipelines and notebooks deployed through the fabric-cicd library under SPN ownership. I have heavily used the runMultiple method and it works fine under SPN context. I previously used to get some cluster errors with the sempy library under SPN, which have disappeared starting from this week. The SPN that I use is not a workspace identity but a separate SPN created in our tenant with Contributor permission on Fabric workspaces and read-only Fabric API access. Maybe try invoking the notebook through a Data Factory pipeline and see if that works.

Thanks,

 

Gayatri

BhaveshPatel
Community Champion

Hello

Thanks & Regards,
Bhavesh

Love the Self Service BI.
Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to give Kudos.

Hi @BhaveshPatel, sorry, I do not understand your answer and do not know how it is related to the topic. This thread is about using an SPN to deploy to multi-environment setups, and as soon as the SPN takes over ownership it seems that there are some authorization issues coming from the backend.

annhwallinger
Helper I

Microsoft responded to my case yesterday indicating that synapsesql() does not yet support SPN and there is no deployment date for when it will. 

I just received the same response, but also that the product team has started discussions with Product Management, although they do not have any ETA. They are pointing this out here:

 

https://learn.microsoft.com/en-us/fabric/data-warehouse/service-principals#limitations 

Service principal support in Data Factory - Microsoft Fabric | Microsoft Learn

My ingestion process, which uses pipelines and notebooks, is deployed through the Fabric CI/CD library under SPN ownership. I have extensively used notebookutils (such as GetToken, Mount, getSecret, getWithProperties, updateDefinition, and env methods), sempy (using FabricRestClient to read the list of workspaces, lakehouses, and create shortcuts), and MSAL libraries.
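For anyone comparing setups, here is a small sketch of the kind of notebookutils calls referenced above that now succeed under the SPN; the audience key, Key Vault URL, secret name, and lakehouse name are placeholders:

# notebookutils is available by default in Fabric notebooks.

# Token for the executing identity (the SPN in this setup); "pbi" is one of the
# supported audience keys.
token = notebookutils.credentials.getToken("pbi")

# Secret lookup against Azure Key Vault with the executing identity.
secret = notebookutils.credentials.getSecret(
    "https://<keyvault-name>.vault.azure.net/", "<secret-name>"
)

# Lakehouse lookup (the call reported earlier in the thread as returning 403
# under a workspace identity).
lakehouse = notebookutils.lakehouse.getWithProperties("<lakehouse-name>")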

Until last week, I was encountering cluster errors related to the sempy library and the execution of the spark.sql method under the SPN. However, these issues have disappeared as of this week. The process has been running successfully under the SPN for the past four days.

The SPN I am using is not a workspace identity, but a separate SPN created in our tenant with Contributor permissions on Fabric workspaces and read-only Fabric API access.

I have an ongoing support request for the issues I encountered under the SPN. Based on diagnostic logs, I was initially informed it was a permission issue. However, something seems to have changed behind the scenes this week—despite no changes made to permissions on my end, recent retests show successful execution.


Thanks,

Gayatri

Thank you for the info. That all sounds pretty much the same as my experience, except that we use the synapsesql() function, and that still does not work and does not support SPN per Microsoft, so it all fails at that step and cannot move forward. The other functions that used to fail before it gets to that one are working now.

KevinChant
Super User

I have had similar issues; typically they are due to things like the default lakehouse that is specified.

 

Look to change what is specified with dynamic replacement when using fabric-cicd:
https://microsoft.github.io/fabric-cicd/latest/how_to/parameterization/#inputs 
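In case it helps to see where that fits, here is a rough sketch of a publish call with fabric-cicd. It assumes the library's FabricWorkspace / publish_all_items entry points and that a parameter.yml with find_replace rules (for example, swapping default lakehouse IDs per environment) sits in the repository directory; the environment value and IDs are placeholders, so check the linked docs for the exact parameterization behaviour:

from fabric_cicd import FabricWorkspace, publish_all_items

# The repository directory is expected to contain the exported item definitions
# plus a parameter.yml whose find_replace rules rewrite environment-specific
# values (such as the default lakehouse ID) during publish. All values below
# are placeholders.
target_workspace = FabricWorkspace(
    workspace_id="<target-workspace-id>",
    repository_directory="<path-to-workspace-items>",
    item_type_in_scope=["Notebook", "DataPipeline"],
    environment="PROD",  # assumption: selects which parameter.yml values apply
)

publish_all_items(target_workspace)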
