Ayush05-gateway
New Contributor III

mssparkutils.fabric.sql not working anymore. Has there been a change in notebookutils?

I am using this utility to write directly to the Fabric warehouse, skipping the extra step of ingesting into a lakehouse first and then writing to the warehouse.

 


6 REPLIES
tayloramy
Contributor

Hi @Ayush05-gateway

This library has been renamed to NotebookUtils. NotebookUtils (former MSSparkUtils) for Fabric - Microsoft Fabric | Microsoft Learn

 

Please try using NotebookUtils instead of mssparkutils. 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.  

Hi @tayloramy 

 

I am using the statement below:

 

from notebookutils import mssparkutils
df = mssparkutils.fabric.sql("<DWH name>", sql_query)
 
but this is not working. It was working 2 weeks ago.
 
Is there documentation I can refer to? I have already looked at the above link; there is no fabric or sql method.
 

Adding to the above: I looked at notebookutils.data and notebookutils.warehouse, but neither has any function to query the warehouse.

Hi @Ayush05-gateway 

Thanks for the extra detail. The exact call you are using:

from notebookutils import mssparkutils
df = mssparkutils.fabric.sql("<DWH name>", sql_query)

is not part of the documented API. Microsoft has been moving from the old mssparkutils namespace to notebookutils. The rename is official and backward compatibility exists for many helpers, but there is no published fabric.sql method. See the rename notice and current surface here: NotebookUtils (former MSSparkUtils) for Fabric and the companion note on the MSSparkUtils page. Undocumented members can change between runtime updates, which likely explains why the call worked 2 weeks ago and now fails.
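Before relying on any helper, you can check what your runtime actually exposes from a notebook cell. This is a quick sanity check rather than an official API tour; dir() is plain Python, and notebookutils.data.help() is the documented helper covered in Option 1 below (Python notebooks only):

import notebookutils

# List the members this runtime actually exposes. An undocumented helper
# such as fabric.sql may simply be absent after a runtime update.
print([m for m in dir(notebookutils) if not m.startswith("_")])

# In a Python notebook, the documented data utilities print their own usage.
notebookutils.data.help()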

Supported ways to query or write a Warehouse from a notebook

Option 1 - Python notebook: connect and query with notebookutils.data

The official Python-notebook experience includes data utilities to connect to a Warehouse and run T-SQL. Microsoft documents this workflow here: Use Python experience on Notebook (see the sections on Warehouse interaction and notebookutils.data; they also show the notebookutils.data.help() output and the connect_to_artifact function).

import notebookutils as nbu

# Connect by name or ID (optionally pass workspace ID and artifact type)
conn = nbu.data.connect_to_artifact("<Warehouse name or ID>")
df = conn.query("SELECT TOP 10 * FROM dbo.YourTable;")
display(df)

Docs: Python notebook data utilities.
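If the name alone is ambiguous across workspaces, the doc note above says you can optionally pass a workspace ID and artifact type. The parameter order in this sketch is an assumption based on that note; verify it with notebookutils.data.help() before depending on it:

import notebookutils as nbu

# Assumed parameter order (artifact, workspace, type) - check
# nbu.data.help() for the exact signature on your runtime.
conn = nbu.data.connect_to_artifact(
    "<Warehouse name or ID>",
    "<workspace ID>",
    "Warehouse",
)
df = conn.query("SELECT TOP 10 * FROM dbo.YourTable;")
display(df)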

Option 2 - Python notebook: use the %%tsql cell magic

You can execute T-SQL directly in a Python notebook cell and bind to a specific Warehouse.

%%tsql --dw "<Warehouse name>"
SELECT COUNT(*) AS rows_in_table FROM dbo.YourTable;

Docs: Run T-SQL code in Fabric Python notebooks and Connect to Fabric Data Warehouse.
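Since your original goal was to write to the Warehouse rather than just read from it, note that the cell magic runs any T-SQL the Warehouse accepts, including DML. A minimal sketch reusing the --dw binding shown above (the table names are placeholders):

%%tsql --dw "<Warehouse name>"
-- Placeholder tables: copy staged rows into the target in one statement.
INSERT INTO dbo.TargetTable (id, name)
SELECT id, name
FROM dbo.StagingTable;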

Option 3 - PySpark notebook: Fabric Spark connector for Warehouse

For Spark notebooks, use the Warehouse connector. It supports reading and, with current GA runtimes, writing via a two-phase process that uses COPY INTO under the hood.

# Read from a Warehouse table
df = spark.read.synapsesql("MyWarehouse.dbo.SourceTable")

# Write a Spark DataFrame to a Warehouse table
df_to_write.write.mode("append").synapsesql("MyWarehouse.dbo.TargetTable")

Docs: Spark connector for Microsoft Fabric Data Warehouse.
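For completeness, here is a minimal end-to-end sketch of the write path with a DataFrame built in place. The Warehouse and table names are placeholders:

from pyspark.sql import Row

# Build a tiny DataFrame to write (placeholder schema).
df_to_write = spark.createDataFrame(
    [Row(id=1, name="alpha"), Row(id=2, name="beta")]
)

# Append via the connector's two-phase COPY INTO write described above.
df_to_write.write.mode("append").synapsesql("MyWarehouse.dbo.TargetTable")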

Why you did not find fabric.sql in docs

  • The supported NotebookUtils areas cover files, environment, chaining notebooks, secrets, and, in Python notebooks, notebookutils.data for Warehouse connections. There is no published mssparkutils.fabric.sql or notebookutils.fabric.sql.
  • The rename to NotebookUtils is official; new features land under notebookutils while the mssparkutils namespace will be retired. 
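If you have existing notebooks that still call the undocumented member, a defensive guard avoids a hard failure when a runtime update removes it. A sketch reusing your sql_query variable, assuming a Python notebook so the documented fallback from Option 1 is available:

import notebookutils

# Probe for the undocumented member instead of calling it blindly.
fabric_ns = getattr(notebookutils, "fabric", None)
if fabric_ns is not None and hasattr(fabric_ns, "sql"):
    df = fabric_ns.sql("<DWH name>", sql_query)
else:
    # Fall back to the documented Python-notebook path (Option 1 above).
    conn = notebookutils.data.connect_to_artifact("<DWH name>")
    df = conn.query(sql_query)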

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, please mark this as the solution.

Ayush05-gateway
New Contributor III

Thank you, @tayloramy.

I was using PySpark mode; I changed to Python mode and got it working.

Glad to hear it! 
Happy I could help. 
