Within a notebook I would like to retrieve an active variable value from a variable library. It is clear how to update the parameter from a Data Pipeline (as depicted below), but I cannot find a Python package and method to do this independently in the notebook.
[Screenshot: updating the parameter from a Data Pipeline works fine]
But I would like to do something like this in the notebook:
[Screenshot: desired notebook code]
PS: I have seen the REST API method and will use that if necessary: Items - Get Variable Library - REST API (VariableLibrary) | Microsoft Learn
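For what it's worth, the REST route mentioned above can be exercised without a pipeline. A minimal sketch that only builds the request (the endpoint path, header, and GUID placeholders are assumptions based on the usual Fabric REST API naming pattern; verify them against the linked Microsoft Learn page):

```python
# Hypothetical sketch of calling "Items - Get Variable Library".
# The URL shape and bearer-token header follow the common Fabric REST
# pattern; check the Microsoft Learn reference for the real contract.
def build_get_variable_library_request(workspace_id: str, library_id: str, token: str):
    url = (
        f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
        f"/VariableLibraries/{library_id}"
    )
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = build_get_variable_library_request(
    "my-workspace-guid", "my-library-guid", "<token>"
)
print(url)
# The response would then be fetched with e.g. requests.get(url, headers=headers)
```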
In your Notebook, declare the parameter like this (example in PySpark):
dbutils.widgets.text("my_param", "")
my_param_value = dbutils.widgets.get("my_param")
print(f"The pipeline library variable is: {my_param_value}")
Or, in a regular Python cell if not using Spark:
import sys
# Parameters are usually passed as command line args after the script name
# e.g., sys.argv[1], sys.argv[2], etc.
print("Notebook parameters:", sys.argv)
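A note on Fabric specifics: dbutils is the Databricks API. In a Fabric notebook, values passed from a pipeline's notebook activity typically arrive through a cell toggled as the parameters cell, where each passed parameter overrides a plain Python variable. A minimal sketch (the parameter name my_param and its default are hypothetical):

```python
# "Parameters" cell: toggle this cell as the parameters cell in the
# Fabric notebook UI. The default below is used for interactive runs;
# a pipeline notebook activity that passes my_param as a base parameter
# overrides this value at run time.
my_param = "default-value"  # hypothetical parameter name and default

# Any later cell simply reads the variable.
print(f"The pipeline library variable is: {my_param}")
```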
Hi @PhilPrentiss ,
As of now, the new Microsoft Fabric Variable Libraries do not support direct access from notebooks. This means you cannot retrieve variable values directly using Python code or a built-in SDK method inside a standalone notebook.
The current supported usage is within Data Pipelines, where you can reference variables from the variable library and then pass them as parameters into a notebook activity. Inside the notebook, you can access those parameters using methods like dbutils.widgets.get() in PySpark or sys.argv in standard Python.
So, while a direct method like get_variable() may seem logical, it isn't available yet. The only reliable workaround is to use a pipeline to pass variable values into your notebook. Microsoft may add direct notebook support in future updates, but for now, this pipeline-based method is the correct and supported approach.
If this solution worked for you, kindly mark it as Accepted Solution and feel free to give Kudos; it would be much appreciated!
Thank you.
There is now a way to refer to variables in notebooks.
If you have a variable library named My_Variable_Library_Name, and a variable named "LakehouseId", then to get it in a PySpark cell you would do the following:
# Load the variable library's active value set, then read a variable by name
VariableLib = notebookutils.variableLibrary.getLibrary("My_Variable_Library_Name")
value = VariableLib.LakehouseId
print(value)
I found this here: Making Notebooks Smarter in Microsoft Fabric with Variable Libraries | by Kapil Kulshrestha | Medium
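Once VariableLib.LakehouseId is in hand, a common use is building a OneLake path from it. A sketch with hypothetical GUIDs standing in for the values the library would return (the table name my_table is also a placeholder):

```python
# Stand-ins for values that would come from the variable library
# and the notebook context.
workspace_id = "00000000-0000-0000-0000-000000000000"  # hypothetical
lakehouse_id = "11111111-1111-1111-1111-111111111111"  # would be VariableLib.LakehouseId

# OneLake ABFSS path for a table inside that lakehouse.
table_path = (
    f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com"
    f"/{lakehouse_id}/Tables/my_table"
)
print(table_path)
```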
Also, if you want to assign a notebook to a specific lakehouse, you can do that with variable library entries like this:
%%configure
{
    "defaultLakehouse": {
        "name": {
            "variableName": "$(/**/myVL/LHname)"
        },
        "id": {
            "variableName": "$(/**/myVL/LHid)"
        },
        "workspaceId": "<(optional) workspace-id-that-contains-the-lakehouse>"
    }
}
where the name of the variable library is myVL, and the variables holding the lakehouse name and id are LHname and LHid, respectively.
More information is at Develop, execute, and manage notebooks - Microsoft Fabric | Microsoft Learn.
Thanks.
(Replying to the dbutils answer above:) I'm looking for a solution specific to the new Fabric variable libraries.