abhidotnet
New Contributor III

How can I connect to a SQL Mirror Database?

I'm working in a workspace in Microsoft Fabric where I've set up a notebook intended to connect to a SQL Mirror Database hosted in the same Fabric environment. However, when I try using the "Connect to Source" option in the notebook, the database doesn't appear in the list; only Lakehouses from other workspaces show up.


Is there a recommended way to connect a Spark notebook to a SQL Mirror Database in Fabric?
Am I missing a configuration step, or is there a specific connection string or method I should use manually?


Appreciate any guidance or examples from those who've set this up successfully!




13 Replies
ToddChitt
Honored Contributor

This may not be a real solution to your issue, but WHY do you want to connect a Notebook to a Mirrored SQL database? Typically, you use Notebooks to collect and process non-tabular data sources (Excel, JSON, XML, etc.) and from there dump them into tabular destinations (read: tables).

If data is ALREADY in tables in the mirrored database, why not use T-SQL to access and process it?




Did I answer your question? If so, mark my post as a solution. Also consider helping someone else in the forums!

Proud to be a Super User!





Element115
Valued Contributor II

because you can do things more easily and in a saner way in Python than with the antediluvian 4GL that SQL is. That would be my reason #1.

 

#2 you've got a bunch of libs in Python for data science and data engineering that do not exist in the SQL world.

 

and on and on it goes.

abhidotnet
New Contributor III

I need to create a report, dump it into a formatted Excel file, and send that file over to Azure Blob Storage. I understand there are a number of ways to skin this cat, but there are existing libraries in Python that do this pretty easily, and I want to leverage the code we already have.
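For what it's worth, a minimal sketch of that kind of flow, using pandas for the Excel file and the azure-storage-blob package for the upload; the connection string, container, and file names below are placeholders, not values from this thread:

import pandas as pd
from azure.storage.blob import BlobServiceClient

# Build the report and write it to a formatted Excel file
# (pandas writes .xlsx via openpyxl).
df = pd.DataFrame({"region": ["East", "West"], "sales": [100, 200]})
df.to_excel("report.xlsx", sheet_name="Report", index=False)

# Upload the file to Azure Blob Storage.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="reports", blob="report.xlsx")
with open("report.xlsx", "rb") as f:
    blob.upload_blob(f, overwrite=True)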

v-priyankata
Honored Contributor

Hi @abhidotnet 

Thank you for reaching out to the Microsoft Fabric Forum Community.
@ToddChitt Thanks for your valuable inputs.
It's also worth considering the suggestions already shared; we can try those. Adding another point below.

The Connect to Data option in Fabric notebooks only shows Lakehouses and KQL databases; SQL mirrored databases are not listed there, even if they are in the same workspace. To work around this, you can create a Lakehouse and add a OneLake shortcut that points to the mirrored SQL database. Please refer to the document below.
Explore Data in Your Mirrored Database With Notebooks - Microsoft Fabric | Microsoft Learn


Please feel free to reach out here if you need further assistance.
Thanks.

Hi @abhidotnet 

I hope the information provided was helpful. If you still have questions, please don't hesitate to reach out to the community.

 

The information was helpful. There is one caveat: the shortcut to the mirror does not list the views created in the mirror. I can probably work around the views, but I also have a separate requirement that involves firing a stored procedure.

BhaveshPatel
Honored Contributor

Hi @abhidotnet 

 

Rather than writing Python or Scala code in a notebook, you could use the UI in Dataflow Gen 2 with the mirrored SQL database. (Click, click and click, and you are done.)

[screenshots: selecting the mirrored SQL database as a source in Dataflow Gen 2]

 

 

Thanks & Regards,
Bhavesh

Love the Self Service BI.
Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to give Kudos.
Element115
Valued Contributor II

Careful, though, with DFg2: it is a lot more expensive in terms of CUs (capacity units) than doing the same with either a SQL script or a PySpark notebook in a pipeline.

v-priyankata
Honored Contributor

Hi @abhidotnet 

I wanted to check if you had a chance to review the information provided above. Please feel free to contact us if you have any further questions.

Hi @abhidotnet 

Hope everything's going smoothly on your end. We haven't heard back from you, so I wanted to check if the issue got sorted. If you have any other issues, please reach out to the community.

 

I was on vacation. I am going to explore this option of creating a lakehouse and a shortcut.

apturlov
New Contributor III

@abhidotnet If you need to use a notebook with your mirrored SQL database, there is a way. I can't tell you if it's a recommended way, but it works.

First, you need a Lakehouse that you will use with your notebook. In that Lakehouse you create a shortcut to your mirrored SQL database and choose your mirror from Internal sources/OneLake.

When your shortcut is created, it will appear under Files/[Shortcut name] in the Lakehouse, and you will see all the tables you picked for the shortcut presented as Delta tables.

[screenshot: shortcut tables listed under Files in the Lakehouse]

Now you can go to your notebook and load tables from the shortcut into a dataframe using delta format:

[screenshot: notebook cell loading a shortcut table into a DataFrame with the delta format]
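In text form, a minimal sketch of such a cell, assuming a default Lakehouse is attached to the notebook; the shortcut name "MyMirrorShortcut" and table path "dbo/Customers" are placeholders, and the exact folder layout under the shortcut may differ:

# Read one of the shortcut's tables as a Delta table into a Spark DataFrame.
df = spark.read.format("delta").load("Files/MyMirrorShortcut/dbo/Customers")
df.show(5)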

Hope this helps.

 

abhidotnet
New Contributor III

I have solved this issue by creating a User Data Function. The process is explained in the Microsoft documentation:

Python programming model for Fabric User data functions (Preview) - Microsoft Fabric | Microsoft Lea...

This allows me to make the connection and fetch query results. I am able to get views as well as tables.
As of now, I haven't figured out how to execute stored procedures; that is next in line.

My function looks like this:

import pandas as pd
import fabric.functions as fn

udf = fn.UserDataFunctions()

# Select 'Manage connections' and add a connection to a Fabric SQL database,
# then replace the alias below with your connection alias.
@udf.connection(argName="sqlDB", alias="MYMIRRORDATABASE")
@udf.function()
def read_from_sql_db(sqlDB: fn.FabricSqlConnection, query: str) -> pd.DataFrame:
    '''
    Read data from the mirrored database using the input query.

    Args:
        sqlDB (fn.FabricSqlConnection): Fabric SQL database connection.
        query (str): T-SQL query to execute.

    Returns:
        pd.DataFrame: Query result.
    '''
    # Establish a connection to the SQL database
    connection = sqlDB.connect()

    df = pd.read_sql(query, connection)
    print(df.head())
    return df
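For the stored procedure part that is still open, a sketch of one way it might work, assuming the object returned by sqlDB.connect() exposes a pyodbc-style cursor (the read example above suggests it does, but I haven't verified procedure execution); the procedure name is a placeholder:

@udf.connection(argName="sqlDB", alias="MYMIRRORDATABASE")
@udf.function()
def run_stored_proc(sqlDB: fn.FabricSqlConnection) -> str:
    # Assumes a pyodbc-style connection; "dbo.RefreshReport" is a placeholder name.
    connection = sqlDB.connect()
    cursor = connection.cursor()
    cursor.execute("EXEC dbo.RefreshReport")
    connection.commit()
    return "Executed dbo.RefreshReport"

If the mirror's SQL analytics endpoint does not allow executing procedures, the same pattern should still work against a regular Fabric SQL database connection.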



