pmscorca
Valued Contributor

Exporting the result of a SQL query on a lakehouse

Hi,

I need to query a lakehouse through its SQL analytics endpoint and then export the resulting dataset in Excel or CSV format.

I've tried the Excel feature available after query execution, but it downloads a file containing a query connection rather than the resulting dataset. I think that exporting the resulting data, not just the query, would be very useful! I expected more from this Excel feature. Is this the intended behaviour?

Is there another way to query a lakehouse in SQL and deliver the resulting dataset automatically? Thanks

 

1 ACCEPTED SOLUTION
v-saisrao-msft
Honored Contributor II

Hi @pmscorca,

Thank you @AntoineW @BhaveshPatel for your insights.
The behavior you're experiencing is expected. When you use "Open in Excel" from a Fabric SQL analytics endpoint, it creates a live query connection instead of a static dataset. The Excel file will look empty until you enable content and run the query.

Below is the link for your reference:
Query Using the SQL Query Editor - Microsoft Fabric | Microsoft Learn

 

Thank you.


7 REPLIES
AntoineW
Contributor III

Hello @pmscorca,

 

Yes, unfortunately, this is currently expected behavior. When you use the "Export to Excel" feature from a SQL analytics endpoint in Fabric, the downloaded Excel file typically contains a live query connection, not the actual dataset.

 

If you want the actual result set exported instead of just the connection, here are some supported alternatives:

 

1) Use a Notebook to Export to CSV or Parquet

- Write a Spark or SQL notebook that queries the lakehouse.

- Save the result to the lakehouse (OneLake) as a CSV or Parquet file (a minimal sketch follows below).

Once the file is there, you can easily open it with OneLake Explorer in Windows, which lets you interact with the CSV like any local file and even open it directly in Excel.
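
For illustration, here is a minimal PySpark sketch of this notebook approach. The lakehouse name, table, query, and output paths are assumptions, and spark is the session object a Fabric notebook provides:

# Query the lakehouse via Spark SQL (table and column names are illustrative).
result = spark.sql("SELECT * FROM SalesLakehouse.orders WHERE order_year = 2024")

# Option A: write a CSV folder into the lakehouse Files area.
# coalesce(1) yields a single part file, which is fine for modest result sets.
(result.coalesce(1)
       .write.mode("overwrite")
       .option("header", "true")
       .csv("Files/exports/orders_2024"))

# Option B: write one named CSV file through the notebook's local OneLake mount.
import os
os.makedirs("/lakehouse/default/Files/exports", exist_ok=True)
result.toPandas().to_csv("/lakehouse/default/Files/exports/orders_2024.csv", index=False)

Option B pulls the whole result to the driver via toPandas(), so it suits small-to-medium extracts; Option A scales better but produces a Spark-style output folder rather than a single named file.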

 

2) Power BI Export

- If your query powers a Power BI semantic model, you can use the Export Data option in a table visual to get CSV extracts.

 

Hope this helps!
Best regards,

Antoine

pmscorca
Valued Contributor

Hi, thanks for your replies.

I think business users need a simple solution: run a SQL query and then export the result to an Excel or CSV file (not a Parquet file!).

I don't recall that it's possible to expose the resulting dataset of a SQL query through a Power BI table visual.

I think the "Export to Excel" feature from a SQL analytics endpoint in Fabric is incomplete: it's a pity!

A Fabric user already has access to the Fabric environment: why should they have to access the lakehouse again to get the resulting dataset from the exported Excel file?

Thanks


pmscorca
Valued Contributor

Hi,

Whether the functionality is incomplete or new functionality needs to be implemented, the important thing is to meet the requirement of exporting the data resulting from a SQL query to Excel or CSV.
Please vote for my idea: Exporting to Excel the resulting dataset of a SQL query on a lakehouse
Thanks

v-saisrao-msft
Honored Contributor II

Hi @pmscorca,

Thank you for sharing your idea and including the link. This will help other community members vote on it.

 

Thank you.

pmscorca
Valued Contributor

Well, please vote for it. Thanks!

BhaveshPatel
Honored Contributor

Hi @pmscorca 

 

You can get the resulting dataset in CSV or Parquet. For that, you need to set up Azure Storage Explorer and write the resulting dataset there as a CSV file, using an Apache Spark DataFrame created from a pandas DataFrame.

# Managed table in Apache Spark (Spark handles both data and metadata)

# Apache Spark DataFrame (Python)
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})  # example pandas DataFrame
sdf = spark.createDataFrame(df)  # convert the pandas DataFrame to a Spark DataFrame
display(sdf)

# Use Delta Lake
sdf.write.format("delta").mode("overwrite").saveAsTable("YourTable")
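
Since the thread asks for CSV specifically, here is a hedged follow-up sketch under the same assumptions (the output path is illustrative):

# Alternatively, write the same Spark DataFrame as CSV into the lakehouse Files area.
(sdf.write.mode("overwrite")
    .option("header", "true")
    .csv("Files/exports/your_table_csv"))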

 

Thanks & Regards,
Bhavesh

