juanisivo
New Contributor

Loading CSV table from notebook resources to a delta table in a lakehouse using code snippet

Hello, I am using a code snippet to load a CSV to a table. The CSV is stored in the built-in resource folder of my notebook, and the target table is in a lakehouse already linked to my notebook. This is the code:

 

# Starts a load table operation in a Lakehouse artifact
notebookutils.lakehouse.loadTable(
    {
        "relativePath": './builtin/log/bronze_to_silver_log.csv', # path of the csv in the built in resources folder of the notebook
        "pathType": "File",
        "mode": "Append",
        "recursive": False,
        "formatOptions": {
            "format": "Csv",
            "header": True,
            "delimiter": ","
        }
    },
    'bronze_to_silver_log', # the name of the table
    'silver', # the name of the lakehouse
    workspaceId={workspace_id}
)
 
But I am getting the following error:
 
Py4JError: An error occurred while calling z:notebookutils.lakehouse.loadTable. Trace: py4j.Py4JException: Method loadTable([class java.lang.String, class java.lang.String, class java.lang.String, class java.util.HashSet]) does not exist at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:321) at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:342) at py4j.Gateway.invoke(Gateway.java:276) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.base/java.lang.Thread.run(Thread.java:829)
 
It seems that the loadTable method doesn't exist.
Any guesses?
1 ACCEPTED SOLUTION
v-prasare
Honored Contributor II

Hi @juanisivo,

 

Replace the notebookutils.lakehouse.loadTable block with standard PySpark code using .read() and .saveAsTable(); this is the official, stable, and Fabric-supported approach for loading data from a CSV file to a Lakehouse table.

Microsoft recommends using PySpark APIs in Fabric Notebooks for reading/writing data to Lakehouse tables. The method notebookutils.lakehouse.loadTable() is not part of the documented, supported APIs and is likely either an internal or deprecated utility.

 

You can use PySpark to load data from CSV, Parquet, JSON, and other file formats into a lakehouse. You can also create tables directly from these DataFrames.

ex:

df = spark.read.option("header", True).csv("Files/YourFolder/yourfile.csv")
df.write.mode("overwrite").saveAsTable("lakehouse_name.table_name")
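Adapted to the scenario in the question, a sketch might look like the following. The lakehouse name (silver), table name, and resource sub-path are taken from the question; the helper function, the append mode, and the nbResPath-based invocation are assumptions, to be run inside a Fabric notebook:

```python
# Sketch: append the notebook's builtin CSV into the "silver" lakehouse table.
# Names below come from the original question; the commented call at the end
# is meant for a Fabric notebook where `spark` and `notebookutils` exist.

relative_csv = "builtin/log/bronze_to_silver_log.csv"  # resource path from the question
target_table = "silver.bronze_to_silver_log"           # lakehouse.table (assumed naming)

def load_csv_to_table(spark, csv_path: str, table: str) -> None:
    # Read the CSV with a header row and comma delimiter, then append to the table
    df = spark.read.option("header", True).option("delimiter", ",").csv(csv_path)
    df.write.mode("append").saveAsTable(table)

# Inside a Fabric notebook (hypothetical invocation):
# load_csv_to_table(spark,
#                   notebookutils.nbResPath + "/log/bronze_to_silver_log.csv",
#                   target_table)
```

Using "append" rather than "overwrite" matches the Append mode the original loadTable call requested.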

 

 

Thanks,

Prashanth Are

MS Fabric community support

 

If this post helps, please consider accepting it as the solution to help other members find it more quickly, and give Kudos if it helped resolve your query.

 


4 REPLIES 4
Srisakthi
Contributor III

Hi @juanisivo ,

 

There is no such method available. 

As per the article, you can use relative paths like builtin/YourData.txt for quick exploration. The notebookutils.nbResPath property helps you compose the full path. You can then use Spark to read from that path and write to a table.

Refer - https://learn.microsoft.com/en-us/fabric/data-engineering/how-to-use-notebook#notebook-resources
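As a sketch of that approach (the resource-root value below is a stand-in for illustration only; inside a Fabric notebook you would use notebookutils.nbResPath itself):

```python
import os

# Stand-in for notebookutils.nbResPath, which resolves to the notebook's
# built-in resource root inside a Fabric session (illustrative value only).
nb_res_path = "/synfs/nb_resource/builtin"

# Compose the full path to the CSV stored under the builtin folder
csv_path = os.path.join(nb_res_path, "log", "bronze_to_silver_log.csv")

# Then, inside the Fabric notebook, read it with Spark and write to a table:
# df = spark.read.option("header", True).csv(f"file://{csv_path}")
# df.write.mode("append").saveAsTable("bronze_to_silver_log")
```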

 

Regards,

Srisakthi

 

If this response helps you, please "Accept as solution" and give "Kudos". It helps others find the answer.

Hi @Srisakthi,

 

I don't understand why you say that there is no such method. The method does exist, as shown in the attached image. It is part of the lakehouse help.

And not only that, it is also used in a built-in code snippet called "Load table" that "starts a load table operation in a Lakehouse artifact".

 

Do you suggest another solution to copy a csv file from the notebook resources to a table in a lakehouse?

 

Screenshot 2025-05-13 084050.png

 

Best regards,

Juan


v-prasare
Honored Contributor II

@juanisivo,
As we haven't heard back from you, we wanted to kindly follow up and check whether the solution provided resolved your issue. Let us know if you need any further assistance.

 

 

 

Thanks,

Prashanth Are

MS Fabric community support

 

If this post helps, please consider accepting it as the solution to help other members find it more quickly, and give Kudos if it helped resolve your query.
