I have a Data Pipeline that performs several steps, one after the other:
1. I read some data from a SQL Server and write it to a table t1 in a lakehouse L1.
2. (Yellow) I wait 2 seconds (I will explain why below).
3. (Red) In a notebook, I read the table t1 I have just written and write it to a table t2 in another lakehouse, L2.
4. (Blue) I run a query on t1 to get the smallest value of one of its columns (select min(...) from .....).
5. Other operations.
I run this Data Pipeline every day. Some days it works correctly, but other days it fails at step 4 (blue) with an error saying that the table t1 I am querying does not exist. I don't understand this, because the previous step (red) read that same table t1 from a notebook without any error.
I added the 2-second wait to give the table time to become available, but sometimes it still isn't found.
I don't understand how step 4 can fail to find t1 when step 3 just found it, or how any step after step 1 (where the table is written) can fail to find it. I added the wait without understanding why it should be needed, because it shouldn't be necessary at all.
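A fixed 2-second wait is a race in disguise: it works only when the table happens to become visible within that window. A more robust pattern is to retry the failing step with backoff until it succeeds or a deadline expires. A minimal sketch, assuming a hypothetical zero-argument callable `run_query` that raises while the table is not yet visible (the names and delays here are illustrative, not Fabric APIs):

```python
import time

def query_with_retry(run_query, max_wait_s=60, base_delay_s=1.0):
    """Retry run_query with exponential backoff until it succeeds
    or max_wait_s elapses. run_query is any zero-argument callable
    that raises while the table is not yet visible."""
    deadline = time.monotonic() + max_wait_s
    delay = base_delay_s
    while True:
        try:
            return run_query()
        except Exception:
            if time.monotonic() >= deadline:
                raise  # give up: the table never appeared in time
            time.sleep(delay)
            delay = min(delay * 2, 10.0)  # cap the backoff at 10 s
```

In a pipeline you could wrap step 4's query in this loop (e.g. inside a notebook activity) instead of relying on the fixed wait in step 2.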
Is step 4 reading from the SQL Analytics Endpoint?
If so, my guess is that there is some (varying) delay from the creation of the table in Spark until it is synced to the SQL Analytics Endpoint ("metadata sync").
However, these docs say that the new table should be available immediately in the SQL Analytics Endpoint: https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-sql-analytics-endpoint
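If the sync delay is the cause, one workaround is to poll the SQL Analytics Endpoint for the table's existence before running the min() query, rather than waiting a fixed 2 seconds. A sketch of the polling logic, written against a generic DB-API cursor (how you obtain the connection, e.g. via pyodbc and your endpoint's connection string, is left out and assumed):

```python
import time

# Check INFORMATION_SCHEMA.TABLES for the table by name.
EXISTS_SQL = (
    "SELECT 1 FROM INFORMATION_SCHEMA.TABLES "
    "WHERE TABLE_NAME = ?"
)

def wait_for_table(cursor, table_name, max_wait_s=120, poll_s=2.0):
    """Poll the SQL Analytics Endpoint until table_name appears in
    INFORMATION_SCHEMA.TABLES, or raise TimeoutError."""
    deadline = time.monotonic() + max_wait_s
    while time.monotonic() < deadline:
        cursor.execute(EXISTS_SQL, (table_name,))
        if cursor.fetchone() is not None:
            return  # table is visible; safe to run select min(...)
        time.sleep(poll_s)
    raise TimeoutError(f"{table_name} not visible after {max_wait_s}s")
```

This turns the arbitrary 2-second wait into a bounded wait that ends as soon as the metadata sync completes.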