JasperN
Frequent Visitor

Issues using a lakehouse as ODS layer with a data warehouse for the ETL and reporting layer

Hi everyone,

 

I'm running into some issues when running this "Master" pipeline:

JasperN_0-1738231832500.png

 

Basic dataflow:

  1. Load data from CRM and ERP into an ODS lakehouse using separate data pipelines via the "Invoke Pipeline" activity.
  2. Use a "Master" stored procedure to ETL the data into a Reporting data warehouse via the "Stored procedure" activity.
    1. First into tables with the STG schema (one stored procedure per staging table)
    2. Then from STG into reporting tables with the DWH schema (again, one stored procedure per reporting table)

Both the lakehouse and the data warehouse, as well as all the pipelines, are located in the same Fabric workspace.

The stored procedures are all located in the data warehouse.

 

Both the CRM and the ERP data pipelines use the "Copy data" activity to get data into the lakehouse.

CRM uses a REST API to connect to the source, while ERP uses an on-premises data gateway.

 

JasperN_1-1738232586695.png

JasperN_2-1738232594544.png

 

Issues I have experienced:

  1. Initial setup:
    Using the "Copy data" activity with the "Table action" set to "Overwrite" (tab "Destination" in the "Copy data" activity) often resulted in no new data being written to the lakehouse tables.
    The tables simply retained the data from the previous load. No errors were reported, and when I looked at the run history in more detail, I could even see the correct number of rows read and written, even though no new data was actually loaded.
  2. Current setup:
    Drop and recreate the lakehouse tables first using a notebook and PySpark, to force the lakehouse into accepting the new data (at best) or at least to flag the previous problem more easily (at worst). (The notebook is located in the same workspace; a minimal sketch of this step is shown after this list.)
    However, the tables / new data now often end up in an "Unidentified folder" instead of in the "Tables" folder.
    This causes the "Stored procedure" activity to fail with an error because the data cannot be used by the data warehouse.
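For illustration, a minimal sketch of this kind of drop-and-recreate notebook step (simplified, not my actual notebook; the table name ladad is taken from the error below, the columns are purely illustrative, and "spark" is the notebook's built-in Spark session):

# Minimal sketch of a drop-and-recreate step for one lakehouse table.
# "ladad" is a table name from this thread; the columns are purely illustrative.
spark.sql("DROP TABLE IF EXISTS ladad")

spark.sql("""
    CREATE TABLE ladad (
        Id INT,
        Description STRING,
        LoadTimestamp TIMESTAMP
    ) USING DELTA
""")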

 

JasperN_3-1738233436450.png

(FYI - dbo.ladad is the first table used by the first stored procedure.)

 

Usually, by the time I'm able to look into the issue, the lakehouse tables are actually in the "Tables" folder, and I can rerun the pipeline from the failed activity and it goes through fine.

That's why I also added some wait time after the data load pipelines, which unfortunately doesn't really help in reducing the incidence of the error.

 

This issue has become more and more frequent in recent weeks (the pipeline runs twice per day, and a lot of the "Succeeded" runs are actually my reruns starting from the failed activity):

JasperN_4-1738233510351.png

 

I've honestly looked into this for far too long and don't really know what to do to fix it anymore.

It just seems that using a lakehouse as an ODS layer with a pipeline to load the data is still inherently unstable, which makes it unusable for reporting.

That being said, as far as I can find online, this approach is actually the method recommended by Microsoft (lakehouse for bulk import, data warehouse for reporting).

 

That is why I would like to ask the community: has anyone run into these issues before?

And does anyone know how to fix them?

Or is there a better way to do what I'm trying to do in Fabric?

 

Thank you so much in advance for any insight you can provide!

 

Kind regards,

Jasper


7 REPLIES
nilendraFabric
Super User

Hello Jasper,

Give this a try:

  1. A Get Metadata activity that checks for the existence of the folder of the last table in your sequence in the Tables section.
  2. An If Condition activity that evaluates the output of the Get Metadata activity.
  3. Two Set Variable activities within the If Condition:
    • "Folder Exists" sets a variable if the folder is found -> run your normal flow.
    • "Folder Does Not Exist" sets a variable if the folder is not found -> retry in a loop within a predefined time.
  4. A pipeline variable named "FolderStatus" to store the result.


from notebookutils import mssparkutils
import json

# Get the default Lakehouse path
default_lakehouse_path = next(mp.source for mp in mssparkutils.fs.mounts() if mp.mountPoint == '/default')

# Construct the path to the table to check
# (example path; the Unidentified folder itself is not discoverable, see the PS below)
# unidentified_folder_path = f"{default_lakehouse_path}/Tables/Unidentified"
unidentified_folder_path = "abfss://Taxi_data@onelake.dfs.fabric.microsoft.com/taxiLH.Lakehouse/Tables/rate_code"
print(f"Path to check: {unidentified_folder_path}")

# Check if the folder exists
folder_exists = mssparkutils.fs.exists(unidentified_folder_path)

# Output the result
output = {"folder_exists": folder_exists}
print(json.dumps(output))



     
    nilendraFabric_0-1738316663605.png

     

     

PS: The "Unidentified" folder itself is not discoverable using mssparkutils.

Hope this works. Please let me know if it does and accept the answer here.

Thanks

v-achippa
Community Support

Hi @JasperN,

 

Thank you for reaching out to Microsoft Fabric Community.

 

The primary issue here is inconsistency in the Lakehouse metadata update after the Copy data operation. This results in data being stored temporarily in an "Unidentified folder" before appearing in the "Tables" folder.

So the stored procedure fails because it tries to access the table before the Lakehouse metadata update is complete. Please try the steps below:

  • After the Copy Data activity, run the SQL command "REFRESH TABLE lakehouse_schema.table_name" in a Notebook before running the stored procedure (see the sketch after this list).
    This forces an immediate update of the Lakehouse metadata so that the stored procedure can recognize the new data.
  • If the issue continues, replace "Overwrite" in Copy Data with "Truncate Table" before inserting the new data.
    This ensures the data is cleared before new records are written and helps avoid inconsistencies in the lakehouse.
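For example, a minimal sketch of issuing that refresh from a notebook cell (the table name anbkw is taken from later in this thread; "spark" is the notebook's built-in Spark session):

# Force a metadata refresh for one lakehouse table before the stored procedure runs.
# "anbkw" is a table name from this thread; substitute your own table.
spark.sql("REFRESH TABLE anbkw")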

 

If this post helps, then please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" - I'd truly appreciate it!

 

Thanks and regards,

Anjan Kumar Chippa

Hi!

Thank you for the proposed solution!

However, this does not work, because it is already part of my current setup, in which I first drop and recreate the ODS lakehouse tables before the data load.

After each CREATE TABLE statement I run the REFRESH TABLE command as well, but it doesn't work.

 

Regarding the other solution: I only get the options to Append or Overwrite, not Truncate Table.

nilendraFabric
Super User

Hello @JasperN 

 

Thanks for discussing the solution over private chat.

I am sharing the details you shared here with the community, as this is an issue faced by many users.

 

import time
import json
from notebookutils import mssparkutils

# Get the default Lakehouse path
default_lakehouse_path = next(mp.source for mp in mssparkutils.fs.mounts() if mp.mountPoint == '/default')
table_path = f"{default_lakehouse_path}/Tables/anbkw"

# Max retries & wait time
max_retries = 20
wait_time = 30  # Seconds

print(f"Checking for table at: {table_path}")

# Polling loop
for attempt in range(max_retries):
    if mssparkutils.fs.exists(table_path):
        print(f"Table 'anbkw' found in Tables folder on attempt {attempt+1}")
        # mssparkutils.notebook.exit(json.dumps({"status": "success", "message": "Table found"}))  # SUCCESS
        mssparkutils.notebook.exit("success")  # SUCCESS
        break

    print(f"Attempt {attempt+1}: Table 'anbkw' not found. Retrying in {wait_time} seconds...")
    time.sleep(wait_time)

# If table never appears, fail the notebook and return an error message
# mssparkutils.notebook.exit(json.dumps({"status": "failed", "message": "Table not found after max retries"}))
mssparkutils.notebook.exit("failed")  # FAILED

 

This checks whether the last table has been loaded into the Lakehouse and returns either "success" or "failed" in the "output.result.exitValue" property of the JSON output.
 

In my pipeline, I validate this using an "If Condition" activity with the following dynamic content expression:

 
@equals(activity('Verify data load in ODS lakehouse').output.result.exitValue,'success')
 
This is how my pipeline looks now:
 
IMG_7146.png


 
 

 

Thanks for posting this, @nilendraFabric

 

Small update in the meantime: I've added the refreshMounts() function to the notebook code, in the hope that it helps force a metadata reload while checking whether the tables are loaded.

 

import time
import json
from notebookutils import mssparkutils

# Get the default Lakehouse path
default_lakehouse_path = next(mp.source for mp in mssparkutils.fs.mounts() if mp.mountPoint == '/default')
table_path = f"{default_lakehouse_path}/Tables/anbkw"

# Max retries & wait time
max_retries = 20
wait_time = 30  # Seconds

print(f"Checking for table at: {table_path}")

# Force a metadata refresh
print("Forcing metadata refresh in the Lakehouse...")
mssparkutils.fs.refreshMounts()

# Polling loop
for attempt in range(max_retries):
    if mssparkutils.fs.exists(table_path):
        print(f"Table 'anbkw' found in Tables folder on attempt {attempt+1}")
        # mssparkutils.notebook.exit(json.dumps({"status": "success", "message": "Table found"}))  # SUCCESS
        mssparkutils.notebook.exit("success")  # SUCCESS
        break

    print(f"Attempt {attempt+1}: Table 'anbkw' not found. Retrying in {wait_time} seconds...")
    time.sleep(wait_time)
    mssparkutils.fs.refreshMounts()  # Refresh metadata before retrying

# If table never appears, fail the notebook and return an error message
# mssparkutils.notebook.exit(json.dumps({"status": "failed", "message": "Table not found after max retries"}))
mssparkutils.notebook.exit("failed")  # FAILED

Lastly, below are some screenshots of the pipeline as it exists now, including the steps in the "If Condition".

JasperN_0-1738673353778.png

JasperN_1-1738673367528.png

JasperN_2-1738673375109.png

 

If anything is still unclear, feel free to reach out for some support! 👍

I'll also update this topic should I stumble upon anything else!

 

@JasperN Thanks for sharing the details 

JasperN
Frequent Visitor

@nilendraFabric 

Unfortunately, the proposed solution didn't help.

I still got the same error as before, even though the tables were loaded correctly into the lakehouse and I got confirmation from the notebook that they were not in the "Unidentified folder".

Even worse, the pipeline got reported as a success (luckily, the Outlook activities were correctly implemented).

 

JasperN_0-1738936761083.png

 

My current theory is that this is a metadata refresh issue in both the ODS lakehouse and DWH warehouse.

In this case, we also need to implement the same logic for the warehouse as we did for the ODS lakehouse.

 

Current new pipeline implementation:

 

JasperN_1-1738936841713.png

 

The lakehouse and warehouse metadata checks have been split off into a child pipeline:
(Using a legacy activity because I need the pipeline return variables, which at the time of writing weren't available for the new preview activity.)

 

JasperN_2-1738936920753.png

 

Flow in child pipeline:

 

  1. Previous notebook activity to check and force metadata refresh in ODS lakehouse
  2. Until activity to check and force metadata refresh in DWH warehouse
    1. Script activity to verify & force metadata reload in warehouse
      (Using a notebook with a Fabric warehouse is still buggy)
    2. Counter variables
    3. Wait time (30 seconds)
  3. Set a return variable (1 = success, 0 = fail)

 

Code used in the pipelines:

 

  • Until activity expression (max 20 tries with 30 seconds between each try) (child pipeline) 

 

 

@or(
    equals(activity('Check lakehouse table availability').output.resultSets[2].rows[0].TableExists, 1),
    greaterOrEquals(variables('RetryCount'), 20)
)

  

  • Check DWH metadata and force reload script (child pipeline)

 

-- Step 1: Force a Metadata Refresh by listing all tables

SELECT name FROM ODS.sys.tables;

-- Step 2: Try a dummy query on the expected table (prevents Fabric caching issues)

BEGIN TRY
    SELECT TOP 1 * FROM ODS.dbo.anbkw;
END TRY

BEGIN CATCH
    -- Ignore errors if table doesn't exist yet
END CATCH;

-- Step 3: Declare variable for table existence

DECLARE @TableExists INT = 0;

-- Step 4: Check if the table is accessible

IF EXISTS (SELECT 1 FROM ODS.sys.tables WHERE name = 'anbkw')
BEGIN

    BEGIN TRY
        IF EXISTS (SELECT 1 FROM ODS.dbo.anbkw)
        BEGIN
            SET @TableExists = 1; -- Table is accessible
        END
    END TRY

    BEGIN CATCH
        -- If an error occurs, keep @TableExists = 0 (Do nothing)
    END CATCH

END

-- Step 5: Return Result for Fabric Pipeline

SELECT TableExists = CAST(@TableExists AS INT);
 
  • Set Count variable (child pipeline)

 

@add(int(variables('RetryCount')), 1)
 
  • Set RetryCount (child pipeline)
 
@variables('Count')
 
  • Set SuccessVariable (child pipeline)
 
@if(
    and(
        equals(activity('Verify and force metadata reload ODS').output.result.exitValue, 'success'),
        equals(activity('Check lakehouse table availability').output.resultSets[2].rows[0].TableExists, 1)
    ),
    1,
    0
)
 
  • If condition (parent pipeline):
 
@equals(activity('Check ODS and DWH metadata update').output.pipelineReturnValue.LoadSuccessful, 1)
