I am trying to create a managed table in a lakehouse using a notebook, with rows entered manually (the SQL equivalent of INSERT INTO), but I am getting the following error and have no idea how to debug it. It seems to create the Delta table without any columns.
%%pyspark
from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql import functions as sf
from datetime import datetime

# Initialize Spark session
spark = SparkSession.builder \
    .appName("session_one") \
    .getOrCreate()

# Schema for the watermark table
schema = StructType([
    StructField('id', IntegerType(), True),
    StructField('schema_name', StringType(), True),
    StructField('table_name', StringType(), True),
    StructField('watermark_value', TimestampType(), True),
    StructField('full_path', StringType(), True)
])

# Single row entered manually; full_path is derived below
row_one = [
    (1, 'lorem', 'ipsum', datetime(1, 1, 1, 0, 0, 0), None),
]

df_one = spark.createDataFrame(row_one, schema)
df_two = df_one.withColumn('full_path', sf.concat(sf.col('schema_name'), sf.lit('.'), sf.col('table_name')))
df_two.show()

df_two.write.format("delta").saveAsTable("watermark")
How can I satisfy the `No Delta transaction log entries were found` requirement?
This issue can be solved by using the table builder API.
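For reference, a minimal sketch of that approach, assuming "table builder" refers to Delta Lake's DeltaTable builder API (bundled with the Fabric Spark runtime) and reusing df_two from the question's cell:

from delta.tables import DeltaTable
from pyspark.sql.types import IntegerType, StringType, TimestampType

# Create the managed table up front with the builder API.
# This writes the Delta transaction log immediately, so the
# table is never left in a "no log entries" state.
(DeltaTable.createIfNotExists(spark)
    .tableName("watermark")
    .addColumn("id", IntegerType())
    .addColumn("schema_name", StringType())
    .addColumn("table_name", StringType())
    .addColumn("watermark_value", TimestampType())
    .addColumn("full_path", StringType())
    .execute())

# Then append the manually built row into the existing table.
df_two.write.format("delta").mode("append").saveAsTable("watermark")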
Does it work if you use this code below?
---------------------------------------------
from pyspark.sql.types import *
from pyspark.sql import functions as sf
from datetime import datetime

schema = StructType([
    StructField('id', IntegerType(), True),
    StructField('schema_name', StringType(), True),
    StructField('table_name', StringType(), True),
    StructField('watermark_value', TimestampType(), True),
    StructField('full_path', StringType(), True)
])

row_one = [
    (1, 'lorem', 'ipsum', datetime(1, 1, 1, 0, 0, 0), None),
]

df_one = spark.createDataFrame(row_one, schema)
df_two = df_one.withColumn('full_path', sf.concat(sf.col('schema_name'), sf.lit('.'), sf.col('table_name')))
df_two.show()

df_two.write.mode("overwrite").saveAsTable("watermark")
-----------------------------------------------
I don't think you need to specify %%pyspark, as PySpark is the default language in Fabric notebooks.
I don't think you need to initialize the Spark session in your code either; Fabric notebooks provide a ready-to-use spark session.
Maybe you need to add .mode("overwrite") or .mode("append") to the saveAsTable expression.
By the way, does your code run without errors if you remove the last line (the saveAsTable line)?
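Also, since you mentioned the failed write seems to create a Delta table without any columns, it may help to drop that half-created table before retrying. A minimal sketch, assuming the table name watermark from your post:

# Drop the half-created table (if any) so the next
# saveAsTable call starts from a clean slate.
spark.sql("DROP TABLE IF EXISTS watermark")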
Or maybe this could work (I asked ChatGPT how to create a similar table with SQL syntax):
%%sql
-- Step 1: Create the table
CREATE TABLE watermark (
    id INT,
    schema_name VARCHAR(255),
    table_name VARCHAR(255),
    watermark_value TIMESTAMP,
    full_path VARCHAR(255)
);

-- Step 2: Insert data into the table
INSERT INTO watermark (id, schema_name, table_name, watermark_value, full_path)
VALUES (1, 'lorem', 'ipsum', '0001-01-01 00:00:00', NULL);

-- Step 3: Update the `full_path` column
UPDATE watermark
SET full_path = schema_name || '.' || table_name;