In Dataflow Gen2, I am adding a custom column with a very complex business rule.
For example:
if [column2] = "xyz" and [column3] <> "765"
...
then custom_column = "system1"
else if [column2] = ...
...
For a complex rule like this, is it OK for the formula to be very complex, probably 50+ lines?
Or is it better to put this logic into a stored procedure and update the table after the dataflow ingestion?
Thank you.
Can you share the current IF statement? Or share some sample data and the expected outcome? Maybe we can optimize it to reduce the complexity or the number of lines.
If it's just conditional logic evaluated within each row, there usually won't be a significant performance issue, and complexity isn't determined by line count alone.
When further optimization is not possible, it may be better to implement the same logic in a stored procedure on the data source side rather than in the dataflow. Note that for some simple logical operations, the M code in the dataflow is itself converted into SQL statements supported by the data source and executed there.
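For illustration, here is a minimal M sketch (all column names, values, and system names are hypothetical) of one way to keep a long rule chain manageable: define the rules as a list of predicate/result pairs and assign each row the result of the first rule that matches.

    let
        // Tiny inline table standing in for the real source (hypothetical data).
        Source = #table(
            {"column2", "column3"},
            {{"xyz", "111"}, {"abc", "765"}, {"qqq", "222"}}
        ),
        // Each rule pairs a row predicate with the value to assign (hypothetical rules).
        Rules = {
            [Test = (r) => r[column2] = "xyz" and r[column3] <> "765", Result = "system1"],
            [Test = (r) => r[column2] = "abc", Result = "system2"]
        },
        // Return the result of the first matching rule, or a fallback value.
        Classify = (row) =>
            let matches = List.Select(Rules, each _[Test](row))
            in  if List.IsEmpty(matches) then "unknown" else List.First(matches)[Result],
        AddedCustom = Table.AddColumn(Source, "custom_column", each Classify(_), type text)
    in
        AddedCustom

One trade-off to be aware of: a function-based step like this will not fold to the data source, whereas a plain if/then/else chain in the custom column often will.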
In addition to what @liuqi_pbi mentioned here: "for some simple logical operations, the M code in the dataflow is itself converted into SQL statements supported by the data source and executed there."
This is the automatic Query Folding mechanism in Power Query M.
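As a rough illustration (hypothetical column names; assuming Source is a step that reads from a SQL source that supports folding), a plain conditional custom column like the one below is the kind of step the engine can usually translate into a SQL CASE expression on the server:

    // Assuming Source reads from a foldable SQL source, a single if/then/else
    // custom column is typically folded into a CASE WHEN on the server.
    AddedCustom = Table.AddColumn(
        Source,
        "custom_column",
        each if [column2] = "xyz" and [column3] <> "765" then "system1" else "other",
        type text
    )

The step folding indicators in the Power Query editor show whether a given step still folds; calling custom M functions, or using functions the connector can't translate, will usually break folding from that step onward.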
In addition, if your source is SQL-based and you want to push complex transformations back to the source, you can write your own SQL query in the dataflow connection so that the source SQL server does the processing.
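For example, something along these lines (server, database, and table names are made up) runs your own T-SQL on the source via Value.NativeQuery; the EnableFolding option lets subsequent steps fold on top of the native query where the connector supports it:

    let
        // Hypothetical SQL connection; replace with your own server and database.
        Source = Sql.Database("myserver.database.windows.net", "mydb"),
        // The CASE logic is evaluated by the source SQL server, not by the dataflow.
        Result = Value.NativeQuery(
            Source,
            "SELECT *,
                    CASE WHEN column2 = 'xyz' AND column3 <> '765' THEN 'system1'
                         ELSE 'other'
                    END AS custom_column
             FROM dbo.MyTable",
            null,
            [EnableFolding = true]
        )
    in
        Result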
Also, it makes great sense to simply copy the raw data into a staging table in Fabric (using a data pipeline copy activity or Dataflow Gen2 fast copy), run the transformations on the staged data (e.g. with a stored procedure or notebook), and then load it into the production table.