
Scaling Optimization with Microsoft Fabric: Turning a Sales Planning Puzzle into a Scalable Solution

By @nehaljain15 and @audreykono

If you’ve ever tried to optimize a large, rule-heavy process with thousands of moving parts, you know it can feel less like “planning” and more like juggling flaming swords while riding a unicycle. We recently faced one of those puzzles: assigning resources across hundreds of sales territories, each with its own constraints, attributes, and workload balance requirements.

The stakes?

  • Get it right, and you have a fair, efficient plan that supports growth.
  • Get it wrong, and you risk overloaded teams, missed opportunities, and fractured customer relationships.

Traditionally, this sort of challenge meant weeks of spreadsheet wrangling and never-ending planning conversations with sales teams. The bigger the data, the more bottlenecks. We needed a scalable solution to automate resource assignment – one that could swiftly navigate millions of combinations without breaking sales constraints and guardrails. That’s when we paired the optimization logic with Microsoft Fabric’s capabilities, unlocking a faster, more efficient way forward.

The Challenge: Complexity at Scale

On paper, “assign resources to sales territories” sounds like a straightforward task. In reality, it is a combinatorial explosion problem where the number of possible assignments grows dramatically when you factor in:

  • Thousands of accounts
  • Hundreds of Geos
  • Hundreds of territories within a Geo
  • Multiple specialist resource roles, each with unique deployment rules
  • Constraints around workload balance, customer grouping, sales territory attributes and coverage
  • Annual changes to the business structure

Even a single geographical region can generate millions of possible assignment configurations. Local machines and manual tweaks just couldn’t keep up.
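To get a feel for how quickly the search space balloons, a naive count of unconstrained assignments is instructive. The numbers below are purely hypothetical and far smaller than the real inputs described above:

```python
# Toy illustration of combinatorial growth: each account can go to
# any territory, so the unconstrained space is territories ** accounts.
# All figures here are hypothetical stand-ins for the real scale.
def assignment_space(accounts: int, territories: int) -> int:
    """Count naive account-to-territory assignments (before any constraints)."""
    return territories ** accounts

small = assignment_space(accounts=10, territories=5)    # already ~9.8 million
larger = assignment_space(accounts=20, territories=10)  # 10**20 configurations
```

Constraints prune this space, but the growth rate is why manual tweaking and single-machine sweeps stop being viable.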

 

The linear programming optimization model itself wasn’t the problem. We could express it in Python with standard linear programming libraries, along with comprehensive logic to enforce the constraints and produce pure sales territories. The real blocker was scale: solving multiple large, constraint-heavy optimization runs in sequence could take days.

 

The Approach: Optimization Meets Fabric

We framed the problem as a classic constraint optimization task: find assignments that minimize penalties (like misaligned groupings or overloaded resources) while maximizing coverage and fairness.
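As a deliberately tiny stand-in for the real LP solver, the penalty-minimization framing can be sketched in plain Python: enumerate candidate assignments and pick the one with the lowest imbalance penalty. The accounts, workloads, and territories below are hypothetical, and brute force is only viable at this toy scale:

```python
from itertools import product

# Hypothetical inputs: account workload units and two territories.
accounts = {"acct_a": 3, "acct_b": 2, "acct_c": 2, "acct_d": 1}
territories = ["T1", "T2"]

def imbalance(assignment):
    """Penalty = spread between the heaviest and lightest territory."""
    loads = {t: 0 for t in territories}
    for acct, terr in assignment.items():
        loads[terr] += accounts[acct]
    return max(loads.values()) - min(loads.values())

# Enumerate every assignment and keep the one minimizing the penalty.
best = min(
    (dict(zip(accounts, combo))
     for combo in product(territories, repeat=len(accounts))),
    key=imbalance,
)
# With 3 + 2 + 2 + 1 = 8 workload units, a perfect 4/4 split exists.
```

In the real model, an LP solver explores this space implicitly instead of enumerating it, and the objective combines several penalty terms (groupings, coverage, fairness) rather than a single imbalance measure.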

But instead of running our Python solution locally in a single long loop, we moved it into Microsoft Fabric’s Data Science experience. Below is a diagram (Figure 1) comparing sequential local execution versus Fabric’s parallel, distributed execution. Notice how Fabric enables multiple optimization jobs to run simultaneously, cutting bottlenecks dramatically.

 


Figure 1: Local vs Fabric flow

The Fabric workflow was straightforward and included the following steps:

  1. Data in the Lakehouse – All input data (territories, resource headcount, attributes) stored in a single, accessible location.
  2. Parallel Execution – Split the problem by region or logical grouping, then run optimization jobs simultaneously using Fabric’s data pipeline and scalable compute.
  3. Automated Outputs & Alerts – Store results back in the Lakehouse and create automated alerts to notify stakeholders via Microsoft Teams when runs complete.
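The fan-out in step 2 is the heart of the speedup. In Fabric it is handled by pipeline activities launching notebook runs on scalable compute, but the same pattern can be sketched locally with the standard library. The region names and the `solve_region` stub below are hypothetical placeholders for the real per-region optimization:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_region(region: str) -> tuple[str, str]:
    """Hypothetical per-region solve; in practice this would run the
    LP model against that region's data in the Lakehouse."""
    # ... run the optimization for this region ...
    return region, "solved"

regions = ["EMEA", "Americas", "APAC"]

# Fan the independent regional runs out in parallel instead of
# solving them one after another in a single long loop.
with ThreadPoolExecutor(max_workers=len(regions)) as pool:
    results = dict(pool.map(solve_region, regions))
```

Because each region’s problem is independent, there is no coordination overhead: the slowest region, not the sum of all regions, sets the wall-clock time.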

This wasn’t about rewriting the optimization logic - it was about giving it room to breathe with Fabric’s distributed compute.

 

The Impact: From Hours to Minutes

Moving to Fabric significantly changed the timeline and the possibilities:


Table 1: Runtime and Capacity Statistics


That’s a 96% runtime reduction and a 3,000% increase in the scale of what could be computed - without changing the underlying optimization model.

 

Why Fabric Was the Game-Changer

Fabric didn’t just make things faster - it removed the ceiling on complexity.

  • Massive Parallelism: Instead of running one scenario at a time, Fabric’s compute allowed us to execute dozens in parallel.
  • Seamless Integration: Lakehouse storage meant no messy file transfers between tools.
  • Team Collaboration: Automatic alerts kept stakeholders in the loop without manual check-ins.
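One lightweight way to implement those completion alerts is to post a message card to a Microsoft Teams incoming webhook. The webhook URL, run ID, and fields below are illustrative placeholders, not the production setup:

```python
import json
from urllib import request

def build_alert(run_id: str, region: str, status: str) -> bytes:
    """Build a simple Teams incoming-webhook payload (MessageCard)."""
    card = {
        "@type": "MessageCard",
        "@context": "http://schema.org/extensions",
        "summary": f"Optimization run {run_id}",
        "text": f"Run **{run_id}** for {region} finished with status: {status}",
    }
    return json.dumps(card).encode("utf-8")

def notify(webhook_url: str, payload: bytes) -> None:
    """POST the payload to the webhook (add retries/error handling in production)."""
    req = request.Request(
        webhook_url, data=payload,
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)

# Example (URL is a placeholder):
# notify("https://example.webhook.office.com/...",
#        build_alert("run-42", "EMEA", "Succeeded"))
```

Wiring this into the end of each optimization job means stakeholders hear about completed runs the moment results land in the Lakehouse.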

 

The diagram below highlights how Fabric brings data, model runs, and alerts together in a single streamlined pipeline, reducing silos and simplifying orchestration.

 


Figure 2: Fabric data science pipeline

This freed up our time to focus on improving the model and delivering business value, not babysitting long-running processes.

 

The Takeaway

When models involve layers of rules, constraints, and parameters that balloon into millions of combinations, local machines often reach their limits. Using Microsoft Fabric can transform the experience. Whether it’s sales territory planning, supply chain optimization, or workforce scheduling, pairing your model with Fabric’s scalable, serverless compute means you can optimize without compromise.


Have you used Fabric for optimization? Drop your story in the comments - we’d love to hear how you’ve scaled your toughest problems.