Hello everyone,
I’m encountering a strange issue while trying to connect to a Standard or High Concurrency Spark session in our Fabric workspace. Because of this, I’m unable to execute notebooks manually — although pipelines are still running successfully.
The error we get is a 430 TooManyRequestsForCapacity error when the session tries to start.
Initially, I assumed it was a capacity limit issue — but the Fabric capacity metrics show usage below 20%, and the Monitoring Hub confirms that no Spark sessions or pipelines are actively running.
We even left it idle for two full days, but the issue persists. Moreover, it’s affecting all users in the workspace, not just me.
The workspace has been active for about two months, and this problem only started recently.
Hello @malimahesh,
Here’s what typically causes this specific behavior:
Even though the Monitoring Hub shows no active sessions, Fabric’s backend may still have ghost sessions that didn’t clean up correctly.
These orphaned sessions consume Spark concurrency slots, so the controller refuses new sessions.
Pipelines can still run because they’re using queued Fabric Jobs, not interactive Spark controllers.
🧠 Clue: The error persists across users and restarts but capacity % is low.
Each Fabric capacity enforces API throttles for Spark session management (number of session start/stop calls per minute).
If you’ve had multiple users (or automated retries) launching sessions, the rate limiter can block all new session requests for a few hours.
These throttles aren’t visible in capacity metrics (which show CPU/memory CU usage, not API call limits).
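If automated retries are part of the picture, it helps to make whatever starts your sessions back off instead of resubmitting immediately. Below is a minimal sketch of exponential backoff with jitter; `SESSION_START_URL` is a placeholder for whatever call your automation actually makes, not a real Fabric endpoint.

```python
import random
import time

import requests

# Placeholder: point this at whatever call your automation uses to start a session.
SESSION_START_URL = "https://<your-session-start-endpoint>"


def start_session_with_backoff(token: str, max_attempts: int = 6) -> requests.Response:
    """Retry session starts with exponential backoff so a burst of retries
    doesn't trip the per-capacity throttle on its own."""
    delay = 5.0  # seconds before the first retry
    for attempt in range(1, max_attempts + 1):
        resp = requests.post(
            SESSION_START_URL,
            headers={"Authorization": f"Bearer {token}"},
            timeout=60,
        )
        # 430 = TooManyRequestsForCapacity (Spark concurrency/queue full),
        # 429 = generic rate limiting. Both mean "wait, then try again".
        if resp.status_code not in (429, 430):
            return resp
        sleep_for = delay + random.uniform(0, delay)  # add jitter
        print(f"Attempt {attempt}: got {resp.status_code}, retrying in {sleep_for:.0f}s")
        time.sleep(sleep_for)
        delay = min(delay * 2, 300)  # cap the wait at 5 minutes
    raise RuntimeError("Spark session start still throttled after retries")
```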
The most reliable fix is to restart the capacity:
In the Admin Portal, go to Capacity Settings → Fabric Capacity → Refresh or Restart.
This forces a reset of Spark controllers and cleans up orphaned sessions.
🕒 After restart, wait ~10 minutes before retrying.
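For F SKUs provisioned through Azure, you can also script the restart by pausing and resuming the capacity from the Azure side. This is only a rough sketch: the suspend/resume actions on Microsoft.Fabric/capacities and the api-version shown are assumptions to verify against the current Azure REST reference, and the identity used needs rights on the capacity resource.

```python
import time

import requests
from azure.identity import DefaultAzureCredential

# Fill in your own values.
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
CAPACITY = "<fabric-capacity-name>"
API_VERSION = "2023-11-01"  # assumption; check the current Microsoft.Fabric api-version

BASE = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Fabric/capacities/{CAPACITY}"
)


def _token() -> str:
    # Requires an identity with sufficient rights on the capacity resource.
    return DefaultAzureCredential().get_token("https://management.azure.com/.default").token


def restart_capacity() -> None:
    headers = {"Authorization": f"Bearer {_token()}"}
    # Pause, give the platform a moment, then resume; this tears down and
    # recreates the Spark controllers bound to the capacity.
    requests.post(f"{BASE}/suspend?api-version={API_VERSION}", headers=headers).raise_for_status()
    time.sleep(60)
    requests.post(f"{BASE}/resume?api-version={API_VERSION}", headers=headers).raise_for_status()


if __name__ == "__main__":
    restart_capacity()
```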
Next, check the Monitoring Hub for stuck Spark jobs:
Go to Fabric Home → Monitoring Hub → Spark Jobs.
Filter by the last 7 days and all statuses.
If you see jobs stuck indefinitely in “Starting” or “Queued”, cancel them manually.
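If you prefer to script that cleanup, something like the sketch below could list interactive Spark sessions and delete the stuck ones. The Livy sessions endpoint shape, API version, and response keys shown are assumptions; check the current Fabric Livy API reference before relying on it.

```python
import requests
from azure.identity import InteractiveBrowserCredential

WORKSPACE_ID = "<workspace-id>"   # fill in your own IDs
LAKEHOUSE_ID = "<lakehouse-id>"

# Assumed endpoint shape for Fabric's Livy sessions API; verify against the docs.
SESSIONS_URL = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/lakehouses/{LAKEHOUSE_ID}/livyapi/versions/2023-12-01/sessions"
)


def main() -> None:
    token = InteractiveBrowserCredential().get_token(
        "https://api.fabric.microsoft.com/.default"
    ).token
    headers = {"Authorization": f"Bearer {token}"}

    payload = requests.get(SESSIONS_URL, headers=headers).json()
    # The "items" key is an assumption; inspect the payload if it differs.
    for session in payload.get("items", []):
        state = str(session.get("state", ""))
        print(session.get("id"), state)
        # Cancel anything that never got past starting/queued and is holding a slot.
        if state.lower() in ("starting", "not_started", "queued"):
            requests.delete(f"{SESSIONS_URL}/{session['id']}", headers=headers)


if __name__ == "__main__":
    main()
```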
If the restart doesn’t help:
Move the affected workspace temporarily to another Fabric capacity (even a Trial or Low SKU).
Wait 5–10 minutes for propagation.
Move it back to the original capacity.
✅ This rebinds the workspace’s Spark controller and resets the job queue association.
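If you want to script the capacity swap (for example, to run it out of hours), the Power BI REST API's AssignToCapacity call covers Fabric workspaces too. A minimal sketch, assuming you have admin rights on the workspace and both capacities and can sign in interactively:

```python
import time

import requests
from azure.identity import InteractiveBrowserCredential

WORKSPACE_ID = "<workspace-guid>"
TEMP_CAPACITY_ID = "<trial-or-other-capacity-guid>"
ORIGINAL_CAPACITY_ID = "<original-capacity-guid>"


def _token() -> str:
    cred = InteractiveBrowserCredential()
    return cred.get_token("https://analysis.windows.net/powerbi/api/.default").token


def assign(workspace_id: str, capacity_id: str) -> None:
    # Groups - AssignToCapacity: moves a workspace onto the given capacity.
    resp = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/AssignToCapacity",
        headers={"Authorization": f"Bearer {_token()}"},
        json={"capacityId": capacity_id},
    )
    resp.raise_for_status()


# Move away, wait for propagation, then move back.
assign(WORKSPACE_ID, TEMP_CAPACITY_ID)
time.sleep(600)  # give the rebinding 5-10 minutes to propagate
assign(WORKSPACE_ID, ORIGINAL_CAPACITY_ID)
```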
If the problem persists after the restart, open a support ticket with Microsoft.
Documentation:
- https://learn.microsoft.com/en-us/fabric/admin/capacity-settings?tabs=power-bi-premium
- https://learn.microsoft.com/en-us/fabric/data-engineering/spark-job-concurrency-and-queueing
- https://learn.microsoft.com/en-us/fabric/data-engineering/spark-detail-monitoring
Hope this helps!
Best regards,
Antoine
Hi @malimahesh,
That 430 TooManyRequestsForCapacity error means you’ve hit Spark’s concurrency/queue limits for your capacity or session API, not necessarily a CPU/memory shortage. It’s common to see this when overall capacity graphs are low but there are lingering sessions or a burst of submissions. Microsoft documents the behavior, including the exact 430 message, here: Spark concurrency and queueing and Job queueing.
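One way to see why a low utilization graph doesn't rule this out: Spark admission is based on the VCores that sessions reserve, not on the CU% shown in the metrics app. The arithmetic below is purely illustrative; the 2 Spark VCores per CU and the burst multiplier are taken from the concurrency documentation (verify them for your SKU), and the per-session node shape is a made-up example.

```python
# Illustrative arithmetic only; confirm the figures for your SKU in the
# "Concurrency limits and queueing" documentation.
capacity_units = 64                     # e.g. an F64
spark_vcores = capacity_units * 2       # documented ratio: 2 Spark VCores per CU
burst_factor = 3                        # documented burst multiplier (verify per SKU)
max_concurrent_vcores = spark_vcores * burst_factor   # 384 for this example

# Hypothetical session shape: 4 medium nodes at 8 VCores each.
vcores_per_session = 4 * 8

max_sessions = max_concurrent_vcores // vcores_per_session
print(f"Roughly {max_sessions} concurrent sessions before new requests queue or get 430")
# Idle or orphaned sessions still hold their reserved VCores, so this ceiling
# can be hit while the capacity metrics app shows almost no CU consumption.
```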
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.