Is there an easy way of tracking, over time, the number of nodes allocated to Spark jobs/notebooks?
We can obviously report on individual running jobs (Monitor) and the number of CUs in use over time (Capacity Metrics), but one issue we're having is Spark jobs grabbing a whole bunch of nodes and then causing the 'not enough nodes' error for other users. I'd like to both monitor node allocations and trigger alerts when we get close to the limit.
I'm already having to go on the hunt for rogue users - I'd like to catch it before it happens.
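One lightweight approach, assuming you can export per-job node counts (for example from the Monitoring hub or from Livy session details), is to aggregate them and flag when total allocation approaches the pool limit. The pool size, 80% threshold, and data shape below are all illustrative assumptions, not Fabric APIs:

```python
# Sketch: alert when total allocated Spark nodes approach the pool limit.
# Assumes you can obtain per-job node counts from somewhere (e.g. an export
# of Monitoring hub data); POOL_MAX_NODES and the threshold are hypothetical.

POOL_MAX_NODES = 32      # hypothetical pool limit
ALERT_THRESHOLD = 0.8    # warn at 80% utilisation

def check_node_pressure(running_jobs):
    """running_jobs: list of dicts like {'job': str, 'nodes': int}.

    Returns totals, utilisation, an alert flag, and the top consumers
    (useful for spotting 'rogue' jobs before the pool is exhausted).
    """
    total = sum(j["nodes"] for j in running_jobs)
    utilisation = total / POOL_MAX_NODES
    heavy = sorted(running_jobs, key=lambda j: j["nodes"], reverse=True)
    return {
        "total_nodes": total,
        "utilisation": utilisation,
        "alert": utilisation >= ALERT_THRESHOLD,
        "top_consumers": [j["job"] for j in heavy[:3]],
    }

jobs = [
    {"job": "nb_sales_etl", "nodes": 12},
    {"job": "nb_ml_training", "nodes": 10},
    {"job": "nb_adhoc_user1", "nodes": 6},
]
status = check_node_pressure(jobs)
print(status["total_nodes"], status["alert"], status["top_consumers"])
```

Run on a schedule (e.g. from a notebook), this could feed an email or Teams notification; the hard part remains sourcing the live node counts, which Fabric doesn't expose in one obvious place.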
Hi @spencer_sa,
Perhaps you can try using a Real-Time Intelligence eventstream with an event-based alert to get notified based on the current idle Spark nodes.
Set alerts on Fabric workspace item events in Real-Time hub - Microsoft Fabric | Microsoft Learn
Regards,
Xiaoxin Sheng