Hi,
I've created a UDF component with a get_element_schedules(token, workspace_id, jobType) function. It returns a dict with the list of pipelines (jobType=Pipeline) or notebooks (jobType=Notebook) in workspace_id and the schedules associated with each pipeline/notebook. The dict is built from a few calls to the Fabric API.
def get_element_schedules(token: str, workspace_id: str, jobType: str) -> dict:
    [....]
    elements_response = requests.get(elements_url, headers=headers)
    elements_data = elements_response.json().get("value", [])
    [...]
    return {"result": result}
The problem is that I want to create get_pipelines_schedules(token, workspace_id) and get_notebooks_schedules(token, workspace_id) as thin wrappers:
def get_pipelines_schedules(token: str, workspace_id: str) -> dict:
    return get_element_schedules(token, workspace_id, jobType='Pipeline')

def get_notebooks_schedules(token: str, workspace_id: str) -> dict:
    return get_element_schedules(token, workspace_id, jobType='Notebook')
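This is roughly how I call them (placeholder values for token and workspace_id, not my real ones):

token = "<my-fabric-api-token>"          # placeholder
workspace_id = "<my-workspace-id>"       # placeholder
print(get_pipelines_schedules(token, workspace_id))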
But I get an error when I call these two functions:
<coroutine object get_job_schedules at 0x76144ee29900>
I've seen this is because of asynchronous calls. I tried converting both the parent and the child functions to async (using httpx instead of requests, since Copilot says requests doesn't support async), but it still doesn't work; I suspect httpx isn't behaving well here.
If I duplicate the whole function as two separate copies, each with its jobType value hard-coded, it works, but as soon as I reuse the shared function I get the coroutine issue.
Any ideas?
Hi @amaaiia,
The string "<coroutine object ...>" appears when an async function is called without await. You donโt need async here unless you truly want concurrency. Use a sync HTTP client and your wrapper functions will work as-is.
import httpx

def get_element_schedules(token: str, workspace_id: str, job_type: str) -> dict:
    headers = {"Authorization": f"Bearer {token}"}
    if job_type == "Pipeline":
        items_url = f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/datapipeline/items"
        schedule_job_type = "Pipeline"
    elif job_type == "Notebook":
        items_url = f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/notebook/items"
        schedule_job_type = "DefaultJob"  # notebooks use DefaultJob for schedules
    else:
        raise ValueError("job_type must be Pipeline or Notebook")

    items = httpx.get(items_url, headers=headers, timeout=60).json().get("value", [])
    result = []
    for item in items:
        item_id = item["id"]
        sched_url = (
            f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
            f"/items/{item_id}/jobs/{schedule_job_type}/schedules"
        )
        schedules = httpx.get(sched_url, headers=headers, timeout=60).json().get("value", [])
        result.append({"itemId": item_id, "name": item.get("displayName"), "schedules": schedules})
    return {"result": result}

def get_pipelines_schedules(token: str, workspace_id: str) -> dict:
    return get_element_schedules(token, workspace_id, job_type="Pipeline")

def get_notebooks_schedules(token: str, workspace_id: str) -> dict:
    return get_element_schedules(token, workspace_id, job_type="Notebook")

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.