alfBI
Contributor II

Refresh SQL Endpoint using semantic link labs: Intermittent failures

Hi,

 

Recently we saw that Microsoft has released Items - Refresh Sql Endpoint Metadata - REST API (SQLEndpoint) | Microsoft Learn, as well as its corresponding implementation in the semantic-link-labs library:

https://github.com/microsoft/semantic-link-labs/wiki/Code-Examples#refresh-sql-endpoint-metadata

 

We preferred the semantic-link-labs implementation because of its simplicity, but after working through all the library-related problems (the libraries have to be included in a Fabric environment in order for the notebook to be usable in a pipeline), we noticed that the notebook execution fails intermittently with the following error message:

 

Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details
- 'Error name - KeyError, Error value - "['Table Name', 'Status', 'Start Time', 'End Time', 'Last Successful Sync Time'] not in index"' :

 

[screenshot: alfBI_0-1752131739513.png]

 

 

The notebook Run logs do not give much detail.

 

Any idea about what is going wrong here?

 

Thanks,

 

Alfons

 

1 ACCEPTED SOLUTION

The final solution was to get rid of the semantic-link-labs approach and call the API directly, as shown here:

Example code using the new fabric rest api · GitHub

Thx


View solution in original post

15 REPLIES
v-dineshya
Honored Contributor II

Hi @alfBI ,

Thank you for reaching out to the Microsoft Community Forum.

 

The error message "KeyError: ['Table Name', 'Status', 'Start Time', 'End Time', 'Last Successful Sync Time'] not in index" typically indicates that the notebook is trying to access columns in a DataFrame that do not exist, because the Lakehouse is empty or the SQL Endpoint metadata has not been initialized properly.

 

Please check the things below to fix the issue.

 

1. Before accessing columns in the notebook, check whether the DataFrame contains the expected columns. Please refer to the sample Python script below.


expected_cols = ['Table Name', 'Status', 'Start Time', 'End Time', 'Last Successful Sync Time']
if all(col in df.columns for col in expected_cols):
    df = df[expected_cols]
else:
    print("Expected columns not found. DataFrame is likely empty.")

 

2. Check that the Lakehouse has at least one table or object before triggering the SQL Endpoint refresh. An empty Lakehouse will cause the API to return an empty response.
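
For example, a minimal pre-check in the notebook could look like this (a sketch, assuming the Lakehouse is attached as the notebook's default lakehouse so that spark can see its tables):

# Sketch: verify the lakehouse actually has tables before refreshing the endpoint.
table_count = spark.sql("SHOW TABLES").count()
if table_count == 0:
    print("Lakehouse is empty; skipping SQL endpoint refresh.")
else:
    print(f"Found {table_count} tables; safe to trigger the refresh.")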

 

3. Place the notebook execution in a try-except block and log errors to help with debugging. Please refer to the sample Python code below (the called function is a placeholder for your notebook logic).

 

try:
    run_notebook_logic()  # placeholder for the notebook execution logic
except KeyError as e:
    print(f"KeyError encountered: {e}")
    # optionally skip or retry

 

I hope this information helps. Please do let us know if you have any further queries.

 

Regards,

Dinesh

Hi v-dineshya,

 

Using semantic-link-labs, the notebook code is extremely simple:

 

 

#%pip install semantic-link-labs

# Welcome to your new notebook
# Type here in the cell editor to add code!
import sempy_labs as labs

item = 'Stage' # Enter the name or ID of the Fabric item
type = 'Lakehouse' # Enter the item type
workspace = 'a0ad263f-c689-480b-bcd2-cc1a5cc9169f' # Enter the name or ID of the workspace

# Example 1: Refresh the metadata of all tables
tables = None
x = labs.refresh_sql_endpoint_metadata(item=item, type=type, workspace=workspace, tables=tables)
display(x)
 

Honestly, I have no idea how to apply your workaround here.

 

Thx

 

The final solution was to get rid of the semantic-link-labs approach and call the API directly, as shown here:

Example code using the new fabric rest api · GitHub
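
A minimal sketch of what the direct call can look like, using sempy's FabricRestClient and the public refreshMetadata route from the REST docs (the ID placeholders and the 5-second polling interval are my own choices, so check the linked gist for the exact details):

import time
import sempy.fabric as fabric

client = fabric.FabricRestClient()

workspace_id = "<workspace-id>"          # hypothetical placeholder
sql_endpoint_id = "<sql-endpoint-id>"    # found on the lakehouse's SQL endpoint item

# Kick off the metadata refresh; the API runs it as a long-running operation.
response = client.post(
    f"/v1/workspaces/{workspace_id}/sqlEndpoints/{sql_endpoint_id}/refreshMetadata",
    json={},
)

# A 202 means the refresh is still running: poll the operation until it completes.
if response.status_code == 202:
    operation_url = response.headers["Location"]
    status = "Running"
    while status in ("Running", "NotStarted"):
        time.sleep(5)
        status = client.get(operation_url).json().get("status")
    print(f"Refresh finished with status: {status}")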

Thx


alfBI
Contributor II

I forgot to add that, curiously, if I open a failed execution

[screenshot: alfBI_0-1752213700550.png]

and rerun from the failed refresh

[screenshot: alfBI_1-1752213735662.png]

 

 

it works. So it looks like, just after tables are ingested into the lakehouse, the API needs some time to notice that the lakehouse has tables. I will try again, adding a wait activity (30 seconds) in front of the refresh.
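
In case a fixed delay is not enough, a retry loop inside the notebook may be more robust. A rough sketch, reusing the variables from the snippet above (the retry count and 30-second backoff are arbitrary choices of mine, not anything prescribed by semantic-link-labs):

import time
import sempy_labs as labs

# Sketch: retry the refresh a few times, since the endpoint may need time
# to notice freshly ingested tables. item/type/workspace/tables as defined earlier.
for attempt in range(5):
    try:
        x = labs.refresh_sql_endpoint_metadata(item=item, type=type, workspace=workspace, tables=tables)
        display(x)
        break
    except KeyError as e:
        print(f"Attempt {attempt + 1} failed with {e!r}; waiting before retry.")
        time.sleep(30)
else:
    raise RuntimeError("SQL endpoint refresh kept failing after 5 attempts")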

 

 

 

v-dineshya
Honored Contributor II

Hi @alfBI ,

Thank you for the response. As you mentioned, the rerun succeeded, and you want to test the issue again by adding a wait activity before the refresh. Once your testing is done, please do let us know if you have any further queries.

 

Regards,

Dinesh

alfBI
Contributor II

Hi,

 

Adding the time delay did not make any difference. It is quite clear that, for some reason, the call to the API that manages the refresh of the SQL Endpoint fails, but I have no idea why.
I have tested scheduling the notebook to run at different times; sometimes it works fine, other times it does not.

 

[screenshot: alfBI_0-1752424170125.png]

Successful execution:

[screenshot: alfBI_1-1752424209838.png]

Failed execution:

[screenshot: alfBI_2-1752424260844.png]

[screenshot: alfBI_3-1752424369502.png]

[screenshot: alfBI_4-1752424381417.png]

 

but I am not able to understand what makes the difference that causes it to fail. Same lakehouse, same tables, ...

 

 

 

Alfons 

 

v-dineshya
Honored Contributor II

Hi @alfBI ,

Thank you for reaching out to the Microsoft Community Forum.

 

You are seeing intermittent failures when using the refresh_sql_endpoint_metadata function from the semantic-link-labs library in Microsoft Fabric, in particular a KeyError about missing DataFrame columns.

 

Please refer to the workarounds below.

 

1. Validate DataFrame columns before access. Add a check before accessing the columns:


expected_cols = ['Table Name', 'Status', 'Start Time', 'End Time', 'Last Successful Sync Time']
if all(col in x.columns for col in expected_cols):
    display(x[expected_cols])
else:
    print("Expected columns not found. DataFrame is likely empty.")

 

Note: This prevents the notebook from failing when the DataFrame is empty.

 

2. Make sure at least one table exists in the Lakehouse before triggering the refresh. You can add a pre-check using the semantic-link-labs API to list the tables and confirm their presence, as in the sketch below.
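
For instance, a pre-check might look like this (a sketch; I am assuming get_lakehouse_tables from sempy_labs.lakehouse returns a pandas DataFrame of the lakehouse tables, so verify the function and its parameters against the version you run):

from sempy_labs.lakehouse import get_lakehouse_tables

# Only trigger the refresh when the lakehouse actually contains tables.
lake_tables = get_lakehouse_tables(lakehouse=item, workspace=workspace)
if lake_tables.empty:
    print("No tables found in the lakehouse; skipping the refresh.")
else:
    x = labs.refresh_sql_endpoint_metadata(item=item, type=type, workspace=workspace, tables=tables)
    display(x)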

 

3. Wrap the refresh logic in a try-except block


try:
    x = labs.refresh_sql_endpoint_metadata(item=item, type=type, workspace=workspace, tables=tables)
    display(x)
except KeyError as e:
    print(f"KeyError encountered: {e}")

 

Note: This helps log errors and lets you optionally retry or skip execution.

 

4. If you are using a multi-step ETL/ELT pipeline, consider forcing a sync of the T-SQL endpoint using Semantic link.

 

I hope this information helps. Please do let us know if you have any further queries.

 

Regards,

Dinesh

SJCuthbertson
New Contributor III

This issue was reported to the semantic-link-labs package maintainers mid-June: SQL endpoint refresh fails on empty lakehouse · Issue #719 · microsoft/semantic-link-labs 

 

and the fix was published in release 0.11.0 of the SLL package: Release semantic-link-labs 0.11.0 · microsoft/semantic-link-labs · GitHub

 

I haven't verified yet but upgrading to use at least this version should resolve the problem, i.e.

 

 

!pip install "semantic-link-labs>=0.11.0"  # quoted so the shell does not treat ">" as a redirect
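
It may also be worth confirming the upgrade actually took effect in the session, since an already-imported older version can linger until the kernel restarts. A small sanity check (nothing semantic-link-labs specific):

from importlib.metadata import version

# Print the installed package version; expect 0.11.0 or later.
print(version("semantic-link-labs"))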

 

 

Hi,

 

We have tried to apply the fix you suggested, using a version later than 0.11.0, but the problem remains. Again, the call to

 

x = labs.refresh_sql_endpoint_metadata(item=artifact_name, type=artifact_type, workspace=workspace, tables=tables)
 
seems to work intermittently. If I run the notebook manually it seems fine, but on the scheduled execution the following error is raised:
 

 

---> 17 x = labs.refresh_sql_endpoint_metadata(item=artifact_name, type=artifact_type, workspace=workspace, tables=tables)
     18 display(x)

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/sempy/_utils/_log.py:371, in mds_log.<locals>.get_wrapper.<locals>.log_decorator_wrapper(*args, **kwargs)
    368 start_time = time.perf_counter()
    370 try:
--> 371     result = func(*args, **kwargs)
    373 # The invocation for get_message_dict moves after the function
    374 # so it can access the state after the method call
    375 message.update(extractor.get_completion_message_dict(result, arg_dict))

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/sempy_labs/_sql_endpoints.py:144, in refresh_sql_endpoint_metadata(item, type, workspace, tables)
    136 if tables:
    137     payload = {
    138         "tableDefinitions": [
    139             {"schema": schema, "tableNames": tables}
    140             for schema, tables in tables.items()
    141         ]
    142     }
--> 144 result = _base_api(
    145     request=f"v1/workspaces/{workspace_id}/sqlEndpoints/{sql_endpoint_id}/refreshMetadata",
    146     method="post",
    147     client="fabric_sp",
    148     status_codes=[200, 202],
    149     lro_return_json=True,
    150     payload=payload,
    151 )
    153 columns = {
    154     "Table Name": "string",
    155     "Status": "string",
    (...)
    160     "Error Message": "string",
    161 }
    163 if result:

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/sempy_labs/_helper_functions.py:2249, in _base_api(request, client, method, payload, status_codes, uses_pagination, lro_return_json, lro_return_status_code)
   2241 response = requests.request(
   2242     method.upper(),
   2243     url,
   2244     headers=headers,
   2245     json=payload,
   2246 )
   2248 if lro_return_json:
-> 2249     return lro(c, response, status_codes).json()
   2250 elif lro_return_status_code:
   2251     return lro(c, response, status_codes, return_status_code=True)

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/sempy_labs/_helper_functions.py:1592, in lro(client, response, status_codes, sleep_time, return_status_code)
   1590     result = response.status_code
   1591 else:
-> 1592     response = client.get(f"/v1/operations/{operationId}/result")
   1593     result = response
   1595 return result

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/sempy/fabric/_client/_rest_client.py:188, in BaseRestClient.get(self, path_or_url, *args, **kwargs)
    169 def get(self, path_or_url: str, *args, **kwargs):
    170     """
    171     GET request to the Fabric and PowerBI REST API.
    172     (...)
    186     The response from the REST API.
    187     """
--> 188     return self.request("GET", path_or_url, *args, **kwargs)

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/sempy/fabric/_client/_rest_client.py:429, in FabricRestClient.request(self, method, path_or_url, lro_wait, lro_max_attempts, lro_operation_name, *args, **kwargs)
    396 def request(self,
    397             method: str,
    398             path_or_url: str,
    (...)
    402             *args,
    403             **kwargs):
    404     """
    405     Request to the Fabric REST API.
    406     (...)
    427     The response from the REST API.
    428     """
--> 429     response = super().request(method, path_or_url, *args, **kwargs)
    431 if not lro_wait or response.status_code != 202:
    432     return response

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/sempy/fabric/_client/_rest_client.py:167, in BaseRestClient.request(self, method, path_or_url, *args, **kwargs)
    164 kwargs["url"] = url
    165 kwargs["headers"] = headers
--> 167 return self.http.request(method, *args, **kwargs)

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/requests/sessions.py:589, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
    584 send_kwargs = {
    585     "timeout": timeout,
    586     "allow_redirects": allow_redirects,
    587 }
    588 send_kwargs.update(settings)
--> 589 resp = self.send(prep, **send_kwargs)
    591 return resp

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/requests/sessions.py:710, in Session.send(self, request, **kwargs)
    707 r.elapsed = timedelta(seconds=elapsed)
    709 # Response manipulation hooks
--> 710 r = dispatch_hook("response", hooks, r, **kwargs)
    712 # Persist cookies
    713 if r.history:
    714     # If the hooks create history then we want those cookies too

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/requests/hooks.py:30, in dispatch_hook(key, hooks, hook_data, **kwargs)
     28     hooks = [hooks]
     29 for hook in hooks:
---> 30     _hook_data = hook(hook_data, **kwargs)
     31     if _hook_data is not None:
     32         hook_data = _hook_data

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/sempy/_utils/_log.py:371, in mds_log.<locals>.get_wrapper.<locals>.log_decorator_wrapper(*args, **kwargs)
    368 start_time = time.perf_counter()
    370 try:
--> 371     result = func(*args, **kwargs)
    373 # The invocation for get_message_dict moves after the function
    374 # so it can access the state after the method call
    375 message.update(extractor.get_completion_message_dict(result, arg_dict))

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/sempy/fabric/_client/_rest_client.py:105, in BaseRestClient.__init__.<locals>.validate_rest_response(response, *args, **kwargs)
    102 @log_rest_response
    103 def validate_rest_response(response, *args, **kwargs):
    104     if response.status_code >= 400:
--> 105         raise FabricHTTPException(response)

FabricHTTPException: 400 Bad Request for url: https://api.fabric.microsoft.com//v1/operations/09343336-2bab-430b-8deb-5bda3c915a0e/result
Error: {"requestId":"ffc1dc36-d14c-4245-9f5e-9dc8ebb2687a","errorCode":"OperationHasNoResult","message":"The operation has no result"}
Headers: {'Cache-Control': 'no-store, must-revalidate, no-cache', 'Pragma': 'no-cache', 'Transfer-Encoding': 'chunked', 'Content-Type': 'application/json; charset=utf-8', 'x-ms-public-api-error-code': 'OperationHasNoResult', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'X-Frame-Options': 'deny', 'X-Content-Type-Options': 'nosniff', 'RequestId': 'ffc1dc36-d14c-4245-9f5e-9dc8ebb2687a', 'Access-Control-Expose-Headers': 'RequestId', 'request-redirected': 'true', 'home-cluster-uri': 'https://wabi-west-europe-b-primary-redirect.analysis.windows.net/', 'Date': 'Mon, 15 Sep 2025 03:21:49 GMT'}

 

Any idea about what is wrong? This seems extremely difficult to use.

v-dineshya
Honored Contributor II

Hi @alfBI ,

It appears this issue might require deeper investigation from the Power BI support team. I recommend opening a Microsoft support ticket so they can trace the issue.

To raise a support ticket for Fabric and Power BI, kindly follow the steps outlined in the following guide:

How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn

 

Regards,

Dinesh

Well, we have done something similar by filing a bug here:

Issues · microsoft/semantic-link-labs

 

Hope it helps,

 

Alfons

 

v-dineshya
Honored Contributor II

Hi @alfBI ,

Thank you for the update. Once you get an update on the Microsoft GitHub issue, please keep us posted.

 

Regards,

Dinesh

Hi,

 

A bug was fixed in the latest release, 0.12.3.

Make sure to use this version and the new required parameters (timeout value and timeout unit).

 

Sample call:

 

x = labs.refresh_sql_endpoint_metadata(item='XXXX', type='Lakehouse', workspace='XXXX', timeout_unit='Seconds',timeout_value='60')
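
Until the intermittent HTTP failures are fully fixed, a defensive retry around the call may also help. A sketch, assuming FabricHTTPException can be imported from sempy.fabric.exceptions as the traceback above suggests (the retry count and delay are arbitrary):

import time
import sempy_labs as labs
from sempy.fabric.exceptions import FabricHTTPException  # assumed import path

# Retry the refresh a few times, since the failures are intermittent.
for attempt in range(3):
    try:
        x = labs.refresh_sql_endpoint_metadata(
            item='XXXX', type='Lakehouse', workspace='XXXX',
            timeout_unit='Seconds', timeout_value='60',
        )
        break
    except FabricHTTPException as e:
        if attempt == 2:
            raise  # give up after the last attempt
        print(f"Attempt {attempt + 1} failed: {e}; retrying in 30s")
        time.sleep(30)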

 

If someone is interested in following how the issue evolves, check this thread:

 

https://github.com/microsoft/semantic-link-labs/issues/870#issuecomment-3301567878

 

SJCuthbertson
New Contributor III

This error that you've pasted here, and raised via sempy-labs GitHub as issue #870, is completely unrelated to the error that you originally started this community forum discussion with. 

 

The original one was a KeyError, to do with what tables you were refreshing or what tables exist in the lakehouse. 

 

This one is a FabricHTTPException from a totally different part of the code. They're not related. 

 

I'm not sure of the underlying cause of these intermittent FabricHTTPException errors (github #870) and I'm not at all sure if it's a problem in sempy-labs. I think more likely it's a problem in the API itself, but let's wait for v0.12.4 of sempy-labs to get the complete fix that Michael has started there. 

 

I have a Fabric support case open with Microsoft about this too and I would recommend you do the same, because more support cases will mean more attention. 

Yes, the original problem in this topic was related to a change in the parameters supported by the API, and as a result by sempy-labs (the tables parameter was no longer supported). That problem was fixed in the latest release, 0.12.3. As you said, though, this fix has not definitively solved the problem of refreshing the SQL endpoint, as an HTTP exception is still triggered (we also noticed this problem before 0.12.3). Let's see whether the problem lies in sempy-labs or in the API itself (in the latter case I agree with you about creating a Fabric support ticket). Thanks for the clarification for future readers of this post.
