
DennesTorres
Valued Contributor

Identify a shortcut in pyspark notebook

Hi,

 

Shortcuts are very powerful and a great feature for enabling a data mesh architecture.

 

However, thinking at the corporation level, it may be hard to manage who links with whom. How could we identify a shortcut in a notebook?

 

I already tried:

 

metadata

os.listdir

mssparkutils.fs.ls

show tblproperties

spark.catalog.listTables

 

Nothing worked; they all follow the shortcut and give no clue that it's not a local table.

 

Fabric has the information, of course, it changes the icon for shortcuts and shows in the lineage. But how could we access this information in a notebook to improve our shortcut management?

 

Kind Regards,

 

Dennes

5 REPLIES
Anonymous
Not applicable

Hi @DennesTorres  - Thanks for using Fabric Community,

I have reached the internal team for help on this. I will update you once I hear from them.

Appreciate your patience.

Anonymous
Not applicable

Hi @DennesTorres ,

Apologies for the delay in reply from our side. 

The Fabric release plan states that this feature (create and manage shortcuts via REST API) will be available during Q4 2023: Link


Appreciate your patience
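Once that REST API ships, listing shortcuts from a notebook should come down to one GET request per item. The sketch below is a guess at the shape, not official sample code: the endpoint path follows the pattern mentioned in this thread, and the field names in the sample payload are assumptions.

```python
# Hedged sketch: path builder and response parser for the planned
# "list shortcuts" REST API. The payload shape is an assumption.

def shortcuts_endpoint(workspace_id: str, item_id: str) -> str:
    # Build the relative path for listing shortcuts on an item (e.g. a lakehouse).
    return f"/v1/workspaces/{workspace_id}/items/{item_id}/shortcuts"

def shortcut_names(response_json: dict) -> list:
    # Pull the shortcut names out of the response body's "value" array.
    return [s["name"] for s in response_json.get("value", [])]

# Made-up payload shaped like the expected response:
sample = {"value": [{"name": "SalesOrders", "path": "Tables"}]}
print(shortcuts_endpoint("my-ws-id", "my-lakehouse-id"))
print(shortcut_names(sample))  # ['SalesOrders']
```

In a notebook you would pass the real workspace and item GUIDs and send the request with an authenticated client.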

Anonymous
Not applicable

Hi @DennesTorres ,

We haven't heard back from you on the last response and wanted to check whether your query was answered. If not, reply and we will follow up with more details to help.


Syliano
New Contributor

I resolved this by using sempy:

import sempy.fabric as fabric
import json
import requests
import fnmatch
import base64

# Get metadata
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType


# Retrieve the workspace items from Fabric
ws = fabric.get_workspace_id()
items = fabric.list_items(workspace=ws)

# Build a list of dictionaries from the pandas DataFrame
items_data = items.to_dict("records")

# print(items_data)

# Define the schema (adapt it to your columns)
schema = StructType([
    StructField("Id", StringType(), True),
    StructField("Display Name", StringType(), True),
    StructField("Description", StringType(), True),
    StructField("Type", StringType(), True),
    StructField("Workspace Id", StringType(), True),
])

# Create the Spark DataFrame
items_df = spark.createDataFrame(items_data, schema)

# Create a SparkSQL view
items_df.createOrReplaceTempView("items_view")

# Get the current lakehouse (here named "Bronze")
item = items[(items['Display Name'] == "Bronze") & (items['Type'] == "Lakehouse")].iloc[0]["Id"]

# Call the shortcuts API
client = fabric.FabricRestClient()
r = client.request(method="get", path_or_url=f"/v1/workspaces/{ws}/items/{item}/shortcuts")

# Update or create a shortcut to another workspace
 

Did this code help you?
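Building on the reply above, one way to answer the original question ("is this table a shortcut or a local table?") is to cross-check the table names Spark sees against the names returned by the shortcuts endpoint. The helper below is a minimal pure-Python sketch under that assumption; all table and shortcut names are made up.

```python
# Hedged sketch: flag which tables are shortcuts by intersecting Spark's
# catalog table names with the names returned by the shortcuts API.

def classify_tables(table_names, shortcut_names):
    # Map each table to "shortcut" if its name appears in the shortcut
    # list returned by the API, otherwise to "local".
    shortcuts = set(shortcut_names)
    return {t: ("shortcut" if t in shortcuts else "local") for t in table_names}

# In a notebook, table_names could come from
#   [t.name for t in spark.catalog.listTables()]
# and shortcut_names from parsing r.json()["value"] above.
tables = ["sales", "customers", "orders"]
shortcuts = ["customers"]
print(classify_tables(tables, shortcuts))
# {'sales': 'local', 'customers': 'shortcut', 'orders': 'local'}
```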

 
