The Nekt SDK makes it easy to interact with your data lake through code. It turns complex operations into simple, intuitive methods, so you can load, save, and manage data without worrying about low-level technical details.
Explore the SDK using Google Colab or check all templates.

How to use Nekt SDK

  1. Generate a token with access to the appropriate resources (tables, volumes, and secrets). This is the safest way to secure your resources outside the Nekt environment.
  2. Install the SDK (already included in our templates):
pip install git+https://github.com/nektcom/nekt-sdk-py.git#egg=nekt-sdk
  3. Initialize the SDK with your token:
import nekt

# Add your token here to initialize the SDK:
nekt.data_access_token = "MY_SECRET_TOKEN" 
Now you can work with data from your Lakehouse.

Nekt SDK methods

All available methods are listed below.
import nekt

# Add your token here to initialize the SDK:
nekt.data_access_token = "MY_SECRET_TOKEN" 
Provide a token that has access to the resources you need. Generate your token here.
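Hardcoding the token is fine for a quick notebook, but in shared code you may prefer to read it from an environment variable. A minimal sketch; the variable name NEKT_DATA_ACCESS_TOKEN is just an example chosen for illustration, not an official Nekt convention:

```python
import os


def get_data_access_token(env_var: str = "NEKT_DATA_ACCESS_TOKEN") -> str:
    """Read the SDK token from an environment variable.

    NEKT_DATA_ACCESS_TOKEN is an illustrative name, not an official
    Nekt convention; use whatever name fits your deployment.
    """
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"Environment variable {env_var} is not set")
    return token
```

Then initialize the SDK with `nekt.data_access_token = get_data_access_token()` in your script or notebook.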
Load a table from your lakehouse as a Spark DataFrame using the .load_table() method.
Parameters:
  • layer_name (str): The name of the layer where the table is located
  • table_name (str): The name of the table you want to load
Layer and table names must use the same capitalization as you use on your Catalog.
Returns: Spark DataFrame
Example:
import nekt

df = nekt.load_table(
   layer_name="Bronze",
   table_name="pipedrive_deals"
)
We recommend using type hinting to make this explicit:
import nekt
from pyspark.sql import DataFrame

deals_df: DataFrame = nekt.load_table(
   layer_name="Bronze",
   table_name="pipedrive_deals"
)
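Once loaded, the result is an ordinary Spark DataFrame, so the usual transformations apply. A quick sketch; the `status` and `value` columns are hypothetical names chosen for illustration and may not exist in your table:

```python
import nekt
from pyspark.sql import functions as F

deals_df = nekt.load_table(
    layer_name="Bronze",
    table_name="pipedrive_deals",
)

# Standard Spark transformations work as usual. "status" and "value"
# are hypothetical column names used only for illustration.
won_df = deals_df.filter(F.col("status") == "won")
won_df.agg(F.sum("value").alias("total_value")).show()
```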
Save a Spark DataFrame as a table in your lakehouse using the .save_table() method.
Parameters:
  • df (DataFrame): The Spark DataFrame to save
  • layer_name (str): The name of the layer where the table will be saved
  • table_name (str): The name of the table to create
  • folder_name (str, optional): The folder name within the layer. If not provided, the table will be saved in the root of the layer
Layer, table and folder names must use the same capitalization as you use on your Catalog.
Returns: bool (success status)
Example:
import nekt

nekt.save_table(
   df=transformed_df,
   layer_name="Transformation",
   table_name="customer_metrics",
   folder_name="analytics"
)
Example without folder:
nekt.save_table(
   df=transformed_df,
   layer_name="Transformation",
   table_name="customer_metrics"
)
.save_table() currently supports only overwrite mode, so your table will always contain only the data written by the most recent call of this method.
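Because only overwrite mode is available, appending rows has to be emulated by hand: load the current table, union the new rows, and overwrite with the combined result. A minimal sketch, assuming the table already exists and `new_rows_df` is a DataFrame you built with matching columns:

```python
import nekt

# Emulate an append under overwrite-only semantics:
# overwrite the table with (existing rows + new rows).
existing_df = nekt.load_table(
    layer_name="Transformation",
    table_name="customer_metrics",
)
combined_df = existing_df.unionByName(new_rows_df)  # new_rows_df: your new rows

nekt.save_table(
    df=combined_df,
    layer_name="Transformation",
    table_name="customer_metrics",
)
```

Note that re-running this on the same input duplicates rows, so deduplicate first if your pipeline can re-run.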
Access the shared Spark session instance using the .get_spark_session() method. This is useful when you need direct access to Spark operations, like creating a new DataFrame with a custom schema or data.
Returns: SparkSession object
Example:
import nekt

# Get the Spark session
spark = nekt.get_spark_session()

# Create a DataFrame with custom schema
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

schema = StructType([
   StructField("id", IntegerType(), True),
   StructField("name", StringType(), True)
])

data = [(1, "Alice"), (2, "Bob")]
custom_df = spark.createDataFrame(data, schema)

# Create a DataFrame from a range
range_df = spark.range(1000)

# Read files directly (useful when used with `.load_volume`)
csv_df = spark.read.csv("path/to/file.csv", header=True)
Load a volume from your lakehouse using the .load_volume() method. Volumes allow you to store and access unstructured data files.
Parameters:
  • layer_name (str): The name of the layer where the volume is located
  • volume_name (str): The name of the volume you want to load
Layer and volume names must use the same capitalization as you use on your Catalog.
Returns: List[Dict[str, str]] containing the file paths in the volume
Example:
import nekt

files = nekt.load_volume(
   layer_name="Raw",
   volume_name="csv_uploads"
)

# Access the file paths
for file in files:
   print(file['path'])
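.load_volume() pairs naturally with the shared Spark session: list the files in the volume, then read them with spark.read. A sketch, assuming the volume contains CSV files:

```python
import nekt

files = nekt.load_volume(layer_name="Raw", volume_name="csv_uploads")
spark = nekt.get_spark_session()

# spark.read.csv accepts a list of paths, so every file in the
# volume can be read into a single DataFrame.
paths = [f["path"] for f in files]
csv_df = spark.read.csv(paths, header=True)
```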
Load a secret value from your organization’s secrets vault using the .load_secret() method. Secrets are useful for storing sensitive information like API keys and credentials.
Parameters:
  • key (str): The secret key to retrieve
Key name must use the same capitalization as you use on your Catalog.
Returns: str (the secret value)
Example:
import nekt
import requests

api_key = nekt.load_secret(key="MY_SECRET_API_KEY")

# Use the secret in your code
response = requests.get(
   "https://api.example.com/data",
   headers={"Authorization": f"Bearer {api_key}"}
)
Make sure your token has permission to access the secret you’re trying to load.
Load a Delta table from your lakehouse using the .load_delta_table() method. All data in your lakehouse is stored in Delta format, and you can leverage that directly if needed (Delta tables provide ACID transactions and time-travel capabilities).
Parameters:
  • layer_name (str): The name of the layer where the Delta table is located
  • table_name (str): The name of the Delta table you want to load
Layer and table names must use the same capitalization as you use on your Catalog.
Returns: DeltaTable object
Example:
import nekt

delta_table = nekt.load_delta_table(
   layer_name="Transformation",
   table_name="customer_data"
)

# You can now use Delta table operations like time travel
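For instance, the standard delta-spark API exposes the table's commit history and a DataFrame view of its current state. A sketch using those standard calls; how far back time travel works depends on the table's retention settings:

```python
import nekt

delta_table = nekt.load_delta_table(
    layer_name="Transformation",
    table_name="customer_data",
)

# Inspect the commit history (standard delta-spark API).
delta_table.history().select("version", "timestamp", "operation").show()

# Materialize the current state as a regular Spark DataFrame.
current_df = delta_table.toDF()
```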

Need Help?

If you encounter any issues with our SDK or if you have feedback, reach out to our support team. We are here to help.