A pipeline in Nekt is a workflow that moves data through the platform:
  • Source → Catalog when a source extraction runs
  • Catalog → Catalog when a query, notebook, or history runs
  • Catalog → Destination when data is loaded into an external tool
Pipelines are the structure that handles data movement, keeping it orchestrated and monitored.

Configuring pipelines

Whenever you create a source, query, notebook, history, or destination, the platform guides you through configuring its pipeline: when it runs, how data is synced, and so on. As your workspace grows, you can orchestrate pipelines together. For example: when source X finishes extracting, trigger query Y; when that run completes, send the result to destination Z. This chains the executions of a source pipeline, a query pipeline, and a destination pipeline.
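The chaining pattern above (source X → query Y → destination Z) can be sketched as event-based triggering, where one pipeline's completion fires the pipelines that depend on it. This is a minimal illustrative model, not Nekt's actual API; all names here are hypothetical.

```python
# Hypothetical sketch of event-based orchestration (not Nekt's real API):
# when a pipeline's run completes, it triggers its downstream pipelines.

class Pipeline:
    def __init__(self, name):
        self.name = name
        self.downstream = []   # pipelines triggered when this one completes

    def then(self, nxt):
        """Declare that `nxt` runs after this pipeline completes."""
        self.downstream.append(nxt)
        return nxt             # returned so chains can be written fluently

    def run(self, log):
        log.append(self.name)          # record this execution
        for nxt in self.downstream:    # completion event fires dependents
            nxt.run(log)

# Source X -> query Y -> destination Z
source_x = Pipeline("source X")
query_y = Pipeline("query Y")
dest_z = Pipeline("destination Z")
source_x.then(query_y).then(dest_z)

log = []
source_x.run(log)
print(log)  # -> ['source X', 'query Y', 'destination Z']
```

The point of the sketch is the ordering guarantee: query Y never starts before source X has finished, and destination Z only receives the result once query Y's run completes.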

Controlling pipelines

Pipelines are controlled by triggers (scheduled, event-based, or manual) that determine when they run. Every execution creates a run, and you can monitor success through the run's status and logs.
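To make the trigger/run relationship concrete, here is a minimal sketch of the model described above: each execution produces a run object carrying a trigger type, a status, and logs. The class and function names are assumptions for illustration, not Nekt's API.

```python
# Hypothetical sketch (assumed names, not Nekt's API): each execution
# creates a run with a status and logs that can be inspected afterwards.
from dataclasses import dataclass, field

TRIGGER_TYPES = {"scheduled", "event-based", "manual"}

@dataclass
class Run:
    trigger: str
    status: str = "running"
    logs: list = field(default_factory=list)

def execute(trigger: str, work) -> Run:
    """Run `work`, recording status and logs on the resulting Run."""
    assert trigger in TRIGGER_TYPES
    run = Run(trigger)
    try:
        work(run.logs)           # the pipeline's actual data movement
        run.status = "success"
    except Exception as exc:
        run.logs.append(str(exc))
        run.status = "failed"
    return run

ok = execute("manual", lambda logs: logs.append("loaded 120 rows"))
print(ok.status)  # -> success
```

Whatever the trigger type, monitoring works the same way: check the run's status, then read its logs to see what happened during that execution.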