Every time we trigger the extraction of a source, the execution of a transformation, or the sending of data to a destination, we call it a run. You can monitor your runs to make sure each process completes without errors.
A run is an individual execution of a data pipeline (e.g. a data source extraction, a data transformation, or a destination load). Every time a pipeline is executed, a new record is added to the list of runs. By tracking each execution, we ensure we’re working with the most up-to-date data. A run is created every time we trigger a pipeline. These are the types of trigger:
Scheduled: Set the pipeline to run at specific times. Let’s say you need a monthly sales report: you can schedule the pipeline to run once a month, just before the meeting where you present the report. If you work with an operation that needs regularly refreshed data, your pipeline can run every 12 hours, for example.
Event-triggered: Run the pipeline whenever a specific event occurs, such as the completion of a source extraction.
Manually triggered: Execute the pipeline on demand, which is useful for generating reports with the most up-to-date information whenever you need them.
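The trigger types above all produce the same result: a new entry in the list of runs. As a minimal sketch (the `Run` and `trigger_run` names are illustrative, not the product’s actual API), the idea can be modeled like this:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Literal

# Hypothetical model: each trigger type is just a label on the run record.
TriggerType = Literal["scheduled", "event", "manual"]

@dataclass
class Run:
    pipeline: str
    trigger: TriggerType
    status: str = "running"
    started_at: datetime = field(default_factory=datetime.utcnow)

runs: list[Run] = []

def trigger_run(pipeline: str, trigger: TriggerType) -> Run:
    """Whatever the trigger type, each trigger appends a new run record."""
    run = Run(pipeline=pipeline, trigger=trigger)
    runs.append(run)
    return run

# The monthly schedule fires once, then a user clicks "run now":
# two triggers, two separate entries in the run list.
trigger_run("monthly_sales_report", "scheduled")
trigger_run("monthly_sales_report", "manual")
```

The point of the sketch is that scheduled, event-based, and manual triggers differ only in what initiates them; the run record they create is the same.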
You’ll find a list of runs with their status, pipeline name, and some metadata about each run. You can filter the list by status and/or pipeline, so you can easily check the state of a specific pipeline.
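Combining the status and pipeline filters works like applying each supplied condition in turn. A small sketch (the field names `status` and `pipeline` mirror the list described above; the function itself is illustrative):

```python
def filter_runs(runs, status=None, pipeline=None):
    """Keep only the runs that match every filter that was supplied."""
    return [
        r for r in runs
        if (status is None or r["status"] == status)
        and (pipeline is None or r["pipeline"] == pipeline)
    ]

runs = [
    {"pipeline": "sales_extract", "status": "completed"},
    {"pipeline": "sales_extract", "status": "failed"},
    {"pipeline": "crm_load", "status": "completed"},
]

# Filter by status alone, or narrow further to a single pipeline.
failed = filter_runs(runs, status="failed")
sales = filter_runs(runs, pipeline="sales_extract")
```

Leaving a filter unset means it doesn’t constrain the result, which is why filtering by pipeline alone shows every run of that pipeline regardless of status.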