Frequently asked questions.
I have my source connected, but its tables are empty in the Catalog. Why?
You probably haven’t had any successfully completed runs yet. When a source is created, the tables are already identified and created in the Catalog, but the data only becomes available once a successful run has occurred. Make sure you’ve triggered a manual run or wait for the scheduled automatic trigger. Once the run is complete, your data should be visible in the Catalog.
How can I prevent certain users from accessing specific sources, transformations, or destinations?
Access to sources, destinations, and transformations is determined by table and layer access permissions. For a member user to have a certain level X of access to a source, destination, or transformation, they must have the same level X (or higher) access to each of the tables involved. You can read more about permissions in Permissions.
Does Nekt apply any processing to the data when extracting it from the source?
No! We extract data exactly as it is sent from the APIs, so the data you see in Nekt is the same as what you would see if you used any other tool to hit the endpoint and view the result. The only modifications you might notice are: column names are always standardized to snake_case, and a new column called nekt_sync_at is added for internal processing.
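As an illustration only (this is not Nekt's actual code), the two modifications mentioned above could be sketched like this, with a simple camelCase-to-snake_case conversion and a UTC timestamp column:

```python
import re
from datetime import datetime, timezone

def to_snake_case(name: str) -> str:
    """Normalize a field name to snake_case (e.g. 'createdAt' -> 'created_at')."""
    name = re.sub(r"[\s\-]+", "_", name)                 # spaces/hyphens -> underscores
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)  # split camelCase boundaries
    return name.lower()

def normalize_record(record: dict) -> dict:
    """Rename keys to snake_case and stamp a sync-time column."""
    out = {to_snake_case(k): v for k, v in record.items()}
    out["nekt_sync_at"] = datetime.now(timezone.utc).isoformat()
    return out

print(normalize_record({"createdAt": "2024-01-01", "Order ID": 7}))
```

The record values themselves pass through untouched; only the key names and the extra timestamp column change.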
What is the difference between a Full Sync and an Incremental Sync?
A Full Sync ignores previously extracted data and makes a copy of all data from the source, exactly as it exists at the time of extraction. Incremental Sync is more efficient and only fetches the data that has changed since the last extraction. Read more in Sync Types.
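Conceptually, the difference boils down to whether the extraction filters on a last-modified field. A rough sketch (the function and field names here are hypothetical, not Nekt's API):

```python
from typing import Optional

def build_query(table: str,
                replication_key: Optional[str],
                last_synced: Optional[str]) -> str:
    """Full sync copies everything; incremental only fetches rows
    changed since the last successful run."""
    base = f"SELECT * FROM {table}"
    if replication_key and last_synced:
        # Incremental: filter on the record's last-modified field
        return f"{base} WHERE {replication_key} > '{last_synced}'"
    # Full: re-extract the whole table
    return base

print(build_query("orders", None, None))
print(build_query("orders", "last_updated_at", "2024-06-01T00:00:00Z"))
```

The first call models a Full Sync (whole table), the second an Incremental Sync that only pulls rows modified after the last run.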
Why can’t I set Incremental Sync for some streams?
The use of Incremental Sync depends on the existence of a field that identifies when each record was last modified. Typically, it’s a column called “last_updated_at” or something similar. If the API doesn’t return this field, or if you didn’t specify it correctly when creating the source, Incremental Sync won’t be possible.
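To make the requirement concrete, a stream is only incremental-capable if its schema exposes a usable last-modified field. A minimal sketch, with hypothetical candidate field names:

```python
# Common names for a last-modified field (illustrative, not exhaustive)
CANDIDATE_KEYS = ("last_updated_at", "updated_at", "modified_at")

def replication_key(fields):
    """Return the first field usable as an incremental replication key, or None."""
    for candidate in CANDIDATE_KEYS:
        if candidate in fields:
            return candidate
    return None

print(replication_key(["id", "name", "updated_at"]))  # a key exists -> incremental possible
print(replication_key(["id", "name"]))                # None -> only Full Sync possible
```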
How much will I spend on my Cloud server account because of Nekt?
Although we can make some predictions based on information you provide, the best way to estimate your costs is to evaluate the cost of each resource’s initial runs and extrapolate that to your expected future usage.
Read more about how to monitor this in Cloud, and feel free to reach out if you have any questions!
How is the security and privacy of my data ensured at Nekt?
Nekt follows industry best practices for security and privacy. We ensure that your data stays within your cloud environment, minimizing risk and maintaining control over your data.
Will I lose my data if I cancel my subscription?
No. Canceling your subscription won’t delete your data. Nekt doesn’t store your data, so everything stays safely in your own cloud infrastructure unless you choose to export it.
What is Nekt credit?
Nekt credits are used to pay for the value delivered as Nekt manages and monitors your data pipelines (1 credit = 1 minute of pipeline run time). We don’t charge for rows, number of connectors, or user seats. Failed pipelines and initial data source loads are also free.
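Since 1 credit equals 1 minute of pipeline run time, a back-of-the-envelope estimate of monthly usage is straightforward (the function below is just a worked example, not part of any Nekt tooling):

```python
def monthly_credits(minutes_per_run: float, runs_per_day: int, days: int = 30) -> float:
    """Estimate credits for a month: 1 credit = 1 minute of pipeline run time."""
    return minutes_per_run * runs_per_day * days

# A pipeline that takes 10 minutes and runs 4 times a day:
print(monthly_credits(10, 4))  # 1200 credits over 30 days
```

Remember that failed pipelines and initial data source loads don’t consume credits, so real usage may be lower than this upper bound.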
I already have a source connected. If new columns are added in this source, will Nekt automatically import them?
If you selected to sync all fields when connecting the source, Nekt considers that you want everything to be synced, including new columns. In this case, the “Auto-add fields” flag will be active (you can check it on the source details screen).
Whether the new fields will be populated for all rows or just for newly imported ones depends on the sync type:
With a Full Sync, the new fields are populated for all rows, since every run copies the entire table again. With an Incremental Sync, only newly imported rows include values for the new fields; previously extracted rows will show null in that column unless a full sync is triggered manually.
Can I delete tables?
If you have permission to do so, you can delete not only tables but also sources, destinations, and visualizations. The only limitation is that you cannot delete a resource that is being used elsewhere; for example, you can’t delete a table that is used in a transformation. It is also important to note that deleting a source doesn’t automatically delete the tables associated with it. If you want to delete those tables as well, you have to do so individually.
Why is my run taking a long time?
The time it takes for a run to finish depends mostly on the amount of data being processed and on how powerful the machine dedicated to it is. You can add more resources to a specific pipeline if you need it to run faster or to handle a larger amount of data, but keep in mind this has some impact on costs.
Also, when you add a new table to the Catalog, the process has two steps, so the run might take a bit longer than you were expecting. Read more about how data extraction happens here.