
location_id so you can filter or join across sub-accounts in the catalog.
Configuring HighLevel as a Source
In the Sources tab, click the “Add source” button at the top right of your screen. Then select the HighLevel option from the list of connectors. Click Next and you’ll be prompted to add your access.

1. Add account access
Authorize Nekt with OAuth using an agency user that can access the locations you want to sync. The following configurations are available:

- OAuth (refresh token): Sign in and grant access so Nekt can call the HighLevel API on your behalf.
- Location IDs: The sub-account location IDs to include in the sync. See HighLevel help for how to find a location ID.
- Start date: Earliest point in time for incremental streams (`contacts`, `opportunities`). Records updated on or after this date are considered for historical loads and incremental bookmarks.
2. Select streams
Choose which data streams you want to sync. For faster extractions, select only the streams that are relevant to your analysis. You can select entire groups of streams or pick specific ones.

Tip: a stream can be found more easily by typing its name.

Select the streams and click Next.
3. Configure data streams
Customize how you want your data to appear in your catalog. Select the layer where the data will be placed, a folder to organize it inside the layer, a name for each table (which will contain the fetched data), and the type of sync.

- Layer: choose between the existing layers in your catalog. This is where you will find your new extracted tables once the extraction runs successfully.
- Folder: a folder can be created inside the selected layer to group all tables being created from this new data source.
- Table name: we suggest a name, but feel free to customize it. You have the option to add a prefix to all tables at once and make this process faster!
- Sync Type: you can choose between INCREMENTAL and FULL_TABLE.
  - Incremental (`contacts`, `opportunities`): each run fetches updates since the last successful bookmark (driven by API update timestamps and your Start date).
  - Full table (`locations`, `custom_fields`, `campaigns`, `pipelines`): each run replaces the current snapshot for that table.
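The two sync modes can be sketched as follows. This is an illustrative model only: the function names and record shapes are not part of the Nekt or HighLevel API, and timestamps are compared as same-format ISO strings for simplicity.

```python
def incremental_sync(records, bookmark):
    """INCREMENTAL (sketch): keep records updated at or after the
    bookmark, then advance the bookmark to the newest update seen."""
    fetched = [r for r in records if r["date_updated"] >= bookmark]
    new_bookmark = max((r["date_updated"] for r in fetched), default=bookmark)
    return fetched, new_bookmark

def full_table_sync(records):
    """FULL_TABLE (sketch): every run replaces the whole snapshot."""
    return list(records)

# Hypothetical contact records with API update timestamps.
contacts = [
    {"id": "c1", "date_updated": "2024-01-05T10:00:00Z"},
    {"id": "c2", "date_updated": "2024-03-01T09:30:00Z"},
]

# Only c2 was updated after the bookmark; the bookmark then advances.
rows, bookmark = incremental_sync(contacts, "2024-02-01T00:00:00Z")
```

A full-table run would instead emit both records on every execution, regardless of the bookmark.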
4. Configure data source
Describe your data source for easy identification within your organization, not exceeding 140 characters. To define your Trigger, consider how often you want data to be extracted from this source. This usually depends on how frequently you need the table data updated (every day, once a week, or only at specific times). Optionally, you can define some additional settings:

- Configure Delta Log Retention to determine how long we should store old states of this table as it gets updated. Read more about this resource here.
- Determine when to execute an Additional Full Sync. This will complement the incremental data extractions, ensuring that your data is completely synchronized with your source every once in a while.
5. Check your new source
You can view your new source on the Sources page. If needed, manually trigger the source extraction by clicking on the arrow button. Once executed, your data will appear in your Catalog.

Streams and Fields
Below you’ll find all available data streams from HighLevel and their corresponding fields:

Locations
Reference stream that emits one row per configured Location ID, with the agency company identifier. Child streams (contacts, opportunities, etc.) run in the context of each location.

Key fields:

- `location_id` - Sub-account location identifier
- `company_id` - Agency company identifier associated with the OAuth token
Contacts
Contacts for each location, loaded via the search API. Incremental replication uses `date_updated`.

Identifiers:

- `id` - Contact identifier (primary key)
- `location_id` - Location scope

Other fields:

- `first_name`, `last_name`, `first_name_lower_case`, `last_name_lower_case`
- `email`, `valid_email`, `phone`, `phone_label`
- `company_name`, `business_name`, `business_id`, `website`
- `address`, `city`, `state`, `country`, `postal_code`
- `date_of_birth` - Unix timestamp from the API
- `type`, `source`, `assigned_to`, `followers`, `tags`, `dnd`
- `additional_emails` - Array of objects: `email`, `valid_email_date`
- `additional_phones` - Array of objects: `phone`, `phone_label`
- `custom_fields` - Array of objects: `id` (field definition), `value` (string; non-string API values are coerced to string or JSON text)
- `opportunities` - JSON string containing opportunity payloads returned on the contact (serialized for catalog compatibility)
- `date_added`, `date_updated` (replication key)
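Because nested contact structures land in the catalog as strings, downstream code usually has to decode them before use. A minimal sketch, assuming a hypothetical contact row and field id (`cf_budget` is invented for illustration):

```python
import json

# Illustrative contact row as it might appear in the catalog:
# custom field values are strings, opportunities is JSON text.
contact = {
    "id": "c1",
    "custom_fields": [{"id": "cf_budget", "value": "5000"}],
    "opportunities": '[{"id": "o1", "status": "open"}]',
}

# custom_fields values are strings even when the source value was numeric,
# so cast explicitly before doing arithmetic.
budget = next(f["value"] for f in contact["custom_fields"] if f["id"] == "cf_budget")
budget_num = float(budget)

# opportunities is a serialized JSON string; decode it before use.
opps = json.loads(contact["opportunities"])
```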
Custom fields
Custom field definitions for the location (for example, fields attached to contacts or opportunities).

Key fields:

- `id` - Field definition identifier
- `location_id` - Location scope
- `name`, `field_key`, `model`, `data_type`, `placeholder`, `position`
- `document_type`, `parent_id`, `standard`
- `picklist_options` - Array of allowed labels (normalized from API picklist options)
- `date_added`
Campaigns
Marketing campaigns for the location.

Key fields:

- `id` - Campaign identifier
- `name` - Campaign name
- `status` - Campaign status from the API
- `location_id` - Location scope
Pipelines
Opportunity pipelines and their stages for the location.

Key fields:

- `id` - Pipeline identifier
- `name` - Pipeline name
- `location_id` - Location scope
- `show_in_funnel`, `show_in_pie_chart`, `origin_id`
- `date_added`, `date_updated`
- `stages` - Array of stage objects: `id`, `name`, `position`, `show_in_pie_chart`, `show_in_funnel`, `origin_id`
Opportunities
Opportunities (deals) per location from the search API. Incremental replication uses `updated_at` (filtered via API `date_updated`).

Identifiers:

- `id` - Opportunity identifier
- `location_id` - Location scope
- `contact_id` - Related contact
- `pipeline_id`, `pipeline_stage_id`, `pipeline_stage_u_id`

Other fields:

- `name`, `status`, `source`, `monetary_value` (string), `assigned_to`, `followers`
- `lost_reason_id`, `index_version`
- `created_at`, `updated_at` (replication key)
- `last_status_change_at`, `last_stage_change_at`, `last_action_date`
- `contact` - Embedded contact object: `id`, `name`, `company_name`, `email`, `phone`, `tags`
- `notes`, `tasks`, `calendar_events` - Arrays of string identifiers or payloads from the API
- `custom_fields` - Array of objects: `id`, `field_value` (string; complex values serialized to string)
- `relations` - Array of related records: `association_id`, `relation_id`, `primary`, `object_key`, `record_id`, `full_name`, `contact_name`, `company_name`, `email`, `phone`, `tags`, `attributed`
Data Model
Relationships are driven by location scope, then CRM keys between opportunities, contacts, pipelines, and field definitions.

Use Cases for Data Analysis
Examples below use placeholder schema and table names. Replace `nekt_raw` and the table names with the layer and table names you configured for this source (for example `nekt_raw.hilevel_opportunities`).
1. Pipeline overview by stage
Summarize open opportunities by pipeline and stage using `monetary_value` (cast in SQL because it is stored as a string in the catalog).
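The original page embeds a warehouse-specific SQL query here. As a minimal, self-contained sketch of the same aggregation, the snippet below runs an equivalent query against an in-memory SQLite table with hypothetical rows; adapt the table name and SQL dialect to your catalog.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Minimal stand-in for the extracted opportunities table;
# monetary_value is stored as TEXT, as in the catalog.
conn.execute(
    "CREATE TABLE opportunities "
    "(pipeline_id TEXT, pipeline_stage_id TEXT, status TEXT, monetary_value TEXT)"
)
conn.executemany(
    "INSERT INTO opportunities VALUES (?, ?, ?, ?)",
    [
        ("p1", "s1", "open", "1000"),
        ("p1", "s1", "open", "250.50"),
        ("p1", "s2", "won", "9000"),
    ],
)
rows = conn.execute(
    """
    SELECT pipeline_id,
           pipeline_stage_id,
           COUNT(*)                          AS open_deals,
           SUM(CAST(monetary_value AS REAL)) AS open_value
    FROM opportunities
    WHERE status = 'open'
    GROUP BY pipeline_id, pipeline_stage_id
    ORDER BY pipeline_id, pipeline_stage_id
    """
).fetchall()
# rows -> [('p1', 's1', 2, 1250.5)]
```

The explicit `CAST(monetary_value AS REAL)` is the important part: summing the raw string column would not give a numeric total.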
Implementation Notes
Replication and state
- Incremental streams partition bookmark state by `location_id`, so each sub-account progresses independently.
- `locations` exists mainly to drive per-location extraction in the Singer tap; enable it in the catalog only if you want that reference table materialized.
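Per-location bookmarks can be pictured as a nested state map, in the spirit of Singer STATE messages. The exact layout below is illustrative, not the tap's actual state schema:

```python
# Hypothetical state: bookmarks partitioned per stream and per location,
# so each sub-account's incremental progress is tracked independently.
state = {
    "bookmarks": {
        "contacts": {
            "loc_A": {"date_updated": "2024-03-01T09:30:00Z"},
            "loc_B": {"date_updated": "2024-01-15T12:00:00Z"},
        }
    }
}

def get_bookmark(state, stream, location_id, default):
    """Read the per-location bookmark, falling back to the Start date
    when a location has never completed a sync."""
    return (
        state.get("bookmarks", {})
        .get(stream, {})
        .get(location_id, {})
        .get("date_updated", default)
    )

start_date = "2024-01-01T00:00:00Z"
```

A location that has never synced falls back to the configured Start date, while already-synced locations resume from their own bookmarks.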
Data shape
- `contacts.opportunities` and some custom field values are stored as strings (JSON text) where the API returns nested structures or mixed types.
- `opportunities.monetary_value` is modeled as a string; cast explicitly in SQL for numeric aggregations.
API usage
- Selecting fewer streams reduces runtime and API load. Prefer incremental streams with a tight Start date for large locations.
Skills for agents
Download HighLevel skills file
HighLevel connector documentation as plain markdown, for use in AI agent contexts.