> ## Documentation Index
> Fetch the complete documentation index at: https://docs.nekt.com/llms.txt
> Use this file to discover all available pages before exploring further.

# HighLevel as a data source

> Bring data from HighLevel (GoHighLevel) to Nekt.

HighLevel (GoHighLevel) is a business management platform for service-based businesses. It provides customer relationship management, marketing automation, and operations tooling to help teams capture leads, manage pipelines, and run campaigns.

<img height="50" src="https://mintcdn.com/nekt/3MFKt2g7jzqFpztO/assets/logo/logo-gohighlevel.png?fit=max&auto=format&n=3MFKt2g7jzqFpztO&q=85&s=4b196a09600762b3a2faa68c80cacaf8" data-path="assets/logo/logo-gohighlevel.png" />

The connector is built for **agency** accounts: you authenticate with OAuth, then sync one or more **location IDs** (sub-accounts) in a single source. All location-scoped streams include `location_id` so you can filter or join across sub-accounts in the catalog.

## Configuring HighLevel as a Source

In the [Sources](https://app.nekt.ai/sources) tab, click the **Add source** button at the top right of your screen. Then select **HighLevel** from the list of connectors.

Click **Next** and you'll be prompted to add your access.

### 1. Add account access

Authorize Nekt with OAuth using an agency user that can access the locations you want to sync.

The following configurations are available:

* **OAuth (refresh token)**: Sign in and grant access so Nekt can call the HighLevel API on your behalf.

* **Location IDs**: The sub-account location IDs to include in the sync. See [HighLevel help](https://help.gohighlevel.com/support/solutions/articles/48001204848-how-do-i-find-my-client-s-location-id-) for how to find a location ID.

* **Start date**: Earliest point in time for **incremental** streams (`contacts`, `opportunities`). Records updated on or after this date are considered for historical loads and incremental bookmarks.
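
The interplay between the **Start date** and incremental bookmarks can be sketched as follows. This is illustrative Python, not connector internals; `effective_since` and the bookmark shape are assumptions for the sake of the example:

```python
from datetime import datetime, timezone
from typing import Optional

def effective_since(start_date: datetime, bookmark: Optional[datetime]) -> datetime:
    """Return the timestamp a run syncs from: the saved bookmark when one
    exists and is later than the configured start date, else the start date."""
    if bookmark is None:
        return start_date
    return max(start_date, bookmark)

start = datetime(2024, 1, 1, tzinfo=timezone.utc)

# First run: no bookmark yet, so the full window from the start date is loaded.
first = effective_since(start, None)

# Later runs only fetch records updated since the last successful bookmark.
later = effective_since(start, datetime(2024, 6, 1, tzinfo=timezone.utc))
```

A bookmark earlier than the start date is ignored, so moving the start date forward also narrows what older runs would re-fetch.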

Once you're done, click **Next**.

### 2. Select streams

Choose which data streams you want to sync. For faster extractions, select only the streams that are relevant to your analysis. You can select entire groups of streams or pick specific ones.

> Tip: You can find a stream faster by typing its name.

Select the streams and click **Next**.

### 3. Configure data streams

Customize how your data appears in your catalog: select the layer where the data will be placed, a folder to organize it within that layer, a name for each table (which will contain the fetched data), and the type of sync.

* **Layer**: choose one of the existing layers in your catalog. This is where your extracted tables will appear once the extraction runs successfully.

* **Folder**: optionally create a folder inside the selected layer to group all tables from this data source.

* **Table name**: we suggest a name, but feel free to customize it. You can also add a **prefix** to all tables at once to speed this up.

* **Sync Type**: you can choose between INCREMENTAL and FULL\_TABLE.
  * **Incremental** (`contacts`, `opportunities`): each run fetches updates since the last successful bookmark (driven by API update timestamps and your **Start date**).
  * **Full table** (`locations`, `custom_fields`, `campaigns`, `pipelines`): each run replaces the current snapshot for that table.
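
The two sync semantics can be sketched in a few lines of Python. This is a simplified illustration of the behavior described above, not the connector's implementation; the function names and record shapes are assumptions:

```python
def full_table_sync(fetch_all):
    """FULL_TABLE: each run discards the previous snapshot and replaces it."""
    return list(fetch_all())

def incremental_sync(existing, fetch_since, bookmark):
    """INCREMENTAL: merge records updated since the bookmark into the
    existing table, keyed by id, keeping the latest version of each row."""
    merged = {row["id"]: row for row in existing}
    for row in fetch_since(bookmark):
        merged[row["id"]] = row
    return list(merged.values())

# Usage: an update to id 1 overwrites the old row; id 2 is untouched.
existing = [{"id": 1, "name": "old"}, {"id": 2, "name": "keep"}]
updated = incremental_sync(existing, lambda bm: [{"id": 1, "name": "new"}], "2024-01-01")
```

Full-table streams are small reference tables here (`locations`, `pipelines`, etc.), so replacing the snapshot each run is cheap; incremental is reserved for the high-volume CRM streams.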

Once you are done configuring, click **Next**.

### 4. Configure data source

Describe your data source (up to 140 characters) so it's easy to identify within your organization.

To define your [Trigger](https://docs.nekt.com/get-started/core-concepts/triggers), consider how often you want data to be extracted from this source. This decision usually depends on how frequently you need the new table data updated (every day, once a week, or only at specific times).

Optionally, you can define some additional settings:

* Configure **Delta Log Retention** to determine how long we should store old states of a table as it gets updated. Read more about this resource [here](https://docs.nekt.com/get-started/core-concepts/resource-control).

* Determine when to execute an **Additional [Full Sync](https://docs.nekt.com/get-started/core-concepts/types-of-sync#additional-full-sync)**. This complements incremental extractions by periodically re-synchronizing your data completely with the source.

Once you are ready, click **Next** to finalize the setup.

### 5. Check your new source

You can view your new source on the [Sources](https://app.nekt.ai/sources) page. If needed, manually trigger the source extraction by clicking on the arrow button. Once executed, your data will appear in your Catalog.

<Warning>For you to be able to see it on your [Catalog](https://app.nekt.ai/catalog), you need at least one successful source run.</Warning>

# Streams and Fields

Below you'll find all available data streams from HighLevel and their corresponding fields:

<AccordionGroup>
  <Accordion title="Locations">
    Reference stream that emits one row per configured **Location ID**, with the agency **company** identifier. Child streams (contacts, opportunities, etc.) run in the context of each location.

    **Key fields:**

    * `location_id` - Sub-account location identifier
    * `company_id` - Agency company identifier associated with the OAuth token

    **Replication:** FULL\_TABLE (not incremental). In the tap catalog this stream is **off by default**; enable it if you want an explicit dimension table of configured locations.
  </Accordion>

  <Accordion title="Contacts">
    Contacts for each location, loaded via the search API. **Incremental** replication uses `date_updated`.

    **Identifiers:**

    * `id` - Contact identifier (primary key)
    * `location_id` - Location scope

    **Profile:**

    * `first_name`, `last_name`, `first_name_lower_case`, `last_name_lower_case`
    * `email`, `valid_email`, `phone`, `phone_label`
    * `company_name`, `business_name`, `business_id`, `website`
    * `address`, `city`, `state`, `country`, `postal_code`
    * `date_of_birth` - Unix timestamp from the API
    * `type`, `source`, `assigned_to`, `followers`, `tags`, `dnd`

    **Emails and phones:**

    * `additional_emails` - Array of objects: `email`, `valid_email_date`
    * `additional_phones` - Array of objects: `phone`, `phone_label`

    **Custom data:**

    * `custom_fields` - Array of objects: `id` (field definition), `value` (string; non-string API values are coerced to string or JSON text)
    * `opportunities` - JSON **string** containing opportunity payloads returned on the contact (serialized for catalog compatibility)

    **Timestamps:**

    * `date_added`, `date_updated` (replication key)
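
    Since `date_of_birth` arrives as a Unix timestamp and `opportunities` as a JSON string, downstream code typically decodes both. A minimal sketch (the sample record below is made up for illustration):

    ```python
    import json
    from datetime import datetime, timezone

    contact = {  # illustrative record shaped like the fields above
        "id": "abc123",
        "location_id": "loc_1",
        "date_of_birth": 631152000,  # Unix seconds
        "opportunities": '[{"id": "opp_1", "status": "open"}]',
    }

    # Unix seconds -> date (631152000 is 1990-01-01 UTC)
    dob = datetime.fromtimestamp(contact["date_of_birth"], tz=timezone.utc).date()

    # JSON string -> list of opportunity dicts
    opps = json.loads(contact["opportunities"])
    ```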
  </Accordion>

  <Accordion title="Custom fields">
    Custom field definitions for the location (for example fields attached to contacts or opportunities).

    **Key fields:**

    * `id` - Field definition identifier
    * `location_id` - Location scope
    * `name`, `field_key`, `model`, `data_type`, `placeholder`, `position`
    * `document_type`, `parent_id`, `standard`
    * `picklist_options` - Array of allowed labels (normalized from API picklist options)
    * `date_added`

    **Replication:** FULL\_TABLE.
  </Accordion>

  <Accordion title="Campaigns">
    Marketing campaigns for the location.

    **Key fields:**

    * `id` - Campaign identifier
    * `name` - Campaign name
    * `status` - Campaign status from the API
    * `location_id` - Location scope

    **Replication:** FULL\_TABLE.
  </Accordion>

  <Accordion title="Pipelines">
    Opportunity pipelines and their stages for the location.

    **Key fields:**

    * `id` - Pipeline identifier
    * `name` - Pipeline name
    * `location_id` - Location scope
    * `show_in_funnel`, `show_in_pie_chart`, `origin_id`
    * `date_added`, `date_updated`

    **Stages (nested array `stages`):**

    * `id`, `name`, `position`
    * `show_in_pie_chart`, `show_in_funnel`, `origin_id`

    **Replication:** FULL\_TABLE.
  </Accordion>

  <Accordion title="Opportunities">
    Opportunities (deals) per location from the search API. **Incremental** replication uses `updated_at` (filtered via API `date_updated`).

    **Identifiers:**

    * `id` - Opportunity identifier
    * `location_id` - Location scope
    * `contact_id` - Related contact
    * `pipeline_id`, `pipeline_stage_id`, `pipeline_stage_u_id`

    **Core attributes:**

    * `name`, `status`, `source`, `monetary_value` (string), `assigned_to`, `followers`
    * `lost_reason_id`, `index_version`
    * `created_at`, `updated_at` (replication key)
    * `last_status_change_at`, `last_stage_change_at`, `last_action_date`

    **Embedded contact summary (`contact` object):**

    * `id`, `name`, `company_name`, `email`, `phone`, `tags`

    **Related collections:**

    * `notes`, `tasks`, `calendar_events` - Arrays of string identifiers or payloads from the API
    * `custom_fields` - Array of objects: `id`, `field_value` (string; complex values serialized to string)
    * `relations` - Array of related records: `association_id`, `relation_id`, `primary`, `object_key`, `record_id`, `full_name`, `contact_name`, `company_name`, `email`, `phone`, `tags`, `attributed`
  </Accordion>
</AccordionGroup>

# Data Model

Relationships are driven by **location** scope, then CRM keys between opportunities, contacts, pipelines, and field definitions.

```mermaid theme={null}
graph TD;
    subgraph "Scope"
        Locations["locations"];
    end

    subgraph "Reference"
        CustomFields["custom_fields"];
        Campaigns["campaigns"];
        Pipelines["pipelines"];
    end

    subgraph "Core CRM"
        Contacts["contacts"];
        Opportunities["opportunities"];
    end

    CustomFields -- "location_id" --> Locations;
    Campaigns -- "location_id" --> Locations;
    Pipelines -- "location_id" --> Locations;
    Contacts -- "location_id" --> Locations;
    Opportunities -- "location_id" --> Locations;

    Opportunities -- "contact_id" --> Contacts;
    Opportunities -- "pipeline_id" --> Pipelines;

    Contacts -- "custom_fields[].id" --> CustomFields;
    Opportunities -- "custom_fields[].id" --> CustomFields;
```

# Use Cases for Data Analysis

Examples below use placeholder schema and table names. Replace `nekt_raw` and the table names with the **layer** and **table names** you configured for this source (for example `nekt_raw.hilevel_opportunities`).

### 1. Pipeline overview by stage

Summarize open opportunities by pipeline and stage using `monetary_value` (cast in SQL because it is stored as a string in the catalog).

<Accordion title="SQL query">
  <Tabs>
    <Tab title="AWS">
      ```sql theme={null}
      SELECT
         o.pipeline_id,
         o.pipeline_stage_id,
         o.status,
         COUNT(*) AS opportunity_count,
         SUM(TRY_CAST(o.monetary_value AS DOUBLE)) AS total_value
      FROM
         nekt_raw.hilevel_opportunities o
      WHERE
         o.location_id = 'YOUR_LOCATION_ID'
      GROUP BY
         o.pipeline_id,
         o.pipeline_stage_id,
         o.status
      ORDER BY
         total_value DESC NULLS LAST
      ```
    </Tab>

    <Tab title="GCP">
      ```sql theme={null}
      SELECT
         o.pipeline_id,
         o.pipeline_stage_id,
         o.status,
         COUNT(*) AS opportunity_count,
         SUM(SAFE_CAST(o.monetary_value AS FLOAT64)) AS total_value
      FROM
         `nekt_raw.hilevel_opportunities` o
      WHERE
         o.location_id = 'YOUR_LOCATION_ID'
      GROUP BY
         1,
         2,
         3
      ORDER BY
         total_value DESC
      ```
    </Tab>
  </Tabs>
</Accordion>

## Implementation Notes

### Replication and state

* Incremental streams partition bookmark state by `location_id`, so each sub-account progresses independently.
* **`locations`** exists mainly to drive per-location extraction in the Singer tap; enable it in the catalog only if you want that reference table materialized.
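
Per-location bookmark partitioning can be pictured like this. The state shape below is an illustrative assumption, not the tap's actual state format; the point is that each `location_id` advances independently and a bookmark only moves forward:

```python
# Illustrative per-location bookmark state: one bookmark per stream per location.
state = {
    "bookmarks": {
        "contacts": {
            "loc_A": "2024-06-01T00:00:00Z",
            "loc_B": "2024-03-15T00:00:00Z",
        }
    }
}

def advance(state, stream, location_id, new_bookmark):
    """Move a location's bookmark forward only; ISO-8601 strings in the
    same format compare correctly as strings."""
    current = state["bookmarks"][stream].get(location_id)
    if current is None or new_bookmark > current:
        state["bookmarks"][stream][location_id] = new_bookmark
    return state
```

A failed run for one location therefore does not reset progress for the others.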

### Data shape

* **`contacts.opportunities`** and some custom field values are stored as **strings** (JSON text) where the API returns nested structures or mixed types.
* **`opportunities.monetary_value`** is modeled as a string; cast explicitly in SQL for numeric aggregations.
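
If you post-process records in Python rather than SQL, the same null-safe cast that `TRY_CAST`/`SAFE_CAST` provides can be sketched as (the helper name is our own):

```python
def try_cast_float(value):
    """Mimic SQL TRY_CAST / SAFE_CAST: return None instead of raising
    when the value is missing or not a valid number."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return None

values = ["1250.50", "", None, "n/a"]
total = sum(v for v in map(try_cast_float, values) if v is not None)
```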

### API usage

* Selecting fewer streams reduces runtime and API load. Prefer incremental streams with a tight **Start date** for large locations.

## Skills for agents

<Snippet file="agent-skills-intro.mdx" />

<Card title="Download HighLevel skills file" icon="wand-magic-sparkles" href="/sources/gohighlevel.md">
  HighLevel connector documentation as plain markdown, for use in AI agent contexts.
</Card>
