
# Pipedrive as a data source

> Bring data from Pipedrive to Nekt.

Pipedrive is a sales customer relationship management (CRM) platform designed to help sales teams manage their pipelines and close deals more effectively. It provides tools for lead management, deal tracking, and sales forecasting with a focus on visual pipeline management.

<img height="30" src="https://mintcdn.com/nekt/0tn1_nwKYqAHn7jo/assets/logo/logo-pipedrive.png?fit=max&auto=format&n=0tn1_nwKYqAHn7jo&q=85&s=46b24678e92c974097525843566bdf88" data-path="assets/logo/logo-pipedrive.png" />

## Configuring Pipedrive as a Source

In the [Sources](https://app.nekt.ai/sources) tab, click on the "Add source" button located on the top right of your screen. Then, select the Pipedrive option from the list of connectors.

Click **Next** and you'll be prompted to add your access.

### 1. Add account access

You'll need your Pipedrive API Token for this connection. You can find it in your [Pipedrive account settings](https://support.pipedrive.com/en/article/how-can-i-find-my-personal-api-key). Once you have it, add the account access and the connector configuration.
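If you want to sanity-check the token before adding it, Pipedrive's v1 REST API accepts it as an `api_token` query parameter on any request. A minimal sketch (the `build_request_url` helper below is illustrative, not part of Nekt):

```python
from urllib.parse import urlencode

PIPEDRIVE_BASE = "https://api.pipedrive.com/v1"

def build_request_url(endpoint: str, api_token: str, **params) -> str:
    """Build an authenticated Pipedrive v1 API URL.

    The v1 API authenticates each request with the personal API
    token passed as the `api_token` query parameter.
    """
    query = urlencode({**params, "api_token": api_token})
    return f"{PIPEDRIVE_BASE}/{endpoint}?{query}"

# Fetching this URL (e.g. with curl) returns a JSON body with
# "success": true when the token is valid:
print(build_request_url("deals", "YOUR_API_TOKEN", limit=1))
```

A `401` response from that URL means the token is wrong or has been revoked; fix that before configuring the source.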

The following configurations are available:

* **Start Date**: The earliest date from which records will be synced, in DD-MM-YYYY format.

* **Extract Deal Child Streams**: Whether to extract child streams related to deals (`DealFlow` and `DealProducts`). Note that enabling this significantly increases extraction time, since an extra request must be made for each deal to fetch this information.

* **Extract Deleted Deals**: Whether to extract deals that have been deleted. Only deals deleted within the last 30 days can be retrieved.

* **Extract Deal Installments**: Whether to extract deal installments. When enabled, the installments are returned in a separate column inside the `Deals` stream.
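Note the day-first DD-MM-YYYY layout for **Start Date** (not the ISO year-first order). If you script your configuration, it's worth validating the value before saving it; a small hedged example in Python:

```python
from datetime import datetime

def parse_start_date(value: str) -> datetime:
    """Parse a connector Start Date in DD-MM-YYYY format.

    Raises ValueError for the ISO form "2024-03-01". Beware that a
    US-style MM-DD-YYYY string still parses for days <= 12, just
    with day and month silently swapped -- so validate early.
    """
    return datetime.strptime(value, "%d-%m-%Y")

print(parse_start_date("01-03-2024").date())  # 2024-03-01 (1 March, not 3 January)
```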

Once you're done, click **Next**.

### 2. Select streams

Choose which data streams you want to sync - you can select all streams or pick specific ones that matter most to you.

> Tip: You can find a stream faster by typing its name.

Select the streams and click **Next**.

### 3. Configure data streams

Customize how you want your data to appear in your catalog. Select a name for each table (which will contain the fetched data) and the type of sync.

* **Table name**: we suggest a name, but feel free to customize it. You have the option to add a **prefix** and make this process faster!

* **Sync Type**: you can choose between INCREMENTAL and FULL\_TABLE.
  * Incremental: each extraction fetches only records created or updated since the last run - useful if, for example, you want to keep every record ever fetched.
  * Full table: each extraction fetches the current state of the source - useful if, for example, you don't want deleted records lingering in your catalog.
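The practical difference between the two modes can be illustrated with a toy sync loop - a hedged sketch of the general idea, not Nekt's actual implementation:

```python
def full_table_sync(source_rows):
    """FULL_TABLE: each run replaces the destination with the
    current state of the source; upstream deletions disappear."""
    return list(source_rows)

def incremental_sync(destination_rows, source_rows, last_synced_at):
    """INCREMENTAL: each run upserts only records changed since the
    previous run; previously fetched rows are kept."""
    merged = {row["id"]: row for row in destination_rows}
    for row in source_rows:
        if row["updated_at"] > last_synced_at:
            merged[row["id"]] = row
    return list(merged.values())

# A record deleted upstream survives an incremental sync but not a full one:
dest = [{"id": 1, "updated_at": "2024-01-01"}]
src = [{"id": 2, "updated_at": "2024-02-01"}]  # id 1 was deleted upstream
print(len(incremental_sync(dest, src, "2024-01-15")))  # 2
print(len(full_table_sync(src)))                       # 1
```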

Once you are done configuring, click **Next**.

### 4. Configure data source

Describe your data source for easy identification within your organization, using at most 140 characters.

To define your [Trigger](https://docs.nekt.com/get-started/core-concepts/triggers), consider how often you want data to be extracted from this source. This decision usually depends on how frequently you need the new table data updated (every day, once a week, or only at specific times).

Optionally, you can determine when to execute a [full sync](https://docs.nekt.com/get-started/core-concepts/types-of-sync#additional-full-sync). This will complement the incremental data extractions, ensuring that your data is completely synchronized with your source every once in a while.

Once you are ready, click **Next** to finalize the setup.

### 5. Check your new source

You can view your new source on the [Sources](https://app.nekt.ai/sources) page. If needed, manually trigger the source extraction by clicking on the arrow button. Once executed, your data will appear in your Catalog.

<Warning>For you to be able to see it on your [Catalog](https://app.nekt.ai/catalog), you need at least one successful source run.</Warning>

# Streams and Fields

Below you'll find all available data streams from Pipedrive and their corresponding fields:

<AccordionGroup>
  <Accordion title="Activities">
    Stream for managing activities like calls, meetings, and tasks.

    **Key Fields:**

    * `id` - Unique identifier for the activity
    * `subject` - Subject or title of the activity
    * `type` - Type of activity (call, meeting, email, etc.)
    * `due_date` - Due date for the activity
    * `done` - Whether the activity is completed
    * `user_id` - ID of the user who owns the activity
    * `person_id` - ID of the associated person
    * `deal_id` - ID of the associated deal
    * `org_id` - ID of the associated organization
    * `location` - Location where the activity takes place
    * `note` - Notes attached to the activity
    * `add_time` - When the activity was created
    * `update_time` - When the activity was last updated

    **Additional Fields:**
    Location details, conference meeting information, recurrence settings, participants, and various flags for activity management.
  </Accordion>

  <Accordion title="Deals">
    Core stream for deal management and sales pipeline tracking.

    **Key Fields:**

    * `id` - Unique identifier for the deal
    * `title` - Title of the deal
    * `value` - Monetary value of the deal
    * `currency` - Currency of the deal value
    * `stage_id` - ID of the current stage of the deal
    * `pipeline_id` - ID of the pipeline the deal belongs to
    * `person_id` - ID of the person associated with the deal
    * `org_id` - ID of the organization associated with the deal
    * `owner_id` - ID of the user who owns the deal
    * `status` - Current status of the deal (open, won, lost, deleted)
    * `probability` - Probability of winning the deal (0-100)
    * `expected_close_date` - Expected close date for the deal
    * `add_time` - When the deal was created
    * `update_time` - When the deal was last updated
    * `won_time` - When the deal was won
    * `lost_time` - When the deal was lost

    **Revenue Metrics:**

    * `acv` - Annual Contract Value
    * `arr` - Annual Recurring Revenue
    * `mrr` - Monthly Recurring Revenue

    **Activity Counters:**

    * `activities_count` - Total number of activities on the deal
    * `done_activities_count` - Number of completed activities
    * `undone_activities_count` - Number of pending activities
    * `notes_count` - Number of notes on the deal
    * `files_count` - Number of files attached to the deal
  </Accordion>

  <Accordion title="Persons">
    Stream for managing contact information and person records.

    **Key Fields:**

    * `id` - Unique identifier for the person
    * `name` - Full name of the person
    * `first_name` - First name of the person
    * `last_name` - Last name of the person
    * `owner_id` - ID of the user who owns this person
    * `org_id` - ID of the organization this person belongs to
    * `job_title` - Job title of the person
    * `birthday` - Birthday of the person
    * `add_time` - When the person was added
    * `update_time` - When the person was last updated

    **Contact Information:**

    * `emails` - Email addresses associated with the person
    * `phones` - Phone numbers associated with the person
    * `im` - Instant messaging accounts
    * `postal_address` - Postal address information

    **Metadata:**

    * `notes` - Notes about the person
    * `picture_id` - ID of the person's profile picture
    * `label_ids` - Array of label IDs associated with the person
    * `custom_fields` - Custom fields defined for persons
  </Accordion>

  <Accordion title="Organizations">
    Stream for managing company and organization records.

    **Key Fields:**

    * `id` - Unique identifier for the organization
    * `name` - Name of the organization
    * `owner_id` - ID of the user who owns this organization
    * `add_time` - When the organization was added
    * `update_time` - When the organization was last updated

    **Address Information:**

    * `address` - Complete address information including:
      * `value` - Full address text
      * `country` - Country
      * `admin_area_level_1` - State/Province
      * `admin_area_level_2` - County
      * `locality` - City
      * `postal_code` - Postal/ZIP code
      * `formatted_address` - Formatted full address

    **Metadata:**

    * `label_ids` - Array of label IDs associated with the organization
    * `custom_fields` - Custom fields defined for organizations
  </Accordion>

  <Accordion title="Notes">
    Stream for managing notes attached to deals, persons, and organizations.

    **Key Fields:**

    * `id` - Unique identifier for the note
    * `content` - Content of the note
    * `add_time` - When the note was created
    * `update_time` - When the note was last updated
    * `user_id` - ID of the user who created the note

    **Associations:**

    * `deal_id` - ID of the deal associated with the note
    * `person_id` - ID of the person associated with the note
    * `org_id` - ID of the organization associated with the note
    * `lead_id` - ID of the lead associated with the note

    **Pinning Options:**

    * `pinned_to_deal_flag` - Whether the note is pinned to a deal
    * `pinned_to_person_flag` - Whether the note is pinned to a person
    * `pinned_to_organization_flag` - Whether the note is pinned to an organization
    * `pinned_to_lead_flag` - Whether the note is pinned to a lead
  </Accordion>

  <Accordion title="Pipelines">
    Stream for managing sales pipelines and their configuration.

    **Key Fields:**

    * `id` - Unique identifier for the pipeline
    * `name` - Name of the pipeline
    * `order_nr` - Order number of the pipeline
    * `add_time` - When the pipeline was created
    * `update_time` - When the pipeline was last updated
    * `selected` - Whether the pipeline is currently selected
    * `is_deal_probability_enabled` - Whether deal probability is enabled for this pipeline
  </Accordion>

  <Accordion title="Stages">
    Stream for managing pipeline stages and their settings.

    **Key Fields:**

    * `id` - Unique identifier for the stage
    * `name` - Name of the stage
    * `pipeline_id` - ID of the pipeline this stage belongs to
    * `order_nr` - Order number of the stage within the pipeline
    * `deal_probability` - Default probability percentage for deals in this stage
    * `add_time` - When the stage was created
    * `update_time` - When the stage was last updated

    **Deal Rotting Settings:**

    * `is_deal_rot_enabled` - Whether deal rotting is enabled for this stage
    * `days_to_rotten` - Number of days after which deals in this stage become rotten
  </Accordion>

  <Accordion title="Users">
    Stream for managing user accounts and permissions.

    **Key Fields:**

    * `id` - Unique identifier for the user
    * `name` - Full name of the user
    * `email` - Email address of the user
    * `phone` - Phone number of the user
    * `active_flag` - Whether the user is active
    * `is_admin` - Whether the user has admin privileges
    * `role_id` - Role ID of the user
    * `created` - When the user account was created
    * `last_login` - Last login time of the user

    **Localization:**

    * `lang` - Language ID for the user
    * `locale` - Locale setting for the user
    * `timezone_name` - Timezone name of the user
    * `default_currency` - Default currency for the user

    **Access Control:**

    * `access` - Access permissions for the user across different apps
  </Accordion>

  <Accordion title="Deal Products">
    Stream for managing products associated with deals.

    **Key Fields:**

    * `id` - Unique identifier for the deal product
    * `deal_id` - ID of the deal this product is associated with
    * `product_id` - ID of the product
    * `name` - Name of the product
    * `quantity` - Quantity of the product in the deal
    * `item_price` - Unit price of the product
    * `sum` - Total sum for this product (price × quantity)
    * `currency` - Currency of the product pricing
    * `add_time` - When the product was added to the deal
    * `update_time` - When the product was last updated

    **Pricing & Discounts:**

    * `tax` - Tax amount for this product
    * `tax_method` - Tax calculation method
    * `discount` - Discount amount applied to this product
    * `discount_type` - Type of discount applied (percentage or fixed amount)
    * `comments` - Comments about this product in the deal
  </Accordion>

  <Accordion title="Deal Fields">
    Stream for managing custom field definitions for deals.

    **Key Fields:**

    * `id` - Unique identifier for the deal field
    * `key` - API key for the field
    * `name` - Display name of the field
    * `field_type` - Type of the field (text, enum, date, etc.)
    * `description` - Description of the field
    * `add_time` - When the field was created
    * `update_time` - When the field was last updated

    **Field Configuration:**

    * `mandatory_flag` - Whether the field is mandatory
    * `edit_flag` - Whether the field can be edited
    * `searchable_flag` - Whether the field is searchable
    * `sortable_flag` - Whether the field can be sorted
    * `filtering_allowed` - Whether the field can be used for filtering
    * `bulk_edit_allowed` - Whether the field allows bulk editing

    **Display Settings:**

    * `details_visible_flag` - Whether the field is visible in detail view
    * `add_visible_flag` - Whether the field is visible when adding records
    * `important_flag` - Whether the field is marked as important
    * `show_in_pipelines` - Pipeline visibility settings for the field
  </Accordion>

  <Accordion title="Person Fields">
    Stream for managing custom field definitions for persons.

    **Key Fields:**

    * `id` - Unique identifier for the person field
    * `key` - API key for the field
    * `name` - Display name of the field
    * `field_type` - Type of the field (text, enum, date, etc.)
    * `description` - Description of the field

    **Field Configuration:**
    Similar configuration options as Deal Fields, including mandatory settings, visibility options, and editing permissions.
  </Accordion>

  <Accordion title="Organization Fields">
    Stream for managing custom field definitions for organizations.

    **Key Fields:**

    * `id` - Unique identifier for the organization field
    * `key` - API key for the field
    * `name` - Display name of the field
    * `field_type` - Type of the field (text, enum, date, etc.)
    * `description` - Description of the field
    * `created_by_user_id` - ID of the user who created the field
    * `display_field` - Display field configuration

    **Field Configuration:**
    Similar configuration options as other field types, with additional organization-specific settings.
  </Accordion>

  <Accordion title="Deal Flow">
    Stream for tracking historical changes and activities on deals.

    **Key Fields:**

    * `id` - Unique identifier for the flow entry
    * `deal_id` - ID of the deal this flow entry belongs to
    * `timestamp` - When the change occurred
    * `item_type` - Type of item that changed
    * `field_key` - Key of the field that was modified
    * `old_value` - Previous value
    * `new_value` - New value
    * `user_id` - ID of the user who made the change

    **Change Tracking:**
    Comprehensive tracking of all deal modifications, including field changes, activity additions, email communications, file attachments, and other deal-related events. This stream provides a complete audit trail for deal progression and team interactions.
  </Accordion>
</AccordionGroup>

## Data Model

The following diagram illustrates the relationships between the core data streams in Pipedrive. The arrows indicate the join keys that link the different entities, providing a clear overview of the data structure.

```mermaid theme={null}
graph TD;
    subgraph "Core Entities"
        Deals("Deals");
        Persons("Persons");
        Organizations("Organizations");
        Users("Users");
    end

    subgraph "Deal Details"
        Activities("Activities");
        Notes("Notes");
        Deal_Products("Deal Products");
        Deal_Flow("Deal Flow");
    end

    subgraph "Pipeline Structure"
        Pipelines("Pipelines");
        Stages("Stages");
    end

    Deals -- "person_id" --> Persons;
    Deals -- "org_id" --> Organizations;
    Deals -- "owner_id" --> Users;
    Deals -- "stage_id" --> Stages;

    Activities -- "deal_id" --> Deals;
    Activities -- "person_id" --> Persons;
    Activities -- "org_id" --> Organizations;
    Activities -- "user_id" --> Users;

    Notes -- "deal_id" --> Deals;
    Notes -- "person_id" --> Persons;
    Notes -- "org_id" --> Organizations;
    Notes -- "user_id" --> Users;

    Persons -- "org_id" --> Organizations;
    Persons -- "owner_id" --> Users;

    Organizations -- "owner_id" --> Users;

    Stages -- "pipeline_id" --> Pipelines;

    Deal_Products -- "deal_id" --> Deals;

    Deal_Flow -- "deal_id" --> Deals;
    Deal_Flow -- "user_id" --> Users;
```

## Use Cases for Data Analysis

This guide outlines the most valuable business intelligence use cases when consolidating Pipedrive data, along with ready-to-use SQL queries that you can run on [Explorer](https://app.nekt.ai/explorer).

## Sales Performance

### 1. Sales Funnel Analysis

Track conversion rates and identify bottlenecks in your sales pipeline.

**Business Value:**

* Identify which pipeline stages have the highest drop-off rates
* Optimize sales processes by focusing on problematic stages
* Forecast revenue based on historical conversion patterns

<Accordion title="SQL code" defaultOpen>
  ```sql lines theme={null}
  WITH funnel_analysis AS (
    SELECT 
      p.name AS pipeline_name,
      s.name AS stage_name,
      s.order_nr,
      COUNT(DISTINCT d.id) AS deals_count,
      SUM(d.value) AS total_value,
      ROUND(AVG(d.probability), 2) AS avg_probability,
      COUNT(DISTINCT CASE WHEN d.status = 'won' THEN d.id END) AS won_deals,
      COUNT(DISTINCT CASE WHEN d.status = 'lost' THEN d.id END) AS lost_deals
    FROM nekt_raw.pipedrive_deals d
    JOIN nekt_raw.pipedrive_stages s ON d.stage_id = s.id
    JOIN nekt_raw.pipedrive_pipelines p ON d.pipeline_id = p.id
    WHERE d.add_time >= current_date - interval '90' day
      AND NOT d.is_deleted
    GROUP BY p.name, s.name, s.order_nr
  ),
  conversion_rates AS (
    SELECT 
      pipeline_name,
      stage_name,
      order_nr,
      deals_count,
      total_value,
      avg_probability,
      won_deals,
      lost_deals,
      ROUND(
        CAST(won_deals AS DOUBLE) / NULLIF(deals_count, 0) * 100, 2
      ) AS win_rate_percentage,
      LAG(deals_count) OVER (
        PARTITION BY pipeline_name 
        ORDER BY order_nr
      ) AS previous_stage_deals
    FROM funnel_analysis
  )
  SELECT 
    pipeline_name,
    stage_name,
    order_nr,
    deals_count,
    total_value,
    avg_probability,
    win_rate_percentage,
    CASE 
      WHEN previous_stage_deals > 0 THEN
        ROUND(
          CAST(deals_count AS DOUBLE) / previous_stage_deals * 100, 2
        )
      ELSE NULL
    END AS stage_conversion_rate
  FROM conversion_rates
  ORDER BY pipeline_name, order_nr
  ```
</Accordion>

### 2. Sales Representative Performance Dashboard

Compare individual and team performance metrics.

**Business Value:**

* Identify top-performing sales representatives
* Allocate resources and training effectively
* Set realistic targets based on historical performance

<Accordion title="SQL code" defaultOpen>
  ```sql lines theme={null}
  WITH rep_metrics AS (
    SELECT 
      u.id AS user_id,
      u.name AS rep_name,
      u.email,
      COUNT(DISTINCT d.id) AS total_deals,
      COUNT(DISTINCT CASE WHEN d.status = 'won' THEN d.id END) AS won_deals,
      COUNT(DISTINCT CASE WHEN d.status = 'lost' THEN d.id END) AS lost_deals,
      COUNT(DISTINCT CASE WHEN d.status = 'open' THEN d.id END) AS open_deals,
      SUM(CASE WHEN d.status = 'won' THEN d.value ELSE 0 END) AS won_revenue,
      SUM(CASE WHEN d.status = 'open' THEN d.value ELSE 0 END) AS pipeline_value,
      AVG(CASE WHEN d.status = 'won' THEN d.value END) AS avg_deal_size,
      COUNT(DISTINCT d.person_id) AS unique_contacts,
      COUNT(DISTINCT d.org_id) AS unique_organizations
    FROM nekt_raw.pipedrive_users u
    LEFT JOIN nekt_raw.pipedrive_deals d ON u.id = d.owner_id
      AND d.add_time >= current_date - interval '90' day
    WHERE u.active_flag = true
      AND NOT COALESCE(d.is_deleted, false)
    GROUP BY u.id, u.name, u.email
  ),
  team_benchmarks AS (
    SELECT 
      AVG(won_revenue) AS avg_team_revenue,
      AVG(CAST(won_deals AS DOUBLE) / NULLIF(total_deals, 0)) AS avg_team_win_rate,
      AVG(avg_deal_size) AS avg_team_deal_size
    FROM rep_metrics
    WHERE total_deals > 0
  )
  SELECT 
    rm.rep_name,
    rm.email,
    rm.total_deals,
    rm.won_deals,
    rm.lost_deals,
    rm.open_deals,
    rm.won_revenue,
    rm.pipeline_value,
    ROUND(
      CAST(rm.won_deals AS DOUBLE) / NULLIF(rm.total_deals, 0) * 100, 2
    ) AS win_rate_percentage,
    rm.avg_deal_size,
    rm.unique_contacts,
    rm.unique_organizations,
    CASE 
      WHEN rm.won_revenue > tb.avg_team_revenue THEN 'Above Average'
      WHEN rm.won_revenue = tb.avg_team_revenue THEN 'Average'
      ELSE 'Below Average'
    END AS revenue_performance,
    ROUND(
      rm.won_revenue / NULLIF(tb.avg_team_revenue, 0) * 100, 2
    ) AS revenue_vs_team_avg_percent
  FROM rep_metrics rm
  CROSS JOIN team_benchmarks tb
  WHERE rm.total_deals > 0
  ORDER BY rm.won_revenue DESC
  ```
</Accordion>

### 3. Lead Source & Channel Performance

Analyze the effectiveness of different lead generation channels.

**Business Value:**

* Optimize marketing spend allocation
* Identify most profitable lead sources
* Improve lead qualification processes

<Accordion title="SQL code" defaultOpen>
  ```sql lines theme={null}
  WITH
  	channel_mapping AS (
  		SELECT
  			CAST(option.id AS INT) AS channel_id,
  			option.label AS channel_name
  		FROM
  			nekt_raw.pipedrive_deal_fields pdf,
  			UNNEST (pdf.options) AS t (option)
  		WHERE
  			pdf.key = 'channel'
  	),
  	lead_performance AS (
  		SELECT
  			COALESCE(d.origin, 'Unknown') AS lead_source,
  			COALESCE(d.channel, 0) AS channel_id,
  			COUNT(DISTINCT d.id) AS total_leads,
  			COUNT(
  				DISTINCT CASE
  					WHEN d.status = 'won' THEN d.id
  				END
  			) AS won_deals,
  			COUNT(
  				DISTINCT CASE
  					WHEN d.status = 'lost' THEN d.id
  				END
  			) AS lost_deals,
  			COUNT(
  				DISTINCT CASE
  					WHEN d.status = 'open' THEN d.id
  				END
  			) AS open_deals,
  			SUM(d.value) AS total_pipeline_value,
  			SUM(
  				CASE
  					WHEN d.status = 'won' THEN d.value
  					ELSE 0
  				END
  			) AS won_revenue,
  			SUM(
  				CASE
  					WHEN d.status = 'open' THEN d.value
  					ELSE 0
  				END
  			) AS open_pipeline_value,
  			AVG(
  				CASE
  					WHEN d.status = 'won' THEN d.value
  				END
  			) AS avg_won_deal_size,
  			AVG(
  				CASE
  					WHEN d.status IN ('won', 'lost') THEN DATE_DIFF (
  						'day',
  						d.add_time,
  						COALESCE(d.won_time, d.lost_time)
  					)
  				END
  			) AS avg_sales_cycle_days,
  			MIN(d.add_time) AS first_lead_date,
  			MAX(d.add_time) AS last_lead_date
  		FROM
  			nekt_raw.pipedrive_deals d
  		WHERE
  			d.add_time >= CURRENT_DATE - interval '180' DAY
  			AND NOT d.is_deleted
  		GROUP BY
  			COALESCE(d.origin, 'Unknown'),
  			COALESCE(d.channel, 0)
  	),
  	source_metrics AS (
  		SELECT
  			lead_source,
  			channel_id,
  			total_leads,
  			won_deals,
  			lost_deals,
  			open_deals,
  			ROUND(
  				CAST(won_deals AS DOUBLE) / NULLIF(total_leads, 0) * 100,
  				2
  			) AS conversion_rate_percent,
  			ROUND(
  				CAST(won_deals AS DOUBLE) / NULLIF(won_deals + lost_deals, 0) * 100,
  				2
  			) AS win_rate_percent,
  			total_pipeline_value,
  			won_revenue,
  			open_pipeline_value,
  			avg_won_deal_size,
  			avg_sales_cycle_days,
  			ROUND(won_revenue / NULLIF(total_leads, 0), 2) AS revenue_per_lead,
  			DATE_FORMAT (first_lead_date, '%Y-%m-%d') AS first_lead_date,
  			DATE_FORMAT (last_lead_date, '%Y-%m-%d') AS last_lead_date
  		FROM
  			lead_performance
  	)
  SELECT
  	sm.lead_source,
  	sm.channel_id,
  	COALESCE(cm.channel_name, 'Unknown') AS channel_name,
  	sm.total_leads,
  	sm.won_deals,
  	sm.lost_deals,
  	sm.open_deals,
  	sm.conversion_rate_percent,
  	sm.win_rate_percent,
  	FORMAT ('$%,.0f', CAST(sm.won_revenue AS DOUBLE)) AS won_revenue,
  	FORMAT ('$%,.0f', CAST(sm.open_pipeline_value AS DOUBLE)) AS open_pipeline_value,
  	FORMAT ('$%,.0f', CAST(sm.avg_won_deal_size AS DOUBLE)) AS avg_won_deal_size,
  	sm.avg_sales_cycle_days,
  	FORMAT ('$%,.2f', CAST(sm.revenue_per_lead AS DOUBLE)) AS revenue_per_lead,
  	sm.first_lead_date,
  	sm.last_lead_date,
  	RANK() OVER (
  		ORDER BY
  			sm.conversion_rate_percent DESC
  	) AS conversion_rank,
  	RANK() OVER (
  		ORDER BY
  			sm.revenue_per_lead DESC
  	) AS revenue_efficiency_rank,
  	CASE
  		WHEN sm.total_leads >= 50
  		AND sm.conversion_rate_percent >= 15 THEN 'High Performance'
  		WHEN sm.total_leads >= 20
  		AND sm.conversion_rate_percent >= 10 THEN 'Good Performance'
  		WHEN sm.total_leads >= 10
  		AND sm.conversion_rate_percent >= 5 THEN 'Average Performance'
  		ELSE 'Needs Improvement'
  	END AS performance_category
  FROM
  	source_metrics sm
  	LEFT JOIN channel_mapping cm ON sm.channel_id = cm.channel_id
  WHERE
  	sm.total_leads >= 5
  ORDER BY
  	sm.revenue_per_lead DESC,
  	sm.conversion_rate_percent DESC
  ```
</Accordion>

## Customer Relationship Management

### 4. Customer Lifecycle Analysis

Understand customer journey from first contact to closed deal.

**Business Value:**

* Optimize customer touchpoints and engagement strategies
* Identify patterns in successful customer journeys
* Improve customer experience and retention

<Accordion title="SQL code" defaultOpen>
  ```sql lines theme={null}
  WITH
  	customer_journey AS (
  		SELECT
  			d.id AS deal_id,
  			d.title AS deal_title,
  			d.value AS deal_value,
  			d.status,
  			p.name AS contact_name,
  			o.name AS organization_name,
  			u.name AS owner_name,
  			d.add_time AS deal_created,
  			d.won_time,
  			d.lost_time,
  			COALESCE(d.won_time, d.lost_time, d.update_time) AS deal_closed,
  			DATE_DIFF (
  				'day',
  				d.add_time,
  				COALESCE(d.won_time, d.lost_time, CURRENT_TIMESTAMP)
  			) AS days_in_pipeline,
  			COUNT(a.id) AS total_activities,
  			COUNT(
  				CASE
  					WHEN a.done = TRUE THEN a.id
  				END
  			) AS completed_activities,
  			COUNT(n.id) AS total_notes,
  			COUNT(
  				CASE
  					WHEN a.type = 'call'
  					AND a.done = TRUE THEN a.id
  				END
  			) AS calls_made,
  			COUNT(
  				CASE
  					WHEN a.type = 'meeting'
  					AND a.done = TRUE THEN a.id
  				END
  			) AS meetings_held,
  			MIN(a.add_time) AS first_activity,
  			MAX(a.add_time) AS last_activity
  		FROM
  			nekt_raw.pipedrive_deals d
  			LEFT JOIN nekt_raw.pipedrive_persons p ON d.person_id = p.id
  			LEFT JOIN nekt_raw.pipedrive_organizations o ON d.org_id = o.id
  			LEFT JOIN nekt_raw.pipedrive_users u ON d.owner_id = u.id
  			LEFT JOIN nekt_raw.pipedrive_activities a ON d.id = a.deal_id
  			AND NOT a.is_deleted
  			LEFT JOIN nekt_raw.pipedrive_notes n ON d.id = n.deal_id
  			AND n.active_flag = TRUE
  		WHERE
  			d.add_time >= CURRENT_DATE - interval '180' DAY
  			AND NOT d.is_deleted
  		GROUP BY
  			d.id,
  			d.title,
  			d.value,
  			d.status,
  			p.name,
  			o.name,
  			u.name,
  			d.add_time,
  			d.won_time,
  			d.lost_time,
  			d.update_time
  	),
  	journey_segments AS (
  		SELECT
  			*,
  			CASE
  				WHEN days_in_pipeline <= 7 THEN 'Quick (≤7 days)'
  				WHEN days_in_pipeline <= 30 THEN 'Standard (8-30 days)'
  				WHEN days_in_pipeline <= 90 THEN 'Extended (31-90 days)'
  				ELSE 'Long (>90 days)'
  			END AS sales_cycle_segment,
  			CASE
  				WHEN total_activities = 0 THEN 'No Engagement'
  				WHEN total_activities <= 5 THEN 'Low Engagement'
  				WHEN total_activities <= 15 THEN 'Medium Engagement'
  				ELSE 'High Engagement'
  			END AS engagement_level
  		FROM
  			customer_journey
  	)
  SELECT
  	deal_title,
  	contact_name,
  	organization_name,
  	owner_name,
  	FORMAT ('$%,.0f', CAST(deal_value AS DOUBLE)) AS deal_value_formatted,
  	status,
  	sales_cycle_segment,
  	engagement_level,
  	days_in_pipeline,
  	total_activities,
  	completed_activities,
  	calls_made,
  	meetings_held,
  	total_notes,
  	ROUND(
  		CAST(completed_activities AS DOUBLE) / NULLIF(total_activities, 0) * 100,
  		2
  	) AS activity_completion_rate,
  	DATE_FORMAT (deal_created, '%Y-%m-%d') AS deal_created_date,
  	DATE_FORMAT (first_activity, '%Y-%m-%d') AS first_activity_date,
  	DATE_FORMAT (last_activity, '%Y-%m-%d') AS last_activity_date
  FROM
  	journey_segments
  ORDER BY
  	deal_value DESC,
  	days_in_pipeline DESC
  ```
</Accordion>

### 5. Revenue Forecasting & Pipeline Health

Predict future revenue and assess pipeline quality.

**Business Value:**

* Accurate revenue forecasting for business planning
* Early identification of pipeline gaps
* Data-driven sales target setting

<Accordion title="SQL code" defaultOpen>
  ```sql lines theme={null}
  WITH
  	monthly_forecast AS (
  		SELECT
  			DATE_TRUNC ('month', d.expected_close_date) AS forecast_month,
  			COUNT(DISTINCT d.id) AS deals_count,
  			SUM(d.value) AS total_pipeline_value,
  			SUM(d.value * d.probability / 100.0) AS weighted_forecast,
  			AVG(d.probability) AS avg_probability,
  			COUNT(
  				CASE
  					WHEN d.probability >= 80 THEN d.id
  				END
  			) AS high_probability_deals,
  			COUNT(
  				CASE
  				WHEN d.probability BETWEEN 50 AND 79 THEN d.id
  				END
  			) AS medium_probability_deals,
  			COUNT(
  				CASE
  					WHEN d.probability < 50 THEN d.id
  				END
  			) AS low_probability_deals,
  			SUM(
  				CASE
  					WHEN d.probability >= 80 THEN d.value
  					ELSE 0
  				END
  			) AS high_prob_value,
  			SUM(
  				CASE
  				WHEN d.probability BETWEEN 50 AND 79 THEN d.value
  					ELSE 0
  				END
  			) AS medium_prob_value,
  			SUM(
  				CASE
  					WHEN d.probability < 50 THEN d.value
  					ELSE 0
  				END
  			) AS low_prob_value
  		FROM
  			nekt_raw.pipedrive_deals d
  		WHERE
  			d.status = 'open'
  			AND NOT d.is_deleted
  			AND d.expected_close_date IS NOT NULL
  			AND d.expected_close_date >= CURRENT_DATE
  			AND d.expected_close_date <= CURRENT_DATE + interval '12' MONTH
  		GROUP BY
  			DATE_TRUNC ('month', d.expected_close_date)
  	),
  	historical_performance AS (
  		SELECT
  			DATE_TRUNC ('month', d.won_time) AS won_month,
  			COUNT(DISTINCT d.id) AS historical_won_deals,
  			SUM(d.value) AS historical_won_revenue,
  			AVG(d.value) AS avg_historical_deal_size
  		FROM
  			nekt_raw.pipedrive_deals d
  		WHERE
  			d.status = 'won'
  			AND NOT d.is_deleted
  			AND d.won_time >= CURRENT_DATE - interval '12' MONTH
  		GROUP BY
  			DATE_TRUNC ('month', d.won_time)
  	)
  SELECT
  	DATE_FORMAT (mf.forecast_month, '%Y-%m') AS forecast_month,
  	mf.deals_count,
  	FORMAT ('$%,.0f', CAST(mf.total_pipeline_value AS DOUBLE)) AS total_pipeline_value,
  	FORMAT ('$%,.0f', CAST(mf.weighted_forecast AS DOUBLE)) AS weighted_forecast,
  	ROUND(mf.avg_probability, 2) AS avg_probability_percent,
  	mf.high_probability_deals,
  	mf.medium_probability_deals,
  	mf.low_probability_deals,
  	FORMAT ('$%,.0f', CAST(mf.high_prob_value AS DOUBLE)) AS high_prob_value,
  	FORMAT ('$%,.0f', CAST(mf.medium_prob_value AS DOUBLE)) AS medium_prob_value,
  	FORMAT ('$%,.0f', CAST(mf.low_prob_value AS DOUBLE)) AS low_prob_value,
  	COALESCE(hp.historical_won_deals, 0) AS last_year_won_deals,
  	FORMAT (
  		'$%,.0f',
  		CAST(COALESCE(hp.historical_won_revenue, 0) AS DOUBLE)
  	) AS last_year_won_revenue,
  	CASE
  		WHEN hp.historical_won_revenue > 0 THEN ROUND(
  			mf.weighted_forecast / hp.historical_won_revenue * 100,
  			2
  		)
  		ELSE NULL
  	END AS forecast_vs_historical_percent
  FROM
  	monthly_forecast mf
  	LEFT JOIN historical_performance hp ON mf.forecast_month = hp.won_month + interval '12' MONTH
  ORDER BY
  	mf.forecast_month
  ```
</Accordion>

## Implementation Notes

### Data Quality Considerations

* Ensure consistent data entry for lead sources and channels
* Regularly clean up duplicate contacts and organizations
* Validate date fields for accurate timeline analysis
* Monitor custom field usage for comprehensive reporting

### Automation Opportunities

* Schedule these queries to run daily/weekly for dashboard updates
* Set up alerts for significant changes in key metrics
* Implement automated reporting for sales team performance reviews

These SQL transformations provide a solid foundation for comprehensive Pipedrive analytics. Customize the date ranges, filters, and metrics based on your specific business requirements and reporting needs.

## Skills for agents

<Snippet file="agent-skills-intro.mdx" />

<Card title="Download Pipedrive skills file" icon="wand-magic-sparkles" href="/sources/pipedrive.md">
  Pipedrive connector documentation as plain markdown, for use in AI agent contexts.
</Card>
