> ## Documentation Index
> Fetch the complete documentation index at: https://docs.nekt.com/llms.txt
> Use this file to discover all available pages before exploring further.

# Runrunit as a data source

> Bring data from Runrunit to Nekt.

[Runrun.it](https://runrun.it) (Runrunit) is a work-management platform for planning tasks, projects, and teams. The connector extracts clients, projects, tasks, task attachments, dashboards, and users from the [Runrun.it API](https://runrun.it/api/documentation).

<img height="50" src="https://mintcdn.com/nekt/0tn1_nwKYqAHn7jo/assets/logo/logo-runrunit.png?fit=max&auto=format&n=0tn1_nwKYqAHn7jo&q=85&s=e8fabf5fa698a7aa1f112af414f21ea5" data-path="assets/logo/logo-runrunit.png" />

## Configuring Runrunit as a Source

In the [Sources](https://app.nekt.ai/sources) tab, click on the "Add source" button located on the top right of your screen. Then, select the Runrunit option from the list of connectors.

Click **Next** and you'll be prompted to add your access.

### 1. Add account access

Connect using credentials from your Runrun.it workspace. See the [API documentation](https://runrun.it/api/documentation#readme) for where to obtain an application key and user token.

The following configurations are available:

* **Application Key** (`app_key`): The application key associated with your Runrun.it workspace.

* **User Token** (`user_token`): A user token for API calls. Ensure this user has access to the data you need to extract.
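As a rough sketch of how these two credentials are used, the Runrun.it API documentation describes passing them as HTTP headers on every call. The header names below follow that documentation, but the endpoint path is illustrative only; check the [API reference](https://runrun.it/api/documentation) for the resources you need:

```python
# Sketch: both credentials travel as HTTP headers on every Runrun.it
# API request. The /users path is an illustrative example endpoint.
import json
from urllib import request

API_BASE = "https://runrun.it/api/v1.0"

def build_headers(app_key: str, user_token: str) -> dict:
    """Assemble the authentication headers for a Runrun.it API call."""
    return {
        "App-Key": app_key,
        "User-Token": user_token,
        "Content-Type": "application/json",
    }

def fetch_users(app_key: str, user_token: str):
    """Example call: list workspace users (requires valid credentials)."""
    req = request.Request(f"{API_BASE}/users",
                          headers=build_headers(app_key, user_token))
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Nekt sends these headers for you; the snippet only shows where each credential ends up if you want to verify access manually.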

Once you're done, click **Next**.

### 2. Select streams

Choose which data streams you want to sync. For faster extractions, select only the streams that are relevant to your analysis. You can select entire groups of streams or pick specific ones.

> Tip: You can find a stream more quickly by typing its name.

Select the streams and click **Next**.

### 3. Configure data streams

Customize how your data appears in your catalog: select the layer where the data will be placed, a folder to organize it within that layer, a name for each table (which will contain the fetched data), and the type of sync.

* **Layer**: choose among the existing layers in your catalog. This is where your newly extracted tables will appear once the extraction runs successfully.
* **Folder**: a folder can be created inside the selected layer to group all tables created from this data source.
* **Table name**: we suggest a name, but feel free to customize it. You can also add a **prefix** to all tables at once to speed up the process!
* **Sync Type**: choose between INCREMENTAL and FULL\_TABLE.
  * Incremental: each extraction fetches only new data, which is useful if, for example, you want to keep every record ever fetched.
  * Full table: each extraction fetches the current state of the data, which is useful if, for example, you don't want deleted records lingering in your catalog.
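The practical difference between the two sync types can be pictured with a simplified sketch (this is an illustration of the semantics, not Nekt's actual merge logic):

```python
# Simplified illustration of the two sync behaviors: incremental
# upserts new/changed records and keeps everything else, while
# full table replaces the table with the source's current state.

def incremental_sync(catalog: dict, extracted: dict) -> dict:
    """Upsert extracted records; records deleted at the source remain."""
    merged = dict(catalog)
    merged.update(extracted)
    return merged

def full_table_sync(catalog: dict, extracted: dict) -> dict:
    """Replace the table with the source's current state."""
    return dict(extracted)

catalog = {1: "open task", 2: "old task"}
source = {1: "open task (edited)", 3: "new task"}  # task 2 deleted at source

print(incremental_sync(catalog, source))  # task 2 is kept
print(full_table_sync(catalog, source))   # task 2 is gone
```

In short: pick incremental to preserve history, full table to mirror the source exactly.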

Once you are done configuring, click **Next**.

### 4. Configure data source

Describe your data source (up to 140 characters) so it's easy to identify within your organization.

To define your [Trigger](https://docs.nekt.com/get-started/core-concepts/triggers), consider how often you want data to be extracted from this source. This decision usually depends on how frequently you need the new table data updated (every day, once a week, or only at specific times).

Optionally, you can define some additional settings:

* Configure **Delta Log Retention** to determine how long we store old states of each table as it gets updated. Read more about this resource [here](https://docs.nekt.com/get-started/core-concepts/resource-control).
* Determine when to execute an **Additional [Full Sync](https://docs.nekt.com/get-started/core-concepts/types-of-sync#additional-full-sync)**. This will complement the incremental data extractions, ensuring that your data is completely synchronized with your source every once in a while.

Once you are ready, click **Next** to finalize the setup.

### 5. Check your new source

You can view your new source on the [Sources](https://app.nekt.ai/sources) page. If needed, manually trigger the source extraction by clicking on the arrow button. Once executed, your data will appear in your Catalog.

<Warning>To see your data in your [Catalog](https://app.nekt.ai/catalog), you need at least one successful source run.</Warning>

# Streams and Fields

Below you'll find all available data streams from Runrunit and their corresponding fields:

<AccordionGroup>
  <Accordion title="clients">
    Clients (customers) in the workspace, including budget, time, and cost rollups.

    **Key Fields:**

    * `id` - Unique identifier for the client
    * `name` - Client name
    * `custom_field` - Custom field value for the client
    * `is_visible` - Whether the client is visible for use
    * `project_ids` - Identifiers of projects linked to the client

    **Budget & time:**

    * `budgeted_hours_month`, `budgeted_cost_month` - Monthly budgeted hours and cost
    * `time_worked`, `time_pending_not_assigned`, `time_pending_queued`, `time_pending`, `time_total`, `time_progress` - Time tracking in seconds (and progress ratio)
    * `cost_worked`, `cost_pending`, `cost_total` - Cost rollups

    **Recent activity (seconds per day):**

    * `activities_0_days_ago` through `activities_6_days_ago`, `activities` - Worked time today and over the prior six days

    **Deprecated / nested:**

    * `project_groups` - Legacy project group records (`id`, `name`, `client_id`, `is_default`, `created_at`, `updated_at`)
    * `time_pending_backlog` - Deprecated pending backlog time in seconds
  </Accordion>

  <Accordion title="projects">
    Projects with schedule, board stage, task counts, and financial rollups.

    **Key Fields:**

    * `id` - Unique identifier for the project
    * `name` - Project name
    * `client_id` - Associated client identifier
    * `is_closed`, `is_active` - Lifecycle flags
    * `start_date`, `close_date`, `desired_date`, `estimated_delivery_date` - Schedule fields (string dates where applicable)

    **Structure:**

    * `project_group_id`, `project_sub_group_id` - Group and subgroup identifiers
    * `project_group_name`, `project_sub_group_name`, `project_group_is_default`, `project_sub_group_is_default` - Denormalized group labels
    * `client_name` - Denormalized client name
    * `board_stage_id`, `board_stage_name`, `board_stage_color` - Current board stage

    **Tasks & points:**

    * `tasks_count`, `tasks_not_assigned_count`, `tasks_queued_count`, `tasks_working_on_count`, `tasks_closed_count`, `tasks_backlog_count`
    * `tasks_count_progress` - Ratio of completed tasks
    * `task_points_sum`, `task_points_progress`, and point sums by status (`task_points_not_assigned_sum`, etc.)

    **Time & cost:**

    * `time_worked`, `time_pending`, `time_pending_not_assigned`, `time_pending_queued`, `time_total`, `time_progress`
    * `cost_worked`, `cost_pending`, `extra_costs`, `cost_total`, `cost_progress`, `budgeted_cost`
    * `overdue`, `over_budget` - Schedule and budget indicators
    * `activities_0_days_ago` through `activities_7_days_ago`, `activities` - Recent worked time in seconds
    * `time_pending_backlog` - Deprecated

    **Sharing & permissions:**

    * `is_public`, `is_shared`, `sharing_details`, `use_new_permissions`
    * `created_at` - When the project was created
  </Accordion>

  <Accordion title="tasks">
    Tasks including board placement, estimates, assignments, subtasks, and custom fields. The tap requests all statuses (including closed) via `bypass_status_default`.

    **Key Fields:**

    * `id`, `uid` - Task identifiers
    * `title` - Task title
    * `state` - Workflow state (for example `not_assigned`, `working_on`, `queued`, `closed`)
    * `is_closed`, `is_assigned`, `is_working_on`, `on_going`, `is_urgent`, `is_subtask`
    * `project_id`, `client_id` - Foreign keys
    * `user_id` - User who created the task
    * `responsible_id`, `responsible_name` - Primary responsible user
    * `created_at` - Creation timestamp

    **Board & type:**

    * `board_id`, `board_name`, `board_stage_id`, `board_stage_name`, `board_stage_description`, `board_stage_position`
    * `type_id`, `type_name`, `type_color` - Task type
    * `team_id`, `team_name` - Team when unassigned to a person
    * `queue_position` - Position on the assignee list
    * `workflow_id`, `task_state_id`, `task_state_name`, `task_status_id`, `task_status_name`

    **Dates & estimates:**

    * `desired_date`, `desired_date_with_time`, `desired_start_date`, `close_date`, `start_date`
    * `estimated_start_date`, `estimated_delivery_date`, `gantt_bar_start_date`, `gantt_bar_end_date`
    * `current_estimate_seconds`, `estimated_at`, `last_estimated_at`, `estimate_updated`, `estimated_delivery_date_updated`, `reestimate_count`
    * `scheduled_start_time`, `is_scheduled`
    * `stage_depart_estimated_at`, `board_remaining_time`
    * `parents_max_desired_date` - Latest desired date among prerequisites

    **Time & progress:**

    * `time_worked`, `time_pending`, `time_total`, `time_progress`, `current_worked_time`
    * `activities_0_days_ago` through `activities_7_days_ago`, `activities`

    **Client & project (denormalized):**

    * `client_name`, `project_name`, `project_group_name`, `project_group_id`, `project_group_is_default`
    * `project_sub_group_name`, `project_sub_group_id`, `project_sub_group_is_default`
    * `user_name` - Creator display name

    **Hierarchy & dependencies:**

    * `parent_task_id`, `parent_task_title`, `subtask_ids`, `subtasks_count`, `subtasks_closed_count`, `subtasks_count_progress`
    * `parent_ids`, `opened_parent_ids`, `child_ids` - Prerequisite and dependent task identifiers
    * `all_subtasks_time_worked`, `all_subtasks_time_total`, `all_subtasks_time_progress`, `all_subtasks_times_updating`
    * `current_level` - Depth in the hierarchy

    **Recurrence:**

    * `repetition_rule` - Object with `rrule_text`, `rrule_time`, `attributes_to_clone`, `board_stage_id`, `expected_next_occurrence_time`
    * `repetition_rule_id`

    **Assignments (array `assignments`):**

    * Per row: `id`, `task_id`, `assignee_id`, `assignee_name`, `team_id`, `team_name`, `queue_position`, `priority`
    * `current_estimate_seconds`, `time_worked`, `estimated_start_date`, `estimate_updated`, `start_date`, `close_date`, `is_closed`, `reestimate_count`, `is_working_on`
    * `automatic_time_worked_updated_at`, `assignee_avatar_url`, `assignee_avatar_large_url`, `time_worked_not_persisted`

    **Evaluations:**

    * `evaluation_status`, `approved`, `current_evaluator_id`
    * `evaluator_ids`, `pending_evaluator_ids`, `approved_evaluator_ids`, `rejected_evaluator_ids`

    **Tags & custom fields:**

    * `tags_data` - Array of `{ name, color }`
    * `tag_list`, `tags`, `task_tags`
    * `custom_fields` - Array of `{ id, name, label }` (normalized from API maps or lists)
    * `form_id`

    **Sharing & followers:**

    * `is_shared`, `sharing_details`, `follower_ids`

    **Miscellaneous:**

    * `attachments_count`, `checklist_id`, `points`, `was_reopened`, `overdue`
    * `priority` - Deprecated; prefer `queue_position`
  </Accordion>

  <Accordion title="task_documents">
    Files and documents attached to tasks. This stream is loaded **after** `tasks` (one API request per task). Relate rows to tasks using replication metadata from the parent stream where present, or using `attachable_id` / `attachable_type` when the attachment targets a task.

    **Key Fields:**

    * `id` - Document identifier
    * `type` - Record type classification
    * `attachable_id`, `attachable_type`, `attachable_name` - Resource the document is attached to

    **Files:**

    * `data_file_name`, `data_file_size`, `data_content_type` - Underlying data file metadata
    * `file_name`, `file_size`, `file_content_type`, `file_extension` - Display file metadata
    * `thumbnail_file_name`, `preview_file_name` - Preview assets
    * `uploaded_at`, `transfered` - Upload time and transfer completion flag

    **Uploader & remote storage:**

    * `uploader_id`, `uploader_name`
    * `remote_id`, `remote_icon_url` - Third-party storage identifiers
    * `is_shared`, `tags_data`, `has_approval_request`, `field_label`, `evaluations`
  </Accordion>

  <Accordion title="dashboards">
    User-owned dashboards.

    **Key Fields:**

    * `id` - Unique identifier for the dashboard
    * `name` - Dashboard name
    * `user_id` - Owner user identifier
  </Accordion>

  <Accordion title="users">
    Workspace users, roles, schedule, and preferences.

    **Key Fields:**

    * `id` - User identifier (string)
    * `name`, `email` - Display name and email
    * `avatar_url`, `avatar_large_url` - Profile images
    * `cost_hour` - Hourly cost
    * `created_at` - Account creation time

    **Roles & permissions:**

    * `is_master`, `is_manager`, `is_auditor` - Role flags
    * `can_create_client_project_and_task_types`, `can_create_boards`
    * `budget_manager` - May edit project extra costs
    * `is_eligible_to_access_reports`, `is_eligible_to_whatsapp`

    **Profile & availability:**

    * `time_zone`, `position`, `on_vacation`, `birthday`, `phone`, `gender`, `marital_status`
    * `in_company_since`, `language`
    * `is_certified`, `is_certified_expert` - Runrun.it certification flags
    * `is_mensurable` - RR Ratings participation
    * `team_ids`, `led_team_ids` - Team membership and leadership
    * `demanders_count`, `partners_count`, `has_all_users_as_partners`, `has_all_users_as_demanders`

    **Mobile & time tracking:**

    * `is_blocked_on_mobile`, `bypass_block_by_time_worked`, `blocked_by_time_worked_at`
    * `time_tracking_mode` - Deprecated; use enterprise setting

    **Shifts (array `shifts`):**

    * `weekday`, `work_day`, `shift_start`, `lunch_start`, `lunch_end`, `shift_end`, `work_time`

    **Preferences (object `preferences`):**

    * `theme`, `task_list_background_image_url`, `skip_time_adjust_on_task_assignment_deliver`, `skip_move_task_to_next_board_stage_suggestion`

    **Deprecated (prefer `preferences`):**

    * `theme`, `task_list_background_image_url`, `skip_time_adjust_on_task_assignment_deliver` at root level

    **Security metadata:**

    * `password_updated_at`, `password_expired_at`
    * `shift_work_time_per_week`
    * `alt_id`, `oid` - Internal identifiers
  </Accordion>
</AccordionGroup>

# Data Model

The following diagram summarizes how streams relate for typical analysis. Join keys follow the field names exposed in each stream.

```mermaid  theme={null}
graph TD;
    subgraph "Reference"
        Clients("clients");
        Users("users");
        Dashboards("dashboards");
    end

    subgraph "Work items"
        Projects("projects");
        Tasks("tasks");
        TaskDocuments("task_documents");
    end

    Projects -- "client_id" --> Clients;
    Tasks -- "client_id" --> Clients;
    Tasks -- "project_id" --> Projects;
    Tasks -- "user_id" --> Users;
    Tasks -- "responsible_id" --> Users;
    Dashboards -- "user_id" --> Users;
    TaskDocuments -- "parent task context / attachable_*" --> Tasks;
```

# Use Cases for Data Analysis

This section outlines a simple pattern for analyzing Runrunit task load by project. Adjust schema and table names to match your catalog (for example tables under `nekt_raw` with your chosen prefix).

### 1. Open tasks by project

**Business Value:**

* See how much work is still open per project
* Prioritize projects with many unclosed tasks

<Accordion title="SQL query">
  <Tabs>
    <Tab title="AWS">
      ```sql  theme={null}
      SELECT
         p.name AS project_name,
         c.name AS client_name,
         COUNT(*) FILTER (WHERE NOT t.is_closed) AS open_tasks,
         COUNT(*) FILTER (WHERE t.is_closed) AS closed_tasks
      FROM
         nekt_raw.runrunit_tasks t
         LEFT JOIN nekt_raw.runrunit_projects p ON t.project_id = p.id
         LEFT JOIN nekt_raw.runrunit_clients c ON t.client_id = c.id
      GROUP BY
         p.name,
         c.name
      ORDER BY
         open_tasks DESC
      ```
    </Tab>

    <Tab title="GCP">
      ```sql  theme={null}
      SELECT
         p.name AS project_name,
         c.name AS client_name,
         COUNTIF(NOT t.is_closed) AS open_tasks,
         COUNTIF(t.is_closed) AS closed_tasks
      FROM
         `nekt_raw.runrunit_tasks` t
         LEFT JOIN `nekt_raw.runrunit_projects` p ON t.project_id = p.id
         LEFT JOIN `nekt_raw.runrunit_clients` c ON t.client_id = c.id
      GROUP BY
         1,
         2
      ORDER BY
         open_tasks DESC
      ```
    </Tab>
  </Tabs>
</Accordion>

## Implementation Notes

### Sync behavior

* Streams use **full table** replication in the tap (no `replication_key`). Choosing INCREMENTAL vs FULL\_TABLE in Nekt still controls how the platform merges updates into your catalog tables.
* `task_documents` depends on `tasks`; include both streams if you need attachments.
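To relate attachment rows back to their tasks, the `attachable_id` / `attachable_type` pair on `task_documents` can serve as the join key. The sketch below uses an in-memory SQLite database with illustrative table names and an assumed `'Task'` value for `attachable_type`; adjust both to match your catalog and the values you actually observe:

```python
# Sketch: joining task_documents back to tasks via attachable_id /
# attachable_type. Table names and the 'Task' type value are
# illustrative; adapt them to your catalog.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE runrunit_tasks (id INTEGER, title TEXT);
CREATE TABLE runrunit_task_documents (
    id INTEGER, attachable_id INTEGER, attachable_type TEXT, file_name TEXT
);
INSERT INTO runrunit_tasks VALUES (10, 'Design review');
INSERT INTO runrunit_task_documents VALUES (1, 10, 'Task', 'spec.pdf');
""")

rows = con.execute("""
    SELECT t.title, d.file_name
    FROM runrunit_task_documents d
    JOIN runrunit_tasks t
      ON d.attachable_type = 'Task' AND d.attachable_id = t.id
""").fetchall()
print(rows)  # [('Design review', 'spec.pdf')]
```

The same join translates directly to your warehouse's SQL dialect, as in the use-case queries above.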

### API access

* Ensure the **User Token** belongs to a user with permission to read clients, projects, tasks, users, and dashboards you expect in the extract.
* Large workspaces can require many API requests for `tasks` and especially `task_documents`; select fewer streams or reduce the sync frequency if needed.

## Skills for agents

<Snippet file="agent-skills-intro.mdx" />

<Card title="Download Runrunit skills file" icon="wand-magic-sparkles" href="/sources/runrunit.md">
  Runrunit connector documentation as plain markdown, for use in AI agent contexts.
</Card>
