> ## Documentation Index
> Fetch the complete documentation index at: https://docs.nekt.com/llms.txt
> Use this file to discover all available pages before exploring further.

# Oracle as a data source

> Bring data from Oracle to Nekt.

Oracle is a multinational technology company that provides database management systems, cloud services, and enterprise software solutions. Its database platform is widely used for enterprise applications, data warehousing, and business intelligence, offering robust data management and analytics capabilities.

<img width="200px" src="https://mintcdn.com/nekt/3MFKt2g7jzqFpztO/assets/logo/logo-oracle.png?fit=max&auto=format&n=3MFKt2g7jzqFpztO&q=85&s=46bdae74d15c750ab13f80d395ade809" data-path="assets/logo/logo-oracle.png" />

## Required pre-work

Before connecting Nekt to a database, you need to complete some pre-work to ensure access is granted in a secure way.

<AccordionGroup>
  <Accordion title="Network Configuration (AWS RDS)">
    * Establish a peering connection between Nekt VPC and database VPC
      * On your AWS Console, access the VPC service and go to [Peering Connection](https://us-east-1.console.aws.amazon.com/vpcconsole/home?region=us-east-1#PeeringConnections:). Make sure you are logged in with the account that contains the database you want to connect with.
      * Select **Create peering connection**
      * Set it up as follows:
        * Give your connection a name (something like 'Nekt \<> Database')
        * Select the `nekt-vpc` ID as the requester (in the VPC ID field)
        * Choose 'Another account' in the **Select another VPC to peer with** section and enter the Account ID.
          * To find the Account ID, go to the RDS service (you'll find it by searching in the AWS Console).
          * Click on **DB Instances**
          * Select the desired database
          * Copy the Account ID associated with this database.
        * Enter the VPC ID (Accepter)
          * On the desired database's details page, in the Connectivity & Security section, click on the VPC.
          * Copy the VPC ID
        * Click on **Create peering connection**. You'll notice the status is 'Pending acceptance'.
      * Go to [Peering Connection](https://us-east-1.console.aws.amazon.com/vpcconsole/home?region=us-east-1#PeeringConnections:) again; your new peering connection should be listed, still pending acceptance.
      * On the **Actions** menu, click on **Accept request** and confirm.
      * Rename the peering connection to 'Nekt \<> Database' to keep the naming consistent.
      * **Nekt VPC**
        * Access the created peering connection, which should now have the status 'Active' and a value under **Accepter CIDRs**. Copy this value; it is the database VPC CIDR that Nekt will route to.
        * In the VPC dashboard menu, go to Route Tables
        * In every route table with 'nekt' in its name, follow these steps:
          * Click on the **Routes** tab
          * Click on **Edit routes**
          * Click on **Add route**
          * In the 'Destination' column, paste the **Accepter CIDRs** value copied earlier
          * In the 'Target' column, choose 'Peering Connection' and select the `Nekt <> Database` peering connection in the field that opens
          * Keep the 'Status' and 'Propagated' columns at their defaults
          * Click on **Save changes**
      * **Database VPC**
        * Repeat the process done for the Nekt VPC route tables, but use the **Requester CIDRs** value from the peering connection (the Nekt VPC CIDR) as the Destination
    * Search for RDS on your AWS Console and access it.
    * Select your database and go to Connectivity & Security.
    * Click on **VPC security groups**.
    * Select your DB security group and go to the **Inbound rules** tab.
    * Click on **Edit inbound rules** and add the following inbound rule to the security group:
      * Type: Oracle-RDS
      * Source: `Custom`, with the Nekt VPC CIDR (the **Requester CIDRs** value on the peering connection) as the value
      * Add a description to identify it more easily, such as 'Nekt'
      * Click on **Save rules**

    Done! With that, you are ready to follow the next steps and connect Nekt with your database hosted on AWS through the interface of our application.
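
    The console walkthrough above can also be scripted. Below is a sketch using the AWS CLI; every ID, account number, and CIDR is a placeholder to replace with your own values, and each step must run with the appropriate account's credentials.

    ```bash theme={null}
    # 1. Request the peering connection (run from the account that owns nekt-vpc)
    aws ec2 create-vpc-peering-connection \
      --vpc-id vpc-0nekt00000000000 \
      --peer-vpc-id vpc-0database0000000 \
      --peer-owner-id 123456789012

    # 2. Accept the request (run from the database account)
    aws ec2 accept-vpc-peering-connection \
      --vpc-peering-connection-id pcx-0example00000000

    # 3. Add a route to the peered CIDR in each relevant route table (both sides)
    aws ec2 create-route \
      --route-table-id rtb-0example00000000 \
      --destination-cidr-block 10.0.0.0/16 \
      --vpc-peering-connection-id pcx-0example00000000

    # 4. Allow Oracle traffic (port 1521) from the Nekt VPC CIDR on the DB security group
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0example000000000 \
      --protocol tcp --port 1521 \
      --cidr 10.0.0.0/16
    ```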
  </Accordion>

  <Accordion title="Network Configuration (Non-AWS)">
    * Ask Nekt to set up a fixed IP for your connection in our AWS infrastructure.
    * In your database provider, give access to the IP provided by Nekt.

    Done! With that, you are ready to follow the next steps and connect Nekt with your database through the interface of our application.
  </Accordion>

  <Accordion title="Log-Based Replication Setup (Oracle LogMiner)">
    <Warning>
      Log-based replication requires additional database configuration. This setup must be completed by a database administrator before enabling LOG\_BASED sync in Nekt.
    </Warning>

    Log-based replication uses Oracle LogMiner to capture data changes (inserts, updates, deletes) in near real-time. This is the most efficient method for capturing changes without impacting source system performance.

    ### Prerequisites

    1. **Archive Log Mode**: The database must be in ARCHIVELOG mode

    2. **Supplemental Logging**: Must be enabled for the tables you want to replicate

    3. **User Permissions**: The extraction user needs specific Oracle privileges

    ### Standard Oracle Setup

    Connect to your Oracle database as SYSDBA and execute:

    ```sql theme={null}
    -- 1. Enable archive log mode (requires database restart)
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;

    -- 2. Enable supplemental logging at database level
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

    -- 3. Grant LogMiner permissions to your extraction user
    GRANT EXECUTE ON DBMS_LOGMNR TO your_username;
    GRANT SELECT ON V_$LOGMNR_CONTENTS TO your_username;
    GRANT SELECT ON V_$LOGMNR_LOGS TO your_username;
    GRANT SELECT ON V_$ARCHIVED_LOG TO your_username;
    GRANT SELECT ON V_$LOG TO your_username;
    GRANT SELECT ON V_$LOGFILE TO your_username;
    GRANT SELECT ON V_$DATABASE TO your_username;

    -- 4. For specific tables, enable supplemental logging
    ALTER TABLE schema.table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    ```

    ### AWS RDS Oracle Setup

    For AWS RDS Oracle instances, use the RDS admin procedures:

    ```sql theme={null}
    -- 1. Enable supplemental logging
    exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action=>'ADD');

    -- 2. Enable force logging
    exec rdsadmin.rdsadmin_util.force_logging(p_enable => true);

    -- 3. Grant LogMiner access to your extraction user
    exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR', 'your_username', 'EXECUTE', true);
    exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR_D', 'your_username', 'EXECUTE', true);
    exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_CONTENTS', 'your_username', 'SELECT', true);
    exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_LOGS', 'your_username', 'SELECT', true);
    exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG', 'your_username', 'SELECT', true);
    exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOG', 'your_username', 'SELECT', true);
    exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGFILE', 'your_username', 'SELECT', true);
    exec rdsadmin.rdsadmin_util.grant_sys_object('V_$DATABASE', 'your_username', 'SELECT', true);
    ```

    Additionally, ensure your RDS instance has **backup retention period > 0** to enable archive logs:

    ```bash theme={null}
    aws rds modify-db-instance --db-instance-identifier your-instance --backup-retention-period 1
    ```

    ### Oracle Multitenant (CDB/PDB) Setup

    For Oracle multitenant environments with Pluggable Databases:

    ```sql theme={null}
    -- Connect to CDB root as SYSDBA
    -- Enable supplemental logging at CDB level
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

    -- Grant permissions to common user (prefixed with C##)
    GRANT EXECUTE ON DBMS_LOGMNR TO C##your_username CONTAINER=ALL;
    GRANT SELECT ON V_$LOGMNR_CONTENTS TO C##your_username CONTAINER=ALL;
    -- ... (same grants as above with CONTAINER=ALL)

    -- Configure pdb_services in Nekt to discover PDB schemas
    ```

    ### Verification

    Verify your LogMiner setup:

    ```sql theme={null}
    -- Check if archivelog mode is enabled
    SELECT LOG_MODE FROM V$DATABASE;
    -- Should return: ARCHIVELOG

    -- Check supplemental logging status
    SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;
    -- Should show YES for at least MIN

    -- Check archive logs are being generated
    SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#, STATUS FROM V$ARCHIVED_LOG WHERE ROWNUM <= 5 ORDER BY SEQUENCE# DESC;
    ```
  </Accordion>
</AccordionGroup>

## Configuring Oracle as a Source

In the [Sources](https://app.nekt.ai/sources) tab, click on the "Add source" button located on the top right of your screen. Then, select the Oracle option from the list of connectors.

Click **Next** and you'll be prompted to add your access.

### 1. Add account access

Once you have completed the pre-work defined above, you can enter your database credentials.

The following configurations are available:

* **Host** (required): The hostname or IP address of your Oracle database server
* **Port** (required): The port for Oracle connection (default: `1521`)
* **User** (required): Database user for authentication
* **Password** (required): Password for authentication
* **Service Name** (required): Oracle service name for the connection (also referred to as schema name)
* **Pluggable Database Services** (optional): List of Oracle PDB service names for multitenant (CDB/PDB) environments
* **Thick Mode** (optional, default: `true`): Enable Oracle thick mode for enhanced performance. Required for LogMiner operations
* **Chunk Size** (optional, default: `25000`): Number of rows to fetch at a time. Reduce if your row data is too large
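
As a point of reference for how these fields fit together, Oracle clients typically address a database with an EZConnect string of the form `host:port/service_name`. A minimal sketch (the hostname and service name below are made-up examples, not defaults):

```python theme={null}
# Combine the Host, Port, and Service Name fields into an Oracle
# EZConnect descriptor. Values here are illustrative placeholders.
def build_dsn(host: str, port: int, service_name: str) -> str:
    return f"{host}:{port}/{service_name}"

dsn = build_dsn("db.example.internal", 1521, "ORCLPDB1")
print(dsn)  # db.example.internal:1521/ORCLPDB1
```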

**Advanced Configuration**

* **SID**: Alternative to Service Name for older Oracle configurations
* **Filter Schemas**: Array of schema names to include (if empty, all schemas are discovered)
* **Use batch query**: Enable keyset pagination with retry logic. Breaks large table extraction into smaller batches ordered by primary key, each with a fresh connection.
* **Invalid date handling**: How to handle Oracle date/timestamp values outside Python's representable year range (1–9999). Options are `coerce` (coerce to nearest valid boundary), `null` (convert to null), or `error` (raise error and stop pipeline).
* **SSH Tunnel**: Configuration for secure connections through a bastion server
* **SSL/TLS**: Enable encrypted connections with certificate configuration
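
To illustrate what **Use batch query** does conceptually: keyset pagination orders rows by primary key and fetches one batch at a time, resuming after the last key seen. A hypothetical sketch of such a query (table and key names are illustrative, and this is not the connector's exact SQL):

```python theme={null}
# Hypothetical keyset-pagination query builder: each batch resumes after
# the last primary-key value seen, using Oracle 12c+ FETCH FIRST syntax.
def keyset_batch_sql(table: str, pk: str, batch_size: int) -> str:
    return (
        f"SELECT * FROM {table} "
        f"WHERE {pk} > :last_seen_pk "
        f"ORDER BY {pk} "
        f"FETCH FIRST {batch_size} ROWS ONLY"
    )

print(keyset_batch_sql("SALES.ORDERS", "ORDER_ID", 25000))
```

Because each batch is bounded and ordered, a failed batch can be retried on a fresh connection without re-reading rows already extracted.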

Once you are done configuring, click **Next**.

### 2. Select streams

The next step is letting us know which streams you want to bring. You can select entire groups of streams or only a subset of them.

> Tip: You can find a stream more easily by typing its name.

Click **Next**.

### 3. Configure data streams

Customize how you want your data to appear in your catalog. Select a name for each table (which will contain the fetched data) and the type of sync.

* **Table name**: we suggest a name, but feel free to customize it. You have the option to add a **prefix** and make this process faster!
* **Sync Type**: you can choose between INCREMENTAL, FULL\_TABLE, and LOG\_BASED:
  * **Incremental**: Every time the extraction happens, we'll get only the new data based on a replication key column. Good for append-only tables or when you want to keep historical records.
  * **Full Table**: Every time the extraction happens, we'll get the current state of the data. Good if you don't want to have deleted data in your catalog or for small reference tables.
  * **Log Based**: Uses Oracle LogMiner to capture data changes (inserts, updates, deletes) from the database transaction logs. This is the most efficient method for capturing all changes including deletes, with minimal impact on source database performance.

<Note>
  Log-based replication requires additional database setup. See the "Log-Based Replication Setup" section in the pre-work above.
</Note>

**Log-Based Sync Details:**

Uses Oracle LogMiner to read transaction logs and capture:

* **INSERT**: New records with all column values
* **UPDATE**: Modified records with new values
* **DELETE**: Removed records (marked with `_sdc_deleted_at` timestamp)

*Key Features:*

* Near real-time change data capture (CDC)
* Captures deletes (not possible with incremental sync)
* Minimal impact on source database performance
* Supports Oracle Multitenant (CDB/PDB) environments
* AWS RDS Oracle compatibility with automatic fallback to archived logs

*System Columns Added:*

* `_sdc_lsn`: The Oracle System Change Number (SCN) when the change was committed
* `_sdc_deleted_at`: Timestamp when the record was deleted (null for inserts/updates)
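
As an illustration of how these system columns can be used downstream (the rows below are hypothetical, not real connector output):

```python theme={null}
# Hypothetical rows as they might land in the catalog after a LOG_BASED sync.
rows = [
    {"id": 1, "amount": 100, "_sdc_lsn": 5001, "_sdc_deleted_at": None},
    {"id": 2, "amount": 250, "_sdc_lsn": 5002,
     "_sdc_deleted_at": "2024-06-01T12:00:00Z"},
]

# Keep only live records by filtering out soft-deleted rows.
live = [r for r in rows if r["_sdc_deleted_at"] is None]

# Order changes by SCN to replay them in commit order.
ordered = sorted(rows, key=lambda r: r["_sdc_lsn"])

print([r["id"] for r in live])  # [1]
```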

Click **Next**.

### 4. Configure data source

Describe your data source for easy identification within your organization. You can inform things like what data it brings, to which team it belongs, etc.

To define your [Trigger](https://docs.nekt.com/get-started/core-concepts/triggers), consider how often you want data to be extracted from this source. This decision usually depends on how frequently you need the new table data updated (every day, once a week, or only at specific times).

Click **Done** to finalize the setup.

### 5. Check your new source

You can view your new source on the [Sources](https://app.nekt.ai/sources) page. Before it shows up in your [Catalog](https://app.nekt.ai/catalog), the pipeline needs to run at least once. Monitor its execution and completion on the [Sources](https://app.nekt.ai/sources) page; if needed, trigger the pipeline manually by clicking the refresh icon. Once it has run, your new table will appear in the Catalog section.

<Warning>For you to be able to see it on your [Catalog](https://app.nekt.ai/catalog), you need at least one successful source run.</Warning>

# Streams and Fields

Because Oracle is a relational database, the streams and fields correspond directly to the tables, views, and columns available in the schemas you have granted access to.

During the setup process, Nekt will automatically discover all accessible tables and views. You will be able to select specifically which ones you want to sync into your catalog. The data types from Oracle (e.g., `VARCHAR2`, `NUMBER`, `DATE`, `TIMESTAMP`) will be automatically mapped to standard Nekt data types during the extraction.

## Implementation Notes

### Oracle Multitenant Support

For Oracle 12c+ multitenant architecture (CDB/PDB), configure the `pdb_services` option with your PDB service names. The connector will:

* Connect to each PDB to discover schemas and tables
* Properly switch containers during extraction
* Handle LogMiner operations across the multitenant environment

### Column Name Sanitization

To ensure compatibility with data warehouses like BigQuery, column names and primary keys are automatically sanitized during extraction. Any characters that are not letters, numbers, or underscores (such as `$`) are replaced with underscores. Additionally, if a column name starts with a number, an underscore is prepended.
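
A minimal sketch of this rule (the connector's exact implementation may differ):

```python theme={null}
import re

def sanitize_column(name: str) -> str:
    """Replace characters outside [A-Za-z0-9_] with '_' and
    prefix an underscore if the name starts with a digit."""
    cleaned = re.sub(r"[^A-Za-z0-9_]", "_", name)
    if cleaned and cleaned[0].isdigit():
        cleaned = "_" + cleaned
    return cleaned

print(sanitize_column("ORDER$ID"))    # ORDER_ID
print(sanitize_column("2024_SALES"))  # _2024_SALES
```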

### Performance Considerations

| Setting         | Recommendation                                                                      |
| --------------- | ----------------------------------------------------------------------------------- |
| Chunk Size      | Reduce from 25,000 if memory issues occur with wide tables                          |
| Use Batch Query | Enable for very large tables to use keyset pagination and avoid connection timeouts |
| Thick Mode      | Keep enabled (`true`) for LogMiner and optimal performance                          |
| Filter Schemas  | Specify schemas to reduce discovery time on large databases                         |

### Troubleshooting

| Issue                               | Solution                                                                                                                                                                                                   |
| ----------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| LogMiner fails to start             | Verify supplemental logging is enabled and user has required grants                                                                                                                                        |
| No changes captured                 | Check archive logs exist and haven't been purged                                                                                                                                                           |
| AWS RDS permission denied           | Use `rdsadmin.rdsadmin_util` procedures for grants                                                                                                                                                         |
| SCN gaps in data                    | Normal behavior - LogMiner processes committed transactions only                                                                                                                                           |
| Slow extraction                     | Enable thick mode, adjust chunk\_size, filter to needed schemas, or enable Use Batch Query                                                                                                                 |
| Dates arriving in unexpected format | The connector attempts to set Oracle NLS variables (`NLS_LANG`, `NLS_DATE_FORMAT`, etc.) automatically. If initialization fails, dates might fall back to strings. Check your Instant Client installation. |

> If you encounter any issues, reach out to us via Slack, and we'll gladly assist you!

## Skills for agents

<Snippet file="agent-skills-intro.mdx" />

<Card title="Download Oracle skills file" icon="wand-magic-sparkles" href="/sources/oracle.md">
  Oracle connector documentation as plain markdown, for use in AI agent contexts.
</Card>
