Oracle is a multinational technology company that provides database management systems, cloud services, and enterprise software solutions. Its database platform is widely used for enterprise applications, data warehousing, and business intelligence, offering robust data management and analytics capabilities.

0. Required pre-work

To connect Nekt to a database, you first need to do some pre-work to ensure access is granted securely.
  • Establish a peering connection between Nekt VPC and database VPC
    • On your AWS Console, open the VPC service and go to Peering connections. Make sure you are logged in with the account that contains the database you want to connect to.
    • Select Create peering connection
    • Set it up as follows:
      • Give your connection a name (something like ‘Nekt <> Database’)
      • Select the nekt-vpc ID as the requester (in the VPC ID field)
      • Choose ‘Another account’ in the Select another VPC to peer with section, then enter the Account ID.
        • To get the Account ID, go to the RDS service (you can find it by searching in the AWS Console).
        • Click on DB Instances
        • Select the desired database
        • Copy the Account ID associated with this database.
      • Enter the VPC ID (Accepter)
        • On the desired database’s details page, in the Connectivity & security section, click the VPC.
        • Copy the VPC ID
      • Click on Create peering connection. You’ll notice the status is ‘Pending acceptance’.
    • Go to Peering connections again; your new peering connection should be listed, still pending acceptance.
    • In the Actions menu, click Accept request and confirm it.
    • Rename your peering connection to ‘Nekt <> Database’ to keep the naming pattern.
    • Nekt VPC
      • Open the created peering connection, which should now have the status ‘Active’ and values in its CIDRs section. Copy the Accepter CIDRs value (the database VPC CIDR) and the Requester CIDRs value (the Nekt VPC CIDR); both are used below.
      • In the VPC dashboard menu, go to Route Tables
      • In every route table with ‘nekt’ in its name, follow these steps:
        • Click on the Routes tab
        • Click on Edit routes
        • Click on Add route
        • In the ‘Destination’ column, paste the database VPC CIDR (the Accepter CIDRs value copied earlier)
        • In the ‘Target’ column, choose ‘Peering Connection’, then select the Nekt <> Database connection in the field that appears
        • Keep the ‘Status’ and ‘Propagated’ columns at their defaults
        • Click Save changes
    • Database VPC
      • Repeat the process done for the Nekt VPC, but in the database VPC’s route tables use the Nekt VPC CIDR (the Requester CIDRs value) as the Destination
  • Search for RDS on your AWS Console and access it.
  • Select your database and go to Connectivity & Security.
  • Click on VPC security groups.
  • Select your DB security group and go to the Inbound rules tab
  • Click on Edit inbound rules
    • Add the following inbound rule to the security group:
      • Type: Oracle-RDS
      • Source: Custom, with the Nekt VPC CIDR (the Requester CIDRs value) as the value
      • Add a description to identify it, such as ‘Nekt’
      • Click Save rules
Done! With that, you are ready to follow the next steps and connect Nekt with your database hosted on AWS through the interface of our application.
Alternatively, if VPC peering is not an option (for example, if your database is not hosted on AWS), you can use a fixed IP instead:
  • Ask Nekt to create a fixed IP in your AWS infra.
  • In your database provider, give access to the IP provided by Nekt.
Done! With that, you are ready to follow the next steps and connect Nekt with your database through the interface of our application.

Log-Based Replication Setup

Log-based replication requires additional database configuration. This setup must be completed by a database administrator before enabling LOG_BASED sync in Nekt.
Log-based replication uses Oracle LogMiner to capture data changes (inserts, updates, deletes) in near real-time. This is the most efficient method for capturing changes without impacting source system performance.

Prerequisites

  1. Archive Log Mode: The database must be in ARCHIVELOG mode
  2. Supplemental Logging: Must be enabled for the tables you want to replicate
  3. User Permissions: The extraction user needs specific Oracle privileges

Standard Oracle Setup

Connect to your Oracle database as SYSDBA and execute:
-- 1. Enable archive log mode (requires database restart)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- 2. Enable supplemental logging at database level
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- 3. Grant LogMiner permissions to your extraction user
GRANT EXECUTE ON DBMS_LOGMNR TO your_username;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO your_username;
GRANT SELECT ON V_$LOGMNR_LOGS TO your_username;
GRANT SELECT ON V_$ARCHIVED_LOG TO your_username;
GRANT SELECT ON V_$LOG TO your_username;
GRANT SELECT ON V_$LOGFILE TO your_username;
GRANT SELECT ON V_$DATABASE TO your_username;
-- On Oracle 12c and later, the LOGMINING system privilege is also required
GRANT LOGMINING TO your_username;

-- 4. For specific tables, enable supplemental logging
ALTER TABLE schema.table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
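To confirm that table-level supplemental logging took effect, you can query Oracle's log group metadata. A minimal check, assuming your user can read DBA_LOG_GROUPS (use ALL_LOG_GROUPS otherwise) and replacing SCHEMA with your schema name:
-- Verify supplemental log groups for the tables you enabled
SELECT TABLE_NAME, LOG_GROUP_TYPE, ALWAYS FROM DBA_LOG_GROUPS WHERE OWNER = 'SCHEMA';
-- Expect LOG_GROUP_TYPE = 'ALL COLUMN LOGGING' for each replicated table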

AWS RDS Oracle Setup

For AWS RDS Oracle instances, use the RDS admin procedures:
-- 1. Enable supplemental logging
exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action=>'ADD');

-- 2. Enable force logging
exec rdsadmin.rdsadmin_util.force_logging(p_enable => true);

-- 3. Grant LogMiner access to your extraction user
exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR', 'your_username', 'EXECUTE', true);
exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR_D', 'your_username', 'EXECUTE', true);
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_CONTENTS', 'your_username', 'SELECT', true);
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_LOGS', 'your_username', 'SELECT', true);
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG', 'your_username', 'SELECT', true);
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOG', 'your_username', 'SELECT', true);
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGFILE', 'your_username', 'SELECT', true);
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$DATABASE', 'your_username', 'SELECT', true);
Additionally, ensure your RDS instance has a backup retention period greater than 0 so that archive logging is enabled:
aws rds modify-db-instance --db-instance-identifier your-instance --backup-retention-period 1
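Archived logs on RDS are purged after a retention window, and changes older than that window cannot be mined. You may want to extend the retention period so it comfortably covers your sync schedule; the 24 hours below is an assumption, adjust to your needs:
-- Keep archived logs available on the instance for 24 hours
begin
  rdsadmin.rdsadmin_util.set_configuration(
    name  => 'archivelog retention hours',
    value => '24');
end;
/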

Oracle Multitenant (CDB/PDB) Setup

For Oracle multitenant environments with Pluggable Databases:
-- Connect to CDB root as SYSDBA
-- Enable supplemental logging at CDB level
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- Grant permissions to common user (prefixed with C##)
GRANT EXECUTE ON DBMS_LOGMNR TO C##your_username CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO C##your_username CONTAINER=ALL;
-- ... (same grants as above with CONTAINER=ALL)

-- Configure pdb_services in Nekt to discover PDB schemas
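If the common user does not exist yet, here is a minimal sketch of creating one. C##NEKT and the password are placeholders; adapt them and the grants to your security standards:
-- Create a common user visible in all containers
CREATE USER C##NEKT IDENTIFIED BY "ChangeMe_123" CONTAINER=ALL;
GRANT CREATE SESSION TO C##NEKT CONTAINER=ALL;
GRANT SET CONTAINER TO C##NEKT CONTAINER=ALL;

-- List the PDBs that should be reachable via pdb_services
SELECT NAME, OPEN_MODE FROM V$PDBS;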

Verification

Verify your LogMiner setup:
-- Check if archivelog mode is enabled
SELECT LOG_MODE FROM V$DATABASE;
-- Should return: ARCHIVELOG

-- Check supplemental logging status
SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;
-- Should show YES for at least MIN

-- Check archive logs are being generated
SELECT * FROM (SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#, STATUS FROM V$ARCHIVED_LOG ORDER BY SEQUENCE# DESC) WHERE ROWNUM <= 5;
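As a final smoke test, you can run a short LogMiner session by hand. A sketch, assuming the online catalog as the dictionary source; the log file path is a placeholder, take a real NAME value from V$ARCHIVED_LOG:
-- Register one archived log file (placeholder path)
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/1_42_987654321.arc', OPTIONS => DBMS_LOGMNR.NEW);

-- Start LogMiner using the online catalog as the dictionary
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Any rows returned means the grants and logging setup are working
SELECT OPERATION, SEG_OWNER, TABLE_NAME FROM V$LOGMNR_CONTENTS WHERE ROWNUM <= 10;

-- Always end the session when done
EXECUTE DBMS_LOGMNR.END_LOGMNR;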

1. Add your Oracle access

  1. Once you have done the pre-work defined in section 0, you can enter your database access details. In the Sources tab, click the “Add source” button located at the top right of your screen. Then, select the Oracle option from the list of connectors.
  2. Click Next and you’ll be prompted to add your database access.
    • Host (required): The hostname or IP address of your Oracle database server
    • Port (required): The port for Oracle connection (default: 1521)
    • User (required): Database user for authentication
    • Password (required): Password for authentication
    • Service Name (required): Oracle service name for the connection (also referred to as the schema name; if you are unsure, see the lookup queries after these configuration steps)
    • Pluggable Database Services (optional): List of Oracle PDB service names for multitenant (CDB/PDB) environments
    • Thick Mode (optional, default: true): Enable Oracle thick mode for enhanced performance. Required for LogMiner operations
    • Chunk Size (optional, default: 25000): Number of rows to fetch at a time. Reduce if your row data is too large

    Advanced Configuration

    • SID: Alternative to Service Name for older Oracle configurations
    • Filter Schemas: Array of schema names to include (if empty, all schemas are discovered)
    • SSH Tunnel: Configuration for secure connections through a bastion server
    • SSL/TLS: Enable encrypted connections with certificate configuration
  3. Once you are done configuring, click Next.
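If you are unsure of the service name or SID, you can look them up on the database itself. A quick check, assuming you can already connect interactively (for example via sqlplus):
-- Service name of the current session (use this in the Service Name field)
SELECT SYS_CONTEXT('USERENV', 'SERVICE_NAME') FROM DUAL;

-- All services registered on the instance
SELECT NAME FROM V$SERVICES;

-- Instance SID, for older configurations that use the SID field
SELECT INSTANCE_NAME FROM V$INSTANCE;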

2. Select your Oracle DB streams

  1. The next step is letting us know which streams you want to bring. You can select entire groups of streams or only a subset of them.
    Tip: You can find a stream more quickly by typing its name.
  2. Click Next.

3. Configure your Oracle DB data streams

  1. Customize how you want your data to appear in your catalog. Select a name for each table (which will contain the fetched data) and the type of sync.
  • Table name: we suggest a name, but feel free to customize it. You have the option to add a prefix and make this process faster!
  • Sync Type: you can choose between INCREMENTAL, FULL_TABLE, and LOG_BASED:
    • Incremental: Every time the extraction happens, we’ll get only the new data based on a replication key column. Good for append-only tables or when you want to keep historical records.
    • Full Table: Every time the extraction happens, we’ll get the current state of the data. Good if you don’t want to have deleted data in your catalog or for small reference tables.
    • Log Based: Uses Oracle LogMiner to capture data changes (inserts, updates, deletes) from the database transaction logs. This is the most efficient method for capturing all changes including deletes, with minimal impact on source database performance.
Log-based replication requires additional database setup. See the “Log-Based Replication Setup” section in the pre-work above.
  2. Click Next.

Replication Methods

Full Table Sync

Complete table extraction on every sync. Best for:
  • Small reference/lookup tables
  • Tables where you need only the current state
  • Initial data loads

Incremental Sync

Extracts only new/modified rows based on a replication key (e.g., updated_at, id); a conceptual query is sketched after this list. Best for:
  • Large tables with timestamp columns
  • Append-only tables (logs, events)
  • When you want to preserve historical data
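Conceptually, each incremental run issues a query like the one below, where the bookmark is the highest replication-key value recorded on the previous run. Table and column names are illustrative:
-- Fetch only rows changed since the last bookmark
SELECT * FROM my_schema.orders WHERE updated_at > :last_bookmark ORDER BY updated_at;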

Log-Based Sync (Oracle LogMiner)

This is the recommended method for capturing all data changes including deletes.
Uses Oracle LogMiner to read transaction logs and capture:
  • INSERT: New records with all column values
  • UPDATE: Modified records with new values
  • DELETE: Removed records (marked with _sdc_deleted_at timestamp)
Key Features:
  • Near real-time change data capture (CDC)
  • Captures deletes (not possible with incremental sync)
  • Minimal impact on source database performance
  • Supports Oracle Multitenant (CDB/PDB) environments
  • AWS RDS Oracle compatibility with automatic fallback to archived logs
System Columns Added:
  • _sdc_lsn: The Oracle System Change Number (SCN) when the change was committed
  • _sdc_deleted_at: Timestamp when the record was deleted (null for inserts/updates)
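Downstream, these system columns make deletions easy to inspect. For example, assuming a synced table named orders in your catalog:
-- Rows deleted at the source since replication began
SELECT id, _sdc_lsn, _sdc_deleted_at FROM orders WHERE _sdc_deleted_at IS NOT NULL;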

4. Configure your Oracle DB data source

  1. Describe your data source for easy identification within your organization. You can include details such as what data it brings, which team it belongs to, etc.
  2. To define your Trigger, consider how often you want data to be extracted from this source. This decision usually depends on how frequently you need the new table data updated (every day, once a week, or only at specific times).

Check your new source!

  1. Click Done to finalize the setup. Once completed, you’ll receive confirmation that your new source is set up!
  2. You can view your new source on the Sources page. To see it in your Catalog, wait for the pipeline to run; you can monitor its execution and completion on the Sources page. If needed, manually trigger the pipeline by clicking the refresh icon. Once it has run, your new table will appear in the Catalog section.

Implementation Notes

Oracle Multitenant Support

For Oracle 12c+ multitenant architecture (CDB/PDB), configure the pdb_services option with your PDB service names. The tap will:
  • Connect to each PDB to discover schemas and tables
  • Properly switch containers during extraction
  • Handle LogMiner operations across the multitenant environment

Performance Considerations

  • Chunk Size: Reduce from 25,000 if memory issues occur with wide tables
  • Thick Mode: Keep enabled (true) for LogMiner and optimal performance
  • Filter Schemas: Specify schemas to reduce discovery time on large databases

Troubleshooting

  • LogMiner fails to start: Verify that supplemental logging is enabled and the user has the required grants
  • No changes captured: Check that archive logs exist and haven’t been purged
  • AWS RDS permission denied: Use the rdsadmin.rdsadmin_util procedures for grants
  • SCN gaps in data: Normal behavior; LogMiner processes committed transactions only
  • Slow extraction: Enable thick mode, adjust chunk_size, and filter to needed schemas
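For the ‘No changes captured’ case, a quick query on the source can confirm whether archived logs are still available (not yet purged):
-- Most recent archived log still on disk
SELECT MAX(NEXT_TIME) FROM V$ARCHIVED_LOG WHERE DELETED = 'NO';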
If you encounter any issues, reach out to us via Slack, and we’ll gladly assist you!