
0. Required pre-work
To connect Nekt to a database, you first need to do some pre-work to ensure access is granted in a secure way.
Network Configuration (AWS RDS)
- Establish a peering connection between the Nekt VPC and your database VPC
- In your AWS Console, access the VPC service and go to Peering connections. Make sure you are logged in with the account that contains the database you want to connect.
- Select Create peering connection
- Set it up as follows:
- Give your connection a name (something like ‘Nekt <> Database’)
- Select the nekt-vpc ID as the requester (in the VPC ID field)
- Choose ‘Another account’ in the ‘Select another VPC to peer with’ section and inform the Account ID.
- To get the Account ID, go to the RDS service (you’ll find it by searching in the AWS Console).
- Click on DB Instances
- Select the desired database
- Copy the Account ID associated with this database.
- Inform the VPC ID (Accepter)
- In the desired database’s details, under the Connectivity and Security section, click on the VPC.
- Copy the VPC ID
- Click on Create peering connection. You’ll notice the status is ‘Pending acceptance’.
- Go to Peering connections again; your new peering connection should be listed, still pending acceptance.
- In the Actions menu, click on Accept Request and confirm it.
- Rename your peering connection to ‘Nekt <> Database’ to keep the naming pattern.
- Nekt VPC
- Access the peering connection you created. It should now have the status ‘Active’ and a value under Accepter CIDRs. Copy this value; it will be the Nekt VPC IP.
- In the VPC dashboard menu, go to Route Tables
- In every route table with ‘nekt’ in its name, follow these steps:
- Click on the Routes tab
- Click on Edit routes
- Click on Add route
- In the ‘Destination’ column, paste the Nekt VPC IP (the Accepter CIDRs value copied earlier)
- In the ‘Target’ column, choose ‘Peering Connection’ and then, in the field that opens, select the ‘Nekt <> Database’ option (the peering connection established between Nekt and your database)
- Keep the ‘Status’ and ‘Propagated’ columns at their defaults
- Save changes
- Database VPC
- Repeat the process done for the Nekt VPC route tables, but now use the Nekt VPC IP as the Destination.
- Search for RDS in your AWS Console and access it.
- Select your database and go to Connectivity & Security.
- Click on VPC security groups.
- Select your DB security group and go to the Inbound Rules tab
- Click on Edit inbound rules
- Add the following inbound rule to the security group:
- Type: Oracle-RDS
- Source: ‘Custom’, with the Nekt VPC IP as the value
- Add a description to identify it more easily, something like ‘Nekt’
- Save rules
Network Configuration (Non-AWS)
- Ask Nekt to create a fixed IP in your AWS infra.
- In your database provider, give access to the IP provided by Nekt.
Log-Based Replication Setup (Oracle LogMiner)
Log-based replication uses Oracle LogMiner to capture data changes (inserts, updates, deletes) in near real-time. This is the most efficient method for capturing changes without impacting source system performance.
Prerequisites
- Archive Log Mode: The database must be in ARCHIVELOG mode
- Supplemental Logging: Must be enabled for the tables you want to replicate
- User Permissions: The extraction user needs specific Oracle privileges
Standard Oracle Setup
Connect to your Oracle database as SYSDBA and execute the setup statements.
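A representative sketch, assuming the extraction user is named nekt_user and the replicated table is myschema.my_table (substitute your own names):

```sql
-- 1. Enable ARCHIVELOG mode (requires an instance restart)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- 2. Enable minimal supplemental logging database-wide,
--    plus primary-key logging for each table you plan to replicate
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER TABLE myschema.my_table ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- 3. Grant the extraction user the privileges LogMiner needs
GRANT CREATE SESSION TO nekt_user;
GRANT SELECT ANY TRANSACTION TO nekt_user;
GRANT LOGMINING TO nekt_user;              -- Oracle 12c+
GRANT EXECUTE ON DBMS_LOGMNR TO nekt_user;
GRANT SELECT ON V_$DATABASE TO nekt_user;
GRANT SELECT ON V_$ARCHIVED_LOG TO nekt_user;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO nekt_user;
```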
AWS RDS Oracle Setup
For AWS RDS Oracle instances you cannot run the statements above as SYSDBA, so use the RDS admin procedures instead. Additionally, ensure your RDS instance has a backup retention period greater than 0, since that is what enables archive logs on RDS.
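A sketch of the equivalent RDS setup, again with nekt_user as a placeholder user:

```sql
-- Enable database-wide supplemental logging (primary-key level)
EXEC rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'PRIMARY KEY');

-- Retain archived logs long enough for extraction runs to read them
EXEC rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);

-- Grant access to the SYS-owned objects LogMiner needs
EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$DATABASE', 'NEKT_USER', 'SELECT');
EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG', 'NEKT_USER', 'SELECT');
EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_CONTENTS', 'NEKT_USER', 'SELECT');
EXEC rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR', 'NEKT_USER', 'EXECUTE');

-- LOGMINING is an ordinary system privilege and can be granted directly
GRANT LOGMINING TO nekt_user;
```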
Oracle Multitenant (CDB/PDB) Setup
For Oracle multitenant environments with Pluggable Databases, run the setup from the root container.
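A sketch assuming a single PDB named mypdb (common users in a CDB must carry the C## prefix); connect to CDB$ROOT as SYSDBA and run:

```sql
-- Create a common user visible in all containers
CREATE USER c##nekt_user IDENTIFIED BY "your_password" CONTAINER=ALL;
GRANT CREATE SESSION TO c##nekt_user CONTAINER=ALL;
GRANT SET CONTAINER TO c##nekt_user CONTAINER=ALL;
GRANT LOGMINING TO c##nekt_user CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO c##nekt_user CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR TO c##nekt_user CONTAINER=ALL;

-- Supplemental logging is enabled at the CDB level...
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- ...while per-table logging is enabled inside each PDB
ALTER SESSION SET CONTAINER = mypdb;
ALTER TABLE myschema.my_table ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```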
Verification
Verify your LogMiner setup.
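A few quick checks, runnable as the extraction user once the grants above are in place:

```sql
SELECT log_mode FROM v$database;                   -- expect: ARCHIVELOG
SELECT supplemental_log_data_min FROM v$database;  -- expect: YES
-- Confirm archived logs are being produced and retained
SELECT COUNT(*) FROM v$archived_log WHERE first_time > SYSDATE - 1;
```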
1. Add your Oracle access
- Once you have done the pre-work defined in section 0, you can inform your database access details. In the Sources tab, click the “Add source” button at the top right of your screen. Then, select the Oracle option from the list of connectors.
- Click Next and you’ll be prompted to add your database access.
- Host (required): The hostname or IP address of your Oracle database server
- Port (required): The port for the Oracle connection (default: 1521)
- User (required): Database user for authentication
- Password (required): Password for authentication
- Service Name (required): Oracle service name for the connection (also referred to as the schema name)
- Pluggable Database Services (optional): List of Oracle PDB service names for multitenant (CDB/PDB) environments
- Thick Mode (optional, default: true): Enable Oracle thick mode for enhanced performance. Required for LogMiner operations
- Chunk Size (optional, default: 25000): Number of rows to fetch at a time. Reduce this if your row data is too large
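For reference, Host, Port, and Service Name combine into Oracle’s standard Easy Connect string (the hostname below is purely illustrative):

```
<host>:<port>/<service_name>
mydb.example123.us-east-1.rds.amazonaws.com:1521/ORCLPDB1
```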
Advanced Configuration
- SID: Alternative to Service Name for older Oracle configurations
- Filter Schemas: Array of schema names to include (if empty, all schemas are discovered)
- SSH Tunnel: Configuration for secure connections through a bastion server
- SSL/TLS: Enable encrypted connections with certificate configuration
- Once you are done configuring, click Next.
2. Select your Oracle DB streams
- The next step is letting us know which streams you want to bring. You can select entire groups of streams or only a subset of them.
Tip: You can find a stream more easily by typing its name.
- Click Next.
3. Configure your Oracle DB data streams
- Customize how you want your data to appear in your catalog. Select a name for each table (which will contain the fetched data) and the type of sync.
- Table name: we suggest a name, but feel free to customize it. You have the option to add a prefix and make this process faster!
- Sync Type: you can choose between INCREMENTAL, FULL_TABLE, and LOG_BASED:
- Incremental: Every time the extraction happens, we’ll get only the new data based on a replication key column. Good for append-only tables or when you want to keep historical records.
- Full Table: Every time the extraction happens, we’ll get the current state of the data. Good if you don’t want to have deleted data in your catalog or for small reference tables.
- Log Based: Uses Oracle LogMiner to capture data changes (inserts, updates, deletes) from the database transaction logs. This is the most efficient method for capturing all changes including deletes, with minimal impact on source database performance.
Log-based replication requires additional database setup. See the “Log-Based Replication Setup” section in the pre-work above.
- Click Next.
Replication Methods
Full Table Sync
Complete table extraction on every sync. Best for:
- Small reference/lookup tables
- Tables where you need only the current state
- Initial data loads
Incremental Sync
Extracts only new/modified rows based on a replication key (e.g., updated_at, id), as sketched after this list. Best for:
- Large tables with timestamp columns
- Append-only tables (logs, events)
- When you want to preserve historical data
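Conceptually, each incremental run issues a query along these lines (illustrative only; the table and column names are placeholders):

```sql
-- Fetch only rows changed since the bookmark saved by the previous run
SELECT *
  FROM myschema.my_table
 WHERE updated_at > :last_saved_bookmark
 ORDER BY updated_at;
```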
Log-Based Sync (Oracle LogMiner)
This is the recommended method for capturing all data changes, including deletes. Each change is captured as one of:
- INSERT: New records with all column values
- UPDATE: Modified records with new values
- DELETE: Removed records (marked with an _sdc_deleted_at timestamp)
Key benefits:
- Near real-time change data capture (CDC)
- Captures deletes (not possible with incremental sync)
- Minimal impact on source database performance
- Supports Oracle Multitenant (CDB/PDB) environments
- AWS RDS Oracle compatibility with automatic fallback to archived logs
Log-based streams include two metadata columns:
- _sdc_lsn: The Oracle System Change Number (SCN) when the change was committed
- _sdc_deleted_at: Timestamp when the record was deleted (null for inserts/updates)
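For context, this is roughly what a LogMiner session looks like under the hood. A minimal sketch for illustration only (file, schema, and table names are placeholders); Nekt manages these calls for you:

```sql
-- Register an archived log covering the SCN window of interest
EXEC DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/path/to/archived_log.arc', OPTIONS => DBMS_LOGMNR.NEW);

-- Start LogMiner, returning only committed changes and using
-- the online catalog as the dictionary
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
             + DBMS_LOGMNR.COMMITTED_DATA_ONLY
  );
END;
/

-- Read committed changes for the replicated table
SELECT scn, timestamp, operation, sql_redo
  FROM v$logmnr_contents
 WHERE seg_owner = 'MYSCHEMA'
   AND table_name = 'MY_TABLE';

-- Close the LogMiner session
EXEC DBMS_LOGMNR.END_LOGMNR;
```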
4. Configure your Oracle DB data source
- Describe your data source for easy identification within your organization. You can note things like what data it brings, which team it belongs to, etc.
- To define your Trigger, consider how often you want data to be extracted from this source. This decision usually depends on how frequently you need the new table data updated (every day, once a week, or only at specific times).
Check your new source!
- Click Done to finalize the setup. Once completed, you’ll receive confirmation that your new source is set up!
- You can view your new source on the Sources page. For its data to show up in your Catalog, you have to wait for the pipeline to run; you can monitor its execution and completion on the Sources page. If needed, trigger the pipeline manually by clicking the refresh icon. Once it has run, your new table will appear in the Catalog section.
Implementation Notes
Oracle Multitenant Support
For Oracle 12c+ multitenant architecture (CDB/PDB), configure the pdb_services option with your PDB service names. The tap will:
- Connect to each PDB to discover schemas and tables
- Properly switch containers during extraction
- Handle LogMiner operations across the multitenant environment
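To find the service names to supply in pdb_services, you can query the CDB (a quick sketch; run it from the root container):

```sql
-- List the service names registered for each PDB
SELECT name, pdb
  FROM v$services
 WHERE pdb IS NOT NULL;
```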
Performance Considerations
| Setting | Recommendation |
|---|---|
| Chunk Size | Reduce from 25,000 if memory issues occur with wide tables |
| Thick Mode | Keep enabled (true) for LogMiner and optimal performance |
| Filter Schemas | Specify schemas to reduce discovery time on large databases |
Troubleshooting
| Issue | Solution |
|---|---|
| LogMiner fails to start | Verify supplemental logging is enabled and user has required grants |
| No changes captured | Check archive logs exist and haven’t been purged |
| AWS RDS permission denied | Use rdsadmin.rdsadmin_util procedures for grants |
| SCN gaps in data | Normal behavior - LogMiner processes committed transactions only |
| Slow extraction | Enable thick mode, adjust chunk_size, filter to needed schemas |
If you encounter any issues, reach out to us via Slack, and we’ll gladly assist you!