Oracle is a multinational technology company that provides database management systems, cloud services, and enterprise software solutions. Its database platform is widely used for enterprise applications, data warehousing, and business intelligence, offering robust data management and analytics capabilities.
Documentation Index
Fetch the complete documentation index at: https://docs.nekt.com/llms.txt
Use this file to discover all available pages before exploring further.

Required pre-work
In order to connect Nekt to a database, you have to do some pre-work to ensure access is granted in a secure way. For FULL_TABLE and INCREMENTAL replication, the Oracle user only needs CONNECT plus SELECT privileges on the target schemas and tables.
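For FULL_TABLE and INCREMENTAL replication, those grants look roughly like this (a hedged sketch; nekt_user and the schema/table names are placeholders for your own):

```sql
-- Minimal privileges for FULL_TABLE / INCREMENTAL replication
GRANT CONNECT TO nekt_user;
GRANT SELECT ON your_schema.your_table TO nekt_user;
```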
For LOG_BASED replication, additional permissions and configurations are required.
Network Configuration (AWS RDS)
- Establish a peering connection between Nekt VPC and database VPC
- On your AWS Console, access the VPC service and go to Peering Connection. Make sure you are logged in with the account that contains the database you want to connect with.
- Select Create peering connection
- Set it up as requested
- Give your connection a name (something like ‘Nekt <> Database’)
- Select nekt-vpc ID as requester (in the VPC ID field)
- Choose ‘Another account’ in the ‘Select another VPC to peer with’ section and inform the Account ID.
- To get the Account ID, go to the RDS service (you’ll find it searching on the AWS Console).
- Click on DB Instances
- Select the desired database
- Copy the Account ID associated with this database.
- Inform the VPC ID (Accepter)
- On the desired database details, on the Connectivity and Security section, click on the VPC.
- Copy the VPC ID
- Click on Create peering connection. You’ll notice the status is ‘Pending acceptance’.
- Go to Peering Connection again and your new peering connection should be listed, yet still pending acceptance.
- On the Actions menu, click on Accept Request and confirm it.
- Rename your Peering Connection to ‘Nekt <> Database’, to keep the naming pattern.
- Nekt VPC
- Access the created Peering Connection, which should now have the status ‘Active’ and a value under Accepter CIDRs. Copy this value; it will be the Nekt VPC IP.
- In the VPC dashboard menu, go to Route Tables
- In every route table with ‘nekt’ in its name, follow these steps:
- Click on the Routes tab
- Click on Edit routes
- Click on Add route
- On the ‘Destination’ column, paste the Nekt VPC IP (Accepter CIDRs previously copied)
- On the ‘Target’ column, choose ‘Peering Connection’ and select the Nekt <> Database option (the peering connection established between Nekt and your database) in the field that will open
- Keep ‘Status’ and ‘Propagated’ columns as default
- Save changes
- Database VPC
- Repeat the process done for Nekt VPC, but now use the Nekt VPC IP as Destination
- Search for RDS on your AWS Console and access it.
- Select your database and go to Connectivity & Security.
- Click on VPC security groups.
- Select your DB security group and go to the Inbound Rules tab
- Click on Edit inbound rules
- Add the following inbound rule to the security group:
- Type: Oracle-RDS
- Source: Custom with the Nekt VPC IP as value
- Add a description to better identify it. Something like ‘Nekt’
- Save rule
Network Configuration (Non-AWS)
- Ask Nekt to create a fixed IP in your AWS infra.
- In your database provider, give access to the IP provided by Nekt.
Log-Based Replication Setup (Oracle LogMiner)
Log-based replication uses Oracle LogMiner to capture data changes (inserts, updates, deletes) in near real time. This is the most efficient method for capturing changes without impacting source system performance. Additionally, ensure your RDS instance has a backup retention period greater than 0 to enable archive logs.
Prerequisites
- Archive Log Mode: The database must be in ARCHIVELOG mode
- Supplemental Logging: Must be enabled for the tables you want to replicate
- User Permissions: The extraction user needs specific Oracle privileges depending on your database version
Standard Oracle Setup
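The statements run as SYSDBA typically cover ARCHIVELOG mode, supplemental logging, and user grants. A hedged sketch (nekt_user and the object names are placeholders, and exact grants vary by Oracle version):

```sql
-- 1. Put the database in ARCHIVELOG mode (requires a restart)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- 2. Enable minimal supplemental logging database-wide...
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- ...and, per replicated table, primary-key logging
ALTER TABLE your_schema.your_table ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- 3. Grant the extraction user the required privileges (12c+)
GRANT CREATE SESSION TO nekt_user;
GRANT LOGMINING TO nekt_user;
GRANT SELECT ANY DICTIONARY TO nekt_user;
GRANT EXECUTE ON DBMS_LOGMNR TO nekt_user;
GRANT SELECT ON your_schema.your_table TO nekt_user;
```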
Connect to your Oracle database as SYSDBA to execute the required statements.
AWS RDS Oracle Setup
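On RDS, the equivalent setup goes through Amazon's rdsadmin package. A hedged sketch (procedure names come from AWS's RDS for Oracle documentation; NEKT_USER is a placeholder — verify against your engine version):

```sql
-- Retain archived redo logs long enough for LogMiner to read them (hours)
EXEC rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);

-- Enable minimal supplemental logging at the database level
EXEC rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD');

-- Grant SELECT on the SYS-owned V$ views to the extraction user
EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$DATABASE', 'NEKT_USER', 'SELECT');
EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG', 'NEKT_USER', 'SELECT');
EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$LOG', 'NEKT_USER', 'SELECT');
EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGFILE', 'NEKT_USER', 'SELECT');
EXEC rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_CONTENTS', 'NEKT_USER', 'SELECT');
EXEC rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR', 'NEKT_USER', 'EXECUTE');
GRANT LOGMINING TO nekt_user;
```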
For AWS RDS Oracle instances, where direct SYSDBA access is not available, use the RDS admin procedures.
Oracle Multitenant (CDB/PDB) Setup
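In multitenant environments, a sketch of the setup might look like this (hedged: c##nekt, your_pdb, and the object names are placeholders, and grant requirements vary by version):

```sql
-- LogMiner typically runs at the CDB root, so create a common (C##) user
CREATE USER c##nekt IDENTIFIED BY your_password CONTAINER = ALL;
GRANT CREATE SESSION TO c##nekt CONTAINER = ALL;
GRANT SET CONTAINER TO c##nekt CONTAINER = ALL;
GRANT LOGMINING TO c##nekt CONTAINER = ALL;
GRANT SELECT ANY DICTIONARY TO c##nekt CONTAINER = ALL;
GRANT EXECUTE ON DBMS_LOGMNR TO c##nekt CONTAINER = ALL;

-- Grant read access on the target tables inside each PDB
ALTER SESSION SET CONTAINER = your_pdb;
GRANT SELECT ON your_schema.your_table TO c##nekt;
```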
Oracle multitenant environments with Pluggable Databases require a common user with privileges across containers, since LogMiner typically runs at the CDB root.
Verification
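A few dictionary queries confirm the prerequisites are in place (a hedged sketch using standard Oracle views):

```sql
-- Should return ARCHIVELOG
SELECT log_mode FROM v$database;

-- Should return YES (minimal supplemental logging enabled)
SELECT supplemental_log_data_min FROM v$database;

-- Archived logs should exist and be recent (e.g., within the last day)
SELECT COUNT(*) FROM v$archived_log WHERE first_time > SYSDATE - 1;
```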
Verify your LogMiner setup with a few dictionary queries before running the pipeline.
Configuring Oracle as a Source
In the Sources tab, click on the “Add source” button located on the top right of your screen. Then, select the Oracle option from the list of connectors. Click Next and you’ll be prompted to add your access.
1. Add account access
Once you have done the pre-work defined above, you can inform your database accesses. The following configurations are available:
- Host (required): The hostname or IP address of your Oracle database server
- Port (required): The port for the Oracle connection (default: 1521)
- User (required): Database user for authentication
- Password (required): Password for authentication
- Service Name (required): Oracle service name for the connection (also referred to as schema name)
- Pluggable Database Services (optional): List of Oracle PDB service names for multitenant (CDB/PDB) environments
- Thick Mode (optional, default: true): Enable Oracle thick mode for enhanced performance. Required for LogMiner operations
- Chunk Size (optional, default: 25000): Number of rows to fetch at a time. Reduce if your row data is too large
- SID: Alternative to Service Name for older Oracle configurations
- Filter Schemas: Array of schema names to include (if empty, all schemas are discovered)
- Use batch query: Enable keyset pagination with retry logic. Breaks large table extraction into smaller batches ordered by primary key, each with a fresh connection.
- Invalid date handling: How to handle Oracle date/timestamp values outside Python’s representable year range (1–9999). Options are coerce (coerce to the nearest valid boundary), null (convert to null), or error (raise an error and stop the pipeline)
- SSH Tunnel: Configuration for secure connections through a bastion server
- SSL/TLS: Enable encrypted connections with certificate configuration
- Connect timeout: Oracle Net CONNECT_TIMEOUT in seconds (listener-level handshake budget).
- Transport connect timeout: Oracle Net TRANSPORT_CONNECT_TIMEOUT in seconds (TCP-level connect budget).
- TNS retry count: Oracle Net RETRY_COUNT — driver-level retries on transient TNS errors before surfacing them.
- TNS retry delay: Oracle Net RETRY_DELAY in seconds between driver-level connect retries.
- Keepalive interval (minutes): Oracle Net EXPIRE_TIME in minutes — TCP keepalive interval to detect dead connections in long runs. Set to 0 to disable.
- Discovery retry max: Maximum attempts for discovery-phase operations on transient Oracle errors.
- Discovery retry delay: Initial delay in seconds between discovery retries. Uses exponential backoff (capped at 60s).
- Connection pool size: SQLAlchemy QueuePool pool_size. Raise for high-throughput workloads.
- Connection pool max overflow: SQLAlchemy QueuePool max_overflow.
- Connection pool recycle (seconds): Seconds after which a pooled connection is recycled.
- Pool pre-ping: Whether SQLAlchemy should pre-ping pooled connections before use.
2. Select streams
The next step is letting us know which streams you want to bring. You can select entire groups of streams or only a subset of them. Tip: a stream can be found more easily by typing its name. Click Next.
3. Configure data streams
Customize how you want your data to appear in your catalog. Select a name for each table (which will contain the fetched data) and the type of sync.
- Table name: we suggest a name, but feel free to customize it. You have the option to add a prefix and make this process faster!
- Sync Type: you can choose between INCREMENTAL, FULL_TABLE, and LOG_BASED:
- Incremental: Every time the extraction happens, we’ll get only the new data based on a replication key column. Good for append-only tables or when you want to keep historical records.
- Full Table: Every time the extraction happens, we’ll get the current state of the data. Good if you don’t want to have deleted data in your catalog or for small reference tables.
- Log Based: Uses Oracle LogMiner to capture data changes (inserts, updates, deletes) from the database transaction logs. This is the most efficient method for capturing all changes including deletes, with minimal impact on source database performance.
Log-based replication requires additional database setup. See the “Log-Based Replication Setup” section in the pre-work above.
- INSERT: New records with all column values
- UPDATE: Modified records with new values
- DELETE: Removed records (marked with a _sdc_deleted_at timestamp)
- Near real-time change data capture (CDC)
- Captures deletes (not possible with incremental sync)
- Minimal impact on source database performance
- Supports both traditional (non-CDB) and Oracle Multitenant (CDB/PDB) environments
- AWS RDS Oracle compatibility with automatic fallback to archived logs
- _sdc_lsn: The Oracle System Change Number (SCN) when the change was committed
- _sdc_deleted_at: Timestamp when the record was deleted (null for inserts/updates)
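Downstream, these metadata columns let you filter out soft deletes. A hedged warehouse-SQL sketch (my_table is a placeholder for a LOG_BASED table in your catalog):

```sql
-- Current, non-deleted rows from a LOG_BASED table
SELECT *
FROM my_table
WHERE _sdc_deleted_at IS NULL;
```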
4. Configure data source
Describe your data source for easy identification within your organization. You can inform things like what data it brings, which team it belongs to, etc. To define your Trigger, consider how often you want data to be extracted from this source. This decision usually depends on how frequently you need the new table data updated (every day, once a week, or only at specific times). Click Done to finalize the setup.
5. Check your new source
You can view your new source on the Sources page. Now, for you to be able to see it on your Catalog, you have to wait for the pipeline to run. You can monitor it on the Sources page to see its execution and completion. If needed, manually trigger the pipeline by clicking on the refresh icon. Once executed, your new table will appear in the Catalog section.
Streams and Fields
Because Oracle is a relational database, the streams and fields correspond directly to the tables, views, and columns available in the schemas you have granted access to. During the setup process, Nekt will automatically discover all accessible tables and views. You will be able to select specifically which ones you want to sync into your catalog. The data types from Oracle (e.g., VARCHAR2, NUMBER, DATE, TIMESTAMP) will be automatically mapped to standard Nekt data types during the extraction. Binary fields (such as BLOB, RAW, and LONG RAW) are automatically converted to uppercase hex strings, while character large objects (CLOB and NCLOB) are fetched directly as standard strings.
Implementation Notes
Oracle Multitenant Support
The connector supports both traditional (non-CDB) and multitenant (CDB/PDB) Oracle environments. For Oracle 12c+ multitenant architecture (CDB/PDB), configure the pdb_services option with your PDB service names. The tap will:
- Connect to each PDB to discover schemas and tables
- Properly switch containers during extraction
- Handle LogMiner operations across the multitenant environment seamlessly, safely fetching System Change Numbers (SCN) regardless of the architecture
- Automatically detect CDB/non-CDB environments to ensure compatibility
Column Name Sanitization & Preservation
To ensure compatibility with data warehouses like BigQuery, column names and primary keys are automatically sanitized during extraction. Any characters that are not letters, numbers, or underscores (such as $) are replaced with underscores. Additionally, if a column name starts with a number, an underscore is prepended. Original casing is preserved so that extracted data matches the source.
The connector also preserves original, full-length column names directly from the table schema. This avoids historical Oracle database dialect limitations where long column names were truncated to 30 characters and appended with disambiguation suffixes in the extracted records.
Connection Resilience
The connector includes built-in resilience against transient network and database errors (e.g.,ORA-12170, ORA-12547, ORA-03113, DPY-4011). It automatically applies driver-level connection retries, SQLAlchemy pool pre-pinging, and exponential backoff during discovery. You can tune these behaviors using the Advanced Configuration options (such as timeouts, retry counts, and pool settings) for high-latency or unstable network environments.
Performance Considerations
| Setting | Recommendation |
|---|---|
| Chunk Size | Reduce from 25,000 if memory issues occur with wide tables |
| Use Batch Query | Enable for very large tables to use keyset pagination and avoid connection timeouts |
| Thick Mode | Keep enabled (true) for LogMiner and optimal performance |
| Filter Schemas | Specify schemas to reduce discovery time on large databases |
| Stream Selection | During extraction, the connector automatically limits schema discovery to only the selected streams, significantly reducing discovery overhead on large databases. |
Troubleshooting
| Issue | Solution |
|---|---|
| LogMiner fails to start | Verify supplemental logging is enabled and user has required grants |
| ORA-00942: table or view does not exist | Missing dictionary view grants. Ensure the extraction user has SELECT on V_$ARCHIVED_LOG, V_$LOG, V_$LOGFILE, V_$DATABASE, and V_$LOGMNR_CONTENTS |
| ORA-01031: insufficient privileges | Missing permissions to read mined rows after LogMiner starts. Grant LOGMINING (12c+) or EXECUTE_CATALOG_ROLE + SELECT ANY TRANSACTION (11g) |
| No changes captured | Check archive logs exist and haven’t been purged |
| AWS RDS permission denied | Use rdsadmin.rdsadmin_util procedures for grants |
| SCN gaps in data | Normal behavior - LogMiner processes committed transactions only |
| Slow extraction | Enable thick mode, adjust chunk_size, filter to needed schemas, or enable Use Batch Query |
| Dates arriving in unexpected format | The connector attempts to set Oracle NLS variables (NLS_LANG, NLS_DATE_FORMAT, etc.) automatically. If initialization fails, dates might fall back to strings. Check your Instant Client installation. |
| Transient connection errors (ORA-12170, DPY-4011) | Normal in volatile network environments. The connector will automatically retry. If issues persist, consider increasing TNS retry parameters in the advanced settings. |
If you encounter any issues, reach out to us via Slack, and we’ll gladly assist you!
Skills for agents
Download Oracle skills file
Oracle connector documentation as plain markdown, for use in AI agent contexts.