Nov 5, 2025
By Vijay Ganapathy, Datavail
Migrating legacy Oracle databases to the cloud is a critical step for organizations aiming to modernize their infrastructure and enhance operational efficiency. In this blog post, we’ll walk through a practical approach to moving an on-premises Oracle 12c RAC (Enterprise Edition) database to an AWS RDS Oracle 19c Standard Edition 2 environment using Oracle Data Pump—without using S3 integration.
Assumptions:
- The applications are moving simultaneously or have already been moved to AWS.
- You know the SYS password (or that of another DBA user) to run expdp.
- You have sufficient space for the export dump files on a local mount.
- Connectivity exists between the current database, the AWS Linux jump server, and AWS RDS.
- It is a homogeneous (Oracle-to-Oracle) migration.
NOTE:
- Perform multiple mocks to estimate the downtime and document the steps in order.
- Migrate the test/dev/training databases BEFORE PROD and confirm the applications run without issues for an agreed period of time.
- Do end-to-end testing of the applications against the newer Oracle database version to validate that they work in the 19c environment.
Advantages of AWS RDS Oracle Database 19c SE2 Over On-premises RAC Infrastructure
- Reduced licensing costs
- Licensing can be included (the License Included model), if the application is internal to the customer
- Huge potential performance gain if you’re coming from older hardware
- Easier database upgrade and patching processes
- Easier changes to memory, IOPS, storage size, etc. whenever required
- Multi-AZ failover provides some of the resiliency you get with RAC
Advantages of Oracle Data Pump
- Reliable and well-proven for Oracle-to-Oracle migrations.
- Can move data which AWS Database Migration Service (DMS) cannot move.
Disadvantages of Oracle Data Pump
- Downtime required.
Migration Preparation
On Target:
Create an AWS RDS for Oracle instance with the:
- Required size
- Correct VPC/Subnet/Security Group
- Multi-AZ
- Correct database name
- NOT publicly accessible
- Correct DB Character set
- Enable Encryption
- Oracle Edition SE2 License Included (LI)
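For reference, an instance with these settings could be created through the AWS CLI along these lines (a sketch only; the identifier, instance class, storage size, character set, and network values are illustrative placeholders, not values from this migration):
# Sketch: create the target RDS for Oracle SE2 instance (replace all <...> placeholders)
aws rds create-db-instance \
  --db-instance-identifier <instance_identifier> \
  --engine oracle-se2 \
  --engine-version <19c_engine_version> \
  --license-model license-included \
  --db-instance-class <instance_class> \
  --allocated-storage <size_in_gb> \
  --storage-encrypted \
  --multi-az \
  --no-publicly-accessible \
  --db-name <db_name> \
  --character-set-name <character_set> \
  --master-username <admin_user> \
  --master-user-password <admin_password> \
  --db-subnet-group-name <subnet_group> \
  --vpc-security-group-ids <security_group_id>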
Database Prep:
Create the required:
- Users
- Tablespaces
- Non-partitioned tables in place of the partitioned tables in the source (partitioning is an Enterprise Edition option and is not available in SE2)
- If COMPRESSION is used on BLOB columns in the source, make sure you create those tables accordingly on the target
- Roles
- PERFSTAT user for STATSPACK (enabled through the STATSPACK option in the RDS option group)
- Required grants for users
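A minimal sketch of this preparation on the target, using illustrative names (APP_DATA, APP_USER, APP_ROLE, and a hypothetical SALES table that is partitioned in the source) rather than anything specific to this migration:
-- Tablespace (RDS uses Oracle Managed Files, so no file name is specified)
create tablespace app_data datafile size 10g autoextend on;
-- User and role
create user app_user identified by "<password>" default tablespace app_data quota unlimited on app_data;
create role app_role;
grant create session, create table, create sequence, create procedure, create view to app_role;
grant app_role to app_user;
-- Pre-create a non-partitioned copy of a table that is partitioned in the source
create table app_user.sales (
  sale_id   number primary key,
  sale_date date,
  amount    number
) tablespace app_data;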
Network prep:
- Whitelist the new AWS RDS Oracle instance's network so that it can be reached from the on-premises database.
- Create a Linux jump server to access the RDS instance and install the Oracle client on it so it can be used to run the import. Add the TNS entry for the AWS RDS Oracle instance as PRODSTAGE and test the connection.
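The tnsnames.ora entry on the jump server (and later on the source) would look roughly like this, with the RDS endpoint and database SID as placeholders:
PRODSTAGE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <rds-endpoint>.<region>.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA = (SID = <db_name>))
  )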
On Source:
- Add the TNS entry for the new AWS RDS as PRODSTAGE (the same entry shown above).
- Test it using tnsping PRODSTAGE.
- Database Prep:
- Create a public database link to the new AWS RDS:
create public database link to_awsrds connect to <username> identified by <password> using 'PRODSTAGE';
- Test the connection (a quick check follows the steps below).
- Create a directory for the export files.
Steps:
create directory MIGTOAWS as '<path_to_export_dump_location>';
grant read, write on directory MIGTOAWS to sys;
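A quick way to verify both the link and the directory before the downtime window (plain sanity checks, not specific to this migration):
select sysdate from dual@to_awsrds;
select directory_name, directory_path from dba_directories where directory_name = 'MIGTOAWS';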
Downtime:
Here’s what you need to do once the downtime is scheduled and after confirming the apps are down.
On Source Server/Database:
- Make sure there are no applications connected to the database.
HINT: Double-check gv$session for any remaining application sessions.
- Stop the listener.
- Note down the database object counts and table row counts.
- Note down the invalid objects for the schemas being exported (example queries follow this list).
- Set the environment with the Source database name
. oraenv
- Kick off the export dump in background and use the PARALLEL option for the required schemas.
nohup expdp <username>/<password> directory=MIGTOAWS dumpfile=expdp_mig_U%U.dmp filesize=300GB logfile=expdplog_mig.log schemas=<comma-separated list of schemas> parallel=16 &
NOTE: Increase the IOPS to 40,000, or to a value determined from your mock testing, to speed up processing.
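The object, row, and invalid-object counts mentioned above can be captured with queries along these lines (run on the source before the export; the schema names are placeholders):
-- Object counts per schema and type
select owner, object_type, count(*) from dba_objects
where owner in ('SCHEMA1','SCHEMA2') group by owner, object_type order by owner, object_type;
-- Approximate row counts (NUM_ROWS reflects the last statistics gathering)
select owner, table_name, num_rows from dba_tables where owner in ('SCHEMA1','SCHEMA2');
-- Invalid objects
select owner, object_name, object_type from dba_objects
where status = 'INVALID' and owner in ('SCHEMA1','SCHEMA2');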
Here’s what to do after the export is completed.
- Make sure provisioned IOPS are enabled on the AWS RDS Oracle instance.
- Transfer the dump files directly to the AWS RDS instance over the database link using DBMS_FILE_TRANSFER. The main advantage of this method is that it is faster than moving the dump files to S3 and then into the database.
HINT: Create multiple .sql scripts, each transferring a subset of the dump files, so they can be run in parallel sessions.
EXAMPLE:
set echo on term on feedback on time on
set timing on;
BEGIN
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'MIGTOAWS',
source_file_name => '<dump_file_name>',
destination_directory_object => 'DATA_PUMP_DIR',
destination_file_name => '<dump_file_name>',
destination_database => 'TO_AWSRDS'
);
END;
/
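Once the transfers finish, the files in DATA_PUMP_DIR can be listed from the jump server with the rdsadmin file utilities available on RDS for Oracle, for example:
select filename, filesize, mtime
from table(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'))
order by mtime;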
Here’s what to do once the dump files are transferred.
On the Linux Jump server:
- Kick off the import
- nohup impdp username/password@PRODSTAGE directory=DATA_PUMP_DIR dumpfile=expdp_mig_U%U.dmp logfile=impdpprod_mig.log TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y transform=oid:n PARTITION_OPTIONS=MERGE TABLE_EXISTS_ACTION=APPEND schemas=<comma-separated list of schemas> &
- Once the import is completed, check the log to make sure everything was imported. The earlier mock runs will help in determining the next steps and the resolutions for any errors.
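While the import is running, its progress can also be watched from another session using the standard Data Pump views, for example:
select owner_name, job_name, operation, state, degree from dba_datapump_jobs;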
Validate the:
- Database components
- Database schema objects
- Table and row counts
- Invalid objects
- Sequence values, if any mismatches were found in the mock testing
- And any custom implementation specific to your application
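Because the database link from the source still exists at this point, a quick cross-check of object counts can be run from the source (schema names are placeholders; expect differences for partition-related object types, since the target tables are non-partitioned):
select owner, object_type, count(*) from dba_objects
where owner in ('SCHEMA1','SCHEMA2') group by owner, object_type
minus
select owner, object_type, count(*) from dba_objects@to_awsrds
where owner in ('SCHEMA1','SCHEMA2') group by owner, object_type;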
Once the validation is completed
From a database perspective:
- Create the database backup strategy
- Add Monitoring/alarm in AWS CloudWatch with Notification and Rules
- Statspack scheduling for reports
- Gather optimizer statistics (see the sketch after this list)
- Point the applications to the new database
- If required, add your custom crontab scripts as AWS Lambda functions.
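For the statistics gathering step, one simple approach is schema-level optimizer statistics right after the import (the schema name is a placeholder; adjust DEGREE to the instance size):
exec dbms_stats.gather_schema_stats(ownname => 'APP_USER', degree => 4, cascade => true);
exec dbms_stats.gather_dictionary_stats;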
To dive deeper into best practices and discover more expert insights, be sure to visit Datavail.com.