Oracle to Aurora PostgreSQL: What We Learned After 12 Enterprise Migrations
Why Enterprises Are Moving
Oracle licensing is the single largest line item in most enterprise database budgets. A 4-core Enterprise Edition license with Partitioning, Diagnostics Pack, and Tuning Pack runs $190K+ in license fees alone, before annual support at 22%. A two-node RAC cluster with those options easily exceeds $500K in Year 1 and $100K+ in ongoing annual support. Multiply that across 8-12 Oracle databases in a typical financial services shop, and you're looking at $2-4M in annual Oracle spend.
Aurora PostgreSQL eliminates per-core licensing entirely. You pay for compute (instance hours), storage (per GB-month), and I/O (per million requests). A db.r6g.2xlarge instance — 8 vCPUs, 64 GB RAM — costs roughly $1.15/hour on-demand, or $0.72/hour with a 1-year reserved instance. That's approximately $6,300/year for compute alone. Add storage and I/O for a 2 TB database with moderate throughput, and total annual cost lands between $15K and $30K. That's a 60-80% reduction from the equivalent Oracle deployment.
But licensing isn't the only driver. Cloud-first mandates from CTO offices are pushing everything off bare metal. Aurora's managed operations — automated patching, continuous backups, storage auto-scaling, zero-downtime minor version upgrades — remove 60-70% of the operational DBA workload. And Aurora's read replica architecture (up to 15 replicas with sub-20ms replication lag) solves read-scaling problems that required expensive Oracle RAC or Active Data Guard configurations.
The 5 Factors That Determine Migration Complexity
After 12 migrations, we can predict the complexity and timeline of an Oracle-to-Aurora migration within the first week of assessment. It comes down to five factors:
1. Stored Procedure Volume and Complexity
PL/SQL packages are the single biggest migration effort driver. A database with 50 simple CRUD procedures migrates in weeks. A database with 200 PL/SQL packages containing cursor loops, bulk collect operations, autonomous transactions, and DBMS_SCHEDULER jobs takes months. AWS SCT converts roughly 60-70% of PL/SQL to PL/pgSQL automatically. The remaining 30-40% requires manual rewriting by someone who understands both languages at a procedural level.
2. Oracle-Specific SQL Constructs
Certain Oracle SQL features have no direct PostgreSQL equivalent. CONNECT BY hierarchical queries, the MODEL clause, MERGE with complex conditions, FLASHBACK queries, and Oracle's non-ANSI outer join syntax (+) all require manual conversion. The presence of any of these in application SQL adds 2-4 weeks to the migration per construct type.
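The (+) syntax is the most mechanical of these conversions. A minimal sketch, assuming hypothetical customers and orders tables:

```sql
-- Oracle non-ANSI outer join:
--   SELECT c.name, o.total
--   FROM   customers c, orders o
--   WHERE  c.id = o.customer_id(+);

-- ANSI equivalent, which PostgreSQL (and Oracle) accepts:
SELECT c.name, o.total
FROM   customers c
LEFT JOIN orders o ON c.id = o.customer_id;
```

The conversion gets harder when (+) appears on multiple predicates of the same join or mixes with inner-join conditions; those cases need careful review rather than a mechanical rewrite.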
3. Partitioning Strategy
Oracle's partitioning options (range, list, hash, composite, interval, reference) are more mature than PostgreSQL's native partitioning. PostgreSQL 14+ handles range and list partitioning well. Hash partitioning works but with different syntax. Composite partitioning requires nested partition definitions. Interval partitioning — where Oracle auto-creates partitions as data arrives — has no native PostgreSQL equivalent and requires either a trigger-based solution or the pg_partman extension. If your Oracle database uses partitioning heavily, this is a 3-6 week effort.
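One common replacement for interval partitioning is native range partitioning plus pg_partman to pre-create partitions on a schedule. A hedged sketch, assuming pg_partman 5.x (the create_parent signature differs in 4.x, and the table and column names here are hypothetical):

```sql
-- Native declarative range partitioning (PostgreSQL 11+).
CREATE TABLE events (
  event_time timestamptz NOT NULL,
  payload    jsonb
) PARTITION BY RANGE (event_time);

-- pg_partman creates partitions ahead of the data,
-- approximating Oracle's interval partitioning.
CREATE EXTENSION pg_partman;
SELECT partman.create_parent(
  p_parent_table := 'public.events',
  p_control      := 'event_time',
  p_interval     := '1 month'
);
```

pg_partman's maintenance job (run via pg_cron or an external scheduler) then keeps a configurable number of future partitions pre-created.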
4. RAC Dependencies
Applications that depend on Oracle RAC features — transparent application failover (TAF), distributed transactions across RAC nodes, or instance-specific services — need architectural changes. Aurora doesn't have a RAC equivalent. Its high availability model is different: single writer instance with automatic failover to a read replica (failover time: 30-60 seconds with cluster cache management). Applications that assume sub-second RAC failover need testing and potentially connection retry logic.
5. Application Coupling
The tightest Oracle coupling we've seen: application code that uses OCI (Oracle Call Interface) directly, embeds Oracle-specific hints in every query, relies on Oracle sequences with CACHE and ORDER semantics, or uses Oracle Advanced Queuing (AQ) for inter-process messaging. Loosely coupled applications that use standard JDBC/ODBC with ANSI SQL migrate with minimal application changes. Tightly coupled applications sometimes require more effort on the application side than the database side.
Schema Conversion: What SCT Handles vs. Manual Work
AWS Schema Conversion Tool (SCT) is the starting point for every migration. It analyzes your Oracle schema and generates PostgreSQL DDL. For tables, indexes, views, and simple procedures, SCT does 80-90% of the work. For complex PL/SQL, it generates partially converted code with action items flagged for manual intervention.
Here's what requires manual conversion, consistently across all 12 migrations:
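For example, CONNECT BY queries convert to recursive CTEs. A minimal sketch, assuming a hypothetical employees table with id, name, and manager_id columns:

```sql
-- Oracle hierarchical query:
--   SELECT id, name, LEVEL
--   FROM   employees
--   START WITH manager_id IS NULL
--   CONNECT BY PRIOR id = manager_id;

-- PostgreSQL recursive CTE equivalent:
WITH RECURSIVE org AS (
  SELECT id, name, 1 AS depth          -- anchor member: START WITH
  FROM   employees
  WHERE  manager_id IS NULL
  UNION ALL
  SELECT e.id, e.name, o.depth + 1     -- recursive member: CONNECT BY PRIOR
  FROM   employees e
  JOIN   org o ON e.manager_id = o.id
)
SELECT id, name, depth FROM org;
```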
The recursive CTE approach is functionally equivalent to CONNECT BY, but syntactically different enough that automated conversion fails on complex hierarchical queries — especially those with CONNECT_BY_ISCYCLE, CONNECT_BY_ISLEAF, or multiple START WITH conditions. Each one needs a developer who understands both Oracle's hierarchical query engine and PostgreSQL's recursive CTE execution model.
Other common manual conversions:
- Oracle sequences with CACHE + ORDER: PostgreSQL sequences support CACHE but not ORDER. For applications that depend on strictly ordered sequence values across concurrent sessions, you need a serialized sequence function or a redesign.
- DBMS_OUTPUT / DBMS_LOB / UTL_FILE: Oracle's built-in packages have no PostgreSQL equivalent. Replacement depends on context — RAISE NOTICE for debug output, PostgreSQL large object functions for LOB handling, pg_read_file and the adminpack extension's pg_file_write for file I/O (both with restricted permissions).
- NVL vs COALESCE: Simple but pervasive. NVL evaluates both arguments; COALESCE short-circuits. Functionally equivalent in most cases, but edge cases exist with side-effecting function arguments.
- DECODE vs CASE: Oracle's DECODE function maps directly to CASE WHEN in PostgreSQL. SCT handles this automatically, but complex nested DECODE statements sometimes produce incorrect CASE logic that needs manual review.
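One DECODE-to-CASE pitfall deserves a concrete example: DECODE treats two NULLs as equal, while the simple CASE form compares with = and never matches NULL. A minimal sketch (status and orders are hypothetical names):

```sql
-- Oracle:
--   DECODE(status, NULL, 'missing', 'present')
-- returns 'missing' when status IS NULL, because DECODE
-- considers NULL equal to NULL.

-- A naive simple-CASE translation silently changes behavior:
--   CASE status WHEN NULL THEN 'missing' ELSE 'present' END
-- always returns 'present', since status = NULL is never true.

-- Correct translation: searched CASE with IS NULL.
SELECT CASE
         WHEN status IS NULL THEN 'missing'
         ELSE 'present'
       END AS status_label
FROM   orders;
```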
Performance: What's Faster and What's Slower
Faster on Aurora
Read-heavy workloads benefit enormously from Aurora's architecture. Aurora's storage layer — a distributed, fault-tolerant, self-healing storage system shared across the writer and all read replicas — delivers consistent read I/O performance regardless of database size. Read replicas share the same storage volume (no replication lag from WAL shipping), so reads scale horizontally by adding replicas. Workloads that hit Oracle Data Guard standby for read offloading typically see 20-40% better read latency on Aurora replicas.
Sequential scan performance on large tables is also consistently faster on Aurora. Aurora's storage engine uses a log-structured model that optimizes sequential reads. Analytical queries that scan multi-billion-row tables (reporting, ETL extraction) often run 15-30% faster without any query tuning.
Slower on Aurora
Complex PL/pgSQL procedures with heavy cursor-based processing run 10-30% slower than equivalent PL/SQL on Oracle. Oracle's PL/SQL engine is deeply integrated with the SQL engine — context switches between PL/SQL and SQL are minimized. PostgreSQL's PL/pgSQL has more overhead per SQL statement executed within a function. For procedures that execute thousands of individual SQL statements in a loop, the overhead accumulates.
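The pattern looks like this in a hedged sketch (orders and staging_orders are hypothetical names), along with the standard set-based mitigation:

```sql
-- Row-by-row: every UPDATE in the loop is a separate SQL
-- statement, each paying the PL/pgSQL-to-executor overhead.
DO $$
DECLARE
  r RECORD;
BEGIN
  FOR r IN SELECT id, amount FROM staging_orders LOOP
    UPDATE orders SET amount = r.amount WHERE id = r.id;
  END LOOP;
END $$;

-- Set-based rewrite: one statement, the overhead paid once.
UPDATE orders o
SET    amount = s.amount
FROM   staging_orders s
WHERE  o.id = s.id;
```

Not every cursor loop can be collapsed this way, but the ones that can usually recover the performance gap and then some.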
Materialized view refresh on PostgreSQL is less optimized than Oracle's. Oracle supports fast refresh (incremental, using materialized view logs). PostgreSQL's REFRESH MATERIALIZED VIEW does a full rebuild by default. REFRESH MATERIALIZED VIEW CONCURRENTLY avoids locking but still rebuilds completely. For large materialized views refreshed frequently, this is a noticeable regression that requires application-level workarounds (incremental tables, batch update logic).
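Note that CONCURRENTLY has a prerequisite that trips up converted schemas: the materialized view must have at least one unique index. A minimal sketch with hypothetical names:

```sql
CREATE MATERIALIZED VIEW daily_sales AS
SELECT sale_date, sum(amount) AS total
FROM   orders
GROUP  BY sale_date;

-- Required before CONCURRENTLY will work.
CREATE UNIQUE INDEX daily_sales_ux ON daily_sales (sale_date);

-- Still a full rebuild under the hood, but readers are not blocked.
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_sales;
```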
Data Migration: Three Patterns
AWS DMS for Online Migration
AWS Database Migration Service handles the bulk of data movement. It performs a full load (initial copy of all tables) followed by continuous replication, using Oracle LogMiner or Binary Reader to capture changes. DMS supports LOB columns, partitioned tables, and most Oracle data types. The critical limitation: DMS migrates data, not schema objects. Triggers, procedures, sequences, and grants must be migrated separately (via SCT or manually).
ora2pg for Offline Migration
For smaller databases (under 500 GB) with an acceptable maintenance window, the offline approach is simpler: export from Oracle using ora2pg or a custom extraction script, then load into Aurora with psql or COPY. (pg_dump and pg_restore only work between PostgreSQL databases, so they don't apply to the Oracle-side export.) No ongoing replication to manage. The tradeoff is downtime — typically 4-8 hours for a 200 GB database, longer for larger datasets.
Hybrid Approach for Large Databases
For databases over 2 TB, we use a hybrid: DMS for initial full load (which can run for days on very large databases without impacting production), followed by CDC (change data capture) replication to keep Aurora in sync during the parallel run phase. This minimizes cutover downtime to 30-60 minutes — the time needed to stop application writes to Oracle, verify DMS replication is caught up, and redirect application connections to Aurora.
The Parallel Run Phase
Every migration we've done includes a parallel run — Oracle and Aurora running side by side, processing the same workload, with results compared. This phase catches issues that no amount of testing in isolation reveals: subtle data type conversion differences, time zone handling discrepancies, sort order changes (Oracle's NLS_SORT vs PostgreSQL's collation), and rounding behavior differences in numeric calculations.
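A cheap first pass at result comparison is matching aggregate checks on both sides, with the output diffed. A hedged sketch of the PostgreSQL side (the Oracle side runs the equivalent query; trades and its columns are hypothetical):

```sql
-- Row counts plus column aggregates per business day.
-- A mismatch points at the table and date range to investigate,
-- including the rounding and sort-order issues noted above.
SELECT trade_date,
       count(*)              AS row_count,
       sum(quantity)         AS total_qty,
       round(sum(amount), 2) AS total_amount
FROM   trades
GROUP  BY trade_date
ORDER  BY trade_date;
```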
Plan for 3-6 weeks of parallel run. The first week catches the obvious issues. The second and third weeks catch the edge cases that only appear with specific data patterns. Weeks four through six are confidence-building — the application team and business stakeholders need to see consistent results before they'll approve the cutover.
The cost of the parallel run is real: you're paying for both Oracle and Aurora infrastructure simultaneously. For a large database, that's $30K-$80K in incremental infrastructure costs over 6 weeks. On every migration, someone asks whether we can shorten it. We don't recommend less than 3 weeks. The cost of a production data discrepancy discovered after cutover is orders of magnitude higher than 6 weeks of parallel infrastructure.
Monitoring: The Forgotten Migration Workstream
Oracle shops rely on Enterprise Manager, AWR reports, ASH analytics, and ADDM recommendations for performance management. None of these exist in Aurora. The monitoring toolchain has to be rebuilt from scratch.
Aurora provides CloudWatch metrics (CPU, memory, I/O, connections, replication lag) and Performance Insights (a lightweight ASH equivalent that shows database load broken down by wait events and top SQL; OS-level metrics come from Enhanced Monitoring). For most operational monitoring, these are sufficient. For deep performance analysis — the kind of investigation you do when a query that ran in 200ms yesterday runs in 15 seconds today — you need more.
pg_stat_statements gives you the top SQL view, but it lacks the historical snapshots that AWR provides out of the box. For historical analysis, you need either pganalyze (SaaS), pg_stat_monitor (Percona's enhanced alternative to pg_stat_statements), or a custom solution that snapshots pg_stat_statements on a schedule and stores the deltas. We deploy pganalyze on every Aurora migration — it's the closest equivalent to having AWR + ASH + ADDM in the PostgreSQL ecosystem.
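A minimal sketch of the custom snapshot approach, assuming PostgreSQL 13+ column names and the pg_cron extension for scheduling (both assumptions; Aurora supports pg_cron, but the job syntax varies by version):

```sql
-- One-time setup: a table to hold periodic snapshots.
CREATE TABLE stmt_snapshots (
  captured_at     timestamptz NOT NULL DEFAULT now(),
  queryid         bigint,
  query           text,
  calls           bigint,
  total_exec_time double precision,  -- named total_time before PostgreSQL 13
  rows            bigint
);

-- Capture every 10 minutes via pg_cron (job name is arbitrary).
SELECT cron.schedule('snapshot_statements', '*/10 * * * *', $$
  INSERT INTO stmt_snapshots (queryid, query, calls, total_exec_time, rows)
  SELECT queryid, query, calls, total_exec_time, rows
  FROM   pg_stat_statements
$$);
```

Deltas between adjacent snapshots (lag() window functions over captured_at) then give you AWR-style interval reporting at query time.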
The monitoring gap is consistently underestimated in migration planning. DBA teams that have relied on AWR reports for a decade need 2-4 weeks to become proficient with the PostgreSQL equivalents. Budget for this training explicitly — it's not something people absorb through documentation alone.
The migrations that fail aren't the ones with complex PL/SQL or unusual Oracle features. Those are known problems with known solutions. The migrations that fail are the ones that underestimate the parallel run phase, skip monitoring toolchain setup, or treat the cutover date as immovable before the assessment is complete. Give the migration the time it needs. A 5-month migration that succeeds is better than a 3-month migration that produces a production incident on day one.
Is It Worth It?
Across our 12 migrations, the average total cost of the migration project — including consulting, parallel run infrastructure, application code changes, and team training — was 1.2x the annual Oracle licensing cost being eliminated. That means the migration pays for itself in 14-15 months. Everything after that is pure savings: by year 3, the cumulative savings exceed $1M for a mid-size Oracle estate.
The operational savings compound on top of the licensing savings. Aurora eliminates patching windows, storage management, backup orchestration, and most high-availability configuration. A database that required 0.5 FTE of DBA time on Oracle requires 0.1-0.2 FTE on Aurora. That freed DBA capacity can be redirected to performance optimization, data architecture, or migrating the next Oracle database.
Planning an Oracle to PostgreSQL Migration?
We've done this 12 times across financial services and manufacturing. We know where the problems hide, how long it actually takes, and what the real costs are. Let's start with a no-cost assessment of your Oracle estate.
Talk to Our Team