// DATABASE MIGRATION · 11 MIN READ

Migrating DB2 LUW to Aurora PostgreSQL: A Field Guide

A practical field guide for migrating DB2 LUW on-premises databases to Aurora PostgreSQL — data type mapping, stored procedure conversion, migration tooling choices, and the gotchas that routinely blow up timelines.

// PUBLISHED 2026-04-20 · LANIAKEA TEAM

Why Teams Are Moving Off DB2 LUW

The DB2 LUW migration conversation starts in one of three places: IBM licensing renewal sticker shock, a cloud-first mandate that landed on a 15-year-old DB2 installation, or a DBA team that can no longer recruit junior talent who know DB2.

All three are real forcing functions. IBM's DB2 licensing (processor-based, with annual maintenance and support tiers) runs $25,000–$200,000/year for enterprise environments. When the renewal lands in the same quarter as a CFO asking about cloud costs, Aurora's pay-per-use model looks compelling. Aurora PostgreSQL eliminates the licensing cost entirely and runs on managed infrastructure with built-in Multi-AZ replication, automated backups, and no patching overhead for the DBA team.

The skills argument is also real: the PostgreSQL talent pool is orders of magnitude larger than DB2 LUW's. Your next hire is more likely to know PostgreSQL deeply than DB2, and your existing team can learn PostgreSQL in months rather than years.

What's less often stated honestly: DB2 LUW to Aurora PostgreSQL is a hard migration. Not impossible, not a reason to stay on DB2, but harder than Oracle-to-Aurora or SQL Server-to-RDS, because DB2's SQL dialect, system catalog, and procedural language (SQL PL) sit further from PostgreSQL than their Oracle counterparts do. This guide tells you what you're actually getting into.

Phase 1: Assessment

Before writing a single line of migration code, spend two weeks in assessment. The output should be a migration complexity score for each database object.

What to inventory:

-- DB2: Count objects by type
SELECT TABSCHEMA, TYPE, COUNT(*) AS cnt
FROM SYSCAT.TABLES
WHERE TABSCHEMA NOT LIKE 'SYS%'
GROUP BY TABSCHEMA, TYPE
ORDER BY cnt DESC;
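
Routines deserve their own count, since stored procedure conversion dominates Phase 3. A similar query against SYSCAT.ROUTINES breaks them down by type and language; external routines written in C or Java need a different conversion path than SQL PL:

-- DB2: count routines by schema, type (P = procedure, F = function), and language
SELECT ROUTINESCHEMA, ROUTINETYPE, LANGUAGE, COUNT(*) AS cnt
FROM SYSCAT.ROUTINES
WHERE ROUTINESCHEMA NOT LIKE 'SYS%'
GROUP BY ROUTINESCHEMA, ROUTINETYPE, LANGUAGE
ORDER BY cnt DESC;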

For each schema, catalog:

Complexity tiers:

Phase 2: Schema Conversion

Data Type Mapping

Most DB2 data types map cleanly to PostgreSQL. The ones that don't are where migrations break:

DB2 Type             PostgreSQL Equivalent   Notes
INTEGER              INTEGER                 Direct
BIGINT               BIGINT                  Direct
DECIMAL(p,s)         NUMERIC(p,s)            Direct
VARCHAR(n)           VARCHAR(n)              Direct
TIMESTAMP            TIMESTAMP               DB2 supports up to 12 fractional-second digits; PostgreSQL supports 6
CLOB                 TEXT                    No size limit in PostgreSQL
BLOB                 BYTEA                   Direct equivalent
XML                  XML or JSONB            Convert to JSONB if the use case fits
GRAPHIC/VARGRAPHIC   VARCHAR                 DB2 double-byte types; PostgreSQL handles this natively with UTF-8
DECFLOAT             NUMERIC                 DB2's IEEE 754r decimal floating point
ROWID                No equivalent           Remove; use an application-layer UUID if needed

The IDENTITY column conversion is the most frequent source of breakage. DB2 GENERATED ALWAYS AS IDENTITY becomes PostgreSQL GENERATED ALWAYS AS IDENTITY (PostgreSQL 10+) or SERIAL for older targets. The critical detail: if your application inserts explicit values into IDENTITY columns (common in DB2 with GENERATED BY DEFAULT), you need GENERATED BY DEFAULT AS IDENTITY in PostgreSQL, not GENERATED ALWAYS.
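A minimal sketch of the BY DEFAULT variant, using a hypothetical invoices table: after bulk-loading rows that carry explicit IDs, the backing sequence also has to be advanced past the current maximum, or the first post-migration insert collides with an existing key.

-- PostgreSQL: accept explicit IDs from the application, generate one otherwise
CREATE TABLE invoices (
    invoice_id INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    amount     NUMERIC(12,2) NOT NULL
);

-- After the bulk load, advance the identity sequence to the current maximum
SELECT setval(pg_get_serial_sequence('invoices', 'invoice_id')::regclass,
              (SELECT COALESCE(MAX(invoice_id), 1) FROM invoices));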

Schema-Level DDL Differences

-- DB2
CREATE TABLE orders (
    order_id  INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1),
    customer  VARCHAR(100) NOT NULL,
    created   TIMESTAMP DEFAULT CURRENT TIMESTAMP
);

-- PostgreSQL equivalent
CREATE TABLE orders (
    order_id  INTEGER GENERATED ALWAYS AS IDENTITY,
    customer  VARCHAR(100) NOT NULL,
    created   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Notable DDL differences:

Phase 3: Stored Procedure Conversion

This is where the timeline gets consumed. DB2 SQL PL and PostgreSQL PL/pgSQL are both procedural SQL extensions but have meaningful differences:

Syntax Differences

-- DB2 SQL PL procedure
CREATE PROCEDURE get_customer_orders (
    IN p_cust_id INTEGER,
    OUT p_order_count INTEGER
)
LANGUAGE SQL
BEGIN
    DECLARE v_status VARCHAR(20);

    SELECT COUNT(*) INTO p_order_count
    FROM orders
    WHERE customer_id = p_cust_id;

    IF p_order_count = 0 THEN
        SET v_status = 'NO_ORDERS';
    ELSE
        SET v_status = 'HAS_ORDERS';
    END IF;
END;

-- PostgreSQL PL/pgSQL equivalent
CREATE OR REPLACE PROCEDURE get_customer_orders (
    IN p_cust_id INTEGER,
    INOUT p_order_count INTEGER DEFAULT NULL
)
LANGUAGE plpgsql AS $$
DECLARE
    v_status VARCHAR(20);
BEGIN
    SELECT COUNT(*) INTO p_order_count
    FROM orders
    WHERE customer_id = p_cust_id;

    IF p_order_count = 0 THEN
        v_status := 'NO_ORDERS';
    ELSE
        v_status := 'HAS_ORDERS';
    END IF;
END;
$$;

Key differences:

- PL/pgSQL wraps the body in a dollar-quoted string: LANGUAGE plpgsql AS $$ ... $$ replaces DB2's bare LANGUAGE SQL compound statement.
- The DECLARE section sits between AS $$ and BEGIN; in DB2 SQL PL, declarations live inside the BEGIN block.
- Variable assignment uses := instead of SET.
- PostgreSQL procedures didn't support OUT parameters until version 14, so OUT becomes INOUT on older targets and callers pass a placeholder, e.g. CALL get_customer_orders(42, NULL).

Budget 20–40 minutes per stored procedure for conversion and testing. A database with 200 stored procedures is 70–130 hours of work. This is the single most common reason migrations run over budget.

Phase 4: Data Migration Tooling

Three approaches, each with tradeoffs:

AWS Database Migration Service (DMS)

Best for: Initial bulk load + CDC (change data capture) during parallel run period.
DB2 LUW support: AWS DMS supports DB2 LUW as a source with CDC via the DB2 log reader.
Gotchas:
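One prerequisite worth checking before you size the DMS work: change data capture requires the source database to be recoverable, i.e. running with archive logging rather than circular logging. A quick check from SQL, assuming the SYSIBMADM administrative views are enabled on your instance:

-- DB2: confirm archive logging is configured (circular logging blocks CDC)
SELECT NAME, VALUE
FROM SYSIBMADM.DBCFG
WHERE LOWER(NAME) LIKE 'logarchmeth%';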

pg_dump via Foreign Data Wrapper

Best for: Smaller databases (<50 GB) where standing up DMS isn't worth the setup.
Approach: Install db2_fdw on a self-managed staging PostgreSQL (Aurora can't load custom FDW extensions), expose the DB2 tables as foreign tables, populate the staging schema with INSERT INTO ... SELECT, then pg_dump/pg_restore the result into Aurora.
Gotchas: Performance is limited by FDW throughput; not suitable for large tables without parallel, partition-based extraction.
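A rough sketch of the staging-side pattern, assuming db2_fdw is already installed and a foreign server plus user mapping exist; the OPTIONS keys shown are placeholders, so check the extension's documentation for the exact names:

-- PostgreSQL (staging instance): map a DB2 table and copy it across the FDW
CREATE FOREIGN TABLE db2_orders (
    order_id  INTEGER,
    customer  VARCHAR(100),
    created   TIMESTAMP
) SERVER db2_source                                -- assumes the foreign server + user mapping exist
  OPTIONS (schema 'APPSCHEMA', table 'ORDERS');    -- placeholder option names

INSERT INTO orders (order_id, customer, created)
OVERRIDING SYSTEM VALUE   -- needed because order_id is GENERATED ALWAYS AS IDENTITY
SELECT order_id, customer, created
FROM db2_orders;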

Custom ETL

Best for: Complex transformations, data cleansing requirements, or when source and target schemas diverge significantly.
Tooling: Python with ibm-db (DB2 connector) and psycopg2, or Apache Spark for parallel extraction of large tables.
Advantage: Full control over transformation logic; can handle DB2-specific quirks inline.

Phase 5: Testing and Parallel Run

Never cut over without a parallel run. Run both DB2 and Aurora PostgreSQL simultaneously for at least 2–4 weeks:

  1. Dual-write pattern: Application writes go to both databases; reads come from DB2 until validation is complete
  2. Row count reconciliation: Automated daily comparison of row counts per table (see the generator query sketched after this list)
  3. Data spot-checks: Sample 1,000 rows from 20 tables and compare field by field
  4. Stored procedure output comparison: For every procedure call in production traffic, compare return values between DB2 and Aurora
  5. Performance baseline: Confirm key queries run at acceptable performance on Aurora; don't assume schema equivalence means performance equivalence
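For the row count reconciliation step, one low-tech approach on the PostgreSQL side is to generate the per-table COUNT(*) statements from the catalog, run them nightly, and diff the output against the equivalent counts pulled from DB2 (driven off SYSCAT.TABLES). A sketch of the generator:

-- PostgreSQL: emit one COUNT(*) statement per user table for the nightly reconciliation run
SELECT format('SELECT %L AS table_name, COUNT(*) AS cnt FROM %I.%I;',
              schemaname || '.' || tablename, schemaname, tablename) AS count_sql
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY schemaname, tablename;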

The Cutover Decision

Cut over when:

- Row count reconciliation has run clean across every table for at least the final two weeks of the parallel run
- Data spot-checks and stored procedure output comparisons show no unexplained differences
- Key queries meet the performance baseline on Aurora under production-like load

Keep DB2 running (read-only) for 30 days post-cutover before decommissioning.

Planning a DB2 LUW migration?

We provide migration assessment, stored procedure conversion, DMS configuration, and parallel-run validation as a structured engagement. 20+ years of DB2 LUW experience, paired with deep Aurora PostgreSQL expertise.