// CLOUD MIGRATION · 12 MIN READ

Azure Database Migration Service for Oracle: Pre-Migration Assessment Gaps

Azure DMS Oracle assessments score objects as "automatically convertible" or "needs review." The problem is what falls through the cracks of that binary — package body logic, DBMS_* dependencies, and data type edge cases that only surface during the actual migration run.

// PUBLISHED 2024-01-16 · LANIAKEA TEAM

What the Assessment Report Tells You

Azure Database Migration Service's Oracle assessment generates a report that categorizes each schema object — tables, views, procedures, functions, packages, triggers, sequences — as either ready for automatic conversion or flagged for manual review. The report includes object counts by category and an overall readiness percentage. Teams take that percentage to stakeholders, build a project timeline around it, and then discover during execution that the percentage was optimistic.

The assessment is genuinely useful for what it measures: syntactic conversion feasibility. Where it falls short is in the areas it does not measure at all — runtime behavioral dependencies, cross-schema object references, Oracle built-in package usage, and data type precision edge cases that only manifest with production data distributions.

Gap 1: DBMS_* Package Calls Are Not Flagged

Oracle's built-in packages — DBMS_SCHEDULER, DBMS_CRYPTO, UTL_FILE, UTL_HTTP, DBMS_LOCK, DBMS_PIPE, DBMS_ALERT — are deeply embedded in Oracle applications. They provide scheduling, encryption, file I/O, HTTP calls, advisory locking, inter-session messaging, and dozens of other capabilities that have no direct equivalent in Azure SQL Database or PostgreSQL.

The Azure DMS assessment identifies that these packages are referenced in stored procedures. It does not flag the procedures that call them as requiring complex migration effort. The procedure may be marked as "automatically convertible" even if 40% of its logic calls DBMS_SCHEDULER APIs that need to be replaced with Azure Service Bus, Azure Functions, or SQL Agent jobs.

-- Oracle procedure that looks simple to DMS
CREATE OR REPLACE PROCEDURE refresh_daily_summary AS
  l_file_handle UTL_FILE.FILE_TYPE;  -- opened via UTL_FILE.FOPEN in the full procedure
BEGIN
  -- This part converts automatically
  DELETE FROM daily_summary WHERE summary_date = TRUNC(SYSDATE);
  INSERT INTO daily_summary SELECT ...;

  -- This part does NOT convert and DMS may not flag it clearly
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'REFRESH_DAILY_JOB',
    attribute => 'NEXT_RUN_DATE',
    value     => SYSDATE + 1
  );

  -- UTL_FILE call also needs complete replacement
  UTL_FILE.PUT_LINE(l_file_handle, 'Refresh completed: ' || TO_CHAR(SYSDATE));
END refresh_daily_summary;

A pre-migration audit should grep every procedure and package body for DBMS_* and UTL_* references independently of the DMS assessment. Count the calls, identify which packages are used, and plan replacement architecture for each. DBMS_SCHEDULER alone often requires a non-trivial Azure equivalent — you need to decide whether to use Azure SQL Agent, Azure Functions on a timer, Logic Apps, or a custom scheduling service depending on what the job does.

-- Query to find all DBMS_* and UTL_* references in your Oracle schema
-- (DBA_DEPENDENCIES exposes NAME/TYPE, not OBJECT_NAME/PROCEDURE_NAME)
SELECT name AS object_name, type AS object_type,
       referenced_name
FROM dba_dependencies
WHERE owner = 'YOUR_SCHEMA'
  AND referenced_owner = 'SYS'
  AND (referenced_name LIKE 'DBMS_%'
    OR referenced_name LIKE 'UTL_%'
    OR referenced_name LIKE 'HTP%'
    OR referenced_name LIKE 'OWA%')
ORDER BY name, referenced_name;

Gap 2: Package-Level Variable State

Oracle packages can hold session-level state in package variables, a pattern that shows up in almost every mature Oracle codebase. The DMS assessment can convert the package DDL (the package spec and body syntax) into PostgreSQL-equivalent schema objects. What it cannot convert is the runtime semantics of session-level package variables.

Consider a package that initializes a cache of lookup values in a package variable on first call and reuses it for the session lifetime:

-- Oracle package with session-level cache
CREATE OR REPLACE PACKAGE lookup_cache AS
  TYPE t_lookup_table IS TABLE OF VARCHAR2(200) INDEX BY VARCHAR2(50);
  g_cache    t_lookup_table;
  g_loaded   BOOLEAN := FALSE;

  FUNCTION get_value(p_key VARCHAR2) RETURN VARCHAR2;
END lookup_cache;

CREATE OR REPLACE PACKAGE BODY lookup_cache AS
  FUNCTION get_value(p_key VARCHAR2) RETURN VARCHAR2 AS
  BEGIN
    IF NOT g_loaded THEN
      -- Load all lookup values once per session
      FOR rec IN (SELECT key_col, val_col FROM lookup_master) LOOP
        g_cache(rec.key_col) := rec.val_col;
      END LOOP;
      g_loaded := TRUE;
    END IF;
    RETURN g_cache(p_key);
  END;
END lookup_cache;

DMS will produce a PostgreSQL schema structure for this. It will not reproduce the session-level caching behavior. In PostgreSQL, the equivalent requires either per-call queries (losing the caching benefit), a temporary table per session (overhead per session initialization), a materialized view (not session-scoped), or an application-layer cache. Each has different performance and correctness implications that require explicit architectural decisions — not automatic conversion.
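Of those options, the per-session temporary table is the closest behavioral match. A hedged PL/pgSQL sketch, reusing the lookup_master(key_col, val_col) table from the Oracle example (function and temp table names are illustrative, not DMS output):

```sql
-- PostgreSQL sketch: per-session lookup cache via a temp table.
-- The temp table is private to the session, so this approximates
-- Oracle's package-variable scoping. Caveat: like the Oracle original,
-- the cache goes stale if lookup_master changes mid-session.
CREATE OR REPLACE FUNCTION lookup_get_value(p_key text) RETURNS text AS $$
DECLARE
  v_val text;
BEGIN
  -- First call in the session creates the cache; later calls are a no-op
  CREATE TEMP TABLE IF NOT EXISTS session_lookup_cache AS
    SELECT key_col, val_col FROM lookup_master;

  SELECT val_col INTO v_val
  FROM session_lookup_cache
  WHERE key_col = p_key;

  RETURN v_val;  -- NULL when the key is absent, unlike Oracle's NO_DATA_FOUND
END;
$$ LANGUAGE plpgsql;
```

Note the error-semantics difference: the Oracle version raises NO_DATA_FOUND on a missing key, while this sketch returns NULL, so callers may need adjusting.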

Gap 3: Oracle DATE vs Target DATE Semantics

Oracle's DATE type stores both date and time components — it is equivalent to a timestamp with second precision, not a calendar date. Azure SQL Database's DATE type stores only the date portion. PostgreSQL's DATE type also stores only the date portion.

DMS assessments flag this type mismatch — it is a known gap. What the assessment does not quantify is how pervasively DATE is used as a datetime in a given schema, and how many computations depend on the time component being preserved.

-- Oracle: DATE includes time
SELECT order_date,
       TO_CHAR(order_date, 'YYYY-MM-DD HH24:MI:SS') AS full_dt
FROM orders;
-- Returns: 2024-01-15, 2024-01-15 14:32:07

-- After naive DMS migration to PostgreSQL DATE:
SELECT order_date FROM orders;
-- Returns: 2024-01-15 (time component lost)

-- Correct mapping for Oracle DATE used as datetime:
-- Use TIMESTAMP in the target, not DATE
-- DMS may or may not apply this automatically depending on configuration

-- Check whether a DATE column contains non-midnight time values
-- (run once per DATE column; the data dictionary can generate these checks)
SELECT COUNT(*) AS rows_with_time
FROM your_table
WHERE date_column != TRUNC(date_column);  -- non-midnight rows

Run this check across all DATE columns before migration. If a DATE column contains only midnight values (date-only usage), mapping to DATE in the target is correct. If it contains time values, it must map to TIMESTAMP or DATETIME2 depending on the target. DMS may default to one or the other; verify the mapping configuration matches the actual data usage pattern.
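Editing that check by hand for every column does not scale. The data dictionary can generate it instead; a sketch, with YOUR_SCHEMA as a placeholder:

```sql
-- Generate one time-component check per DATE column in the schema.
-- Run the output statements, then map any column with rows_with_time > 0
-- to TIMESTAMP/DATETIME2 in the target.
SELECT 'SELECT ''' || table_name || '.' || column_name || ''' AS col, '
    || 'COUNT(*) AS rows_with_time FROM ' || owner || '.' || table_name
    || ' WHERE ' || column_name || ' != TRUNC(' || column_name || ');'
       AS check_sql
FROM all_tab_columns
WHERE owner = 'YOUR_SCHEMA'
  AND data_type = 'DATE'
ORDER BY table_name, column_name;
```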

Gap 4: NUMBER Type Precision Mapping

Oracle's NUMBER type with no explicit precision or scale (NUMBER or NUMBER(*,*)) can store up to 38 significant digits. This is Oracle's arbitrary-precision numeric type. DMS maps it to NUMERIC or DECIMAL in PostgreSQL, which is correct in principle — but the default mapping configuration may impose precision limits that truncate data.

-- Oracle: unconstrained NUMBER columns
CREATE TABLE financial_transactions (
  transaction_id NUMBER,        -- no precision limit
  amount         NUMBER,        -- arbitrary precision
  exchange_rate  NUMBER(20, 8)  -- explicit precision
);

-- DMS default mapping may produce:
CREATE TABLE financial_transactions (
  transaction_id NUMERIC(38),
  amount         NUMERIC(38),    -- or FLOAT8 depending on DMS config
  exchange_rate  NUMERIC(20, 8)
);

-- Two distinct precision risks:
-- 1. In PostgreSQL, NUMERIC(38) means NUMERIC(38, 0): scale zero, so
--    fractional amounts are rounded to whole numbers on insert
-- 2. FLOAT8 (double precision) keeps only 15-16 significant digits, so
--    20-digit Oracle amounts silently lose precision
-- Always verify DMS type mapping config before migrating financial data

For financial applications, audit every NUMBER column that does not have explicit scale/precision in Oracle DDL. Verify the DMS mapping configuration produces NUMERIC (arbitrary precision) rather than FLOAT or REAL in the target. A single misconfigured column in a financial ledger table can introduce rounding errors that are extremely difficult to detect and reconcile post-migration.
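The at-risk columns can be enumerated from the data dictionary: unconstrained NUMBER columns show NULL precision and scale in ALL_TAB_COLUMNS. A sketch, with YOUR_SCHEMA as a placeholder:

```sql
-- Find NUMBER columns declared without explicit precision/scale.
-- Each hit needs a deliberate target-type decision (NUMERIC, not FLOAT).
SELECT table_name, column_name
FROM all_tab_columns
WHERE owner = 'YOUR_SCHEMA'
  AND data_type = 'NUMBER'
  AND data_precision IS NULL
  AND data_scale IS NULL
ORDER BY table_name, column_name;
```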

Gap 5: Cross-Database and Database Link Dependencies

Oracle database links (CREATE DATABASE LINK) allow queries and procedure calls to span Oracle instances. In large enterprise environments, it is common to find procedures that query tables via database links — pulling data from a different Oracle instance, a different schema on the same instance, or even a heterogeneous non-Oracle source via Oracle's transparent gateway.

-- Oracle: query via database link
SELECT account_id, balance
FROM accounts@finance_db
WHERE account_id = p_account_id;

-- Or in a join
SELECT o.order_id, c.customer_name
FROM orders o
JOIN customers@crm_db c ON o.customer_id = c.customer_id;

DMS assessment reports database link references as manual conversion items — they cannot be automatically resolved because the target of the link may be a different system that is migrating separately, on a different timeline, or not migrating at all. What the assessment does not do is enumerate the full dependency graph: which procedures reference which links, which tables those links point to, and whether those remote tables are migrating to the same Azure environment or remaining elsewhere.

Before migration, build a database link dependency map:

-- Find all database link references in your schema
SELECT owner, name AS object_name, type AS object_type,
       referenced_link_name AS db_link
FROM dba_dependencies
WHERE referenced_link_name IS NOT NULL
  AND owner = 'YOUR_SCHEMA'
ORDER BY referenced_link_name, object_name;

-- Find all database links defined in the database
SELECT db_link, username, host, created
FROM dba_db_links
ORDER BY db_link;

For each database link, document: (1) what it points to, (2) which procedures use it, (3) whether the target is also migrating, and (4) the replacement architecture — Azure Elastic Query, Linked Server, an ETL feed, or a microservice API call. This is architectural work that cannot be automated.
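When the link target is also landing in Azure SQL Database, Elastic Query is one candidate replacement. A hedged T-SQL sketch, with all server, database, credential, and table names as illustrative placeholders:

```sql
-- Azure SQL Database Elastic Query sketch: external table over a remote
-- Azure SQL database, replacing SELECT ... FROM accounts@finance_db.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL finance_cred
WITH IDENTITY = 'remote_user', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE finance_db_src
WITH (TYPE = RDBMS,
      LOCATION = 'financeserver.database.windows.net',
      DATABASE_NAME = 'finance_db',
      CREDENTIAL = finance_cred);

CREATE EXTERNAL TABLE dbo.accounts (
  account_id INT,
  balance    NUMERIC(20, 2)
)
WITH (DATA_SOURCE = finance_db_src);

-- The db-link query becomes an ordinary local query:
SELECT account_id, balance FROM dbo.accounts WHERE account_id = 1001;
```

Note that Elastic Query is read-oriented and adds per-query network latency, so chatty row-by-row access through a former db link usually needs restructuring, not just re-pointing.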

Gap 6: Trigger Firing Order

Oracle does not formally guarantee firing order when multiple triggers of the same type (BEFORE INSERT, AFTER UPDATE, etc.) are defined on the same table; in practice they fire in creation order, and since 11g the order can be pinned explicitly with FOLLOWS. PostgreSQL fires triggers alphabetically by trigger name when multiple triggers of the same type exist on a table.

If your Oracle schema has multiple triggers of the same type on a table, and the execution order matters (one trigger sets a value the other reads, for example), the alphabetical PostgreSQL ordering may break the dependency. DMS assessment does not analyze trigger firing order dependencies.

-- Check for tables with multiple same-type triggers,
-- listing the trigger names so each hit can be reviewed
SELECT table_name, trigger_type,
       COUNT(*) AS trigger_count,
       LISTAGG(trigger_name, ', ')
         WITHIN GROUP (ORDER BY trigger_name) AS triggers
FROM all_triggers
WHERE owner = 'YOUR_SCHEMA'
  AND status = 'ENABLED'
GROUP BY table_name, trigger_type
HAVING COUNT(*) > 1
ORDER BY table_name, trigger_type;

For each table with multiple same-type triggers, review whether execution order matters. If it does, consolidate into a single trigger in the PostgreSQL migration to make the order explicit and eliminate the dependency on alphabetical naming.
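A consolidation sketch for PostgreSQL 11+ (table, function, and column names are illustrative; the two steps stand in for the two order-dependent Oracle triggers):

```sql
-- PostgreSQL sketch: merge two order-dependent Oracle triggers into one
-- trigger function, making execution order explicit in the code
-- rather than dependent on alphabetical trigger names.
CREATE OR REPLACE FUNCTION orders_before_insert() RETURNS trigger AS $$
BEGIN
  -- Step 1: logic from the first Oracle trigger (sets a default)
  IF NEW.status IS NULL THEN
    NEW.status := 'PENDING';
  END IF;

  -- Step 2: logic from the second Oracle trigger (reads what step 1 set)
  IF NEW.status = 'PENDING' THEN
    NEW.queued_at := now();
  END IF;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_orders_before_insert
BEFORE INSERT ON orders
FOR EACH ROW EXECUTE FUNCTION orders_before_insert();
```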

Building a Better Pre-Migration Assessment

The Azure DMS assessment is a starting point, not a migration plan. Supplement it with:

- A data dictionary scan for DBMS_* and UTL_* references, with a replacement architecture decided per package
- A review of package bodies for session-level variable state and its target-side equivalent
- A data-level audit of DATE columns for non-midnight time values
- Verification that the NUMBER type mapping configuration produces NUMERIC, not FLOAT, for unconstrained columns
- A database link dependency map with a replacement decision per link
- A trigger firing-order review for tables with multiple same-type triggers

This additional work typically takes 2-4 days on a medium-complexity Oracle schema and consistently surfaces items that add weeks to migration execution when discovered late.

Planning an Oracle migration to Azure or AWS?

We run a deep pre-migration assessment that goes beyond what DMS and SCT report — identifying the gaps that cost weeks of unplanned work. Free, no obligation.