Why Type Mapping Is the Migration's Hidden Risk
Schema conversion tools focus on structural fidelity — they get the table names, column names, constraints, and indexes right. Type mapping gets less attention because it looks solved: Oracle NUMBER maps to SQL Server DECIMAL, Oracle VARCHAR2 maps to NVARCHAR, done. The migration report shows green checkmarks and the team moves on.
The problems are in the specifics that the high-level mapping table omits. Oracle NUMBER without explicit precision and scale stores integers, small decimals, and large floating-point values in the same column — what SQL Server type handles all three? Oracle DATE includes a time component down to the second — SQL Server DATE does not. Oracle CHAR pads to byte length, not character length in multibyte character sets. Each of these gaps is individually manageable. Collectively, across a schema with hundreds of tables, they create a migration QA problem that most teams underestimate by a factor of three.
The Core Type Mapping Table
This is the mapping we use as a baseline for Oracle-to-Azure SQL Database migrations. SSMA's defaults are noted where they differ from our recommendations.
-- Oracle NUMBER type mapping decision tree
--
-- Oracle NUMBER without precision: stores any numeric value
--   SSMA default: FLOAT(53) -- loses exact integer representation
--   Recommended: inspect actual data, then map to:
--     integers only  → BIGINT
--     fixed decimal  → DECIMAL(p, s) where p and s are from data analysis
--     mixed/unknown  → DECIMAL(38, 10) as safe wide default
--
-- Oracle NUMBER(p) -- integer-like
--   Maps to: INT (p <= 9), BIGINT (p <= 18), DECIMAL(p, 0) (p > 18)
--
-- Oracle NUMBER(p, s) -- fixed decimal
--   Maps to: DECIMAL(p, s) -- direct equivalent, safe
--
-- Oracle NUMBER(p, s) where s < 0 (scale to left of decimal)
--   Example: NUMBER(5, -2) stores values like 1200, 45600
--   No direct SQL Server equivalent
--   Map to: DECIMAL(p + ABS(s), 0) with application-layer rounding
--   This is a semantic gap -- document it explicitly
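The decision tree above can be condensed into a small helper for scripting the mapping across a schema inventory. A minimal Python sketch; the function name is ours, and the DECIMAL(38,10) fallback follows the "safe wide default" recommendation above:

```python
def map_oracle_number(precision=None, scale=None):
    """Suggest an Azure SQL type for an Oracle NUMBER(p, s) declaration.

    precision=None means bare NUMBER (no precision or scale declared).
    """
    if precision is None:
        # Bare NUMBER: safe wide default until data analysis narrows it
        return "DECIMAL(38,10)"
    if scale is None or scale == 0:
        # Integer-like NUMBER(p)
        if precision <= 9:
            return "INT"
        if precision <= 18:
            return "BIGINT"
        return f"DECIMAL({precision},0)"
    if scale < 0:
        # Negative scale: no direct equivalent -- widen and document the gap
        return f"DECIMAL({precision + abs(scale)},0)"
    # Fixed decimal NUMBER(p, s): direct map
    return f"DECIMAL({precision},{scale})"
```

For example, `map_oracle_number(5, -2)` returns `DECIMAL(7,0)`, matching the NUMBER(5, -2) case in the tree.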
-- Full type mapping reference table
-- Deploy in your migration database for reference
CREATE TABLE oracle_to_azure_type_map (
    oracle_type    NVARCHAR(100),
    oracle_params  NVARCHAR(200),
    azure_sql_type NVARCHAR(100),
    notes          NVARCHAR(500),
    risk_level     NVARCHAR(10)  -- LOW / MEDIUM / HIGH
);
INSERT INTO oracle_to_azure_type_map VALUES
-- Numeric types
('NUMBER', 'no params', 'DECIMAL(38,10)', 'SSMA uses FLOAT(53) -- loses integer exactness', 'HIGH'),
('NUMBER', '(p)', 'BIGINT or DECIMAL(p,0)', 'Use BIGINT for p<=18', 'LOW'),
('NUMBER', '(p,s) s>=0', 'DECIMAL(p,s)', 'Direct map, safe', 'LOW'),
('NUMBER', '(p,s) s<0', 'DECIMAL(p+|s|,0)', 'Semantic shift, document', 'HIGH'),
('FLOAT', '(p)', 'FLOAT(53)', 'Oracle FLOAT(p) is binary precision, SQL Server FLOAT(n) also binary', 'LOW'),
('BINARY_FLOAT', '', 'REAL', 'Single precision, direct map', 'LOW'),
('BINARY_DOUBLE', '', 'FLOAT(53)', 'Double precision, direct map', 'LOW'),
('INTEGER', '', 'DECIMAL(38,0)', 'Oracle INTEGER = NUMBER(38)', 'LOW'),
-- String types
('VARCHAR2', '(n) BYTE', 'NVARCHAR(n)', 'SSMA maps to NVARCHAR -- adds Unicode overhead, verify sizing', 'MEDIUM'),
('VARCHAR2', '(n) CHAR', 'NVARCHAR(n)', 'Character semantics -- direct', 'LOW'),
('CHAR', '(n)', 'NCHAR(n)', 'SSMA maps to NCHAR, fixed-length Unicode', 'LOW'),
('NVARCHAR2', '(n)', 'NVARCHAR(n)', 'Direct Unicode map', 'LOW'),
('NCHAR', '(n)', 'NCHAR(n)', 'Direct Unicode map', 'LOW'),
('LONG', '', 'NVARCHAR(MAX)', 'Deprecated in Oracle, maps to MAX', 'MEDIUM'),
-- Date/time types
('DATE', '', 'DATETIME2(0)', 'Oracle DATE includes time -- SQL DATE does not. SSMA uses DATETIME -- range limited to 1753+, rounding artifacts', 'HIGH'),
('TIMESTAMP', '(0-6)', 'DATETIME2(n)', 'n = Oracle fractional seconds precision', 'LOW'),
('TIMESTAMP WITH TIME ZONE', '', 'DATETIMEOFFSET', 'Direct map, preserves offset', 'LOW'),
('TIMESTAMP WITH LOCAL TIME ZONE', '', 'DATETIME2(6)', 'Time zone lost -- Oracle normalizes to the DB time zone on store and converts to session TZ on read; no offset survives. Document this', 'HIGH'),
('INTERVAL YEAR TO MONTH', '', 'No equivalent', 'Must use application-layer calculation or computed column', 'HIGH'),
('INTERVAL DAY TO SECOND', '', 'No equivalent', 'Store as total seconds in BIGINT or use CLR type', 'HIGH'),
-- Large object types
('CLOB', '', 'NVARCHAR(MAX)', 'Up to 2GB in SQL Server. SSMA maps to NVARCHAR(MAX)', 'LOW'),
('NCLOB', '', 'NVARCHAR(MAX)', 'Direct', 'LOW'),
('BLOB', '', 'VARBINARY(MAX)', 'Direct', 'LOW'),
('BFILE', '', 'No equivalent', 'External file pointer. Migrate to FILESTREAM or store path in NVARCHAR', 'HIGH'),
('LONG RAW', '', 'VARBINARY(MAX)', 'Deprecated in Oracle', 'MEDIUM'),
('RAW', '(n)', 'VARBINARY(n)', 'Direct', 'LOW'),
-- Other types
('ROWID', '', 'NVARCHAR(18)', 'No equivalent. If used as PK, replace with BIGINT IDENTITY', 'HIGH'),
('UROWID', '', 'NVARCHAR(18)', 'No equivalent', 'HIGH'),
('XMLTYPE', '', 'XML', 'SQL Server XML type has similar query syntax but different functions', 'MEDIUM'),
('BOOLEAN', '', 'BIT', 'Only in PL/SQL -- not a table column type in Oracle < 23', 'LOW');
The DATE Problem in Detail
Oracle DATE stores date and time to one-second precision. A column defined as DATE in Oracle holds values like 2023-07-18 14:35:22. SQL Server's DATE type stores only the date portion — 2023-07-18. SSMA maps Oracle DATE to SQL Server DATETIME, which is the right general direction but introduces two issues:
- DATETIME rounds the time component to 1/300-second ticks (.000, .003, .007). If any Oracle DATE values appear to carry sub-second precision (Oracle DATE itself doesn't support it, but some ETL processes write values that seem to), DATETIME may subtly round them.
- DATETIME has a range of 1753-01-01 to 9999-12-31. Oracle DATE supports 4712 BC to 9999 AD. Historical dates outside DATETIME's range will cause insert failures.
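The range gap is easy to screen for in an exported dataset before the load fails. A minimal Python sketch; the function name is ours, assuming the Oracle DATE values have already been parsed into `datetime.date` objects:

```python
from datetime import date

# SQL Server DATETIME rejects anything before this; DATETIME2 accepts 0001-01-01
DATETIME_MIN = date(1753, 1, 1)

def outside_datetime_range(dates):
    """Return the Oracle DATE values that DATETIME would fail to store."""
    return [d for d in dates if d < DATETIME_MIN]
```

Any non-empty result is an argument for DATETIME2 over DATETIME regardless of other considerations.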
Our recommendation: map Oracle DATE to DATETIME2(0), not DATETIME. DATETIME2(0) stores to one-second precision (matching Oracle DATE exactly), has the same date range as SQL Server's maximum (0001-01-01 to 9999-12-31), and avoids DATETIME's rounding artifact.
-- Identify Oracle DATE columns actually being used for date-only storage
-- (time portion is always midnight -- common pattern for "pure date" columns)
-- Run this in Oracle before migration to decide mapping
SELECT
    'YOUR_TABLE' AS table_name,      -- literals: run once per table/column pair
    'CREATED_DATE' AS column_name,
    COUNT(*) AS total_rows,
    SUM(CASE WHEN created_date != TRUNC(created_date) THEN 1 ELSE 0 END) AS rows_with_time,
    ROUND(100 * SUM(CASE WHEN created_date != TRUNC(created_date) THEN 1 ELSE 0 END)
          / NULLIF(COUNT(*), 0), 1) AS pct_has_time
FROM your_table;
-- If pct_has_time = 0: this is a pure date column → map to DATE in Azure SQL
-- If pct_has_time > 0: this stores datetime values → map to DATETIME2(0)
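The same classification can be applied to a sampled export outside the database. A minimal Python sketch mirroring the pct_has_time rule; the function name is ours, assuming the values arrive as `datetime.datetime` objects:

```python
from datetime import datetime, time

def recommend_date_mapping(values):
    """Map to DATE if no row carries a time component, else DATETIME2(0)."""
    has_time = any(v.time() != time(0, 0, 0) for v in values)
    return "DATETIME2(0)" if has_time else "DATE"
```

Sampling is usually sufficient here: a single row with a non-midnight time settles the decision for the whole column.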
NUMBER Without Precision: The Invisible Precision Loss
Oracle NUMBER without precision or scale stores any numeric value with up to 38 significant digits. It's Oracle's universal numeric type — developers use it for integer IDs, monetary amounts, percentages, and floating-point calculations, all in columns defined simply as NUMBER.
SSMA maps bare NUMBER to FLOAT(53) (SQL Server's double-precision floating point). This is wrong for any column storing integer values or exact decimals. FLOAT(53) cannot represent large integers exactly — integers above 2^53 (about 9 × 10^15) lose precision. A NUMBER column storing order IDs above that threshold will have those IDs silently rounded to the nearest representable floating-point value after migration.
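The rounding is easy to reproduce, since FLOAT(53) is an IEEE 754 double, the same representation as a Python `float`:

```python
# FLOAT(53) is an IEEE 754 double: not every integer above 2**53 is representable
order_id = 2**53 + 1              # 9007199254740993
migrated = int(float(order_id))   # round trip through double precision
assert migrated == 2**53          # the ID came back as 9007199254740992
assert int(float(2**53 - 1)) == 2**53 - 1  # below the threshold: exact
```

The value does not error on insert; it simply comes back as a different ID, which is exactly the "silent degradation" failure mode this section is about.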
-- Oracle: audit bare NUMBER columns to determine correct SQL Server type
SELECT
    c.owner,
    c.table_name,
    c.column_name,
    c.data_precision,
    c.data_scale,
    c.nullable,
    -- Sample analysis query to run per column:
    'SELECT MAX(LENGTH(TO_CHAR(TRUNC(ABS(' || c.column_name || '))))) AS max_int_digits, '
    || 'MAX(CASE WHEN ' || c.column_name || ' != TRUNC(' || c.column_name
    || ') THEN 1 ELSE 0 END) AS has_fractional '
    || 'FROM ' || c.owner || '.' || c.table_name AS analysis_query
FROM all_tab_columns c
WHERE c.data_type = 'NUMBER'
  AND c.data_precision IS NULL
  AND c.data_scale IS NULL
ORDER BY c.owner, c.table_name, c.column_name;
-- Run per suspicious column and interpret results:
-- max_int_digits <= 9, has_fractional = 0 → INT
-- max_int_digits <= 18, has_fractional = 0 → BIGINT
-- max_int_digits > 18, has_fractional = 0 → DECIMAL(38, 0)
-- has_fractional > 0 → DECIMAL(38, 10) or measure actual precision needed
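The interpretation rules above can be scripted when there are many columns to process. A minimal Python sketch; the function name is ours, taking the two values each generated analysis query returns:

```python
def type_from_profile(max_int_digits, has_fractional):
    """Apply the bare-NUMBER interpretation rules to one column's data profile."""
    if has_fractional:
        # Wide default; measure actual scale if tighter sizing matters
        return "DECIMAL(38,10)"
    if max_int_digits <= 9:
        return "INT"
    if max_int_digits <= 18:
        return "BIGINT"
    return "DECIMAL(38,0)"
```

Feeding the profiler output through this function turns a per-column judgment call into a reviewable, repeatable mapping list.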
-- Post-migration validation query for NUMBER→DECIMAL conversions
-- Run in Azure SQL after data load. The Oracle values must be staged for
-- comparison (here: exported as text into oracle_stage) -- a column cannot
-- be validated against a cast of itself.
SELECT
    COUNT(*) AS rows_checked,
    SUM(CASE WHEN m.azure_col <> TRY_CAST(s.oracle_value AS DECIMAL(38,10))
              OR (m.azure_col IS NULL AND s.oracle_value IS NOT NULL)
        THEN 1 ELSE 0 END) AS precision_loss_count
FROM migrated_table m
JOIN oracle_stage s ON s.row_id = m.row_id;
INTERVAL Types: The Unsupported Gap
Oracle INTERVAL YEAR TO MONTH and INTERVAL DAY TO SECOND have no SQL Server equivalents. If your Oracle schema uses these types — common in financial applications for storing tenor/maturity intervals or in scheduling systems — you need a strategy before migration, not after.
Three approaches in order of preference:
- Decompose to integers at migration time. INTERVAL YEAR TO MONTH → two INT columns (years, months). INTERVAL DAY TO SECOND → one BIGINT storing total seconds. Application code must be updated to construct and interpret these values differently, but the data is stored exactly.
- Store as formatted string. Convert interval values to ISO 8601 duration strings (P1Y6M, P30DT12H). Preserves semantic meaning but loses native arithmetic — interval math must be done in application code or via SQL computed columns with string parsing.
- CLR user-defined type. Implement a custom CLR type in SQL Server that provides INTERVAL semantics. Maximum compatibility with Oracle behavior but adds a CLR dependency and significant ongoing maintenance overhead. Only justified for schemas with extensive interval arithmetic in stored procedures.
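The first two approaches amount to simple, lossless transformations that can run in the migration pipeline. A minimal Python sketch of both; the function names are ours:

```python
def interval_ds_to_seconds(days, hours, minutes, seconds):
    """Approach 1: flatten INTERVAL DAY TO SECOND to total seconds (BIGINT)."""
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

def interval_ym_to_iso(years, months):
    """Approach 2: render INTERVAL YEAR TO MONTH as an ISO 8601 duration string."""
    return f"P{years}Y{months}M"
```

For example, an interval of 1 day, 2 hours, 3 minutes, 4 seconds flattens to 93784 seconds, and a 1-year-6-month tenor becomes the string P1Y6M.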
Post-Migration Type Validation Queries
After migrating data, run these validation queries to confirm no silent truncation or precision loss occurred:
-- Azure SQL: check for NULL values that didn't exist in Oracle
-- (type conversion failures often silently produce NULLs when ANSI_WARNINGS is OFF)
-- Generate one check statement per nullable column, then run each and compare
-- the null_count against the Oracle source:
SELECT
    'SELECT ''' + c.TABLE_NAME + '.' + c.COLUMN_NAME + ''' AS col, '
    + 'COUNT(*) AS null_count FROM ' + QUOTENAME(c.TABLE_NAME)
    + ' WHERE ' + QUOTENAME(c.COLUMN_NAME) + ' IS NULL;' AS check_query
FROM INFORMATION_SCHEMA.COLUMNS c
WHERE c.TABLE_NAME = 'your_migrated_table'
  AND c.IS_NULLABLE = 'YES';
-- Check for values at SQL Server type boundaries (possible truncation indicator)
-- For NVARCHAR(n) columns migrated from VARCHAR2(n): run per column,
-- substituting the column name and its declared length (100 used here)
SELECT
    MAX(LEN(customer_name)) AS max_length,
    SUM(CASE WHEN LEN(customer_name) = 100 THEN 1 ELSE 0 END) AS at_max_count
FROM your_migrated_table;
-- A cluster of rows exactly at the declared maximum suggests silent truncation
-- Verify row counts match Oracle source exactly
-- Run equivalent COUNT(*) in Oracle before migration and compare
SELECT
    'orders' AS table_name,
    COUNT(*) AS azure_row_count,
    1847293 AS oracle_row_count,  -- fill in from Oracle export
    COUNT(*) - 1847293 AS difference
FROM orders;
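With counts exported from both sides, the comparison itself is a one-liner worth automating across every table. A minimal Python sketch; the function name is ours, taking table-to-count dictionaries from the two exports:

```python
def reconcile_row_counts(oracle_counts, azure_counts):
    """Return {table: azure - oracle} for every table whose counts differ."""
    return {
        t: azure_counts.get(t, 0) - n
        for t, n in oracle_counts.items()
        if azure_counts.get(t, 0) != n
    }
```

An empty result means every table reconciled; anything else names the tables to investigate and the size of the discrepancy.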
Type mapping issues in Oracle-to-Azure SQL migrations rarely cause outright failures during data load — they cause silent degradation that surfaces as application logic errors, financial calculation discrepancies, or date comparison bugs weeks after go-live. The validation queries above, run against every migrated table, are the difference between a clean migration sign-off and a 6am production incident call.