Every Odoo implementation starts fast. A fresh database with a handful of test records feels snappy — form views load instantly, tree views render before you finish clicking. Then production happens. A year later, the sales team is complaining that the quotation list takes eight seconds to load, the warehouse team is watching spinners on stock moves, and someone in accounting has started timing their lunch break around the general ledger report.
We've tuned Odoo instances ranging from 50-user SMBs to 800-user multi-company deployments. The performance problems are almost always the same: missing PostgreSQL indexes, ORM methods that generate catastrophic query patterns, and configuration defaults that made sense for a demo database but collapse under real data volumes. This article covers the fixes that consistently deliver the biggest improvements.
Understanding Where Time Goes
Before tuning anything, you need to see what's slow. Odoo ships with a developer mode profiler, but for database-level visibility, PostgreSQL's pg_stat_statements extension is indispensable. It tracks every query executed against your database, how often it runs, and how long it takes on average.
Enable it by preloading the library in your PostgreSQL configuration (this requires a server restart), then creating the extension in the Odoo database:
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
pg_stat_statements.max = 10000
-- Then, in your Odoo database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
Once enabled, this query surfaces your worst offenders — the queries consuming the most total time across all invocations:
SELECT
    round(total_exec_time::numeric, 2) AS total_ms,
    calls,
    round(mean_exec_time::numeric, 2) AS avg_ms,
    round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) AS pct,
    substring(query, 1, 120) AS query_preview
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
In our experience, the top 5 queries by total time typically account for 40–60% of all database time in a busy Odoo instance. Fix those five and the entire system feels different.
The Indexes Odoo Doesn't Create
Odoo's ORM creates indexes on primary keys and fields explicitly marked with index=True in the model definition. But many fields that end up in WHERE clauses and ORDER BY expressions during normal use don't have indexes. The ORM can't predict how your users will filter and sort data.
Finding Missing Indexes
PostgreSQL tracks sequential scans on every table. A sequential scan means the database is reading every row to find matches — acceptable on a 500-row table, devastating on a 2-million-row stock_move_line table. This query identifies tables where sequential scans dominate:
SELECT
    schemaname,
    relname AS table_name,
    seq_scan,
    idx_scan,
    round(100.0 * seq_scan / nullif(seq_scan + idx_scan, 0), 1) AS seq_pct,
    pg_size_pretty(pg_relation_size(relid)) AS size
FROM pg_stat_user_tables
WHERE seq_scan + idx_scan > 100
ORDER BY seq_scan DESC
LIMIT 20;
Any large table (over 50 MB) with a sequential scan percentage above 50% is a prime candidate for index analysis. Examine the slow queries hitting that table and add targeted indexes.
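To confirm what the planner is actually doing for a specific slow query, run it under EXPLAIN ANALYZE. A sketch, using an illustrative sale_order filter rather than a query from any particular deployment:

```sql
-- "Seq Scan on sale_order" in the output confirms a full table scan;
-- after adding a suitable index, the same query should show an Index Scan.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, name, date_order
FROM sale_order
WHERE state = 'sale'
ORDER BY date_order DESC
LIMIT 80;
```

The BUFFERS option also shows how many pages came from cache versus disk, which helps distinguish a planning problem from an I/O problem.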
Common High-Value Indexes
Across dozens of Odoo deployments, these indexes deliver measurable improvements almost every time:
-- Sale orders filtered/sorted by date and state
CREATE INDEX CONCURRENTLY idx_sale_order_date_state
    ON sale_order (date_order DESC, state);

-- Stock moves: the workhorses of warehouse operations
CREATE INDEX CONCURRENTLY idx_stock_move_product_state
    ON stock_move (product_id, state);

CREATE INDEX CONCURRENTLY idx_stock_move_line_lot
    ON stock_move_line (lot_id)
    WHERE lot_id IS NOT NULL;

-- Account move lines: the foundation of all accounting reports
CREATE INDEX CONCURRENTLY idx_aml_account_date
    ON account_move_line (account_id, date DESC);

CREATE INDEX CONCURRENTLY idx_aml_partner_reconciled
    ON account_move_line (partner_id, reconciled)
    WHERE reconciled = false;

-- Mail message: often the largest table, queried on every form view
CREATE INDEX CONCURRENTLY idx_mail_message_resmodel_resid
    ON mail_message (model, res_id)
    WHERE model IS NOT NULL;
Key insight: Always use CREATE INDEX CONCURRENTLY on production databases. Without the CONCURRENTLY flag, PostgreSQL locks the entire table for writes during index creation — a 10-minute index build on account_move_line means 10 minutes of blocked accounting operations.
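On PostgreSQL 12 and later, you can watch a concurrent build from a second session via the pg_stat_progress_create_index system view, which is useful for estimating how long a large build will take:

```sql
-- Run from another session while CREATE INDEX CONCURRENTLY is in flight
SELECT relid::regclass   AS table_name,
       index_relid::regclass AS index_name,
       phase,
       blocks_done,
       blocks_total
FROM pg_stat_progress_create_index;
```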
Partial Indexes: The Secret Weapon
Odoo tables accumulate records that are rarely queried after they reach a terminal state. Cancelled sale orders, done stock moves, and posted journal entries are historical data. Most day-to-day queries filter for active, draft, or in-progress records. Partial indexes exploit this pattern:
-- Only index the sale orders people actually work with
CREATE INDEX CONCURRENTLY idx_sale_order_active
ON sale_order (create_date DESC, user_id)
WHERE state IN ('draft', 'sent', 'sale');
-- Only index unreconciled move lines (the ones accountants care about)
CREATE INDEX CONCURRENTLY idx_aml_unreconciled
ON account_move_line (partner_id, account_id, amount_residual)
WHERE reconciled = false AND parent_state = 'posted';
Partial indexes are smaller, faster to scan, and cheaper to maintain because PostgreSQL only updates them when a qualifying row changes. On a database with 3 million account move lines where only 200,000 are unreconciled, the partial index is 15x smaller than a full index on the same columns.
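You can check the size claim on your own data: pg_relation_size works on indexes as well as tables. A quick sketch, using the partial index name created above:

```sql
-- Compare index sizes after both have been built
SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relname = 'idx_aml_unreconciled';
```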
ORM Patterns That Kill Performance
PostgreSQL tuning addresses the database layer, but many Odoo performance problems originate in Python code — specifically, in how developers use the ORM. The Odoo ORM is convenient, but its convenience makes it easy to write code that generates hundreds of queries where one would suffice.
The N+1 Problem in Odoo
The classic N+1 pattern looks like this in Odoo custom code:
# BAD: Triggers one query per sale order line
for line in self.order_line:
    product_name = line.product_id.name
    partner_name = line.order_id.partner_id.name
    # ... do something with these values
Each line.product_id.name access triggers a separate SQL query if the related record isn't already in the ORM cache. For a sale order with 200 lines, this code executes 400+ queries. The fix is to prefetch related fields before iterating:
# GOOD: Prefetch all related data in bulk
lines = self.order_line
lines.mapped('product_id')           # Prefetches all products in one query
lines.mapped('order_id.partner_id')  # Prefetches all partners in one query
for line in lines:
    product_name = line.product_id.name           # Now served from cache
    partner_name = line.order_id.partner_id.name  # Also cached
Alternatively, use search_read() or read_group() when you need raw data without recordset overhead:
# BEST for large datasets: Single query, dictionary results
line_data = self.env['sale.order.line'].search_read(
    [('order_id', 'in', self.ids)],
    ['product_id', 'price_subtotal', 'product_uom_qty'],
)
search_count vs search
A surprisingly common pattern in custom Odoo code is calling search() just to count results:
# BAD: Loads all record IDs into memory, then counts them
count = len(self.env['stock.move'].search([('state', '=', 'done')]))
This forces PostgreSQL to return every matching row ID to the ORM. On a table with 500,000 done stock moves, that's a lot of data transfer for a number. Use search_count() instead:
# GOOD: Executes SELECT COUNT(*), returns an integer
count = self.env['stock.move'].search_count([('state', '=', 'done')])
read_group for Aggregation
When you need sums, averages, or counts grouped by category, read_group() pushes the aggregation to PostgreSQL instead of doing it in Python:
# BAD: Loads every invoice line into Python, then sums
lines = self.env['account.move.line'].search([
    ('move_id.move_type', '=', 'out_invoice'),
    ('date', '>=', '2026-01-01'),
])
total_by_account = {}
for line in lines:
    total_by_account.setdefault(line.account_id.id, 0)
    total_by_account[line.account_id.id] += line.balance
# GOOD: Single SQL query with GROUP BY
results = self.env['account.move.line'].read_group(
    domain=[
        ('move_id.move_type', '=', 'out_invoice'),
        ('date', '>=', '2026-01-01'),
    ],
    fields=['balance:sum'],
    groupby=['account_id'],
)
On a dataset of 100,000 invoice lines, the read_group() approach typically runs 50–100x faster and uses a fraction of the memory.
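The gap has nothing to do with Odoo specifically: pushing aggregation into SQL wins in any stack, because only the aggregated rows cross the database boundary. A framework-free sketch of the same comparison using Python's built-in sqlite3 module (synthetic data, purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE move_line (account_id INTEGER, balance REAL)")
conn.executemany(
    "INSERT INTO move_line VALUES (?, ?)",
    [(i % 5, float(i)) for i in range(10_000)],  # 10k synthetic lines, 5 accounts
)

# Python-side aggregation: all 10,000 rows cross the DB boundary
totals_py = {}
for account_id, balance in conn.execute("SELECT account_id, balance FROM move_line"):
    totals_py[account_id] = totals_py.get(account_id, 0.0) + balance

# SQL-side aggregation: only 5 rows come back, the database does the work
totals_sql = dict(conn.execute(
    "SELECT account_id, SUM(balance) FROM move_line GROUP BY account_id"
))

assert totals_py == totals_sql  # same numbers, far less data movement
```

The results are identical; the difference is how many rows the application has to fetch, deserialize, and loop over.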
PostgreSQL Configuration for Odoo
The default PostgreSQL configuration is conservative enough to run on very modest hardware (shared_buffers, for instance, defaults to just 128 MB). If you're running Odoo with the default postgresql.conf, your database is using a small fraction of available system resources. The settings below are a starting point for a dedicated database server with 16 GB of RAM; scale them with your hardware:
# Memory
shared_buffers = 4GB # 25% of RAM
effective_cache_size = 12GB # 75% of RAM
work_mem = 64MB # Per-sort operation memory
maintenance_work_mem = 1GB # For VACUUM, CREATE INDEX
# Query Planner
random_page_cost = 1.1 # For SSD storage (default 4.0)
effective_io_concurrency = 200 # For SSD storage
# Write-Ahead Log
wal_buffers = 64MB
checkpoint_completion_target = 0.9
max_wal_size = 2GB
# Connection handling
max_connections = 200 # Match Odoo worker count + buffer
Key insight: The single most impactful change is random_page_cost. If your database runs on SSD (and it should), lowering this from the default 4.0 to 1.1 tells the query planner that random disk reads are nearly as fast as sequential reads — which is true for SSDs. This alone can change query plans from sequential scans to index scans across the board.
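Planner settings like this one can be tried per-session before you commit them to postgresql.conf, so you can compare query plans side by side with no reload and no effect on other users. A sketch (the query itself is illustrative):

```sql
-- Session-local experiment: affects only this connection
SET random_page_cost = 1.1;
EXPLAIN SELECT id FROM account_move_line WHERE partner_id = 42;

-- Put the session back on the server-wide value
RESET random_page_cost;
```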
Odoo Server Configuration
Odoo's worker model directly impacts how database connections are used. The key settings in your Odoo configuration file:
# odoo.conf
workers = 6 # ceiling is cores*2+1; kept lower to fit max_connections
max_cron_threads = 2 # Separate cron workers
limit_memory_hard = 2684354560 # 2.5 GB per worker
limit_memory_soft = 2147483648 # 2 GB soft limit
limit_time_cpu = 600 # 10 min CPU time per request
limit_time_real = 1200 # 20 min wall time per request
db_maxconn = 32 # Connections per worker to PostgreSQL
A common mistake is setting workers too high. Each Odoo worker holds a persistent database connection, so workers * db_maxconn must stay below PostgreSQL's max_connections. Six workers with 32 connections each means 192 potential connections — close to our 200 limit. If you need more workers, use PgBouncer as a connection pooler between Odoo and PostgreSQL.
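The arithmetic is worth making explicit. A tiny helper, with a name of our own invention rather than anything from Odoo, that mirrors the article's budget check:

```python
def fits_connection_budget(workers: int, db_maxconn: int,
                           max_connections: int) -> bool:
    """True if worst-case Odoo connections stay under PostgreSQL's limit.

    Each HTTP worker can open up to db_maxconn connections, so the
    worst case is workers * db_maxconn; it must stay below max_connections.
    """
    return workers * db_maxconn < max_connections

# The configuration above: 6 workers * 32 connections = 192 < 200
assert fits_connection_budget(6, 32, 200)
# One more worker blows the budget: 7 * 32 = 224
assert not fits_connection_budget(7, 32, 200)
```

In practice, leave extra headroom for cron workers and maintenance sessions rather than running right up against the limit.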
Maintenance: The Work Nobody Wants to Do
PostgreSQL relies on regular VACUUM and ANALYZE operations to maintain accurate table statistics and reclaim dead row space. Autovacuum handles this automatically, but its default settings are too conservative for Odoo's write-heavy tables.
# More aggressive autovacuum for Odoo
autovacuum_vacuum_scale_factor = 0.05 # Default 0.2
autovacuum_analyze_scale_factor = 0.025 # Default 0.1
autovacuum_vacuum_cost_delay = 2ms # Default 20ms (speed up vacuum)
autovacuum_max_workers = 4 # Default 3
For tables that change heavily — mail_message, bus_bus, ir_attachment — consider table-level autovacuum overrides that run even more aggressively. The bus_bus table in particular accumulates and deletes rows constantly as Odoo's longpolling bus operates, and a backed-up vacuum on this table can cause visible UI lag.
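Table-level overrides are ordinary ALTER TABLE storage parameters. For example, to vacuum mail_message when roughly 1% of its rows are dead instead of waiting for the global 5% threshold (the exact thresholds here are a starting point, not a universal prescription):

```sql
ALTER TABLE mail_message SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_analyze_scale_factor = 0.005
);
```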
Measuring the Impact
After applying these changes, go back to pg_stat_statements and reset the counters:
SELECT pg_stat_statements_reset();
Let the system run under normal load for 24 hours, then compare the new top-20 query list against your baseline. We typically see total query time drop by 50–70% after the first round of index additions and PostgreSQL tuning. ORM-level fixes in custom modules add another 20–30% on top of that for the specific operations they target.
Performance tuning isn't a one-time event. As your data grows and usage patterns shift, new bottlenecks emerge. Schedule a quarterly review of pg_stat_statements output and sequential scan ratios. The hour you spend analyzing query patterns saves hundreds of hours of accumulated user wait time over the following quarter.