Optimizing postgresql.conf for Django Performance

django postgresql devops

Django works great with PostgreSQL, but the default postgresql.conf is tuned for compatibility and broad hardware support, not performance. Here’s how to tune it.

The Basics

PostgreSQL’s default settings are conservative: they are chosen so the server will start on almost any hardware, not so it runs fast on yours.

Your production server has more resources. Let’s use them.

Key Parameters

shared_buffers

The size of PostgreSQL’s buffer cache: how much RAM PostgreSQL itself uses to cache table and index data.

Default: 128MB
Recommended: 25% of RAM

# For 16GB server
shared_buffers = 4GB

effective_cache_size

Tells the query planner how much memory is available for caching (including OS cache).

Default: 4GB
Recommended: 50-75% of RAM

# For 16GB server
effective_cache_size = 12GB

work_mem

Memory available to each sort or hash operation. A single query can run several such operations at once.

Default: 4MB
Recommended: 32-256MB (careful with concurrency)

work_mem = 64MB

⚠️ A single query can allocate work_mem several times, once per sort or hash node in its plan. With 100 connections running complex queries, worst-case usage is 100 × 64MB × nodes-per-query, which can add up to many GB.
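As a back-of-envelope check, here is that worst-case arithmetic as a sketch. All of the inputs are hypothetical; real plans vary widely in how many sort/hash nodes they contain.

```python
# Back-of-envelope worst case for work_mem. Each sort or hash node in
# a query plan may allocate up to work_mem, so one complex query can
# use it several times over. All numbers below are hypothetical.

WORK_MEM_MB = 64        # the work_mem setting
CONNECTIONS = 100       # connections running complex queries at once
NODES_PER_QUERY = 3     # sort/hash nodes per plan (varies widely)

worst_case_gb = WORK_MEM_MB * CONNECTIONS * NODES_PER_QUERY / 1024
print(f"Worst case: {worst_case_gb:.2f} GB")  # Worst case: 18.75 GB
```

Nearly 19GB of potential demand on a 16GB box is exactly why work_mem has to be sized with concurrency in mind.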

maintenance_work_mem

Memory for maintenance operations (VACUUM, CREATE INDEX).

Default: 64MB
Recommended: 256MB-2GB

maintenance_work_mem = 512MB

wal_buffers

Write-ahead log buffer size.

Default: -1 (auto-tuned to 1/32 of shared_buffers, capped at 16MB)
Recommended: 64MB

wal_buffers = 64MB

Connection Settings

max_connections

The maximum number of simultaneous connections.

Default: 100
Recommended: Match your connection pool

max_connections = 200

With Django + pgBouncer, you often need fewer than you think.
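How few? One widely cited heuristic (from the PostgreSQL wiki and the HikariCP pool-sizing write-up) is pool size ≈ cores × 2 + effective spindles, with the spindle term usually taken as 1 on SSDs. A sketch with a hypothetical core count:

```python
# A common pool-sizing heuristic: cores * 2 + effective spindles.
# With SSDs the spindle term is usually taken as 1. The core count
# below is hypothetical -- measure under real load before committing.

def suggested_pool_size(cores: int, effective_spindles: int = 1) -> int:
    return cores * 2 + effective_spindles

print(suggested_pool_size(cores=8))  # 17 -- far below max_connections = 200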

Django Connection Pooling

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'CONN_MAX_AGE': 600,  # 10 minutes
        'CONN_HEALTH_CHECKS': True,  # Django 4.1+
    }
}

Or use pgBouncer:

# pgbouncer.ini
[databases]
mydb = host=localhost dbname=mydb

[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
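One caveat when pointing Django at pgBouncer in transaction pooling mode: server-side cursors break, because the backend connection can change between transactions, so Django needs them disabled. A sketch of the matching settings (host/port values are illustrative; 6432 is pgBouncer’s default listen_port):

```python
# settings.py -- Django behind pgBouncer with pool_mode = transaction.
# Server-side cursors must be disabled under transaction pooling, and
# persistent Django connections are unnecessary since pgBouncer owns
# the pooling. Host/port/name are illustrative.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'HOST': '127.0.0.1',
        'PORT': '6432',                       # pgBouncer, not Postgres
        'CONN_MAX_AGE': 0,                    # let pgBouncer own pooling
        'DISABLE_SERVER_SIDE_CURSORS': True,  # required for transaction pooling
    }
}
```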

Write Performance

synchronous_commit

Should COMMIT wait for the WAL record to be flushed to disk before returning?

Default: on
Recommended: on for critical transactions; off where losing the last few transactions on a crash is acceptable

# Faster, but risk of losing last few transactions on crash
synchronous_commit = off

checkpoint_completion_target

Spread checkpoint writes over time.

Default: 0.5 (0.9 since PostgreSQL 14)
Recommended: 0.9

checkpoint_completion_target = 0.9

wal_level

Level of WAL detail.

Development (no replication): minimal
Production with replication or PITR: replica (the default since PostgreSQL 10)

wal_level = replica

Query Performance

random_page_cost

The planner’s cost estimate for random disk access relative to sequential access. On SSDs, random reads are nearly as cheap as sequential ones.

Default: 4.0 (assumes spinning disks)
Recommended for SSD: 1.1-1.3

random_page_cost = 1.1

default_statistics_target

How detailed statistics are for query planning.

Default: 100
Recommended: 100-200

default_statistics_target = 100

For tables with complex distributions:

ALTER TABLE my_table ALTER COLUMN important_column SET STATISTICS 500;

Sample Configuration

For a 16GB server with SSDs:

# Memory
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 64MB
maintenance_work_mem = 512MB
wal_buffers = 64MB

# Connections
max_connections = 200

# Checkpoints
checkpoint_completion_target = 0.9
min_wal_size = 1GB
max_wal_size = 4GB

# Query Planner
random_page_cost = 1.1
effective_io_concurrency = 200

# Logging
log_min_duration_statement = 1000  # Log queries > 1s
log_checkpoints = on
log_lock_waits = on
log_temp_files = 0

Monitoring Performance

Slow Query Log

log_min_duration_statement = 1000  # ms

Check Cache Hit Ratio

SELECT 
    sum(heap_blks_read) as heap_read,
    sum(heap_blks_hit)  as heap_hit,
    sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM pg_statio_user_tables;

Target: > 99%
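The same ratio is easy to compute outside SQL if you are pulling those counters into a monitoring script. A small helper, with made-up sample counters:

```python
# Turn the heap_blks_read / heap_blks_hit counters from
# pg_statio_user_tables into a hit percentage. Sample counters
# below are made up.

def cache_hit_ratio(heap_read: int, heap_hit: int) -> float:
    total = heap_read + heap_hit
    return 100.0 * heap_hit / total if total else 0.0

print(f"{cache_hit_ratio(heap_read=12_000, heap_hit=4_988_000):.2f}%")  # 99.76%
```

A ratio persistently below 99% usually means the working set no longer fits in shared_buffers plus OS cache.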

Check Index Usage

SELECT 
    relname,
    round(100.0 * idx_scan / (seq_scan + idx_scan), 1) as idx_ratio,
    n_live_tup as rows
FROM pg_stat_user_tables 
WHERE (seq_scan + idx_scan) > 0
ORDER BY n_live_tup DESC;

Connection Stats

SELECT count(*), state FROM pg_stat_activity GROUP BY state;

Django-Specific Tuning

# Bad: N+1 queries
for article in Article.objects.all():
    print(article.author.name)

# Good: 1 query
for article in Article.objects.select_related('author'):
    print(article.author.name)

Add Database Indexes

class Article(models.Model):
    title = models.CharField(max_length=200, db_index=True)
    created_at = models.DateTimeField(db_index=True)
    
    class Meta:
        indexes = [
            models.Index(fields=['status', 'created_at']),
        ]

Use explain()

from django.db import connection

qs = Article.objects.filter(status='published')
print(qs.explain(analyze=True))

Tools

PGTune

Web tool that generates config based on your hardware: https://pgtune.leopard.in.ua/

check_postgres

Nagios plugin for monitoring:

check_postgres --action=bloat --warning=20%

pg_stat_statements

Track query statistics:

CREATE EXTENSION pg_stat_statements;

SELECT query, calls, mean_exec_time, total_exec_time  -- mean_time/total_time before PostgreSQL 13
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

Restart vs Reload

Some changes require restart, others just reload:

Parameter               Restart Required?
---------------------   -----------------
shared_buffers          Yes
work_mem                No
effective_cache_size    No
max_connections         Yes
wal_buffers             Yes

# Reload (no downtime)
sudo systemctl reload postgresql

# Restart (brief downtime)
sudo systemctl restart postgresql

Final Thoughts

Default PostgreSQL settings leave performance on the table. A few config changes can make significant differences.

Start with PGTune, benchmark, and adjust. Monitor your cache hit ratio and slow queries. Most Django apps benefit immediately from proper tuning.


Database tuning: the highest-ROI 30 minutes you’ll spend.
