Simple steps to improve performance in a Postgres database.
Identify your problem queries
SELECT query,
       round(total_time::numeric, 2) AS total_time,
       calls,
       round(mean_time::numeric, 2) AS mean,
       round((100 * total_time / sum(total_time::numeric) OVER ())::numeric, 2) AS percentage_cpu
FROM pg_stat_statements
WHERE query != ''
ORDER BY total_time DESC
LIMIT 20;
Enable the pg_stat_statements module if it isn’t already.
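If it isn't enabled yet, a minimal sketch of the setup looks like this (preloading the library requires a server restart). Note that on PostgreSQL 13 and later the timing columns are named total_exec_time and mean_exec_time rather than total_time and mean_time.
-- In postgresql.conf (requires a restart):
--   shared_preload_libraries = 'pg_stat_statements'
-- Then, in the database you want to inspect:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;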
Try ordering by different columns: total_time, percentage_cpu, calls. See if anything consistently comes to the top or stands out as an obvious problem.
Get a sense of index usage
Pretty sure I found this query on okigiveup.net
Index scan ratio per table. A low (or zero) scan ratio may not be bad if the number of rows in the table is small.
SELECT relname, idx_scan::float / (idx_scan + seq_scan + 1) AS idx_scan_ratio
FROM pg_stat_all_tables
WHERE schemaname = 'public'
ORDER BY idx_scan_ratio ASC;
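To dig one level deeper, a similar query against pg_stat_user_indexes (a sketch, not from the original source) shows how often each individual index is used; indexes whose idx_scan stays at zero are candidates for review.
SELECT relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE schemaname = 'public'
ORDER BY idx_scan ASC;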
Look at the query plans
Cost != time. It's a number representing estimated time and resource usage.
Width == approximate bytes in a single row returned by the given operation.
Eliminate sequential scans (Seq Scan) by adding indexes, unless the table is (very) small.
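As a sketch (the table and column names here are hypothetical), EXPLAIN ANALYZE prints the plan together with the measured run time, so you can see the cost, width, and any Seq Scan nodes directly:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;  -- "orders" and "customer_id" are made up
-- Illustrative output: the Seq Scan node is the hint that an index on customer_id
-- might help; "cost" is the planner's estimate, "width" the average row size in
-- bytes, and "actual time" the measured milliseconds.
--   Seq Scan on orders  (cost=0.00..1693.00 rows=9 width=57) (actual time=0.019..7.570 rows=9 loops=1)
--     Filter: (customer_id = 42)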
Review Your Indexes
A multi-column index on columns a, b, and c can be used as an index on:
– column a
– column a, column b
– column a, column b, column c
CREATE INDEX CONCURRENTLY index_name ON table_name USING btree (column_a, column_b);
CONCURRENTLY prevents the index creation from taking a lock that blocks writes to the table.
The error "CREATE INDEX CONCURRENTLY cannot run inside a transaction block" from knex (or other migration tools) can be resolved either by not creating the index concurrently or by creating the index outside of a transaction. Pick one.
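The same behaviour is easy to reproduce in plain SQL (the index, table, and column names here are hypothetical):
-- Run on its own, outside any explicit transaction: works.
CREATE INDEX CONCURRENTLY index_name ON table_name USING btree (column_a);
-- Wrapped in an explicit transaction: fails.
BEGIN;
CREATE INDEX CONCURRENTLY index_name ON table_name USING btree (column_a);
-- ERROR:  CREATE INDEX CONCURRENTLY cannot run inside a transaction block
ROLLBACK;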
Partial indexes are smaller on disk, faster to search, and cheaper to keep up to date. Are there any WHERE conditions that are always present and always have the same value?
CREATE INDEX index_name ON table_name USING btree (column_a, column_b) WHERE color = 'green';
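The planner will only consider a partial index when the query's WHERE clause implies the index predicate; a hypothetical example using the names above:
-- Can use the partial index (color = 'green' is part of the query):
SELECT * FROM table_name WHERE column_a = 1 AND color = 'green';
-- Cannot use it (nothing guarantees color = 'green'):
SELECT * FROM table_name WHERE column_a = 1;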
Allay your fears about the size of the indexes on disk
Query from postgresql.org
SELECT *, pg_size_pretty(total_bytes) AS total
        , pg_size_pretty(index_bytes) AS INDEX
        , pg_size_pretty(toast_bytes) AS toast
        , pg_size_pretty(table_bytes) AS TABLE
FROM (
    SELECT *, total_bytes - index_bytes - COALESCE(toast_bytes, 0) AS table_bytes FROM (
        SELECT c.oid, nspname AS table_schema, relname AS TABLE_NAME
             , c.reltuples AS row_estimate
             , pg_total_relation_size(c.oid) AS total_bytes
             , pg_indexes_size(c.oid) AS index_bytes
             , pg_total_relation_size(reltoastrelid) AS toast_bytes
        FROM pg_class c
        LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE relkind = 'r'
    ) a
) a
ORDER BY total_bytes DESC;
See the size of an individual index.
SELECT pg_size_pretty(pg_relation_size('index_name'));
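To list every index with its size in one pass, a sketch using pg_stat_user_indexes:
SELECT indexrelname, pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY pg_relation_size(indexrelid) DESC;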
Reset pg_stat_statements to get fresh data
After making index changes, blow away the query stats to get some new numbers.
SELECT pg_stat_statements_reset();
This does not affect the planner statistics gathered by ANALYZE.
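If you also want fresh planner statistics after bulk data or index changes, run ANALYZE separately (the table name below is hypothetical):
ANALYZE;            -- refresh planner statistics for every table in the database
ANALYZE table_name; -- or target a single table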