I'd like to do something similar to what's described in http://www.bestbrains.dk/Blog/2010/03/25/HowToAssertThatYourSQLDoesNotDoFullTableScans.aspx but for that I'd need postgres to keep track of any full table scans it does. Is there such a thing for postgres?
Does postgresql keep track of full table scans it makes?
Some thoughts: if a table is under a certain size, the index will NOT be used, as it is cheaper to just fetch all the pages from disk than it is to hop around an index that may point to all the pages in the table anyway. Also, say you have an index on a large table that works well for 99% of your cases, but you have one outlier that takes up a large portion of the index space. That case may always force a full scan for the 1% of the time it's called, and there's not much you can do about it. Because of those two cases, asserting full scans == 0 is going to hurt more than it will help. – Pandit
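The first point is easy to see for yourself. Here is a small sketch (table and data are made up for illustration) showing that the planner can prefer a sequential scan on a tiny table even when an index exists:

```sql
-- Hypothetical demo table: 50 rows, with a primary-key index on id.
CREATE TABLE tiny (id int PRIMARY KEY, val text);
INSERT INTO tiny SELECT g, 'x' FROM generate_series(1, 50) g;
ANALYZE tiny;

-- Even though id is indexed, the planner will often pick a sequential
-- scan here: reading a page or two directly is cheaper than an index lookup.
EXPLAIN SELECT * FROM tiny WHERE id = 42;
```

The exact plan depends on your version and settings, but on a table this small you will usually see a "Seq Scan" node rather than an "Index Scan".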
Have a look at pg_stat_get_numscans(oid) (the number of sequential scans on a table) and the statistics collector in general.
I suppose you mean pg_stat_get_numscans(oid). SELECT relname, seq_scan FROM pg_stat_user_tables is a bit easier to use. – Kippy
That's what I meant, yes (refined my post a bit). – Tagmeme
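Putting that together with the idea from the linked post, a rough sketch of "assert no full table scans" could look like this (the workload and privileges are assumptions; pg_stat_reset() requires appropriate permissions, and the collector updates counters with a small delay, so results may lag slightly):

```sql
-- Zero the statistics counters for the current database.
SELECT pg_stat_reset();

-- ... run the workload under test here ...

-- Any table whose seq_scan counter moved saw at least one sequential scan.
SELECT relname, seq_scan
FROM pg_stat_user_tables
WHERE seq_scan > 0;
```

If the final query returns no rows, the workload did not trigger any full table scans on user tables (modulo the collector's reporting delay).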
I use the query below. If index usage is above 99% I am fine, so I worry about any table that has more than 70,000 rows and index usage below 99%.
SELECT
    relname AS table_name,
    idx_scan AS index_scan,
    seq_scan AS table_scan,
    100 * idx_scan / (seq_scan + idx_scan) AS index_usage,
    n_live_tup AS rows_in_table
FROM
    pg_stat_user_tables
WHERE
    seq_scan + idx_scan > 0
    AND 100 * idx_scan / (seq_scan + idx_scan) < 99
    AND n_live_tup > 70000
ORDER BY
    4 DESC;
Of course, you need to understand each flagged query and its plan: sometimes a sequential scan is actually the better choice, and sometimes the query can be rewritten or indexed to avoid it.
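For inspecting an individual flagged query, EXPLAIN shows why the planner chose a sequential scan. A sketch (the table and column names here are hypothetical):

```sql
-- ANALYZE runs the query and reports actual timings and row counts;
-- BUFFERS adds page-access statistics.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 123;
```

Look for "Seq Scan" versus "Index Scan" nodes in the output, and compare estimated against actual row counts; a large mismatch often points to stale statistics (fixable with ANALYZE) rather than a missing index.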