There are a few discussions here and there (including an official post on the PostgreSQL website) about the slow count(*) prior to version 9.2; somehow I have not found a satisfying answer.
Basically I had PostgreSQL 9.1 installed, and I observed a slow count(*) as simple as
select count(*) from restaurants;
on tables with 100k+ records. The average request takes around 850 ms. I assumed this was the symptom people have been describing for slow counts on PostgreSQL 9.1 and below, since 9.2 introduces new features such as index-only scans. To test this, I took the same dataset from 9.1 and loaded it into 9.2. Running the count statement there still gives a result as bad as on 9.1:
explain analyze select count(*) from restaurants;
------------------------------------------------------------------
Aggregate (cost=23510.35..23510.36 rows=1 width=0) (actual time=979.960..979.961 rows=1 loops=1)
-> Seq Scan on restaurants (cost=0.00..23214.88 rows=118188 width=0) (actual time=0.050..845.097 rows=118188 loops=1)
Total runtime: 980.037 ms
Can anyone suggest a feasible solution to this problem? Do I need to configure anything in Postgres to enable the feature?
P.S. Adding a where clause doesn't help in my case either.
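If I understand the 9.2 release notes correctly, an index-only scan is only chosen once VACUUM has populated the table's visibility map, which a fresh data load does not do. A minimal sketch of a test for that, assuming the table has a primary key whose index I'll call restaurants_pkey here:

-- vacuum so the visibility map marks pages as all-visible
VACUUM ANALYZE restaurants;

-- if the feature can be used, the plan should now show
-- "Index Only Scan using restaurants_pkey on restaurants"
-- instead of the Seq Scan above
EXPLAIN ANALYZE SELECT count(*) FROM restaurants;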
…count and limitations around it there. Is there a primary key on that table? Did you VACUUM and ANALYZE the table after loading the data? – Lattonia

What do you have random_page_cost and seq_page_cost set to? How about effective_cache_size? – Lattonia

Can you post EXPLAIN (BUFFERS, ANALYZE) output for the query after the VACUUM ANALYZE? And what's the output of SELECT avg(pg_column_size(restaurants)) FROM restaurants? – Lattonia
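For reference, the diagnostics the comments ask for can be gathered with a session along these lines (a sketch; all of these are standard PostgreSQL commands, run in psql against the same database):

-- refresh statistics and the visibility map first
VACUUM ANALYZE restaurants;

-- current planner cost settings
SHOW random_page_cost;
SHOW seq_page_cost;
SHOW effective_cache_size;

-- re-run the count with buffer statistics included in the plan
EXPLAIN (BUFFERS, ANALYZE) SELECT count(*) FROM restaurants;

-- average on-disk row size, to gauge how much data the seq scan reads
SELECT avg(pg_column_size(restaurants)) FROM restaurants;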