I have a table named events in my PostgreSQL 9.5 database, and it has about 6 million records. I am running a select count(event_id) from events query, but it takes 40 seconds, which is a very long time for a database. The event_id column is the table's primary key and is indexed. Why does this query take so long? (The server is an Ubuntu VM on VMware with 4 CPUs.)
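The plan below shows actual timings and buffer counts, so it was presumably captured with EXPLAIN's ANALYZE and BUFFERS options (an assumption; the question does not say). Note that the question names the table events while the plan shows event_source, presumably the same table under its real name. A minimal sketch of the commands involved:

    -- The slow count from the question (~40 s on ~6.3 M rows):
    SELECT count(event_id) FROM events;

    -- Capturing the plan with timings and buffer usage (the ANALYZE/BUFFERS
    -- options are assumed; they match the detail shown in the output below):
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(event_id) FROM events;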
Explain:

    Aggregate  (cost=826305.19..826305.20 rows=1 width=0) (actual time=24739.306..24739.306 rows=1 loops=1)
      Buffers: shared hit=13 read=757739 dirtied=53 written=48
      ->  Seq Scan on event_source  (cost=0.00..812594.55 rows=5484255 width=0) (actual time=0.014..24087.050 rows=6320689 loops=1)
            Buffers: shared hit=13 read=757739 dirtied=53 written=48
    Planning time: 0.369 ms
    Execution time: 24739.364 ms
Comments:

vacuum full events;? – Dillertext
The table has columns of text type and very long JSON data. – Obara
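Dillertext's comment suggests VACUUM FULL. For illustration only (neither command is from the question itself), a sketch of that suggestion and a lighter alternative:

    -- Rewrites the table and its indexes into minimal space; takes an
    -- ACCESS EXCLUSIVE lock, so the table is unavailable while it runs:
    VACUUM FULL events;

    -- Lighter option: marks dead row versions reusable and refreshes the
    -- statistics the planner uses, without the exclusive lock:
    VACUUM ANALYZE events;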