Improve performance of first query

If the following database query (PostgreSQL) is executed twice, the second call is much faster.

I guess the first call is slow because the operating system (Linux) needs to read the data from disk. The second call benefits from caching at the filesystem level and inside Postgres.
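One way to verify this (a sketch; EXPLAIN (ANALYZE, BUFFERS) is available since 9.0 and shows how many blocks were served from the Postgres buffer cache versus read from outside it):

  -- "shared hit" = block came from shared_buffers,
  -- "shared read" = block had to be fetched from outside shared_buffers (OS cache or disk)
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT "foo3_beleg"."id"
  FROM "foo3_beleg"
  WHERE "foo3_beleg"."id" IN (SELECT beleg_id FROM foo3_text
                              WHERE content @@ 'footown'::tsquery);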

Is there a way to optimize the database to get the results fast on the first call?

First call (slow)

foo3_bar_p@BAR-FOO3-Test:~$ psql

foo3_bar_p=# explain analyze SELECT "foo3_beleg"."id", ... FROM "foo3_beleg" WHERE 
foo3_bar_p-# (("foo3_beleg"."id" IN (SELECT beleg_id FROM foo3_text where 
foo3_bar_p(# content @@ 'footown'::tsquery)) AND "foo3_beleg"."belegart_id" IN 
foo3_bar_p(# ('...', ...));
                                                                                             QUERY PLAN                                                                                 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=75314.58..121963.20 rows=152 width=135) (actual time=27253.451..88462.165 rows=11 loops=1)
   ->  HashAggregate  (cost=75314.58..75366.87 rows=5229 width=4) (actual time=16087.345..16113.988 rows=17671 loops=1)
         ->  Bitmap Heap Scan on foo3_text  (cost=273.72..75254.67 rows=23964 width=4) (actual time=327.653..16026.787 rows=27405 loops=1)
               Recheck Cond: (content @@ '''footown'''::tsquery)
               ->  Bitmap Index Scan on foo3_text_content_idx  (cost=0.00..267.73 rows=23964 width=0) (actual time=281.909..281.909 rows=27405 loops=1)
                     Index Cond: (content @@ '''footown'''::tsquery)
   ->  Index Scan using foo3_beleg_pkey on foo3_beleg  (cost=0.00..8.90 rows=1 width=135) (actual time=4.092..4.092 rows=0 loops=17671)
         Index Cond: (id = foo3_text.beleg_id)
         Filter: ((belegart_id)::text = ANY ('{...
         Rows Removed by Filter: 1
 Total runtime: 88462.809 ms
(11 rows)

Second call (fast)

  Nested Loop  (cost=75314.58..121963.20 rows=152 width=135) (actual time=127.569..348.705 rows=11 loops=1)
   ->  HashAggregate  (cost=75314.58..75366.87 rows=5229 width=4) (actual time=114.390..133.131 rows=17671 loops=1)
         ->  Bitmap Heap Scan on foo3_text  (cost=273.72..75254.67 rows=23964 width=4) (actual time=11.961..97.943 rows=27405 loops=1)
               Recheck Cond: (content @@ '''footown'''::tsquery)
               ->  Bitmap Index Scan on foo3_text_content_idx  (cost=0.00..267.73 rows=23964 width=0) (actual time=9.226..9.226 rows=27405 loops=1)
                     Index Cond: (content @@ '''footown'''::tsquery)
   ->  Index Scan using foo3_beleg_pkey on foo3_beleg  (cost=0.00..8.90 rows=1 width=135) (actual time=0.012..0.012 rows=0 loops=17671)
         Index Cond: (id = foo3_text.beleg_id)
         Filter: ((belegart_id)::text = ANY ('...
         Rows Removed by Filter: 1
 Total runtime: 348.833 ms
(11 rows)

Table layout of the foo3_text table (28M rows)

foo3_egs_p=# \d foo3_text
                                 Table "public.foo3_text"
  Column  |         Type          |                         Modifiers                          
----------+-----------------------+------------------------------------------------------------
 id       | integer               | not null default nextval('foo3_text_id_seq'::regclass)
 beleg_id | integer               | not null
 index_id | character varying(32) | not null
 value    | text                  | not null
 content  | tsvector              | 
Indexes:
    "foo3_text_pkey" PRIMARY KEY, btree (id)
    "foo3_text_index_id_2685e3637668d5e7_uniq" UNIQUE CONSTRAINT, btree (index_id, beleg_id)
    "foo3_text_beleg_id" btree (beleg_id)
    "foo3_text_content_idx" gin (content)
    "foo3_text_index_id" btree (index_id)
    "foo3_text_index_id_like" btree (index_id varchar_pattern_ops)
Foreign-key constraints:
    "beleg_id_refs_id_6e6d40770e71292" FOREIGN KEY (beleg_id) REFERENCES foo3_beleg(id) DEFERRABLE INITIALLY DEFERRED
    "index_id_refs_name_341600137465c2f9" FOREIGN KEY (index_id) REFERENCES foo3_index(name) DEFERRABLE INITIALLY DEFERRED

Hardware changes (SSD instead of traditional disks) or a RAM disk are possible. But maybe the current hardware can deliver faster results, too.

Version: PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu

Please leave a comment if you need more details.

Myelencephalon answered 25/11, 2014 at 14:22 Comment(10)
If the WHERE clause is always the same, what about issuing the request periodically? Keeping the data hot is the only way I know to avoid the penalty of the first attempt. As for optimizing, did you check the selectivity of AND "foo3_beleg"."belegart_id"? Would it make sense to move it into the first SELECT?Gladdy
@Gladdy no, the WHERE clause is different. The search term (here footown) differs.Myelencephalon
Postgres 9.4 will have the pg_prewarm extension which can fill the buffer cache on demand: postgresql.org/docs/9.4/static/pgprewarm.htmlSelfdetermination
@a_horse_with_no_name: besides suggesting to use something equivalent at the app level, I can't think of anything else. Methinks you should post that as an -- excellent -- answer.Milo
@a_horse_with_no_name the DB size is bigger than the available memory. This means we get slow queries not only after a server start; it also happens if the relation has not been used for several minutes. Does prewarm help in this context? I was hoping for a less I/O-intensive solution (maybe by using a different way of indexing).Myelencephalon
Since you are open to hardware changes, simply adding more memory would be an excellent option. Since the slow step is getting the rows from foo3_beleg once those rows have already been identified, a different indexing method is unlikely to help. It could help to cluster foo3_beleg thematically, so that rows that are likely to be retrieved together are stored nearby.Chace
@guetti - it shouldn't matter if the DB is big. The explain plans indicate that what is expensive to load into memory is the foo3_beleg_pkey index on foo3_beleg. If you periodically issue a query that requires that index (say, once per second), you may be able to keep it in RAM without doing any expensive operations. You can find out how much space an index takes up[1] and decide if it's worth it to get more ram. (1 page is 8KB IIRC).Motherhood
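A quick way to check those index sizes (a sketch using the standard pg_relation_size / pg_size_pretty functions):

  -- approximate on-disk size of the indexes the plan touches
  SELECT pg_size_pretty(pg_relation_size('foo3_beleg_pkey'))       AS beleg_pkey,
         pg_size_pretty(pg_relation_size('foo3_text_content_idx')) AS content_gin;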
To all kind people who tried to help: A team mate tried different solutions. We could get better performance by splitting the tsearch column vector (which gets created by our code, not inside Postgres) into N rows. But it is still not perfect if the query hits a cold DB. At first I was optimistic, but now I guess more RAM and an SSD will be our solution. I don't know which of you should get the bounty; I guess Stack Overflow will choose one in the next few minutes.Myelencephalon
@Myelencephalon It's good news that you finally found a solution, even if it's not the best one. I don't think RAM will help for the first query, but the SSD should.Adhesion
@Ludovic: Yes you are right. RAM won't help on the first query.Myelencephalon

Postgres provides a way to configure, at runtime, how the planner weighs the cost of I/O operations against CPU work when planning your query.

random_page_cost (floating point) (reference) is the setting that may help you. It basically sets the ratio between the assumed cost of I/O and CPU operations.

A higher value tells the planner that random page reads are expensive (typical for spinning disks, where sequential access is much faster); a lower value tells it that random reads are cheap (typical for random-access storage such as an SSD).

The default value is 4.0; you may want to change it and test whether your query takes less time.
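A minimal way to experiment with this (a sketch; the setting can be changed per session with SET, while a permanent change belongs in postgresql.conf):

  -- show the current value (4.0 by default)
  SHOW random_page_cost;

  -- change it for this session only and re-run the query with timing;
  -- 2.0 is just an example value, test both higher and lower settings
  SET random_page_cost = 2.0;
  EXPLAIN ANALYZE SELECT ... ;   -- the query from the question

  -- go back to the value from postgresql.conf
  RESET random_page_cost;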

Do not forget that the appropriate value also depends on your column count and row count.

One big BUT: since your indexes are B-trees, the relative CPU cost shrinks much faster than the I/O cost grows. You can basically map the complexities to these priorities:

CPU Priority = O(log(x))
I/O Priority = O(x)

All in all this means: if Postgres's default of 4.0 were calibrated for 100k entries, you should set it to approximately (4.0 * log(100k) * 10M) / (log(10M) * 100k) for 10M entries.

Mosaic answered 4/12, 2014 at 6:52 Comment(0)

Agreed with Julius, but if you only need columns from foo3_beleg, try EXISTS instead (and it would help if you had pasted your SQL too, not just your explain plan).

select ...
from foo3_beleg b
where exists
  (select 1 from foo3_text t
   where t.beleg_id = b.id and t.content @@ 'footown'::tsquery)
....

However, I suspect your "wake up" on the first pass is just your DB loading the IN subquery rows into memory. That will likely happen regardless, though an EXISTS is generally much faster than an IN (INs are rarely needed unless they contain hardcoded lists, and they are a yellow flag when I review SQL).

Rafter answered 3/12, 2014 at 6:19 Comment(2)
Using exists won't help here. OP's question is clearly related to an "index/data is not in the cache yet" problem. (And the correct answer is, I'd gather, already in the comments.)Milo
Well, if you noticed, I pretty much stated just that ("...likely happen regardless..."). Also, depending on how the actual data looks and how pg goes about loading the cache and checking matches, keep in mind an EXISTS is fast because it is free to return on the first match. That may be the case with an IN as well, but that's an optimizer aspect rather than a feature of the SQL command. Bottom line: I've yet to come across a valid use case of IN <subquery> that cannot be rewritten as EXISTS.Rafter

The first time you execute the query, Postgres loads the data from disk, which is slow even with a good hard drive. The second time you run the query, it reads the previously loaded data from RAM, which is obviously faster.

The solution to this problem would be to load the relation data into either the operating system buffer cache or the PostgreSQL buffer cache with pg_prewarm:

int8 pg_prewarm(regclass, mode text default 'buffer', fork text default 'main', first_block int8 default null, last_block int8 default null)

The first argument is the relation to be prewarmed. The second argument is the prewarming method to be used, as further discussed below; the third is the relation fork to be prewarmed, usually main. The fourth argument is the first block number to prewarm (NULL is accepted as a synonym for zero). The fifth argument is the last block number to prewarm (NULL means prewarm through the last block in the relation). The return value is the number of blocks prewarmed.

There are three available prewarming methods. prefetch issues asynchronous prefetch requests to the operating system, if this is supported, or throws an error otherwise. read reads the requested range of blocks; unlike prefetch, this is synchronous and supported on all platforms and builds, but may be slower. buffer reads the requested range of blocks into the database buffer cache.

Note that with any of these methods, attempting to prewarm more blocks than can be cached — by the OS when using prefetch or read, or by PostgreSQL when using buffer — will likely result in lower-numbered blocks being evicted as higher numbered blocks are read in. Prewarmed data also enjoys no special protection from cache evictions, so it is possible that other system activity may evict the newly prewarmed blocks shortly after they are read; conversely, prewarming may also evict other data from cache. For these reasons, prewarming is typically most useful at startup, when caches are largely empty.

Source
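A minimal usage sketch, assuming PostgreSQL 9.4 or later (the 9.1.2 from the question does not ship pg_prewarm), warming the two indexes that appear in the plan:

  -- install the extension once per database
  CREATE EXTENSION pg_prewarm;

  -- read the indexes into the PostgreSQL buffer cache
  -- (defaults: mode 'buffer', fork 'main', whole relation)
  SELECT pg_prewarm('foo3_beleg_pkey');
  SELECT pg_prewarm('foo3_text_content_idx');

  -- or only ask the OS to prefetch a relation into its page cache
  SELECT pg_prewarm('foo3_beleg', 'prefetch');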

Hope this helped!

Adhesion answered 4/12, 2014 at 14:4 Comment(0)

Sometimes moving a "WHERE x IN" into a JOIN can improve performance significantly. Try this:

SELECT
  b.id, ...
FROM
  foo3_beleg b INNER JOIN
  foo3_text  t ON (t.beleg_id = b.id AND t.content @@ 'footown'::tsquery)
WHERE
  b.belegart_id IN ('...', ...);

Here's a repeatable experiment to support my claim.

I happen to have a big Postgres database handy (30 million rows) (http://juliusdavies.ca/2013/j.emse/bertillonage/), so I loaded that into postgres 9.4beta3.

The results are impressive. The sub-select approach is approximately 20 times slower:

time  psql myDb < using-in.sql
real    0m17.212s

time  psql myDb < using-join.sql
real    0m0.807s

For those interested in replicating, here are the raw SQL queries I used to test my theory.

This query uses a "SELECT IN" subquery, and it's 20 times slower (17 seconds on my laptop on the first execution):

  -- using-in.sql
  SELECT
    COUNT(DISTINCT sigsha1re) AS a_intersect_b, infilesha1
  FROM
    files INNER JOIN sigs  ON (files.filesha1 = sigs.filesha1)
  WHERE
    sigs.sigsha1re IN (
      SELECT sigsha1re FROM sigs WHERE sigs.sigsha1re like '0347%'
    )  
  GROUP BY
    infilesha1

This query moves the condition out of the subquery and into the joining criteria, and it's 20 times faster (0.8 seconds on my laptop on the first execution).

  -- using-join.sql
  SELECT
    COUNT(DISTINCT sigsha1re) AS a_intersect_b, infilesha1
  FROM
    files INNER JOIN sigs  ON (
      files.filesha1 = sigs.filesha1 AND sigs.sigsha1re like '0347%'
    )
  GROUP BY
    infilesha1

p.s. if you're curious what that database is for, you can use it to calculate how similar an arbitrary jar file is to all of the jar files in the maven repository circa 2011.

./query.sh lib/commons-codec-1.5.jar | psql myDb

 similarity |                      a = 39 = commons-codec-1.5.jar  (bin2bin)                       
------------+--------------------------------------------------------------------------------------
  1.000     | commons-codec-1.5.jar
  0.447     | commons-codec-1.4.jar
  0.174     | org.apache.sling.auth.form-1.0.2.jar
  0.170     | org.apache.sling.auth.form-1.0.0.jar
  0.142     | jbehave-core-3.0-beta-3.jar
  0.142     | jbehave-core-3.0-beta-4.jar
  0.141     | jbehave-core-3.0-beta-5.jar
  0.141     | jbehave-core-3.0-beta-6.jar
  0.140     | commons-codec-1.2.jar
Send answered 3/12, 2014 at 0:4 Comment(5)
"Sometimes moving an "WHERE x IN" into a JOIN can improve performance significantly." -- You must be talking about MySQL. Postgres is smart enough to rewrite the query tree when this is possible. OP's question is clearly related to an "index/data is not in the cache yet" problem. (And the correct answer is, I'd gather, already in the comments.)Milo
When did Postgres get better at this? I ran into this problem with 8.4 and 9.0. Never tried on other versions. (And I had similar symptoms: slow query first time, much faster the 2nd time.)Send
It has been doing so for as long as I've been using Postgres (8.0); where things have improved over the years has been which additional classes of subqueries get collapsed.Milo
I just tried with 9.4beta3, and it's still exhibiting this problem (see my edits to my answer for details).Send
@denis , curious for your thoughts. Am I doing something wrong?Send
