Slow nested loop left join with index scan 130k times in loop
I am really struggling to optimize this query:

SELECT wins / (wins + COUNT(loosers.match_id) + 0.) winrate, wins + COUNT(loosers.match_id) matches, winners.winning_champion_one_id, winners.winning_champion_two_id, winners.winning_champion_three_id, winners.winning_champion_four_id, winners.winning_champion_five_id
FROM
(
   SELECT COUNT(match_id) wins, winning_champion_one_id, winning_champion_two_id, winning_champion_three_id, winning_champion_four_id, winning_champion_five_id FROM matches
   WHERE
      157 IN (winning_champion_one_id, winning_champion_two_id, winning_champion_three_id, winning_champion_four_id, winning_champion_five_id)
   GROUP BY winning_champion_one_id, winning_champion_two_id, winning_champion_three_id, winning_champion_four_id, winning_champion_five_id
) winners
LEFT OUTER JOIN matches loosers ON
  winners.winning_champion_one_id = loosers.loosing_champion_one_id AND
  winners.winning_champion_two_id = loosers.loosing_champion_two_id AND
  winners.winning_champion_three_id = loosers.loosing_champion_three_id AND
  winners.winning_champion_four_id = loosers.loosing_champion_four_id AND
  winners.winning_champion_five_id = loosers.loosing_champion_five_id
GROUP BY winners.wins, winners.winning_champion_one_id, winners.winning_champion_two_id, winners.winning_champion_three_id, winners.winning_champion_four_id, winners.winning_champion_five_id
HAVING wins + COUNT(loosers.match_id) >= 20
ORDER BY winrate DESC, matches DESC
LIMIT 1;

And this is the output of EXPLAIN (BUFFERS, ANALYZE):

Limit  (cost=72808.80..72808.80 rows=1 width=58) (actual time=1478.749..1478.749 rows=1 loops=1)
  Buffers: shared hit=457002
  ->  Sort  (cost=72808.80..72837.64 rows=11535 width=58) (actual time=1478.747..1478.747 rows=1 loops=1)
"        Sort Key: ((((count(matches.match_id)))::numeric / ((((count(matches.match_id)) + count(loosers.match_id)))::numeric + '0'::numeric))) DESC, (((count(matches.match_id)) + count(loosers.match_id))) DESC"
        Sort Method: top-N heapsort  Memory: 25kB
        Buffers: shared hit=457002
        ->  HashAggregate  (cost=72462.75..72751.12 rows=11535 width=58) (actual time=1448.941..1478.643 rows=83 loops=1)
"              Group Key: (count(matches.match_id)), matches.winning_champion_one_id, matches.winning_champion_two_id, matches.winning_champion_three_id, matches.winning_champion_four_id, matches.winning_champion_five_id"
              Filter: (((count(matches.match_id)) + count(loosers.match_id)) >= 20)
              Rows Removed by Filter: 129131
              Buffers: shared hit=457002
              ->  Nested Loop Left Join  (cost=9857.76..69867.33 rows=115352 width=26) (actual time=288.086..1309.687 rows=146610 loops=1)
                    Buffers: shared hit=457002
                    ->  HashAggregate  (cost=9857.33..11010.85 rows=115352 width=18) (actual time=288.056..408.317 rows=129214 loops=1)
"                          Group Key: matches.winning_champion_one_id, matches.winning_champion_two_id, matches.winning_champion_three_id, matches.winning_champion_four_id, matches.winning_champion_five_id"
                          Buffers: shared hit=22174
                          ->  Bitmap Heap Scan on matches  (cost=1533.34..7455.69 rows=160109 width=18) (actual time=26.618..132.844 rows=161094 loops=1)
                                Recheck Cond: ((157 = winning_champion_one_id) OR (157 = winning_champion_two_id) OR (157 = winning_champion_three_id) OR (157 = winning_champion_four_id) OR (157 = winning_champion_five_id))
                                Heap Blocks: exact=21594
                                Buffers: shared hit=22174
                                ->  BitmapOr  (cost=1533.34..1533.34 rows=164260 width=0) (actual time=22.190..22.190 rows=0 loops=1)
                                      Buffers: shared hit=580
                                      ->  Bitmap Index Scan on matches_winning_champion_one_id_index  (cost=0.00..35.03 rows=4267 width=0) (actual time=0.045..0.045 rows=117 loops=1)
                                            Index Cond: (157 = winning_champion_one_id)
                                            Buffers: shared hit=3
                                      ->  Bitmap Index Scan on matches_winning_champion_two_id_index  (cost=0.00..47.22 rows=5772 width=0) (actual time=0.665..0.665 rows=3010 loops=1)
                                            Index Cond: (157 = winning_champion_two_id)
                                            Buffers: shared hit=13
                                      ->  Bitmap Index Scan on matches_winning_champion_three_id_index  (cost=0.00..185.53 rows=22840 width=0) (actual time=3.824..3.824 rows=23893 loops=1)
                                            Index Cond: (157 = winning_champion_three_id)
                                            Buffers: shared hit=89
                                      ->  Bitmap Index Scan on matches_winning_champion_four_id_index  (cost=0.00..537.26 rows=66257 width=0) (actual time=8.069..8.069 rows=67255 loops=1)
                                            Index Cond: (157 = winning_champion_four_id)
                                            Buffers: shared hit=244
                                      ->  Bitmap Index Scan on matches_winning_champion_five_id_index  (cost=0.00..528.17 rows=65125 width=0) (actual time=9.577..9.577 rows=67202 loops=1)
                                            Index Cond: (157 = winning_champion_five_id)
                                            Buffers: shared hit=231
                    ->  Index Scan using matches_loosing_champion_ids_index on matches loosers  (cost=0.43..0.49 rows=1 width=18) (actual time=0.006..0.006 rows=0 loops=129214)
                          Index Cond: ((matches.winning_champion_one_id = loosing_champion_one_id) AND (matches.winning_champion_two_id = loosing_champion_two_id) AND (matches.winning_champion_three_id = loosing_champion_three_id) AND (matches.winning_champion_four_id = loosing_champion_four_id) AND (matches.winning_champion_five_id = loosing_champion_five_id))
                          Buffers: shared hit=434828
Planning time: 0.584 ms
Execution time: 1479.779 ms

Table and index definitions:

create table matches (
    match_id bigint not null,
    winning_champion_one_id smallint,
    winning_champion_two_id smallint,
    winning_champion_three_id smallint,
    winning_champion_four_id smallint,
    winning_champion_five_id smallint,
    loosing_champion_one_id smallint,
    loosing_champion_two_id smallint,
    loosing_champion_three_id smallint,
    loosing_champion_four_id smallint,
    loosing_champion_five_id smallint,
    constraint matches_match_id_pk primary key (match_id)
);

create index matches_winning_champion_one_id_index   on matches (winning_champion_one_id);
create index matches_winning_champion_two_id_index   on matches (winning_champion_two_id);
create index matches_winning_champion_three_id_index on matches (winning_champion_three_id);
create index matches_winning_champion_four_id_index  on matches (winning_champion_four_id);
create index matches_winning_champion_five_id_index  on matches (winning_champion_five_id);
create index matches_loosing_champion_ids_index      on matches (loosing_champion_one_id, loosing_champion_two_id, loosing_champion_three_id, loosing_champion_four_id, loosing_champion_five_id);
create index matches_loosing_champion_one_id_index   on matches (loosing_champion_one_id);
create index matches_loosing_champion_two_id_index   on matches (loosing_champion_two_id);
create index matches_loosing_champion_three_id_index on matches (loosing_champion_three_id);
create index matches_loosing_champion_four_id_index  on matches (loosing_champion_four_id);
create index matches_loosing_champion_five_id_index  on matches (loosing_champion_five_id);

The table can grow to 100m+ rows. At the moment it has about 20m rows.
Current size of table and indexes:

public.matches, 2331648 rows, 197 MB

public.matches_riot_match_id_pk, 153 MB
public.matches_loosing_champion_ids_index, 136 MB
public.matches_loosing_champion_four_id_index, 113 MB
public.matches_loosing_champion_five_id_index, 113 MB
public.matches_winning_champion_one_id_index, 113 MB
public.matches_winning_champion_five_id_index, 113 MB
public.matches_winning_champion_three_id_index, 112 MB
public.matches_loosing_champion_three_id_index, 112 MB
public.matches_winning_champion_four_id_index, 112 MB
public.matches_loosing_champion_one_id_index, 112 MB
public.matches_winning_champion_two_id_index, 112 MB
public.matches_loosing_champion_two_id_index, 112 MB

These are the only changes I made to postgresql.conf:

max_connections = 50
shared_buffers = 6GB
effective_cache_size = 18GB
work_mem = 125829kB
maintenance_work_mem = 1536MB
min_wal_size = 1GB
max_wal_size = 2GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
max_parallel_workers_per_gather = 8
min_parallel_relation_size = 1

There is probably something I'm overlooking.

EDIT:

For anyone wondering: the UNION ALL approach worked best. Erwin's suggested schema unfortunately doesn't work well. Here is the EXPLAIN (ANALYZE, BUFFERS) output for the suggested schema:

Limit  (cost=2352157.06..2352157.06 rows=1 width=48) (actual time=1976.709..1976.710 rows=1 loops=1)
  Buffers: shared hit=653004
  ->  Sort  (cost=2352157.06..2352977.77 rows=328287 width=48) (actual time=1976.708..1976.708 rows=1 loops=1)
"        Sort Key: (((((count(*)))::numeric * 1.0) / (((count(*)) + l.loss))::numeric)) DESC, (((count(*)) + l.loss)) DESC"
        Sort Method: top-N heapsort  Memory: 25kB
        Buffers: shared hit=653004
        ->  Nested Loop  (cost=2.10..2350515.62 rows=328287 width=48) (actual time=0.553..1976.294 rows=145 loops=1)
              Buffers: shared hit=653004
              ->  GroupAggregate  (cost=1.67..107492.42 rows=492431 width=16) (actual time=0.084..1409.450 rows=154547 loops=1)
                    Group Key: w.winner
                    Buffers: shared hit=188208
                    ->  Merge Join  (cost=1.67..100105.96 rows=492431 width=8) (actual time=0.061..1301.578 rows=199530 loops=1)
                          Merge Cond: (tm.team_id = w.winner)
                          Buffers: shared hit=188208
                          ->  Index Only Scan using team_member_champion_team_idx on team_member tm  (cost=0.56..8978.79 rows=272813 width=8) (actual time=0.026..103.842 rows=265201 loops=1)
                                Index Cond: (champion_id = 157)
                                Heap Fetches: 0
                                Buffers: shared hit=176867
                          ->  Index Only Scan using match_winner_loser_idx on match w  (cost=0.43..79893.82 rows=2288093 width=8) (actual time=0.013..597.331 rows=2288065 loops=1)
                                Heap Fetches: 0
                                Buffers: shared hit=11341
              ->  Subquery Scan on l  (cost=0.43..4.52 rows=1 width=8) (actual time=0.003..0.003 rows=0 loops=154547)
                    Filter: (((count(*)) + l.loss) > 19)
                    Rows Removed by Filter: 0
                    Buffers: shared hit=464796
                    ->  GroupAggregate  (cost=0.43..4.49 rows=2 width=16) (actual time=0.003..0.003 rows=0 loops=154547)
                          Group Key: l_1.loser
                          Buffers: shared hit=464796
                          ->  Index Only Scan using match_loser_winner_idx on match l_1  (cost=0.43..4.46 rows=2 width=8) (actual time=0.002..0.002 rows=0 loops=154547)
                                Index Cond: (loser = w.winner)
                                Heap Fetches: 0
                                Buffers: shared hit=464796
Planning time: 0.634 ms
Execution time: 1976.792 ms

And now with the UNION ALL approach and the new schema:

Limit  (cost=275211.80..275211.80 rows=1 width=48) (actual time=3540.420..3540.421 rows=1 loops=1)
  Buffers: shared hit=199478
  CTE t
    ->  Index Only Scan using team_member_champion_team_idx on team_member  (cost=0.56..8978.79 rows=272813 width=8) (actual time=0.027..103.732 rows=265201 loops=1)
          Index Cond: (champion_id = 157)
          Heap Fetches: 0
          Buffers: shared hit=176867
  ->  Sort  (cost=266233.01..266233.51 rows=200 width=48) (actual time=3540.417..3540.417 rows=1 loops=1)
"        Sort Key: ((((count((true)))::numeric * 1.0) / (count(*))::numeric)) DESC, (count(*)) DESC"
        Sort Method: top-N heapsort  Memory: 25kB
        Buffers: shared hit=199478
        ->  HashAggregate  (cost=266228.01..266232.01 rows=200 width=48) (actual time=3455.112..3540.301 rows=145 loops=1)
              Group Key: t.team_id
              Filter: (count(*) > 19)
              Rows Removed by Filter: 265056
              Buffers: shared hit=199478
              ->  Append  (cost=30088.37..254525.34 rows=936214 width=9) (actual time=315.399..3137.115 rows=386575 loops=1)
                    Buffers: shared hit=199478
                    ->  Merge Join  (cost=30088.37..123088.80 rows=492454 width=9) (actual time=315.398..1583.746 rows=199530 loops=1)
                          Merge Cond: (match.winner = t.team_id)
                          Buffers: shared hit=188208
                          ->  Index Only Scan using match_winner_loser_idx on match  (cost=0.43..79893.82 rows=2288093 width=8) (actual time=0.033..583.016 rows=2288065 loops=1)
                                Heap Fetches: 0
                                Buffers: shared hit=11341
                          ->  Sort  (cost=30087.94..30769.97 rows=272813 width=8) (actual time=315.333..402.516 rows=310184 loops=1)
                                Sort Key: t.team_id
                                Sort Method: quicksort  Memory: 24720kB
                                Buffers: shared hit=176867
                                ->  CTE Scan on t  (cost=0.00..5456.26 rows=272813 width=8) (actual time=0.030..240.150 rows=265201 loops=1)
                                      Buffers: shared hit=176867
                    ->  Merge Join  (cost=30088.37..122074.39 rows=443760 width=9) (actual time=134.118..1410.484 rows=187045 loops=1)
                          Merge Cond: (match_1.loser = t_1.team_id)
                          Buffers: shared hit=11270
                          ->  Index Only Scan using match_loser_winner_idx on match match_1  (cost=0.43..79609.82 rows=2288093 width=8) (actual time=0.025..589.773 rows=2288060 loops=1)
                                Heap Fetches: 0
                                Buffers: shared hit=11270
                          ->  Sort  (cost=30087.94..30769.97 rows=272813 width=8) (actual time=134.076..219.529 rows=303364 loops=1)
                                Sort Key: t_1.team_id
                                Sort Method: quicksort  Memory: 24720kB
                                ->  CTE Scan on t t_1  (cost=0.00..5456.26 rows=272813 width=8) (actual time=0.003..60.179 rows=265201 loops=1)
Planning time: 0.401 ms
Execution time: 3548.072 ms
Otey answered 15/4, 2017 at 7:34 Comment(9)
What's the size of matches_loosing_champion_ids_index? – Enjambement
My first impression is that your database design is wrong. You should not have five ID columns per row, but one column with five rows; e.g. a table match with (match_id, other_metadata) and a table match_participants with (match_id, participant_id, rank, etc.). – Mews
My first tries were like that. I do now also have a table match_players, but I couldn't get a fast enough query together. A primitive approach like this was a lot faster, yet still slow :/ – Otey
What is it? Why does a match have five winners and five losers? – Glove
It's a 5v5 game. – Otey
How many distinct players are there / can there be? smallint indicates a low maximum. Can the same player be in the same match more than once? Your winning* and loosing* (should be "losing", btw.) columns are not defined NOT NULL. Can there be NULL values? Meaning what? Is a different DB schema still an option? I have a different idea ... – Georgeanngeorgeanna
Also, you treat (1,2,3,4,5) as a different team than (5,4,3,2,1) etc. Typically, that's an error. Is it intended? If so, please explain in your question. – Georgeanngeorgeanna
The question is exemplary in providing all relevant information, btw. – Georgeanngeorgeanna
Nitpicking: the one not winning is the "loser", not the "looser". And he is "losing" the game, not "loosing" it. – Forbis
Your query and explain output don't look so bad. Still, a couple of observations:

  1. An index-only scan instead of an index scan on matches_loosing_champion_ids_index would be faster. The reason you don't see that: the useless count(match_id).

  2. 5 bitmap index scans + BitmapOR step are pretty fast but a single bitmap index scan would be faster.

  3. The most expensive part in this query plan is the Nested Loop Left Join. Might be different for other players.

With your schema

Query 1: LEFT JOIN LATERAL

This way, we aggregate before we join and don't need another GROUP BY. Also fewer join operations. And count(*) should unblock index-only scans:

SELECT player1, player2, player3, player4, player5
     , ((win * 1.0) / (win + loss))::numeric(5,5) AS winrate
     , win + loss AS matches
FROM  (
   SELECT winning_champion_one_id   AS player1
        , winning_champion_two_id   AS player2
        , winning_champion_three_id AS player3
        , winning_champion_four_id  AS player4
        , winning_champion_five_id  AS player5
        , COUNT(*) AS win           -- see below
   FROM   matches
   WHERE  157 IN (winning_champion_one_id
                , winning_champion_two_id
                , winning_champion_three_id
                , winning_champion_four_id
                , winning_champion_five_id)
   GROUP  BY 1,2,3,4,5
   ) w
LEFT  JOIN LATERAL (
   SELECT COUNT(*) AS loss          -- see below
   FROM   matches
   WHERE  loosing_champion_one_id   = w.player1
   AND    loosing_champion_two_id   = w.player2
   AND    loosing_champion_three_id = w.player3
   AND    loosing_champion_four_id  = w.player4
   AND    loosing_champion_five_id  = w.player5
   GROUP  BY loosing_champion_one_id
           , loosing_champion_two_id
           , loosing_champion_three_id
           , loosing_champion_four_id
           , loosing_champion_five_id
   ) l ON true
WHERE  win + loss > 19
ORDER  BY winrate DESC, matches DESC
LIMIT  1;

count(*):
is slightly shorter and faster in Postgres, doing the same as count(match_id) here, because match_id is never NULL.

Removing the only reference to match_id allows an index-only scan on matches_loosing_champion_ids_index! Some other preconditions must be met ...
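One of those preconditions worth spelling out: index-only scans can only skip heap fetches for pages the visibility map marks all-visible. Autovacuum normally maintains that, but after a bulk load a manual run helps, e.g.:

```sql
-- Refresh the visibility map (and planner statistics) so index-only scans
-- on matches_loosing_champion_ids_index can avoid heap fetches:
VACUUM (ANALYZE) matches;
```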

Query 2: UNION ALL

Another way around the expensive Nested Loop Left Join, and a single GROUP BY. But we add 5 more bitmap index scans. May or may not be faster:

SELECT player1, player2, player3, player4, player5
     ,(count(win) * 1.0) / count(*) AS winrate  -- I would round ...
     , count(*)                     AS matches
FROM  (
   SELECT winning_champion_one_id   AS player1
        , winning_champion_two_id   AS player2
        , winning_champion_three_id AS player3
        , winning_champion_four_id  AS player4
        , winning_champion_five_id  AS player5
        , TRUE AS win
   FROM   matches
   WHERE  157 IN (winning_champion_one_id
                , winning_champion_two_id
                , winning_champion_three_id
                , winning_champion_four_id
                , winning_champion_five_id)
   UNION ALL 
   SELECT loosing_champion_one_id
        , loosing_champion_two_id
        , loosing_champion_three_id
        , loosing_champion_four_id
        , loosing_champion_five_id
        , NULL AS win              -- following "count(win)" ignores NULL values
   FROM   matches
   WHERE  157 IN (loosing_champion_one_id
                , loosing_champion_two_id
                , loosing_champion_three_id
                , loosing_champion_four_id
                , loosing_champion_five_id)
   ) m
GROUP  BY 1,2,3,4,5
HAVING count(*) > 19    -- min 20 matches
-- AND    count(win) > 0   -- min 1 win --  see below!
ORDER  BY winrate DESC, matches DESC
LIMIT  1;

AND count(win) > 0 is commented out because it's redundant while you pick the single best winrate anyway.

Different schema

I really would use a different schema to begin with:

CREATE TABLE team (
   team_id serial PRIMARY KEY --  or bigserial if you expect > 2^31 distinct teams
   -- more attributes?
);

CREATE TABLE player (
   player_id smallserial PRIMARY KEY
   -- more attributes?
);

CREATE TABLE team_member (
   team_id   integer  REFERENCES team
 , player_id smallint REFERENCES player
 , team_pos  smallint NOT NULL CHECK (team_pos BETWEEN 1 AND 5) -- only if position matters
 , PRIMARY KEY (team_id, player_id)
 , UNIQUE      (team_id, team_pos)
);
CREATE INDEX team_member_player_team_idx on team_member (player_id, team_id);

--  Enforce 5 players per team. Various options, different question.

CREATE TABLE match (
   match_id bigserial PRIMARY KEY
 , winner   integer NOT NULL REFERENCES team
 , loser    integer NOT NULL REFERENCES team
 , CHECK (winner <> loser)    --  wouldn't make sense
);
CREATE INDEX match_winner_loser_idx ON match (winner, loser);
CREATE INDEX match_loser_winner_idx ON match (loser, winner);
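To make the "5 players per team" note above concrete, one possible sketch (untested, assuming PL/pgSQL; function and trigger names are made up) uses a deferred constraint trigger. It is not a complete solution — e.g. disbanding a team would need extra handling:

```sql
CREATE OR REPLACE FUNCTION check_team_size() RETURNS trigger AS $$
DECLARE
   t_id integer;
BEGIN
   IF TG_OP = 'DELETE' THEN
      t_id := OLD.team_id;
   ELSE
      t_id := NEW.team_id;
   END IF;
   IF (SELECT count(*) FROM team_member WHERE team_id = t_id) <> 5 THEN
      RAISE EXCEPTION 'team % must have exactly 5 members', t_id;
   END IF;
   RETURN NULL;  -- result is ignored for AFTER triggers
END
$$ LANGUAGE plpgsql;

-- Deferred to commit time, so teams can be assembled row by row
-- within a single transaction:
CREATE CONSTRAINT TRIGGER team_size_check
AFTER INSERT OR UPDATE OR DELETE ON team_member
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW EXECUTE PROCEDURE check_team_size();
```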

Subsidiary tables add to the disk footprint, but the main table is a bit smaller. And most importantly, you need fewer indexes, which should be substantially smaller overall for your cardinalities.

Query

We don't need any other indexes for this equivalent query. Much simpler and presumably faster now:

SELECT winner
     , win * 1.0/ (win + loss) AS winrate
     , win + loss AS matches
FROM  (
   SELECT w.winner, count(*) AS win
   FROM   team_member tm
   JOIN   match w ON w.winner = tm.team_id
   WHERE  tm.player_id = 157
   GROUP  BY w.winner
   ) w
LEFT   JOIN LATERAL (
   SELECT count(*) AS loss
   FROM   match l
   WHERE  l.loser = w.winner
   GROUP  BY l.loser
   ) l ON true
WHERE  win + loss > 19
ORDER  BY winrate DESC, matches DESC
LIMIT  1;

Join the result to team_member to get individual players.
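That join could look like this (untested sketch, wrapping the query above in a CTE):

```sql
WITH best AS (
   SELECT winner
        , win * 1.0 / (win + loss) AS winrate
        , win + loss AS matches
   FROM  (
      SELECT w.winner, count(*) AS win
      FROM   team_member tm
      JOIN   match w ON w.winner = tm.team_id
      WHERE  tm.player_id = 157
      GROUP  BY w.winner
      ) w
   LEFT JOIN LATERAL (
      SELECT count(*) AS loss
      FROM   match l
      WHERE  l.loser = w.winner
      GROUP  BY l.loser
      ) l ON true
   WHERE  win + loss > 19
   ORDER  BY winrate DESC, matches DESC
   LIMIT  1
   )
SELECT b.winner, b.winrate, b.matches, tm.player_id, tm.team_pos
FROM   best b
JOIN   team_member tm ON tm.team_id = b.winner
ORDER  BY tm.team_pos;  -- one row per player of the winning team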

You can also try the corresponding UNION ALL technique from above:

WITH t AS (
   SELECT team_id
   FROM   team_member
   WHERE  player_id = 157  -- provide player here
   )
SELECT team_id
     ,(count(win) * 1.0) / count(*) AS winrate
     , count(*)                     AS matches
FROM  (
   SELECT t.team_id, TRUE AS win
   FROM   t JOIN match ON winner = t.team_id
   UNION ALL 
   SELECT t.team_id, NULL AS win
   FROM   t JOIN match ON loser  = t.team_id
   ) m
GROUP  BY 1
HAVING count(*) > 19    -- min 20 matches
ORDER  BY winrate DESC, matches DESC
LIMIT  1;

bloom index

I briefly considered a bloom index for your predicate:

   WHERE  157 IN (loosing_champion_one_id
                , loosing_champion_two_id
                , loosing_champion_three_id
                , loosing_champion_four_id
                , loosing_champion_five_id)

I didn't test it; it probably won't pay off for just 5 smallint columns.
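For reference, such an index would look something like this (untested; bloom is a contrib extension, available since PostgreSQL 9.6, and its operator class only supports equality):

```sql
CREATE EXTENSION IF NOT EXISTS bloom;

-- One small signature per row covering all five columns;
-- length / bits per column would need tuning:
CREATE INDEX matches_loosing_bloom_idx ON matches
USING bloom (loosing_champion_one_id
           , loosing_champion_two_id
           , loosing_champion_three_id
           , loosing_champion_four_id
           , loosing_champion_five_id);
```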

Georgeanngeorgeanna answered 19/4, 2017 at 3:24 Comment(8)
The UNION ALL approach was twice as fast! With your schema suggestion, what would the query look like if I had 2 winner IDs and 3 loser IDs to look up? Like: 157 IN (w1, w2, w3, w4, w5) AND 63 IN (w1, w2, w3, w4, w5) AND 22 IN (l1, l2, l3, l4, l5) AND 11 IN (l1, l2, l3, l4, l5) AND 33 IN (l1, l2, l3, l4, l5) – Otey
@FerhatSayan: That's a sub-problem of filtering affected teams with relational division, and really a different question. See here for an arsenal of possible solutions, and start a new question if you are still unsure. You can always link to this one for context. – Georgeanngeorgeanna
Thank you Erwin. I will try the new schema today after work. Hopefully it will be a lot better. But your UNION ALL approach was really helpful. Thank you! – Otey
Forgot to add the link for relational division in the comment above: https://mcmap.net/q/28767/-how-to-filter-sql-results-in-a-has-many-through-relation. BTW, how does the performance of the LATERAL query compare? – Georgeanngeorgeanna
Thank you Erwin. The difference between normal outer joins and lateral joins is almost non-existent. The approach with the new schema also doesn't work well. See my edited question. – Otey
@FerhatSayan: You added EXPLAIN ANALYZE output instead of EXPLAIN (ANALYZE, BUFFERS), so we can't see whether you tested with a warm or cold cache. Also, did you try to combine the new schema with the UNION ALL query? I added an implementation above. It should be faster yet. – Georgeanngeorgeanna
I added the EXPLAIN (ANALYZE, BUFFERS) output for both approaches with the new schema. Neither is really faster. – Otey
The 2 merge joins are more expensive than I expected. Well, then the simple model with UNION ALL remains the winner in this contest. – Georgeanngeorgeanna
The execution plan looks pretty good in my opinion.

What you could try is to see if performance improves when a nested loop join is avoided.
To test this, run

SET enable_nestloop = off;

before your query and see if that improves the speed.
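Since enable_nestloop is a session-level setting, it is safer to scope the experiment to a single transaction with SET LOCAL:

```sql
BEGIN;
SET LOCAL enable_nestloop = off;  -- reverts at COMMIT/ROLLBACK
-- EXPLAIN (ANALYZE, BUFFERS) <your query here>;
ROLLBACK;
```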

Other than that, I cannot think of any improvements.

Laine answered 18/4, 2017 at 9:39 Comment(4)
It got slower. Does PostgreSQL offer parallel queries or something, so I can use all my CPU cores? – Otey
I don't think that parallel query (available from 9.6 on) can help you here; the time is mostly spent in the 129214 loops of the nested loop join. – Laine
PostgreSQL 10 supports parallel queries. – Ardeliaardelis
For others who may wonder whether this answer applies to their problem: it doesn't hurt to try, but disabling nested loops is only likely to help if EXPLAIN ANALYZE shows more actual rows than expected, rather than the other way round as in the OP's case. I had a case with a 6000x underestimation where disabling nested loops dropped the runtime from 27 minutes to 27 seconds. – Unwept
