In a Spark join, does table order matter like in Pig?

Related to Spark - Joining 2 PairRDD elements

When doing a regular join in Pig, the last table in the join is not brought into memory but streamed through instead, so if A has small cardinality per key and B large cardinality per key, it is significantly better to do JOIN A, B than JOIN B, A from a performance perspective (avoiding spill and OOM).

Is there a similar concept in Spark? I didn't see any such recommendation, and I wonder how that is possible. The implementation looks pretty much the same to me as in Pig: https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/CoGroupedRDD.scala

Or am I missing something?
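
For concreteness, here is a minimal sketch of the two orderings I have in mind in the RDD API (the RDD names, sizes and key skew below are made up for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical example, just to make the question concrete.
object JoinOrder {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("join-order").setMaster("local[*]"))

    // A has many records per join key, B only a few; both keyed by the same field.
    val a = sc.parallelize((1 to 1000000).map(i => (i % 10, i)))    // ~100k values per key
    val b = sc.parallelize((1 to 30).map(i => (i % 10, s"b$i")))    // ~3 values per key

    // Does the order matter for memory behaviour, the way JOIN A, B vs JOIN B, A does in Pig?
    val ab = a.join(b)
    val ba = b.join(a)

    println(s"${ab.count()} ${ba.count()}")
    sc.stop()
  }
}
```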

Durham answered 24/2, 2015 at 11:24 Comment(0)

It does not make a difference. In Spark, an RDD is only brought into memory if it is cached, so to achieve the same effect you can cache the smaller RDD. Another thing you can do in Spark, which I'm not sure Pig does, is ensure all the RDDs being joined have the same partitioner, in which case no shuffle needs to be done.
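
A rough sketch of both suggestions (the RDD names, contents and partition count are just assumptions for illustration):

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

// Illustrative sketch: cache the smaller side and co-partition both sides before joining.
object CachedCopartitionedJoin {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("join-sketch").setMaster("local[*]"))

    val small = sc.parallelize(Seq((1, "a"), (2, "b"), (3, "c")))
    val large = sc.parallelize((1 to 1000000).map(i => (i % 3 + 1, i)))

    // 1) Cache the smaller RDD so it is kept in memory across uses.
    val smallCached = small.cache()

    // 2) Give both sides the same partitioner; partitionBy shuffles once up front,
    //    and the subsequent join sees matching partitioners, so it needs no further shuffle.
    val part = new HashPartitioner(8)
    val smallByKey = smallCached.partitionBy(part)
    val largeByKey = large.partitionBy(part)

    val joined = smallByKey.join(largeByKey)
    println(joined.count())
    sc.stop()
  }
}
```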

Lotty answered 24/2, 2015 at 18:10 Comment(2)
ok, but suppose we're not caching any RDD. I'm assuming that Spark does a sort of nested loop between the two RDDs. If A has 1M records per join key and B has only 3 records per join key, but both are huge, and the outer (left) table is A, then each join key is going to cause spills to disk... right?Durham
@Durham if you look at the [source]( github.com/apache/spark/blob/master/core/src/main/scala/org/…), everything is an iterator, so nothing is necessarily loaded into memory; when an iterator is exhausted it just starts again from the beginning. That being said, it does look like the for loop would leave the output in memory.Lotty
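
For reference, the join-on-top-of-cogroup pattern the comment is describing can be sketched like this (a simplification, not the exact Spark source): per key, cogroup gathers all values from each side, and the for loop then emits their cross product.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Simplified sketch of join expressed via cogroup.
object CogroupJoinSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("cogroup-join").setMaster("local[*]"))

    val a = sc.parallelize(Seq((1, "a1"), (1, "a2"), (2, "a3")))
    val b = sc.parallelize(Seq((1, 10), (2, 20), (2, 21)))

    // cogroup collects, per key, all values from each side into Iterables;
    // the for-comprehension then emits their cross product, which is where a key
    // with a very large value list on both sides becomes memory-heavy.
    val joined = a.cogroup(b).flatMapValues { case (vs, ws) =>
      for (v <- vs.iterator; w <- ws.iterator) yield (v, w)
    }

    joined.collect().foreach(println)
    sc.stop()
  }
}
```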
