Why is Spark faster than Hadoop MapReduce?

Can someone explain, using the word count example, why Spark would be faster than MapReduce?

Rinderpest answered 14/9, 2015 at 19:34 Comment(2)
Possible duplicate of Why is Spark fast when word count? – Eri
Another possible duplicate: Is caching the only advantage of spark over map-reduce? – Archilochus

bafna's answer provides the memory side of the story, but I want to add two other important facts: DAG and ecosystem.

  1. Spark uses "lazy evaluation" to build a directed acyclic graph (DAG) of consecutive computation stages. This allows the execution plan to be optimized, e.g. to minimize shuffling data around. In MapReduce, by contrast, this must be done manually by tuning each MR step. (This point is easier to understand if you are familiar with execution plan optimization in an RDBMS or with the DAG-style execution of Apache Tez.) The word-count sketch just after this list shows what lazy DAG construction looks like in code.
  2. The Spark ecosystem has established a versatile stack of components to handle SQL, ML, streaming, and graph-mining tasks. In the Hadoop ecosystem, by contrast, you have to install separate packages for each of these things.
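
To make the DAG point concrete, here is a minimal word-count sketch in Scala; the input and output paths are placeholders. Each transformation only records a step in the DAG, and nothing runs until the final action, so Spark can plan all stages, including the single shuffle, at once:

    import org.apache.spark.sql.SparkSession

    object WordCount {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("WordCount").getOrCreate()
        val sc = spark.sparkContext

        // Transformations: each call just adds a node to the DAG; nothing executes yet.
        val counts = sc.textFile("hdfs:///input/text")   // placeholder path
          .flatMap(line => line.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)                            // the only step that needs a shuffle

        // Action: only now does Spark optimize and execute the whole plan.
        counts.saveAsTextFile("hdfs:///output/counts")   // placeholder path

        spark.stop()
      }
    }

The equivalent MapReduce word count commits to its map and reduce steps up front, with no opportunity to optimize across them.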

And I want to add that even if your data is too big for main memory, you can still use Spark by choosing to persist your data on disk. Although you give up the advantages of in-memory processing by doing this, you can still benefit from the DAG execution optimization. A sketch of that disk-only option follows.
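
Here is a minimal sketch of disk-only persistence, assuming an existing SparkContext `sc` and a placeholder input path:

    import org.apache.spark.storage.StorageLevel

    // The dataset is too large for RAM, so persist it on local disk
    // instead of caching it in memory.
    val bigRdd = sc.textFile("hdfs:///input/huge-dataset")
      .map(_.toLowerCase)
      .persist(StorageLevel.DISK_ONLY)

    // Both actions reuse the persisted copy instead of recomputing from the source,
    // and the DAG optimization still applies to everything built on top of bigRdd.
    println(bigRdd.count())
    println(bigRdd.filter(_.contains("spark")).count())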

Some informative answers on Quora: here and here.

Archilochus answered 15/9, 2015 at 6:6 Comment(0)

I think there are three primary reasons.

The first two stem from the fact that, usually, one does not run a single MapReduce job, but rather a set of jobs in sequence.

  1. One of the main limitations of MapReduce is that it persists the full dataset to HDFS after running each job. This is very expensive, because it incurs roughly three times the size of the dataset in disk I/O (for replication) and a similar amount of network I/O. Spark takes a more holistic view of a pipeline of operations: when the output of one operation needs to be fed into another, Spark passes the data directly without writing it to persistent storage. This is an innovation over MapReduce that came from Microsoft's Dryad paper, and is not original to Spark.

  2. The main innovation of Spark was to introduce an in-memory caching abstraction. This makes Spark ideal for workloads where multiple operations access the same input data. Users can instruct Spark to cache input data sets in memory, so they don't need to be read from disk for each operation; a short sketch after this list shows this.

  3. What about Spark jobs that boil down to a single MapReduce job? In many cases these, too, run faster on Spark than on MapReduce. The primary advantage Spark has here is that it can launch tasks much faster. MapReduce starts a new JVM for each task, which can take seconds of loading JARs, JITing, parsing configuration XML, and so on. Spark keeps an executor JVM running on each node, so launching a task is simply a matter of making an RPC to it and passing a Runnable to a thread pool, which takes single-digit milliseconds.
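
Here is a minimal sketch of the caching idea from point 2, assuming an existing SparkContext `sc` and a placeholder input path:

    // Cache the input once, then run several jobs over it.
    val logs = sc.textFile("hdfs:///input/logs").cache()

    // The first action reads from HDFS and populates the in-memory cache.
    val errorCount = logs.filter(_.contains("ERROR")).count()

    // The second action reads from the cache; HDFS is not touched again.
    val warnCount = logs.filter(_.contains("WARN")).count()

The equivalent MapReduce flow would be two separate jobs, each re-reading the full dataset from HDFS.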

Lastly, a common misconception probably worth mentioning is that Spark somehow runs entirely in memory while MapReduce does not. This is simply not the case. Spark's shuffle implementation works very similarly to MapReduce's: each record is serialized and written out to disk on the map side and then fetched and deserialized on the reduce side.

Aniakudo answered 16/2, 2017 at 13:6 Comment(2)
The best answer; now that the second release of Spark is out, you could also mention the Tungsten engine, I think. It uses bytecode optimizations not available in MapReduce. – Lundquist
The answer was originally written by Sandy Ryza; please find it here: qr.ae/pNySpA – Confectioner
