In practice, how many machines do you need in order for Hadoop / MapReduce / Mahout to speed up very parallelizable computations?

I need to do some heavy machine learning computations. I have a small number of machines idle on a LAN. How many machines would I need for distributing my computations using Hadoop / MapReduce / Mahout to be significantly faster than running on a single machine without these distributed frameworks? This is a practical question of computational overhead versus gains: I assume that distributing between just 2 machines would make the overall time worse than not distributing and simply running on a single machine, purely because of the overhead involved in distributing the computations.

Technical note: Some of the heavy computations are very parallelizable. All of them are, as long as each machine has its own copy of the raw data.

Attenweiler answered 13/7, 2011 at 16:40 Comment(2)
How long is a piece of string? – Advent
@Shaggy Frog, Jeff Foster: sorry I wasn't clearer at first. The question wasn't "how much faster can it go", it was "how many machines do I need for it to be much faster, rather than slower or just breaking even?" That is, it was about the computational overhead of running Hadoop, MapReduce, and Mahout. My fault for not being clearer. – Attenweiler

A "plain" Java program and a Hadoop-based, MapReduce-based implementation are very different beasts and are hard to compare. It's not like Hadoop parallelizes a little bit of your program; it is written in an entirely different form from top to bottom.

Hadoop has overheads: starting a job, and starting workers like mappers and reducers. It also introduces a lot more time spent serializing/deserializing data, writing it locally, and transferring it to HDFS.

A Hadoop-based implementation will always consume more total resources, so it's something to avoid unless you have to use it. If you can run the computation on one machine without distributing, the simplest practical advice is: don't distribute. Save yourself the trouble.

In the case of Mahout recommenders, I can tell you that, very crudely, a Hadoop job incurs 2-4x more computation than a non-distributed implementation on the same data. Obviously that depends immensely on the algorithm and on tuning choices. But to give you a number: I wouldn't bother with a Hadoop cluster of fewer than 4 machines.

Obviously, if your computation can't fit on one of your machines, you have no choice but to distribute. Then the tradeoff is what kind of wall-clock time you can allow versus how much computing power you can devote. The reference to Amdahl's law is right, though it doesn't consider the significant overhead of Hadoop. For example, to parallelize N ways, you need at least N mappers/reducers, and incur N times the per-mapper/reducer overhead. There's some fixed startup/shutdown time too.
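A rough way to see where that break-even point lands is to put this reasoning into a toy model: Amdahl's law plus an assumed fixed job-startup cost, an assumed per-worker cost, and a factor for the extra computation a MapReduce formulation does compared to an in-memory version (the crude 2-4x mentioned above). The sketch below is purely illustrative; all constants are made-up placeholders you would have to measure on your own cluster, not Hadoop numbers.

```java
// Toy break-even model, not a benchmark. Combines Amdahl's law with three
// assumed Hadoop costs: fixed job startup, per-worker startup, and an
// inflation factor for the MapReduce formulation of the work.
public class BreakEvenEstimate {

    static double clusterSeconds(double serial, double parallel, int workers,
                                 double jobStartup, double perWorker, double workFactor) {
        return jobStartup
                + workers * perWorker               // N times the per-mapper/reducer overhead
                + serial                            // non-parallelizable part
                + workFactor * parallel / workers;  // inflated parallel part, split N ways
    }

    public static void main(String[] args) {
        double serial = 60, parallel = 3600;        // 1 min serial + 1 h parallelizable work
        double singleMachine = serial + parallel;   // no framework overhead at all
        for (int n = 1; n <= 16; n *= 2) {
            double t = clusterSeconds(serial, parallel, n, 120, 30, 3.0);
            System.out.printf("%2d workers: %6.0f s   (single machine: %6.0f s)%n",
                    n, t, singleMachine);
        }
    }
}
```

With these particular placeholder numbers the cluster doesn't beat the single machine until somewhere between 2 and 4 workers, roughly where the rule of thumb above lands; different constants move the break-even point, which is exactly why it has to be measured rather than assumed.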

Inhabited answered 13/7, 2011 at 18:58 Comment(0)

See Amdahl's Law

Amdahl's law is a model for the relationship between the expected speedup of parallelized implementations of an algorithm relative to the serial algorithm, under the assumption that the problem size remains the same when parallelized. For example, if for a given problem size a parallelized implementation of an algorithm can run 12% of the algorithm's operations arbitrarily quickly (while the remaining 88% of the operations are not parallelizable), Amdahl's law states that the maximum speedup of the parallelized version is 1/(1 – 0.12) = 1.136 times as fast as the non-parallelized implementation.

Speedup(s) = 1 / ((1 − p) + p / s), where p is the parallelizable fraction of the work and s is the speedup of that portion; as s → ∞, the overall speedup approaches 1 / (1 − p).
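To make the arithmetic concrete, here is a minimal sketch of that formula. The 0.12 case reproduces the 1.136 figure quoted above; the second call (95% parallelizable, parallel part 8x faster) is an extra illustration, not a number from the answer.

```java
// Amdahl's law: p = parallelizable fraction, s = speedup of that fraction.
public class Amdahl {
    static double speedup(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    public static void main(String[] args) {
        // 12% parallelizable, parallel part arbitrarily fast -> ~1.136x ceiling
        System.out.println(speedup(0.12, Double.POSITIVE_INFINITY));
        // 95% parallelizable, parallel part 8x faster -> ~5.9x
        System.out.println(speedup(0.95, 8));
    }
}
```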

Without specifics it's difficult to give a more detailed answer.

Ticon answered 13/7, 2011 at 16:45 Comment(0)

I know this has already been answered, but I'll throw my hat into the ring. I can't give you a general rule of thumb. The performance increase really depends on many factors:

  1. How parallelizable (mutually independent) the components of the algorithm are
  2. The size of the dataset
  3. The pre- and post-processing of the dataset [including the splitting/mapping and reducing/concatenating]
  4. Network traffic

If you have a highly connected algorithm (a Bayes net, a neural net, a Markov model, PCA, EM), then a lot of the Hadoop job's time will be spent getting instances processed, split, and recombined (assuming you have a large number of nodes per instance, i.e. more than one machine can handle). In a situation like this, network traffic becomes more of an issue.

If you have an algorithm such as path finding or simulated annealing, it is easy to separate the instances into their own map/reduce jobs. These types of algorithms can be very quick.
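As a sketch of what "easy to separate" looks like in practice, here is a hypothetical Hadoop mapper where every input record is an independent instance, so mappers never need to communicate with each other. The class name and the evaluate() placeholder are invented for illustration, not taken from Mahout or from the question.

```java
import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper: each input line is one independent instance
// (e.g. one simulated-annealing run seeded from the line), so the map
// phase parallelizes with no cross-instance communication.
public class IndependentInstanceMapper
        extends Mapper<LongWritable, Text, Text, DoubleWritable> {

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String instance = line.toString();
        double score = evaluate(instance);  // per-instance work happens here
        context.write(new Text(instance), new DoubleWritable(score));
    }

    // Placeholder for the real per-instance computation (path search,
    // annealing run, etc.); invented for this sketch.
    private double evaluate(String instance) {
        return instance.length();
    }
}
```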

Redeeming answered 9/8, 2011 at 18:49 Comment(0)

Another aspect is what bottleneck is forcing you to use MapReduce in the first place. If your data fits comfortably on a single machine and you are merely after a speed boost, you may prefer a GPU implementation. GPU implementations are easier to set up and use, even on a single machine, and give promising results.

Danielledaniels answered 1/9, 2014 at 14:21 Comment(0)
