Using Hadoop, are my reducers guaranteed to get all the records with the same key?

I'm running a Hadoop job (using Hive, actually) that is supposed to uniq lines across many text files. In the reduce step, it chooses the most recently timestamped record for each key.

Does Hadoop guarantee that every record with the same key, output by the map step, will go to a single reducer, even if many reducers are running across a cluster?

I worry that the mapper output might be split during the shuffle, right in the middle of a set of records that share the same key.
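
For context, the reduce step described above might look roughly like the following plain-MapReduce sketch (this is not the asker's actual Hive job; the value layout, with an epoch-millis timestamp as the first tab-separated field, and the class name are assumptions made purely for illustration):

```java
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reducer: for each key, keep only the record with the latest timestamp.
public class LatestRecordReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        long bestTs = Long.MIN_VALUE;
        String bestRecord = null;
        for (Text value : values) {
            // Assumed layout: the timestamp is the first tab-separated field.
            String[] parts = value.toString().split("\t", 2);
            long ts = Long.parseLong(parts[0]);
            if (ts > bestTs) {
                bestTs = ts;
                bestRecord = value.toString();
            }
        }
        if (bestRecord != null) {
            context.write(key, new Text(bestRecord));
        }
    }
}
```

This only works if every record for a key reaches the same reduce call, which is exactly what the question is asking about.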

Vision answered 13/4, 2010 at 21:16

All values for a key are sent to the same reducer. See this Yahoo! tutorial for more discussion.

This behavior is determined by the partitioner, and might not be true if you use a partitioner other than the default.
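
For reference, the default partitioner (HashPartitioner) derives the partition purely from the key's hash, which is what gives the guarantee above; a simplified sketch of that logic:

```java
import org.apache.hadoop.mapreduce.Partitioner;

// Simplified sketch of Hadoop's default HashPartitioner: the partition depends only on the
// key's hashCode, so every record with a given key lands in the same partition (and thus
// the same reducer), regardless of which mapper emitted it.
public class DefaultStylePartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask the sign bit so the result is non-negative before taking the modulus.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```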

Greenockite answered 13/4, 2010 at 22:53 Comment(1)
Actually, I am not sure of this. See #26693534. I didn't modify the partitioner in my program. – Deflected

Actually, no! You could write a Partitioner that sends the same key to a different reducer each time getPartition is called. It's just not generally a good idea for most applications.
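
For example, a deliberately broken Partitioner along these lines (purely hypothetical, not something you would ship) would scatter identical keys across reducers:

```java
import java.util.concurrent.ThreadLocalRandom;
import org.apache.hadoop.mapreduce.Partitioner;

// Deliberately broken: the partition ignores the key entirely, so records with the same
// key can be sent to different reducers on different calls.
public class RandomPartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        return ThreadLocalRandom.current().nextInt(numReduceTasks);
    }
}
```

Plugging something like this in via job.setPartitionerClass(...) would break the per-key grouping the question relies on, which is why the default is almost always what you want.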

Dowitcher answered 14/4, 2010 at 12:13

Yes, Hadoop does guarantee that all records with the same key will go to the same Reducer. This is achieved using a partition function, which buckets the keys with a hash function.

For more information on the Partitioning process take a look at this: Partitioning Data

It specifically describes how different mappers processing the same key ensure that all records with that key end up in the same partition, and are therefore processed by the same reducer.
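
As a quick illustration of why this works (assuming the default hash-based scheme; the key name here is arbitrary), the partition index is a pure function of the key, so any map task emitting the same key computes the same bucket:

```java
import org.apache.hadoop.io.Text;

// Two Text keys with identical contents, as if emitted by two different map tasks,
// always map to the same partition index under the default hash-based scheme.
public class PartitionDemo {
    static int partitionFor(Text key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int reducers = 10;
        System.out.println(partitionFor(new Text("user-42"), reducers)); // same index...
        System.out.println(partitionFor(new Text("user-42"), reducers)); // ...printed twice
    }
}
```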

Briefs answered 13/4, 2010 at 22:53
