Increase number of Hive mappers in Hadoop 2

I created an HBase table from Hive and I'm trying to run a simple aggregation on it. This is my Hive query:

from my_hbase_table 
select col1, count(1) 
group by col1;

The MapReduce job spawns only 2 mappers, and I'd like to increase that. With a plain MapReduce job I would configure the YARN and mapper memory to increase the number of mappers. I tried the following in Hive, but it did not work:

set yarn.nodemanager.resource.cpu-vcores=16;
set yarn.nodemanager.resource.memory-mb=32768;
set mapreduce.map.cpu.vcores=1;
set mapreduce.map.memory.mb=2048;

NOTE:

  • My test cluster has only 2 nodes
  • The HBase table has more than 5M records
  • Hive logs show HiveInputFormat and a number of splits=2
Layne asked 13/5, 2015 at 17:53 Comment(2)
How many regions is your HBase table split into? – Irreligious
How many map slots are available in your cluster? – Doran

Splitting files below the default split size is not an efficient solution. Splitting is mainly useful when dealing with large datasets. The default value is already small, so it is not worth splitting further.

I would recommend the following configuration before your query. You can adjust it based on your input data.

set hive.merge.mapfiles=false;
set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
set mapred.map.tasks=XX;

If you also want to set the number of reducers, you can use the configuration below:

set mapred.reduce.tasks=XX;
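
Putting it together for the aggregation in the question, a sketch of a full session (the values 8 and 4 are illustrative; tune them to your cluster and data):

set hive.merge.mapfiles=false;
set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
set mapred.map.tasks=8;
set mapred.reduce.tasks=4;

from my_hbase_table
select col1, count(1)
group by col1;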

Note that on Hadoop 2 (YARN), mapred.map.tasks and mapred.reduce.tasks are deprecated and replaced by other variables:

mapred.map.tasks     -->    mapreduce.job.maps
mapred.reduce.tasks  -->    mapreduce.job.reduces
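
So on Hadoop 2 the equivalent session settings would be (a sketch with illustrative values; note that for most input formats these properties are only hints, since the actual mapper count comes from the number of splits):

set mapreduce.job.maps=8;
set mapreduce.job.reduces=4;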

Please refer to the useful links below related to this:

http://answers.mapr.com/questions/5336/limit-mappers-and-reducers-for-specific-job.html

Fail to Increase Hive Mapper Tasks?

How mappers get assigned

The number of mappers is determined by the number of splits computed by the InputFormat used in the MapReduce job. With a typical InputFormat, it is directly proportional to the number of files and their sizes.

Suppose your HDFS block size is configured as 64 MB (the default) and you have a file of 100 MB: it will occupy 2 blocks, so 2 mappers will be assigned based on those blocks.

But if you have 2 files of 30 MB each, then each file will occupy one block, and one mapper will be assigned per file.

When you are working with a large number of small files, Hive uses CombineHiveInputFormat by default. In MapReduce terms, this ultimately translates to CombineFileInputFormat, which creates virtual splits over multiple files, grouped by common node and rack when possible. The size of the combined split is determined by:

mapred.max.split.size
mapreduce.input.fileinputformat.split.maxsize (in YARN/MR2)

So if you want fewer splits (fewer mappers), you need to set this parameter higher; an example follows.
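
For example (a sketch; the byte values are illustrative):

-- fewer splits (fewer mappers): raise the cap, e.g. to 256 MB
set mapreduce.input.fileinputformat.split.maxsize=268435456;

-- more splits (more mappers), which is what the question asks for: lower it, e.g. to 32 MB
set mapreduce.input.fileinputformat.split.maxsize=33554432;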

This link can be useful to understand this in more detail:

What is the default size that each Hadoop mapper will read?

Also, the number of mappers and reducers always depends on the available mapper and reducer slots in your cluster.

Doran answered 13/5, 2015 at 18:54 Comment(3)
mapred.map.tasks is deprecated in recent versions of Hadoop. I tried to set both that and the new mapreduce.job.maps to X, but it did not work. Are you sure this would work on Hadoop 2? Also, if the number of splits is 2, is it possible at all to have more mappers than splits? – Layne
I need to check this configuration in Hadoop 2. I have edited my answer with an explanation of how mappers are allotted. I hope it helps you. – Doran
In Hadoop 2, the mapper property is mapreduce.job.maps and the reducer property is mapreduce.job.reduces. – Doran

Reduce the input split size from its default value and the number of mappers will increase.

SET mapreduce.input.fileinputformat.split.maxsize;
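
For example (a sketch; 16 MB = 16777216 bytes is an illustrative value):

-- print the current max split size
SET mapreduce.input.fileinputformat.split.maxsize;

-- lower it, e.g. to 16 MB, so that more splits (and hence more mappers) are created
SET mapreduce.input.fileinputformat.split.maxsize=16777216;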

Woodpecker answered 13/5, 2015 at 18:15 Comment(3)
What's the default value and what should I set it to? Would that work in Hadoop 2 when using HBase as the input? – Layne
Execute the property without any value: SET mapreduce.input.fileinputformat.split.maxsize; This prints the current default max split size. Then reduce the split size from that default by setting SET mapreduce.input.fileinputformat.split.maxsize=*Reduced Value*; – Woodpecker
OK, but WHERE should I execute such a SET command? Directly in the terminal? Directly in the Beeline query? – Poriferous

Splitting the HBase table into more regions should get your job to use more mappers automatically.

Since you have 2 splits, each split is read by one mapper. Increase the number of splits; the HBase shell sketch below shows two ways to do that.
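
A sketch in the HBase shell (the split keys and the new table name are hypothetical; choose keys that match your row-key distribution):

# split an existing region of the table at a chosen row key
split 'my_hbase_table', 'some_middle_rowkey'

# or pre-split a new table at creation time
create 'my_new_table', 'cf', SPLITS => ['a', 'm', 't']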

Irreligious answered 14/5, 2015 at 16:04 Comment(0)
