Container is running beyond memory limits

In Hadoop v1, I assigned each of my 7 mapper and reducer slots a size of 1 GB, and my mappers and reducers ran fine. The machine has 8 GB of memory and 8 processors. Now with YARN, when I run the same application on the same machine, I get a container error. By default, I have these settings:

  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>

It gave me error:

Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.

I then tried to set the memory limits in mapred-site.xml:

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>

But I still get this error:

Container [pid=26783,containerID=container_1389136889967_0009_01_000002] is running beyond physical memory limits. Current usage: 4.2 GB of 4 GB physical memory used; 5.2 GB of 8.4 GB virtual memory used. Killing container.

I'm confused about why the map task needs this much memory. In my understanding, 1 GB of memory is enough for my map/reduce tasks. Why does the task use more memory as I assign more memory to the container? Is it because each task gets more splits? I feel it would be more efficient to shrink the containers a little and create more of them, so that more tasks run in parallel. The problem is: how can I make sure each container won't be assigned more splits than it can handle?

Kook answered 8/1, 2014 at 20:18 Comment(2)
possible duplicate of Hadoop Yarn Container Does Not Allocate Enough SpaceSkantze
Hi! Is your config 'yarn.nodemanager.vmem-pmem-ratio=2'?Colonel

You should also properly configure the maximum memory allocations for MapReduce. From this HortonWorks tutorial:

[...]

Each machine in our cluster has 48 GB of RAM. Some of this RAM should be reserved for Operating System usage. On each node, we’ll assign 40 GB RAM for YARN to use and keep 8 GB for the Operating System.

For our example cluster, we have the minimum RAM for a Container (yarn.scheduler.minimum-allocation-mb) = 2 GB. We’ll thus assign 4 GB for Map task Containers, and 8 GB for Reduce tasks Containers.

In mapred-site.xml:

mapreduce.map.memory.mb: 4096

mapreduce.reduce.memory.mb: 8192

Each Container will run JVMs for the Map and Reduce tasks. The JVM heap size should be set to lower than the Map and Reduce memory defined above, so that they are within the bounds of the Container memory allocated by YARN.

In mapred-site.xml:

mapreduce.map.java.opts: -Xmx3072m

mapreduce.reduce.java.opts: -Xmx6144m

The above settings configure the upper limit of the physical RAM that Map and Reduce tasks will use.

To sum it up:

  1. In YARN, you should use the mapreduce configs, not the mapred ones. EDIT: This comment is not applicable anymore now that you've edited your question.
  2. What you are configuring is actually how much memory you want to request, not the maximum that can be allocated.
  3. The max limits are configured with the java.opts settings listed above.
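
As a concrete illustration scaled down to the question's 8 GB node, the request/heap pairing might look like this in mapred-site.xml (the sizes below are assumptions for illustration, keeping each JVM heap at roughly 80% of its container request; tune them to your own workload):

  <!-- Container requests (what YARN allocates per task attempt) -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>
  <!-- JVM heap caps, kept below the container requests above -->
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1638m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx3277m</value>
  </property>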

Finally, you may want to check this other SO question that describes a similar problem (and solution).

Rheology answered 8/1, 2014 at 22:51 Comment(10)
Yes, setting mapreduce.map.java.opts and mapreduce.reduce.java.opts solved my problem. Do you know if the actual memory assigned to a task is defined only by mapreduce.map/reduce.memory.mb? How does yarn.scheduler.minimum-allocation-mb affect the actual memory assignment?Kook
@lishu, if that helped, then please accept the answer. About your last question, the yarn setting applies to any container allocation in the cluster; this includes map and reduce tasks, but also containers from other types of applications. The mapreduce settings apply only to mapreduce jobs.Rheology
@cabad, I develop a lib that Lishu is using. I was wondering if you'd change anything in your answer knowing that the MR task is spawning a process that's actually allocating most of the memory (hadoop streaming). Certainly the Xmx setting doesn't affect the external process, since it isn't a java program. Thanks for your help.Middlings
@piccolbo: Hmm... then I think you have a memory leak. You can post your streaming code (as a separate question) and we could help you fix it. BTW, also see this other SO question for an explanation on these properties.Rheology
There is now a handy tool from Hortonworks called hdp-configuration-utils to get recommended values. Get it from github.com/hortonworks/hdp-configuration-utilsLavonia
Can you explain a bit what 'yarn.scheduler.maximum-allocation-mb', 'yarn.scheduler.minimum-allocation-vcores', 'mapreduce.task.timeout' and 'mapreduce.task.io.sort.mb' should be, using your example above?Haha
Reducers are getting stuckHaha
@MurtazaKanchwala See selle's comment right above yours. That tool will let you get the recommended values.Rheology
If applying the proper memory configuration didn't fix the problem (as in my case: it worked on Hadoop running on Ubuntu but not on CentOS), try disabling the vmem check: blog.cloudera.com/blog/2014/04/…Deceive

There is a check at the YARN level for the ratio of virtual to physical memory usage. The issue is not just that the VM doesn't have sufficient physical memory; it is that virtual memory usage is higher than expected for the given physical memory.

Note: this happens on CentOS/RHEL 6 due to its aggressive allocation of virtual memory.

It can be resolved by either:

  1. Disabling the virtual memory usage check by setting yarn.nodemanager.vmem-check-enabled to false; or

  2. Increasing the virtual-to-physical memory ratio by setting yarn.nodemanager.vmem-pmem-ratio to a higher value.
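
To make the check concrete: with the default yarn.nodemanager.vmem-pmem-ratio of 2.1, a container requesting 1 GB of physical memory is allowed about 1 GB × 2.1 ≈ 2.1 GB of virtual memory, which matches the "2.2 GB of 2.1 GB virtual memory used" limit in the question's first error.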

References:

https://issues.apache.org/jira/browse/HADOOP-11364

http://blog.cloudera.com/blog/2014/04/apache-hadoop-yarn-avoiding-6-time-consuming-gotchas/

Add the following properties in yarn-site.xml:

  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
    <description>Whether virtual memory limits will be enforced for containers</description>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
    <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
  </property>
Laos answered 16/7, 2015 at 9:23 Comment(2)
Thanks also for the CentOS hint :-)Rammish
Setting yarn.nodemanager.vmem-check-enabled to false on all nodes in the cluster and restarting Hadoop services fixed this issueMargherita

I had a very similar issue using Hive on EMR. None of the existing solutions worked for me; i.e., none of the mapreduce configurations helped, and neither did setting yarn.nodemanager.vmem-check-enabled to false.

However, what ended up working was setting tez.am.resource.memory.mb, for example:

hive -hiveconf tez.am.resource.memory.mb=4096

Another setting to consider tweaking is yarn.app.mapreduce.am.resource.mb.
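
If the query runs on the MapReduce engine rather than Tez, that knob can presumably be passed the same way (a sketch, not something I have verified; 4096 is just a placeholder value):

hive -hiveconf yarn.app.mapreduce.am.resource.mb=4096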

Wendall answered 9/11, 2016 at 23:41 Comment(2)
Um @hiroprotagonist, do you know if "tweaking" the yarn parameter has to happen before YARN starts up or if it's only used at application time (and could be changed from one job to the next)?Molybdenous
I've been able to set it at application time, specifically within the Hive interactive console.Wendall

I can't comment on the accepted answer due to low reputation. However, I would like to add that this behavior is by design. The NodeManager is killing your container. It sounds like you are using Hadoop streaming, which runs as a child process of the map-reduce task. The NodeManager monitors the entire process tree of the task, and if it eats up more memory than the maximum set in mapreduce.map.memory.mb or mapreduce.reduce.memory.mb respectively, we would expect the NodeManager to kill the task; otherwise your task would be stealing memory belonging to other containers, which you don't want.
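
If you are indeed using streaming, one workaround sketch (the jar path, input/output paths, and script names below are placeholders) is to request larger containers so the whole process tree, child process included, stays under the limit:

hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -D mapreduce.map.memory.mb=2048 \
    -D mapreduce.reduce.memory.mb=2048 \
    -files my_mapper.py,my_reducer.py \
    -input /data/in -output /data/out \
    -mapper my_mapper.py -reducer my_reducer.py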

Fabled answered 15/8, 2014 at 3:51 Comment(0)

While working with Spark on EMR I had the same problem, and setting maximizeResourceAllocation=true did the trick; hope it helps someone. You have to set it when you create the cluster. From the EMR docs:

aws emr create-cluster --release-label emr-5.4.0 --applications Name=Spark \
--instance-type m3.xlarge --instance-count 2 --service-role EMR_DefaultRole --ec2-attributes InstanceProfile=EMR_EC2_DefaultRole --configurations https://s3.amazonaws.com/mybucket/myfolder/myConfig.json

Where myConfig.json should say:

[
  {
    "Classification": "spark",
    "Properties": {
      "maximizeResourceAllocation": "true"
    }
  }
]
Fundament answered 19/4, 2017 at 21:21 Comment(1)
The maximizeResourceAllocation is specific to Amazon EMRInterfaith

We also faced this issue recently. If the issue is related to mapper memory, here are a couple of things I would suggest checking.

  • Check whether the combiner is enabled. If it is, the reduce logic has to run on all the records output by the mapper, and this happens in memory. Based on your application, you need to check whether enabling the combiner helps or not. The trade-off is between the network transfer bytes and the time/memory/CPU taken by the reduce logic on 'X' records.
    • If you feel the combiner adds little value, just disable it.
    • If you need the combiner and 'X' is a huge number (say millions of records), then consider changing your split logic (for the default input formats, use a smaller block size; normally 1 block = 1 split) so that fewer records are mapped to a single mapper.
  • Check the number of records processed in a single mapper. Remember that all these records need to be sorted in memory (the mapper output is sorted). Consider setting mapreduce.task.io.sort.mb (default is 200MB) to a higher value if needed; see the sketch after this list.
  • If none of the above helps, try running the mapper logic as a standalone application and profiling it with a profiler (like JProfiler) to see where the memory is being used. This can give you very good insights.
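
As a rough sketch of the sort-buffer and split-size tweaks mentioned above (the property names are standard MapReduce settings, but the values are illustrative assumptions only; capping the split size is an alternative to shrinking the HDFS block size):

  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>400</value>
    <description>Larger in-memory sort buffer for map output (illustrative value)</description>
  </property>
  <property>
    <name>mapreduce.input.fileinputformat.split.maxsize</name>
    <value>67108864</value>
    <description>Cap each input split at 64 MB so fewer records reach a single mapper (illustrative value)</description>
  </property>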
Coenocyte answered 13/6, 2018 at 19:47 Comment(0)

Running YARN on the Windows Subsystem for Linux with Ubuntu, I got the error "running beyond virtual memory limits, Killing container". I resolved it by disabling the virtual memory check in yarn-site.xml:

  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
Rundle answered 9/3, 2020 at 19:58 Comment(2)
On WSL, the error message has absurd numbers (at least for me): "... is running beyond virtual memory limits. Current usage: 338.8 MB of 2 GB physical memory used; 481.1 GB of 4.2 GB virtual memory used. Killing container."Ferd
@SamikR Yes, I have a similar situation. I guess it is not a Hadoop issue but a WSL issue. Maybe I need to move the demo to a real Linux machineLilah

I am practicing Hadoop programs (Hadoop 3). I installed Linux via VirtualBox and allocated very limited memory during installation. After setting the following memory limit properties in mapred-site.xml and restarting HDFS and YARN, my program worked.

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>
Squama answered 24/4, 2021 at 16:15 Comment(0)

I haven't checked it personally, but the article hadoop-yarn-container-virtual-memory-understanding-and-solving-container-is-running-beyond-virtual-memory-limits-errors sounds very reasonable.

I solved the issue by changing yarn.nodemanager.vmem-pmem-ratio to a higher value, and I would agree that:

Another less recommended solution is to disable the virtual memory check by setting yarn.nodemanager.vmem-check-enabled to false.

Wira answered 31/8, 2020 at 11:1 Comment(0)
