Amazon Elastic MapReduce - SIGTERM
I have an EMR streaming job (Python) which normally works fine (e.g. 10 machines processing 200 inputs). However, when I run it against large data sets (12 machines processing a total of 6000 inputs, at about 20 seconds per input), after 2.5 hours of crunching I get the following error:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 143
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:372)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:586)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:441)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:377)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.Child.main(Child.java:249)

If I am reading this correctly, the subprocess exited with code 143 because something sent a SIGTERM signal to my streaming process (143 = 128 + 15, and signal 15 is SIGTERM).

Is my understanding correct? If so: When would the EMR infrastructure send a SIGTERM?
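
For reference, a quick way to decode an exit status like this (nothing EMR-specific, just the usual shell convention of reporting death-by-signal as 128 + signal number; Python 3 shown):

import signal

exit_code = 143
if exit_code > 128:
    sig_num = exit_code - 128             # 143 - 128 = 15
    print(signal.Signals(sig_num).name)   # prints "SIGTERM"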

Myogenic answered 15/8, 2012 at 13:59
Did you check the CloudWatch metrics to see if you are hitting some type of I/O limit? From my experience, once you hit the I/O limit some strange things start to happen. I don't know which instance type you were using for your data nodes, but I'd suggest upgrading to something with faster I/O performance when running bigger jobs. – Nuptials
The thing is that each task is CPU-bound, with rare and sporadic I/O: it loads a file from S3, then does about 20 seconds of heavy CPU processing, and every 5 seconds it stores another (intermediate) file to S3. It uses some external libraries (lxml, scikit-learn), and I was thinking that maybe one of them was failing me (a spike in memory consumption?) and the EMR infrastructure was sending a SIGTERM. To verify that, I am trying to understand the cases/scenarios in which EMR may SIGTERM a process. – Myogenic

I figured out what was happening, so here's some information if anyone else experiences similar problems.

The key for me was to look at the "jobtracker" logs. These live in your task's logs/ folder on S3, under:

<logs folder>/daemons/<id of node running jobtracker>/hadoop-hadoop-jobtracker-XXX.log.
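
(In case it helps anyone scripting this, here is a rough sketch of pulling those logs down with boto; the bucket name and prefix are placeholders for your own log location.)

import boto

conn = boto.connect_s3()
bucket = conn.get_bucket("my-emr-logs")   # placeholder bucket name
prefix = "my-jobflow/daemons/"            # placeholder log prefix

# Grab every daemon log that looks like a jobtracker log.
for key in bucket.list(prefix=prefix):
    if "jobtracker" in key.name:
        key.get_contents_to_filename(key.name.split("/")[-1])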

There were multiple lines of the following kind:

2012-08-21 08:07:13,830 INFO org.apache.hadoop.mapred.TaskInProgress 
  (IPC Server handler 29 on 9001): Error from attempt_201208210612_0001_m_000015_0: 
  Task attempt_201208210612_0001_m_000015_0 failed to report status 
  for 601 seconds. Killing!

So my code was timing out and being killed: it was exceeding the 10-minute task timeout. Going 10 minutes without any I/O was certainly not expected (I would typically do an I/O every 20 seconds).
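
As an aside, if a mapper legitimately needs long quiet stretches, Hadoop Streaming lets you reset that timeout clock by writing specially formatted lines to stderr. A minimal sketch of such a heartbeat in a Python mapper (process() is a stand-in for the real per-input work):

import sys
import time

def report_status(message):
    # Hadoop Streaming treats stderr lines of the form
    # "reporter:status:<message>" as progress reports, which resets
    # the mapred.task.timeout clock (600 seconds by default).
    sys.stderr.write("reporter:status:%s\n" % message)
    sys.stderr.flush()

def process(record):
    time.sleep(1)   # stand-in for the real CPU-heavy work
    return record

for line in sys.stdin:
    report_status("working on %s" % line[:40].strip())
    sys.stdout.write(process(line))

Note that this only helps if the Python code regains control often enough to emit those lines; it will not save you from a C-level hang inside a library, which is what turned out to be my problem (see below).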

I then discovered this article:

http://devblog.factual.com/practical-hadoop-streaming-dealing-with-brittle-code

"In one of our science projects, we have a few Hadoop Streaming jobs that run over ruby and rely on libxml to parse documents. This creates a perfect storm of badness – the web is full of really bad html and libxml occasionally goes into infinite loops or outright segfaults. On some documents, it always segfaults."

That nailed it. I must be hitting one of these "libxml going into an infinite loop" situations (I use libxml heavily too -- only from Python, not Ruby).
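
One defensive pattern worth mentioning (not what I ended up doing) is to put a hard time limit around each parse, so a runaway document fails on its own instead of hanging until Hadoop kills the whole task. A rough sketch, assuming lxml and a Unix worker:

import signal
from lxml import html

class ParseTimeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise ParseTimeout()

def parse_with_timeout(raw, seconds=30):
    # Caveat: SIGALRM is only delivered to Python between bytecodes, so if
    # libxml is looping inside C the alarm may not fire until it returns.
    # Running the parse in a child process with a hard kill is more robust.
    signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        return html.fromstring(raw)
    finally:
        signal.alarm(0)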

The final step for me was to trigger skip mode (instructions here: Setting hadoop parameters with boto?).
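
For completeness, here is roughly what that looks like with boto's StreamingStep. The S3 paths are placeholders, and the exact skip-mode parameter names (the mapred.skip.* family) depend on your Hadoop version, so treat this as a sketch rather than something to copy verbatim:

from boto.emr import connect_to_region
from boto.emr.step import StreamingStep

step = StreamingStep(
    name="crunch inputs (skip mode on)",
    mapper="s3n://my-bucket/code/mapper.py",    # placeholder paths
    reducer="s3n://my-bucket/code/reducer.py",
    input="s3n://my-bucket/input/",
    output="s3n://my-bucket/output/",
    step_args=[
        # Start skipping after 2 failed attempts, skip at most one
        # record around the bad one, and allow extra task attempts.
        "-jobconf", "mapred.skip.attempts.to.start.skipping=2",
        "-jobconf", "mapred.skip.map.max.skip.records=1",
        "-jobconf", "mapred.map.max.attempts=6",
    ],
)

conn = connect_to_region("us-east-1")
conn.run_jobflow(name="my job flow", steps=[step], num_instances=12)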

Myogenic answered 22/8, 2012 at 11:21
Thank you for posting the answer to your own problem. This helped me debug a similar one I was having. – Styrene
Ditto. I was running an mrjob Hadoop job that failed with no output other than what was posted in the original question. This was the only useful log that helped me find the root cause (the hadoop2 mapper container was running out of memory). – Occupier

I ran into this output from Amazon EMR ("subprocess failed with code 143") as well. My streaming job was using PHP curl to send data to a server whose security group did not include the MapReduce job servers, so the reducer was timing out and being killed. Ideally I'd add my jobs to the same security group, but I opted to simply add a URL security-token parameter in front of my API.
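
PHP specifics aside, the general lesson is to put an explicit timeout on any outbound network call made from a streaming task, so a blocked connection fails fast (and can be counted) instead of hanging until Hadoop's task timeout kills the process. A small Python sketch of the same idea (the URL handling and counter name are made up for illustration):

import sys
from urllib import error, request

def post_with_timeout(url, payload, timeout=10):
    # Fail fast instead of hanging until the Hadoop task timeout kills us.
    try:
        return request.urlopen(url, data=payload, timeout=timeout).read()
    except error.URLError:
        # "reporter:counter:..." lines on stderr increment a Hadoop counter.
        sys.stderr.write("reporter:counter:myjob,post_errors,1\n")
        return None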

Bocage answered 6/5, 2013 at 20:48
