Error on running multiple Workflow in OOZIE-4.1.0
I installed Oozie 4.1.0 on a Linux machine by following the steps at http://gauravkohli.com/2014/08/26/apache-oozie-installation-on-hadoop-2-4-1/

hadoop version - 2.6.0 
maven - 3.0.4 
pig - 0.12.0

Cluster setup -

Master node running - NameNode, ResourceManager, ProxyServer.

Slave node running - DataNode, NodeManager.

When I run a single workflow job, it succeeds. But when I try to run more than one workflow job, both jobs get stuck in the ACCEPTED state.

Inspecting the error log, I narrowed the problem down to:

2014-12-24 21:00:36,758 [JobControl] INFO  org.apache.hadoop.ipc.Client  - Retrying connect to server: 172.16.***.***/172.16.***.***:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-12-25 09:30:39,145 [communication thread] INFO  org.apache.hadoop.ipc.Client  - Retrying connect to server: 172.16.***.***/172.16.***.***:52406. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-12-25 09:30:39,199 [communication thread] INFO  org.apache.hadoop.mapred.Task  - Communication exception: java.io.IOException: Failed on local exception: java.net.SocketException: Network is unreachable: no further information; Host Details : local host is: "SystemName/127.0.0.1"; destination host is: "172.16.***.***":52406; 
 at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
 at org.apache.hadoop.ipc.Client.call(Client.java:1415)
 at org.apache.hadoop.ipc.Client.call(Client.java:1364)
 at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:231)
 at $Proxy9.ping(Unknown Source)
 at org.apache.hadoop.mapred.Task$TaskReporter.run(Task.java:742)
 at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.SocketException: Network is unreachable: no further information
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
 at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
 at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
 at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
 at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
 at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
 at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
 at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
 at org.apache.hadoop.ipc.Client.call(Client.java:1382)
 ... 5 more

Heart beat
Heart beat
.
.

In the above running jobs, if I kill either launcher job manually (hadoop job -kill <launcher-job-id>), all jobs succeed. So I think the problem is that when more than one launcher job runs simultaneously, the jobs deadlock.

Does anyone know the reason for and solution to this problem? Please help as soon as possible.

Riendeau answered 26/12, 2014 at 7:4 Comment(6)
Have you got the networking worked out right? If you have installed a local cluster, shouldn't it try to connect to localhost? – Hame
Hi, thanks for the reply. I installed Hadoop on a two-node cluster with the architecture mentioned above. – Modla
I also found a partial solution to my problem. If I run two workflow jobs, two launcher jobs are created and neither succeeds. But if I kill either launcher job manually with "hadoop job -kill <launcher-job-id>", both MapReduce programs succeed. However, the Oozie site then shows the killed launcher job's status as KILLED. So the exact problem is that two launcher programs cannot run at the same time. – Modla
Maybe the launcher programs are trying to run on 127.0.0.1 on the same machine, instead of on different nodes. From the error message, it seems the port for a launcher is inaccessible. – Slr
Hi kalai, it is running on the cluster. The ResourceManager shows two active nodes, and when I run a single MapReduce program I can see it running on the second NodeManager as well. – Modla
Change the queue: set queueName=newqueue in job.properties. – Baptism

The problem is with the queue. When we run the jobs in the SAME QUEUE (default) with the above cluster setup, the ResourceManager is responsible for running the MapReduce jobs on the slave node. Due to the lack of resources on the slave node, the jobs running in that queue deadlock.

To overcome this issue, we need to split the MapReduce jobs by triggering them in different queues.


You can do this by setting this part in the pig action inside your Oozie workflow.xml:

<configuration>
  <property>
    <name>mapreduce.job.queuename</name>
    <value>launcher2</value>
  </property>
</configuration>

NOTE: This solution is only for a SMALL CLUSTER SETUP.
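For context, a minimal sketch of where that configuration element sits inside a pig action (the workflow, action, and script names here are hypothetical; in the Oozie pig action schema, `configuration` goes after `job-tracker`/`name-node` and before `script`):

```xml
<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.4">
  <start to="pig-node"/>
  <action name="pig-node">
    <pig>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <!-- queue override goes here, before <script> -->
      <configuration>
        <property>
          <name>mapreduce.job.queuename</name>
          <value>launcher2</value>
        </property>
      </configuration>
      <script>script.pig</script>
    </pig>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Pig failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>
  <end name="end"/>
</workflow-app>
```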

Civil answered 6/1, 2015 at 4:56 Comment(3)
Hi karthi, thanks, it's working now. But I need to know what happens if the memory used becomes greater than the total memory (assume I run more than 10 jobs concurrently)? – Modla
Jobs in the queue will wait for memory; this happens when we run all the jobs in the same queue. For that reason you should run the jobs in different queues according to the memory. When jobs run in different queues, the memory is automatically released once the jobs in one queue finish their MapReduce work. – Civil
Where exactly do I need to put that configuration inside workflow.xml? I hate when people leave out such important details. This costs people hours upon hours of tinkering around and trying to understand what they're supposed to do... – Thermomotor

I tried the solution below and it works perfectly for me.

1) Change the Hadoop scheduler type from the Capacity Scheduler to the Fair Scheduler, because on a small cluster each queue is assigned a fixed amount of memory (2048 MB) to complete a single MapReduce job. If more than one MapReduce job runs in a single queue, they deadlock.

Solution: add the properties below to yarn-site.xml:

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>file:/%HADOOP_HOME%/etc/hadoop/fair-scheduler.xml</value>
  </property>
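The allocation file referenced above is not shown in the answer; a minimal hypothetical fair-scheduler.xml (queue names and resource limits are assumptions, matching the launcher queues used elsewhere in this thread) might look like:

```xml
<?xml version="1.0"?>
<allocations>
  <!-- one pool per launcher queue, so concurrent launchers get separate shares -->
  <queue name="launcher1">
    <minResources>2048 mb, 1 vcores</minResources>
  </queue>
  <queue name="launcher2">
    <minResources>2048 mb, 1 vcores</minResources>
  </queue>
</allocations>
```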

2) By default, Hadoop allots a total of 8 GB of memory per NodeManager.

So if we run two MapReduce programs, the memory used by Hadoop exceeds 8 GB and the jobs deadlock.

Solution: increase the total memory of the NodeManager using the following properties in yarn-site.xml:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>20960</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>

So if a user wants to run more than two MapReduce programs, they need to add NodeManagers or increase Hadoop's total memory. (Note: increasing the size reduces the memory left for the rest of the system. With the properties above, about 10 MapReduce programs can run concurrently.)
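As a rough sanity check on that "10 concurrent programs" figure (this assumes each launcher/task container takes the full 2048 MB maximum allocation; real container sizes depend on job configuration):

```python
# Estimate how many containers fit on one NodeManager, using the
# yarn-site.xml values above (an upper bound, not a guarantee).
nodemanager_memory_mb = 20960    # yarn.nodemanager.resource.memory-mb
max_allocation_mb = 2048         # yarn.scheduler.maximum-allocation-mb

max_concurrent_containers = nodemanager_memory_mb // max_allocation_mb
print(max_concurrent_containers)  # → 10
```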

Riendeau answered 9/1, 2015 at 7:1 Comment(0)