Today, while executing Pig scripts, I got the same error mentioned in the question:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-namenode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-datanode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-jobtracker-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost:
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-tasktracker-localhost.localdomain.out
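As an aside, the log above also shows a broken PATH entry: line 10 of ~/.bashrc resolves to /jdk1.7.0_10/bin, which usually means the variable that should prefix the JDK path (most likely $JAVA_HOME) is empty when the file is sourced. A minimal sketch of a fix, assuming the JDK is installed under /usr/java (adjust the path to wherever your JDK actually lives):

export JAVA_HOME=/usr/java/jdk1.7.0_10   # assumed location; point this at your real JDK
export PATH=$PATH:$JAVA_HOME/bin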
So, the answer is to restart the Hadoop daemons:
[training@localhost bin]$ stop-all.sh
and then type:
[training@localhost bin]$ start-all.sh
The issue will be resolved, and you can run the Pig script with MapReduce again!
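To confirm the restart actually brought everything back, list the running daemons with jps; on a Hadoop 1.x single-node setup all five should show up (the PIDs below are only illustrative):

[training@localhost bin]$ jps   # PIDs will differ on your machine
4856 NameNode
4997 DataNode
5151 SecondaryNameNode
5243 JobTracker
5391 TaskTracker
5502 Jps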
For reference, comments on the question also suggested some diagnostics:

"Have you looked at the logs (in $HADOOP_HOME/logs, I think)? Most of the time the info in there is pretty clear." – Physic

"Run ps axww | grep hadoop on both your cluster nodes (dellnode1 and dellnode2) and paste that output back into your original question." – Akkad