Error when starting the namenode

When I try to start Hadoop on the master node, I get the following output, and the namenode does not start.

[hduser@dellnode1 ~]$ start-dfs.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-dellnode1.library.out
dellnode1.library: datanode running as process 5123. Stop it first.
dellnode3.library: datanode running as process 4072. Stop it first.
dellnode2.library: datanode running as process 4670. Stop it first.
dellnode1.library: secondarynamenode running as process 5234. Stop it first.
[hduser@dellnode1 ~]$ jps
5696 Jps
5123 DataNode
5234 SecondaryNameNode
Puli answered 11/1, 2013 at 7:31 Comment(3)
Did you check the namenode log (default in $HADOOP_HOME/logs, I think)? Most of the time the info in there is pretty clear. – Physic
Can you share your log files? – Rabelais
Rather than using jps (which only shows processes for the current user), can you run ps axww | grep hadoop on both your cluster nodes (dellnode1 and dellnode2) and paste that output back into your original question? – Akkad
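
As the last comment hints, jps only lists JVM processes owned by the current user, so a daemon started by another user would be invisible to it. A quick check that sees every user's processes, sketched here assuming the master can ssh to the short hostnames from the question:

# jps misses other users' daemons; ps does not.
# Run on each node, or from the master over ssh:
for host in dellnode1 dellnode2 dellnode3; do
  echo "== $host =="
  ssh "$host" "ps axww | grep '[h]adoop'"   # [h] keeps grep from matching itself
done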

"Stop it first".

  • First call stop-all.sh

  • Type jps to confirm nothing Hadoop-related is still running

  • Call start-all.sh (or start-dfs.sh and start-mapred.sh)

  • Type jps again (if the namenode doesn't appear, run "hadoop namenode" in the foreground and check the error; see the sketch after this list)
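
Put together, a minimal sketch of that restart sequence (assuming a classic Hadoop 1.x install with the bin scripts on the PATH, as in the question):

stop-all.sh        # stop every daemon that is still running
jps                # confirm nothing Hadoop-related is left
start-all.sh       # or: start-dfs.sh && start-mapred.sh
jps                # NameNode, DataNode, SecondaryNameNode should appear
hadoop namenode    # only if NameNode is missing: run it in the foreground and read the error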

Delrosario answered 17/8, 2013 at 8:45 Comment(2)
What should a typical output look like? I am getting only 15845 Jps. unix.stackexchange.com/questions/257279/validate-start-dfs-sh – Aalto
This method is deprecated, so stop-dfs.sh, stop-yarn.sh, start-dfs.sh, and start-yarn.sh are preferred. – Paediatrician

Running "stop-all.sh" on newer versions of Hadoop warns that it is deprecated. You should instead use:

stop-dfs.sh

and

stop-yarn.sh
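
For completeness, the matching start scripts follow the same naming, so a full restart on a YARN-era install would look roughly like this (a sketch; it assumes the sbin scripts are on your PATH):

stop-yarn.sh       # stop the YARN daemons first
stop-dfs.sh        # then stop HDFS
start-dfs.sh       # bring HDFS back up
start-yarn.sh      # then YARN
jps                # verify NameNode, DataNode, ResourceManager, NodeManager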

Phyte answered 16/11, 2015 at 17:22 Comment(0)

Today, while executing Pig scripts, I got the same error mentioned in the question:

starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-namenode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: 
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-datanode-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: 
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-jobtracker-localhost.localdomain.out
localhost: /home/training/.bashrc: line 10: /jdk1.7.0_10/bin: No such file or directory
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: 
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-training-tasktracker-localhost.localdomain.out

So, the answer is:

[training@localhost bin]$ stop-all.sh

and then type:

[training@localhost bin]$ start-all.sh

The issue will be resolved. Now you can run the Pig script with MapReduce!
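
One caveat: the log above also complains about line 10 of ~/.bashrc, where /jdk1.7.0_10/bin resolves against the filesystem root, which usually means a JAVA_HOME-style prefix on that line expanded to nothing. A hypothetical fix, assuming the JDK actually lives under /usr/java (the exact path is a guess, adjust it to your system):

# ~/.bashrc -- hypothetical corrected lines; verify the JDK location first
export JAVA_HOME=/usr/java/jdk1.7.0_10
export PATH=$PATH:$JAVA_HOME/bin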

Gadson answered 18/11, 2015 at 0:24 Comment(0)

On a Mac (if you installed using Homebrew), where 3.0.0 is the Hadoop version. On Linux, change the installation path accordingly (only the /usr/local/Cellar/ part changes):

> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh
> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh
> /usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh

Better yet, for power users: add this alias at the end of your ~/.bashrc or ~/.zshrc (if you are a zsh user), and just type hstop from your command line every time you want to stop Hadoop and all the related processes.

alias hstop="/usr/local/Cellar/hadoop/3.0.0/sbin/stop-yarn.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-dfs.sh;/usr/local/Cellar/hadoop/3.0.0/sbin/stop-all.sh"
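
After adding the alias, reload the shell config once and the shortcut is ready (hstop is just the name chosen above):

source ~/.bashrc    # or ~/.zshrc for zsh
hstop               # runs the three stop scripts in sequence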
Lindbom answered 21/4, 2018 at 23:8 Comment(0)
