hadoop/hdfs/name is in an inconsistent state: storage directory(hadoop/hdfs/data/) does not exist or is not accessible
I have tried all the solutions provided on Stack Overflow for this topic, but none of them helped, so I am asking again with the specific log and details.

Any help is appreciated

I have one master node and 5 slave nodes in my Hadoop cluster. The ubuntu user and ubuntu group own the ~/hadoop folder. Both the ~/hadoop/hdfs/data and ~/hadoop/hdfs/name folders exist, and permissions on both folders are set to 755.

I successfully formatted the namenode before starting the start-all.sh script.

THE SCRIPT FAILS TO LAUNCH THE NAMENODE

These are the processes running on the master node and on one of the slave nodes:

ubuntu@master:~/hadoop/bin$ jps

7067 TaskTracker
6914 JobTracker
7237 Jps
6834 SecondaryNameNode
6682 DataNode

ubuntu@slave5:~/hadoop/bin$ jps

31438 TaskTracker
31581 Jps
31307 DataNode

Below is the log from the NameNode log file.

..........

2014-12-03 12:25:45,460 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2014-12-03 12:25:45,461 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 1013645312
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
2014-12-03 12:25:45,532 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2014-12-03 12:25:45,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
2014-12-03 12:25:45,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2014-12-03 12:25:45,588 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2014-12-03 12:25:45,622 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2014-12-03 12:25:45,623 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2014-12-03 12:25:45,716 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2014-12-03 12:25:45,777 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2014-12-03 12:25:45,777 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2014-12-03 12:25:45,785 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name does not exist
2014-12-03 12:25:45,787 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2014-12-03 12:25:45,801 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
Cog answered 3/12, 2014 at 12:38 Comment(0)

Fixed by removing the "file:" prefix from the hdfs-site.xml file.

[WRONG HDFS-SITE.XML]

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hduser/mydata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hduser/mydata/hdfs/datanode</value>
  </property>

[CORRECT HDFS-SITE.XML]

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hduser/mydata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hduser/mydata/hdfs/datanode</value>
  </property>

Thanks to Erik for the help.
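For what it's worth, the log line "Storage directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name does not exist" shows what appears to have gone wrong: on this Hadoop 1.x cluster the directory property was read as a plain local path, so the file: prefix was treated as a relative path and appended under the Hadoop directory. On Hadoop 2.x and later, these properties are documented as fully qualified URIs instead, written with two slashes after the scheme. A sketch of that form, using the same example paths as above:

```xml
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/hduser/mydata/hdfs/datanode</value>
</property>
```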

Cog answered 3/12, 2014 at 14:51 Comment(3)
I have been using the above hdfs-site.xml for the last 2 years and it is working fine for me. – Untried
Thanks for your answer, Kumar. It was helpful. I removed "file:" from your answer and the error went away. – Cog
When not using file: in the path, hdfs namenode -format complains about an incorrect path. – Autodidact

Follow the steps below:

1. Stop all services

2. Format your namenode

3. Delete your datanode directory

4. Start all services
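With the Hadoop 1.x scripts on your PATH, the steps above look roughly as follows. The datanode directory path here is the ~/hadoop/hdfs/data one from the question; substitute your own dfs.data.dir. Note that formatting the namenode erases all HDFS metadata, so only do this on a cluster whose data you can afford to lose:

```shell
stop-all.sh                   # 1. stop all services
hadoop namenode -format       # 2. format the namenode (erases HDFS metadata!)
rm -rf ~/hadoop/hdfs/data/*   # 3. delete the datanode directory contents on each datanode
start-all.sh                  # 4. start all services
```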

Smacker answered 4/12, 2014 at 4:1 Comment(2)
Thanks, I ran into the same problem today. This fixed everything. – Ignition
Where is the datanode directory? – Outage

Run these commands in a terminal:

$ cd ~
$ mkdir -p mydata/hdfs/namenode
$ mkdir -p mydata/hdfs/datanode
$ chmod -R 755 mydata    # give both directories 755 permissions

Then add these properties in conf/hdfs-site.xml:

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hduser/mydata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hduser/mydata/hdfs/datanode</value>
  </property>

If that does not work, restart the daemons:

stop-all.sh
start-all.sh
Untried answered 3/12, 2014 at 12:58 Comment(2)
Thanks for your reply. In the logs: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible. I don't think /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name is correct; it should only be file:/home/ubuntu/hadoop/hdfs/name. Do you have any idea where this additional prefix gets appended from? – Cog
Changed the XML tags. Still the same problem; it is looking for the /home/ubuntu/hadoop/hadoop-data/dfs/name folder, whereas my name folder is in /home/ubuntu/hadoop/hdfs/name. How do I change this configuration? – Cog

1) You should be the owner of the namenode directory, and chmod it 750 appropriately.
2) Stop all services.
3) Use hadoop namenode -format to format the namenode.
4) Add this to hdfs-site.xml:

<property>
    <name>dfs.data.dir</name>
    <value>path/to/hadooptmpfolder/dfs/name/data</value>
    <final>true</final>
</property>
<property>
    <name>dfs.name.dir</name>
    <value>path/to/hadooptmpfolder/dfs/name</value>
    <final>true</final>
</property>

5) To be able to run hadoop namenode -format, add export PATH=$PATH:/usr/local/hadoop/bin/ to ~/.bashrc (wherever Hadoop was unzipped, add its bin directory to the PATH).

Sea answered 20/4, 2016 at 12:28 Comment(0)

I had a similar problem; I formatted the namenode and then started it:

hadoop namenode -format
hadoop-daemon.sh start namenode
Fretted answered 25/8, 2017 at 15:3 Comment(1)
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode -format and then $HADOOP_HOME/sbin/hadoop-daemon.sh start namenode – Trypsin

You can follow the steps below to remove this error:

  1. Stop all Hadoop daemons.
  2. Delete all files from the directories /tmp/hadoop-{user}/dfs/name/current and /tmp/hadoop-{user}/dfs/data/current, where {user} is the user you are logged in as.
  3. Format the namenode.
  4. Start all services.
  5. You will now see a new VERSION file created in the directory /tmp/hadoop-{user}/dfs/name/current.

One thing to note here is that the value of clusterID in the file /tmp/hadoop-eip/dfs/name/current/VERSION must be the same as in /tmp/hadoop-eip/dfs/data/current/VERSION.
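A quick way to check this is to grep the clusterID line out of both VERSION files and compare them. The sketch below is self-contained: it fabricates two mock VERSION files in a temp directory purely for illustration; on a real node, point name_version and data_version at the actual files under your dfs.name.dir and dfs.data.dir instead (e.g. /tmp/hadoop-$USER/dfs/name/current/VERSION):

```shell
# Mock setup for illustration only: fabricate two VERSION files.
tmp=$(mktemp -d)
mkdir -p "$tmp/name/current" "$tmp/data/current"
printf 'namespaceID=12345\nclusterID=CID-abc123\n' > "$tmp/name/current/VERSION"
printf 'storageID=DS-1\nclusterID=CID-abc123\n' > "$tmp/data/current/VERSION"

# On a real node, set these to the actual VERSION file paths.
name_version="$tmp/name/current/VERSION"
data_version="$tmp/data/current/VERSION"

# Extract and compare the clusterID lines.
name_id=$(grep '^clusterID=' "$name_version")
data_id=$(grep '^clusterID=' "$data_version")

if [ "$name_id" = "$data_id" ]; then
    echo "clusterIDs match: $name_id"
else
    echo "clusterID MISMATCH: name has '$name_id', data has '$data_id'"
fi
```

If the two values differ, the datanode typically refuses to register with the namenode, which is why reformatting (or manually reconciling the VERSION files) is needed.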

-Hitesh

Crumby answered 31/5, 2021 at 5:52 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.