HttpFS error: Operation category READ is not supported in state standby

I am working with Apache Hadoop 2.7.1 and have a cluster that consists of three nodes:

nn1
nn2
dn1

nn1 is the dfs.default.name, so it is the master NameNode.

I have installed HttpFS and started it (after restarting all the services, of course). When nn1 is active and nn2 is standby, I can send this request

http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root

from my browser, and an open/save dialog for the file appears. But when I kill the NameNode running on nn1 and start it again, high availability takes over: nn1 becomes standby and nn2 becomes active.
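
The equivalent request from the command line (assuming no Kerberos/SPNEGO authentication is enabled on HttpFS) would be:

curl -i -L "http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root"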

HttpFS should still work here even though nn1 is in standby, but sending the same request now

http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root

gives me the error

{"RemoteException":{"message":"Operation category READ is not supported in state standby","exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException"}}

Shouldn't HttpFS cope with nn1 being in standby and still serve the file? Is this caused by a wrong configuration, or is there some other reason?

The relevant part of my core-site.xml is

<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
Cymar answered 11/4, 2017 at 8:10 Comment(3)
Assuming nn1 is where the HttpFS server is running, can you confirm whether either one of the nodes is in the active state? – Nikaniki
nn1 is standby and nn2 is active; I can check this with hdfs haadmin -getServiceState. – Cymar
I have the same problem. I checked the status of my NameNodes by running hdfs haadmin -getAllServiceState, and the returned state is "active" for one of them, but the problem persists nevertheless. – Countermeasure

It looks like your HttpFS is not high-availability aware yet. This could be due to the missing configuration that clients need in order to connect to the currently active NameNode.

Ensure the fs.defaultFS property in core-site.xml is configured with the correct nameservice ID.

If you have the following in hdfs-site.xml

<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>

then in core-site.xml, it should be

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

Also configure the name of the Java class that the DFS client will use to determine which NameNode is currently active and serving client requests.

Add this property to hdfs-site.xml

<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

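For completeness, the HA client configuration also needs the NameNode IDs and their RPC addresses in hdfs-site.xml. The sketch below assumes the nn1/nn2 hostnames from the question and the default 8020 RPC port; adjust these to match your cluster:

<!-- NameNode IDs and RPC addresses for the nameservice (hostnames and port are assumptions) -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2:8020</value>
</property>
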
Restart the NameNodes and HttpFS after adding these properties on all nodes.
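
On Hadoop 2.7.x this can be done with the bundled scripts; the commands below are a sketch assuming a standard tarball install with HADOOP_HOME set:

# run on each NameNode host
$HADOOP_HOME/sbin/hadoop-daemon.sh stop namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
# run on the host where HttpFS is installed
$HADOOP_HOME/sbin/httpfs.sh stop
$HADOOP_HOME/sbin/httpfs.sh start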

Nikaniki answered 11/4, 2017 at 18:51 Comment(1)
Can I use characters like "_-." in the cluster name? I use HDP 2.6.3 and set the cluster name during installation to my_dev_env, but in hdfs-site.xml it is mydevenv by default; is that okay? Also, I didn't set up HttpFS, only HDFS HA, but these properties were set up. – Countermeasure

Not sure if it solves this exact issue, but it solved mine: I was accessing HDFS via the HDFS client and it was constantly logging warnings about reading in standby mode. I changed the following property in hdfs-site.xml for the HDFS client:

  <property>
    <name>dfs.client.failover.proxy.provider.uniza-hdfs-ha</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

to

  <property>
    <name>dfs.client.failover.proxy.provider.uniza-hdfs-ha</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider</value>
  </property>
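
After the change, a quick way to confirm that the client resolves the active NameNode through the nameservice (uniza-hdfs-ha is the nameservice ID from the snippet above; the path is just an example) is:

hdfs dfs -ls hdfs://uniza-hdfs-ha/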

It is also mentioned in the official documentation: https://hadoop.apache.org/docs/r3.3.4/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

Countermeasure answered 12/6, 2023 at 13:41 Comment(0)