Error "Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error" when using hadoop

I opened up localhost:9870 and tried to upload a txt file to HDFS.

I see the error message below:

Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error
Blessed answered 11/2, 2018 at 20:4 Comment(10)
Did you actually enable WebHDFS? This error is for a file listing, not uploading. – Gt
@cricket_007 I think I did. I can open up localhost:9870; doesn't that mean WebHDFS is enabled? – Blessed
9870 is the NameNode, not WebHDFS. – Gt
hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/… – Gt
@cricket_007 What command should I use to enable WebHDFS? Pardon me being stupid. – Blessed
It's not a command. It's a configuration property in hdfs-site.xml (which is true by default). See the link; it mentions the properties (a quick way to check the value is sketched after these comments). In any case, I've never actually used port 9870 to upload files (my version of Hadoop doesn't even have an upload feature there). Ambari or Hue are the popular web interfaces for doing so. If you want to use WebHDFS, it happens on port 50070: community.hortonworks.com/questions/139351/… – Gt
@cricket_007 Thanks, I'll check my hdfs-site.xml first. I remember I made the change between two configuration blocks. – Blessed
@cricket_007 Also, when I try to create an input and an output dir in HDFS, I get a warning: "WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable" – Blessed
@cricket_007 And here's what I have in my hdfs-site.xml: <configuration> <property> <name>dfs.replication</name> <value>1</value> </property> </configuration> – Blessed
Regarding the native libraries: edit the Hadoop env file (https://mcmap.net/q/98853/-hadoop-quot-unable-to-load-native-hadoop-library-for-your-platform-quot-warning). Your XML uses all the defaults if otherwise not set, so if that's the case, you need to look at the log directory for the NameNode. Assuming Linux, try looking around /var/log/hadoop. – Gt
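
For reference, the WebHDFS toggle mentioned above is the dfs.webhdfs.enabled property (true by default). A quick way to check what your running configuration resolves it to, assuming the hdfs CLI is on your PATH:

    # Print the effective value of the WebHDFS toggle from the loaded configuration
    hdfs getconf -confKey dfs.webhdfs.enabled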

I had the same issue with JDK 9. The fix for me was to add this line to hadoop-env.sh:

export HADOOP_OPTS="--add-modules java.activation"

That's because the java.activation module is deprecated as of Java 9.
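
If you're unsure where that line goes, here is a minimal sketch, assuming a standard $HADOOP_HOME tarball layout; restart the daemons afterwards so the new option is picked up:

    # Append the JVM option to hadoop-env.sh (path assumes a default install layout)
    echo 'export HADOOP_OPTS="--add-modules java.activation"' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh

    # Restart HDFS so the NameNode/DataNode JVMs start with the new flag
    $HADOOP_HOME/sbin/stop-dfs.sh
    $HADOOP_HOME/sbin/start-dfs.sh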

Zarla answered 20/2, 2018 at 6:40 Comment(1)
I have Hadoop 3.0.0 and Java 9.0.4, and this solution worked for me. Thanks. – S2LOtti

I got this to work with OpenJDK 13 by downloading Hadoop 2.9.2 and copying the activation-1.1.jar file from that download into the $HADOOP_HOME/share/hadoop/yarn folder you're using for Hadoop 3. Then run stop-dfs.sh and stop-yarn.sh, and start them both again. There is no need to edit any config files with this method, since the jar is automatically added to the classpath.
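
A rough sketch of those steps follows; the download URL and the jar's location inside the 2.9.2 tarball are assumptions, so adjust them to your mirror and layout:

    # Fetch Hadoop 2.9.2 only to extract the activation jar (mirror URL is an assumption)
    curl -LO https://archive.apache.org/dist/hadoop/common/hadoop-2.9.2/hadoop-2.9.2.tar.gz
    tar -xzf hadoop-2.9.2.tar.gz

    # Copy the jar into the Hadoop 3 yarn share folder so it lands on the classpath
    cp hadoop-2.9.2/share/hadoop/common/lib/activation-1.1.jar "$HADOOP_HOME/share/hadoop/yarn/"

    # Restart YARN and HDFS so the new jar is picked up
    "$HADOOP_HOME"/sbin/stop-yarn.sh && "$HADOOP_HOME"/sbin/stop-dfs.sh
    "$HADOOP_HOME"/sbin/start-dfs.sh && "$HADOOP_HOME"/sbin/start-yarn.sh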

Antony answered 29/11, 2019 at 23:49 Comment(2)
Confirmed the previous answer did not work for me (v11), but your solution did. You may want to update to 1.1.1. – Gridiron
I would edit this to link here instead of downloading the whole previous version: mvnrepository.com/artifact/javax.activation/activation/1.1.1 – Gridiron

I just solved such a problem. I have multiple Java versions and Hadoop 3.1.0.

You need to set the Java home variable in etc/hadoop/hadoop-env.sh, and the Java version should be 1.8.
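
For example, a sketch of the line in etc/hadoop/hadoop-env.sh (the JDK path below is only an example; point it at your own JDK 8 install):

    # In $HADOOP_HOME/etc/hadoop/hadoop-env.sh -- example JDK 8 location, adjust to your system
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64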

Fiann answered 29/5, 2018 at 6:44 Comment(0)

This occurs due to a conflict between an existing Java installation and the OpenJDK installed as a dependency of Homebrew, so it is best to uninstall the old Java using the commands below.

  1. Uninstall Java

#!/bin/bash

sudo rm -rvf /Library/Java/JavaVirtualMachines/jdk<version>.jdk
sudo rm -rvf /Library/PreferencePanes/JavaControlPanel.prefPane
sudo rm -rvf /Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin
sudo rm -rvf /Library/LaunchAgents/com.oracle.java.Java-Updater.plist
sudo rm -rvf /Library/PrivilegedHelperTools/com.oracle.java.JavaUpdateHelper
sudo rm -rvf /Library/LaunchDaemons/com.oracle.java.JavaUpdateHelper.plist
sudo rm -rvf /Library/Preferences/com.oracle.java.Helper-Tool.plist

You can check with [this link][1].

  2. Create a symlink to point to the Homebrew OpenJDK dependency

    sudo ln -sfn $(brew --prefix)/opt/openjdk@11/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk-11.jdk

  3. Check the Java path using

    $ /usr/libexec/java_home

    It generates a path like this

    /opt/homebrew/Cellar/openjdk@11/11.0.18/libexec/openjdk.jdk/Contents/Home

  4. Update the Hadoop environment file with the OpenJDK path by running these commands in the terminal

    cd /opt/homebrew/Cellar/hadoop/3.3.1/libexec/etc/hadoop
    code hadoop-env.sh

Update the JAVA_HOME path with the following:

export JAVA_HOME=/opt/homebrew/Cellar/openjdk@11/11.0.18/libexec/openjdk.jdk/Contents/Home

Bonus: check your Java path with echo $JAVA_HOME.
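
A quick way to verify the change took effect (a sketch; assumes the stock Hadoop sbin scripts are on your PATH):

    # Confirm the shell and Hadoop both see the new JDK
    echo $JAVA_HOME
    java -version
    hadoop version

    # Restart HDFS so the daemons pick up the new JAVA_HOME
    stop-dfs.sh && start-dfs.sh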

Holmic answered 23/1, 2023 at 15:22 Comment(0)

Try installing Java version 11 (or lower), or 1.8. I changed to Java 1.8, and it solved my problem. Hadoop is not compatible with Java versions higher than 11.
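
A quick sketch for checking and switching the active Java (the update-alternatives commands assume a Debian/Ubuntu-style system):

    # Check which Java version is currently active
    java -version

    # On Debian/Ubuntu, list installed JDKs and switch to a Java 8 one
    update-alternatives --list java
    sudo update-alternatives --config java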

Irreligion answered 8/6, 2023 at 1:4 Comment(0)

This error actually appears if you have disturbed the Java that was used during installation. For example, in my case I changed the JDK on my system and lost web access to the files. But no worries, it can be fixed easily: I put back the same JDK version and it worked again.

Check whether you are using a JDK version compatible with the Hadoop version you are running; if the configuration was fine before and nothing else has changed, it should work.

Suggested: JDK 1.8.

Smattering answered 7/7 at 10:57 Comment(0)

I was getting the same problem, but then I updated a file, hadoop/etc/httpfs-site.xml, with the code below, and WebHDFS started to work.

httpfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
  </property>
</configuration>
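
After changing the configuration, restart the daemons so it takes effect; a minimal sketch, assuming the standard Hadoop sbin scripts are on your PATH:

    # Restart HDFS so the updated configuration is reloaded
    stop-dfs.sh
    start-dfs.sh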


Tremolite answered 1/8 at 10:55 Comment(0)
