hadoop No FileSystem for scheme: file

I am trying to run a simple NaiveBayesClassifer with Hadoop and I am getting this error:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: file
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1375)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:180)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
    at org.apache.mahout.classifier.naivebayes.NaiveBayesModel.materialize(NaiveBayesModel.java:100)

Code:

    Configuration configuration = new Configuration();
    NaiveBayesModel model = NaiveBayesModel.materialize(new Path(modelPath), configuration); // error on this line

modelPath points to the NaiveBayes.bin file, and the configuration object prints: Configuration: core-default.xml, core-site.xml

I think it's because of the JARs. Any ideas?

Greater answered 23/6, 2013 at 20:27 Comment(8)
Need some more info...Concupiscent
Don't know myself, but a quick look on google suggests that there are some issues around jars not being referenced as you suggested. Perhaps the following links will yield an answer. groups.google.com/a/cloudera.org/forum/#!topic/scm-users/… grokbase.com/t/cloudera/cdh-user/134r64jm5t/…Topazolite
I was adding hadoop-common-2.0.0-cdh4.3.0-sources.jar and hadoop-core-0.20.2.jar to the classpath; I removed the first one and it worked, I don't know why.Greater
Hmm..Could you please tell me about your environment? Also, please show me the complete exception message.Concupiscent
What's the value of modelPath? Have you tried file:///path/to/dir?Autolysis
as @emile suggested, make sure you are running your jar via hadoop, not java. i.e. "just run the distributed jar with "hadoop jar", instead of trying to execute a standalone "java -jar"."Kurdistan
I have used hadoop jar test.jar instead of java -jar test.jarShortlived
I copied all the JARs in the hadoop folder and placed them where I am running the command. Now everything is working fine.Prandial

This is a typical case of the maven-assembly plugin breaking things.

Why this happened to us

Different JARs (hadoop-commons for LocalFileSystem, hadoop-hdfs for DistributedFileSystem) each contain a different file called org.apache.hadoop.fs.FileSystem in their META-INF/services directory. This file lists the canonical class names of the filesystem implementations they want to declare (this is a Service Provider Interface, implemented via java.util.ServiceLoader; see org.apache.hadoop.fs.FileSystem#loadFileSystems).

When we use maven-assembly-plugin, it merges all our JARs into one, and all the META-INF/services/org.apache.hadoop.fs.FileSystem files overwrite each other. Only one of these files remains (the last one that was added). In this case, the FileSystem list from hadoop-commons overwrote the list from hadoop-hdfs, so DistributedFileSystem was no longer declared.
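To make the failure mode concrete, here is a small plain-Python sketch (no Hadoop required) contrasting the default last-one-wins merge with a concatenating merge; the file contents mimic the real META-INF/services/org.apache.hadoop.fs.FileSystem entries:

```python
# Contents of the two conflicting service files, one per JAR.
hadoop_commons = ["org.apache.hadoop.fs.LocalFileSystem"]
hadoop_hdfs = ["org.apache.hadoop.hdfs.DistributedFileSystem"]

def overwrite_merge(files):
    # Default maven-assembly behaviour: whichever service file is
    # added to the fat JAR last replaces the others entirely.
    return files[-1]

def concat_merge(files):
    # What a concatenating merge (e.g. ServicesResourceTransformer or
    # sbt's filterDistinctLines) does: keep every distinct line.
    merged = []
    for f in files:
        for line in f:
            if line not in merged:
                merged.append(line)
    return merged

files = [hadoop_hdfs, hadoop_commons]
print(overwrite_merge(files))  # DistributedFileSystem is gone
print(concat_merge(files))     # both implementations survive
```

ServiceLoader can only instantiate implementations whose names survive into the final merged file, which is why the overwrite silently drops HDFS support.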

How we fixed it

After loading the Hadoop configuration, but just before doing anything FileSystem-related, we call this:

    hadoopConfig.set("fs.hdfs.impl", 
        org.apache.hadoop.hdfs.DistributedFileSystem.class.getName()
    );
    hadoopConfig.set("fs.file.impl",
        org.apache.hadoop.fs.LocalFileSystem.class.getName()
    );

Update: the correct fix

It has been brought to my attention by krookedking that there is a configuration-based way to make maven-assembly merge all of the FileSystem service declarations; check out his answer below.

Nipissing answered 14/1, 2014 at 16:37 Comment(8)
Here's the equivalent code required for doing the same thing in Spark: val hadoopConfig: Configuration = spark.hadoopConfiguration hadoopConfig.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName) hadoopConfig.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)Goatfish
Actually, I just added this maven dependency http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs/2.2.0 to maven and problem solved.Exuberance
I have tried adding hadoop-hdfs, hadoop-core, hadoop-common, and hadoop-client, and also tried adding hadoopConfig.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName()); hadoopConfig.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName()); but it's not working. When running from Eclipse it runs fine, but when running from the java -cp command it shows the above error.Snuggle
Harish, what have you seen? Same problem here but with intellijPeti
Just an addition to the wonderful answer: if one is using the hadoop JARS but running the job in a non-hadoop cluster, """hadoopConfig.set("fs.hdfs.impl....."""" will not work. In which case we will fall back on managing the assembly build. e.g. in sbt we could do a mergeStrategy of concat or even filterDistinctLinesRebec
@Nipissing where should we call it? If in the driver class, then what happens when we view the output using bin/hdfs dfs -ls /somefile?Sapor
Looks like your link is dead. Never used grepcode but it sounds like it was a great toolWalz
Where do you get the hadoopConfig?Neace

For those using the shade plugin, following on david_p's advice, you can merge the services in the shaded jar by adding the ServicesResourceTransformer to the plugin config:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.3</version>
    <executions>
      <execution>
        <phase>package</phase>
        <goals>
          <goal>shade</goal>
        </goals>
        <configuration>
          <transformers>
            <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
          </transformers>
        </configuration>
      </execution>
    </executions>
  </plugin>

This will merge all of the org.apache.hadoop.fs.FileSystem service entries into one file.

Flighty answered 17/12, 2014 at 18:23 Comment(9)
I like this solution best. Fix the problem at the source (the build) rather than patching it with config changes after the fact.Hindu
Great answer. Fixed my similar error. Tried with maven-assembly-plugin as well as maven-jar-plugin/maven-dependency-plugin combination but didn't work. This solution made my Spark app work. Thanks a lot!Horseradish
Great answer! Thanks a lot!Brunswick
This should be marked as the accepted answer. The ServicesResourceTransformer is necessary for when jar files map interfaces to implementations by using a META-INF/services directory. More information can be found here: maven.apache.org/plugins/maven-shade-plugin/examples/…Aureaaureate
Excellent answer.Burnham
Thank you very much, very helpful!Voltz
Wow! You just spared me 4 hours of head scratching! This should be the accepted answer!Mccaleb
how to use this when we opted to jars instead of maven?Legitimacy
can someone translate this to sbt's build.sbt shade implementation?Sarmentose

Took me ages to figure it out with Spark 2.0.2, but here's my bit:

val sparkBuilder = SparkSession.builder
  .appName("app_name")
  .master("local")
  // Various Params
  .getOrCreate()

val hadoopConfig: Configuration = sparkBuilder.sparkContext.hadoopConfiguration

hadoopConfig.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)

hadoopConfig.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)

And the relevant parts of my build.sbt:

scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.2"

I hope this can help!

Toed answered 23/11, 2016 at 13:15 Comment(2)
Been beating my head against the wall and this was the solution. Thank you!Trinity
I was getting an error ONLY when running as an assembly jarTrinity

For the record, this is still happening in hadoop 2.4.0. So frustrating...

I was able to follow the instructions in this link: http://grokbase.com/t/cloudera/scm-users/1288xszz7r/no-filesystem-for-scheme-hdfs

I added the following to my core-site.xml and it worked:

<property>
   <name>fs.file.impl</name>
   <value>org.apache.hadoop.fs.LocalFileSystem</value>
   <description>The FileSystem for file: uris.</description>
</property>

<property>
   <name>fs.hdfs.impl</name>
   <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
   <description>The FileSystem for hdfs: uris.</description>
</property>
Hypnotherapy answered 15/8, 2014 at 15:28 Comment(0)

Thanks david_p. In Scala:

conf.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName);
conf.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName);

or

<property>
 <name>fs.hdfs.impl</name>
 <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
Clubwoman answered 23/7, 2014 at 7:40 Comment(1)
Only after I read this did I realize that the conf here was the Hadoop Configuration: brucebcampbell.wordpress.com/2014/12/11/…Gauss

For Maven, just adding the dependency for hadoop-hdfs (see the link below) will solve the issue.

http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs/2.7.1

Florafloral answered 9/9, 2015 at 9:59 Comment(0)

I use sbt assembly to package my project, and I ran into this problem as well. My solution is below. Step 1: add a META-INF merge strategy to your build.sbt:

case PathList("META-INF", "MANIFEST.MF") => MergeStrategy.discard
case PathList("META-INF", ps @ _*) => MergeStrategy.first

Step 2: add the hadoop-hdfs library to build.sbt:

"org.apache.hadoop" % "hadoop-hdfs" % "2.4.0"

Step 3: run sbt clean; sbt assembly.

Hope the above information can help you.

Acrefoot answered 22/5, 2014 at 15:5 Comment(5)
A better solution might be to merge like: case PathList("META-INF", "services", "org.apache.hadoop.fs.FileSystem") => MergeStrategy.filterDistinctLines This will keep all the registered filesystemsGaiser
Thanks at @ravwojdyla , pretty neat solution. You saved my hair. For the lost souls discovering this answer for Apache spark. Add this to build.sbt when sbt-assembly, works correctly.Revival
The solution provided by @ravwojdyla is the only one that worked for me.Pyrrolidine
The solution given by @ravwojdyla is ideal. I did a similar setup in build.sbt and used: ``` assemblyMergeStrategy in assembly := { case PathList("META-INF", "MANIFEST.MF") => MergeStrategy.discard case PathList("META-INF", "services", "org.apache.hadoop.fs.FileSystem") => MergeStrategy.concat case _ => MergeStrategy.first } ```Rebec
@Rebec nothing worked before i used your setup! Kudos!Gualterio

Assuming you are using Maven and the Cloudera distribution of Hadoop: I'm using CDH 4.6, and adding these dependencies worked for me. You should check that the versions of your Hadoop and Maven dependencies match.

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>2.0.0-mr1-cdh4.6.0</version>
</dependency>

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.0.0-cdh4.6.0</version>
</dependency>

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.0.0-cdh4.6.0</version>
</dependency>

Don't forget to add the Cloudera Maven repository:

<repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
M answered 18/4, 2014 at 13:29 Comment(0)

I faced the same problem. I found two solutions:

(1) Editing the JAR file manually:

Open the JAR file with WinRAR (or a similar tool). Go to META-INF > services and edit org.apache.hadoop.fs.FileSystem by appending:

org.apache.hadoop.fs.LocalFileSystem
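If you'd rather script that edit than do it by hand, here is a hedged sketch using Python's zipfile (a JAR is just a zip). "app.jar" is a hypothetical name, and a dummy jar is built first so the snippet is self-contained; zipfile cannot modify an archive in place, so the jar is rewritten entry by entry:

```python
import zipfile

SERVICES = "META-INF/services/org.apache.hadoop.fs.FileSystem"

# Build a dummy jar that only declares DistributedFileSystem
# ("app.jar" is a placeholder for your real assembled JAR).
with zipfile.ZipFile("app.jar", "w") as jar:
    jar.writestr(SERVICES, "org.apache.hadoop.hdfs.DistributedFileSystem\n")

# Copy every entry into a new jar, appending LocalFileSystem to the
# services file on the way through.
with zipfile.ZipFile("app.jar") as src, zipfile.ZipFile("app-fixed.jar", "w") as dst:
    for item in src.infolist():
        data = src.read(item.filename)
        if item.filename == SERVICES:
            data += b"org.apache.hadoop.fs.LocalFileSystem\n"
        dst.writestr(item, data)

with zipfile.ZipFile("app-fixed.jar") as jar:
    print(jar.read(SERVICES).decode())  # both implementations are now listed
```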

(2) Changing the order of my dependencies as follows:

<dependencies>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>3.2.1</version>
</dependency>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.2.1</version>
</dependency>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>3.2.1</version>
</dependency>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>3.2.1</version>
</dependency>
</dependencies>
Endogenous answered 29/9, 2019 at 11:7 Comment(0)

If you're using the Gradle Shadow plugin, then this is the config you have to add:

shadowJar {
    mergeServiceFiles()
}
Helaina answered 23/8, 2021 at 12:36 Comment(0)

I assume you built the sample using Maven.

Please check the content of the JAR you're trying to run, especially the META-INF/services directory and the file org.apache.hadoop.fs.FileSystem. It should contain the list of filesystem implementation classes. Check that the line org.apache.hadoop.hdfs.DistributedFileSystem is present in the list for HDFS, and org.apache.hadoop.fs.LocalFileSystem for the local file scheme.

If this is the case, you have to override the referred resource during the build.

The other possibility is that you simply don't have hadoop-hdfs.jar in your classpath, but this has low probability. Usually, if you have the correct hadoop-client dependency, it is not an option.
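A quick way to do that check programmatically rather than unzipping by hand (a JAR is a zip archive). "myapp.jar" is a placeholder; a minimal jar is created here so the snippet runs on its own — point it at your real assembled JAR instead:

```python
import zipfile

SERVICES = "META-INF/services/org.apache.hadoop.fs.FileSystem"

# Create a minimal stand-in jar so the example is self-contained;
# replace "myapp.jar" with your actual fat JAR.
with zipfile.ZipFile("myapp.jar", "w") as jar:
    jar.writestr(SERVICES, "org.apache.hadoop.fs.LocalFileSystem\n")

# List the FileSystem implementations the jar actually declares.
with zipfile.ZipFile("myapp.jar") as jar:
    declared = jar.read(SERVICES).decode().split()

# If org.apache.hadoop.hdfs.DistributedFileSystem is missing from this
# list, the merge step of your build dropped it.
print(declared)
```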

Setaceous answered 31/8, 2013 at 17:51 Comment(1)
Hi Roman, I have the same issue and META-INF/services/org.apache.hadoop.fs.FileSystem does not have the hdfs line. I have 2.0.0-mr1-cdh4.4.0 as the only dependency. What do I need to do? Is there any documentation about this? I am using Maven to build.Luciusluck

Another possible cause (though the OP's question doesn't itself suffer from this) is if you create a Configuration instance that does not load the defaults:

Configuration config = new Configuration(false);

If you don't load the defaults, then you won't get the default settings for things like the FileSystem implementations, which leads to identical errors when trying to access HDFS. Switching to the parameterless constructor, or passing in true to load the defaults, may resolve this.

Additionally, if you are adding custom configuration locations (e.g. on the file system) to the Configuration object, be careful which overload of addResource() you use. For example, if you use addResource(String), Hadoop assumes the string is a classpath resource; if you need to specify a local file, try the following:

File configFile = new File("example/config.xml");
config.addResource(new Path("file://" + configFile.getAbsolutePath()));
Elbrus answered 10/2, 2016 at 14:46 Comment(0)

This is not specific to Flink, but I've run into this issue in Flink as well.

For people using Flink: you need to download the Pre-bundled Hadoop and put it inside /opt/flink/lib.

Chartulary answered 9/1, 2020 at 12:6 Comment(0)

It took me some time to figure out the fix from the given answers, due to my newbie-ness. This is what I came up with, in case anyone else needs help from the very beginning:

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object MyObject {
  def main(args: Array[String]): Unit = {

    val mySparkConf = new SparkConf().setAppName("SparkApp").setMaster("local[*]").set("spark.executor.memory", "5g")
    val sc = new SparkContext(mySparkConf)

    val conf = sc.hadoopConfiguration

    conf.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
    conf.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
  }
}

I am using Spark 2.1

And I have this part in my build.sbt

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
Equanimous answered 30/3, 2017 at 0:24 Comment(0)
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://nameNode:9000");
FileSystem fs = FileSystem.get(conf);

Setting fs.defaultFS works for me! Hadoop 2.8.1

Gass answered 21/8, 2017 at 13:27 Comment(0)

For sbt, use the mergeStrategy below in build.sbt:

mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) => {
    case PathList("META-INF", "services", "org.apache.hadoop.fs.FileSystem") => MergeStrategy.filterDistinctLines
    case s => old(s)
  }
}
Vertigo answered 19/2, 2018 at 7:1 Comment(0)

This question is old, but I faced the same issue recently, and the origin of the error was different from those in the other answers here.

On my side, the root cause was HDFS trying to parse an authority when encountering // at the beginning of a path:

$ hdfs dfs -ls //dev
ls: No FileSystem for scheme: null

So look for a double slash or an empty variable in the path-building part of your code.
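As a rough analogy (Python's URL parser, not Hadoop's, but the RFC 3986 rule is the same): a leading // makes the parser read an authority component instead of a path, which mirrors how //dev ends up with a null scheme:

```python
from urllib.parse import urlparse

# A leading "//" starts an authority component, not a path.
bad = urlparse("//dev")
# A single slash is just an absolute path.
good = urlparse("/dev")

print(bad.scheme, bad.netloc, bad.path)    # "dev" lands in the authority; the path is empty
print(good.scheme, good.netloc, good.path) # "/dev" stays a path
```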

Related Hadoop ticket: https://issues.apache.org/jira/browse/HADOOP-8087

Stiver answered 26/3, 2021 at 10:50 Comment(0)

Use this plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>1.5</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <filters>
                    <filter>
                        <artifact>*:*</artifact>
                        <excludes>
                            <exclude>META-INF/*.SF</exclude>
                            <exclude>META-INF/*.DSA</exclude>
                            <exclude>META-INF/*.RSA</exclude>
                        </excludes>
                    </filter>
                </filters>
                <shadedArtifactAttached>true</shadedArtifactAttached>
                <shadedClassifierName>allinone</shadedClassifierName>
                <artifactSet>
                    <includes>
                        <include>*:*</include>
                    </includes>
                </artifactSet>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                        <resource>reference.conf</resource>
                    </transformer>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                    </transformer>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer">
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
Snuggle answered 8/1, 2016 at 10:10 Comment(0)

If you are using sbt:

// hadoop
lazy val HADOOP_VERSION = "2.8.0"

lazy val dependenceList = Seq(
  // The order is important: "hadoop-hdfs" and then "hadoop-common"
  "org.apache.hadoop" % "hadoop-hdfs" % HADOOP_VERSION,
  "org.apache.hadoop" % "hadoop-common" % HADOOP_VERSION
)
Setup answered 6/6, 2017 at 15:11 Comment(0)

I also came across a similar issue. I added core-site.xml and hdfs-site.xml as resources of conf (the Configuration object):

Configuration conf = new Configuration(true);    
conf.addResource(new Path("<path to>/core-site.xml"));
conf.addResource(new Path("<path to>/hdfs-site.xml"));

I also edited version conflicts in pom.xml (e.g. if the configured version of Hadoop is 2.8.1 but the pom.xml dependencies have version 2.7.1, change them to 2.8.1) and ran Maven install again.

This solved the error for me.

Discursive answered 28/11, 2017 at 12:42 Comment(0)
