Can Apache YARN be used without HDFS?

I want to use Apache YARN as a cluster and resource manager for running a framework in which resources would be shared across different tasks of the same framework. I want to use my own distributed off-heap file system.

  1. Is it possible to use any other distributed file system with YARN other than HDFS?

  2. If yes, what HDFS APIs need to be implemented?

  3. What Hadoop components are required to run YARN?
Maudiemaudlin answered 2/3, 2017 at 8:6 Comment(0)

There are a few different questions here.

Can you use YARN to deploy apps using something like S3 to propagate the binaries?

Yes: it's how LinkedIn has deployed Samza in the past, using http:// downloads. Samza does not need a cluster filesystem, so there is no HDFS running in the cluster, just local file:// filesystems, one per host.

Applications which need a cluster filesystem wouldn't work in such a cluster.
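For illustration, here is a minimal sketch (not a complete client) of how a YARN client can register an application binary from a non-HDFS URI as a LocalResource; the s3a:// bucket and path are invented, and any filesystem URI that every node can read would work the same way:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.yarn.api.records.LocalResource;
    import org.apache.hadoop.yarn.api.records.LocalResourceType;
    import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
    import org.apache.hadoop.yarn.util.ConverterUtils;

    public class NonHdfsLocalResource {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical location of the app binary; could be s3a://, wasb://, http://, file://, ...
        Path appJar = new Path("s3a://my-bucket/app/app.jar");
        FileSystem fs = appJar.getFileSystem(conf);
        FileStatus status = fs.getFileStatus(appJar);

        // NodeManagers localize this resource straight from the store;
        // no HDFS is involved as long as every node can reach the URI.
        LocalResource jarResource = LocalResource.newInstance(
            ConverterUtils.getYarnUrlFromPath(appJar),
            LocalResourceType.FILE,
            LocalResourceVisibility.APPLICATION,
            status.getLen(),
            status.getModificationTime());
        System.out.println("Registered " + appJar + " as " + jarResource);
      }
    }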

Can you bring up a YARN cluster with an alternative filesystem?

Yes.

For what "filesystem" is, look at the Filesystem Specification. You need a consistent view across the filesytem: newly create files list(), deleted ones aren't found, updates immediately visible. And rename() of files and directories must be an atomic operation, ideally O(1). It's used for atomic commits of work, checkpoints, ... Oh, and for HBase, append() is needed.

MapR does this, Red Hat does it with GlusterFS, and IBM and EMC do it for theirs. Do bear in mind that pretty much everything is tested on HDFS; you'd better hope the other cluster FS has done the testing (or that someone has done it for them, such as Hortonworks or Cloudera).

Can you bring up a YARN cluster using an object store as the underlying FS?

It depends on whether or not the FS offers a consistent filesystem view, rather than some eventually consistent view of the world. HBase is the real test here.

  1. Microsoft Azure Storage is consistent, has leases for obtaining exclusive access to bits of the FS, and rename()s really fast. In Azure it completely replaces HDFS.
  2. Google Cloud Storage announced on Mar 1, 2017 that GCS offers consistency. Maybe it can be used as a replacement now; no experience there.
  3. Amazon EMR does offer S3 as a replacement, using (a) DynamoDB for the consistent metadata and (b) doing horrible things to get HBase to work.
  4. The ASF's own S3 client, S3A, can't be used as a replacement. We in the team working on it have been focusing on read and write performance as a source and final destination of data: in S3Guard, adding the DynamoDB layer, and in the S3Guard committer, being able to use it as a high-performance destination of work (resilient to failures while avoiding rename()).

Can the new distributed Filesystem you are writing be used as a replacement for HDFS?

Well, you can certainly try!

First get all the filesystem contract tests to work; these measure basic API compliance. Then look at all the Apache Bigtop tests, which do system integration. I recommend you avoid HBase and Accumulo initially and focus on MapReduce, Hive, Spark, and Flink.
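As one (hypothetical) example of wiring up the contract tests: each suite in hadoop-common's test artifact is an abstract class you subclass, handing it a "contract" that declares your filesystem's behaviour. MyFSContract below is an invented placeholder for that binding class:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.contract.AbstractContractRenameTest;
    import org.apache.hadoop.fs.contract.AbstractFSContract;

    // Runs the stock rename-contract suite against a hypothetical "myfs://" filesystem.
    // MyFSContract (not shown) would extend AbstractFSContract and point at a contract
    // XML resource declaring which behaviours the store supports.
    public class TestMyFSContractRename extends AbstractContractRenameTest {
      @Override
      protected AbstractFSContract createContract(Configuration conf) {
        return new MyFSContract(conf);   // hypothetical contract binding for your FS
      }
    }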

Don't be afraid to get on the Hadoop common-dev & bigtop lists and ask questions.

Labanna answered 2/3, 2017 at 11:35 Comment(1)
Hello Mr Loughran, to your knowledge, could YARN be used with a shared CephFS drive?Hither

Here's the interface you have to implement; keep an eye on the guarantees that you have to support. There's a utility to test the contracts. If you need an example, there are plenty of implementations of different filesystems within Hadoop, e.g. for S3, Azure Blobs, and FTP, that serve as good starting points.
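For a rough idea of the shape (a skeleton only, assuming the class in question is org.apache.hadoop.fs.FileSystem; "myfs" and MyFileSystem are invented names), these are the methods whose contracts you must honour:

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;
    import org.apache.hadoop.util.Progressable;

    // Skeleton of a custom Hadoop filesystem for an invented "myfs://" scheme.
    // Every method must follow the semantics in the filesystem specification
    // (visibility after create, atomic rename, recursive delete, and so on).
    public class MyFileSystem extends FileSystem {
      private static final String TODO = "not implemented yet";

      @Override public URI getUri() { return URI.create("myfs:///"); }
      @Override public FSDataInputStream open(Path f, int bufferSize) throws IOException {
        throw new UnsupportedOperationException(TODO);
      }
      @Override public FSDataOutputStream create(Path f, FsPermission permission, boolean overwrite,
          int bufferSize, short replication, long blockSize, Progressable progress) throws IOException {
        throw new UnsupportedOperationException(TODO);
      }
      @Override public FSDataOutputStream append(Path f, int bufferSize, Progressable progress)
          throws IOException {
        throw new UnsupportedOperationException(TODO);   // needed by HBase
      }
      @Override public boolean rename(Path src, Path dst) throws IOException {
        throw new UnsupportedOperationException(TODO);   // must be atomic
      }
      @Override public boolean delete(Path f, boolean recursive) throws IOException {
        throw new UnsupportedOperationException(TODO);
      }
      @Override public FileStatus[] listStatus(Path f) throws IOException {
        throw new UnsupportedOperationException(TODO);
      }
      @Override public void setWorkingDirectory(Path newDir) { /* track per instance in a real FS */ }
      @Override public Path getWorkingDirectory() { return new Path("/"); }
      @Override public boolean mkdirs(Path f, FsPermission permission) throws IOException {
        throw new UnsupportedOperationException(TODO);
      }
      @Override public FileStatus getFileStatus(Path f) throws IOException {
        throw new UnsupportedOperationException(TODO);
      }
    }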

You can configure your filesystem implementation by class; all components should honor fs.defaultFS as the configuration key.
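For instance (a sketch with invented names; in a real deployment both keys would normally be set in core-site.xml rather than in code):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ConfigureCustomFs {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Bind the (invented) "myfs" scheme to its implementation class and make it
        // the cluster-wide default filesystem.
        conf.set("fs.myfs.impl", "com.example.fs.MyFileSystem");
        conf.set("fs.defaultFS", "myfs://cluster/");

        FileSystem fs = FileSystem.get(URI.create("myfs://cluster/"), conf);
        System.out.println("Loaded filesystem: " + fs.getClass().getName());
      }
    }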

Coliseum answered 2/3, 2017 at 11:4 Comment(0)

Yes, you can, provided you have a file store implementation that supports the HDFS API.

For example, you can use AWS S3 (s3n:// or s3a://) instead of HDFS. There are a few other file systems that support the HDFS API.

Lanner answered 2/3, 2017 at 11:10 Comment(1)
Sorry, but you can't rip out HDFS for the ASF s3n/s3a drivers; they're "not a real filesystem", at least in the way people expect filesystems to work. Azure and wasb://, yes; EMR and s3://, yes, if you pay extra; but ASF s3a://? Not as of March 2017Labanna

YARN is not the only resource manager for distributed clusters. Apache Mesos is a resource manager similar to YARN (though its internal technology is different), and it does not depend on Hadoop components. It is already widely used in enterprise cloud infrastructure, for example in DC/OS (consisting of Mesos, Docker, etc.).

Banns answered 2/3, 2017 at 9:54 Comment(1)
Yes, I am aware of Mesos, but from all the Mesos documentation I have read, what I understood is that it is used to share resources among different frameworks. I would be running a single framework, and resource sharing should happen among different tasks of the same framework.Maudiemaudlin

YARN can be used without HDFS. You don't have to configure and start the HDFS services, so it will run without HDFS.

But you cannot install YARN without Hadoop. You have to download Hadoop and configure only YARN (and whichever other services you want to use).

Lahr answered 2/3, 2017 at 11:18 Comment(0)
