There are a few different questions here.
Can you use YARN to deploy apps using something like S3 to propagate the binaries?
Yes: it's how LinkedIn has deployed Samza in the past, using http:// downloads. Samza does not need a cluster filesystem, so there is no HDFS running in the cluster, just local file:// filesystems, one per host.
Applications which need a cluster filesystem wouldn't work in such a cluster.
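For the curious, here is a minimal sketch (not LinkedIn's actual code) of how a package on a remote store gets onto the nodes: YARN localizes it as a LocalResource, so each NodeManager downloads the binaries itself. The bucket and package names are hypothetical, and `URL.fromPath()` assumes a recent Hadoop (2.8+).

```java
// A minimal sketch, not a production deployment: localizing an application
// package from a non-HDFS URL as a YARN LocalResource. Bucket/package names
// are hypothetical.
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.api.records.URL;

public class RemotePackageLocalizer {
  public static ContainerLaunchContext launchContextFor(Path pkg) throws Exception {
    FileSystem fs = pkg.getFileSystem(new Configuration());
    FileStatus status = fs.getFileStatus(pkg);
    LocalResource appPackage = LocalResource.newInstance(
        URL.fromPath(pkg),                   // where each NodeManager fetches from
        LocalResourceType.ARCHIVE,           // unpacked on the node at localization
        LocalResourceVisibility.APPLICATION, // cached per-application
        status.getLen(),
        status.getModificationTime());
    return ContainerLaunchContext.newInstance(
        Collections.singletonMap("app-package", appPackage),
        null, null, null, null, null);
  }

  public static void main(String[] args) throws Exception {
    // An object store URL works here because localization only needs
    // read access, not full filesystem semantics.
    launchContextFor(new Path("s3a://my-bucket/releases/myapp-1.0.tar.gz"));
  }
}
```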
Can you bring up a YARN cluster with an alternative filesystem?
Yes.
For what "filesystem" is, look at the Filesystem Specification. You need a consistent view across the filesytem: newly create files list(), deleted ones aren't found, updates immediately visible. And rename() of files and directories must be an atomic operation, ideally O(1). It's used for atomic commits of work, checkpoints, ... Oh, and for HBase, append() is needed.
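Here's a minimal sketch of the commit pattern that spec is protecting: write output to a temporary path, then rename() it into place. The class and paths are illustrative, not Hadoop's actual committer code.

```java
// Illustrative only: the rename()-based atomic commit pattern.
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class RenameCommit {
  private RenameCommit() {}

  public static void commit(FileSystem fs, Path temp, Path dest) throws IOException {
    // On a real filesystem this rename is atomic and O(1). On an
    // eventually-consistent object store it may be a non-atomic
    // copy-then-delete, which is exactly why rename() matters in the spec.
    if (!fs.rename(temp, dest)) {
      throw new IOException("Could not commit " + temp + " to " + dest);
    }
  }
}
```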
MapR offers those semantics, as does Red Hat with GlusterFS, and IBM and EMC with theirs. Do bear in mind here that pretty much everything is tested on HDFS; you'd better hope the other cluster FS has done the testing (or that someone has done it for them, such as Hortonworks or Cloudera).
Can you bring up a YARN cluster using an object store as the underlying FS?
It depends on whether or not the store offers a consistent filesystem view, rather than some eventually consistent world view. HBase is the real test here.
- Microsoft Azure Storage is consistent, has leases for obtaining exclusive access to parts of the FS, and rename()s really fast. In Azure it completely replaces HDFS (see the config sketch after this list).
- Google announced on March 1, 2017 that GCS offers consistency. Maybe it can be used as a replacement now; I have no experience there.
- Amazon EMR does offer S3 as a replacement, using (a) DynamoDB for the consistent metadata and (b) doing horrible things to get HBase to work.
- The ASF's own S3 client, S3A, can't be used as a replacement. Those of us working on it have been focusing on read and write performance with S3 as a source and final destination of data: S3Guard adds the DynamoDB layer for consistent metadata, and the S3Guard committer works on making it a high-performance destination of work (resilient to failures while avoiding rename()).
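As an illustration of the Azure point above, here's a hedged sketch (not a tested deployment) of making Azure Storage the default filesystem instead of hdfs://. The account and container names are placeholders, and hadoop-azure plus the Azure storage JAR need to be on the classpath.

```java
// A hedged sketch: pointing fs.defaultFS at Azure Storage.
// Account, container, and key values are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WasbDefaultFS {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS",
        "wasb://mycontainer@myaccount.blob.core.windows.net");
    conf.set("fs.azure.account.key.myaccount.blob.core.windows.net",
        "<storage-account-key>"); // placeholder credential
    FileSystem fs = FileSystem.get(conf);
    // Everything that asks for the default FS now talks to Azure Storage.
    System.out.println(fs.exists(new Path("/")));
  }
}
```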
Can the new distributed filesystem you are writing be used as a replacement for HDFS?
Well, you can certainly try!
First, get all the filesystem contract tests to work; they measure basic API compliance. Then look at all the Apache Bigtop tests, which do system integration. I recommend you avoid HBase & Accumulo initially; focus on MapReduce, Hive, Spark, and Flink.
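To give a feel for what "get the contract tests to work" looks like: you subclass the abstract contract tests in hadoop-common's test JAR and supply a contract describing your filesystem. The "MyFS" names below are hypothetical.

```java
// A hedged sketch of wiring a new FS into the Hadoop contract test suite.
// "MyFSContract" is a hypothetical class you would write yourself.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractContractRenameTest;
import org.apache.hadoop.fs.contract.AbstractFSContract;

public class TestMyFSContractRename extends AbstractContractRenameTest {
  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    // The contract declares the FS scheme and which options
    // (atomic rename, consistency, ...) the filesystem claims to
    // support, typically via a contract XML resource file.
    return new MyFSContract(conf);
  }
}
```

There's one such subclass per operation (rename, delete, seek, ...), so a failing semantic shows up as a specific, debuggable test case.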
Don't be afraid to get on the Hadoop common-dev & bigtop lists and ask questions.