Is it possible to append to HDFS file from multiple clients in parallel?
Basically the whole question is in the title. I'm wondering if it's possible to append to a file located on HDFS from multiple computers simultaneously? Something like storing a stream of events constantly produced by multiple processes. Order is not important.

I recall hearing in one of the Google tech presentations that GFS supports such append functionality, but some limited testing with HDFS (either with a regular file append() or with SequenceFile) doesn't seem to work.

Thanks,

Warplane answered 17/6, 2011 at 17:40 Comment(1)
Here are some background details on why append is not possible yet: File Appends in HDFSPhotocell

I don't think that this is possible with HDFS. Even though you don't care about the order of the records, you do care about the order of the bytes in the file. You don't want writer A to write a partial record that then gets corrupted by writer B. This is a hard problem for HDFS to solve on its own, so it doesn't.

Create a file per writer. Pass all the files to any MapReduce worker that needs to read this data. This is much simpler and fits the design of HDFS and Hadoop. If non-MapReduce code needs to read this data as one stream then either stream each file sequentially or write a very quick MapReduce job to consolidate the files.
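The file-per-writer pattern hinges on each process deriving a path that no other writer can pick, so every process is the sole writer of its own file and no byte-level interleaving can occur. A minimal sketch of one such naming scheme (hostname + PID + timestamp is an illustrative choice, not a Hadoop convention; a local path stands in for an HDFS path):

```python
import os
import socket
import time

def writer_path(base_dir: str, prefix: str = "events") -> str:
    """Build a per-writer file name that cannot collide with other
    writers: hostname + process id + start timestamp are unique per
    process. (Illustrative naming scheme, not a Hadoop convention.)"""
    host = socket.gethostname()
    pid = os.getpid()
    stamp = int(time.time())
    return f"{base_dir}/{prefix}-{host}-{pid}-{stamp}.log"

# Each process opens exactly one file and is its sole writer,
# so the single-writer restriction of HDFS is never violated.
path = writer_path("/data/events")
```

A MapReduce job can then take the whole directory as input, since Hadoop input formats accept a directory of part files just as readily as a single file.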

Clupeid answered 17/6, 2011 at 20:31 Comment(3)
Thanks. I guess I didn't realize that it doesn't have to be one file per MapReduce job. Writing one file per computer should be very simple to implement, perhaps using an in-memory queue as suggested in another answer to avoid blocking.Warplane
@Spike Just to clarify: GFS does support concurrent append. From the GFS paper: "Record append is heavily used by our distributed applications in which many clients on different machines append to the same file concurrently."Nickinickie
You should get an exception stating that the file already exists. That JIRA says "HDFS supports single writer at a time for a given file." You can consolidate the files as suggested in this answer using getmergeGigantic
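The in-memory queue mentioned in the comments above could be wired up like this: producer threads enqueue records without blocking on I/O, and a single consumer thread drains the queue and appends to the writer's one file. A minimal local sketch (plain file I/O and `threading` stand in for an HDFS output stream; the sentinel-based shutdown is one common idiom, not the only one):

```python
import queue
import threading

def run_writer(path: str, q: "queue.Queue[str]") -> None:
    """Single consumer: drain records from the queue and append them
    to this writer's own file. A None sentinel stops the loop."""
    with open(path, "a") as out:
        while True:
            record = q.get()
            if record is None:  # shutdown sentinel
                break
            out.write(record + "\n")

q: "queue.Queue[str]" = queue.Queue()
writer = threading.Thread(target=run_writer, args=("events.log", q))
writer.start()

# Producers enqueue without ever touching the file directly,
# so the file still has exactly one writer.
for i in range(3):
    q.put(f"event-{i}")
q.put(None)   # signal shutdown
writer.join()
```

Because only the consumer thread touches the file, this keeps the single-writer property HDFS requires while letting many producers run without blocking on disk or network latency.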

Just FYI: it'll probably be fully supported in Hadoop 2.6.x, according to the JIRA item on the official site: https://issues.apache.org/jira/browse/HDFS-7203

Sullivan answered 27/1, 2015 at 19:13 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.