Merging multiple files into one within Hadoop
Asked Answered

8

32

I get multiple small files into my input directory which I want to merge into a single file, without copying them to the local file system and without writing MapReduce jobs. Is there a way I could do it using hadoop fs commands or Pig?

Thanks!

Dyad answered 23/8, 2010 at 13:59 Comment(1)
You should accept an answer if your question has been answered.Abbie
23

To keep everything on the grid, use Hadoop streaming with a single reducer and cat as both the mapper and the reducer (basically a no-op); compression can be added via MR flags.

hadoop jar \
    $HADOOP_PREFIX/share/hadoop/tools/lib/hadoop-streaming.jar \
    -Dmapred.reduce.tasks=1 \
    -Dmapred.job.queue.name=$QUEUE \
    -input "$INPUT" \
    -output "$OUTPUT" \
    -mapper cat \
    -reducer cat

If you want compression, add:

    -Dmapred.output.compress=true \
    -Dmapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
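To sanity-check the result (a sketch, assuming the single reducer wrote its output as part-00000 under $OUTPUT), you can list and preview the merged file directly on HDFS; hadoop fs -text also decompresses gzip output:

hadoop fs -ls "$OUTPUT"
hadoop fs -text "$OUTPUT"/part-00000* | head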

Thor answered 25/11, 2014 at 12:54 Comment(5)
I think it is the best method.Prosperous
I imagine this would change the order of the lines?Borreri
@AndredeMiranda I think the order will be deterministic, sorted by key, since we only have one reducer. This is based on recalling the shuffle, sort, reducer model.Velate
it's not just the best answer; it is the answer. All the other answers are not correct (e.g. fs -getmerge will put the file locally, not on HDFS).Barong
Uhm, doing that adds a tabulation at the end of each line... how should we fix that?Messidor
17
hadoop fs -getmerge <dir_of_input_files> <mergedsinglefile>
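Note that getmerge writes the merged file to the local filesystem, not to HDFS, so if the result needs to end up back on HDFS a second step is required. A minimal sketch with hypothetical paths:

hadoop fs -getmerge /data/input ./merged.txt
hadoop fs -put ./merged.txt /data/merged.txt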
Rerun answered 24/8, 2010 at 17:46 Comment(5)
oddly this gives me no result. not sure why.Pharmacopsychosis
maybe your directory only has empty filesAllowed
I think mergedsinglefile is local, not distributedMillwork
this will result with files on the local filesystem, which the OP wants to avoidAxes
This does not put the file on HDFS; rather it saves it to the local fs. We then need to put the file back to HDFS using hdfs dfs -put.Sandpiper
7

Okay... I figured out a way using hadoop fs commands:

hadoop fs -cat [dir]/* | hadoop fs -put - [destination file]

It worked when I tested it...any pitfalls one can think of?

Thanks!

Dyad answered 25/8, 2010 at 8:49 Comment(2)
But in this case you're downloading all the data from HDFS to the node you're running the command from (a local one?), and then uploading it back to HDFS. This is not very efficient if you have a lot of data.Cp
Another pitfall is that occasionally you might also get some unwanted input from stdin. I came across it once in an HA-enabled cluster when some warning messages got trapped in the output.Kaikaia
4

If you set up fuse to mount your HDFS to a local directory, then your output can be the mounted filesystem.

For example, I have our HDFS mounted to /mnt/hdfs locally. I run the following command and it works great:

hadoop fs -getmerge /reports/some_output /mnt/hdfs/reports/some_output.txt

Of course, there are other reasons to use fuse to mount HDFS to a local directory, but this was a nice side effect for us.
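For completeness, the mount itself looks roughly like this. This is a sketch only; the binary and package name vary by distribution (hadoop-fuse-dfs here is the CDH-style wrapper), and namenode:8020 and /mnt/hdfs are placeholder values:

hadoop-fuse-dfs dfs://namenode:8020 /mnt/hdfs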

Diecious answered 26/4, 2011 at 15:21 Comment(0)
1

You can use the tool HDFSConcat, new in HDFS 0.21, to perform this operation without incurring the cost of a copy.

Menstrual answered 4/10, 2010 at 11:46 Comment(2)
Thanks Jeff, will look into HDFSConcat. Currently we are on 0.20.2 so I am now creating a Har of all the files and then reading from pig. This way data stays in HDFS.Dyad
I should note that this tool has limitations highlighted at issues.apache.org/jira/browse/HDFS-950. Files must have the same block size and be owned by the same user.Menstrual
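Regarding the HAR workaround mentioned in the first comment above, a minimal sketch with hypothetical paths (hadoop archive runs a MapReduce job, so the data never leaves the cluster):

hadoop archive -archiveName files.har -p /data input /data/archives

Pig can then read the archived files through the har:// scheme, e.g. load 'har:///data/archives/files.har/input'.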
1

If you are working on a Hortonworks cluster and want to merge multiple files present in an HDFS location into a single file, you can run the 'hadoop-streaming-2.7.1.2.3.2.0-2950.jar' jar, which runs a single reducer and gets the merged file into the HDFS output location.

$ hadoop jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-streaming-2.7.1.2.3.2.0-2950.jar \
-Dmapred.reduce.tasks=1 \
-input "/hdfs/input/dir" \
-output "/hdfs/output/dir" \
-mapper cat \
-reducer cat

You can download this jar from the "Get hadoop streaming jar" link.

If you are writing Spark jobs and want a single merged output file, to avoid multiple RDD partitions and the associated performance bottlenecks, use this piece of code before transforming your RDD:

sc.textFile("hdfs://...../part*").coalesce(1).saveAsTextFile("hdfs://...../filename")

This will merge all the part files into one and save it back to an HDFS location.

Supremacist answered 23/1, 2017 at 10:31 Comment(0)
0

All of these solutions are equivalent to doing:

hadoop fs -cat [dir]/* > tmp_local_file  
hadoop fs -copyFromLocal tmp_local_file [destination file]

It just means that the local machine's I/O is on the critical path of the data transfer.

Peh answered 27/6, 2011 at 4:37 Comment(0)
0

Addressing this from an Apache Pig perspective:

To merge two files with an identical schema via Pig, the UNION command can be used:

 A = LOAD 'tmp/file1' USING PigStorage('\t') AS ....(schema1);
 B = LOAD 'tmp/file2' USING PigStorage('\t') AS ....(schema1);
 C = UNION A, B;
 STORE C INTO 'tmp/fileoutput' USING PigStorage('\t');
Oysterman answered 26/1, 2017 at 14:30 Comment(0)
