Hive -- split data across files

Is there a way to instruct Hive to split data into multiple output files? Or maybe cap the size of the output files.

I'm planning to use Redshift, which recommends splitting data into multiple files to allow parallel loading: http://docs.aws.amazon.com/redshift/latest/dg/t_splitting-data-files.html

We preprocess all our data in Hive, and I'm wondering if there's a way to create, say, ten 1 GB files, which might make copying to Redshift faster.

I was looking at https://cwiki.apache.org/Hive/adminmanual-configuration.html and https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties, but I can't find anything.

Lapierre answered 8/5, 2013 at 20:28 Comment(0)

There are a couple of ways you could go about splitting Hive output. The first and easiest way is to set the number of reducers. Since each reducer writes to its own output file, the number of reducers you specify will correspond to the number of output files written. Note that some Hive queries will not result in the number of reducers you specify (for example, SELECT COUNT(*) FROM some_table always results in one reducer). To specify the number of reducers, run this before your query:

set mapred.reduce.tasks=10;
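
For example, a minimal sketch of the idea (the export directory and query are only illustrative, and the GROUP BY is there to force a reduce stage so the setting takes effect):

set mapred.reduce.tasks=10;

-- Up to 10 files will be written under the target directory, one per reducer.
INSERT OVERWRITE DIRECTORY '/tmp/redshift_export'
SELECT id, SUM(value)
FROM some_table
GROUP BY id;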

Another way you could split into multiple output files would be to have Hive insert the results of your query into a partitioned table. This would result in at least one file per partition. For this to make sense you must have some reasonable column to partition on. For example, you wouldn't want to partition on a unique id column or you would have one file for each record. This approach will guarantee at least one output file per partition, and at most numPartitions * numReducers. Here's an example (don't worry too much about hive.exec.dynamic.partition.mode; it just needs to be set for this query to work).

set hive.exec.dynamic.partition.mode=nonstrict;

CREATE TABLE table_to_export_to_redshift (
  id INT,
  value INT
)
PARTITIONED BY (country STRING);

INSERT OVERWRITE TABLE table_to_export_to_redshift
PARTITION (country)
SELECT id, value, country
FROM some_table;
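
With dynamic partitioning, Hive writes one subdirectory per country value under the table's warehouse directory, so the output ends up laid out roughly like this (the path, country values, and file names are only illustrative):

/apps/hive/warehouse/table_to_export_to_redshift/country=US/000000_0
/apps/hive/warehouse/table_to_export_to_redshift/country=DE/000000_0
/apps/hive/warehouse/table_to_export_to_redshift/country=JP/000000_0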

To get more fine-grained control, you can write your own reduce script to pass to Hive and have that reduce script write to multiple files. Once you are writing your own reducer, you can do pretty much whatever you want.
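
For instance, here's a rough sketch using Hive's TRANSFORM clause; my_reducer.py is a hypothetical script that reads tab-separated rows on stdin and writes whatever it likes to stdout:

-- my_reducer.py is a hypothetical custom reduce script
ADD FILE /local/path/my_reducer.py;

FROM (
  SELECT id, value, country
  FROM some_table
  DISTRIBUTE BY country  -- controls which rows each reducer (and script instance) sees
) distributed
INSERT OVERWRITE DIRECTORY '/tmp/redshift_export'
SELECT TRANSFORM (id, value, country)
USING 'python my_reducer.py'
AS (id, value, country);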

Finally, you can forgo trying to maneuver Hive into outputting your desired number of files and just break them apart yourself once Hive is done. By default, Hive stores its tables uncompressed and in plain text in its warehouse directory (e.g., /apps/hive/warehouse/table_to_export_to_redshift). You can use Hadoop shell commands, a MapReduce job, or Pig, or pull the files onto a Linux box and break them apart however you like.

I don't have any experience with Redshift, so some of my suggestions may not be appropriate for consumption by Redshift for whatever reason.

A couple of notes: splitting data into more, smaller files is generally bad for Hadoop. You might get a speed increase for Redshift, but if the files are consumed by other parts of the Hadoop ecosystem (MapReduce, Hive, Pig, etc.) you might see a performance loss if the files are too small (though 1 GB would be fine). Also make sure that the extra processing/developer time is worth the time savings you get from parallelizing your Redshift data load.

Wing answered 8/5, 2013 at 21:43 Comment(3)
Your example does not do any partitioning. Both the CREATE TABLE and INSERT OVERWRITE commands need to specify the partition(s) to use. – Scuta
@Scuta Thanks for catching that. Fixed above. – Wing
mapred.reduce.tasks is the key, but the downside is that you can't directly control the size of the files. To distribute the data to reducers you can add a DISTRIBUTE BY field clause to the INSERT OVERWRITE DIRECTORY query, which will send all mapper results with the same field value to the same reducer. You can potentially use this to exploit the structure of your data. For example, I've used this when collecting data over one week: if I set mapred.reduce.tasks=7 and DISTRIBUTE BY date, I can produce 7 relatively similarly sized files. – Scuta
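
A minimal sketch of the pattern described in the last comment, assuming an illustrative dt date column and export directory:

set mapred.reduce.tasks=7;

-- DISTRIBUTE BY sends all rows with the same dt value to the same reducer,
-- so a week of daily data ends up as roughly 7 similarly sized files.
INSERT OVERWRITE DIRECTORY '/tmp/weekly_export'
SELECT id, value, dt
FROM some_table
DISTRIBUTE BY dt;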
