How to save a huge pandas dataframe to HDFS?

4

13

I'm working with pandas and with Spark dataframes. The dataframes are always very big (> 20 GB) and the standard Spark functions are not sufficient for those sizes. Currently I'm converting my pandas dataframe to a Spark dataframe like this:

dataframe = spark.createDataFrame(pandas_dataframe)  

I do that transformation because with Spark, writing dataframes to HDFS is very easy:

dataframe.write.parquet(output_uri, mode="overwrite", compression="snappy")

But the transformation fails for dataframes bigger than 2 GB. If I transform a Spark dataframe to pandas, I can use pyarrow:

import pyarrow as pa
import pyarrow.parquet as pq

# temporarily write the Spark dataframe to HDFS
dataframe.write.parquet(path, mode="overwrite", compression="snappy")

# open an HDFS connection using pyarrow (pa)
hdfs = pa.hdfs.connect("default", 0)
# read the Parquet files with pyarrow.parquet (pq)
parquet = pq.ParquetDataset(path, filesystem=hdfs)
table = parquet.read(nthreads=4)
# convert the Arrow table to pandas
pandas = table.to_pandas(nthreads=4)

# delete the temp files
hdfs.delete(path, recursive=True)

This is a fast conversion from Spark to pandas, and it also works for dataframes bigger than 2 GB. I have not yet found a way to do it the other way around, meaning having a pandas dataframe which I transform to Spark with the help of pyarrow. The problem is that I really can't find out how to write a pandas dataframe to HDFS.

My pandas version: 0.19.0

Rusty answered 20/11, 2017 at 13:19 Comment(4)
What error do you get? Are you sure that the application fails during the write, or maybe a bit earlier (during the dataframe transformations)? – Monocotyledon
It fails with an out-of-memory exception because the Java heap space is limited and the createDataFrame method builds a byte array on the Java heap. To get around this we need the pyarrow solution. As described, it already works perfectly for Spark to pandas, but I also need pandas to Spark since I can't find a way to save pandas directly to HDFS or Hive. – Rusty
Just curious: at this size, why not just write the data to a database? Postgres, for example, if you still want to write Python or C code to operate on it in-database. – Fronia
A hack could be to create N pandas dataframes (each less than 2 GB, horizontal partitioning) from the big one, create N different Spark dataframes, then merge (union) them to create a final one to write into HDFS. I am assuming that your master machine is powerful, but that you also have a cluster available on which you are running Spark. – Monocotyledon
23

Meaning having a pandas dataframe which I transform to Spark with the help of pyarrow.

pyarrow.Table.from_pandas is the function you're looking for:

Table.from_pandas(type cls, df, bool timestamps_to_ms=False, Schema schema=None, bool preserve_index=True)

Convert pandas.DataFrame to an Arrow Table
import pyarrow as pa

pdf = ...  # type: pandas.core.frame.DataFrame
adf = pa.Table.from_pandas(pdf)  # type: pyarrow.lib.Table

The result can be written directly to Parquet / HDFS without passing data via Spark:

import pyarrow.parquet as pq

fs = pa.hdfs.connect()

with fs.open(path, "wb") as fw:
    pq.write_table(adf, fw)
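
If the data eventually needs to end up in Spark anyway, the freshly written Parquet files can simply be read back as a Spark dataframe. A minimal sketch, assuming a SparkSession named spark and the same path as above:

# Sketch: load the Parquet files written above into Spark (assumes a SparkSession `spark`)
spark_df = spark.read.parquet(path)
spark_df.show(5)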

Spark notes:

Furthermore, since Spark 2.3 (current master) Arrow is supported directly in createDataFrame (SPARK-20791 - Use Apache Arrow to Improve Spark createDataFrame from Pandas.DataFrame). It uses SparkContext.defaultParallelism to compute the number of chunks, so you can easily control the size of individual batches.

Finally, defaultParallelism can be used to control the number of partitions generated by the standard _convert_from_pandas, effectively reducing the size of the slices to something more manageable.
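
A minimal sketch combining both notes, assuming Spark >= 2.3 and the pandas_dataframe / output_uri names from the question; the configuration values are only examples, not recommendations:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # example value; SparkContext.defaultParallelism picks this up, which controls the chunking
    .config("spark.default.parallelism", "64")
    # enables the Arrow-backed createDataFrame path added in Spark 2.3 (SPARK-20791)
    .config("spark.sql.execution.arrow.enabled", "true")
    .getOrCreate()
)

sdf = spark.createDataFrame(pandas_dataframe)
sdf.write.parquet(output_uri, mode="overwrite", compression="snappy")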

Unfortunately, these are unlikely to resolve your current memory problems. Both depend on parallelize and therefore store all data in the memory of the driver node. Switching to Arrow or adjusting the configuration can only speed up the process or address block size limitations.

In practice I don't see any reason to switch to Spark here, as long as you use a local pandas DataFrame as the input. The most severe bottleneck in this scenario is the driver's network I/O, and distributing data won't address that.

Gauzy answered 28/11, 2017 at 21:16 Comment(0)
1

From https://issues.apache.org/jira/browse/SPARK-6235

Support for parallelizing R data.frame larger than 2GB

is resolved.

From https://pandas.pydata.org/pandas-docs/stable/r_interface.html

Converting DataFrames into R objects

you can convert a pandas dataframe to an R data.frame

So perhaps the transformation pandas -> R -> Spark -> HDFS?

Rebellious answered 28/11, 2017 at 13:49 Comment(0)
1

Another way is to convert your pandas dataframe to a Spark dataframe (using pyspark) and save it to HDFS with a write command. Example:

    import pandas as pd
    from pyspark import SparkConf, SparkContext
    from pyspark.sql import SQLContext

    conf = SparkConf()
    df = pd.read_csv("data/as/foo.csv")
    df[['Col1', 'Col2']] = df[['Col1', 'Col2']].astype(str)
    sc = SparkContext(conf=conf)
    sqlCtx = SQLContext(sc)
    sdf = sqlCtx.createDataFrame(df)

Here astype changes the type of your columns from object to string. This saves you from an otherwise raised exception, since Spark can't figure out the pandas type object. But make sure these columns really are of type string.

Now to save your df to HDFS:

    sdf.write.csv('mycsv.csv')
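
If you prefer the Parquet output asked about in the question instead of CSV, the same Spark dataframe can be written with the exact call from the question, where output_uri is your HDFS target path:

    sdf.write.parquet(output_uri, mode="overwrite", compression="snappy")
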
Zerline answered 12/3, 2019 at 13:41 Comment(0)
-1

A hack could be to create N pandas dataframes (each less than 2 GB, horizontal partitioning) from the big one, create N different Spark dataframes, then merge (union) them to create a final one to write into HDFS. I am assuming that your master machine is powerful, but that you also have a cluster available on which you are running Spark.
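
A minimal sketch of this partition-and-union idea, assuming a SparkSession named spark plus the pandas_dataframe and output_uri names from the question; the chunk count is an arbitrary example and should be chosen so that each slice stays well below 2 GB:

import numpy as np
from functools import reduce
from pyspark.sql import SparkSession, DataFrame

spark = SparkSession.builder.getOrCreate()

n_chunks = 16  # example value: size it so each chunk is comfortably under 2 GB
chunks = np.array_split(pandas_dataframe, n_chunks)

# Convert each slice separately, then union the resulting Spark dataframes.
sdf = reduce(DataFrame.union, (spark.createDataFrame(chunk) for chunk in chunks))
sdf.write.parquet(output_uri, mode="overwrite", compression="snappy")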

Monocotyledon answered 28/11, 2017 at 15:7 Comment(0)
