pandas df.to_parquet write to multiple smaller files

Is it possible to use Pandas' DataFrame.to_parquet functionality to split writing into multiple files of some approximate desired size?

I have a very large DataFrame (100M x 100), and am using df.to_parquet('data.snappy', engine='pyarrow', compression='snappy') to write to a file, but this results in a file that's about 4GB. I'd instead like this split into many ~100MB files.
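(For scale: splitting roughly 4 GB into ~100 MB files means on the order of 40 files, i.e. roughly 2.5 million rows per file for a 100M-row frame.)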

Cocker asked 6/9, 2020 at 20:33 Comment(3)
Do the same thing as Pandas dataframe to_csv - split into multiple output files, except with .to_parquet(). – Onestep
Ended up doing ddf = dask.dataframe.from_pandas(df, chunksize=5000000); ddf.to_parquet('/path/to/save/'), which saves one file per chunk. – Cocker
You should write it up as an answer. It may be beneficial to others. – Onestep

I ended up using Dask:

import dask.dataframe as da

ddf = da.from_pandas(df, chunksize=5000000)
save_dir = '/path/to/save/'
ddf.to_parquet(save_dir)

This saves multiple parquet files inside save_dir, one per chunk, where each chunk holds at most chunksize rows. Depending on your dtypes and number of columns, you can adjust chunksize to get files of the desired size.
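
If you later want the data back as a single frame, the directory can be read in one call (a minimal sketch, assuming the same save_dir and that the result fits in memory):

import dask.dataframe as da

ddf = da.read_parquet(save_dir)  # lazily reads all part files in the directory
df = ddf.compute()               # materialize as a single pandas DataFrame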

Cocker answered 7/9, 2020 at 16:6 Comment(0)

One other option is to use the partition_cols option in pyarrow.parquet.write_to_dataset():

import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# df is your dataframe
n_partition = 100
# Assign each row to one of n_partition buckets at random
df["partition_idx"] = np.random.choice(range(n_partition), size=df.shape[0])
table = pa.Table.from_pandas(df, preserve_index=False)  # Table lives in pyarrow, not pyarrow.parquet
pq.write_to_dataset(table, root_path="{path to dir}/", partition_cols=["partition_idx"])
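
This writes one subdirectory per partition_idx value under the root path (Hive-style layout, e.g. partition_idx=0/). As a rough sketch, pandas can read the whole dataset back in a single call, with partition_idx reappearing as a column:

import pandas as pd

df_roundtrip = pd.read_parquet("{path to dir}/")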
Chester answered 13/11, 2020 at 18:15 Comment(0)

To keep each parquet file small, around 128 MB:

import dask.dataframe as dd

# Number of partitions needed for a nominal 128MB partition size
# ("+ 1" covers the final, not-quite-full partition)
n_partitions = int(df.memory_usage().sum() / 1e6 / 128) + 1
# Convert to a dask DataFrame with that many partitions
ddf = dd.from_pandas(df, npartitions=n_partitions)
save_dir = '/path/to/save/'
ddf.to_parquet(save_dir)
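
Note that df.memory_usage() measures in-memory size (and memory_usage(deep=True) gives a better estimate when you have object/string columns), so the snappy-compressed files on disk will usually come out smaller than 128 MB. An alternative sketch, assuming your dask version supports partition_size, lets dask choose partition boundaries by size:

ddf = dd.from_pandas(df, npartitions=1).repartition(partition_size="128MB")
ddf.to_parquet(save_dir)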
Tecu answered 30/7, 2022 at 0:49 Comment(0)

Slice the dataframe and save each chunk to a folder, using just the pandas API (no dask, and no direct pyarrow calls).

You can pass extra parameters through to the parquet engine if you wish.

import os

def df_to_parquet(df, target_dir, chunk_size=1000000, **parquet_kwargs):
    """Write a pandas DataFrame to parquet files with pyarrow, one file per chunk.

    Args:
        df: DataFrame
        target_dir: local directory where parquet files are written to
        chunk_size: number of rows stored in one chunk of parquet file. Defaults to 1000000.
        parquet_kwargs: extra keyword arguments passed through to DataFrame.to_parquet().
    """
    for i in range(0, len(df), chunk_size):
        slc = df.iloc[i : i + chunk_size]
        chunk = i // chunk_size
        fname = os.path.join(target_dir, f"part_{chunk:04d}.parquet")
        slc.to_parquet(fname, engine="pyarrow", **parquet_kwargs)
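
A hypothetical call, forwarding a compression setting through to the engine:

df_to_parquet(df, "/path/to/save", chunk_size=2000000, compression="snappy")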
Nosh answered 26/4, 2022 at 7:52 Comment(0)
Slice the dataframe into fixed-size row chunks and write each chunk to its own parquet file:

chunk = 200000
i = 0
n = 0
while i < len(all_df):
    j = i + chunk
    print((i, j))
    tmpdf = all_df[i:j]
    tmpdf.to_parquet(path=f"./append_data/part.{n}.parquet", engine='pyarrow', compression='snappy')
    i = j
    n = n + 1
Accusative answered 28/7, 2022 at 15:0 Comment(1)
Please add a brief explanation of how/why it solves the problem. – Costanzo
