Using pyarrow, how do you append to a parquet file?

How do you append to or update a parquet file with pyarrow?

import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq


table2 = pd.DataFrame({'one': [-1, np.nan, 2.5], 'two': ['foo', 'bar', 'baz'], 'three': [True, False, True]})
table3 = pd.DataFrame({'six': [-1, np.nan, 2.5], 'nine': ['foo', 'bar', 'baz'], 'ten': [True, False, True]})


pq.write_table(pa.Table.from_pandas(table2), './dataNew/pqTest2.parquet')
#append pqTest2 here?  

I found nothing in the docs about appending to parquet files. Also, can you use pyarrow with multiprocessing to insert/update the data?

Marshy answered 4/11, 2017 at 17:59 Comment(1)
Did you put absolutely different column names in both tables intentionally?Goldsworthy

I ran into the same issue and I think I was able to solve it using the following:

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq


chunksize=10000 # this is the number of lines

pqwriter = None
for i, df in enumerate(pd.read_csv('sample.csv', chunksize=chunksize)):
    table = pa.Table.from_pandas(df)
    # for the first chunk of records
    if i == 0:
        # create a parquet write object giving it an output file
        pqwriter = pq.ParquetWriter('sample.parquet', table.schema)            
    pqwriter.write_table(table)

# close the parquet writer
if pqwriter:
    pqwriter.close()
Karie answered 15/12, 2017 at 20:10 Comment(10)
Of course, it depends on the data, but in my experience chunksize=10000 is too big. Chunk size values of about a hundred work much faster for me in most cases.Hinda
The else after the if is unnecessary since you're writing to table in both cases.Kelle
worked wonders for me. I added compression='gzip' when creating pqwriter.Fructose
Is there a way to skip converting to pandas.DataFrame before converting it into Arrow.Table? Thanks.Aesculapian
Well, according to the docs, pyarrow.Table can be built via from_arrays, from_batches or from_pandas; see arrow.apache.org/docs/python/generated/… (a minimal sketch follows these comments).Karie
Thanks! To this date, the API for incrementally writing parquet files is really not well documented.Landowner
@YuryKirienko I get the best performance with chunksize=1e5. The best advice for people would be: benchmark with different values and see what's best for you.Landowner
This solution works only while the writer is still open ... A better way is to put two files in a directory; pandas/pyarrow will combine both files into one dataframe when reading the directory.Cuttie
Unfortunately, this cannot append to an existing .parquet file (see my answer that can). Reason: Once .close() is called, the file cannot be appended to, and before .close() is called, the .parquet file is not valid (will throw an exception due to a corrupted file as it's missing its binary footer). The answer from @Indecorum solves this.Indecorum
As already mentioned, this only works once. As soon as the ParquetWriter is closed, making a new ParquetWriter will overwrite the parquet file.Convex
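
For the commenter above who wants to skip the pandas step: a minimal sketch, assuming the data is already available as plain Python lists or Arrow arrays (the file and column names here are made up), that builds the table with pa.Table.from_arrays and feeds it to the same ParquetWriter pattern:

import pyarrow as pa
import pyarrow.parquet as pq

# Build an Arrow table directly, without going through pandas.
arrays = [pa.array([-1.0, None, 2.5]), pa.array(['foo', 'bar', 'baz'])]
table = pa.Table.from_arrays(arrays, names=['one', 'two'])

# Same incremental-write pattern as the answer above, minus the DataFrame step.
pqwriter = pq.ParquetWriter('sample_no_pandas.parquet', table.schema)
pqwriter.write_table(table)
pqwriter.close()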

In your case the column names are not consistent. I made the column names consistent across the three sample dataframes, and the following code worked for me.

# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq


def append_to_parquet_table(dataframe, filepath=None, writer=None):
    """Method writes/append dataframes in parquet format.

    This method is used to write pandas DataFrame as pyarrow Table in parquet format. If the methods is invoked
    with writer, it appends dataframe to the already written pyarrow table.

    :param dataframe: pd.DataFrame to be written in parquet format.
    :param filepath: target file location for parquet file.
    :param writer: ParquetWriter object to write pyarrow tables in parquet format.
    :return: ParquetWriter object. This can be passed in the subsequenct method calls to append DataFrame
        in the pyarrow Table
    """
    table = pa.Table.from_pandas(dataframe)
    if writer is None:
        writer = pq.ParquetWriter(filepath, table.schema)
    writer.write_table(table=table)
    return writer


if __name__ == '__main__':

    table1 = pd.DataFrame({'one': [-1, np.nan, 2.5], 'two': ['foo', 'bar', 'baz'], 'three': [True, False, True]})
    table2 = pd.DataFrame({'one': [-1, np.nan, 2.5], 'two': ['foo', 'bar', 'baz'], 'three': [True, False, True]})
    table3 = pd.DataFrame({'one': [-1, np.nan, 2.5], 'two': ['foo', 'bar', 'baz'], 'three': [True, False, True]})
    writer = None
    filepath = '/tmp/verify_pyarrow_append.parquet'
    table_list = [table1, table2, table3]

    for table in table_list:
        writer = append_to_parquet_table(table, filepath, writer)

    if writer:
        writer.close()

    df = pd.read_parquet(filepath)
    print(df)

Output:

   one  three  two
0 -1.0   True  foo
1  NaN  False  bar
2  2.5   True  baz
0 -1.0   True  foo
1  NaN  False  bar
2  2.5   True  baz
0 -1.0   True  foo
1  NaN  False  bar
2  2.5   True  baz
Ingress answered 2/2, 2018 at 6:53 Comment(1)
Unfortunately, this cannot append to an existing .parquet file (see my answer that can). Reason: Once .close() is called, the file cannot be appended to, and before .close() is called, the .parquet file is not valid (will throw an exception due to a corrupted file as it's missing its binary footer). The answer from @Indecorum solves this.Indecorum

Generally speaking, Parquet datasets consist of multiple files, so you append by writing an additional file into the directory the data belongs to. It would be useful to have the ability to concatenate multiple files easily. I opened https://issues.apache.org/jira/browse/PARQUET-1154 to make this possible to do easily in C++ (and therefore Python).
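
A minimal sketch of this directory-as-dataset approach (the directory and file names below are just examples): each "append" is simply another file written into the dataset directory, and reading the directory stitches all the files back together.

import os
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

dataset_dir = './dataNew/pqDataset'
os.makedirs(dataset_dir, exist_ok=True)

df1 = pd.DataFrame({'one': [-1.0, 2.5], 'two': ['foo', 'bar']})
df2 = pd.DataFrame({'one': [3.0, 4.0], 'two': ['baz', 'qux']})

pq.write_table(pa.Table.from_pandas(df1), os.path.join(dataset_dir, 'part-0.parquet'))
pq.write_table(pa.Table.from_pandas(df2), os.path.join(dataset_dir, 'part-1.parquet'))  # the "append"

# Reading the directory combines all the files into one table.
print(pq.read_table(dataset_dir).to_pandas())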

Vulcanism answered 4/11, 2017 at 19:26 Comment(4)
Please include updating data. Maybe there is something in Arrow that might work.Marshy
Please come to the mailing lists for Arrow and Parquet with your questions. Stack Overflow is not the best venue for getting supportVulcanism
Is parquet-tools command parquet-merge not an option? - at least from the command line? (Disclaimer I haven't tried it yet)Cuttie
The parquet files sometimes appear as a single file on Windows. How do I view it as a folder on Windows?Scratches

Demo of appending a Pandas dataframe to an existing .parquet file.

Note: Other answers cannot append to existing .parquet files. This can; see discussion at end.

Tested on Python v3.9 on Windows and Linux.

Install PyArrow using pip:

pip install pyarrow==6.0.1

Or Anaconda / Miniconda:

conda install -c conda-forge pyarrow=6.0.1 -y

Demo code:

# Q. Demo?
# A. Demo of appending to an existing .parquet file by memory mapping the original file, appending the new dataframe, then writing the new file out.

import os
from pathlib import Path
from typing import Union

import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

filepath = "parquet_append.parquet"

Method 1 of 2

Simple way: using pandas, read the original .parquet file in, append, and write the entire file back out.

# Create parquet file.
df = pd.DataFrame({"x": [1.,2.,np.nan], "y": ["a","b","c"]})  # Create dataframe ...
df.to_parquet(filepath)  # ... write to file.

# Append to original parquet file.
df = pd.read_parquet(filepath)  # Read original ...
df2 = pd.DataFrame({"x": [3.,4.,np.nan], "y": ["d","e","f"]})  # ... create new dataframe to append ...
df3 = pd.concat([df, df2])  # ... concatenate together ...
df3.to_parquet(filepath)  # ... overwrite original file.

# Demo that new data frame has been appended to old.
df_copy = pd.read_parquet(filepath)
print(df_copy)
#      x  y
# 0  1.0  a
# 1  2.0  b
# 2  NaN  c
# 0  3.0  d
# 1  4.0  e
# 2  NaN  f

Method 2 of 2

More complex but faster: using native PyArrow calls, memory map the original file, append the new dataframe, write new file out.

# Write initial file using PyArrow.
df = pd.DataFrame({"x": [1.,2.,np.nan], "y": ["a","b","c"]})  # Create dataframe ...
table = pa.Table.from_pandas(df)
pq.write_table(table, where=filepath)

def parquet_append(filepath: Union[Path, str], df: pd.DataFrame) -> None:
    """
    Append a dataframe to an existing .parquet file. Reads the original .parquet file in, appends the new dataframe, writes the new .parquet file out.
    :param filepath: Filepath for the parquet file.
    :param df: Pandas dataframe to append. Must have the same schema as the original.
    """
    table_original_file = pq.read_table(source=filepath,  pre_buffer=False, use_threads=True, memory_map=True)  # Use memory map for speed.
    table_to_append = pa.Table.from_pandas(df)
    table_to_append = table_to_append.cast(table_original_file.schema)  # Attempt to cast new schema to existing, e.g. datetime64[ns] to datetime64[us] (may throw otherwise).
    handle = pq.ParquetWriter(filepath, table_original_file.schema)  # Overwrite old file with empty. WARNING: PRODUCTION LEVEL CODE SHOULD BE MORE ATOMIC: WRITE TO A TEMPORARY FILE, DELETE THE OLD, RENAME. THEN FAILURES WILL NOT LOSE DATA.
    handle.write_table(table_original_file)
    handle.write_table(table_to_append)
    handle.close()  # Writes binary footer. Until this occurs, .parquet file is not usable.

# Append to original parquet file.
df = pd.DataFrame({"x": [3.,4.,np.nan], "y": ["d","e","f"]})  # ... create new dataframe to append ...
parquet_append(filepath, df)

# Demo that new data frame has been appended to old.
df_copy = pd.read_parquet(filepath)
print(df_copy)
#      x  y
# 0  1.0  a
# 1  2.0  b
# 2  NaN  c
# 0  3.0  d
# 1  4.0  e
# 2  NaN  f

Discussion

The answers from @Ibraheem Ibraheem and @yardstick17 cannot be used to append to existing .parquet files:

  • Limitation 1: After .close() is called, the files cannot be appended to. Once the footer is written, everything is set in stone;
  • Limitation 2: The .parquet file cannot be read by any other program until .close() is called (it will throw an exception as the binary footer is missing).

Combined, these limitations mean that they cannot be used to append to an existing .parquet file; they can only be used to write a .parquet file in chunks. The technique above removes these limitations, at the expense of being less efficient because the entire file has to be rewritten to append to the end. After extensive research, I believe that it is not possible to append to an existing .parquet file with the existing PyArrow libraries (as of v6.0.1).

It would be possible to modify this to merge multiple .parquet files in a folder into a single .parquet file.
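
A hedged sketch of that folder-merge idea (the helper name and glob pattern are mine, not from PyArrow): read each file in the folder and stream it through a single ParquetWriter.

import glob
import pyarrow.parquet as pq

def merge_parquet_files(folder, out_path):
    # Hypothetical helper: concatenate every .parquet file in `folder` into one file.
    # out_path should not live inside `folder`, or it would be picked up by the glob.
    paths = sorted(glob.glob(folder + '/*.parquet'))
    first = pq.read_table(paths[0])
    writer = pq.ParquetWriter(out_path, first.schema)
    writer.write_table(first)
    for path in paths[1:]:
        writer.write_table(pq.read_table(path).cast(first.schema))
    writer.close()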

It would also be possible to perform an efficient upsert: pq.read_table() supports column and row filters, so if the stale rows in the original table were filtered out on load, the rows in the new table would effectively replace them. This would be most useful for timeseries data.
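
And a hedged sketch of that upsert idea for timeseries data. Everything below is an assumption on top of the answer: the 'timestamp' column, the cutoff value, and the hypothetical helper name. Rows at or after the cutoff are filtered out while reading the original file and replaced by the new rows.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

def parquet_upsert_timeseries(filepath, df_new, cutoff):
    # Keep only the old rows before the cutoff; newer rows are dropped on load.
    table_keep = pq.read_table(filepath, filters=[('timestamp', '<', cutoff)])
    # The cast may require the new frame to match the original columns/index exactly.
    table_new = pa.Table.from_pandas(df_new).cast(table_keep.schema)
    writer = pq.ParquetWriter(filepath, table_keep.schema)  # same caveat as above: not atomic
    writer.write_table(table_keep)
    writer.write_table(table_new)
    writer.close()  # footer written here; the file is unreadable until this point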

Indecorum answered 22/1, 2022 at 22:33 Comment(3)
surprisingly, fastparquet lets us append row groups to an already existing parquet file..Ezraezri
In your solution, the write has to write both the original and the appended file data - is that correct? In which case, you may as well concatenate the two dataframes and save the merged dataframe normally. I'm struggling to see the benefit of this solution unless there is some kind of memory/speed advantage?Alkali
@Alkali Re: solution 2, I believe it is already optimal for large dataframes. If concatenating the dataframes then writing, it would require an additional copy which is slower and doubles RAM usage. For example, if the dataframe was 10 columns of 50 million rows each, the act of first concatenating (which creates another copy) will take longer than flushing the first part to disc, then the second part, then closing. However, for smaller dataframes, a cleaner parquet file may result from concatenating first as columns would be stored in one contiguous memory array rather than two.Indecorum

The accepted answer works as long as you have the pyarrow parquet writer open. Once the writer is closed we cannot append row groups to a parquet file. pyarrow doesn't have any implementation to append to an already existing parquet file.

It's possible to append row groups to an already existing parquet file using fastparquet. Here is an SO answer that explains this with an example.

From the fastparquet docs:

append: bool (False) or ‘overwrite’ If False, construct data-set from scratch; if True, add new row-group(s) to existing data-set. In the latter case, the data-set must exist, and the schema must match the input data.

from fastparquet import write
write('output.parquet', df, append=True)
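
A slightly fuller sketch along the same lines (the file name and dataframes are placeholders); the first write creates the file, and append=True then adds a new row group to it:

import pandas as pd
from fastparquet import write

df1 = pd.DataFrame({'one': [-1.0, 2.5], 'two': ['foo', 'bar']})
df2 = pd.DataFrame({'one': [3.0, 4.0], 'two': ['baz', 'qux']})

write('output.parquet', df1)               # first call creates the file
write('output.parquet', df2, append=True)  # appends a new row group to the same file

print(pd.read_parquet('output.parquet'))   # rows from both dataframes come back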

Update: feature request to have this in pyarrow as well - JIRA

Ezraezri answered 1/11, 2022 at 7:53 Comment(0)
