What are the differences between feather and parquet?

Both are columnar (disk) storage formats for use in data analysis systems. Both are integrated within Apache Arrow (the pyarrow package for Python) and are designed to correspond with Arrow as a columnar in-memory analytics layer.

How do the two formats differ?

Should you always prefer Feather when working with pandas, whenever possible?

What are the use cases where Feather is more suitable than Parquet, and the other way round?


Appendix

I found some hints here https://github.com/wesm/feather/issues/188, but given the young age of this project, they are possibly a bit out of date.

This is not a serious speed test, because I'm just dumping and loading a whole DataFrame, but it should give you some impression if you've never heard of these formats before:

# IPython
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.feather as feather
import pyarrow.parquet as pq
import fastparquet as fp


df = pd.DataFrame({'one': [-1, np.nan, 2.5],
                   'two': ['foo', 'bar', 'baz'],
                   'three': [True, False, True]})

print("pandas df to disk ####################################################")
print('example_feather:')
%timeit feather.write_feather(df, 'example_feather')
# 2.62 ms ± 35.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
print('example_parquet:')
%timeit pq.write_table(pa.Table.from_pandas(df), 'example.parquet')
# 3.19 ms ± 51 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
print()

print("for comparison:")
print('example_pickle:')
%timeit df.to_pickle('example_pickle')
# 2.75 ms ± 18.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
print('example_fp_parquet:')
%timeit fp.write('example_fp_parquet', df)
# 7.06 ms ± 205 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
print('example_hdf:')
%timeit df.to_hdf('example_hdf', 'key_to_store', mode='w', table=True)
# 24.6 ms ± 4.45 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
print()

print("pandas df from disk ##################################################")
print('example_feather:')
%timeit feather.read_feather('example_feather')
# 969 µs ± 1.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
print('example_parquet:')
%timeit pq.read_table('example.parquet').to_pandas()
# 1.9 ms ± 5.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

print("for comparison:")
print('example_pickle:')
%timeit pd.read_pickle('example_pickle')
# 1.07 ms ± 6.21 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
print('example_fp_parquet:')
%timeit fp.ParquetFile('example_fp_parquet').to_pandas()
# 4.53 ms ± 260 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
print('example_hdf:')
%timeit pd.read_hdf('example_hdf')
# 10 ms ± 43.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# fastparquet version: 0.1.3
# numpy version: 1.13.3
# pandas version: 0.22.0
# pyarrow version: 0.8.0
# sys.version: 3.6.3
# example Dataframe taken from https://arrow.apache.org/docs/python/parquet.html
Cheatham answered 3/1, 2018 at 18:48 Comment(0)
  • Parquet format is designed for long-term storage, whereas Arrow is intended more for short-term or ephemeral storage (Arrow may become more suitable for long-term storage once the 1.0.0 release happens, since the binary format will be stable then)

  • Parquet is more expensive to write than Feather as it features more layers of encoding and compression. Feather is unmodified raw columnar Arrow memory. We will probably add simple compression to Feather in the future.

  • Due to dictionary encoding, RLE encoding, and data page compression, Parquet files will often be much smaller than Feather files (see the sketch after this list)

  • Parquet is a standard storage format for analytics that's supported by many different systems: Spark, Hive, Impala, various AWS services, in the future by BigQuery, etc. So if you are doing analytics, Parquet is a good option as a reference storage format for query by multiple systems
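
To make the encoding/compression point concrete, here is a rough sketch (not from the original answer) that writes the same repetitive table in both formats and compares file sizes. It assumes a recent pyarrow where write_feather accepts a compression argument; the column names and sizes are made up for illustration:

import os

import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.feather as feather
import pyarrow.parquet as pq

# A highly repetitive string column is where dictionary/RLE encoding pays off.
df = pd.DataFrame({
    'category': np.random.choice(['a', 'b', 'c'], size=1_000_000),
    'value': np.random.randn(1_000_000),
})
table = pa.Table.from_pandas(df)

# Uncompressed Feather is essentially raw Arrow memory written to disk.
feather.write_feather(df, 'example.feather', compression='uncompressed')
# Parquet applies dictionary encoding and, by default, snappy compression.
pq.write_table(table, 'example.parquet')

print('feather:', os.path.getsize('example.feather'), 'bytes')
print('parquet:', os.path.getsize('example.parquet'), 'bytes')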

The benchmarks you showed are going to be very noisy, since the data you read and wrote is very small. You should try compressing at least 100 MB or upwards of 1 GB of data to get more informative benchmarks; see e.g. http://wesmckinney.com/blog/python-parquet-multithreading/
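
A minimal sketch of such a larger benchmark, in the spirit of the suggestion above (the row count and column names are illustrative; timings will vary with machine and library versions):

import time

import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.feather as feather
import pyarrow.parquet as pq

# Roughly 150 MB of float64 data; adjust n to taste.
n = 10_000_000
df = pd.DataFrame({'x': np.random.randn(n), 'y': np.random.randn(n)})

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    print(f'{label}: {time.perf_counter() - t0:.2f} s')

timed('feather write', lambda: feather.write_feather(df, 'big.feather'))
timed('parquet write', lambda: pq.write_table(pa.Table.from_pandas(df), 'big.parquet'))
timed('feather read', lambda: feather.read_feather('big.feather'))
timed('parquet read', lambda: pq.read_table('big.parquet').to_pandas())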

Heshum answered 4/1, 2018 at 14:47 Comment(13)
Thank you very much Wes! If you add simple compression to Feather in the future, would it be possible to make it optional? I'm considering Feather as an alternative format to JSON for event storage in a stream processing system with fairly low latency requirements. So for this use case my concern would mainly be speed, not storage size.Cheatham
Yes, "uncompressed" will always be an optionHeshum
I noticed that your generate_floats function in your benchmark code here wesmckinney.com/blog/python-parquet-update doesn't guarantee unique_values. They are just random. With n=100M I got duplicates two out of ten runs. Just mentioning in case somebody uses this function where uniqueness should be guaranteed.Cheatham
@Cheatham just wondering... compression results in a smaller size, so it would be quicker to read into memory. It could be that the extra processing due to compressing/decompressing will still be faster than having to read more bytes. Or do you have a situation I'm not thinking of?Mistreat
@PascalvKooten That's an interesting remark, thanks! I have no idea how the tradeoff will be, depending on data sizes, but I surely will have to test and see.Cheatham
Can Parquet or Feather be compared to HDF5 format, since they have similar functionality?Grandaunt
One year later, is the situation still the same?Purposeful
HDF5 is more general and heavy...also a lot slower most of the time.Sparerib
Just to add an observation, 200,000 images in parquet format took 4 GB, but in feather took 6 GB. The data was read using pandas pd.read_parquet and pd.read_feather. pd.read_parquet took around 4 minutes, but pd.read_feather took 11 seconds. That is a huge difference. Reference: kaggle.com/corochann/…Aubyn
@WesMcKinney I noticed your answer was written back in 2018. After 2.3 years, do you still think Arrow (feather) is not good for long term storage (by comparing to Parquet)? Is there a specific reason? Like stability? format evolution? or?Payola
W. McKinney indicates that feather (v2) is now stable here: #64090191Novation
Thanks for the answer, but the question is between feather and parquet, and you start your answer mentioning Arrow. This makes things even more confusing.Killer
I believe this answer is outdated. We should update or remove it.Corncob

I would also include different compression methods in the comparison between parquet and feather, to check import/export speeds and how much storage each uses.

I advocate for two options for the average user who wants a better csv alternative:

  • parquet with "gzip" compression (for storage): It is slightly faster to export than plain .csv (if the csv needs to be zipped, then parquet is much faster). Importing is about 2x faster than csv. The compressed file is around 22% of the original file size, which is about the same as a zipped csv file.
  • feather with "zstd" compression (for I/O speed): compared to csv, exporting is about 20x faster and importing about 6x faster. The file is around 32% of the original file size, which is about 10 percentage points worse than parquet "gzip" and zipped csv, but still decent.

Both are better options than plain csv files in all categories (I/O speed and storage); a short sketch of the two calls is below.
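
A minimal sketch of the two recommended calls in pandas (the file names are just placeholders; the full timed runs follow further down):

import pandas as pd

df = pd.read_csv('AAPL.csv')

# Option 1: best storage, still reasonably fast I/O
df.to_parquet('AAPL.parquet', compression='gzip')
df = pd.read_parquet('AAPL.parquet')

# Option 2: best I/O speed, storage still decent
df.to_feather('AAPL.feather', compression='zstd')
df = pd.read_feather('AAPL.feather')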

I analysed the following formats:

  1. csv
  2. csv using "zip" compression
  3. feather using "zstd" compression
  4. feather using "lz4" compression
  5. parquet using "snappy" compression
  6. parquet using "gzip" compression
  7. parquet using "brotli" compression

import zipfile
import pandas as pd
folder_path = (r"...\\intraday")
zip_path = zipfile.ZipFile(folder_path + "\\AAPL.zip")    
test_data = pd.read_csv(zip_path.open('AAPL.csv'))


# EXPORT, STORAGE AND IMPORT TESTS
# ------------------------------------------
# - FORMAT .csv 

# export
%%timeit
test_data.to_csv(folder_path + "\\AAPL.csv", index=False)
# 12.8 s ± 399 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# storage
# AAPL.csv exported using python.
# 169.034 KB

# import
%%timeit
test_data = pd.read_csv(folder_path + "\\AAPL.csv")
# 1.56 s ± 14.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# ------------------------------------------
# - FORMAT zipped .csv 

# export
%%timeit
test_data.to_csv(folder_path + "\\AAPL.csv")
# 12.8 s ± 399 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# OBSERVATION: this does not include the time I spent manually zipping the .csv

# storage
# AAPL.csv zipped with .zip "normal" compression using 7-zip software.
# 36.782 KB

# import
zip_path = zipfile.ZipFile(folder_path + "\\AAPL.zip")
%%timeit
test_data = pd.read_csv(zip_path.open('AAPL.csv'))
# 2.31 s ± 43.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# ------------------------------------------
# - FORMAT .feather using "zstd" compression.

# export
%%timeit
test_data.to_feather(folder_path + "\\AAPL.feather", compression='zstd')
# 460 ms ± 13.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# storage
# AAPL.feather exported with python using zstd
# 54.924 KB

# import
%%timeit
test_data = pd.read_feather(folder_path + "\\AAPL.feather")
# 310 ms ± 11.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# ------------------------------------------
# - FORMAT .feather using "lz4" compression.
# Only works when installing with pip, not with conda. Bad sign.

# export
%%timeit
test_data.to_feather(folder_path + "\\AAPL.feather", compression='lz4')
# 392 ms ± 14.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# storage
# AAPL.feather exported with python using "lz4"
# 79.668 KB    

# import
%%timeit
test_data = pd.read_feather(folder_path + "\\AAPL.feather")
# 255 ms ± 4.79 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# ------------------------------------------
# - FORMAT .parquet using compression "snappy"

# export
%%timeit
test_data.to_parquet(folder_path + "\\AAPL.parquet", compression='snappy')
# 2.82 s ± 47.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# storage
# AAPL.parquet exported with python using "snappy"
# 62.383 KB

# import
%%timeit
test_data = pd.read_parquet(folder_path + "\\AAPL.parquet")
# 701 ms ± 19.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# ------------------------------------------
# - FORMAT .parquet using compression "gzip"

# export
%%timeit
test_data.to_parquet(folder_path + "\\AAPL.parquet", compression='gzip')
# 10.8 s ± 77.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# storage
# AAPL.parquet exported with python using "gzip"
# 37.595 KB

# import
%%timeit
test_data = pd.read_parquet(folder_path + "\\AAPL.parquet")
# 1.18 s ± 80.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# ------------------------------------------
# - FORMAT .parquet using compression "brotli"

# export
%%timeit
test_data.to_parquet(folder_path + "\\AAPL.parquet", compression='brotli')
# around 5min each loop. I did not run %%timeit on this one.

# storage
# AAPL.parquet exported with python using "brotli"
# 29.425 KB    

# import
%%timeit
test_data = pd.read_parquet(folder_path + "\\AAPL.parquet")
# 1.04 s ± 72 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Observations:

  • Feather seems better for lightweight data, as it writes and loads faster. Parquet has better storage ratios.
  • Feather's library support and maintenance initially made me concerned; however, the file format has good integration with pandas and I could install the dependencies using conda for the "zstd" compression method.
  • The best storage by far is parquet with "brotli" compression; however, it takes too long to export. It has a good import speed once the exporting is done, but importing is still 2.5x slower than feather.
Swampland answered 27/5, 2022 at 13:18 Comment(1)
Nice benchmark! Sometimes it can be very helpful to quantify things. Thanks to this I might switch to feather—it seems superior to using parquet + snappy compression, which is parquet's default compression method and the one I'm currently using.Slick
