I have a very wide data frame (20,000 columns) that is made up mostly of float64 columns in pandas. I want to cast these columns to float32 and write them out in Parquet format. I am doing this because the downstream users of these files are small containers with limited memory.
I currently cast within pandas, which is very slow on a wide data set, and then write out to Parquet. Is it possible to cast the types as part of the to_parquet write itself? A dummy example is shown below.
import pandas as pd
import numpy as np
import pyarrow  # pandas uses pyarrow as the parquet engine
df = pd.DataFrame(np.random.randn(3000, 15000))  # make a dummy data set
df.columns = df.columns.astype(str)  # parquet requires string column names
float_cols = df.columns[df.dtypes == np.float64]
df[float_cols] = df[float_cols].astype('float32')  # cast the data -- slow on a wide frame
df.to_parquet("myfile.parquet")  # write out the df