How to write standard CSV
It is very simple to read a standard CSV file, for example:

 val t = spark.read.format("csv")
   .option("inferSchema", "true")
   .option("header", "true")
   .load("file:///home/xyz/user/t.csv")

It reads a real CSV file, something like

   fieldName1,fieldName2,fieldName3
   aaa,bbb,ccc
   zzz,yyy,xxx

and t.show produces the expected result.
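
For reference, this is roughly what t.show prints for the sample above (reconstructed by hand from the sample data, not captured from an actual run):

    +----------+----------+----------+
    |fieldName1|fieldName2|fieldName3|
    +----------+----------+----------+
    |       aaa|       bbb|       ccc|
    |       zzz|       yyy|       xxx|
    +----------+----------+----------+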

I need the inverse: to write a standard CSV file (not a directory of non-standard files).

It is very frustrating not to get the inverse result when write is used. Maybe some other option or some kind of format ("REAL CSV, please!") exists.


NOTES

I am using Spark v2.2 and running the tests in the Spark shell.

The "syntatical inverse" of read is write, so is expected to produce same file format with it. But the result of

   t.write.format("csv").option("header", "true").save("file:///home/xyz/user/t-writed.csv")

is not a CSV file in the RFC 4180 standard format, like the original t.csv, but a t-writed.csv/ folder containing the files part-00000-66b020ca-2a16-41d9-ae0a-a6a8144c7dbc-c000.csv.deflate and _SUCCESS, which looks like "Parquet", "ORC" or some other format.
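
For reference, the workaround that other answers usually suggest is to force a single partition and then move the lone part file out of that folder. A minimal Scala sketch (the temporary and final paths here are placeholders, and coalesce(1) is only reasonable when the whole dataset fits on one executor):

    import org.apache.hadoop.fs.{FileSystem, Path}

    // 1. Write everything into ONE part file inside a temporary folder.
    val tmpDir = "file:///home/xyz/user/t-tmp"      // placeholder path
    t.coalesce(1)
      .write
      .mode("overwrite")
      .option("header", "true")
      .option("compression", "none")                // avoid the .deflate suffix seen above
      .csv(tmpDir)

    // 2. Move the single part-*.csv out of the folder, then drop the folder.
    val conf = spark.sparkContext.hadoopConfiguration
    val fs   = new Path(tmpDir).getFileSystem(conf)
    val partFile = fs.globStatus(new Path(tmpDir + "/part-*.csv"))(0).getPath
    fs.rename(partFile, new Path("file:///home/xyz/user/t-single.csv"))  // placeholder name
    fs.delete(new Path(tmpDir), true)

Spark itself always writes a directory, because each task writes its own part file; the rename step is what turns the result back into a single plain RFC 4180-style file.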

Any language with a complete toolkit that can "read something" should be able to "write that something"; it is a kind of orthogonality principle.

Similar questions that do not solve it

Similar questions or links that did not solve the problem, perhaps because they used an incompatible Spark version, or perhaps because the Spark shell has a limitation in using them. They have good clues for experts:

Kafir answered 27/9, 2019 at 23:29 Comment(10)
simple small and standard CSV file <-- there's no such thing... A CSV file is simple, for humans. It is, basically, uncompressed text, so it can't be small. And there's no standard CSV.Tomfoolery
@IsmaelMiguel, sorry, I corrected the question's text. I am using CSV files to read/write configuration and to post the results of (big data) summarizations... Small CSV files, not "big data CSV".Kafir
very simple (one line) -> Note that putting all your code on one line does not make it simpler. Typically it will be harder to read, understand and reason about, not easier, if you put more than one statement or function call on a line.Dodgson
@JochemKuijpers, makes sense; I edited the question, but that is not the point.Kafir
@PeterKrauss Can you give an example of the formatting issue? It's hard for us to think about any of this without replicating the set-up. Do you need spark to produce the CSV in a format you like, or is it okay to do post-processing on it?Dodgson
Hi @JochemKuijpers, see the NOTES: are they not complete? There is a description of the function and of its ugly result.Kafir
https://mcmap.net/q/342050/-how-to-export-data-from-spark-sql-to-csv might help. Other than this quick search on existing questions, I'm not able to help you, I'm afraid.Dodgson
thanks for the link @JochemKuijpers, I tried it... But the result of my tests on Spark v2.2, in spark-shell, is the same as I reported: the result is not a file but a folder with ugly files... I tried t.write.option("header", "true").csv("file:///C:/out.csv").Kafir
@IsmaelMiguel tools.ietf.org/html/rfc4180Leanto
@PeterKrauss For what it's worth, I agree with your core premise - spark has done something quite nasty here by having .write.format("csv") be unable to generate something that can in turn be re-read by .read.format("csv").Accustom
If you're using Spark because you're working with "big"* datasets, you probably don't want to do anything like coalesce(1) or toPandas(), since that will most likely crash your driver (the whole dataset has to fit in the driver's RAM, which it usually does not).

On the other hand: If your data does fit into the RAM of a single machine - why are you torturing yourself with distributed computing?

*Definitions vary. My personal one is "does not fit in an Excel sheet".

Culmination answered 25/11, 2019 at 14:45 Comment(4)
No, the "Big Data Universe" is not an island (!), I need to interact with small datasets to join and normalize data, or to generate and publish summarizations... So, as expressed in the question, I need to generate standard files for CSV or JSON little files (in real-world for summarizations or for update datasets of joins -- see link). All programmers and Spark-data analists not say but do it... But with Scala the source-code that I have access are all ugly using direct println() to generate JSON and CSV small files.Kafir
"summarizations", Big Data is reduced to small data by aggregate functions cwiki.apache.org/confluence/display/Hive/…Kafir
k, got it. What's the next tool in your pipeline?Culmination
I was looking for standard Java packages or a GitHub Scala CSV writer... Anything that is (reliable and) easy to install and maintain. Any suggestions?Kafir
if the dataframe is not too large you can try:

df.toPandas().to_csv(path)

if the dataframe is large you may get out of memory errors or too many open files errors.

Charbonneau answered 8/11, 2019 at 21:48 Comment(1)
Hi, good answer (!). Pandas has good plugins for many frameworks, and in particular its DataFrame is compatible with Apache Spark... But unfortunately Pandas is not a standard module of the "Spark ecosystem", so there is no toPandas(), for example, in Scala Spark. The main standard methods are in Scala, or in all of Scala/Python/Java.Kafir
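
For results small enough to collect to the driver (the summarizations mentioned above), a rough Scala stand-in for the toPandas() trick is plain Java I/O. A sketch, assuming the DataFrame t from the question; the output path and the quoting helper are my own, not part of any Spark API:

    import java.io.PrintWriter

    // Quote a field only when RFC 4180 requires it (comma, quote or newline inside).
    def quote(s: String): String = {
      val v = if (s == null) "" else s
      if (v.exists(c => c == ',' || c == '"' || c == '\n' || c == '\r'))
        "\"" + v.replace("\"", "\"\"") + "\""
      else v
    }

    // Only safe for small results: collect() pulls everything into driver memory.
    val header = t.columns.map(quote).mkString(",")
    val rows   = t.collect().map(_.toSeq.map(v => quote(if (v == null) "" else v.toString)).mkString(","))

    val out = new PrintWriter("/home/xyz/user/t-small.csv")   // placeholder path
    try {
      out.println(header)                 // note: println uses the platform line ending;
      rows.foreach(r => out.println(r))   // strict RFC 4180 would want CRLF
    } finally out.close()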
