Well, though I got a -3 rating for my question, here I'm posting the solution that helped me address the problem. Being a techie, I always care more about the code/logic than about the grammar; at least for me, a short context is enough to understand the problem.
Coming to the solution:
When we create a .csv file from a Spark dataframe,
the output files are by default named part-x-yyyyy, where:
1) x is either 'm' or 'r', depending on whether the job was a map-only job or had a reduce phase
2) yyyyy is the mapper or reducer task number, starting at 00000 and counting up; newer Spark versions also append a random UUID to the name
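For context, here is a minimal sketch of the kind of write that produces those part files (the dataframe name df and the path are placeholders; Spark 2+ has the built-in csv source, older versions need the spark-csv package):

# minimal sketch: write a Spark dataframe to a CSV folder on HDFS
# (df and the path are placeholders; each partition becomes one part-* file)
df.write \
    .format("csv") \
    .option("header", "true") \
    .mode("overwrite") \
    .save("/path/to/output/folder/")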
To rename the output file, running an HDFS command through os.system does the job.
import os

# set the source (staging) folder and the target folder paths here; both should end with '/'
output_path_stage = "/path/to/source/folder/"
output_path = "/path/to/target/folder/"

# build the HDFS command that moves/renames the single part-* file
cmd2 = "hdfs dfs -mv " + output_path_stage + "part-*" + " " + output_path + "new_name.csv"

# execute the command through the shell
os.system(cmd2)
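Note that os.system only returns the shell's exit status, so a failed move can go unnoticed. If you'd rather have the script fail loudly, subprocess is an option (a small sketch reusing the same cmd2 string):

# optional: raise an exception if the hdfs dfs -mv command fails
import subprocess
subprocess.check_call(cmd2, shell=True)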
FYI, if we use the rdd.saveAsTextFile option, the file gets created with no header. If we use coalesce(1).write.format("com.databricks.spark.csv").option("header", "true").save(output_path), the file gets created with a random part-x name. The solution above lets us create a .csv file with a header and delimiter, along with the required file name.
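Putting the pieces together, a hedged end-to-end sketch (df, the paths, and the final file name are placeholders; use "com.databricks.spark.csv" on Spark 1.x with the spark-csv package, or the built-in "csv" on Spark 2+):

import os

stage_dir = "/path/to/stage/folder/"      # placeholder staging folder
final_csv = "/path/to/final/new_name.csv" # placeholder target file name

# coalesce(1) gives a single part-* file, written with a header and delimiter
df.coalesce(1).write \
    .format("csv") \
    .option("header", "true") \
    .option("delimiter", ",") \
    .mode("overwrite") \
    .save(stage_dir)

# rename that single part file to the required name
os.system("hdfs dfs -mv " + stage_dir + "part-* " + final_csv)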
hadoop fs -cat path/to/output/part-r-* > path/to/local/file.csv will dump all the parts from Hadoop into one file on your local disk. – Spasm

The part files are named part-#{partition number}-#{random uuid}-#{something}. AFAIK, the UUID is to allow multiple executors to write to the same directory without worrying about trying to write to the same file. – Delusive
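If the goal is a single file on the local disk rather than on HDFS, hadoop fs -getmerge does the same job as the -cat redirect above; a small sketch in the same os.system style (paths are placeholders):

import os

# merge every part file under the HDFS output folder into one local file
# (paths are placeholders; if each part was written with a header, the header repeats)
os.system("hadoop fs -getmerge /path/to/output/ /path/to/local/file.csv")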