How to read all files in S3 folder/bucket using sparklyr in R?
I have tried the code below, and several combinations of it, in order to read all of the files in an S3 folder, but nothing seems to work. Sensitive information/code has been removed from the script. There are 6 files, each about 6.5 GB.

# Spark connection (the config object holds settings removed as sensitive)
library(sparklyr)
sc <- spark_connect(master = "local", config = config)

rd_1 <- spark_read_csv(sc, name = "Retail_1",
                       path = "s3a://mybucket/xyzabc/Retail_Industry/*/*",
                       header = FALSE, delimiter = "|")


# This is the S3 bucket/folder that holds the files (one of the file names is Industry_Raw_Data_000)
s3://mybucket/xyzabc/Retail_Industry/Industry_Raw_Data_000

This is the error I get:

Error: org.apache.spark.sql.AnalysisException: Path does not exist: s3a://mybucket/xyzabc/Retail_Industry/*/*;
at org.apache.spark.sql.execution.datasources.DataSource$.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:710)
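For reference, these are the kinds of path combinations I tried (a minimal sketch; the bucket and folder names are placeholders, and it assumes the files sit directly under Retail_Industry rather than in subfolders):

# Combinations tried: the folder itself and a single-level wildcard
rd_1 <- spark_read_csv(sc, name = "Retail_1",
                       path = "s3a://mybucket/xyzabc/Retail_Industry/",
                       header = FALSE, delimiter = "|")
rd_1 <- spark_read_csv(sc, name = "Retail_1",
                       path = "s3a://mybucket/xyzabc/Retail_Industry/*",
                       header = FALSE, delimiter = "|")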
Hildie answered 3/12, 2018 at 6:42

After spending a few weeks googling this issue, it is solved. Here is the solution.

library(sparklyr)

# AWS credentials (placeholder values)
Sys.setenv(AWS_ACCESS_KEY_ID = "abc")
Sys.setenv(AWS_SECRET_ACCESS_KEY = "xyz")

config <- spark_config()

# Packages pulled in at connection time: CSV reader plus the S3/AWS connectors
config$sparklyr.defaultPackages <- c(
  "com.databricks:spark-csv_2.10:1.5.0",
  "com.amazonaws:aws-java-sdk-pom:1.10.34",
  "org.apache.hadoop:hadoop-aws:2.7.3")



# Spark connection
sc <- spark_connect(master = "local", config = config)

# Hadoop configuration: wrap the SparkContext in a JavaSparkContext,
# then grab its Configuration object
ctx <- spark_context(sc)
jsc <- invoke_static(
  sc,
  "org.apache.spark.api.java.JavaSparkContext",
  "fromSparkContext",
  ctx
)

hconf <- jsc %>% invoke("hadoopConfiguration")
hconf %>% invoke("set", "com.amazonaws.services.s3a.enableV4", "true")
hconf %>% invoke("set", "fs.s3a.fast.upload", "true")

folder_files <- "s3a://mybucket/abc/xyz"

rd_11 <- spark_read_csv(sc, name = "Retail", path = folder_files,
                        infer_schema = TRUE, header = FALSE,
                        delimiter = "|")
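One follow-up note of mine, not part of the original answer: with six files of roughly 6.5 GB each (about 39 GB total) on master = "local", the default eager cache of spark_read_csv can run out of memory. The memory argument turns that cache off so the data is scanned lazily:

# memory = FALSE skips caching the table into memory after the read,
# which matters when the CSVs are far larger than available RAM
rd_11 <- spark_read_csv(sc, name = "Retail", path = folder_files,
                        infer_schema = TRUE, header = FALSE,
                        delimiter = "|", memory = FALSE)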


spark_disconnect(sc)
Hildie answered 7/12, 2018 at 13:44 (2 comments)
Same issue here: which config settings take the Amazon access key and secret key? - Isotone
I still receive an Error: java.nio.file.AccessDeniedException: s3a://... message. However, I can read the same file with the aws.s3 package with no problem at all. Any help would be appreciated. PS: I'm doing this in a Databricks notebook. See issue: github.com/sparklyr/sparklyr/issues/3254 - Mneme
