How to read CSV into SparkR ver 1.4?

With the new version of Spark (1.4) there is now a nice front-end interface to Spark from R, in a package named SparkR. On the SparkR documentation page there is a command that reads JSON files as DataFrame objects:

people <- read.df(sqlContext, "./examples/src/main/resources/people.json", "json")
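
For what it's worth, the returned object is a SparkR DataFrame (not a plain RDD), so it can be inspected directly; a quick sketch using SparkR's built-in helpers:

printSchema(people)  # show the schema inferred from the JSON
head(people)         # bring the first few rows back as an R data.frame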

I am trying to read data from a .csv file, as described on this Revolution Analytics blog post:

# Download the nyc flights dataset as a CSV from https://s3-us-west-2.amazonaws.com/sparkr-data/nycflights13.csv

# Launch SparkR using 
# ./bin/sparkR --packages com.databricks:spark-csv_2.10:1.0.3

# The SparkSQL context should already be created for you as sqlContext
sqlContext
# Java ref type org.apache.spark.sql.SQLContext id 1

# Load the flights CSV file using `read.df`. Note that we use the CSV reader Spark package here.
flights <- read.df(sqlContext, "./nycflights13.csv", "com.databricks.spark.csv", header="true")

The note says I need the spark-csv package to enable this operation. So I fetched this package, from this GitHub repo, with this command:

$ bin/spark-shell --packages com.databricks:spark-csv_2.10:1.0.3

But then I encountered the following error while trying to read the .csv file:

> flights <- read.df(sqlContext, "./nycflights13.csv", "com.databricks.spark.csv", header="true")
15/07/03 12:52:41 ERROR RBackendHandler: load on 1 failed
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:127)
    at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:74)
    at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:36)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Failed to load class for data source: com.databricks.spark.csv
    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.sources.ResolvedDataSource$.lookupDataSource(ddl.scala:216)
    at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:229)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
    at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1230)
    ... 25 more
Error: returnStatus == 0 is not TRUE

Any idea what this error means and how to solve it?

Of course I could read the .csv file in the standard R way, such as:

flights <- read.csv("data.csv")  # read.csv, not read.table, so commas are treated as separators

and then transform the R data.frame into a Spark DataFrame like this:

flightsDF <- createDataFrame(sqlContext, flights)

But this isn't the approach I'd like, and it is really time-consuming.

Repentance asked 3/7, 2015 at 10:50

You have to start the sparkR console (not spark-shell) with the package each time, like this:

sparkR --packages com.databricks:spark-csv_2.10:1.0.3
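
Once the console is up with the package on its classpath, the read.df call from the question should work as written. A minimal sketch (path and options taken from the question):

# sqlContext is already created for you in the sparkR shell
flights <- read.df(sqlContext, "./nycflights13.csv",
                   source = "com.databricks.spark.csv", header = "true")
head(flights)  # pull the first rows back into R to verify the load
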
Tattered answered 3/7, 2015 at 12:26

If you are using RStudio:

library(SparkR)
Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.10:1.0.3" "sparkr-shell"')
sc <- sparkR.init()               # the env var must be set before the context is created
sqlContext <- sparkRSQL.init(sc)

does the trick. Make sure the version you specify for spark-csv matches the one you downloaded.
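
With both contexts initialized this way, the CSV loads just as it does from the shell; a sketch assuming the file sits in the working directory:

flights <- read.df(sqlContext, "./nycflights13.csv",
                   source = "com.databricks.spark.csv", header = "true")
printSchema(flights)  # confirm columns were parsed using the header row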

Scrawl answered 9/12, 2015 at 0:11
Do you know if this can be passed to sparkR.init with the sparkJars parameter? – Repentance

Make sure that you install SparkR from within the Spark distribution using:

install.packages("C:/spark/R/lib/sparkr.zip", repos = NULL)

and not from GitHub.

That solved it for me.

Overbalance answered 21/10, 2016 at 18:55
I had a proper installation of Spark ver 1.4 but could not load spark-csv data, as at that time the additional package was needed. The solution was written here a year ago, so please don't spam: https://mcmap.net/q/1412463/-how-to-read-csv-into-sparkr-ver-1-4 – Repentance