PySpark error: "Py4JJavaError: An error occurred while calling o655.count." when calling the count() method on a dataframe

I'm new to Spark and I'm using PySpark 2.3.1 to read a CSV file into a dataframe. I'm able to read the file and print values in a Jupyter notebook running within an Anaconda environment. This is the code I'm using:

from pyspark import sql
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

# Start session
spark = SparkSession \
    .builder \
    .appName("Embedding Models") \
    .config('spark.ui.showConsoleProgress', 'true') \
    .config("spark.master", "local[2]") \
    .getOrCreate()

sqlContext = sql.SQLContext(spark)
schema = StructType([
    StructField("Index", IntegerType(), True),
    StructField("title", StringType(), True),
    StructField("body", StringType(), True)])

df = sqlContext.read.csv("../data/faq_data.csv",
                         header=True,
                         mode="DROPMALFORMED",
                         schema=schema)

Output:

df.show()

+-----+--------------------+--------------------+
|Index|               title|                body|
+-----+--------------------+--------------------+
|    0|What does “quantu...|Quantum theory is...|
|    1|What is a quantum...|A quantum compute...|

However, when I call the .count() method on the dataframe, it throws the error below:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-29-913a2f9eb5fc> in <module>()
----> 1 df.count()

~/anaconda3/envs/Community/lib/python3.6/site-packages/pyspark/sql/dataframe.py in count(self)
    453         2
    454         """
--> 455         return int(self._jdf.count())
    456 
    457     @ignore_unicode_prefix

~/anaconda3/envs/Community/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

~/anaconda3/envs/Community/lib/python3.6/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

~/anaconda3/envs/Community/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o655.count.
: java.lang.IllegalArgumentException
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
    at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2299)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2073)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
    at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
    at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
    at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:844)

I'm using Python 3.6.5 if that makes a difference.

Roselane answered 21/8, 2018 at 15:55 Comment(1)
I am on exactly the same Python and PySpark versions and experiencing the same error. The data used in my case can be generated with Rscript groupby-datagen.R 1e6 1e2, using the groupby-datagen.R file. - Sandglass

What Java version do you have on your machine? Your problem is probably related to Java 9: the java.base/jdk.internal.reflect frames in your traceback only appear on Java 9 or newer, and Spark 2.3.x supports Java 8 only.

If you download Java 8, the exception will disappear. If you already have Java 8 installed, just change JAVA_HOME to point to it.
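
A minimal sketch of checking and fixing this from inside the notebook (the Java 8 path below is just an example; substitute the Java 8 home on your machine), run before the SparkSession is created:

import os
import subprocess

# See which JVM the Py4J gateway will launch; Spark 2.3.x needs Java 8.
# Note that `java -version` prints to stderr, not stdout.
result = subprocess.run(["java", "-version"], stderr=subprocess.PIPE)
print(result.stderr.decode())

# Point JAVA_HOME at a Java 8 install *before* getOrCreate() is called.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"  # example path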

Carport answered 25/9, 2018 at 21:9 Comment(4)
This could be the issue, as my default Java version points to 10 while JAVA_HOME is manually set to Java 8 for working with Spark. Will try to confirm it soon. - Sandglass
Yes, that was it. This has bitten me a second time now. The lack of a meaningful error about an unsupported Java version is appalling. - Sandglass
Well, my JAVA_HOME points to Java 8 (jdk1.8.0_281.jdk/Contents/Home) but I'm still getting the same error: py4j.protocol.Py4JJavaError: An error occurred while calling - Tunny
I followed the steps above, installed Java 8, and modified the environment variable path, but it still does not work for me. - Annular

Could you try df.repartition(1).count() and len(df.toPandas())?

If those work, then the problem is most probably in your Spark configuration.
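
As a sketch of what each diagnostic isolates, assuming df is the dataframe from the question:

# Runs the count as a single-partition job; if this succeeds while
# df.count() fails, suspect how work is split across executors.
print(df.repartition(1).count())

# Collects all rows to the driver and counts them in plain Python,
# sidestepping the JVM count job (only sensible for small data).
print(len(df.toPandas()))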

Dyson answered 27/9, 2018 at 12:39 Comment(2)
What does it indicate if this fails? In particular, the df.repartition(1).count() step fails. - Bernete
What does it mean if it fails? - Lapse

On Linux, installing Java 8 as follows will help:

sudo apt install openjdk-8-jdk

Then set the default Java to version 8 using:

sudo update-alternatives --config java

When it asks you to choose, enter the selection number that corresponds to Java 8 (e.g. 2) and press Enter.
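
To confirm the switch took effect, here is a quick check from PySpark; note that _jvm is a private Py4J handle, so treat this as a diagnostic sketch rather than a public API:

from pyspark.sql import SparkSession

# Ask the JVM that Spark actually launched which Java it is running.
spark = SparkSession.builder.master("local[2]").getOrCreate()
print(spark.sparkContext._jvm.System.getProperty("java.version"))  # expect 1.8.x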

Crassus answered 28/10, 2019 at 19:19 Comment(0)

Without being able to actually see the data, I would guess that it's a schema issue. I would recommend loading a smaller sample of the data, where you can ensure there are only 3 columns, to test that.

Since it's a CSV, another simple test is to load the data and split it by newline and then by comma, to check whether anything is breaking your file (see the sketch below).
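
A minimal version of that check in plain Python, assuming the same relative path as in the question (the naive split ignores quoted commas, so treat hits as candidates rather than proof):

# Flag any row whose naive comma split doesn't yield the 3 expected fields.
with open("../data/faq_data.csv", encoding="utf-8") as f:
    next(f)  # skip the header row
    for lineno, line in enumerate(f, start=2):
        fields = line.rstrip("\n").split(",")
        if len(fields) != 3:
            print(lineno, len(fields), line[:80])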

I've definitely seen this before, but I can't remember exactly what was wrong.

Entertain answered 25/9, 2018 at 18:34 Comment(2)
A script to reproduce the data has been provided; it produces a valid CSV that has been read properly in multiple languages: R, Python, Scala, Java, Julia. - Sandglass
Possibly a data issue, at least in my case. I had to drop and recreate the source table with refreshed data and then it worked fine. - Functional