Unsupported literal type class scala.runtime.BoxedUnit
I am trying to filter a column of a DataFrame read from Oracle, as shown below:

import org.apache.spark.sql.functions.{col, lit, when}

val df0 = df_org.filter(col("fiscal_year").isNotNull())

When I run this, I get the following error:

java.lang.RuntimeException: Unsupported literal type class scala.runtime.BoxedUnit ()
at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:77)
at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create$2.apply(literals.scala:163)
at scala.util.Try.getOrElse(Try.scala:79)
at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:162)
at org.apache.spark.sql.functions$.typedLit(functions.scala:113)
at org.apache.spark.sql.functions$.lit(functions.scala:96)
at org.apache.spark.sql.Column.apply(Column.scala:212)
at com.snp.processors.BenchmarkModelValsProcessor2.process(BenchmarkModelValsProcessor2.scala:80)
at com.snp.utils.Utils$$anonfun$getAllDefinedProcessors$1.apply(Utils.scala:30)
at com.snp.utils.Utils$$anonfun$getAllDefinedProcessors$1.apply(Utils.scala:30)
at com.sp.MigrationDriver$$anonfun$main$6$$anonfun$apply$2.apply(MigrationDriver.scala:140)
at com.sp.MigrationDriver$$anonfun$main$6$$anonfun$apply$2.apply(MigrationDriver.scala:140)
at scala.Option.map(Option.scala:146)
at com.sp.MigrationDriver$$anonfun$main$6.apply(MigrationDriver.scala:138)
at com.sp.MigrationDriver$$anonfun$main$6.apply(MigrationDriver.scala:135)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.MapLike$DefaultKeySet.foreach(MapLike.scala:174)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at com.sp.MigrationDriver$.main(MigrationDriver.scala:135)
at com.sp.MigrationDriver.main(MigrationDriver.scala)

Any idea what I am doing wrong here and how to fix it?

Inappreciable answered 19/11, 2018 at 12:37 Comment(9)
please add more information about versions of Spark, and Spark Cassandra connector...Cromlech
@AlexOtt , sir Here are the version details : scala - 2.11 spark - 2.3.1 cassandra - 3.11.1Inappreciable
and spark-cassandra-connector version?Cromlech
what do you mean by "trying to filter a column of a dataframe"? can you elaborate that?Renita
@RameshMaharjan , column "fiscal_year" seems to have some null values , hence failing to load into cassandra ...so from dataframe filtering out those records.Inappreciable
@AlexOtt sir, its <artifactId>spark-cassandra-connector_2.11</artifactId> <version>2.3.0</version>Inappreciable
check this #39728242 for filtering and you can check my answer too #50479012Renita
@RameshMaharjan I am getting similar error while filtering ...how to fix it .... result_df.filter( col("indicator") === lit('N')) .... ERROR ::: RuntimeException: Unsupported literal type class java.lang.Character NInappreciable
isn't the error message clear enough? @user3252097 ? character is not supported in lit functionRenita
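Editorial note on the Character error raised in the comments above: Spark's lit cannot build a Catalyst literal from a Scala Char, but it can from a one-character String. A minimal sketch, assuming a DataFrame result_df with a string column indicator (names taken from the comment):

```scala
import org.apache.spark.sql.functions.{col, lit}

// 'N' is a Char, which lit rejects ("Unsupported literal type class java.lang.Character");
// "N" is a String, which lit supports.
val filtered = result_df.filter(col("indicator") === lit("N"))

// Equivalently, === wraps a plain String in a literal itself:
val filtered2 = result_df.filter(col("indicator") === "N")
```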
Try removing () in isNull() in your filter.

Matchbox answered 18/3, 2019 at 21:5 Comment(3)
What? there is no isNull in the OP's questionImpulsive
he means to remove the parenthesis on isNotNull, which is the correct answer.Taft
So this totally worked for me but WHY? What is going on?Farthermost

Just remove the parentheses from the function call:

from:
val df0 = df_org.filter(col("fiscal_year").isNotNull())
to:
val df0 = df_org.filter(col("fiscal_year").isNotNull)

Taft answered 2/5, 2019 at 11:32 Comment(0)
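
Editorial note on why removing the parentheses works: Column.isNotNull is a parameterless method that returns a Column, and Column also defines apply(extraction: Any). So isNotNull() compiles as isNotNull.apply(()), with Scala 2 adapting the empty argument list to the Unit value (it even emits a deprecation warning about this). Spark then tries to turn that Unit (scala.runtime.BoxedUnit) into a literal inside apply, which is exactly the exception in the question. A minimal plain-Scala sketch of the mechanism (this Column class is a simplified stand-in, not Spark's):

```scala
class Column {
  // Parameterless method, like Spark's Column.isNotNull
  def isNotNull: Column = this

  // Like Spark's Column.apply(extraction: Any), which calls lit(extraction)
  def apply(extraction: Any): Unit =
    println(s"apply received: $extraction")
}

val c = new Column
c.isNotNull()   // compiles as c.isNotNull.apply(()) -- apply receives Unit
c.isNotNull     // fine: no apply call, no Unit argument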
