spark-shell - How to avoid suppression of the elided stack trace (exceptions)

I am trying to run one of my Scala files from spark-shell. This file calls some other jar files which have already been loaded into the Spark context.

The problem is that if something fails, it only prints part of the stack trace. Is there any way I can enable the whole stack trace?

myclassn: ClassifyFields : queryDb -> Problems extracting from DB
  at myclass.queryDb(ClassifyFields.java:231)
  at myclass.getColumnsWithKeys(ClassifyFields.java:258)
  ... 78 elided
Tver answered 6/5/2020 at 14:07 Comment(1)
Try using a try/catch block and, inside the catch block, write the stack trace to a file. – Ingesta
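
For illustration, a minimal sketch of that suggestion (the failing call, the exception message, and the log path are placeholders, not from the original post):

import java.io.{File, PrintWriter}

// stand-in for the call that fails, e.g. myclass.queryDb(...)
def riskyCall(): Unit = throw new RuntimeException("Problems extracting from DB")

try {
  riskyCall()
} catch {
  case e: Throwable =>
    // write the complete stack trace to a file instead of relying on the REPL's truncated output
    val out = new PrintWriter(new File("/tmp/spark-shell-error.log"))
    try e.printStackTrace(out) finally out.close()
    throw e // rethrow so the shell still reports the failure
}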

Set the value below to zero:

vals.isettings.maxPrintString=0

as in the example below:

scala> :power
Power mode enabled. :phase is at typer.
import scala.tools.nsc._, intp.global._, definitions._
Try :help or completions for vals._ and power._

scala> vals.isettings.maxPrintString
res0: Int = 800

scala> vals.isettings.maxPrintString=0
vals.isettings.maxPrintString: Int = 0

scala> vals.isettings.maxPrintString
res1: Int = 0

Or, after the exception has happened, use the following:

lastException.printStackTrace
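
Putting the two steps together, a sketch of the sequence in a single spark-shell session (myclass.queryDb is just a stand-in for whatever call produced the elided trace):

scala> myclass.queryDb()               // the call that fails with "... 78 elided"
scala> :power                          // enable power mode first, as shown above
scala> lastException.printStackTrace   // print the frames that were elided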
Crescen answered 6/5/2020 at 14:47 Comment(3)
Is there a way to do this from inside a Jupyter notebook with Spark Scala code? – Pachston
Proposed solution doesn't help :( – Rondelet
It does not help; I still see the same error stack trace. – Tome
