What is the impact of the Spark UI on an application's memory usage?
I have a Spark application (2.4.5) that uses Kafka as its source with large batch windows (5 minutes); in our application we only really care about the RDD from that specific interval when processing data.

What is happening is that our application is crashing from time to time with either an OutOfMemory error on the driver (running in client mode) or a GC-related OutOfMemory error on the executors. After a lot of research, it seemed that we were not handling state properly, which was causing the lineage to grow indefinitely. We considered fixing the problem either by using a batch approach, where we control the offsets fetched from Kafka and create the RDDs from them (which would truncate the lineage), or by enabling checkpointing.

During the investigation, someone found a loosely related issue (YARN heap usage growing over time) that was solved by tweaking some UI parameters:

  • spark.ui.retainedJobs=50
  • spark.ui.retainedStages=50
  • spark.ui.retainedTasks=500
  • spark.worker.ui.retainedExecutors=50
  • spark.worker.ui.retainedDrivers=50
  • spark.sql.ui.retainedExecutions=50
  • spark.streaming.ui.retainedBatches=50
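For reference, these settings are typically passed to spark-submit as `--conf` flags. A small, hypothetical helper that builds those flags from the values above (the values are just the ones from our tests, not recommended defaults):

```python
# Build spark-submit --conf flags for the UI retention settings listed above.
# The values are the ones we experimented with, not official recommendations.
retention_conf = {
    "spark.ui.retainedJobs": 50,
    "spark.ui.retainedStages": 50,
    "spark.ui.retainedTasks": 500,
    "spark.worker.ui.retainedExecutors": 50,
    "spark.worker.ui.retainedDrivers": 50,
    "spark.sql.ui.retainedExecutions": 50,
    "spark.streaming.ui.retainedBatches": 50,
}

flags = " ".join(f"--conf {key}={value}" for key, value in retention_conf.items())
print(flags)
```

The same values can also be set in `spark-defaults.conf` or on a `SparkConf` object before the session is created.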

Since these are UI parameters, it doesn't make sense to me that they would affect the application's memory usage unless they affect the way applications store information to send to the UI. Early tests show that the application is indeed running longer without OOM issues.

Can anyone explain what impact these parameters have on applications? Can they really affect memory usage? Are there any other parameters I should look into to get the whole picture? (I'm wondering if there is a "factor" parameter that needs to be tweaked so memory allocation is appropriate for our case.)

Thank you

Gothard answered 21/10, 2020 at 18:0 Comment(0)

After a lot of testing, our team managed to narrow the problem down to this particular parameter:

spark.sql.ui.retainedExecutions

I decided to dig in, so I downloaded Spark's source code. I found that information about the Parsed Logical Plan is not only kept in the application's memory but that its retention is also controlled by this parameter.

When a SparkSession is created, one of the many objects instantiated is the SQLAppStatusListener. This class implements two relevant methods:

onExecutionStart - On every execution, creates a new SparkPlanGraphWrapper, which holds references to the Parsed Logical Plan, and adds it to a SharedState object, which in this case keeps track of how many instances of the object have been created.

cleanupExecution - Removes the SparkPlanGraphWrapper from the SharedState object if the number of stored objects is greater than the value of spark.sql.ui.retainedExecutions, which defaults to 1000.
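The retention behaviour described above can be thought of as a bounded store: each new execution adds a plan-graph wrapper, and once the count exceeds the limit, the oldest entries are evicted. A simplified Python model of that idea (this is not Spark's actual code; the class and method names are illustrative):

```python
from collections import OrderedDict

class RetainedExecutionsStore:
    """Toy model of spark.sql.ui.retainedExecutions eviction (illustrative only)."""

    def __init__(self, retained_executions=1000):
        # 1000 mirrors the default of spark.sql.ui.retainedExecutions.
        self.retained = retained_executions
        self.executions = OrderedDict()  # execution_id -> plan graph payload

    def on_execution_start(self, execution_id, plan_graph):
        # Analogous to onExecutionStart: keep a reference to the plan graph.
        self.executions[execution_id] = plan_graph
        self._cleanup()

    def _cleanup(self):
        # Analogous to cleanupExecution: evict oldest entries beyond the limit.
        while len(self.executions) > self.retained:
            self.executions.popitem(last=False)

# With the limit lowered to 50, only the 50 most recent plans stay in memory.
store = RetainedExecutionsStore(retained_executions=50)
for i in range(200):
    store.on_execution_start(i, plan_graph=b"x" * 1024)
print(len(store.executions))  # 50
```

This is why lowering the setting bounds the driver's memory growth: older plan graphs become eligible for garbage collection instead of accumulating.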

In our case specifically, each logical plan was taking about 4 MB of memory, so, simplistically, we would have needed roughly 4 GB of memory just to accommodate the default number of retained executions.

Gothard answered 23/10, 2020 at 20:59 Comment(2)
learnsome. i cannot glean that from the manuals or elsewhere. hard to follow on the GC side. assuming correct so upvoted. that said, there are a couple of other aspects that are hard to follow with spark. i am not convinced all know what they are dealing with.Thielen
@andre How did you find out memory usage of 4MB by logical plan?Bossy
