Troubleshooting JVM CPU spikes

We're seeing an interesting (though quite severe) issue with one of our application servers: at a certain point in time, the CPU usage of the JVM running our web applications starts rising and keeps on rising until the applications eventually slow down to a crawl. The only way to fix it is to restart the Application server software.

  • Application server: Spring tc Server (as the servers are hosted elsewhere, I currently do not know the exact version)
  • Applications: relatively standard Spring 3 web applications (we do use an in-JVM EHCache though)

This brings me to a simple question; what can we do to troubleshoot this?

I have considered using VisualVM (or some other JVM monitoring tool), but the best they can do - in this particular case - is give me a thread dump, which still will not tell me what is eating up all the CPU time (unless I'm missing something).

Baggy answered 30/1, 2013 at 11:3 Comment(0)

You need to find out what it is doing. A common cause of this problem is running low on free memory. If that is not the cause, a CPU profiler is needed. VisualVM comes free with the JDK and can do both for you.
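
For example, a quick way to check the "running low on free memory" theory without attaching a profiler is jstat, which also ships with the JDK. A minimal sketch, assuming <PID> is the JVM's process id and a 5-second sampling interval:

    # Sample heap occupancy and GC counters every 5 seconds
    jstat -gcutil <PID> 5000

If the old-generation (O) column sits near 100% and the FGC/FGCT counters keep climbing, the CPU is most likely being burnt by back-to-back full GCs rather than by application code.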

When this is happening, you can do ad hoc profiling by calling jstack multiple times, a few seconds apart. Diffing the stack traces helps you find the threads that are likely to be busy and consuming CPU.
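
A minimal sketch of that ad hoc approach (the PID and dump file names are placeholders):

    # Take a handful of thread dumps a few seconds apart
    for i in 1 2 3 4 5; do
        jstack <PID> > jstack.$i.txt
        sleep 5
    done

    # Threads showing the same hot stack in every dump are the likely CPU consumers
    diff jstack.1.txt jstack.5.txt | less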

Statist answered 30/1, 2013 at 11:10 Comment(1)
Memory consumption was our first guess, but it was in fact normal (with plenty to spare). Using a profiler is a good suggestion; only you can't profile the application all the time (this hadn't happened for 2 weeks until now). - Baggy

The CPU-consuming threads are typically either stuck or hogging application threads, or GC threads if a Full GC is being performed.

If you are on a Unix-based environment, executing the command below will show the per-thread CPU utilization of the JVM (<PID> is the process id of the JVM):

prstat -L -p <PID> 

A thread dump (kill -3 <PID>) can subsequently be taken, and the prstat output and the thread dump can then be correlated to find which threads are using the most CPU.

The nid in the thread dump corresponds to the HEX value of the LWPID in the output of prstat.

E.g. LWPID java/75 corresponds to nid=0x4b.
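
As an illustration of that correlation step (75 is the LWPID from the example above; the dump file name threaddump.txt is a placeholder):

    # Convert the busy LWPID reported by prstat to hex...
    printf '0x%x\n' 75          # prints 0x4b

    # ...and look up that thread in the thread dump by its nid
    grep -i 'nid=0x4b' threaddump.txt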

Once the CPU-consuming threads are identified in the thread dump, you have a pointer to where the investigation should start.

Additionally, running a profiler (e.g. JProfiler) will be helpful.

Agnesagnese answered 30/1, 2013 at 11:18 Comment(0)

One of the reasons for high CPU usage could be that a Full GC is being performed. You can look at http://www.cubrid.org/blog/dev-platform/how-to-monitor-java-garbage-collection/ to understand how to monitor GC in a JVM.
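
If you cannot attach a monitoring tool while the problem is happening, GC activity can also be recorded up front with the standard HotSpot GC-logging options (Java 7/8-era flags; the JAVA_OPTS variable and log path are just examples of how an application server might pass them):

    # Record GC activity to a log file for later analysis
    JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/app/gc.log"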

If there is no GC activity, you need to take thread dumps with jstack, multiple times a few seconds (5-10) apart, whenever the CPU usage crosses a certain threshold. You can achieve this through a background process that monitors the CPU usage of the JVM process using the Unix top command.
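
A rough sketch of such a background monitor, assuming a Linux top and an 80% threshold (the PID, threshold, and output paths are placeholders, and the %CPU column position may differ on your system):

    #!/bin/sh
    # Poll the JVM's CPU usage; when it exceeds the threshold, take a few thread dumps
    PID=<PID>
    while true; do
        CPU=$(top -b -n 1 -p "$PID" | awk -v pid="$PID" '$1 == pid { print int($9) }')
        if [ "${CPU:-0}" -gt 80 ]; then
            for i in 1 2 3; do
                jstack "$PID" > "/tmp/jstack.$(date +%s).$i.txt"
                sleep 5
            done
        fi
        sleep 10
    done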

Once the thread dumps are available, you can run them through a tool like VisualVM or Samurai that will help you narrow down the root cause.

Portsmouth answered 7/10, 2014 at 17:54 Comment(0)
