Problems with Java garbage collector and memory

I am having a really weird issue with a Java application.

Essentially, it is a web application that uses Magnolia (a CMS); there are 4 instances in the production environment. Sometimes the CPU goes to 100% in a Java process.

So, the first approach was to take a thread dump and check the offending threads. What I found was weird:

"GC task thread#0 (ParallelGC)" prio=10 tid=0x000000000ce37800 nid=0x7dcb runnable 
"GC task thread#1 (ParallelGC)" prio=10 tid=0x000000000ce39000 nid=0x7dcc runnable 

OK, that is pretty weird; I have never had a problem with the garbage collector like that. So the next thing we did was to activate JMX and inspect the machine using jvisualvm: the heap memory usage was really high (95%).

Naive approach: increase memory so the problem takes longer to appear. Result: on the restarted server with increased memory (6 GB!) the problem appeared 20 hours after restart, while on other servers with less memory (4 GB!) that had been running for 10 days, the problem still took a few more days to reappear. I also tried to take the Apache access log from the failing server and use JMeter to replay the requests against a local server, in an attempt to reproduce the error... it did not work either.

Then I investigated the logs a little bit more and found these errors:

info.magnolia.module.data.importer.ImportException: Error while importing with handler [brightcoveplaylist]:GC overhead limit exceeded
at info.magnolia.module.data.importer.ImportHandler.execute(ImportHandler.java:464)
at info.magnolia.module.data.commands.ImportCommand.execute(ImportCommand.java:83)
at info.magnolia.commands.MgnlCommand.executePooledOrSynchronized(MgnlCommand.java:174)
at info.magnolia.commands.MgnlCommand.execute(MgnlCommand.java:161)
at info.magnolia.module.scheduler.CommandJob.execute(CommandJob.java:91)
at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549)
    Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded

Another example:

    Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOf(Arrays.java:2894)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:117)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:407)
    at java.lang.StringBuilder.append(StringBuilder.java:136)
    at java.lang.StackTraceElement.toString(StackTraceElement.java:175)
    at java.lang.String.valueOf(String.java:2838)
    at java.lang.StringBuilder.append(StringBuilder.java:132)
    at java.lang.Throwable.printStackTrace(Throwable.java:529)
    at org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:60)
    at org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
    at org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
    at org.apache.log4j.AsyncAppender.append(AsyncAppender.java:162)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
    at org.apache.log4j.Category.callAppenders(Category.java:206)
    at org.apache.log4j.Category.forcedLog(Category.java:391)
    at org.apache.log4j.Category.log(Category.java:856)
    at org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:576)
    at info.magnolia.module.templatingkit.functions.STKTemplatingFunctions.getReferencedContent(STKTemplatingFunctions.java:417)
    at info.magnolia.module.templatingkit.templates.components.InternalLinkModel.getLinkNode(InternalLinkModel.java:90)
    at info.magnolia.module.templatingkit.templates.components.InternalLinkModel.getLink(InternalLinkModel.java:66)
    at sun.reflect.GeneratedMethodAccessor174.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:622)
    at freemarker.ext.beans.BeansWrapper.invokeMethod(BeansWrapper.java:866)
    at freemarker.ext.beans.BeanModel.invokeThroughDescriptor(BeanModel.java:277)
    at freemarker.ext.beans.BeanModel.get(BeanModel.java:184)
    at freemarker.core.Dot._getAsTemplateModel(Dot.java:76)
    at freemarker.core.Expression.getAsTemplateModel(Expression.java:89)
    at freemarker.core.BuiltIn$existsBI._getAsTemplateModel(BuiltIn.java:709)
    at freemarker.core.BuiltIn$existsBI.isTrue(BuiltIn.java:720)
    at freemarker.core.OrExpression.isTrue(OrExpression.java:68)

Then I found out that this problem occurs because the garbage collector is using a ton of CPU but is not able to free much memory.

OK, so it is a problem with the MEMORY that manifests itself in the CPU: if the memory usage problem is solved, then the CPU should be fine. So I took a heap dump; unfortunately it was just too big to open (the file was 10 GB). Anyway, I ran the server locally, loaded it a little bit, and took a heap dump. After opening it, I found something interesting:

There are a TON of instances of

AbstractReferenceMap$WeakRef  ==> Takes 21.6% of the memory, 9 million instances
AbstractReferenceMap$ReferenceEntry  ==> Takes 9.6% of the memory, 3 million instances

In addition, I have found a Map which seems to be used as a "cache" (horrible but true). The problem is that this map is NOT synchronized and it is shared among threads (being static). The problem could be not only concurrent writes, but also the fact that, without synchronization, there is no guarantee that thread A will see the changes done to the map by thread B. However, I am unable to link this suspicious map to the leak using the Eclipse Memory Analyzer, as it does not use AbstractReferenceMap; it is just a normal HashMap.
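
To make it concrete, this is roughly the shape of the suspicious cache and one obvious way it could be made thread-safe (the class and method names here are made up by me, not taken from the actual code):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class LinkCache {

        // The problematic pattern: a static, unsynchronized HashMap shared by all
        // request threads. Concurrent puts can corrupt the table, and without
        // synchronization one thread may never see entries written by another.
        private static final Map<String, Object> UNSAFE_CACHE = new HashMap<String, Object>();

        // One common fix: ConcurrentHashMap gives thread-safe access and
        // visibility guarantees (it does not, by itself, bound the cache size).
        private static final Map<String, Object> SAFE_CACHE = new ConcurrentHashMap<String, Object>();

        static Object lookup(String key) {
            Object value = SAFE_CACHE.get(key);
            if (value == null) {
                value = loadExpensively(key); // hypothetical loader
                SAFE_CACHE.put(key, value);
            }
            return value;
        }

        private static Object loadExpensively(String key) {
            return new Object(); // stand-in for the real work
        }
    }

Of course, even a properly synchronized map can still leak if its entries are never evicted.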

Unfortunately, we do not use those classes directly (obviously the code uses them, but not directly), so I seem to have hit a dead end.

The problems for me are:

  1. I cannot reproduce the error
  2. I cannot figure out where the hell the memory is leaking (if that is the case)

Any ideas at all?

Vacant answered 27/2, 2014 at 22:3 Comment(7)
I'm not familiar with Magnolia CMS, but this feels like improper use of Apache Commons' ReferenceMap. Possibly, a number of objects are being "cached" using a ReferenceMap to allow faster lookups, but still allow the GC to remove entries if memory gets low. However, if the wrong kind of references (hard refs) are used in the ReferenceMap, there's no guarantee the GC can remove those entries. This would cause long retention times of otherwise transient data. It might be worth reaching out to the Magnolia community; there's a good chance someone else there has seen this exact issue before.Uri
What JRE version are you using in production? If it is still 1.6 and older than java-1.6.0-openjdk-1.6.0.0-1.62, then I have bad news for you: bugzilla.redhat.com/show_bug.cgi?id=1051245Yasmeen
Current java version is $ java -version java version "1.6.0_30" OpenJDK Runtime Environment (IcedTea6 1.13.1) (rhel-4.1.13.1.el5_10-x86_64) OpenJDK 64-Bit Server VM (build 23.25-b01, mixed mode)Vacant
Congratulations, you have basically made your question unreadable to people who hadn't seen the original version of it. Instead of putting big UPDATE notices at the top, try just updating your question in a coherent way.Coronal
@jalf Question changed, thanks for the suggestion.Vacant
Have you tried creating a heap dump and analyzing it with something like eclipse.org/mat to see what's been keeping your memory alive?Dahliadahlstrom
MAT requires almost as much memory as the process it is analyzing. Once you get up around 20G, it is pretty hard to run...Proletarian

The 'no-op' finalize() methods should definitely be removed as they are likely to make any GC performance problems worse. But I suspect that you have other memory leak issues as well.

Advice:

  • First get rid of the useless finalize() methods.

  • If you have other finalize() methods, consider getting rid of them. (Depending on finalization to do things is generally a bad idea ...)

  • Use a memory profiler to try to identify the objects that are being leaked and what is causing the leakage. There are lots of SO questions and other resources on finding leaks in Java code.


Now to your particular symptoms.

First of all, the place where the OutOfMemoryErrors were thrown is probably irrelevant.

However, the fact that you have huge numbers of AbstractReferenceMap$WeakRef and AbstractReferenceMap$ReferenceEntry objects is a strong indication that something in your application or the libraries it is using is doing a huge amount of caching ... and that that caching is implicated in the problem. (The AbstractReferenceMap class is part of the Apache Commons Collections library. It is the superclass of ReferenceMap and ReferenceIdentityMap.)

You need to track down the map object (or objects) that those WeakRef and ReferenceEntry objects belong to, and the (target) objects that they refer to. Then you need to figure out what is creating it / them and figure out why the entries are not being cleared in response to the high memory demand.
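
As a rough illustration of the mechanism (using the JDK's WeakHashMap as a stand-in, since I don't know exactly how the ReferenceMap in question is configured), a weakly referenced entry can only be reclaimed when nothing else holds a strong reference to it:

    import java.util.Map;
    import java.util.WeakHashMap;

    public class WeakEntryDemo {
        public static void main(String[] args) throws InterruptedException {
            Map<Object, String> cache = new WeakHashMap<Object, String>();

            Object pinned = new Object();            // a strong reference we keep
            cache.put(pinned, "kept");
            cache.put(new Object(), "collectable");  // nothing else refers to this key

            System.gc();                             // request a collection (best effort)
            Thread.sleep(100);                       // give the GC a moment to act

            // The "collectable" entry is eligible to vanish; the "kept" entry cannot,
            // because the strong reference 'pinned' prevents its weak key from being
            // cleared. A leak looks like the second case, repeated millions of times.
            System.out.println("entries left: " + cache.size());
            System.out.println("pinned still cached: " + cache.containsKey(pinned));
        }
    }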

  • Do you have strong references to the target objects elsewhere (which would stop the WeakRefs from being broken)?

  • Is / are the map(s) being used incorrectly so as to cause a leak? (Read the javadocs carefully ...)

  • Are the maps being used by multiple threads without external synchronization? That could result in corruption, which potentially could manifest as a massive storage leak.


Unfortunately, these are only theories and there could be other things causing this. And indeed, it is conceivable that this is not a memory leak at all.


Finally, there is your observation that the problem is worse when the heap is bigger. To me, this is still consistent with a Reference / cache-related issue.

  • Reference objects are more work for the GC than regular references.

  • When the GC needs to "break" a Reference, that creates more work; e.g. processing the Reference queues.

  • Even when that happens, the resulting unreachable objects still can't be collected until the next GC cycle at the earliest.

So I can see how a 6 GB heap full of References would significantly increase the percentage of time spent in the GC ... compared to a 4 GB heap, and that could cause the "GC Overhead Limit" mechanism to kick in earlier.

But I reckon that this is an incidental symptom rather than the root cause.

Impressionable answered 4/3, 2014 at 10:19 Comment(8)
I have been using the Eclipse Memory Analyzer in order to shed some light on this. I have also read a couple of tutorials about it; however, I am still unable to link my suspects (see the update) to the Map I think is causing the problem. I think that is due to my lack of knowledge of the Eclipse Memory Analyzer tool. Any tutorials? I have checked eclipsesource.com/blogs/2013/01/21/….Vacant
@JuanAntonioGomezMoriano - That blog post looks excellent. What about it do you not understand?Impressionable
I will re-read it again at the end of the day and will be more specific. Thanks!Vacant
@JuanAntonioGomezMoriano the Map might not be causing the problem. If it really has weak references to objects, the key questions are what are those objects and where are the strong references to them?Paternity
@Paternity - agreed. So the first thing he needs to do is to see what the references are referring to.Impressionable
I agree that it seems unlikely that the WeakRefs are of interest. The things to which they point will be deleted, as soon as the GC runs. Before we dismiss it, though, is it at all possible that the WeakRefs, themselves, are never cleared or reused? Perhaps, once a WeakRef object becomes a value in the map, it stays in the map forever? That could happen, e.g., if the map keys were mutable...Proletarian
@G.BlakeMeike - There are a number of things that could cause the weak refs to not be broken. And it is also possible that the problem is that there are so many weak refs that processing them is blowing the GC Overhead limit. Remember the problem here is not necessarily that we are out of memory.Impressionable
@G.BlakeMeike I only peeked at Apache's map code so I can't even rule out the possibilities that the WeakRef instances hold strong references or already-cleared weak references to objects. It'd help to see what they point to. And he needs a reproducible test case to be able to conduct reliable experiments and to have confidence in a fix.Paternity

With a difficult debugging problem, you need to find a way to reproduce it. Only then will you be able to test experimental changes and determine if they make the problem better or worse. In this case, I'd try writing loops that rapidly create & delete server connections, that create a server connection and rapidly send it memory-expensive requests, etc.
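
For instance, a crude request-hammering loop along these lines might do it (the URL is just a placeholder; point it at whatever page exercises the suspect code path, ideally on a test instance):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HammerServer {
        public static void main(String[] args) throws Exception {
            // Hypothetical placeholder URL: point it at a page on a local/staging
            // instance that exercises the suspect code path.
            URL url = new URL("http://localhost:8080/some-page.html");

            for (int i = 0; i < 100000; i++) {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                InputStream in = conn.getInputStream();
                try {
                    byte[] buf = new byte[8192];
                    while (in.read(buf) != -1) {
                        // drain the response so each request completes fully
                    }
                } finally {
                    in.close();
                    conn.disconnect();
                }
                if (i % 1000 == 0) {
                    System.out.println("requests sent: " + i);
                }
            }
        }
    }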

After you can reproduce it, try reducing the heap size to see if you can reproduce it faster. But do that second since a small heap might not hit the "GC overhead limit" which means the GC is spending excessive time (98% by some measure) trying to recover memory.

For a memory leak, you need to figure out where in the code it's accumulating references to objects. E.g. does it build a Map of all incoming network requests? A web search https://www.google.com/search?q=how+to+debug+java+memory+leaks shows many helpful articles on how to debug Java memory leaks, including tips on using tools like the Eclipse Memory Analyzer that you're using. A search for the specific error message https://www.google.com/search?q=GC+overhead+limit+exceeded is also helpful.

The no-op finalize() methods shouldn't cause this problem but they may well exacerbate it. The doc on finalize() reveals that having a finalize() method forces the GC to twice determine that the instance is unreferenced (before and after calling finalize()).
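
For clarity, this is the kind of "no-op" finalizer being discussed (the class name is hypothetical):

    public class SomeDomainObject { // hypothetical class name

        // A "no-op" finalizer like this buys nothing, yet it forces the JVM to
        // register the object for finalization, run finalize() on the finalizer
        // thread, and reclaim the memory only on a later GC cycle.
        @Override
        protected void finalize() throws Throwable {
            super.finalize(); // does nothing beyond the default behaviour
        }
    }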

So once you can reproduce the problem, try deleting those no-op finalize() methods and see if the problem takes longer to reproduce.

It's significant that there are many AbstractReferenceMap$WeakRef instances in memory. The point of a weak reference is to refer to an object without forcing it to stay in memory. AbstractReferenceMap is a Map that lets one make the keys and/or values be weak references or soft references. (The point of a soft reference is to try to keep an object in memory but let the GC free it when memory gets low.) Anyway, all those WeakRef instances in memory are probably exacerbating the problem but shouldn't keep the referenced Map keys/values in memory. What are they referring to? What else is referring to those objects?
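
Here is a tiny sketch of why that question matters: a weak reference is never cleared while some other strong reference keeps its target alive (all names hypothetical):

    import java.lang.ref.WeakReference;
    import java.util.ArrayList;
    import java.util.List;

    public class WeakRefHeldAlive {
        // Hypothetical: some other structure that also holds the cached objects.
        static final List<Object> somewhereElse = new ArrayList<Object>();

        public static void main(String[] args) {
            Object target = new Object();
            WeakReference<Object> weak = new WeakReference<Object>(target);

            somewhereElse.add(target); // a strong reference held elsewhere
            target = null;

            System.gc();
            // Not cleared: the weak reference cannot be broken while 'somewhereElse'
            // strongly references the target. In a heap dump, following the target's
            // paths to GC roots answers "what else is referring to it?".
            System.out.println("cleared? " + (weak.get() == null)); // prints false
        }
    }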

Paternity answered 3/3, 2014 at 20:40 Comment(1)
+1 to quote: "With a difficult debugging problem, you need to find a way to reproduce it. Only then will you be able to test experimental changes and determine if they make the problem better or worse."Antonomasia

Try a tool that locates the leaks in your source code, such as Plumbr.

Peat answered 9/3, 2014 at 8:40 Comment(0)

There are a number of possibilities, perhaps some of which you've explored.

It's definitely a memory leak of some sort.

If your server has user sessions, and your user sessions aren't expiring or being disposed of properly when the user is inactive for more than X minutes/hours, you will get a buildup of used memory.

If you have one or more maps of something that your program generates, and you don't clear the map of old/unneeded entries, you could again get a buildup of used memory. For example, I once considered adding a map to keep track of process threads so that a user could get info from each thread, until my boss pointed out that at no point were finished threads getting removed from the map, so if the user stayed logged in and active, they would hold onto those threads forever.
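
A minimal sketch of that pattern (names are hypothetical) would look like this; the leak is simply that nothing ever calls the removal side:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ThreadRegistry {
        // Hypothetical registry: entries get added when work starts...
        private static final Map<Long, Thread> RUNNING = new ConcurrentHashMap<Long, Thread>();

        static void register(Thread worker) {
            RUNNING.put(worker.getId(), worker);
        }

        // ...and the leak is simply that nothing ever calls this, so finished
        // threads (and everything they still reference) stay reachable from the
        // static map forever.
        static void unregister(Thread worker) {
            RUNNING.remove(worker.getId());
        }
    }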

You should try doing a load test on a non-production server where you simulate normal usage of your app by large numbers of users. Maybe even limit the server's memory even lower than usual.

Good luck, memory issues are a pain to track down.

Astragalus answered 27/2, 2014 at 22:22 Comment(2)
Thanks for the hint. Currently we do not keep track of user sessions; in fact you cannot even log in on the page. However, I found a Map that works like a cache (grrrr), and I also found that finalize() is being overridden. Can you check my update on the question please?Vacant
If finalize is overridden to only call super.finalize(), it's not doing anything different than if it just weren't overridden.Astragalus

You say that you have already tried jvisualvm, to inspect the machine. Maybe, try it again, like this:

  • This time look at the "Sampler -> Memory" tab.

  • It should tell you which (types of) objects occupy the most memory.

  • Then find out where such objects are usually created and removed.

Antonomasia answered 6/3, 2014 at 23:1 Comment(0)
  • A lot of times 'weird' errors can be caused by java agents plugged into the JVM. If you have any agents running (e.g. jrebel/liverebel, newrelic, jprofiler), try running without them first.
  • Weird things can also happen when running JVM with non-standard parameters (-XX); certain combinations are known to cause problems; which parameters are you using currently?
  • Memory leak can also be in Magnolia itself, have you tried googling "magnolia leak"? Are you using any 3rd-party magnolia modules? If possible, try disabling/removing them.

The problem might be connected to just one part of your site. You can try reproducing the problem by "replaying" your access logs on your staging/development server.

If nothing else works, if it were me, I would do the following:

  • try to replicate the problem on an "empty" Magnolia instance (without any of my code)
  • try to replicate the problem on an "empty" Magnolia instance (without 3rd-party modules)
  • try to upgrade all software (Magnolia, 3rd-party modules, JVM)
  • finally, try to run the production site with YourKit and try to find the leak

Oram answered 9/3, 2014 at 13:11 Comment(0)

My guess is that you have an automated import running which invokes some instance of ImportHandler. That handler is configured to make a backup of all the nodes it's going to update (I think this is the default option), and since you probably have a lot of data in your data type, and since all of this is done in a session, you run out of memory. Try to find out which import job it is and disable backup for it.

HTH, Jan

Holna answered 3/3, 2014 at 19:22 Comment(0)

It appears that your memory leaks are emanating from your arrays. The garbage collector has trouble identifying object instances that were removed from arrays, and therefore they would not be collected to release memory. My advice is: when you remove an object from an array, assign the former object's position to null, so the garbage collector can see that the slot is empty and reclaim the object. I doubt this will be your exact problem, but it is always good to know these things and to check whether this is your problem.
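
A minimal illustration of the nulling-out advice, assuming you manage such an array yourself (this is essentially what ArrayList does internally when an element is removed):

    public class FixedBuffer {
        private final Object[] slots = new Object[1024];
        private int size;

        void add(Object o) {
            slots[size++] = o;
        }

        Object removeLast() {
            Object removed = slots[--size];
            // Without this line the array would keep a strong reference to the
            // "removed" object, so the GC could not reclaim it even though the
            // rest of the program has forgotten about it.
            slots[size] = null;
            return removed;
        }
    }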

It is also good to assign an object instance to null when you need to remove it / clean it up. This is because the finalize() method is sketchy and evil, and sometimes will not be called by the garbage collector. The best workaround for this is to call it (or another similar method) yourself. That way, you are assured that garbage cleanup was performed successfully. As Joshua Bloch said in his book Effective Java (2nd Edition), Item 7, page 27: avoid finalizers. "Finalizers are unpredictable, often dangerous and generally unnecessary". You can see the section here.
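
As a sketch of what explicit cleanup (rather than relying on finalize()) can look like, with a hypothetical resource class:

    import java.io.Closeable;
    import java.io.IOException;

    public class ManagedResource implements Closeable {
        private byte[] buffer = new byte[1024 * 1024]; // stand-in for something heavy

        // Deterministic cleanup that the caller invokes explicitly, instead of
        // hoping that an unpredictable finalize() will ever run.
        public void close() throws IOException {
            buffer = null;
        }

        public static void main(String[] args) throws IOException {
            ManagedResource r = new ManagedResource();
            try {
                // ... use the resource ...
            } finally {
                r.close(); // always released, regardless of what the GC does
            }
        }
    }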

Because there is no code displayed, I cannot see if any of these methods can be useful, but it is still worth knowing these things. Hope these tips help you!

Classics answered 7/3, 2014 at 1:8 Comment(0)

As recommended above, I'd get in touch with the devs of Magnolia, but meanwhile:

You are getting this error because the GC doesn't collect much on a run:

The concurrent collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown.

Since you can't change the implementation, I would recommend changing the GC configuration so that it runs less frequently and is therefore less likely to fail in this way.

Here is an example config just to get you started on the parameters; you will have to figure out your own sweet spot. The GC logs will probably be of help for that.

My VM params are as follows: -Xms6g -Xmx6g -XX:MaxPermSize=1G -XX:NewSize=2G -XX:MaxTenuringThreshold=8 -XX:SurvivorRatio=7 -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:CMSInitiatingOccupancyFraction=60 -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -Xloggc:logs/gc.log

Dither answered 9/3, 2014 at 22:50 Comment(1)
Or, if you have to run unchangeable code, do exactly the opposite: Detect the (out-of-memory) error and then restart your server automatically (if possible).Antonomasia
