Locating code that is filling PermGen with dead Groovy code

We have had our GlassFish instance go down every two weeks for a while with a java.lang.OutOfMemoryError: PermGen space. I increased the PermGen space to 512MB and started dumping memory usage with jstat -gc. After two weeks I came up with the following graph, which shows how the PermGen space is steadily increasing (the x-axis is in minutes, the y-axis in KB).

[Graph of steadily increasing PermGen usage]
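
For reference, the same numbers can also be sampled from inside the running JVM through the standard MemoryPoolMXBean API instead of jstat. A minimal sketch only, assuming the HotSpot pool name contains "Perm" (it is usually "Perm Gen" or "PS Perm Gen"):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PermGenMonitor {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                // On HotSpot the permanent generation pool is usually named
                // "Perm Gen" or "PS Perm Gen".
                if (pool.getName().contains("Perm")) {
                    System.out.printf("%s: %d KB used of %d KB max%n",
                            pool.getName(),
                            pool.getUsage().getUsed() / 1024,
                            pool.getUsage().getMax() / 1024);
                }
            }
            Thread.sleep(60000); // sample once per minute, like the jstat dump above
        }
    }
}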

I tried googling around for some kind of profiling tool that could pinpoint the error, and a thread here on SO mentioned jmap, which proved to be quite helpful. Out of the approximately 14000 lines dumped by jmap -permstats $PID, approximately 12500 contained groovy/lang/GroovyClassLoader$InnerLoader, pointing to some kind of memory leak in either our own Groovy code or Groovy itself. I have to point out that Groovy constitutes less than 1% of the relevant codebase.

Example output below:

class_loader    classes bytes   parent_loader   alive?  type

<bootstrap> 3811    14830264      null      live    <internal>
0x00007f3aa7e19d20  20  164168  0x00007f3a9607f010  dead    groovy/lang/GroovyClassLoader$InnerLoader@0x00007f3a7afb4120
0x00007f3aa7c850d0  20  164168  0x00007f3a9607f010  dead    groovy/lang/GroovyClassLoader$InnerLoader@0x00007f3a7afb4120
0x00007f3aa5d15128  21  181072  0x00007f3a9607f010  dead    groovy/lang/GroovyClassLoader$InnerLoader@0x00007f3a7afb4120
0x00007f3aad0b40e8  36  189816  0x00007f3a9d31fbf8  dead    org/apache/jasper/servlet/JasperLoader@0x00007f3a7d0caf00
....

So how can I proceed to find out more about what code is causing this?

From this article I infer that our Groovy code is dynamically creating classes somewhere. And from the jmap dump I can see that most of the dead objects/classes(?) have the same parent_loader, although I am unsure what that means in this context. I do not know how to proceed from here.
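
If it helps to see the mechanism, here is a minimal, made-up sketch of the kind of pattern that produces those InnerLoader entries; the script text and the loop are illustrative only, not our actual code:

import groovy.lang.GroovyClassLoader;

public class InnerLoaderChurn {
    public static void main(String[] args) {
        GroovyClassLoader gcl = new GroovyClassLoader();
        for (int i = 0; i < 10000; i++) {
            // Each parseClass() call compiles the source with a fresh
            // GroovyClassLoader$InnerLoader, so every iteration adds another
            // class (plus its loader and MetaClass data) to PermGen.
            Class<?> constraintClass = gcl.parseClass(
                    "def check(x) { x != null }", "Constraint" + i + ".groovy");
            // If anything keeps the class, its MetaClass or the loader
            // reachable, none of it can be unloaded and PermGen fills up.
        }
    }
}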

Addendum

For latecomers, it's worth pointing out that the accepted answer does not fix the issue. It merely extends the period between reboots roughly tenfold by not retaining so much class information. What actually fixed our problem was getting rid of the code that generated those classes. We used the validation (Design by Contract) framework OVal, which lets you script custom constraints in Groovy as annotations on methods and classes. Replacing the annotations with explicit pre- and post-conditions in plain Java was tedious, but it got the job done. I suspect that each time an OVal constraint was checked, a new anonymous class was created, and the associated class data somehow leaked.
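
For anyone making the same change, here is a rough sketch of the plain-Java style we ended up with; the class, field, and method names below are made up for illustration:

public class Account {

    private long balanceInCents;

    // Previously a Groovy-scripted OVal constraint on the method;
    // now an explicit precondition written out in plain Java.
    public void withdraw(long amountInCents) {
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        if (amountInCents > balanceInCents) {
            throw new IllegalStateException("insufficient balance");
        }
        balanceInCents -= amountInCents;

        // Explicit post-condition instead of a scripted constraint.
        assert balanceInCents >= 0 : "balance must never go negative";
    }
}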

Nyssa answered 28/4, 2011 at 8:44 Comment(3)
Does this solution provide any benefit: #88735 – Bithia
I haven't solved the problem yet - other things are more pressing, since this can be "solved" by restarting every two weeks - but I have found some good tips on how to pinpoint what is causing the classloader leaks using the Eclipse Memory Analyzer: sites.google.com/site/eclipsebiz/The-Unknown-Generation-Perm – Nyssa
I'll check back when I have had time to dump the heap space and analyze it. Seems that there are very few solutions lying around on the interweb. – Nyssa

We had a similar problem (one week between crashes). The trouble seems to be that Groovy caches meta methods. We ended up using this code, based on this discussion and bug report:

GroovyClassLoader loader = new GroovyClassLoader();
Reader reader = new BufferedReader(clob.getCharacterStream());
GroovyCodeSource source = new GroovyCodeSource(reader, name, "xb3.Classifier");
Class<?> groovyClass = loader.parseClass(source);
Object possibleClass = groovyClass.newInstance();
if (expectedType.isAssignableFrom(possibleClass.getClass())) {
    classifiers.put((T) possibleClass, name);
}
reader.close();
// Tell Groovy we don't need any meta
// information about these classes
GroovySystem.getMetaClassRegistry().removeMetaClass(possibleClass.getClass());
// Tell the loader to clear out its cache,
// this ensures the classes will be GC'd
loader.clearCache();
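
If it helps anyone, the same idea can be wrapped so that the cleanup also runs when parsing or instantiation fails. This is a sketch only; the class and method names are placeholders, and the "xb3.Classifier" code base string is carried over from the snippet above:

import java.io.Reader;

import groovy.lang.GroovyClassLoader;
import groovy.lang.GroovyCodeSource;
import groovy.lang.GroovySystem;

public final class GroovyClassFactory {

    // Parse one Groovy class, instantiate it, and release everything Groovy
    // cached for it, even if parsing or instantiation fails.
    public static Object loadAndRelease(Reader reader, String name) throws Exception {
        GroovyClassLoader loader = new GroovyClassLoader();
        try {
            GroovyCodeSource source = new GroovyCodeSource(reader, name, "xb3.Classifier");
            Class<?> groovyClass = loader.parseClass(source);
            Object instance = groovyClass.newInstance();
            // Tell Groovy we don't need any meta information about this class.
            GroovySystem.getMetaClassRegistry().removeMetaClass(groovyClass);
            return instance;
        } finally {
            reader.close();
            // Clear the loader's cache so the InnerLoader and its classes can be GC'd.
            loader.clearCache();
        }
    }
}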
Photopia answered 7/5, 2012 at 14:40 Comment(3)
Thanks! Will try this out. Might be a while before I check back with an upvote or answer, though. But not forgotten :) – Nyssa
Finally had time to check this out. Got the PermGen increase to go from approx 100KB/min to 10KB/min (approximate measure using VisualVM). That at least gives us several weeks more between each restart. Just needed to add this after script parsing: GroovySystem.getMetaClassRegistry().removeMetaClass(script.getClass()); – Nyssa
For onlookers, the bug report mentioned above places the fix version at 1.5.7, released in 2008. – Dactylogram
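
For completeness, a minimal sketch of the cleanup described in the comment above, assuming the script is parsed with a GroovyShell; the shell, script text and class name are placeholders:

import groovy.lang.GroovyShell;
import groovy.lang.GroovySystem;
import groovy.lang.Script;

public class ScriptCleanup {
    public static void main(String[] args) {
        GroovyShell shell = new GroovyShell();
        Script script = shell.parse("return 1 + 1");
        try {
            System.out.println(script.run());
        } finally {
            // Drop the cached MetaClass so the script's class (and the
            // InnerLoader behind it) can be garbage collected.
            GroovySystem.getMetaClassRegistry().removeMetaClass(script.getClass());
        }
    }
}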

If you are using the Sun JVM, change it for the IBM JVM; it will work fine, I hope :)

Tisbe answered 18/5, 2011 at 14:16 Comment(4)
Thanks for the tip, Eric, but I cannot see how that would help. I know that the IBM JVM has no preset static limit on the perm space like the Sun JVM, so it would gradually increase it as needed. – Nyssa
BUT that still would not fix the memory leak I am experiencing - AFAIK. – Nyssa
The IBM JVM is optimized differently; its source code differs in more than just the limits. Your memory leak is caused by external resources occupying memory. – Tisbe
We already know that, but how would I go about finding out what part of the code is not releasing the Groovy classloader? Something is keeping a reference to it, and I need to analyze the running system to find it (through a memory dump or similar). I already found the classes polluting the permanent space by using jmap, but I need to dig deeper than that. I am unfortunately not able to disclose our 20MB of source code through Stack Overflow :-) – Nyssa
