As always, a lengthy problem description.
We are currently stress-testing our product, and we now face a strange problem: after one to two hours, heap usage begins to grow and the application dies some time later.
Profiling the application shows a very large number of Finalizer objects filling the heap. Well, we thought it might be the usual "finalizer thread too slow" issue, so we reviewed the code to reduce the number of objects that need to be finalized (JNA native handles in this case). A good idea anyway, and it eliminated thousands of new objects...
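For reference, a minimal sketch of the kind of change we made: wrap the native allocation in an AutoCloseable and release it explicitly with try-with-resources instead of relying on a finalizer (as JNA's Memory class does). The usage line at the bottom calls a made-up library method, just for illustration:

```java
import com.sun.jna.Native;
import com.sun.jna.Pointer;

/**
 * Sketch: a raw native allocation wrapped in an AutoCloseable so the buffer
 * is freed deterministically instead of waiting for a finalizer to run.
 */
public class NativeBuffer implements AutoCloseable {
    private long peer;          // raw native address, 0 once freed
    private final Pointer ptr;

    public NativeBuffer(long size) {
        this.peer = Native.malloc(size);   // plain malloc, no finalizer attached
        if (peer == 0) {
            throw new OutOfMemoryError("Native.malloc(" + size + ") failed");
        }
        this.ptr = new Pointer(peer);
    }

    public Pointer pointer() {
        return ptr;
    }

    @Override
    public synchronized void close() {
        if (peer != 0) {
            Native.free(peer);             // explicit release, no GC involvement
            peer = 0;
        }
    }
}

// Usage:
// try (NativeBuffer buf = new NativeBuffer(1024)) {
//     someNativeLibrary.fill(buf.pointer());   // hypothetical native call
// }
```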
The next tests showed the same pattern, only one hour later and not as steep. This time the Finalizers originated from the FileInputStream and FileOutputStream instances that are heavily used in the testbed. All resources are closed, but the Finalizers are no longer cleaned up.
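Note that closing a FileInputStream does not unregister its finalizer; every instance still has to pass through the finalizer queue. If you are on Java 7 or later, one way to avoid this entirely (as far as I know) is to open channel-backed streams via java.nio.file.Files, which do not override finalize(). A rough sketch:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class StreamsWithoutFinalizers {
    public static void copy(Path source, Path target) throws Exception {
        // Channel-backed streams: unlike FileInputStream/FileOutputStream,
        // nothing here gets queued for the FinalizerThread.
        try (InputStream in = Files.newInputStream(source);
             OutputStream out = Files.newOutputStream(target)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        copy(Paths.get("in.dat"), Paths.get("out.dat"));
    }
}
```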
I have no idea why, after one or two hours (no exceptions), the FinalizerThread suddenly seems to stop working. If we force System.runFinalization() by hand in some of our threads, the profiler shows that the finalizers are cleaned up. Resuming the test immediately causes new heap allocation for Finalizers.
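As a stop-gap we are considering a small watchdog along these lines (threshold and interval are arbitrary example values): it logs the pending-finalization count reported by MemoryMXBean and forces finalization when the count keeps climbing.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

/**
 * Sketch of the manual workaround: periodically check how many objects are
 * waiting for finalization and trigger finalization if the backlog grows.
 */
public class FinalizationWatchdog implements Runnable {
    private static final int THRESHOLD = 10_000;      // arbitrary example value
    private static final long INTERVAL_MS = 30_000;   // arbitrary example value

    @Override
    public void run() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (!Thread.currentThread().isInterrupted()) {
            int pending = memory.getObjectPendingFinalizationCount();
            System.out.println("Objects pending finalization: " + pending);
            if (pending > THRESHOLD) {
                // Same effect as the manual call mentioned above.
                System.runFinalization();
            }
            try {
                Thread.sleep(INTERVAL_MS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

// Usage (from application startup code):
// Thread t = new Thread(new FinalizationWatchdog(), "finalization-watchdog");
// t.setDaemon(true);
// t.start();
```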
The FinalizerThread is still there; according to jConsole it is WAITING.
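To see what it is actually waiting on the next time this happens, something like the following can dump that thread's stack from inside the JVM (the thread name "Finalizer" is the HotSpot default; a healthy finalizer thread normally parks in ReferenceQueue.remove()):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

/**
 * Sketch: print the Finalizer thread's state and stack trace, so we can see
 * whether it is parked on its reference queue or stuck inside some finalize().
 */
public class FinalizerThreadDump {
    public static void dump() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            if ("Finalizer".equals(info.getThreadName())) {
                System.out.println("Finalizer state: " + info.getThreadState());
                for (StackTraceElement frame : info.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }

    public static void main(String[] args) {
        dump();
    }
}
```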
EDIT
First, inspecting the heap with HeapAnalyzer revealed nothing new or strange. HeapAnalyzer has some nice features, but I had my difficulties with it at first. I'm using jProfiler, which comes with nice heap inspection tools, and I will stay with it.
Maybe I'm missing some killer features in HeapAnalyzer?
Second, today we set up the tests with a debug connection instead of the profiler, and the system has been stable for nearly 5 hours now. This seems to be a very strange combination of too many Finalizers (already reduced in the first review), the profiler, and the VM's GC strategy. As everything runs fine at the moment, there are no real insights...
Thanks for the input so far - maybe you'll stay tuned and interested (now that you have more reason to believe we are not talking about a simple programming fault).