Using Concurrent Mark Sweep garbage collector with more than 120GB RAM
Has anyone managed to use the Concurrent Mark Sweep garbage collector (UseConcMarkSweepGC) in Hotspot with more than 120GB RAM?

The JVM starts just fine if I set -ms and -mx to 120G, but if I set them to 130G, the JVM crashes on startup. The JVM starts up fine with the parallel and G1 collectors (but they have their own issues).
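For reference, a minimal reproduction along the lines described (the class name `MyApp` is a placeholder; `-ms`/`-mx` are long-standing aliases for `-Xms`/`-Xmx`):

```shell
# Starts fine: CMS with a 120 GB heap
java -XX:+UseConcMarkSweepGC -Xms120g -Xmx120g MyApp

# Crashes on startup: CMS with a 130 GB heap
java -XX:+UseConcMarkSweepGC -Xms130g -Xmx130g MyApp

# The same size starts fine with the parallel or G1 collectors
java -XX:+UseParallelGC -Xms130g -Xmx130g MyApp
java -XX:+UseG1GC      -Xms130g -Xmx130g MyApp
```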

Has anyone managed to use the Concurrent Mark Sweep collector with more than a 120GB heap? If so, did you have to do anything special, or am I just being unlucky here?

The stack from the JVM error dump is as follows:

Stack: [0x00007fbd0290d000,0x00007fbd02a0e000],  sp=0x00007fbd02a0c758,  free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C  [libc.so.6+0x822c0]  __tls_get_addr@@GLIBC_2.3+0x822c0
V  [libjvm.so+0x389c01]  CompactibleFreeListSpace::CompactibleFreeListSpace(BlockOffsetSharedArray*, MemRegion, bool, FreeBlockDictionary::DictionaryChoice)+0xc1
V  [libjvm.so+0x3d1ae0]  ConcurrentMarkSweepGeneration::ConcurrentMarkSweepGeneration(ReservedSpace, unsigned long, int, CardTableRS*, bool, FreeBlockDictionary::DictionaryChoice)+0x100
V  [libjvm.so+0x49d922]  GenerationSpec::init(ReservedSpace, int, GenRemSet*)+0xf2
V  [libjvm.so+0x48d0b9]  GenCollectedHeap::initialize()+0x2e9
V  [libjvm.so+0x824098]  Universe::initialize_heap()+0xb8
V  [libjvm.so+0x82657d]  universe_init()+0x7d
V  [libjvm.so+0x4cf0dd]  init_globals()+0x5d
V  [libjvm.so+0x80f462]  Threads::create_vm(JavaVMInitArgs*, bool*)+0x1e2
V  [libjvm.so+0x51fac4]  JNI_CreateJavaVM+0x74
C  [libjli.so+0x31b7]  JavaMain+0x97

I've raised a bug for this with Oracle (https://bugs.java.com/bugdatabase/view_bug?bug_id=7175901), but I was wondering if anyone else had seen it.

Waylay answered 17/6, 2012 at 20:37
I would consider using more off-heap memory if you can. I regularly use 200-800 GB of off-heap memory while the maximum heap size is 1 GB and the GC is largely idle. – Aspirator
Thanks -- I am planning to use off-heap memory. However, I'd still like to try out large heap sizes, just to see how well they perform. – Waylay
A guide for Full GC times is about 1 second per GB of tenured space. The Azul JVM is fully concurrent (for minor and full collections); the HotSpot GC is best-effort concurrent. ;) – Aspirator
I am wondering if your issue is not down to the JVM but is a bug in glibc? (That's the top of the frame.) After all, at 120 GB of allocation all sorts of things are going to start creaking. – Tylertylosis
Thinking some more about it, I wonder if the JVM is trying to memmove something outside of the very large mmap segment that actually contains the JVM heap (for HotSpot, the JVM creates a large private mmap region where the Java heap resides). Moving memory outside of that (or, more technically, outside of the process's virtual address space) would cause the kernel to get pissy. – Tylertylosis
Does this happen with -server on? – Jackscrew
Apparently "The default VM is server, because you are running on a server-class machine", so this flag should have no effect. – Waylay

This seems to have been accepted as a bug by Oracle: https://bugs.java.com/bugdatabase/view_bug?bug_id=7175901

Waylay answered 28/7, 2012 at 22:3

Had the same issue. We reduced -ms to below 140 GB and it seems to work; we left -mx at 400 GB and wrote a test program to exercise the heap.
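A sketch of that workaround, under the sizes described above (the exact initial-heap value and the class name `HeapGrowthTest` are illustrative):

```shell
# Initial heap kept below the ~140 GB threshold, maximum heap much larger;
# CMS then grows the heap on demand rather than committing it all at startup.
java -XX:+UseConcMarkSweepGC -Xms100g -Xmx400g HeapGrowthTest
```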

Tarshatarshish answered 8/8, 2012 at 15:40
I've had no luck with that, I'm afraid -- the JVM still crashes when it tries to expand the heap past around 120 GB. – Waylay
