Why do I get OutOfMemory when 20% of the heap is still free?
I've set the max heap to 8 GB. When my program starts using about 6.4 GB (as reported in VisualVM), the garbage collector starts taking up most of the CPU and the program crashes with OutOfMemory when making a ~100 MB allocation. I am using Oracle Java 1.7.0_21 on Windows.

My question is whether there are GC options that would help with this. I'm not passing anything except -Xmx8g.

My guess is the heap is getting fragmented, but shouldn't the GC compact it?

Islas answered 11/6, 2013 at 18:26 Comment(2)
Does your host virtual machine have that much RAM? Example: your host VM(PC) has 4 GB of memory, but you give a guest VM 10 GB of memory. I don't know if paging would occur, but I'm just curious about your host system's specs.Rf
My machine has 10GB of RAM plus 10GB virtual memory.Islas

Collecting bits and pieces of information (which is surprisingly difficult, since the official documentation is quite bad), I've determined...

There are generally two reasons this may happen, both related to fragmentation of free space (ie, free space existing in small pieces such that a large object cannot be allocated). First, the garbage collector might not do compaction, which is to say it does not defragment the memory. Even a collector that does compaction may not do it perfectly well. Second, the garbage collector typically splits the memory area into regions that it reserves for different kinds of objects, and it may not think to take free memory from the region that has it to give to the region that needs it.

The CMS garbage collector does not do compaction, while the others (Serial, Parallel, ParallelOld, and G1) do. The default collector in Java 8 is ParallelOld.

All garbage collectors split memory into regions, and, AFAIK, all of them are too lazy to try very hard to prevent an OOM error. The command line option -XX:+PrintGCDetails is very helpful for some of the collectors in showing the sizes of the regions and how much free space they have.

It is possible to experiment with different garbage collectors and tuning options. In my case, the G1 collector (enabled with the JVM flag -XX:+UseG1GC) solved the issue I was having, but this was basically down to chance (in other situations, it OOMs more quickly). Some of the collectors (Serial, CMS, and G1) have extensive tuning options for selecting the sizes of the various regions, which let you waste time futilely trying to solve the problem.
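For example, swapping collectors and turning on GC logging is just a matter of launch flags; the lines below are roughly what I used (the jar name is only a placeholder):

# try G1 with verbose GC logging
java -Xmx8g -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar myapp.jar

# or try the compacting parallel old-generation collector instead
java -Xmx8g -XX:+UseParallelOldGC -XX:+PrintGCDetails -jar myapp.jar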

Ultimately, the real solutions are rather unpleasant. The first is to install more RAM. The second is to use smaller arrays. The third is to use ByteBuffer.allocateDirect. Direct byte buffers (and their int/float/double views) are array-like objects with array-like performance that are allocated on the OS's native heap. The OS heap uses the CPU's virtual memory hardware, is free from fragmentation issues, and can even make effective use of the disk's swap space (allowing you to allocate more memory than the available RAM). A big drawback, however, is that the JVM doesn't really know when direct buffers should be deallocated, which makes this option better suited to long-lived objects. The final, possibly best, and certainly most unpleasant option is to allocate and deallocate memory natively through JNI calls and use it from Java by wrapping it in a ByteBuffer.
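To illustrate the third option, here is a minimal sketch of keeping a large double array off-heap with ByteBuffer.allocateDirect (the class name and sizes are made up for the example):

import java.nio.ByteBuffer;
import java.nio.DoubleBuffer;

public class DirectBufferSketch {
    public static void main(String[] args) {
        int n = 10_000_000;  // illustrative element count (roughly 80 MB of doubles)
        // 8 bytes per double; the backing memory lives outside the Java heap,
        // so it neither fragments the heap nor competes with it.
        DoubleBuffer data = ByteBuffer.allocateDirect(n * 8).asDoubleBuffer();
        data.put(0, 3.14);               // array-like indexed access
        System.out.println(data.get(0));
    }
}

The native memory behind the buffer is only released once the buffer object itself is garbage collected, which is why this works best for a few large, long-lived buffers rather than many short-lived ones.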

Islas answered 30/1, 2015 at 8:25 Comment(2)
Thought provoking post.Hardnosed
I'll add that the very last option is much easier if you leverage Netty's ByteBuf.Islas

Which garbage collector are you using? CMS doesn't do any compaction. Try using the new G1 garbage collector - this does some compaction.

For a bit of context: the G1, or "Garbage First", collector splits the heap up into regions; after identifying (marking) all the garbage, it evacuates a region by copying all of the live objects into a different region, and this is what achieves compaction.

To use it, include the option -XX:+UseG1GC.
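With your settings, the launch line would look something like this (the application jar name is hypothetical):

java -Xmx8g -XX:+UseG1GC -jar yourapp.jar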

This gives a great explanation of G1 and garbage collection in Java in general.

Neace answered 11/6, 2013 at 18:39 Comment(4)
I'm using the defaults, whatever they are. I'll try this out. Seems like what I need. Only thing is that low delay is not a priority for me. Throughput is more important. Should I still use G1?Islas
Using G1, VisualVM reports the max heap to be equal to physical RAM. Is this a bug in VisualVM, or does G1 not care about -Xmx?Islas
It's probably using CMS by default (which doesn't have compaction). G1 has other nice properties so I would try that. But you could also try the Parallel collector (it's stop the world but in parallel) with the command -XX:+UseParallelOldGC - this does compaction.Neace
I'm not sure about the -Xmx question, I'll have a look.Neace

Whenever this problem has shown up in the past, the actual free memory was much lower than it appeared. You can print the amount of free memory when an OutOfMemoryError occurs.

int largeMemorySize = 100 * 1024 * 1024; // the ~100 MB allocation that fails

try {
    byte[] array = new byte[largeMemorySize];

} catch (OutOfMemoryError e) {
    System.out.printf("Failed to allocate %,d bytes, free memory= %,d%n",
        largeMemorySize, Runtime.getRuntime().freeMemory());
    throw e;
}
Doctrine answered 11/6, 2013 at 18:41 Comment(2)
How does memory "appear" free to you? The amount of potentially available memory is actually not freeMemory() but Runtime.getRuntime().maxMemory() - Runtime.getRuntime().totalMemory() + Runtime.getRuntime().freeMemory(), but it is reduced further by the various factors I bring up in my answer.Islas
@AleksandrDubinsky freeMemory() is not the free memory unless you have reached your maximum memory size, which you are likely to have done on an OOME. Admittedly, this is not guaranteed: e.g. if you ask for more memory than your heap size, you might not trigger a GC at all.Doctrine
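For reference, the calculation described in the comment above, as a tiny sketch you could log alongside freeMemory():

// Heap space the JVM could still obtain: the unused part of the current heap
// plus the amount the heap is still allowed to grow.
Runtime rt = Runtime.getRuntime();
long potentiallyAvailable = rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
System.out.printf("Potentially available: %,d bytes%n", potentiallyAvailable);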

Most likely, you are trying to allocate a large amount of contiguous memory, but all of the free memory is in little bits and pieces all over the place. Also, when the garbage collector starts taking up all of the processing time, it means it is trying to find the maybe 1 or 2 objects in your whole set of objects that can be freed. In this case, all I think you can do is work on breaking your objects down so that they do not need quite as much contiguous memory (at least, not all at one time).
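As a rough sketch of that idea (the class name and chunk size are invented for the example), a single huge byte array can be replaced with a list of smaller fixed-size chunks:

import java.util.ArrayList;
import java.util.List;

public class ChunkedBytes {
    private static final int CHUNK = 16 * 1024 * 1024;  // 16 MB per chunk (illustrative)
    private final List<byte[]> chunks = new ArrayList<>();
    private final long length;

    public ChunkedBytes(long length) {
        this.length = length;
        long remaining = length;
        while (remaining > 0) {
            int size = (int) Math.min(CHUNK, remaining);
            chunks.add(new byte[size]);  // many modest allocations instead of one huge one
            remaining -= size;
        }
    }

    public long length() {
        return length;
    }

    public byte get(long index) {
        return chunks.get((int) (index / CHUNK))[(int) (index % CHUNK)];
    }

    public void set(long index, byte value) {
        chunks.get((int) (index / CHUNK))[(int) (index % CHUNK)] = value;
    }
}

new ChunkedBytes(2_000_000_000L) then only needs free blocks of 16 MB at a time rather than one contiguous 2 GB block.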

Edit: As far as I know, there is no way that you can get Java to pack the memory so that you can use that full 8 GB, as it would involve the garbage collector having to pause all of the other threads, move their objects around, update all of the references to those objects, then refresh stale cache entries, and so on... a very, very expensive operation.

See this about Memory Fragmentation

Chamber answered 11/6, 2013 at 18:31 Comment(1)
-1 Many Java (and other) garbage collectors do compaction (as I've found out). But you're right, it is a very expensive operation, and all threads have to be paused (which is referred to as STW, or Stop The World). But since this is a non-interactive program, the pausing is not a problem.Islas

-Xmx only ensures that the heap will not exceed 8GB in size but makes no guarantees that this much memory will actually be allocated. Does your machine only have 8GB of memory?
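If you do want the JVM to claim the whole heap at startup, you can set the initial heap size equal to the maximum; roughly like this (the jar name is a placeholder):

java -Xms8g -Xmx8g -jar myapp.jar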

Rhombus answered 11/6, 2013 at 18:49 Comment(2)
My machine has 10GB, plus another 10GB in virtual memory. The heap is already allocated to 8GB.Islas
This is simply too close. The Java Heap is not the same as max memory consumed by Java, it will be more. Modern machines are "liberal" with their use of memory, dedicating this much of the machine to the heap is likely too much. Never EVER (as in ever ever) let your JVM swap. You will hate yourself, your family, your co-workers, and their families. A swapping JVM is death. Don't do it. Don't even let it get close. GC and swap do NOT mix.Nonillion
