java.lang.OutOfMemoryError: GC overhead limit exceeded [duplicate]
J

16

323

I am getting this error in a program that creates several (hundreds of thousands) HashMap objects with a few (15-20) text entries each. All of these Strings have to be collected (without breaking them up into smaller batches) before being submitted to a database.

According to Sun, the error happens "if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown.".

Apparently, one could use the command line to pass arguments to the JVM for

  • Increasing the heap size, via "-Xmx1024m" (or more), or
  • Disabling the error check altogether, via "-XX:-UseGCOverheadLimit".

The first approach works fine, the second ends up in another java.lang.OutOfMemoryError, this time about the heap.

So, question: is there any programmatic alternative to this, for the particular use case (i.e., several small HashMap objects)? If I use the HashMap clear() method, for instance, the problem goes away, but so do the data stored in the HashMap! :-)

The issue is also discussed in a related topic in StackOverflow.

Jauregui answered 30/4, 2011 at 3:49 Comment(4)
You may need to change your algorithm and use some more efficient data structure. Can you tell us what algorithm you are trying to implement that requires that many HashMaps?Ambulacrum
I am just reading very large text files (hundreds of thousands of lines each), over which I have no control, i.e. they cannot be broken down. For every line of text, a HashMap is constructed that contains a few (actually around 10) small String values, using the same database field names again and again. Ideally, I would like to be able to read the entire file before sending the data to the database.Jauregui
It sounds like reading the entire file before sending the data to the database is a really poor solution... in fact it doesn't work at all, within the very real constraints on available memory. Why do you want to do that anyway? What do you mean by "using the same database field names again and again"? Field names as keys or values? If the fields are keys then just use an array, where the field is IMPLIED by its position... and if they're values then intern them before you add them to the maps. It'd help to know what the data is. Cheers. Keith.Clergyman
They are keys with a constant value. Intern does seem to help, thanks.Jauregui
K
158

You're essentially running out of memory to run the process smoothly. Options that come to mind:

  1. Specify more memory like you mentioned, try something in between like -Xmx512m first
  2. Work with smaller batches of HashMap objects to process at once if possible
  3. If you have a lot of duplicate strings, use String.intern() on them before putting them into the HashMap (see the sketch after this list)
  4. Use the HashMap(int initialCapacity, float loadFactor) constructor to tune for your case
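
A minimal sketch combining points 3 and 4, assuming roughly 10-15 entries per map and repeated field names; the class name, field names, and the way a line is split into values are made up for illustration:

import java.util.HashMap;
import java.util.Map;

public class RowMapper {
    // Builds one map per input line, interning the repeated field names and
    // values so duplicates share a single String, and pre-sizing the map.
    static Map<String, String> toRow(String[] fieldNames, String[] values) {
        // capacity 32 with load factor 0.75 holds up to 24 entries without resizing
        Map<String, String> row = new HashMap<String, String>(32, 0.75f);
        for (int i = 0; i < fieldNames.length; i++) {
            row.put(fieldNames[i].intern(), values[i].intern());
        }
        return row;
    }
}
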
Kingfish answered 30/4, 2011 at 4:0 Comment(4)
I already set the HashMap's initial capacity close to what it needs, so the program is nearly optimal there.Jauregui
If it works with more memory, is there any reason not to go with that? The heap will actually only grow as large as necessary, up to your maximum, if you use something like -Xms128m -Xmx1024m. Seems the simplest option.Kingfish
Yes, and I guess the fastest. I used intern() for some probably repeated values and the problem went away, too.Jauregui
Using the String intern method has always been a very bad idea, because the String pool has a fixed size and cannot grow at runtime when it's needed. JVM engineer Aleksey Shipilev even has a talk on this topic ("java.lang.String Catechism").Bethsaida
C
61

The following worked for me. Just add this snippet:

dexOptions {
    javaMaxHeapSize "4g"
}

to your build.gradle:

android {
    compileSdkVersion 23
    buildToolsVersion '23.0.1'

    defaultConfig {
        applicationId "yourpackage"
        minSdkVersion 14
        targetSdkVersion 23
        versionCode 1
        versionName "1.0"

        multiDexEnabled true
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }

    packagingOptions {

    }

    dexOptions {
        javaMaxHeapSize "4g"
    }
}
Clique answered 22/11, 2015 at 11:47 Comment(4)
For more details see: google.github.io/android-gradle-dsl/current/…Hockett
I did this and I'm still running out of heap space. Android Studio 2.2.3.Joella
what is your gradle version ?Clique
This isn't an android question, so why was this posted here?Declared
T
43

@takrl: The default setting for this option is:

java -XX:-UseConcMarkSweepGC

which means this option is not active by default. So when you say you used the option "+XX:UseConcMarkSweepGC", I assume you were using this syntax:

java -XX:+UseConcMarkSweepGC

which means you were explicitly activating this option. For the correct syntax and default settings of Java HotSpot VM options, refer to this document.

Torrence answered 3/2, 2012 at 9:16 Comment(1)
In our case, using -XX:+UseConcMarkSweepGC slightly reduced the risk of the "OutOfMemoryError: GC overhead limit exceeded" error in high-load / high-memory-pressure situations, but on the other hand it used more CPU, so requests took 5-10% longer to execute under normal load.Dovap
S
24

For the record, we had the same problem today. We fixed it by using this option:

-XX:-UseConcMarkSweepGC

Apparently, this modified the strategy used for garbage collection, which made the issue disappear.

Suggestion answered 12/10, 2011 at 14:38 Comment(0)
C
11

Ummm... you'll either need to:

  1. Completely rethink your algorithm & data-structures, such that it doesn't need all these little HashMaps.

  2. Create a facade which allows you to page those HashMaps in and out of memory as required. A simple LRU-cache might be just the ticket (see the sketch after this list).

  3. Up the memory available to the JVM. If necessary, even purchasing more RAM might be the quickest, CHEAPEST solution, if you have the management of the machine that hosts this beast. Having said that: I'm generally not a fan of the "throw more hardware at it" solutions, especially if an alternative algorithmic solution can be thought up within a reasonable timeframe. If you keep throwing more hardware at every one of these problems you soon run into the law of diminishing returns.
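
If option 2 fits, a minimal LRU-style facade can be built on LinkedHashMap's access-order mode; the 10,000-entry limit and the idea of flushing evicted maps to the database are assumptions for illustration, not something from the original question:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private static final int MAX_ENTRIES = 10000; // assumed limit, tune to your heap

    public LruCache() {
        // accessOrder = true keeps the least recently accessed entry first
        super(16, 0.75f, true);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after every put(); returning true evicts the eldest entry.
        // This is the point where you could flush it to the database instead.
        return size() > MAX_ENTRIES;
    }
}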

What are you actually trying to do anyway? I suspect there's a better approach to your actual problem.

Clergyman answered 30/4, 2011 at 4:1 Comment(1)
See my comments above. The use case is very simple and I am looking for a way to process an entire large file without interrupting in the middle of the process. Thanks!Jauregui
C
10

Use an alternative HashMap implementation (Trove). The standard Java HashMap has more than 12x memory overhead. You can read the details here.
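
A minimal usage sketch, assuming trove4j 3.x on the classpath (see the dependency in the comment below); the package path is quoted from memory, so double-check it against the version you pull in:

import gnu.trove.map.hash.THashMap;
import java.util.Map;

public class TroveExample {
    public static void main(String[] args) {
        // THashMap uses open addressing, avoiding the per-entry wrapper objects
        // that make java.util.HashMap comparatively heavy for many small maps.
        Map<String, String> row = new THashMap<String, String>();
        row.put("field_name", "value"); // placeholder key/value
        System.out.println(row.get("field_name"));
    }
}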

Crouse answered 17/1, 2013 at 17:1 Comment(1)
<dependency> <groupId>net.sf.trove4j</groupId> <artifactId>trove4j</artifactId> <version>3.0.3</version> </dependency>Cragsman
U
9

Don't store the whole structure in memory while waiting to get to the end.

Write intermediate results to a temporary table in the database instead of hashmaps - functionally, a database table is the equivalent of a hashmap, i.e. both support keyed access to data, but the table is not memory bound, so use an indexed table here rather than the hashmaps.

If done correctly, your algorithm should not even notice the change - correctly here means to use a class to represent the table, even giving it a put(key, value) and a get(key) method just like a hashmap.

When the intermediate table is complete, generate the required sql statement(s) from it instead of from memory.
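
A rough sketch of such a table-backed class, using plain JDBC; the class name, table name, column types, the H2-style connection URL, and the lack of batching or error handling are all simplifications for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DbBackedMap implements AutoCloseable {
    private final Connection conn;

    public DbBackedMap(String jdbcUrl) throws SQLException {
        conn = DriverManager.getConnection(jdbcUrl); // e.g. "jdbc:h2:./staging" (assumed)
        try (Statement st = conn.createStatement()) {
            // The primary key gives indexed, hashmap-like access without holding data on the heap
            st.execute("CREATE TABLE IF NOT EXISTS staging (k VARCHAR(255) PRIMARY KEY, v VARCHAR(4000))");
        }
    }

    public void put(String key, String value) throws SQLException {
        // Plain INSERT for brevity; repeated keys would need the database's upsert syntax
        try (PreparedStatement ps = conn.prepareStatement("INSERT INTO staging (k, v) VALUES (?, ?)")) {
            ps.setString(1, key);
            ps.setString(2, value);
            ps.executeUpdate();
        }
    }

    public String get(String key) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("SELECT v FROM staging WHERE k = ?")) {
            ps.setString(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    @Override
    public void close() throws SQLException {
        conn.close();
    }
}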

Unbound answered 28/1, 2013 at 19:7 Comment(0)
C
8

The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection. In particular, if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.

Choroid answered 25/3, 2015 at 4:9 Comment(1)
Where did you get this information? I'm interested because it appears to be very, very correct. Found it... ---> oracle.com/technetwork/java/javase/…Sinh
S
5

If you're creating hundreds of thousands of hash maps, you're probably using far more than you actually need; unless you're working with large files or graphics, storing simple data shouldn't overflow the Java memory limit.

You should try to rethink your algorithm. I would offer more help on that subject, but I can't until you provide more context about the problem.

Shama answered 30/4, 2011 at 4:4 Comment(1)
See my comments above. The use case is very simple and I am looking for a way to process an entire large file without interrupting in the middle of the process. Thanks!Jauregui
A
5

If you have Java 8 and can use the G1 garbage collector, then run your application with:

 -XX:+UseG1GC -XX:+UseStringDeduplication

This tells G1 to find Strings with identical content and keep only one copy of the underlying character data in memory; the duplicates then share that single copy.

This is useful when you have a lot of repeated strings. Whether it helps depends on the application.
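
As a usage example, the full launch line might look like the one below; -XX:+PrintStringDeduplicationStatistics is, as far as I recall, available from Java 8u20 onwards and shows how much memory the deduplication actually saves (the jar name is a placeholder):

java -Xmx1g -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+PrintStringDeduplicationStatistics -jar yourapp.jar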

More info on:
https://blog.codecentric.de/en/2014/08/string-deduplication-new-feature-java-8-update-20-2/ http://java-performance.info/java-string-deduplication/

Avis answered 12/8, 2016 at 14:4 Comment(2)
Thanks George. Helped me for compiling Apache Camel: export MAVEN_OPTS="-Xms3000m -Xmx3000m -XX:+UseG1GC -XX:+UseStringDeduplication"Glochidium
You're welcome; keep an eye on the CPU usage because the G1 GC is a little more demanding on it.Avis
B
3

Fix memory leaks in your application with the help of profiling tools like Eclipse MAT or VisualVM.

With JDK 1.7.x or later versions, use G1GC, which targets spending up to about 10% of the time on garbage collection, unlike the roughly 2% of other GC algorithms.

Apart from setting heap memory with -Xms1g -Xmx2g, try

-XX:+UseG1GC
-XX:G1HeapRegionSize=n
-XX:MaxGCPauseMillis=m
-XX:ParallelGCThreads=n
-XX:ConcGCThreads=n

Have a look at the Oracle article for fine-tuning these parameters.
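
For example, a launch line with illustrative values (these numbers are assumptions, not recommendations; tune them for your own heap, pause-time goal, and CPU count) could be:

java -Xms1g -Xmx2g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=4 -XX:ConcGCThreads=2 -jar yourapp.jar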

Some question related to G1GC in SE:

Java 7 (JDK 7) garbage collection and documentation on G1

Java G1 garbage collection in production

Aggressive garbage collector strategy

Budge answered 15/12, 2015 at 18:22 Comment(0)
C
3

For this, use the code below in your app's gradle file, under the android closure.

dexOptions {
    javaMaxHeapSize "4g"
}

Crenel answered 24/2, 2017 at 9:41 Comment(0)
P
2

In case of the error:

"Internal compiler error: java.lang.OutOfMemoryError: GC overhead limit exceeded at java.lang.AbstractStringBuilder"

increase the Java heap space to 2 GB, i.e., -Xmx2g.

Pentathlon answered 23/1, 2013 at 7:56 Comment(0)
J
2

You need to increase the memory size in JDeveloper; go to setDomainEnv.cmd.

set WLS_HOME=%WL_HOME%\server
set XMS_SUN_64BIT=256
set XMS_SUN_32BIT=256
set XMX_SUN_64BIT=3072
set XMX_SUN_32BIT=3072
set XMS_JROCKIT_64BIT=256
set XMS_JROCKIT_32BIT=256
set XMX_JROCKIT_64BIT=1024
set XMX_JROCKIT_32BIT=1024

if "%JAVA_VENDOR%"=="Sun" (
    set WLS_MEM_ARGS_64BIT=-Xms256m -Xmx512m
    set WLS_MEM_ARGS_32BIT=-Xms256m -Xmx512m
) else (
    set WLS_MEM_ARGS_64BIT=-Xms512m -Xmx512m
    set WLS_MEM_ARGS_32BIT=-Xms512m -Xmx512m
)
and

set MEM_PERM_SIZE_64BIT=-XX:PermSize=256m
set MEM_PERM_SIZE_32BIT=-XX:PermSize=256m

if "%JAVA_USE_64BIT%"=="true" (
    set MEM_PERM_SIZE=%MEM_PERM_SIZE_64BIT%

) else (
    set MEM_PERM_SIZE=%MEM_PERM_SIZE_32BIT%
)

set MEM_MAX_PERM_SIZE_64BIT=-XX:MaxPermSize=1024m
set MEM_MAX_PERM_SIZE_32BIT=-XX:MaxPermSize=1024m
Jocelin answered 14/4, 2016 at 10:41 Comment(0)
B
1

In my case, increasing the memory using the -Xmx option was the solution.

I had a 10 GB file to read in Java and each time I got the same error. The error occurred when the value in the RES column of the top command reached the value set in the -Xmx option. After increasing the memory with the -Xmx option, everything went fine.

There was another point as well. When I set JAVA_OPTS or CATALINA_OPTS in my user account and increased the amount of memory, I got the same error again. Then I printed the values of those environment variables in my code, which gave me different values than what I had set. The reason was that the Tomcat process was owned by root, and since I was not a sudoer, I asked the admin to increase the memory in Tomcat's catalina.sh.

Boesch answered 15/12, 2015 at 17:59 Comment(0)
M
0

This helped me to get rid of this error. The option -XX:+DisableExplicitGC disables explicit System.gc() calls.

Microparasite answered 7/12, 2012 at 16:13 Comment(0)
