How to solve "java.io.IOException: error=12, Cannot allocate memory" calling Runtime#exec()?
On my system I can't run a simple Java application that starts a process, and I don't know how to fix it.

Could you give me some hints on how to solve this?

The program is:

[root@newton sisma-acquirer]# cat prova.java
import java.io.IOException;

public class prova {

   public static void main(String[] args) throws IOException {
        Runtime.getRuntime().exec("ls");
    }

}

The result is:

[root@newton sisma-acquirer]# javac prova.java && java -cp . prova
Exception in thread "main" java.io.IOException: Cannot run program "ls": java.io.IOException: error=12, Cannot allocate memory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:474)
        at java.lang.Runtime.exec(Runtime.java:610)
        at java.lang.Runtime.exec(Runtime.java:448)
        at java.lang.Runtime.exec(Runtime.java:345)
        at prova.main(prova.java:6)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
        at java.lang.ProcessImpl.start(ProcessImpl.java:81)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:467)
        ... 4 more

Configuration of the system:

[root@newton sisma-acquirer]# java -version
java version "1.6.0_0"
OpenJDK Runtime Environment (IcedTea6 1.5) (fedora-18.b16.fc10-i386)
OpenJDK Client VM (build 14.0-b15, mixed mode)
[root@newton sisma-acquirer]# cat /etc/fedora-release
Fedora release 10 (Cambridge)

EDIT: Solution. This solves my problem, though I don't know exactly why:

echo 0 > /proc/sys/vm/overcommit_memory

Up-votes for whoever is able to explain :)

Additional information, top output:

top - 13:35:38 up 40 min,  2 users,  load average: 0.43, 0.19, 0.12
Tasks: 129 total,   1 running, 128 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.5%us,  0.5%sy,  0.0%ni, 94.8%id,  3.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1033456k total,   587672k used,   445784k free,    51672k buffers
Swap:  2031608k total,        0k used,  2031608k free,   188108k cached

Additional information, free output:

[root@newton sisma-acquirer]# free
             total       used       free     shared    buffers     cached
Mem:       1033456     588548     444908          0      51704     188292
-/+ buffers/cache:     348552     684904
Swap:      2031608          0    2031608
Catalepsy answered 14/7, 2009 at 11:20 Comment(4)
It's either a bug in the linux version or you have some privilege issues. You could look into the UnixProcess:164 in the source to find out what it tries to allocate.Pahlavi
You can always try the sun jdkSaundra
I had posted a link to a free library that solves your problem but a moderator deleted my answer without explanation. To the benefit of the community, I give it another try as comment: Your memory problem is solved by Yajsw which on Linux uses calls to a C library for the process creation. Read about it here: sourceforge.net/projects/yajsw/forums/forum/810311/topic/…Merocrine
I've encountered this with openjdk, after I replaced it with the official sun jdk, forking works fine... If you don't want to replace openjdk, the 'overcommit_memory' hack works as wellMv
21

What's the memory profile of your machine? E.g. if you run top, how much free memory do you have?

I suspect UnixProcess performs a fork() and it's simply not getting enough memory from the OS (if memory serves, it'll fork() to duplicate the process and then exec() to run ls in the new process, and it's not getting that far)

EDIT: Re. your overcommit solution, it permits overcommitting of system memory, possibly allowing processes to allocate (but not use) more memory than is actually available. So I guess that the fork() duplicates the Java process memory as discussed in the comments below. Of course you don't use the memory since the 'ls' replaces the duplicate Java process.

Shelter answered 14/7, 2009 at 11:27 Comment(5)
I once read that fork() call actually duplicates the entire memory of the currently running process. Is it still true? If you have a java program with 1.2 GB memory and 2GB total, I guess it will fail?Pahlavi
Yes. I was going to mention this, but I vaguely remember that modern OSes will implement copy-on-write for memory pages, so I'm not sure of thisShelter
If she runs the app with the default settings, it shouldn't be a problem to dupe 64MB memory I guess.Pahlavi
I think Andrea's a "he". It's a masculine name in Italy :-)Shelter
@kd304 yes, this is still true; only memory mappings are copied though, and the memory is made copy-on-write in the new process, meaning memory is only actually copied if it's written to. Still, it's quite a big problem in big application servers using a lot of memory, as those servers tend to cause a lot of memory to be copied in the small window between fork and exec.Karyotin
37

This is the solution, but you have to set:

echo 1 > /proc/sys/vm/overcommit_memory
Sanches answered 22/6, 2010 at 13:38 Comment(4)
Beware! With overcommit_memory set to 1 every malloc() will succeed. Linux will start randomly killing processes when you're running out of memory. win.tue.nl/~aeb/linux/lk/lk-9.htmlHeddle
Is it possible to restrict this to be per-process, rather than system-wide?Unfolded
Using this solution in development in a Vagrant box.Antiquity
Yes, this worked for me too in a local Vagrant/JDK environment, while trying to build dom-distiller. Had to sudo su - to gain root to adjust the proc filesystem.Inearth
9

Runtime.getRuntime().exec initially allocates the child process the same amount of memory as the parent. If you have your heap set to 1GB and try to exec, it will allocate another 1GB for that process to run.

Sermon answered 3/5, 2010 at 23:19 Comment(1)
I had this problem with Maven. My machine had 1GB memory, and it was running Hudson, Nexus and another Maven process. The Maven process crashed since we set -Xms512m by mistake on MAVEN_OPTS. Fixing it to -Xms128m solved it.Unpleasant
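To see how large the forked child would initially be accounted for, you can print the JVM's configured maximum heap and run the program with different -Xmx values. This is a minimal sketch (the class name HeapCheck is mine, not from the original answers):

```java
// Minimal sketch: print the JVM's maximum heap size, which bounds how much
// address space a fork()-based Runtime.exec() may initially duplicate.
// Try: java -Xmx128m HeapCheck  vs.  java -Xmx1g HeapCheck
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("max heap (MB): " + (maxBytes / (1024 * 1024)));
    }
}
```

Note that copy-on-write means the memory is not physically copied, but with overcommit disabled the kernel still has to account for the full reservation.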
9

This is solved in Java version 1.6.0_23 and upwards.

See more details at https://bugs.java.com/bugdatabase/view_bug?bug_id=7034935

Notate answered 3/2, 2012 at 10:58 Comment(5)
Any idea if it applies to OpenJDK or equivalent non-Sun JVMs?Unfolded
I am not getting this issue after upgrading to 1.6.0_37-b06.. Still confused about the bug fix.. So how much memory jvm allocates to Runtime.exec?Myungmyxedema
Excellent point. Upgrading the JVM does fix the issue as they now use a different (lighter) system call.Intermezzo
still getting this with 1.7.0_91, seems to be more a memory restriction on my machine (when other apps are closed I don't get this error). Plus that exec spawns new processes with the same RAM usage as the origin processLinguistic
@Karussell: Did you get to resolve this issue? I am on 1.7.0_111 and facing the same. Upgrading to jdk8 is not an option.Columbuscolumbyne
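On JDKs that contain the fix, the Unix launch mechanism can (if memory serves) also be chosen explicitly via the undocumented jdk.lang.Process.launchMechanism system property (e.g. -Djdk.lang.Process.launchMechanism=POSIX_SPAWN on JDK 8+ for Linux) — treat the property name as something to verify on your JDK version. A minimal sketch to exercise process launching:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Minimal sketch: launch "ls" via ProcessBuilder and wait for it.
// On JDK 8+ you can experiment with the launch mechanism, e.g.:
//   java -Djdk.lang.Process.launchMechanism=POSIX_SPAWN Spawn
public class Spawn {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("ls");
        pb.redirectErrorStream(true);          // merge stderr into stdout
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            while (r.readLine() != null) { /* drain so the child can exit */ }
        }
        System.out.println("exit=" + p.waitFor());
    }
}
```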
8

I came across these links:

http://mail.openjdk.java.net/pipermail/core-libs-dev/2009-May/001689.html

http://www.nabble.com/Review-request-for-5049299-td23667680.html

It seems to be a bug. Using a spawn() trick instead of plain fork()/exec() is advised.

Pahlavi answered 14/7, 2009 at 11:50 Comment(0)
8

I solved this using JNA: https://github.com/twall/jna

import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Platform;

public class prova {

    // Map the C library's system() function via JNA, so the command is
    // spawned by the C runtime instead of the JVM's fork()/exec().
    private interface CLibrary extends Library {
        CLibrary INSTANCE = (CLibrary) Native.loadLibrary((Platform.isWindows() ? "msvcrt" : "c"), CLibrary.class);
        int system(String cmd);
    }

    private static int exec(String command) {
        // system() returns the command's exit status, or -1 on failure
        return CLibrary.INSTANCE.system(command);
    }

    public static void main(String[] args) {
        exec("ls");
    }
}
Merocrine answered 20/9, 2011 at 21:47 Comment(0)
5

If you look into the source of java.lang.Runtime, you'll see that exec ultimately calls a native method that forks the VM, which means it consumes virtual memory. On Unix-like systems, virtual memory is limited by the amount of swap space plus some ratio of physical memory.

Michael's answer did solve your problem, but it might (or rather, would eventually) cause an OS memory-allocation problem: setting the value to 1 tells the OS to be less careful about memory allocation, while 0 makes it guess, and you were simply lucky that the OS guessed you could have the memory THIS TIME. Next time? Hmm.....

A better approach is to experiment with your workload, provide a good amount of swap space, tune the ratio of physical memory that may be used, and set the value to 2 rather than 1 or 0.

Session answered 1/8, 2010 at 19:46 Comment(0)
4

overcommit_memory

Controls overcommit of system memory, possibly allowing processes to allocate (but not use) more memory than is actually available.

0 - Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.

1 - Always overcommit. Appropriate for some scientific applications.

2 - Don't overcommit. The total address space commit for the system is not permitted to exceed swap plus a configurable percentage (default is 50) of physical RAM. Depending on the percentage you use, in most situations this means a process will not be killed while attempting to use already-allocated memory but will receive errors on memory allocation as appropriate.

Straticulate answered 21/2, 2011 at 15:44 Comment(0)
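A quick way to check which of the modes described above a Linux box is in, sketched in Java (reading the same /proc file the kernel documentation refers to; the class name OvercommitCheck is mine):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Minimal sketch: print the current vm.overcommit_memory mode on Linux.
public class OvercommitCheck {
    public static void main(String[] args) throws IOException {
        Path p = Paths.get("/proc/sys/vm/overcommit_memory");
        if (Files.exists(p)) {
            String mode = new String(Files.readAllBytes(p)).trim();
            System.out.println("overcommit_memory = " + mode);
        } else {
            System.out.println("overcommit_memory file not found (not Linux?)");
        }
    }
}
```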
4

You can use the Tanuki wrapper to spawn a process with POSIX spawn instead of fork. http://wrapper.tanukisoftware.com/doc/english/child-exec.html

The WrapperManager.exec() function is an alternative to Java's Runtime.exec(), which has the disadvantage of using the fork() method; on some platforms this can be very memory-expensive when creating a new process.

Heddle answered 10/8, 2011 at 18:45 Comment(3)
The Tanuki wrapper is quite impressive. Unfortunately, the WrapperManager is part of the Professional Edition, which is quite expensive if this is the only thing you need. Do you know of any free alternative?Merocrine
@Merocrine It's available as part of the Free (GPLv2) community edition as well. You can even download the source and use it in GPL products.Heddle
I don't think this is part of the community edition. If you try a quick test, you'll get the following exception: Exception in thread "main" org.tanukisoftware.wrapper.WrapperLicenseError: Requires the Professional Edition.Merocrine
4

As weird as this may sound, one workaround is to reduce the amount of memory allocated to the JVM. Since fork() duplicates the process and its memory, if your JVM process does not really need as much memory as is allocated via -Xmx, the memory allocation for the child process will succeed.

Of course you can try other solutions mentioned here (like over-committing or upgrading to a JVM that has the fix). You can try reducing the memory if you are desperate for a solution that keeps all software intact with no environment impact. Also keep in mind that reducing -Xmx aggressively can cause OOMs. I'd recommend upgrading the JDK as a long-term stable solution.

Kaylee answered 19/9, 2012 at 13:1 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.