Java IOException "Too many open files"

I'm doing some file I/O with multiple files (writing to 19 files, it so happens). After writing to them a few hundred times I get the Java IOException: Too many open files. But I actually have only a few files opened at once. What is the problem here? I can verify that the writes were successful.

Tampere answered 27/11, 2010 at 0:37 Comment(4)
Put a comment in the function that opens the files and verify that you are only opening each one once.Neoptolemus
How do you know that this error occurs because of one of these 19? Java itself or the framework you're using might be opening files.Mcmann
Post your code. Perhaps you are opening them inside a loop, so it happens repeatedly.Supercargo
If you don't close the files yourself and just leave it to the garbage-collector to clean it up for you then it will take a long time for that to happen. In the meantime, you're busy opening more files and before you know it, you've hit the limit.Bundelkhand

On Linux and other UNIX / UNIX-like platforms, the OS places a limit on the number of open file descriptors that a process may have at any given time. In the old days, this limit used to be hardwired1, and relatively small. These days it is much larger (hundreds / thousands), and subject to a "soft" per-process configurable resource limit. (Look up the ulimit shell builtin ...)

Your Java application must be exceeding the per-process file descriptor limit.

You say that you have 19 files open, and that after a few hundred times you get an IOException saying "too many files open". Now this particular exception can ONLY happen when a new file descriptor is requested; i.e. when you are opening a file (or a pipe or a socket). You can verify this by printing the stacktrace for the IOException.

Unless your application is being run with a small resource limit (which seems unlikely), it follows that it must be repeatedly opening files / sockets / pipes, and failing to close them. Find out why that is happening and you should be able to figure out what to do about it.
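
For example, here is a minimal sketch (an illustration, not part of the diagnosis above) of one way to watch the descriptor count from inside the JVM: on OpenJDK / Oracle JDK on UNIX-like platforms, the platform MXBean exposes the process's open descriptor count and limit, so you can log them periodically and see whether they keep climbing.

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCount {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
            // If the open count keeps climbing while "only a few files" should be
            // in use, something is leaking descriptors.
            System.out.println("open fds: " + unixOs.getOpenFileDescriptorCount());
            System.out.println("fd limit: " + unixOs.getMaxFileDescriptorCount());
        }
    }
}

From outside the JVM, lsof -p <pid> or ls -l /proc/<pid>/fd (on Linux) will list exactly which files, sockets and pipes the process still holds open.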

FYI, the following pattern is a safe way to write to files that is guaranteed not to leak file descriptors.

Writer w = new FileWriter(...);
try {
    // write stuff to the file
} finally {
    try {
        w.close();
    } catch (IOException ex) {
        // Log error writing file and bail out.
    }
}

1 - Hardwired, as in compiled into the kernel. Changing the number of available fd slots required a recompilation ... and could result in less memory being available for other things. In the days when Unix commonly ran on 16-bit machines, these things really mattered.

UPDATE

The Java 7 way is more concise:

try (Writer w = new FileWriter(...)) {
    // write stuff to the file
} // the `w` resource is automatically closed 

UPDATE 2

Apparently you can also encounter a "too many files open" error while attempting to run an external program. The basic cause is as described above. However, the reason that you encounter this in exec(...) is that the JVM is attempting to create "pipe" file descriptors that will be connected to the external application's standard input / output / error.
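
For illustration, here is a minimal sketch (the command is just an example) of the cleanup that releases those pipe descriptors; forgetting it while launching processes in a loop is another way to hit the limit.

import java.io.IOException;

public class ExecFds {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Each start() allocates pipe file descriptors for the child's stdin / stdout / stderr.
        Process p = new ProcessBuilder("ls", "-l").start();

        // Close the parent-side ends of the pipes once you are done with them.
        p.getOutputStream().close();   // the child's stdin
        p.getInputStream().close();    // the child's stdout
        p.getErrorStream().close();    // the child's stderr

        p.waitFor();
    }
}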

Outbreak answered 27/11, 2010 at 1:4 Comment(5)
I would recommend using IOUtils.closeQuietly()Permatron
I would use that too. But I wouldn't recommend adding that dependency JUST to get a method that you can write yourself in 2 minutes.Outbreak
Indeed, in 2016 I would strongly recommend using try with resources; i.e. the "Java 7 way". If you are still using Java 6 or earlier, you should upgrade. Java 6 has been out of maintenance for 4 years now.Outbreak
I felt so happy when I read on the IOUtils documentation page about the closeQuietly(Closeable... closeables) method, which says, "Deprecated. As of 2.6 removed without replacement. Please use the try-with-resources statement or handle suppressed exceptions manually."Basis
@Basis - Yea. As of Java 7, you shouldn't need to close() explicitly anymore. The IOUtils deprecation is simply trying to get you to do things in a more modern way ... and be happier. See my first UPDATE.Outbreak

For UNIX:

As Stephen C has suggested, raising the maximum file descriptor limit avoids this problem.

Try looking at your present file descriptor capacity:

   $ ulimit -n

Then change the limit according to your requirements.

   $ ulimit -n <value>

Note that this just changes the limits in the current shell and any child / descendant process. To make the change "stick" you need to put it into the relevant shell script or initialization file.

Montmartre answered 5/12, 2013 at 18:54 Comment(1)
It only avoids the problem until your application hits the new higher limit. IMO, this is not a good solution.Outbreak

You're obviously not closing your file descriptors before opening new ones. Are you on Windows or Linux?

Palikar answered 27/11, 2010 at 0:45 Comment(3)
If he indeed opens 19 files and has an array with the file descriptors, then he should be fine, but I think he is actually opening more files than he realizes.Neoptolemus
AFAIK, Windows doesn't have this issue; e.g. on my XP SP3, I was able to traverse the entire source tree of CXF (around 15K files) without breaking a sweat and without closing any of the files I opened.Splayfoot
I guarantee Windows has a file descriptor limit. It just may default to 65000.Palikar

Although in most cases the error quite clearly means that file handles have not been closed, I just encountered an instance with JDK 7 on Linux that is, well... sufficiently messed up to be worth explaining here.

The program opened a FileOutputStream (fos), a BufferedOutputStream (bos) and a DataOutputStream (dos). After writing to the DataOutputStream, the dos was closed and I thought everything had gone fine.

Internally, however, the dos tried to flush the bos, which failed with a disk-full error. That exception was swallowed by the DataOutputStream, and as a consequence the underlying bos was not closed, hence the fos was still open.

At a later stage the file was then renamed from its temporary name (something with a .tmp) to its real name. Thereby, the Java file descriptor trackers lost track of the original .tmp, yet it was still open!

To solve this, I had to first flush the DataOutputStream myself, catch the IOException, and close the FileOutputStream myself.
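
A minimal sketch of that defensive pattern (the names are illustrative, not the original code): flush the outermost stream explicitly so a disk-full IOException surfaces where it can be handled, and close the underlying FileOutputStream in a finally block so the descriptor is released even if the flush fails.

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class SafeWrite {
    static void write(String path, byte[] data) throws IOException {
        FileOutputStream fos = new FileOutputStream(path);
        try {
            DataOutputStream dos = new DataOutputStream(new BufferedOutputStream(fos));
            dos.write(data);
            // Flush explicitly so a "disk full" failure surfaces here instead of
            // being swallowed during close().
            dos.flush();
        } finally {
            // Always release the file descriptor, even if the flush threw.
            fos.close();
        }
    }
}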

I hope this helps someone.

Aleras answered 24/9, 2014 at 22:35 Comment(1)
This makes no sense whatsoever. DataOutputStream does not swallow exceptions. FilterOutputStream.close() does, but not in a way that would suppress the close. The part about the 'Java file descriptor trackers', whatever they might be, 'los[ing] track of the original .tmp' is devoid of ascertainable meaning.Frap

If you're seeing this in automated tests: it's best to properly close all files between test runs.
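
For example, a minimal sketch of a teardown hook (assuming JUnit 5; the class and field names are hypothetical) that closes whatever the test opened:

import java.io.IOException;
import java.io.Writer;

import org.junit.jupiter.api.AfterEach;

class ReportTest {
    private Writer out;   // opened by individual tests (not shown)

    @AfterEach
    void closeOpenFiles() throws IOException {
        // Release the descriptor after every test, whether the test passed or failed.
        if (out != null) {
            out.close();
            out = null;
        }
    }
}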

If you're not sure which file(s) you have left open, a good place to start is the "open" calls which are throwing exceptions! 😄

If you have a file handle that should be open exactly as long as its parent object is alive, you could add a finalize method on the parent that calls close on the file handle, and call System.gc() between tests.
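
As a rough sketch of that idea (class and field names are hypothetical; note that finalize() is deprecated in newer JDKs and only runs when the GC actually collects the parent object):

import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

// Hypothetical owner that keeps a file handle open for its whole lifetime.
class ReportHolder {
    private final Writer out;

    ReportHolder(String path) throws IOException {
        this.out = new FileWriter(path);
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            // Release the file handle when the owner is garbage collected.
            out.close();
        } finally {
            super.finalize();
        }
    }
}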

Eastwardly answered 19/1, 2022 at 23:9 Comment(0)

Recently, I had a program batch-processing files. I had certainly closed each file in the loop, but the error was still there.

Later, I resolved this problem by garbage collecting eagerly every hundred files:

int index = 0;
while (hasMoreFiles()) {                 // hypothetical loop condition; left blank in the original
    OutputStream out = openNextFile();   // hypothetical helper; the original did not show where 'out' came from
    try {
        // do with outputStream...
    } finally {
        out.close();
    }
    if (index++ % 100 == 0) {
        System.gc();
    }
}
Goforth answered 19/1, 2012 at 6:40 Comment(3)
I'm sorry, but you are mistaken. You may think that you closed all streams explicitly, but somewhere in your program there is an execution path that results in streams not being closed. Running the GC is causing the "lost" streams to be finalized. Stream finalization calls this.close() (IIRC).Outbreak
Good to know though that gc() closes streams.Bridgid
@StefanReich It doesn't. It closes file descriptors. Streams will remain unflushed.Frap
