FileOutputStream.close is really slow when writing large file
I have a method which receives a file over a TCP socket using this code:

FileOutputStream fileStream = new FileOutputStream(filename.getName());
while (totalRead < size) {
    if (size - totalRead > CHUNKSIZE) {
        read = getInputStream().read(buffer, 0, CHUNKSIZE);
    } else {
        read = getInputStream().read(buffer, 0, size - totalRead);
    }
    totalRead += read;
    fileStream.write(buffer, 0, read);
    fileStream.flush();

    if (System.currentTimeMillis() > nextPrint) {
        nextPrint += 1000;
        int speed = (int) (totalRead / (System.currentTimeMillis() - startTime));
        double procent = ((double)totalRead / size) * 100;
        gui.setStatus("Receiving: " + filename + " at " + speed + " kb/s, " + procent + "% complete");
    }
}
gui.setStatus("Receiving: " + filename + " complete.");
fileStream.close();

FileOutputStream.close() takes a really long time when receiving large files. Why is that? As you can see, I'm flushing the stream after every received chunk.

Merc answered 21/10, 2011 at 12:41 Comment(4)
How long is "really long time"? If you look at the file size in the operating system before the close(), what does it look like? – Criner
close() takes around 30 seconds when receiving a 500 MB file. I can see the file "grow" while receiving it, so it is writing to disk at each flush. – Merc
But has it got the final size correctly just before you call close()? My guess is that it hasn't really been flushed at the OS level, but I couldn't say for sure. – Criner
Your copy loop is much more complicated than necessary. All you need is 'while ((count = in.read(buffer)) > 0) out.write(buffer, 0, count);' – Tumefacient
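The one-liner from the last comment can be sketched as a complete method. Note one difference from the question's loop: reading until end-of-stream only works if the sender closes (or shuts down) its side of the socket after the file, whereas the original loop stops after exactly `size` bytes. Class and method names here are illustrative:

```java
import java.io.*;

public class CopyLoop {
    // Copies everything from in to out until end of stream,
    // as the comment suggests. Returns the number of bytes copied.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int count;
        while ((count = in.read(buffer)) > 0) {
            out.write(buffer, 0, count);
            total += count;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate with in-memory streams instead of a socket.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(new byte[100_000]), out);
        System.out.println(copied + " bytes copied");
    }
}
```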

Depending on the OS, flush() does nothing more than force the data to be written to the OS. In the case of FileOutputStream, write() already passes all the data to the OS, so flush() does nothing. Whereas close() can ensure the file is actually written to disk (or not, depending on the OS). You can check whether the disk is still busy while you are writing the data.
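If the goal is to pay the disk-write cost during the transfer rather than inside close(), one option (a sketch, not part of the answer above) is FileDescriptor.sync(), which blocks until the OS has pushed its buffers to the device:

```java
import java.io.*;

public class SyncDemo {
    // Writes data and forces it out of the OS cache to the device
    // before returning; close() should then have little left to do.
    static long writeAndSync(File file, byte[] data) throws IOException {
        FileOutputStream fos = new FileOutputStream(file);
        try {
            fos.write(data);      // hands the bytes to the OS
            fos.flush();          // a no-op for FileOutputStream
            fos.getFD().sync();   // blocks until the OS has written to the device
        } finally {
            fos.close();
        }
        return file.length();
    }

    public static void main(String[] args) throws IOException {
        File file = File.createTempFile("syncdemo", ".dat");
        file.deleteOnExit();
        System.out.println(writeAndSync(file, new byte[64 * 1024]) + " bytes written");
    }
}
```

Calling sync() after every chunk would slow the transfer down considerably; it only moves the wait, it does not remove it.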

A 500 MB file taking 30 seconds means you are writing at about 17 MB/s. That sounds like a very slow disk, or like the file is on a network share/drive.


You can try this

File file = File.createTempFile("deleteme", "dat"); // put your file here.
FileOutputStream fos = new FileOutputStream(file);
long start = System.nanoTime();
byte[] bytes = new byte[32 * 1024];
for (long l = 0; l < 500 * 1000 * 1000; l += bytes.length)
    fos.write(bytes);
long mid = System.nanoTime();
System.out.printf("Took %.3f seconds to write %,d bytes%n", (mid - start) / 1e9, file.length());
fos.close();
long end = System.nanoTime();
System.out.printf("Took %.3f seconds to close%n", (end - mid) / 1e9);

prints

Took 0.116 seconds to write 500,006,912 bytes
Took 0.002 seconds to close

You can see from the speed that on this system it isn't actually writing the data to disk, even on close; the physical drive is not that fast.

Palermo answered 21/10, 2011 at 13:17 Comment(4)
flush() does nothing on FileOutputStream. It is only useful on buffered streams. – Echidna
Ah, that makes sense. When I tried the same code at home, close() took only a fraction of a second, so I guess the "problem" at school, where close() took ~30 seconds, was because of network-mounted home folders. – Merc
No, FileOutputStream.close() cannot guarantee that all bytes are actually written to disk. But it can guarantee that all bytes are written to the OS buffer. If the OS crashes, such as on a power failure, a FileOutputStream that has already returned from close() may lose data. – Deb
If you have closed the file, the program can die and you don't lose data unless something else goes wrong. – Palermo
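To illustrate the comment that flush() only matters on buffered streams, here is a small sketch (class name is illustrative) using BufferedOutputStream over an in-memory sink:

```java
import java.io.*;

public class BufferedFlushDemo {
    // Returns {bytes visible in the sink before flush, after flush}.
    static int[] flushEffect() throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedOutputStream buffered = new BufferedOutputStream(sink, 8192);
        buffered.write(new byte[100]); // stays in the 8 KB buffer
        int before = sink.size();      // nothing has reached the sink yet
        buffered.flush();              // pushes the buffer through
        int after = sink.size();
        buffered.close();
        return new int[] { before, after };
    }

    public static void main(String[] args) throws IOException {
        int[] r = flushEffect();
        System.out.println("before flush: " + r[0] + ", after flush: " + r[1]);
    }
}
```

With a bare FileOutputStream there is no such in-process buffer, which is why flushing it in the question's loop changes nothing.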

I saw the same thing using FileStream. What I found was that if you open the file as read-write, it cached everything and didn't write until you closed or disposed it; Flush didn't write either. However, if your writes extended the size of the file, it would auto-flush.

Opening as write-only auto-flushed on each write.

Batiste answered 30/12, 2011 at 22:55 Comment(0)
