I understand that using a BufferedReader (wrapping a FileReader) is going to be significantly slower than using a BufferedInputStream (wrapping a FileInputStream), because the raw bytes have to be converted to characters. But I don't understand why it is so much slower! Here are the two code samples that I'm using:
BufferedInputStream inputStream = new BufferedInputStream(new FileInputStream(filename));
try {
    byte[] byteBuffer = new byte[bufferSize];
    int numberOfBytes;
    do {
        numberOfBytes = inputStream.read(byteBuffer, 0, bufferSize);
    } while (numberOfBytes >= 0);
}
finally {
    inputStream.close();
}
and:
BufferedReader reader = new BufferedReader(new FileReader(filename), bufferSize);
try {
    char[] charBuffer = new char[bufferSize];
    int numberOfChars;
    do {
        numberOfChars = reader.read(charBuffer, 0, bufferSize);
    } while (numberOfChars >= 0);
}
finally {
    reader.close();
}
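A run like the above can be timed with a simple System.nanoTime() wrapper; the following is only an illustrative sketch, not my exact harness, and readWholeFile is a hypothetical helper standing in for either loop above:

    // Illustrative timing sketch only; a proper benchmark would add warm-up runs
    // and repeat the measurement. readWholeFile is a hypothetical helper that
    // runs one of the two read loops shown above.
    long start = System.nanoTime();
    readWholeFile(filename, bufferSize);
    long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
    System.out.println(bufferSize + " bytes: " + elapsedMillis + " ms");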
I've tried tests using various buffer sizes, all with a 150 megabyte file. Here are the results (buffer size is in bytes; times are in milliseconds):
Buffer Size (bytes)   InputStream (ms)   Reader (ms)
 4,096                       145              497
 8,192                       125              465
16,384                        95              515
32,768                        74              506
65,536                        64              531
As can be seen, the fastest time for the BufferedInputStream (64 ms) is about seven times faster than the fastest time for the BufferedReader (465 ms). As I said above, I expect a significant difference, but a difference of this magnitude seems unreasonable.
My question is: does anyone have a suggestion for how to improve the performance of the BufferedReader, or an alternative mechanism?
The second sample converts the bytes to char, which the first doesn't do. If you need char data, use a Reader; if you need bytes, use an InputStream. I think you'll find that the fastest of all will be a BufferedReader wrapping an InputStreamReader wrapping a BufferedInputStream wrapping a FileInputStream. Also take a look at this thread on how to write a benchmark. – Inbound