I've always heard that C++ file I/O operations are much much slower than C style I/O. But I didn't find any practical references on comparatively how slow they actually are, so I decided to test it in my machine (Ubuntu 12.04, GCC 4.6.3, ext4 partition format).
First I wrote a ~900MB file in the disk.
C++ (ofstream): 163s
ofstream file("test.txt");
for (int i = 0; i < 100000000; i++)
    file << i << endl;
C (fprintf): 12s
FILE *fp = fopen("test.txt", "w");
for (int i = 0; i < 100000000; i++)
    fprintf(fp, "%d\n", i);
I was expecting a result like this; it shows that writing to a file is much slower in C++ than in C. Then I read the same file back using both C and C++ I/O, and to my surprise there was almost no difference in reading performance.
C++ (ifstream): 12s
int n;
ifstream file("test.txt");
for (int i = 0; i < 100000000; i++)
    file >> n;
C (fscanf): 12s
FILE *fp = fopen("test.txt", "r");
for (int i = 0; i < 100000000; i++)
    fscanf(fp, "%d", &n);
So why does writing with a stream take so long? And why is reading with a stream so fast compared to writing?
Conclusion: The culprit is std::endl, as the answers and the comments have pointed out. Changing the line
file << i << endl;
to
file << i << '\n';
reduced the running time from 163s to 16s.
fstream is slower than cstdio, but these differences seem a little larger than I'd expect from my measurements. Could it be that you are running with low (or no) optimization? Since streams are largely implemented through templates, they get compiled into your code, whereas cstdio-type functions are compiled into a library and have a higher level of optimization. – Cogency

std::endl is more than the \n you output with fprintf: it also flushes the stream. So try file << i << '\n' instead. – Pickard