I have a really simple script right now that counts lines in a text file using enumerate():
i = 0
f = open("C:/Users/guest/Desktop/file.log", "r")
for i, line in enumerate(f):
    pass
print i + 1
f.close()
This takes around three and a half minutes to go through a 15GB log file with ~30 million lines. It would be great to get this under two minutes, because these are daily logs and we want to do a monthly analysis, so the code will have to process 30 logs of ~15GB each - potentially more than an hour and a half - and we'd like to minimise the time and memory load on the server.
I would also settle for a good approximation/estimation method, but it would need to be accurate to about 4 significant figures...
Thank you!
Read the file in chunks and count the \n characters in each chunk as you go. – Flavius

with open(fname) as f: print sum(1 for line in f) – Pewee