How to read file N lines at a time? [duplicate]

I need to read a big file by reading at most N lines at a time, until EOF. What is the most effective way of doing it in Python? Something like:

with open(filename, 'r') as infile:
    while not EOF:
        lines = [get next N lines]
        process(lines)
Bluebell answered 29/4, 2011 at 13:43 Comment(2)
Quick very silly question: Will whatever you are going to do inside process(lines) work if N == 1? If not, you have a problem with a potential single line in the last bunch. If it does work with N == 1, then it would be much more efficient just to do for line in infile: work_on(line).Norwegian
@JohnMachin While it may work for N == 1, it may not be efficient. Think mini batch gradient descent in DL.Tucker

One solution would be a list comprehension and the slice operator:

with open(filename, 'r') as infile:
    lines = [line for line in infile][:N]

After this, lines is a list of lines. However, this loads the complete file into memory. If you don't want that (i.e. if the file could be really large), there is another solution using islice from the itertools module:

from itertools import islice
with open(filename, 'r') as infile:
    lines_gen = islice(infile, N)

lines_gen is an iterator that yields up to N lines of the file and can be used in a loop like this:

for line in lines_gen:
    print(line)

Both solutions give you up to N lines (or fewer, if the file doesn't have that many).

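As the comments below point out, islice on its own reads just one batch of N lines. To keep reading until EOF, a minimal sketch (reusing process(), filename, and N from the question) calls islice in a loop:

from itertools import islice

with open(filename, 'r') as infile:
    while True:
        lines = list(islice(infile, N))  # next batch of up to N lines
        if not lines:  # an empty batch means EOF
            break
        process(lines)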
Callimachus answered 29/4, 2011 at 13:55 Comment(5)
Simplified to lines = islice(infile, N)Bluebell
Note: it reads N lines and stops. To read the next N lines, you could wrap your code in a loop (until EOF) or use the grouper recipe as shown in my answer.Shelly
This solution doesn't answer the question of "how do I read N lines at a time until EOF". It only goes so far as to provide the mechanism for reading N lines at a time, but then only demonstrates reading N lines one at a time (the for loop at the end).Infest
The OP states I need to read a big file by reading at most N lines at a time, and your first solution loads all lines into memory?! Maybe you should not even consider that first solution and remove it from your answer!!!Cauterize
This answer demonstrates something useful—but not a solution to the original question as asked.Trawl

A file object is an iterator over lines in Python. To iterate over the file N lines at a time, you can use the grouper() function from the Itertools Recipes section of the documentation. (Also see What is the most “pythonic” way to iterate over a list in chunks?):

try:
    from itertools import izip_longest
except ImportError:  # Python 3
    from itertools import zip_longest as izip_longest

def grouper(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return izip_longest(*args, fillvalue=fillvalue)

Example

with open(filename) as f:
    for lines in grouper(f, N, ''):
        assert len(lines) == N
        # process N lines here
Shelly answered 30/4, 2011 at 22:49 Comment(4)
@Kevin J. Chase: 1- binary file is an iterator over b'\n'-lines 2- itertools.izip_longest is not removed in Python 3, it is renamed to itertools.zip_longestShelly
I mostly wanted to update that link, since the code only works as written in Python 2, and unspecified links to docs.python.org seem to default to 3 instead of 2 now. 1: True enough. 2: It's debatable which of the zip / izip functions got "removed" in Python 3 --- the code for one is missing, the name for the other is.Backspin
I don't mind the edit. The comment is for your benefit. itertools.zip_longest() in Python 3 and itertools.izip_longest() in Python 2 are the same object.Shelly
@martineau: why did you remove the python2 shebang? izip_longest is not available in Python 3 (it is renamed there to zip_longest)Shelly

This code works with any number of lines in the file and any N. If you have 1100 lines in the file and N = 200, process() is called five times with chunks of 200 lines and once with the remaining 100 lines.

with open(filename, 'r') as infile:
    lines = []
    for line in infile:
        lines.append(line)
        if len(lines) >= N:
            process(lines)
            lines = []
    if len(lines) > 0:
        process(lines)
Mahala answered 29/4, 2011 at 13:51 Comment(0)

maybe:

lines = []
for _ in range(N):
    lines.append(f.readline())  # readline() returns '' once EOF is reached
Soupandfish answered 29/4, 2011 at 13:50 Comment(0)

I think you should be using chunks instead of specifying the number of lines to read. It makes your code more robust and generic. Even if the lines are big, reading a chunk loads only the assigned amount of data into memory.

Refer to this link

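For illustration, here is a minimal sketch of chunked reading; the helper name read_in_chunks and the 64 KiB chunk_size are assumptions, not from the original answer. read() returns at most chunk_size characters, so memory stays bounded no matter how long individual lines are:

def read_in_chunks(filename, chunk_size=64 * 1024):
    # chunk_size is an illustrative default, not from the original answer
    with open(filename, 'r') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:  # empty string signals EOF
                break
            yield chunk

for chunk in read_in_chunks(filename):
    process(chunk)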
Medievalist answered 29/4, 2011 at 13:54 Comment(0)

I needed to read in n lines at a time from files for extremely large files (~1TB) and wrote a simple package to do this. If you pip install bigread, you can do:

from bigread import Reader

stream = Reader(file='large.txt', block_size=10)
for i in stream:
    print(i)

block_size is the number of lines to read at a time.


This package is no longer maintained. I now find it best to use:

with open('big.txt') as f:
    for line_idx, line in enumerate(f):
        print(line)

If you need to remember previous lines, just store them in a list. If you need to see future lines to decide what to do with the current line, store the current line in a list until you reach that future line...

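If you do need that memory of previous lines, a minimal sketch using collections.deque keeps a bounded window of recent lines (the window size k = 3 is an arbitrary choice for illustration):

from collections import deque

k = 3  # window size; an arbitrary choice for illustration
previous = deque(maxlen=k)  # automatically discards lines older than k
with open('big.txt') as f:
    for line in f:
        # ... use `line` together with the lines in `previous` here ...
        previous.append(line)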
Nicolasanicolau answered 28/6, 2018 at 12:39 Comment(2)
The link given above seems broken, and I could not match it to any of your other repos on GitHub. There is a version available at pypi.org/project/bigread, but it looks no longer maintained?Market
Yes it's no longer maintained :/ I updated the answer above to show how I approach this problem now; I hope this helps!Nicolasanicolau

How about a for loop?

with open(filename, 'r') as infile:
    while True:
        lines = []
        for _ in range(N):  # grab the next N lines
            line = infile.readline()
            if not line:  # empty string means EOF
                break
            lines.append(line)
        if not lines:
            break
        process(lines)
Paperhanger answered 29/4, 2011 at 13:50 Comment(2)
what is this syntax "next N lines", pseudocode? python noob hereNisa
@ColinD it's just the number of lines you want. For instance 7 lines would be for i in range(7)Paperhanger

You may have to do something as simple as:

lines = [infile.readline() for _ in range(N)]

Update after comments:

lines = [line for line in [infile.readline() for _ in range(N)] if len(line)]
Loiretcher answered 29/4, 2011 at 13:50 Comment(3)
Your code has no check on the line count. For example, if the line count is smaller than N, you will get an error.Mahala
@Anatolij: You're right that there is no checking - but you just get empty strings after EOF and no error.Loiretcher
You will need to check each item in process(), so this is overhead.Mahala
def get_lines_iterator(filename, n=10):
    with open(filename) as fp:
        lines = []
        for i, line in enumerate(fp):
            if i % n == 0 and i != 0:
                yield lines
                lines = []
            lines.append(line)
    if lines:
        yield lines

for lines in get_lines_iterator(filename):
    print(lines)

It is simpler with islice:

from itertools import islice

def get_lines_iterator(filename, n=10):
    with open(filename) as fp:
        while True:
            lines = list(islice(fp, n))
            if lines:
                yield lines
            else:
                break

for lines in get_lines_iterator(filename):
    print(lines)

Another way to do this:

from itertools import islice

def get_lines_iterator(filename, n=10):
    with open(filename) as fp:
        for line in fp:
            yield [line] + list(islice(fp, n-1))

for lines in get_lines_iterator(filename):
    print(lines)
Forebrain answered 15/2, 2022 at 16:29 Comment(0)

If you can read the full file in ahead of time:

infile = open(filename, 'r').readlines()
my_block = [line.strip() for line in infile[:N]]
cur_pos = 0
while my_block:
    print(my_block)
    cur_pos += 1
    my_block = [line.strip() for line in infile[cur_pos*N:(cur_pos + 1)*N]]
Falcate answered 1/11, 2017 at 21:51 Comment(0)

I was looking for an answer to the same question, but did not really like any of the earlier proposals, so I ended up writing this slightly ugly thing that does exactly what I wanted without using strange libraries.

def test(filename, N):
    with open(filename, 'r') as infile:
        lines = []
        for line in infile:
            lines.append(line.strip())
            if len(lines) == N:
                yield lines
                lines = []
        if lines:  # yield the final, possibly shorter, chunk
            yield lines
Hosmer answered 21/8, 2018 at 7:22 Comment(2)
itertools is in Python standard libraryBluebell
fair enough, itertools is fine, I did not feel comfortable about islice.Hosmer
