Efficiently counting the number of lines of a text file. (200mb+)

I have just found out that my script gives me a fatal error:

Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 440 bytes) in C:\process_txt.php on line 109

That line is this:

$lines = count(file($path)) - 1;

So I think it is having difficulty loading the file into memory to count the lines. Is there a more efficient way to do this without running into memory issues?

The text files that I need to count the number of lines for range from 2MB to 500MB, sometimes up to a gigabyte.

Thanks all for any help.

Judiejudith answered 29/1, 2010 at 14:26 Comment(0)

This will use less memory, since it doesn't load the whole file into memory:

$file="largefile.txt";
$linecount = 0;
$handle = fopen($file, "r");
while(!feof($handle)){
  $line = fgets($handle);
  $linecount++;
}

fclose($handle);

echo $linecount;

fgets loads a single line into memory (if the second argument $length is omitted it will keep reading from the stream until it reaches the end of the line, which is what we want). This is still unlikely to be as quick as using something other than PHP, if you care about wall time as well as memory usage.

The only danger with this is if any lines are particularly long (what if you encounter a 2GB file without line breaks?). In that case you're better off reading it in chunks and counting end-of-line characters:

$file="largefile.txt";
$linecount = 0;
$handle = fopen($file, "r");
while(!feof($handle)){
  $line = fgets($handle, 4096);
  $linecount = $linecount + substr_count($line, PHP_EOL);
}

fclose($handle);

echo $linecount;
Impunity answered 29/1, 2010 at 14:31 Comment(12)
Thanks for the explanation Dominic - that looks good. I had a feeling it had to be done line by line rather than letting count(file()) load the whole thing into memory!Judiejudith
The only danger of this snippet is huge files without linebreaks, as fgets will then try to suck up the whole file. It'd be safer to read 4kB chunks at a time and count line termination characters.Perla
@David - how does my edit look? I'm not 100% confident about PHP_EOL - does that look right?Impunity
not perfect: you could have a unix-style file (\n) being parsed on a windows machine (PHP_EOL == '\r\n')Henning
@Henning - good point. How would you address it? How does fgets work?Impunity
Why not improve a bit by limiting the line reading to 1 ? Since we only want to count the number of lines, why not do a fgets($handle, 1); ?Fleece
@CyrilN. That depends on your setup. If most of your files contain only a few chars per line it could be faster because you don't need substr_count(), but with very long lines you have to call while() and fgets() far more often, which is a disadvantage. Don't forget: fgets() does not read line by line. It reads at most $length characters and stops earlier if it hits a linebreak, whatever $length has been set to.Aronarondel
@DominicRodger instead of using substr_count() you should use strpos() as $line will never include more than one linebreak. Or better use $last = strlen($line) - 1; if ($line[ $last ] == "\n" || $line[ $last ] == "\r") { $linecount++; }. This should be the fastest option.Aronarondel
Won't this return 1 more than the number of lines? while(!feof()) will cause you to read an extra line, because the EOF indicator isn't set until after you try to read at the end of file.Weiss
@DominicRodger in the first example I believe $line = fgets($handle); could just be fgets($handle); because $line is never used.Anyaanyah
For the first solution: It counts an extra line because the loop runs once more than is necessary. To fix that, you need to move the fgets call to the end of the loop and clone it once above the loop as well.Eads
The second function will return the wrong count if the last line contains some text but no EOL.Portfire

Using a loop of fgets() calls is a fine solution and the most straightforward to write, however:

  1. even though internally the file is read using a buffer of 8192 bytes, your code still has to call that function for each line.

  2. it's technically possible that a single line may be bigger than the available memory if you're reading a binary file.

This code reads a file in chunks of 8kB each and then counts the number of newlines within that chunk.

function getLines($file)
{
    $f = fopen($file, 'rb');
    $lines = 0;

    while (!feof($f)) {
        $lines += substr_count(fread($f, 8192), "\n");
    }

    fclose($f);

    return $lines;
}

If the average length of each line is at most 4kB, you will already start saving on function calls, and those can add up when you process big files.

Benchmark

I ran a test with a 1GB file; here are the results:

             +-------------+------------------+---------+
             | This answer | Dominic's answer | wc -l   |
+------------+-------------+------------------+---------+
| Lines      | 3550388     | 3550389          | 3550388 |
+------------+-------------+------------------+---------+
| Runtime    | 1.055       | 4.297            | 0.587   |
+------------+-------------+------------------+---------+

Time is measured in seconds of real (wall-clock) time.

True line count

While the above works well and returns the same results as wc -l, if the file ends without a newline, the line count will be off by one; if you care about this particular scenario, you can make it more accurate by using this logic:


function getLines($file)
{
    $f = fopen($file, 'rb');
    $lines = 0; $buffer = '';

    while (!feof($f)) {
        $buffer = fread($f, 8192);
        $lines += substr_count($buffer, "\n");
    }

    fclose($f);

    if (strlen($buffer) > 0 && $buffer[-1] != "\n") {
        ++$lines;
    }
    return $lines;
}

Parasynthesis answered 12/12, 2013 at 7:8 Comment(8)
Curious how much faster it would be if you extend the buffer size to something like 64kB. PS: if only PHP had some easy way to make IO asynchronous in this caseGestapo
@Gestapo To answer your question, with 64kB buffers it becomes 0.2 seconds faster on 1GB :)Vaporescence
Interesting. What about skipping empty lines?Laudanum
Be careful with this benchmark, which did you run first? The second one will have the benefit of the file already being in disk cache, massively skewing the result.Zug
@OliCharlesworth they're averages over five runs, skipping the first run :)Vaporescence
This answer is great! However, IMO, it should check whether the last line contains characters and add 1 to the line count: pastebin.com/yLwZqPR2Whisk
The function will return the wrong count if the last line contains some text but no EOL.Portfire
@Portfire Surprisingly (or maybe not so) wc -l outputs the same number of lines in that condition (I tested with echo -n "hello world" > file.txt and both return 0)Vaporescence

Simple object-oriented solution

$file = new \SplFileObject('file.extension');

while($file->valid()) $file->fgets();

var_dump($file->key());

Update

Another way to do this is to pass PHP_INT_MAX to the SplFileObject::seek method.

$file = new \SplFileObject('file.extension', 'r');
$file->seek(PHP_INT_MAX);

echo $file->key(); 
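// Note: key() is zero-based; as discussed in the comments below, the result
// can be off by one depending on whether the file ends with a newline.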
Dempstor answered 24/7, 2015 at 13:18 Comment(8)
The second solution is great and uses Spl! Thanks.Concomitant
Thank you ! This is, indeed, great. And faster than calling wc -l (because of the forking I suppose), especially on small files.Trench
I didn't think the solution would be so helpful!Dempstor
Excellent solution!Mireyamiriam
This is the best solution by farLaevorotatory
Is the "key() + 1" right? I tried it and seems wrong. For a given file with line endings on every line including the last, this code gives me 3998. But if I do "wc" on it, I get 3997. If I use "vim", it says 3997L (and does not indicate missing EOL). So I think the "Update" answer is wrong.Aframe
@Aframe the key starts at zero. If the file contains one line, key() returns 0, but the correct count is 1Dempstor
@WallaceMaxters - for whatever reason, this is wrong. I've tested on a zero length and 1 line file and removing the + 1 gets the correct line count regardless of file length. Great answer though - thanks!Stanwin

If you're running this on a Linux/Unix host, the easiest solution would be to use exec() or similar to run the command wc -l $path. Just make sure you've sanitized $path first to be sure that it isn't something like "/path/to/file ; rm -rf /".
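
For illustration, a minimal sketch of that approach, using escapeshellarg() for the sanitization mentioned above (the path and the parsing here are illustrative, not part of the original answer):

$path = '/path/to/file.txt';
// wc -l prints something like "3550388 /path/to/file.txt";
// intval() stops at the first non-digit, so the cast below yields just the count.
$lines = intval(exec('wc -l ' . escapeshellarg($path)));
echo $lines;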

Kostroma answered 29/1, 2010 at 14:30 Comment(7)
I am on a windows machine! If I was, I think that would be the best solution!Judiejudith
@ghostdog74: Why, yes, you're right. It is non-portable. That's why I explicitly acknowledged my suggestion's non-portability by prefacing it with the clause "If you're running this on a Linux/Unix host...".Kostroma
Non portable (though useful in some situations), but exec (or shell_exec or system) is a system call, which is considerably slower than PHP's built-in functions.Reimport
@Manz: Why, yes, you're right. It is non-portable. That's why I explicitly acknowledged my suggestion's non-portability by prefacing it with the clause "If you're running this on a Linux/Unix host...".Kostroma
@DaveSherohman Yes, you're right, sorry. IMHO, the most important issue is the time consumed by a system call (especially if you need to use it frequently)Reimport
@Reimport it is still 8 times faster (or more) on big files (see Jack's answer).Chowchow
This does not work with CSVs created with Excel on MacBooks. They only have carriage returns, and no newline, for line terminators.Michaels

There is a faster way I found that does not require looping through the entire file in PHP.

This only works on *nix systems; there might be a similar way on Windows ...

$file = '/path/to/your.file';

//Get number of lines
$totalLines = intval(exec("wc -l '$file'"));
Outclass answered 17/3, 2013 at 21:18 Comment(5)
add 2>/dev/null to suppress the "No such file or directory"Sac
$total_lines = intval(exec("wc -l '$file'")); will handle file names with spaces.Quartus
Thanks pgee70, I hadn't come across that yet but it makes sense. I updated my answer.Outclass
exec('wc -l '.escapeshellarg($file).' 2>/dev/null')Fadden
Looks like the answer by @DaveSherohman above posted 3 years before this oneWier

If you're using PHP 5.5 or later you can use a generator. This will NOT work in any version of PHP before 5.5 though. From php.net:

"Generators provide an easy way to implement simple iterators without the overhead or complexity of implementing a class that implements the Iterator interface."

// This function implements a generator to load individual lines of a large file
function getLines($file) {
    $f = fopen($file, 'r');

    // read each line of the file without loading the whole file to memory
    while ($line = fgets($f)) {
        yield $line;
    }
}

// Since generators implement simple iterators, I can quickly count the number
// of lines using the iterator_count() function.
$file = '/path/to/file.txt';
$lineCount = iterator_count(getLines($file)); // the number of lines in the file
Schuh answered 12/10, 2013 at 1:53 Comment(1)
The try/finally is not strictly necessary, PHP will automatically close the file for you. You should probably also mention that the actual counting can be done using iterator_count(getFiles($file)) :)Aftershaft

If you're under Linux you can simply do:

$number_of_lines = intval(trim(shell_exec("wc -l ".$file_name." | awk '{print $1}'")));

You just have to find the right command if you're using another OS
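
On Windows, for example, one possibility is the built-in find.exe, whose /c /v "" combination counts all lines. The parsing below is a sketch only (reusing $file_name from above) and assumes find's usual "---------- FILE: N" output format:

// Sketch for Windows: find /c /v "" counts every line of the file.
// Typical output looks like "---------- FILE.TXT: 3550388",
// so take whatever follows the last colon.
$output = shell_exec('find /c /v "" ' . escapeshellarg($file_name));
$number_of_lines = intval(trim(substr($output, strrpos($output, ':') + 1)));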

Regards

Seaware answered 25/5, 2018 at 8:47 Comment(0)

This is an addition to Wallace Maxters's solution

It also skips empty lines while counting:

function getLines($file)
{
    $file = new \SplFileObject($file, 'r');
    $file->setFlags(
        SplFileObject::READ_AHEAD |
        SplFileObject::SKIP_EMPTY |
        SplFileObject::DROP_NEW_LINE
    );
    $file->seek(PHP_INT_MAX);

    return $file->key() + 1; 
}
Continually answered 28/6, 2017 at 7:9 Comment(0)

Based on Dominic Rodger's solution, here is what I use (it uses wc if available, otherwise it falls back to Dominic Rodger's solution).

class FileTool
{

    public static function getNbLines($file)
    {
        $linecount = 0;

        $m = exec('which wc');
        if ('' !== $m) {
            $cmd = 'wc -l < "' . str_replace('"', '\\"', $file) . '"';
            $n = exec($cmd);
            return (int)$n + 1;
        }


        $handle = fopen($file, "r");
        while (!feof($handle)) {
            $line = fgets($handle);
            $linecount++;
        }
        fclose($handle);
        return $linecount;
    }
}

https://github.com/lingtalfi/Bat/blob/master/FileTool.php

Ruyter answered 23/12, 2016 at 19:48 Comment(0)

The most succinct cross-platform solution that only buffers one line at a time.

$file = new \SplFileObject(__FILE__);
$file->setFlags($file::READ_AHEAD);
$lines = iterator_count($file);

Unfortunately, we have to set the READ_AHEAD flag, otherwise iterator_count blocks indefinitely; without that requirement this would be a one-liner.

Malaysia answered 20/9, 2019 at 14:46 Comment(0)
private static function lineCount($file) {
    $linecount = 0;
    $handle = fopen($file, "r");
    while(!feof($handle)){
        if (fgets($handle) !== false) {
                $linecount++;
        }
    }
    fclose($handle);
    return  $linecount;     
}

I wanted to add a little fix to the function above...

In a specific example where I had a file containing just the word 'testing', the function returned 2 as a result, so I needed to add a check for whether fgets returned false or not :)

have fun :)

Renoir answered 30/1, 2013 at 7:38 Comment(0)

Counting the number of lines can be done with the following code:

<?php
$fp= fopen("myfile.txt", "r");
$count=0;
while($line = fgetss($fp)) // fgetss() gets a line from the file and strips HTML tags (deprecated in PHP 7.3 and removed in PHP 8.0; use fgets() on newer versions)
$count++;
echo "Total number of lines  are ".$count;
fclose($fp);
?>
Pseudocarp answered 2/4, 2018 at 14:34 Comment(0)

this is a bit late but...

Here is my solution for a text log file I have which uses \n to separate each line.

$data = file_get_contents("myfile.txt");
$numlines = strlen($data) - strlen(str_replace("\n","",$data));

It does load the file into memory, but it doesn't need to cycle through an unknown number of lines. It may be unsuitable if the file is gigabytes in size, but for smaller files with short lines of data it works a treat for me.

It just removes the "\n" characters from the file and works out how many were removed by comparing the length of the original data to the length after all the line breaks ("\n" chars in my case) have been removed. If your line delimiter is a different character, replace the "\n" with whatever your line delimiter is.

I know it is not the best answer for all occasions but is something I have found quick and simple for my purposes where each line of the log is only a few hundred chars and total log file is not too large.
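
For what it's worth, the same idea can be written as a single substr_count() call (same memory caveat, since the whole file is still loaded):

$numlines = substr_count(file_get_contents("myfile.txt"), "\n");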

Stunk answered 3/11, 2021 at 10:1 Comment(1)
This has the same memory issue ... why not just use count(file()) instead?Conrad

You have several options. The first is to increase the available memory allowed, which is probably not the best way to do things given that you state the file can get very large. The other way is to use fgets to read the file line by line and increment a counter, which should not cause any memory issues at all as only the current line is in memory at any one time.
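
A minimal sketch of that second option (essentially the same loop as in Dominic Rodger's answer above; $path is whatever file you are counting):

$counter = 0;
$handle = fopen($path, 'r');
while (fgets($handle) !== false) { // reads one line at a time
    $counter++;
}
fclose($handle);
echo $counter;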

Ardyce answered 29/1, 2010 at 14:31 Comment(0)

There is another answer that I thought might be a good addition to this list.

If you have perl installed and are able to run things from the shell in PHP:

$lines = exec('perl -pe \'s/\r\n|\n|\r/\n/g\' ' . escapeshellarg('largetextfile.txt') . ' | wc -l');

This should handle most line breaks whether from Unix or Windows created files.

TWO downsides (at least):

1) It is not a great idea to have your script so dependent upon the system it's running on (it may not be safe to assume Perl and wc are available)

2) Just a small mistake in escaping and you have handed over access to a shell on your machine.

As with most things I know (or think I know) about coding, I got this info from somewhere else:

John Reeve Article

Hamitosemitic answered 2/8, 2014 at 23:45 Comment(0)
public function quickAndDirtyLineCounter()
{
    echo "<table>";
    $folders = [
        'C:\wamp\www\qa\abcfolder\\',
    ];
    foreach ($folders as $folder) {
        $files = scandir($folder);
        foreach ($files as $file) {
            if ($file == '.' || $file == '..' || !file_exists($folder.'\\'.$file)) {
                continue;
            }
            $handle = fopen($folder.'/'.$file, "r");
            if ($handle === false) { // skip files that could not be opened
                continue;
            }
            $linecount = 0;
            while (!feof($handle)) {
                $line = fgets($handle);
                $linecount++;
            }
            fclose($handle);
            echo "<tr><td>" . $folder . "</td><td>" . $file . "</td><td>" . $linecount . "</td></tr>";
        }
    }
    echo "</table>";
}
Villar answered 28/8, 2014 at 21:2 Comment(1)
Please consider adding at least some words explaining to the OP and to further readers of you answer why and how it does reply to the original question.Complacence

I use this method purely for counting how many lines are in a file. What is the downside of doing this versus the other answers? They are many lines long, as opposed to my two-line solution. I'm guessing there's a reason nobody does this.

$lines = count(file('your.file'));
echo $lines;
Ephah answered 26/10, 2017 at 14:24 Comment(1)
The original solution was this. But since file() loads the entire file into memory, this was also the original issue (memory exhaustion), so no, this isn't a solution to the question.Josphinejoss

stream_get_line is the most powerful one.

Just try:

$f = fopen($file, "r");
$n = 0;
while (($line = stream_get_line($f, 80000, "\n")) !== false) {
    $n++;
}
// $n is the number of lines and $line holds each line
Caryophyllaceous answered 27/10, 2023 at 6:29 Comment(0)

For just counting the lines use:

$handle = fopen("file","r");
$b = 0;
while($a = fgets($handle)) {
    $b++;
}
echo $b;
Birdiebirdlike answered 19/2, 2015 at 15:28 Comment(0)
