Here is a simple test script:
#!/bin/bash
# Print the running line count every 1000 lines.
LINECOUNT=0
while read -r LINE; do
    LINECOUNT=$((LINECOUNT+1))
    if [[ $((LINECOUNT % 1000)) -eq 0 ]]; then echo "$LINECOUNT"; fi
done
When I run cat my450klinefile.txt | myscript, the CPU locks at 100% and it processes about 1000 lines a second: roughly five minutes to process what cat my450klinefile.txt >/dev/null does in half a second.
Is there a more efficient way to do essentially this? I just need to read a line from stdin, count its bytes, and write it out to a named pipe, yet even this stripped-down example is impossibly slow.
For every 1 GB of input I also need to do a few more complex scripting actions (close and reopen some of the named pipes the data is being fed to).
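One way to sidestep the per-line shell loop entirely is to push the per-line work into awk, which can count bytes and rotate output pipes inside a single process. This is only a minimal sketch of that idea, not the questioner's actual setup: the pipe names (pipe0, pipe1, ...) and the 1 GB threshold are placeholders, and it assumes a reader process is already attached to each FIFO, since writes to a named pipe block until one is.

awk '
BEGIN { limit = 1024 * 1024 * 1024; n = 0; bytes = 0 }   # 1 GB threshold (assumption)
{
    pipe = "pipe" n            # hypothetical pipe names pipe0, pipe1, ...
    print > pipe               # write the current line to the current pipe
    bytes += length($0) + 1    # running byte count, +1 for the newline
    if (bytes >= limit) {      # threshold reached: rotate to the next pipe
        close(pipe)
        n++
        bytes = 0
    }
}' < my450klinefile.txt

Because every line is handled inside one long-running awk process, there is no per-line shell interpretation, which is where the original loop spends its time.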
Replace LINECOUNT=$(($LINECOUNT+1)) with ((LINECOUNT++)). – Bagnio
Why does my truck use so much fuel when I'm trying to transport 20 tonnes of wood? When I run it without the trailer it uses ten times less! – Bagnio
wc (word count)?
wc -l counts lines. – Clausewitz
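For reference, the standard coreutils invocations the last comments point at:

wc -l < my450klinefile.txt    # count lines
wc -c < my450klinefile.txt    # count bytes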