Shell command to sum integers, one per line?

I am looking for a command that will accept (as input) multiple lines of text, each line containing a single integer, and output the sum of these integers.

As a bit of background, I have a log file which includes timing measurements. Through grepping for the relevant lines and a bit of sed reformatting I can list all of the timings in that file. I would like to work out the total. I can pipe this intermediate output to any command in order to do the final sum. I have always used expr in the past, but unless it runs in RPN mode I do not think it is going to cope with this (and even then it would be tricky).

How can I get the summation of integers?
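For illustration, the intermediate pipeline looks something like this (the log format, grep pattern, and sed expression here are made up; the final summing stage is what the answers provide):

```shell
# Hypothetical log lines; substitute your own grep/sed extraction stages.
printf 'req A took 120 ms\nreq B took 35 ms\n' \
  | grep 'took' \
  | sed 's/.*took \([0-9]*\) ms.*/\1/' \
  | awk '{s+=$1} END {print s}'
# prints 155
```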

Segno answered 16/1, 2009 at 15:42 Comment(2)
This is very similar to a question I asked a while ago: #296281Indices
This question feels like a problem for code golf. codegolf.stackexchange.com :)Inexpensive
1641

Bit of awk should do it?

awk '{s+=$1} END {print s}' mydatafile

Note: some versions of awk have some odd behaviours if you are going to be adding anything exceeding 2^31 − 1 (2147483647). See comments for more background. One suggestion is to use printf rather than print:

awk '{s+=$1} END {printf "%.0f", s}' mydatafile
Primer answered 16/1, 2009 at 15:42 Comment(27)
There's a lot of awk love in this room! I like how a simple script like this could be modified to add up a second column of data just by changing the $1 to $2Primer
What are the limits on awk? i.e. how many data elements can it process before dying? or becoming a less preferable approach to using a small C snippet?Biak
There's not a practical limit, since it will process the input as a stream. So, if it can handle a file of X lines, you can be pretty sure it can handle X+1.Primer
I once wrote a rudimentary mailing list processor with an awk script run via the vacation utility. Good times. :)Barter
just used this for a: count all documents’ pages script: ls $@ | xargs -i pdftk {} dump_data | grep NumberOfPages | awk '{s+=$2} END {print s}'Cheviot
Yeah, I was going to mention that you can also pipe the data to awk ... cat file.csv | cut -d, -f3 | awk '{s+=$1} END {print s}'Haruspex
How to modify this to work with floats, e.g. lines containing numbers like 123456.789? This snippet is really helpful when grepping through a huge list of files in accounting.Cristiano
awk uses double-precision floats, so it should just work. Whether or not floats are appropriate for accounting I'll leave you to judge :)Primer
To those like me who don't know awk.. check this link for super-quick introductionTiffie
awk breaks for large numbers (note: paste+bc continues to work).Hohenzollern
Be careful, it will not work with numbers greater than 2147483647 (i.e., 2^31 − 1); in that case some builds of awk push the sum through a 32-bit signed integer conversion when printing. Use awk '{s+=$1} END {printf "%.0f", s}' mydatafile instead.Farfamed
As @GiancarloSportelli says, his solution below is better - no integer overflow in print, see https://mcmap.net/q/53044/-shell-command-to-sum-integers-one-per-lineHydrosphere
I think if you have any question that goes 'I've been looking for a shell command that does X, but I can't find one'. The first response you or anyone should have is 'have you tried awk?'Coan
Lol I was using AWK to just extract the interesting number. Now I see that Chrome actually uses almost 2GB total :)Issuance
@GiancarloSportelli I just wanted to add 2 cents. You can use "%ld" on a 64 bit system to keep the number as a 64 bit int, no conversion at all. echo "2147483647\n2147483647\n2147483647\n2147483647" | awk '{s+=$1} END {printf("%ld\n", s)}'Feedback
Unlike other answers, this one works even when there are empty lines in the data.Rotund
Looking at this - how does one 'print' something before the final s variable is printed, i.e. total $i s?Conic
What happens if there are lines that don't contain numbers among them? How to filter them out?Frederiksberg
Be aware that awk has no knowledge of what an integer is. It does all its math in double precision. Hence only numbers up to 2^53 are exactly representable. From that point onwards it goes wrong: awk 'BEGIN{print 2^53-1, 2^53, 2^53+1}' => 9007199254740991 9007199254740992 9007199254740992Elayne
paste/bc from the comment below seems like the correct answerYardage
Can validate this fairly straightforwardly seq $((1 << 32)) $(((1 << 32)+10)) | awk '{x+=$1} END {printf "%ld\n", x}' output: 47244640311Lording
Actually Python is my preferred way, because it sums the numbers without rounding the result even for very large numbers, while awk rounds the result.Junco
Much easier to remember and type than the paste solution.Solana
@BC : i ran that exact command, even using GMP, and got something waaaaay off: seq $((1 << 32)) $(((1 << 32)+10)) | gawk -Me '{x+=$1} END {printf "%ld\n", x}' gave 5153964000. seq only worked after explicitly adding the formatting flag seq -f '%.f'Croteau
@BrianChrisman : but you don't need the %ld to get it work - any non-mawk.1.3.4 works as is, and mawk-1 just set CONVFMT='%.f' before you begin. but if you try to use "%ld" (as in L ) in mawk-1, I got 4766497255625457664 insteadCroteau
yeah.. there are some details on macos/linux as well. seq -f '%.0f' $((1 << 32)) $(((1 << 32)+10)) | awk '{x+=$1} END {print x}' where 'seq' can give exponential notation as default on some systems.Lording
just change the printing mode and it prints fine: jot 574998 - - 54321 | mawk '$++NF=_+=$!_' CONVFMT='%.f' | gtail -n 2 prints 31234357717 8979830992388423 and 31234412038 8979862226800461Croteau
789

Paste typically merges lines of multiple files, but it can also be used to convert the individual lines of a file into a single line. The delimiter flag lets you join them with +, yielding an x+x+x expression that can be passed to bc.

paste -s -d+ infile | bc

Alternatively, when piping from stdin,

<commands> | paste -s -d+ - | bc
Ossieossietzky answered 16/1, 2009 at 15:42 Comment(19)
Very nice! I would have put a space before the "+", just to help me parse it better, but that was very handy for piping some memory numbers through paste & then bc.Footstep
A solution using bc or dc is what I was looking for as a solution to my own similar problem. I don't know much about it and it seems to be underutilized, so it's very good to see others know how to use it. I suspect it's much faster than the other solutions, especially if there are many lines in the input.Barter
Much easier to remember and type than the awk solution. Also, note that paste can use a dash - as the filename - which will allow you to pipe the numbers from the output of a command into paste's standard output without the need to create a file first: <commands> | paste -sd+ - | bcArthromere
I find this solution less of an overkill than using a fully-fledged programming language, like awk/perl/python. Kudos!Putdown
Awesome! Just used this to sum bogomips across all cores: cat /proc/cpuinfo | grep bogo | cut -d: -f2 | paste -sd+ | bcDynatron
I have a file with 100 million numbers. The awk command takes 21s; the paste command takes 41s. But good to meet 'paste' nevertheless!Pretext
@Abhi: Interesting :D I guess it would take me 20s to figure out the awk command so it evens out though until I try 100 million and one numbers :DResurge
@Abhi: check the answer. awk fails for large numbers on my machine but paste + bc work.Hohenzollern
@Arthromere You can leave out the -, though. (It is useful if you wanted to combine a file with stdin).Qatar
This didn't like the '^m' chars in the file, but once that was fixed, happy happy.Appendicle
@AloisMahdal You can't leave out the - when used in pipe. echo -e "1\n2\n3" | paste -sd+ doesn't work, but having the extra - does work.Magavern
@MarkKCowan bc uses arbitrary precision, I guess awk is likely to use 32 or 64-bit arithmetics, possibly leading to overflows.Beetner
@Magavern then it's probably version-dependent; it does work here with GNU coreutils 8.24 (Fedora 23); your code in last comment yields 1+2+3Qatar
Note that bc isn't included in some shells, like Git Bash for Windows.Peremptory
@10basetom that's because bc is not a shell-builtin (so it's not included in any shell) but a separate program (which is pretty standard on any un*x machine)Macmullin
@umläute I know why it's not included in some shells -- I was pointing that out so that people can consider alternative solutions if their aim is portability.Peremptory
had the problem that bc was not installed and ended up with echo $(( $( <commands> | paste -s -d+ -) )) to evaluate the answerLemmuela
@Arthromere and @Pretext ... so the awk solution is both easier to remember... and faster. double win.Thorfinn
paste & bc also, like Python, sums the numbers without rounding the result even for very large numbers, while awk rounds the result.Junco
156

The one-liner version in Python:

$ python -c "import sys; print(sum(int(l) for l in sys.stdin))"
Kenogenesis answered 16/1, 2009 at 15:42 Comment(8)
Above one-liner doesn't work for files in sys.argv[], but that one does #451299Hohenzollern
True- the author said he was going to pipe output from another script into the command and I was trying to make it as short as possible :)Kenogenesis
Shorter version would be python -c"import sys; print(sum(map(int, sys.stdin)))"Hohenzollern
I love this answer for its ease of reading and flexibility. I needed the average size of files smaller than 10Mb in a collection of directories and modified it to this: find . -name '*.epub' -exec stat -c %s '{}' \; | python -c "import sys; nums = [int(n) for n in sys.stdin if int(n) < 10000000]; print(sum(nums)/len(nums))"Depose
Very flexible solution. Also usable for float numbers, just replace int with floatDispensable
You can also filter out non numbers if you have some text mixed in: import sys; print(sum(int(''.join(c for c in l if c.isdigit())) for l in sys.stdin))Contribution
Actually Python is my preferred way, because it sums the numbers without rounding the result even for very large numbers, while awk rounds the result.Junco
@Hohenzollern : use gawk -M bignum mode with GMP, and nothing failsCroteau
127

I would put a big WARNING on the commonly approved solution:

awk '{s+=$1} END {print s}' mydatafile # DO NOT USE THIS!!

that is because in this form some builds of awk push the sum through a 32-bit signed integer conversion when printing it: the result overflows for sums that exceed 2147483647 (i.e., 2^31 − 1).

A more general answer (for summing integers) would be:

awk '{s+=$1} END {printf "%.0f\n", s}' mydatafile # USE THIS INSTEAD
Farfamed answered 16/1, 2009 at 15:42 Comment(10)
Why does printf() help here? The overflow of the int will have happened before that because the summing code is the same.Compte
Because the problem is actually in the "print" function. Awk uses 64 bit integers, but for some reason print downscales them to 32 bit.Farfamed
The print bug appears to be fixed, at least for awk 4.0.1 & bash 4.3.11, unless I'm mistaken: echo -e "2147483647 \n 100" |awk '{s+=$1}END{print s}' shows 2147483747Phototype
Using floats just introduces a new problem: echo 999999999999999999 | awk '{s+=$1} END {printf "%.0f\n", s}' produces 1000000000000000000Tadzhik
Shouldn't just using "%ld" on 64bit systems work to not have printf truncate to 32bit? As @Patrick points out, floats aren't a great idea here.Kokaras
@yerforkferchips, where should %ld be placed in the code? I tried echo -e "999999999999999999" | awk '{s+=$1} END {printf "%ld\n", s}' but it still produced 1000000000000000000.Deadpan
See the latest comment by @Elayne on this answer: https://mcmap.net/q/53044/-shell-command-to-sum-integers-one-per-line -- a sad fact in awk ;)Kokaras
Both methods print the same number, even with large numbers (40 decimal digits), but they are both not accurate - only about the first 15 digits are accurate and the rest are random.Junco
As others said awk is incompatible with itself across different versions and builds. You need to be aware of what to really expect with such operations, each time...Bicyclic
@Junco : then just track the addition using a hi/lo combo of 2 integers. Between 2 of them, you can easily track 101-102 bits to full precision. Track with 5 integers total, and you'll be just shy of 256-bit full precision. Use 21 of them, and you can even track unsigned 1024-bit without needing arraysCroteau
102

With jq:

seq 10 | jq -s 'add' # 'add' is equivalent to 'reduce .[] as $item (0; . + $item)'
Covetous answered 16/1, 2009 at 15:42 Comment(3)
Is there a way to do this with rq?Joyance
I think I know what could be the next question, so I will add the answer here :) calculate average: seq 10 | jq -s 'add / length' ref hereConverted
this is slow. jq -s 'add' is 8x slower than the C versionGrider
97

Plain bash:

$ cat numbers.txt 
1
2
3
4
5
6
7
8
9
10
$ sum=0; while read num; do ((sum += num)); done < numbers.txt; echo $sum
55
Ankylosaur answered 16/1, 2009 at 15:42 Comment(4)
A smaller one liner: #451299Swob
@rjack, where is num defined? I believe somehow it is connected to the < numbers.txt expression, but it is not clear how.Suggest
@Suggest num is defined in the while expression. while read XX means "use while to read a value, then store that value in XX"Mosque
this is slow, 100x slower than the C versionGrider
72
dc -f infile -e '[+z1<r]srz1<rp'

Note that negative numbers prefixed with a minus sign must be translated for dc, since it uses a _ prefix rather than a - prefix for negatives. For example, via tr '-' '_' | dc -f- -e '...'.

Edit: Since this answer got so many votes "for obscurity", here is a detailed explanation:

The expression [+z1<r]srz1<rp does the following:

[   interpret everything to the next ] as a string
  +   pop two values off the stack, add them and push the result
  z   push the current stack depth
  1   push one
  <r  pop two values and execute register r if the original top-of-stack (1)
      is smaller
]   end of the string, will push the whole thing to the stack
sr  pop a value (the string above) and store it in register r
z   push the current stack depth again
1   push 1
<r  pop two values and execute register r if the original top-of-stack (1)
    is smaller
p   print the current top-of-stack

As pseudo-code:

  1. Define "add_top_of_stack" as:
    1. Pop the two top values off the stack and push their sum back
    2. If the stack has two or more values, run "add_top_of_stack" recursively
  2. If the stack has two or more values, run "add_top_of_stack"
  3. Print the result, now the only item left in the stack

To really understand the simplicity and power of dc, here is a working Python script that implements some of the commands from dc and executes a Python version of the above command:

### Implement some commands from dc
registers = {'r': None}
stack = []
def add():
    stack.append(stack.pop() + stack.pop())
def z():
    stack.append(len(stack))
def less(reg):
    if stack.pop() < stack.pop():
        registers[reg]()
def store(reg):
    registers[reg] = stack.pop()
def p():
    print(stack[-1])

### Python version of the dc command above

# The equivalent to -f: read a file and push every line to the stack
import fileinput
for line in fileinput.input():
    stack.append(int(line.strip()))

def cmd():
    add()
    z()
    stack.append(1)
    less('r')

stack.append(cmd)
store('r')
z()
stack.append(1)
less('r')
p()
Steelman answered 16/1, 2009 at 15:42 Comment(4)
dc is just the tool of choice to use. But I would do it with a little less stack ops. Assumed that all lines really contain a number: (echo "0"; sed 's/$/ +/' inp; echo 'pq')|dc.Malvie
The online algorithm: dc -e '0 0 [+?z1<m]dsmxp'. So we don't save all the numbers on stack before processing but read and process them one by one (to be more precise, line by line, since one line can contain several numbers). Note that empty line can terminate an input sequence.Shevlo
@Malvie that's great. It can actually be shortened by one more character: the space in the sed substitution can be removed, as dc doesn't care about spaces between arguments and operators. (echo "0"; sed 's/$/+/' inputFile; echo 'pq')|dcBarbera
this is slow. dc -f - -e '[+z1<r]srz1<rp' is 250x slower than the C version. dc -e '0 0 [+?z1<m]dsmxp' is 15x slower than the C versionGrider
53

Pure and short bash.

f=$(cat numbers.txt)
echo $(( ${f//$'\n'/+} ))
Bonnibelle answered 16/1, 2009 at 15:42 Comment(8)
This is the best solution because it does not create any subprocess if you replace first line with f=$(<numbers.txt).Napoleon
any way of having the input from stdin ? like from a pipe ?Balaklava
@Balaklava If you put f=$(cat); echo $(( ${f//$'\n'/+} )) in a script, then you can pipe anything to that script or invoke it without arguments for interactive stdin input (terminate with Control-D).Tryma
@Napoleon The <numbers.txt is an improvement, but, overall, this solution is only efficient for small input files; for instance, with a file of 1,000 input lines the accepted awk solution is about 20 times faster on my machine - and also consumes less memory, because the file is not read all at once.Tryma
My use case: f=$(find -iname '*-2014-*' -exec du {} \; | cut -f1); echo $(( ${f//$'\n'/+} )). Might help someone.Lebeau
An advantage of the bash solution over the awk solution is that it gives an integer result, even for large numbers. For large numbers, awk returns a result in scientific notation.Hellenist
This answer is not pure bash because cat is an external call. See the accepted answer's awk '{s+=$1}END{printf"%f",s}' mydatafile for a faster solution that won't fail given too large an input. Load time differences (cat's 43k vs mawk's 128k or even gawk's 659k on my system) won't ever overcome the performance difference ... unless you're running this too often, in which case use more awk or else a "real" language.Lamellate
i didn't downvote, but wanna note this is a horrific solution - summing up from 1 to 99999 took 26.7 seconds on a machine with M1 Max and bash 5.2.15, versus 0.053 secs on awk using jot, and 0.22 secs generating via another awk. Even summing every integer to 100 mil was only 11.5 seconds, and just 1 min 55secs summing all the way to 1 billion. perl came in just slower than awkCroteau
43
perl -lne '$x += $_; END { print $x; }' < infile.txt
Cavesson answered 16/1, 2009 at 15:42 Comment(7)
And I added them back: "-l" ensures that output is LF-terminated as shell `` backticks and most programs expect, and "<" indicates this command can be used in a pipeline.Cavesson
You are right. As an excuse: Each character in Perl one-liners requires a mental work for me, therefore I prefer to strip as many characters as possible. The habit was harmful in this case.Hohenzollern
One of the few solutions that doesn't load everything into RAM.Pulmonate
I find it curious just how undervalued this answer is in comparison with the top-rated ones (that use non-shell tools) -- while it's faster and simpler than those. It's almost the same syntax as awk but faster (as benchmarked in another well-voted answer here) and without any caveats, and it's much shorter and simpler than python, and faster (flexibility can be added just as easily). One needs to know the basics of the language used for it, but that goes for any tool. I get the notion of a popularity of a tool but this question is tool agnostic. All these were published the same day.Monah
(disclaimer for my comment above: I know and use and like Perl and Python, as good tools.)Monah
@Monah : no idea what you're benchmarking with or against, but i just summed 1 to 99,999,999, and perl 5.36 came in just behind mawk 1.9.9.6Croteau
@RAREKpopManifesto That was written a year and a half ago, but according to the comment itself I was quoting a ("well-quoted") benchmark posted on this page (I didn't benchmark anything) ...? (Did you actually read my comment?). I don't know how mawk does it or how you timed things, but yes awk can be speedy for many things, agreed.Monah
38

I've done a quick benchmark on the existing answers which

  • use only standard tools (sorry for stuff like Lua or Racket),
  • are real one-liners,
  • are capable of adding huge amounts of numbers (100 million), and
  • are fast (I ignored the ones which took longer than a minute).

I always added the numbers from 1 to 100 million, which was doable on my machine in less than a minute for several solutions.

Here are the results:

Python

:; seq 100000000 | python -c 'import sys; print sum(map(int, sys.stdin))'
5000000050000000
# 30s
:; seq 100000000 | python -c 'import sys; print sum(int(s) for s in sys.stdin)'
5000000050000000
# 38s
:; seq 100000000 | python3 -c 'import sys; print(sum(int(s) for s in sys.stdin))'
5000000050000000
# 27s
:; seq 100000000 | python3 -c 'import sys; print(sum(map(int, sys.stdin)))'
5000000050000000
# 22s
:; seq 100000000 | pypy -c 'import sys; print(sum(map(int, sys.stdin)))'
5000000050000000
# 11s
:; seq 100000000 | pypy -c 'import sys; print(sum(int(s) for s in sys.stdin))'
5000000050000000
# 11s

Awk

:; seq 100000000 | awk '{s+=$1} END {print s}'
5000000050000000
# 22s

Paste & Bc

This ran out of memory on my machine. It worked for half the size of the input (50 million numbers):

:; seq 50000000 | paste -s -d+ - | bc
1250000025000000
# 17s
:; seq 50000001 100000000 | paste -s -d+ - | bc
3750000025000000
# 18s

So I guess it would have taken ~35s for the 100 million numbers.

Perl

:; seq 100000000 | perl -lne '$x += $_; END { print $x; }'
5000000050000000
# 15s
:; seq 100000000 | perl -e 'map {$x += $_} <> and print $x'
5000000050000000
# 48s

Ruby

:; seq 100000000 | ruby -e "puts ARGF.map(&:to_i).inject(&:+)"
5000000050000000
# 30s

C

Just for comparison's sake I compiled the C version and tested this also, just to have an idea how much slower the tool-based solutions are.

#include <stdio.h>
int main(int argc, char** argv) {
    long sum = 0;
    long i = 0;
    while(scanf("%ld", &i) == 1) {
        sum = sum + i;
    }
    printf("%ld\n", sum);
    return 0;
}

 

:; seq 100000000 | ./a.out 
5000000050000000
# 8s

Conclusion

C is of course the fastest with 8s, but the Pypy solution adds only a little overhead, about 30%, at 11s. But, to be fair, Pypy isn't exactly standard. Most people only have CPython installed, which is significantly slower (22s), exactly as fast as the popular Awk solution.

The fastest solution based on standard tools is Perl (15s).

Damnedest answered 16/1, 2009 at 15:42 Comment(8)
The paste + bc approach was just what I was looking for to sum hex values, thanks!Dittman
Just for fun, in Rust: use std::io::{self, BufRead}; fn main() { let stdin = io::stdin(); let mut sum: i64 = 0; for line in stdin.lock().lines() { sum += line.unwrap().parse::<i64>().unwrap(); } println!("{}", sum); }Allisan
awesome answer. not to nitpick but it is the case that if you decided to include those longer-running results, the answer would be even more awesome!Rosemare
@StevenLu I felt the answer would just be longer and thus less awesome (to use your words). But I can understand that this feeling needs not be shared by everybody :)Damnedest
Next: numba + parallelisationFinnougrian
awk would have matched perl if you had used $0 instead of $1Kacey
datamash sum 1 is maybe 5% slower than the C versionGrider
parallelization: this only works with files, because we cannot seek in a stream. i started a sum-parallel.c but im not quite there yet...Grider
38

My fifteen cents:

$ cat file.txt | xargs  | sed -e 's/\ /+/g' | bc

Example:

$ cat text
1
2
3
3
4
5
6
78
9
0
1
2
3
4
576
7
4444
$ cat text | xargs  | sed -e 's/\ /+/g' | bc 
5148
Burgage answered 16/1, 2009 at 15:42 Comment(3)
My input could contain blank lines, so I used what you posted here plus a grep -v '^$'. Thanks!Ivar
wow!! your answer is amazing! my personal favorite from all in the threadGirlie
Love this and +1 for pipeline. Very simple and easy solution for meEquiangular
29

Using the GNU datamash util:

seq 10 | datamash sum 1

Output:

55

If the input data is irregular, with spaces and tabs at odd places, this may confuse datamash; in that case either use the -W switch:

<commands...> | datamash -W sum 1

...or use tr to clean up the whitespace:

<commands...> | tr -d '[[:blank:]]' | datamash sum 1

If the input is large enough, the output will be in scientific notation.

seq 100000000 | datamash sum 1

Output:

5.00000005e+15

To convert that to decimal, use the --format option:

seq 100000000 | datamash  --format '%.0f' sum 1

Output:

5000000050000000
Wellworn answered 16/1, 2009 at 15:42 Comment(2)
This works great with my github CLI pagination counting <3Profusion
limitation: datamash works only with streams, not with files, so it will use only one CPU core, no parallelizationGrider
20

Plain bash one-liner

$ cat > /tmp/test
1 
2 
3 
4 
5
^D

$ echo $(( $(cat /tmp/test | tr "\n" "+" ) 0 ))
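The trailing 0 terminates the dangling + that tr leaves after the last number. The same trick works straight off a pipe, no temporary file needed (a sketch, assuming bash or any POSIX shell):

```shell
# The command substitution inherits the pipe as stdin,
# so tr turns "1\n2\n3\n" into "1+2+3+" and the trailing 0 completes it.
printf '1\n2\n3\n' | echo $(( $( tr "\n" "+" ) 0 ))
# prints 6
```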
Swob answered 16/1, 2009 at 15:42 Comment(2)
No cat needed: echo $(( $( tr "\n" "+" < /tmp/test) 0 ))Wellworn
tr isn't exactly "plain Bash" /nitpickMistaken
20

BASH solution, if you want to make this a command (e.g. if you need to do this frequently):

addnums () {
  local total=0
  while read val; do
    (( total += val ))
  done
  echo $total
}

Then usage:

addnums < /tmp/nums
Brunn answered 16/1, 2009 at 15:42 Comment(0)
17

You can use num-utils, although it may be overkill for what you need. This is a set of programs for manipulating numbers in the shell that can do several nifty things, including, of course, adding them up. It's a bit out of date, but the tools still work and can be useful if you need to do something more.

https://suso.suso.org/programs/num-utils/index.phtml

It's really simple to use:

$ seq 10 | numsum
55

But it runs out of memory for large inputs.

$ seq 100000000 | numsum
Terminado (killed)
Mingmingche answered 16/1, 2009 at 15:42 Comment(2)
Example: numsum numbers.txt.Wellworn
Example with pipe: printf "%s\n" 1 3 5 | numsumConvolvulus
15

I cannot avoid submitting this; it is the most generic approach to this question, please check:

jot 1000000 | sed '2,$s/$/+/;$s/$/p/' | dc

It is to be found over here; I was the OP and the answer came from the audience:

And here are its special advantages over awk, bc, perl, GNU's datamash and friends:

  • it uses standard utilities common in any unix environment
  • it does not depend on buffering and thus does not choke on really long inputs
  • it implies no particular precision limits (or integer size, for that matter), hello awk friends!
  • no need for different code if floating point numbers need to be added instead
  • it theoretically runs unhindered in the most minimal of environments
Bicyclic answered 16/1, 2009 at 15:42 Comment(6)
Please include the code related to the question in the answer and not refer to a linkFeune
It also happens to be much slower than all the other solutions, more than 10 times slower than the datamash solutionOrmandy
@GabrielRavier OP doesn't define speed as a first requirement, so in absence of that a generic working solution would be preferred. FYI. datamash is not standard across all Unix platforms, fi. MacOSX appears to be lacking that.Bicyclic
@Bicyclic this is true, but I just wanted to point out to everyone else looking at this question that this answer is, in fact, very slow compared to what you can get on most Linux systems.Ormandy
@GabrielRavier could you provide some measured numbers for comparison? btw. I have run a couple of jot tests and speed is very reasonable even for quite large lists. btw. if datamash is taken as the solution to the OP's question, then any compiled assembly program should be acceptable, too... that would speed it up!Bicyclic
@Bicyclic After some calculations, I can confirm it is even more than 10 times faster. time seq 10000000 | sed '2,$s/$/+/;$s/$/p/' | dc gives me the correct result in 43 seconds whereas time seq 10000000 | datamash sum 1 does it in 1 second, making it more than 40 times faster. Also, a "compiled assembly program" as you call it, would be much more convoluted a solution, likely not much faster and would be much more likely to give incorrect solutionsOrmandy
12

I realize this is an old question, but I like this solution enough to share it.

% cat > numbers.txt
1 
2 
3 
4 
5
^D
% cat numbers.txt | perl -lpe '$c+=$_}{$_=$c'
15

If there is interest, I'll explain how it works.

Sandry answered 16/1, 2009 at 15:42 Comment(3)
Please don't. We like to pretend that -n and -p are nice semantic things, not just some clever string pasting ;)Virgel
Yes please, do explain :) (I'm not a Perl guy.)Pickett
Try running "perl -MO=Deparse -lpe '$c+=$_}{$_=$c'" and looking at the output, basically -l uses newlines and both input and output separators, and -p prints each line. But in order to do '-p', perl first adds some boiler plate (which -MO=Deparse) will show you, but then it just substitutes and compiles. You can thus cause an extra block to be inserted with the '}{' part and trick it into not printing on each line, but print at the very end.Sandry
12
sed 's/^/.+/' infile | bc | tail -1
Encompass answered 16/1, 2009 at 15:42 Comment(0)
11

The following works in bash:

I=0

for N in `cat numbers.txt`
do
    I=`expr $I + $N`
done

echo $I
Pearce answered 16/1, 2009 at 15:42 Comment(2)
Command expansion should be used with caution when files can be arbitrarily large. With numbers.txt of 10MB, the cat numbers.txt step would be problematic.Ankylosaur
Indeed, however (if not for the better solutions found here) I would use this one until I actually encountered that problem.Pearce
8

Pure bash and in a one-liner :-)

$ cat numbers.txt
1
2
3
4
5
6
7
8
9
10


$ I=0; for N in $(cat numbers.txt); do I=$(($I + $N)); done; echo $I
55
Prying answered 16/1, 2009 at 15:42 Comment(2)
Why are there two (( parenthesis ))?Suggest
Not really pure bash due to cat. make it pure bash by replacing cat with $(< numbers.txt)Subsocial
6

Here's a nice and clean Raku (formerly known as Perl 6) one-liner:

say [+] slurp.lines

We can use it like so:

% seq 10 | raku -e "say [+] slurp.lines"
55

It works like this:

slurp without any arguments reads from standard input by default; it returns a string. Calling the lines method on a string returns a list of lines of the string.

The brackets around + turn + into a reduction meta operator which reduces the list to a single value: the sum of the values in the list. say then prints it to standard output with a newline.

One thing to note is that we never explicitly convert the lines to numbers—Raku is smart enough to do that for us. However, this means our code breaks on input that definitely isn't a number:

% echo "1\n2\nnot a number" | raku -e "say [+] slurp.lines"
Cannot convert string to number: base-10 number must begin with valid digits or '.' in '⏏not a number' (indicated by ⏏)
  in block <unit> at -e line 1
Dorris answered 16/1, 2009 at 15:42 Comment(3)
say [+] lines is actually enough :-)Preparedness
@ElizabethMattijsen: cool! how does that work?Dorris
lines without any arguments has the same semantics as slurp without any arguments, but it produces a Seq of Str rather than a single Str.Preparedness
6

For Ruby Lovers

ruby -e "puts ARGF.map(&:to_i).inject(&:+)" numbers.txt
Quadrinomial answered 16/1, 2009 at 15:42 Comment(0)
6

Alternative pure Perl, fairly readable, no packages or options required:

perl -e "map {$x += $_} <> and print $x" < infile.txt
Savina answered 16/1, 2009 at 15:42 Comment(2)
or a tiny bit shorter: perl -e 'map {$x += $_} <>; print $x' infile.txtSardine
Memory required is almost 2GB for a large input of 10 million numbersKacey
4

C (not simplified)

seq 1 10 | tcc -run <(cat << EOF
#include <stdio.h>
int main(int argc, char** argv) {
    int sum = 0;
    int i = 0;
    while(scanf("%d", &i) == 1) {
        sum = sum + i;
    }
    printf("%d\n", sum);
    return 0;
}
EOF
)
Massasoit answered 16/1, 2009 at 15:42 Comment(3)
I had to upvote the comment. There's nothing wrong with the answer - it's quite good. However, to show that the comment makes the answer awesome, I'm just upvoting the comment.Marteena
impressive, this has the same performance as the gcc-compiled C versionGrider
nitpick: int should be long. there should be a newline between EOF and )Grider
I
4

The following should work (assuming your number is the second field on each line).

awk 'BEGIN {sum=0} \
 {sum=sum + $2} \
END {print "tot:", sum}' Yourinputfile.txt
Iatrochemistry answered 16/1, 2009 at 15:42 Comment(1)
You don't really need the {sum=0} partRadium
T
4

You can do it in python, if you feel comfortable:

Not tested, just typed:

out = open("filename").read()
lines = out.split()
ints = map(int, lines)
s = sum(ints)
print s

Sebastian pointed out a one liner script:

cat filename | python -c"from fileinput import input; print sum(map(int, input()))"
Titanomachy answered 16/1, 2009 at 15:42 Comment(5)
python -c"from fileinput import input; print sum(map(int, input()))" numbers.txtHohenzollern
cat is overused, redirect stdin from file: python -c "..." < numbers.txtAnkylosaur
@rjack: cat is used to demonstrate that script works both for stdin and for files in argv[] (like while(<>) in Perl). If your input is in a file then '<' is unnecessary.Hohenzollern
But < numbers.txt demonstrates that it works on stdin just as well as cat numbers.txt | does. And it doesn't teach bad habits.Prase
@XiongChiamiov If you care so much about habits, using the notation command < file is a bad habit itself. Use < file command instead. Bash one-liners should be easy to read left-to-right from input to output.Kraut
T
3

My version:

seq -5 10 | xargs printf "- - %s" | xargs  | bc
Thrust answered 16/1, 2009 at 15:42 Comment(1)
Shorter: seq -s+ -5 10 | bcWellworn
J
3

One-liner in Racket:

racket -e '(define (g) (define i (read)) (if (eof-object? i) empty (cons i (g)))) (foldr + 0 (g))' < numlist.txt
Johannajohannah answered 16/1, 2009 at 15:42 Comment(1)
this is slow, 25x slower than the C versionGrider
E
3
$ cat n
2
4
2
7
8
9
$ perl -MList::Util -le 'print List::Util::sum(<>)' < n
32

Or, you can type in the numbers on the command line:

$ perl -MList::Util -le 'print List::Util::sum(<>)'
1
3
5
^D
9

However, this one slurps the file so it is not a good idea to use on large files. See j_random_hacker's answer which avoids slurping.

Entomophagous answered 16/1, 2009 at 15:42 Comment(0)
D
2

Real-time summing to let you monitor progress of some number-crunching task.

$ cat numbers.txt 
1
2
3
4
5
6
7
8
9
10

$ cat numbers.txt | while read new; do total=$(($total + $new)); echo $total; done
1
3
6
10
15
21
28
36
45
55

(There is no need to initialize $total to zero here. Note, however, that you cannot access $total after the loop finishes: the pipe runs the while loop in a subshell, and the variable disappears with it.)
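If you do need the total after the loop, one common workaround (a sketch of mine, not part of the original answer) is to redirect the input into the loop instead of piping, so the loop runs in the current shell:

```shell
# A here-doc stands in for numbers.txt; redirecting input into the
# loop keeps it out of a subshell, so $total survives the loop.
total=0
while read -r new; do
  total=$((total + new))
done <<EOF
1
2
3
EOF
echo "$total"   # prints 6
```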

Decolorant answered 16/1, 2009 at 15:42 Comment(0)
F
2

Apologies in advance for the readability of the backticks ("`"), but they work in shells other than bash and are thus more pasteable. If your shell accepts it, the $(command ...) format is much more readable (and thus easier to debug) than `command ...`, so feel free to modify for your sanity.

I have a simple function in my bashrc that will use awk to calculate a number of simple math items

calc(){
  awk 'BEGIN{print '"$@"' }'
}

This will do +, -, *, /, ^, %, sqrt, sin, cos, parentheses ... (and more, depending on your version of awk) ... you could even get fancy with printf and format floating-point output, but this is all I normally need
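For instance, with the calc function loaded, a session might look like this (the expressions are my own examples; the function body is repeated so the sketch is self-contained):

```shell
# calc hands its arguments to awk as a BEGIN-block expression
calc(){
  awk 'BEGIN{print '"$@"' }'
}

calc '2^10 + sqrt(9)'   # prints 1027
calc '3 * (4 + 5)'      # prints 27
```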

for this particular question, I would simply do this for each line:

calc `echo "$@"|tr " " "+"`

so the code block to sum each line would look something like this:

while read LINE || [ "$LINE" ]; do
  calc `echo "$LINE"|tr " " "+"` #you may want to filter out some lines with a case statement here
done

That's if you wanted to sum them only line by line. However, for a total of every number in the datafile:

VARS=`<datafile`
calc `echo $VARS | tr ' ' '+'`

(The unquoted echo flattens the newlines into spaces, which tr then turns into + signs.)

btw if I need to do something quick on the desktop, I use this:

xcalc() { 
  A=`calc "$@"`
  A=`Xdialog --stdout --inputbox "Simple calculator" 0 0 $A`
  [ $A ] && xcalc $A
}
Forbear answered 16/1, 2009 at 15:42 Comment(1)
What kind of ancient shell are you using that doesn't support $()?Beatrisbeatrisa
Z
2

C++ (simplified):

echo {1..10} | scc 'WRL n+=$0; n'

SCC project - http://volnitsky.com/project/scc/

SCC is C++ snippets evaluator at shell prompt

Zamudio answered 16/1, 2009 at 15:42 Comment(1)
link is gone. this works: github.com/lvv/sccGrider
C
1

UPDATED BENCHMARKS

So I synthetically generated 100 million integers, randomly distributed between 0^0 - 1 and 8^8 - 1.

GENERATOR CODE

mawk2 '
BEGIN {
     __=_=((_+=_^=_<_)+(__=_*_*_))^(___=__)
     srand()
     ___^=___
     do  { 
           print int(rand()*___) 
  
     } while(--_)  }' | pvE9 > test_large_int_100mil_001.txt

     out9:  795MiB 0:00:11 [69.0MiB/s] [69.0MiB/s] [ <=> ]

  f='test_large_int_100mil_001.txt'
  wc5 < "${f}"

    rows = 100000000. | UTF8 chars = 833771780. | bytes = 833771780.

Odd / Even distribution of Last Digit

Odd  49,992,332
Even 50,007,668

AWK - Fastest, by a good margin (maybe C is faster I dunno)

in0:  795MiB 0:00:07 [ 103MiB/s] [ 103MiB/s] [============>] 100%            
( pvE 0.1 in0 < "${f}" | mawk2 '{ _+=$__ } END { print _ }'; )  

 7.64s user 0.35s system 103% cpu 7.727 total
     1  838885279378716

Perl - Quite Decent

 in0:  795MiB 0:00:10 [77.6MiB/s] [77.6MiB/s] [==============>] 100%            
( pvE 0.1 in0 < "${f}" | perl -lne '$x += $_; END { print $x; }'; )  
 
10.16s user 0.37s system 102% cpu 10.268 total

     1  838885279378716

Python3 - Slightly behind Perl

 in0:  795MiB 0:00:11 [71.5MiB/s] [71.5MiB/s] [===========>] 100%            
( pvE 0.1 in0 < "${f}" | python3 -c ; )  

 11.00s user 0.43s system 102% cpu 11.140 total
     1  838885279378716

RUBY - Decent

 in0:  795MiB 0:00:13 [61.0MiB/s] [61.0MiB/s] [===========>] 100%            
( pvE 0.1 in0 < "${f}" | ruby -e 'puts ARGF.map(&:to_i).inject(&:+)'; )  
15.30s user 0.70s system 101% cpu 15.757 total

     1  838885279378716

JQ - Slow

 in0:  795MiB 0:00:25 [31.1MiB/s] [31.1MiB/s] [========>] 100%            
( pvE 0.1 in0 < "${f}" | jq -s 'add'; )  

 36.95s user 1.09s system 100% cpu 37.840 total

     1  838885279378716

DC

- (had to kill it after minutes with no response)
Croteau answered 16/1, 2009 at 15:42 Comment(0)
A
1

Just for completeness, there is also an R solution

seq 1 10 | R -q -e "f <- file('stdin'); open(f); cat(sum(as.numeric(readLines(f))))"
Argyll answered 16/1, 2009 at 15:42 Comment(0)
H
1

You can use your preferred 'expr' command you just need to finagle the input a little first:

seq 10 | tr '[\n]' '+' | sed -e 's/+/ + /g' -e's/ + $/\n/' | xargs expr

The process is:

  • "tr" replaces the newline characters with a + symbol,
  • sed pads the '+' with spaces on each side, and then strips the final + from the line
  • xargs inserts the piped input into the command line for expr to consume.
Holbrooke answered 16/1, 2009 at 15:42 Comment(0)
T
1

You can do it with Alacon - command-line utility for Alasql database.

It works with Node.js, so you need to install Node.js and then Alasql package:

To calculate sum from stdin you can use the following command:

> cat data.txt | node alacon "SELECT VALUE SUM([0]) FROM TXT()" >b.txt
Tisza answered 16/1, 2009 at 15:42 Comment(0)
S
1

A Lua interpreter is present on all Fedora-based systems (Fedora, RHEL, CentOS, Korora, etc.), because it is embedded in rpm, the package manager's own package, i.e. rpm-lua. So if you want to learn Lua, this kind of problem is ideal (and you get the job done as well).

cat filename | lua -e "sum = 0; for i in io.lines() do sum = sum + i end print(sum)"

and it works. Lua is verbose, though; you might have to endure some repetitive-strain injury to your fingers :)

Separatrix answered 16/1, 2009 at 15:42 Comment(0)
C
0

If the numbers that require summing up happen to fall within a contiguous range, you can use the old closed-form trick

   f(n) := n x ( n + 1 ) / 2

and compute

  f( high-side ) - f( low-side - 1 )

As fast as awk is, a direct computation is still much faster than summation.

The caveat: without a big-int library, this approach is constrained by 64-bit floating-point precision:

    ( time ( jot - 19495729 21895729 | pvE0 |

    mawk2 '{ __+=$_ } END { print __ }' )) 
    
    sleep 0.5

   ( time ( echo '19495729 21895729' | pvE0 | 

    mawk2 '
    function __(___, _) {
         return \
         _ * ((_ + (_^=_<_)) * (\
             _/= _+_)) - ___ * _ * --___ 
    }
    BEGIN {
        CONVFMT = "%.250g"
        OFS = ORS } ($!NF = __($1, $2))^_'  )) 

      in0: 20.6MiB 0:00:00 [49.4MiB/s] [49.4MiB/s] [ <=>]
  ( jot - 19495729 21895729 | pvE 0.1 in0 | mawk2 '{ __+=$_ } END { print __ }')

     0.59s user 0.03s system 141% cpu 0.440 total
     1  49669770295729 0x2D2CA503B9B1


      in0: 18.0 B 0:00:00 [ 462KiB/s] [ 462KiB/s] [<=>]
  ( echo '19495729 21895729' | pvE 0.1 in0 | mawk2 ; )

     0.00s user 0.01s system 62% cpu 0.024 total
     1  49669770295729 0x2D2CA503B9B1

** the code hoists the / 2 out of the high-side term so the low-side term can be computed as * 0.5

Croteau answered 16/1, 2009 at 15:42 Comment(0)
F
0

Miller tool(s) are definitely an overkill for this task, but 1) let's have them here for completeness' sake; 2) they might come in handy for further processing:

 % seq 10 | mlr stats1 -a sum -f 1
1_sum=55
Figurative answered 16/1, 2009 at 15:42 Comment(0)
C
0

The beauty of awk is that, from a single stream of integers produced by jot, it can simultaneously generate multiple concurrent (and possibly cross-interacting) sequences with barely any code at all :

jot - -10 399 | 

mawk2 '__+=($++NF+=__+=-($++NF+=(--$!_)*9^9-1)+($!_^=2))' CONVFMT='%.20g'

121     4261625501                  -4261625380
100     12397455993                 -3874204891
81      28281696469                 -3486784402
64      59662756915                 -3099363913
49      122037457303                -2711943424
36      246399437577                -2324522935
25      494735977625                -1937102446
16      991021637223                -1549681957
9       1983205535923               -1162261468
4       3967185912829               -774840979
1       7934759246149               -387420490
0       15869518492299              -1
1       31738649564111              387420488
4       63476524287249              774840977
9       126951886313041             1162261466
16      253902222944143             1549681955
25      507802508785867             1937102444
36      1015602693048837            2324522933
49      2031202674154301            2711943422
64      4062402248944755            3099363911
81      8124801011105191            3486784400

This is a less commonly known feature, but mawk-1 can directly generate formatted output without using printf() or sprintf() :

 jot - -11111111555359 900729999999999 49987777777556 | 
 
 mawk '$++NF=_+=$!__' CONVFMT='%+\047\043 30.f' OFS='\t' 

-11111111555359           -11,111,111,555,359.
38876666222197            +27,765,554,666,838.
88864443999753           +116,629,998,666,591.
138852221777309          +255,482,220,443,900.

188839999554865          +444,322,219,998,765.
238827777332421          +683,149,997,331,186.
288815555109977          +971,965,552,441,163.

338803332887533        +1,310,768,885,328,696.
388791110665089        +1,699,559,995,993,785.
438778888442645        +2,138,338,884,436,430.
488766666220201        +2,627,105,550,656,631.

538754443997757        +3,165,859,994,654,388.
588742221775313        +3,754,602,216,429,701.
638729999552869        +4,393,332,215,982,570.
688717777330425        +5,082,049,993,312,995.

738705555107981        +5,820,755,548,420,976.
788693332885537        +6,609,448,881,306,513.
838681110663093        +7,448,129,991,969,606.
888668888440649        +8,336,798,880,410,255.

With nawk, one even more obscure feature is being able to print out the exact IEEE 754 double precision floating point hex :

 jot - .00001591111137777 \
       9007299999.1111111111 123.990333333328 | 

nawk '$++NF=_+=_+= cos(exp(log($!__)/1.1))' CONVFMT='[ %20.13p ]' OFS='\t' \_=1 

0.00001591111137777     [   0x400fffffffbf27f8 ]
123.99034924443937200   [   0x401f1a2498670bcc ]
247.98068257776736800   [   0x40313bd908775e35 ]
371.97101591109537821   [   0x4040516a505a57a3 ]
495.96134924442338843   [   0x4050b807540a1c3a ]

619.95168257775139864   [   0x4060f800d1abb906 ]
743.94201591107935201   [   0x407112ffc8adec4a ]
867.93234924440730538   [   0x40810bab4a485ad9 ]
991.92268257773525875   [   0x4091089e1149c279 ]

1115.91301591106321212  [   0x40a10ac8cfb09c62 ]
1239.90334924439116548  [   0x40b10a7bfa7fa42d ]
1363.89368257771911885  [   0x40c109c2d1b9947c ]
1487.88401591104707222  [   0x40d10a2644d5ab3b ]

gawk w/ GMP is even more interesting - it is willing to provide comma-formatted hex on your behalf, though it strangely pads extra commas into the empty space to the left

=

jot -  .000591111137777 90079.1111111111 123.990333333328 | 

gawk -v PREC=20000 -nMbe '
              $++NF  = _ +=(15^16 * log($!__)/log(sqrt(10)))' \
              CONVFMT='< 0x %\04724.12x >' OFS=' | '   \_=1 

# rows skipped in the middle for illustration clarity
 
4339.662257777619743 | < 0x    ,   ,4e6,007,2f4,08a,b93,8b3 >
4463.652591110947469 | < 0x    ,   ,50f,967,27f,e5a,963,518 >
4835.623591110930647 | < 0x    ,   ,58d,250,b65,a8d,45d,b79 >
7315.430257777485167 | < 0x    ,   ,8eb,b36,ee9,fe6,149,da5 >
11779.082257777283303 | < 0x    ,   ,f4b,c34,a75,82a,826,abb >

12151.053257777266481 | < 0x    ,   ,fd7,3c2,25e,1ab,a09,bbf >
16738.695591110394162 | < 0x    ,  1,6b0,f3b,350,ed3,eca,c58 >
17978.598924443671422 | < 0x    ,  1,894,2f2,aba,a30,f63,bae >
20458.405591110225942 | < 0x    ,  1,c64,a40,87e,e35,4d4,896 >
23434.173591110091365 | < 0x    ,  2,108,186,96e,0dc,2ef,d46 >

31741.525924443049007 | < 0x    ,  2,e45,bae,b73,24f,981,637 >
32857.438924442998541 | < 0x    ,  3,014,3a7,b9e,daf,18c,c3e >
33849.361591109620349 | < 0x    ,  3,1b0,9b7,5f1,536,49c,74e >
41536.762257775939361 | < 0x    ,  3,e51,7c1,9b2,e74,516,220 >
45876.423924442409771 | < 0x    ,  4,58c,52d,078,edb,db4,4ba >

53067.863257775417878 | < 0x    ,  5,1aa,cf3,eed,33c,638,456 >
59391.370257775131904 | < 0x    ,  5,c73,38a,54d,b41,98d,a02 >
61127.234924441720068 | < 0x    ,  5,f6d,ce2,c40,117,6d2,6e7 >
66830.790257774875499 | < 0x    ,  6,944,fe1,378,9ea,235,7b0 >
71170.451924441600568 | < 0x    ,  7,0ce,de6,797,df3,009,35d >

76254.055591108335648 | < 0x    ,  7,9b0,f6d,03d,878,edf,97d >
83073.523924441760755 | < 0x    ,  8,5b0,aa9,7f7,a31,89a,f2e >
86669.243591108475812 | < 0x    ,  8,c0d,678,fa3,3b1,aad,f26 >
89149.050257775175851 | < 0x    ,  9,074,278,19d,4c7,443,a00 >
89769.001924441850861 | < 0x    ,  9,18e,464,ff9,0eb,ee4,4e1 >

but be wary of syntax errors -

  • this is only a selection of what gets printed to STDOUT,
  • all 256 byte values have been observed in the output, even when it is going straight to a terminal window

=

   jot 3000 | 
   gawk -Me ' _=$++NF=____+=$++NF=___-= $++NF=__+=$++NF=\
             _^= exp(cos($++NF=______+=($1) %10 + 1))'   \
                                  ____=-111111089 OFMT='%32c`' 


 char >>[  --[ U+ 2 | 2 (ASCII) freq >>[ 8 sumtotal >>[ 45151 
 char >>[  --[ U+ 4 | 4 (ASCII) freq >>[ 11 sumtotal >>[ 45166 
 char >>[  --[ U+ 14 | 20 (ASCII) freq >>[ 9 sumtotal >>[ 45301 
 char >>[ + --[ U+ 2B | 43 (ASCII) freq >>[ 9 sumtotal >>[ 60645 
 char >>[ --[ U+ 9 | 9 (ASCII) freq >>[ 12 sumtotal >>[ 45216 
 char >>[ 8 --[ U+ 38 | 56 (ASCII) freq >>[ 1682 sumtotal >>[ 82522 
 char >>[ Q --[ U+ 51 | 81 (ASCII) freq >>[ 6 sumtotal >>[ 85040 
 char >>[ Y --[ U+ 59 | 89 (ASCII) freq >>[ 8 sumtotal >>[ 85105 
 char >>[ g --[ U+ 67 | 103 (ASCII) freq >>[ 10 sumtotal >>[ 85212 
 char >>[ p --[ U+ 70 | 112 (ASCII) freq >>[ 7 sumtotal >>[ 85411 
 char >>[ v --[ U+ 76 | 118 (ASCII) freq >>[ 7 sumtotal >>[ 85462 
 char >>[ ? --[ \216 \x8E | 142 (8-bit byte) freq >>[ 15 sumtotal >>[ 85653 
 char >>[ ? --[ \222 \x92 | 146 (8-bit byte) freq >>[ 13 sumtotal >>[ 85698 
 char >>[ ? --[ \250 \xA8 | 168 (8-bit byte) freq >>[ 9 sumtotal >>[ 85967 
 char >>[ ? --[ \307 \xC7 | 199 (8-bit byte) freq >>[ 7 sumtotal >>[ 86345 
 char >>[ ? --[ \332 \xDA | 218 (8-bit byte) freq >>[ 69 sumtotal >>[ 86576 
 char >>[ ? --[ \352 \xEA | 234 (8-bit byte) freq >>[ 6 sumtotal >>[ 86702 
 char >>[ ? --[ \354 \xEC | 236 (8-bit byte) freq >>[ 5 sumtotal >>[ 86713 
 char >>[ ? --[ \372 \xFA | 250 (8-bit byte) freq >>[ 11 sumtotal >>[ 86823 
 char >>[ ? --[ \376 \xFE | 254 (8-bit byte) freq >>[ 9 sumtotal >>[ 86859
Croteau answered 16/1, 2009 at 15:42 Comment(0)
M
0

Using env variable tmp

tmp=`awk -v tmp="$tmp" '{print $tmp" "$1}' <filename>|echo $tmp|sed "s/ /+/g"|bc`

tmp=`cat <filename>|awk -v tmp="$tmp" '{print $tmp" "$1}'|echo $tmp|sed "s/ /+/g"|bc`

Thanks.

Maricela answered 16/1, 2009 at 15:42 Comment(0)
R
0

One-liner in Rebol:

rebol -q --do 's: 0 while [d: input] [s: s + to-integer d] print s' < infile.txt

Unfortunately the above doesn't work in Rebol 3 just yet (INPUT doesn't stream STDIN).

So here's an interim solution which also works in Rebol 3:

rebol -q --do 's: 0 foreach n to-block read %infile.txt [s: s + n] print s'
Robles answered 16/1, 2009 at 15:42 Comment(0)
N
0

...and the PHP version, just for the sake of completeness

cat /file/with/numbers | php -r '$s = 0; while (true) { $e = fgets(STDIN); if (false === $e) break; $s += $e; } echo $s;'
Nitrosyl answered 16/1, 2009 at 15:42 Comment(1)
Can be shorter: seq 1 10 | php -r ' echo array_sum(file("php://stdin")) . PHP_EOL; 'Firebreak
B
0
#include <iostream>

int main()
{
    double x = 0, total = 0;
    while (std::cin >> x)
        total += x;
    if (!std::cin.eof())
        return 1;
    std::cout << total << '\n';
}
Benedix answered 16/1, 2009 at 15:42 Comment(0)
G
0

One simple solution would be to write a program to do it for you. This could probably be done pretty quickly in Python, something like:

sum = 0
file = open("numbers.txt","r")
for line in file.readlines(): sum+=int(line)
file.close()
print sum

I haven't tested that code, but it looks right. Just change numbers.txt to the name of the file, save the code to a file called sum.py, and in the console type in "python sum.py"

Gratiana answered 16/1, 2009 at 15:42 Comment(1)
calling readlines() reads the entire file into memory - using 'for line in file' could be betterLornalorne
T
-1

Simple php

  cat numbers.txt | php -r "echo array_sum(explode(PHP_EOL, stream_get_contents(STDIN)));"
Tamikatamiko answered 16/1, 2009 at 15:42 Comment(0)
C
-6

Ok, here is how to do it in PowerShell (PowerShell core, should work on Windows, Linux and Mac)

Get-Content aaa.dat | Measure-Object -Sum
Chang answered 16/1, 2009 at 15:42 Comment(1)
This question is tagged [[shell]]: "Without a specific tag, a portable (POSIX-compliant) solution should be assumed" not PowerShellPotbellied
