Summing multiple columns in Spark

How can I sum multiple columns in Spark? For example, in SparkR the following code works to get the sum of one column, but if I try to get the sum of both columns in df, I get an error.

# Create SparkDataFrame
df <- createDataFrame(faithful)

# Use agg to sum total waiting times
head(agg(df, totalWaiting = sum(df$waiting)))
##This works

# Use agg to sum total of waiting and eruptions
head(agg(df, total = sum(df$waiting, df$eruptions)))
##This doesn't work

Either SparkR or PySpark code will work.

Abrogate answered 12/6, 2017 at 14:35 Comment(0)

For PySpark, if you don't want to explicitly type out the columns:

from operator import add
from functools import reduce
from pyspark.sql import functions as F

new_df = df.withColumn('total', reduce(add, [F.col(x) for x in numeric_col_list]))
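A hedged usage sketch of the above; the sample data and the contents of numeric_col_list here are assumptions for illustration, not part of the original answer:

from functools import reduce
from operator import add
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical data; any set of numeric columns works the same way.
df = spark.createDataFrame([(1, 10, 100), (2, 20, 200)], ["a", "b", "c"])
numeric_col_list = ["a", "b", "c"]

# reduce(add, ...) folds the Column objects together with the + operator.
new_df = df.withColumn("total", reduce(add, [F.col(x) for x in numeric_col_list]))
new_df.show()
# +---+---+---+-----+
# |  a|  b|  c|total|
# +---+---+---+-----+
# |  1| 10|100|  111|
# |  2| 20|200|  221|
# +---+---+---+-----+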
Faena answered 31/1, 2018 at 17:48 Comment(3)
Why is this tool not in the Spark API?Irvine
That is a useful technique, and surely will help many people who google this question, but not what the original question asked about :) (it asked about an aggregation, not a row operation)Temblor
The original question was confusing aggregation (summing rows) with calculated fields (in this case summing columns).Modify
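To make the distinction drawn in these comments concrete, here is a small hedged sketch (the data is hypothetical) contrasting the two operations:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 10), (2, 20)], ["x", "y"])

# Row operation: a new 'total' value per row (what this answer computes).
df.withColumn("total", F.col("x") + F.col("y")).show()

# Aggregation: one summed value per column over all rows (what the question asked).
df.agg(F.sum("x"), F.sum("y")).show()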

You can do something like the following in PySpark:

>>> from pyspark.sql import functions as F
>>> df = spark.createDataFrame([("a",1,10), ("b",2,20), ("c",3,30), ("d",4,40)], ["col1", "col2", "col3"])
>>> df.groupBy("col1").agg(F.sum(df.col2+df.col3)).show()
+----+------------------+
|col1|sum((col2 + col3))|
+----+------------------+
|   d|                44|
|   c|                33|
|   b|                22|
|   a|                11|
+----+------------------+
Practiced answered 12/6, 2017 at 14:46 Comment(1)
Yes but... 1 + NULL = NULL.Palaeobotany
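As this comment warns, adding columns propagates NULLs row-wise. One hedged workaround (the data below is hypothetical) is to coalesce each column to 0 before adding:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a", 1, None), ("b", 2, 20)], ["col1", "col2", "col3"])

# coalesce treats NULL as 0, so one missing value doesn't null out the whole sum.
safe = F.coalesce(F.col("col2"), F.lit(0)) + F.coalesce(F.col("col3"), F.lit(0))
df.groupBy("col1").agg(F.sum(safe)).show()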
org.apache.spark.sql.functions.sum(Column e)

Aggregate function: returns the sum of all values in the expression.

As you can see, sum takes just one column as input, so sum(df$waiting, df$eruptions) won't work. Since you want to sum the numeric fields together, you can do sum(df("waiting") + df("eruptions")). If you want to sum the values of each column individually, you can do df.agg(sum(df("waiting")), sum(df("eruptions"))).show (Scala syntax; the PySpark sketch below shows both forms).
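A hedged PySpark rendering of both options; the stand-in data below is an assumption, mirroring the two numeric columns of the question's faithful dataset:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical stand-in for the question's 'faithful' data.
df = spark.createDataFrame([(3.6, 79.0), (1.8, 54.0)], ["eruptions", "waiting"])

# One combined total: (waiting + eruptions) computed per row, then summed.
df.agg(F.sum(F.col("waiting") + F.col("eruptions"))).show()

# Separate totals, one aggregate per column.
df.agg(F.sum("waiting"), F.sum("eruptions")).show()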

Negrete answered 12/6, 2017 at 15:1 Comment(3)
For me, this one worked df.withColumn("newCol", col("col1")+col("col2"))Sovran
@Sovran yes that is also an alternative.Negrete
The original question as I understood it is about aggregation: summing columns "vertically" (for each column, sum all the rows), not a row operation: summing rows "horizontally" (for each row, sum the values in columns on that row).Temblor

You can use expr():

import pyspark.sql.functions as f

numeric_cols = ['col_a','col_b','col_c']
df = df.withColumn('total', f.expr('+'.join(numeric_cols)))

PySpark's expr() parses and executes SQL-like expression strings.
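As a small self-contained sketch (the sample row is an assumption for illustration): '+'.join(numeric_cols) simply builds the SQL string 'col_a+col_b+col_c', which expr() turns into a column:

import pyspark.sql.functions as f
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical data matching the column names above.
df = spark.createDataFrame([(1, 2, 3)], ["col_a", "col_b", "col_c"])
numeric_cols = ['col_a', 'col_b', 'col_c']

# expr('col_a+col_b+col_c') evaluates the SQL expression per row.
df = df.withColumn('total', f.expr('+'.join(numeric_cols)))
df.show()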

Gamut answered 1/9, 2022 at 21:23 Comment(1)
it should be df = df.withColumn('total', f.expr('+'.join(numeric_cols)))Dorrie

SparkR code:

library(SparkR)
df <- createDataFrame(sqlContext,faithful)
w <- list(agg(df, sum(df$waiting)), agg(df, sum(df$eruptions)))
head(w[[1]])
head(w[[2]])
Manslaughter answered 13/6, 2017 at 10:10 Comment(0)

The accepted answer was helpful for me, but I found the one below simpler, and it doesn't need reduce or operator.add.

from pyspark.sql.functions import col, lit

sum_df = df.withColumn('total', lit(0))
for c in col_list:
    sum_df = sum_df.withColumn('total', col('total') + col(c))
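A runnable sketch of this loop; the data and col_list values are assumptions for illustration:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lit

spark = SparkSession.builder.getOrCreate()

# Hypothetical data and column list.
df = spark.createDataFrame([(1, 10), (2, 20)], ["a", "b"])
col_list = ["a", "b"]

# Start from a zero column, then fold each column into the running total.
sum_df = df.withColumn('total', lit(0))
for c in col_list:
    sum_df = sum_df.withColumn('total', col('total') + col(c))
sum_df.show()  # totals: 11 and 22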
Took answered 12/4, 2023 at 17:56 Comment(0)
