PySpark: replace a value in several columns at once

I want to replace a value in a DataFrame column with another value, and I have to do it for many columns (let's say 30 to 100 columns).

I've gone through this and this already.

from pyspark.sql.functions import when, lit, col

df = sc.parallelize([(1, "foo", "val"), (2, "bar", "baz"), (3, "baz", "buz")]).toDF(["x", "y", "z"])
df.show()
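
For reference, this first df.show() prints:

+---+---+---+
|  x|  y|  z|
+---+---+---+
|  1|foo|val|
|  2|bar|baz|
|  3|baz|buz|
+---+---+---+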

# I can replace "baz" with Null separately in columns y and z
def replace(column, value):
    return when(column != value, column).otherwise(lit(None))

df = df.withColumn("y", replace(col("y"), "baz"))\
    .withColumn("z", replace(col("z"), "baz"))
df.show()    

+---+----+----+
|  x|   y|   z|
+---+----+----+
|  1| foo| val|
|  2| bar|null|
|  3|null| buz|
+---+----+----+

I can replace "baz" with Null separately in columns y and z. But I want to do it for all the relevant columns at once, something like the list comprehension below:

# loop variable renamed to c so it doesn't shadow the imported col
[replace(df[c], "baz") for c in df.columns]
Ytterbia answered 12/4, 2019 at 2:37

Since there are on the order of 30 to 100 columns, let's add a few more columns to the DataFrame to generalize the example.

# Loading the requisite packages
from pyspark.sql.functions import col, when
df = sc.parallelize([(1,"foo","val","baz","gun","can","baz","buz","oof"), 
                     (2,"bar","baz","baz","baz","got","pet","stu","got"), 
                     (3,"baz","buz","pun","iam","you","omg","sic","baz")]).toDF(["x","y","z","a","b","c","d","e","f"])
df.show()
+---+---+---+---+---+---+---+---+---+
|  x|  y|  z|  a|  b|  c|  d|  e|  f|
+---+---+---+---+---+---+---+---+---+
|  1|foo|val|baz|gun|can|baz|buz|oof|
|  2|bar|baz|baz|baz|got|pet|stu|got|
|  3|baz|buz|pun|iam|you|omg|sic|baz|
+---+---+---+---+---+---+---+---+---+

Let's say we want to replace baz with Null in all the columns except columns x and a. Use a list comprehension to choose the columns where the replacement has to be done.

# This contains the list of columns where we apply the replace() function
all_column_names = df.columns
print(all_column_names)
# ['x', 'y', 'z', 'a', 'b', 'c', 'd', 'e', 'f']
columns_to_remove = ['x', 'a']
columns_for_replacement = [i for i in all_column_names if i not in columns_to_remove]
print(columns_for_replacement)
# ['y', 'z', 'b', 'c', 'd', 'e', 'f']

Finally, do the replacement using when(), which is effectively an if-else expression (SQL's CASE WHEN).

# Doing the replacement on all the requisite columns
for i in columns_for_replacement:
    df = df.withColumn(i,when((col(i)=='baz'),None).otherwise(col(i)))
df.show()
+---+----+----+---+----+---+----+---+----+
|  x|   y|   z|  a|   b|  c|   d|  e|   f|
+---+----+----+---+----+---+----+---+----+
|  1| foo| val|baz| gun|can|null|buz| oof|
|  2| bar|null|baz|null|got| pet|stu| got|
|  3|null| buz|pun| iam|you| omg|sic|null|
+---+----+----+---+----+---+----+---+----+
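
For intuition, the when(...).otherwise(...) chain built in the loop is the programmatic form of a SQL CASE WHEN. A minimal, illustrative equivalent for a single column, written with expr() (column name y matches the example above):

# illustrative only: the same expression as when/otherwise for column y
from pyspark.sql.functions import expr
df.withColumn("y", expr("CASE WHEN y = 'baz' THEN NULL ELSE y END"))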

There is no need to create a UDF and define a function to do the replacement when it can be done with a normal if-else expression like this. UDFs are in general a costly operation and should be avoided whenever possible.
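
As a side note, the same replacement can also be written as a single select() instead of a loop of withColumn() calls, which keeps everything in one projection. A sketch reusing all_column_names and columns_for_replacement from above:

# sketch: one select() projecting every column, replacing only where needed
df = df.select([when(col(c) == 'baz', None).otherwise(col(c)).alias(c)
                if c in columns_for_replacement else col(c)
                for c in all_column_names])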

Tiercel answered 12/4, 2019 at 8:37

Use a reduce() function:

from functools import reduce
from pyspark.sql.functions import col

# replace() is the helper defined in the question; df, the first element of the
# iterable, serves as reduce()'s initial accumulator, and 'y', 'z' are folded in
reduce(lambda d, c: d.withColumn(c, replace(col(c), "baz")), [df, 'y', 'z']).show()
#+---+----+----+
#|  x|   y|   z|
#+---+----+----+
#|  1| foo| val|
#|  2| bar|null|
#|  3|null| buz|
#+---+----+----+
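
To scale the same fold to many columns, reduce() also accepts an explicit initializer, so df no longer needs to ride along in the iterable. A sketch assuming the replace() helper from the question:

# fold over every column except x, with df as the initial accumulator
cols_to_fix = [c for c in df.columns if c != 'x']
reduce(lambda d, c: d.withColumn(c, replace(col(c), "baz")), cols_to_fix, df).show()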
Outgo answered 12/4, 2019 at 3:2

You can use select and a list comprehension:

import pyspark.sql.functions as f

# replace() is the helper defined in the question; column x is kept unchanged
df = df.select([replace(f.col(column), 'baz').alias(column) if column != 'x' else f.col(column)
                for column in df.columns])
df.show()
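
On the three-column df from the question, this prints the same result as the reduce() approach:

+---+----+----+
|  x|   y|   z|
+---+----+----+
|  1| foo| val|
|  2| bar|null|
|  3|null| buz|
+---+----+----+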
Humberto answered 12/4, 2019 at 8:22
where did you import this replace function from? – Scullery
it is defined in the question – Humberto
ah okay, there's a replace function in the SQL API, can't believe it's still not been ported to Scala/Python Spark – Scullery
