Spark SQL DataFrame - distinct() vs dropDuplicates()

I was looking at the DataFrame API, and I can see two different methods that perform the same task: removing duplicates from a data set.

I can understand dropDuplicates(colNames) will remove duplicates considering only the subset of columns.

Are there any other differences between these two methods?

Defamatory answered 27/2, 2016 at 7:22 Comment(0)

The main difference is the handling of a subset of columns, which is great! With distinct() you need a prior .select() to pick the columns on which you want to deduplicate, and the returned DataFrame contains only those selected columns. dropDuplicates(colNames), on the other hand, returns all the columns of the original DataFrame after removing duplicate rows as per the given columns.

Stcyr answered 29/11, 2018 at 19:19 Comment(0)

Let's assume we have the following Spark DataFrame:

+---+------+---+                                                                
| id|  name|age|
+---+------+---+
|  1|Andrew| 25|
|  1|Andrew| 25|
|  1|Andrew| 26|
|  2| Maria| 30|
+---+------+---+

distinct() does not accept any arguments, which means that you cannot select which columns should be taken into account when dropping the duplicates. Hence, the following command will drop duplicate records considering all the columns of the dataframe:

df.distinct().show()

+---+------+---+
| id|  name|age|
+---+------+---+
|  1|Andrew| 26|
|  2| Maria| 30|
|  1|Andrew| 25|
+---+------+---+

Now in case you want to drop the duplicates considering ONLY id and name you'd have to run a select() prior to distinct(). For example,

>>> df.select(['id', 'name']).distinct().show()
+---+------+
| id|  name|
+---+------+
|  2| Maria|
|  1|Andrew|
+---+------+

But in case you wanted to drop the duplicates only over a subset of columns like above but keep ALL the columns, then distinct() is not your friend.


dropDuplicates() will drop the duplicates detected over the provided set of columns, but it will also return all the columns appearing in the original dataframe.

df.dropDuplicates().show()

+---+------+---+
| id|  name|age|
+---+------+---+
|  1|Andrew| 26|
|  2| Maria| 30|
|  1|Andrew| 25|
+---+------+---+

dropDuplicates() is thus more suitable when you want to drop duplicates over a selected subset of columns, but also want to keep all the columns:

df.dropDuplicates(['id', 'name']).show()

+---+------+---+
| id|  name|age|
+---+------+---+
|  2| Maria| 30|
|  1|Andrew| 25|
+---+------+---+

For more details, refer to the article distinct() vs dropDuplicates() in Python.

Smirk answered 19/10, 2020 at 16:41 Comment(1)
How does it decide which value to keep for age in your example for name == Andrew? – Complainant

From the Javadoc, there is no difference between distinct() and dropDuplicates().

dropDuplicates

public DataFrame dropDuplicates()

Returns a new DataFrame that contains only the unique rows from this DataFrame. This is an alias for distinct.

dropDuplicates() was introduced in 1.4 as a replacement for distinct(), since you can use its overloaded variants to get unique rows based on a subset of columns.

Octavia answered 27/2, 2016 at 7:35 Comment(0)

.distinct() - returns rows that are unique for all the columns combination

.dropDuplicates() - alias for .distinct()

.dropDuplicates(["col1", "col2", ....]) - returns rows that are unique for the mentioned columns combination. i.e., ["col1", "col2", ....]

Rating answered 17/9, 2023 at 12:37 Comment(1)
@MohamedThasinah. spark.apache.org/docs/1.5.1/api/java/org/apache/spark/sql/…Rating