Get the row(s) which have the max value in groups using groupby
How do I find all rows in a pandas DataFrame which have the max value for the count column, after grouping by the ['Sp', 'Mt'] columns?

Example 1: the following DataFrame:

   Sp   Mt Value   count
0  MM1  S1   a     **3**
1  MM1  S1   n       2
2  MM1  S3   cb    **5**
3  MM2  S3   mk    **8**
4  MM2  S4   bg    **10**
5  MM2  S4   dgd     1
6  MM4  S2   rd      2
7  MM4  S2   cb      2
8  MM4  S2   uyi   **7**

The expected output is the rows whose count is the max in each group, like this:

   Sp   Mt   Value  count
0  MM1  S1   a      **3**
2  MM1  S3   cb     **5**
3  MM2  S3   mk     **8**
4  MM2  S4   bg     **10** 
8  MM4  S2   uyi    **7**

Example 2:

   Sp   Mt   Value  count
4  MM2  S4   bg     10
5  MM2  S4   dgd    1
6  MM4  S2   rd     2
7  MM4  S2   cb     8
8  MM4  S2   uyi    8

Expected output:

   Sp   Mt   Value  count
4  MM2  S4   bg     10
7  MM4  S2   cb     8
8  MM4  S2   uyi    8
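For reference, the two example frames can be reconstructed from the tables above (a sketch; the index of the second frame is set to match the question's row labels):

```python
import pandas as pd

# Example 1 from the question
df1 = pd.DataFrame({
    'Sp': ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'Mt': ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'Value': ['a', 'n', 'cb', 'mk', 'bg', 'dgd', 'rd', 'cb', 'uyi'],
    'count': [3, 2, 5, 8, 10, 1, 2, 2, 7],
})

# Example 2 (note the tie on count == 8 in the (MM4, S2) group)
df2 = pd.DataFrame({
    'Sp': ['MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'Mt': ['S4', 'S4', 'S2', 'S2', 'S2'],
    'Value': ['bg', 'dgd', 'rd', 'cb', 'uyi'],
    'count': [10, 1, 2, 8, 8],
}, index=[4, 5, 6, 7, 8])
```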
Piperidine answered 29/3, 2013 at 14:48 Comment(2)
#18880282 Could be useful – Griffe
This answer is the fastest solution I could find: https://mcmap.net/q/75431/-pandas-get-most-recent-value-of-a-particular-column-indexed-by-another-column-get-maximum-value-of-a-particular-column-indexed-by-another-column – Caper

Firstly, we can get the max count for each group like this:

In [1]: df
Out[1]:
    Sp  Mt Value  count
0  MM1  S1     a      3
1  MM1  S1     n      2
2  MM1  S3    cb      5
3  MM2  S3    mk      8
4  MM2  S4    bg     10
5  MM2  S4   dgd      1
6  MM4  S2    rd      2
7  MM4  S2    cb      2
8  MM4  S2   uyi      7

In [2]: df.groupby(['Sp', 'Mt'])['count'].max()
Out[2]:
Sp   Mt
MM1  S1     3
     S3     5
MM2  S3     8
     S4    10
MM4  S2     7
Name: count, dtype: int64

To get a boolean mask over the rows of the original DF, you can do:

In [3]: idx = df.groupby(['Sp', 'Mt'])['count'].transform('max') == df['count']

In [4]: df[idx]
Out[4]:
    Sp  Mt Value  count
0  MM1  S1     a      3
2  MM1  S3    cb      5
3  MM2  S3    mk      8
4  MM2  S4    bg     10
8  MM4  S2   uyi      7

Note that if you have multiple max values per group, all will be returned.
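For example, applying the same mask to the question's second example (frame reconstructed here from the question) keeps both tied rows in the (MM4, S2) group:

```python
import pandas as pd

df = pd.DataFrame({
    'Sp': ['MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'Mt': ['S4', 'S4', 'S2', 'S2', 'S2'],
    'Value': ['bg', 'dgd', 'rd', 'cb', 'uyi'],
    'count': [10, 1, 2, 8, 8],
}, index=[4, 5, 6, 7, 8])

# Rows whose count equals their group's max
idx = df.groupby(['Sp', 'Mt'])['count'].transform('max') == df['count']
result = df[idx]
# Rows 4, 7 and 8 are kept: both count == 8 rows survive the mask
```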


Update

On a Hail Mary chance that this is what the OP is requesting:

In [5]: df['count_max'] = df.groupby(['Sp', 'Mt'])['count'].transform('max')

In [6]: df
Out[6]:
    Sp  Mt Value  count  count_max
0  MM1  S1     a      3          3
1  MM1  S1     n      2          3
2  MM1  S3    cb      5          5
3  MM2  S3    mk      8          8
4  MM2  S4    bg     10         10
5  MM2  S4   dgd      1         10
6  MM4  S2    rd      2          7
7  MM4  S2    cb      2          7
8  MM4  S2   uyi      7          7
Alboran answered 29/3, 2013 at 15:9 Comment(5)
@Alboran I'm using the second, idx approach. But I can only afford a single maximum for each group (and my data has a few duplicate maxes). Is there a way to get around this with your solution? – Lorin
The transform method may have poor performance when the data set is large; getting the max value first and then merging the dataframes is better. – Tightfisted
As @Lorin mentioned, this is wrong for the original question asked. – Password
@PrakashVanapalli no it isn't – Alboran
In newer versions you need to do transform('max') – Lastminute

You can sort the DataFrame by count and then remove duplicates. I think it's easier:

df.sort_values('count', ascending=False).drop_duplicates(['Sp','Mt'])
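One caveat, raised in the comments below: drop_duplicates keeps exactly one row per group, so tied maxima are lost. A sketch using the question's second example:

```python
import pandas as pd

# Example 2 from the question: the (MM4, S2) group has two rows tied at count == 8
df = pd.DataFrame({
    'Sp': ['MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'Mt': ['S4', 'S4', 'S2', 'S2', 'S2'],
    'Value': ['bg', 'dgd', 'rd', 'cb', 'uyi'],
    'count': [10, 1, 2, 8, 8],
}, index=[4, 5, 6, 7, 8])

deduped = df.sort_values('count', ascending=False).drop_duplicates(['Sp', 'Mt'])
# Two rows remain (one per group); only one of the tied count == 8 rows survives
```

If all tied rows are needed, a transform('max') mask keeps every row that attains the group maximum.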
Waterage answered 16/11, 2016 at 10:14 Comment(9)
Very nice! Fast with largish frames (25k rows) – Vervet
For those who are somewhat new to Python: you will need to assign this to a new variable; it doesn't change the current df variable. – Finecut
@Samir or use inplace=True as an argument to drop_duplicates – Caraway
This is a great answer when you need only one of the rows with the same max value; however, it won't work as expected if you need all the rows with max values. – Tightfisted
@WoodsChen it should work. It keeps the max value per key of [sp, mt] – Waterage
I mean if the dataframe is pd.DataFrame({'sp':[1, 1, 2], 'mt':[1, 1, 2], 'value':[2, 2, 3]}), then there will be 2 rows with the same max value 2 in the group where sp==1 and mt==1. @Waterage – Tightfisted
@WoodsChen, it drops duplicates of [sp, mt]; therefore, in your example, the output should be only one row. – Waterage
You can address the issue of sort ties by using a different kind argument. Look in pandas.pydata.org/docs/reference/api/… for more information; some sort kinds are stable. – Speck
This was many times faster with millions of rows than the more readable .groupby(...).max() approach. – Cressida

An easy solution is to apply the idxmax() function to get the indices of the rows with max values. This selects the row holding the max value in each group.

In [367]: df
Out[367]: 
    sp  mt  val  count
0  MM1  S1    a      3
1  MM1  S1    n      2
2  MM1  S3   cb      5
3  MM2  S3   mk      8
4  MM2  S4   bg     10
5  MM2  S4  dgb      1
6  MM4  S2   rd      2
7  MM4  S2   cb      2
8  MM4  S2  uyi      7


# Apply idxmax() and use .loc[] on the dataframe to select the rows with max values:
In [368]: df.loc[df.groupby(["sp", "mt"])["count"].idxmax()]
Out[368]: 
    sp  mt  val  count
0  MM1  S1    a      3
2  MM1  S3   cb      5
3  MM2  S3   mk      8
4  MM2  S4   bg     10
8  MM4  S2  uyi      7


# Just to show what values are returned by .idxmax() above:
In [369]: df.groupby(["sp", "mt"])["count"].idxmax().values
Out[369]: array([0, 2, 3, 4, 8])
Karp answered 7/7, 2017 at 0:40 Comment(2)
The questioner here specified "I want to get ALL the rows where count equals max in each group", while idxmax "[r]eturn[s] index of first occurrence of maximum over requested axis" according to the docs (0.21). – Wivina
This is a great solution, but for a different problem – Izak

You may not need groupby(); instead, use sort_values + drop_duplicates:

df.sort_values('count').drop_duplicates(['Sp', 'Mt'], keep='last')
Out[190]: 
    Sp  Mt Value  count
0  MM1  S1     a      3
2  MM1  S3    cb      5
8  MM4  S2   uyi      7
3  MM2  S3    mk      8
4  MM2  S4    bg     10

Almost the same logic, using tail:

df.sort_values('count').groupby(['Sp', 'Mt']).tail(1)
Out[52]: 
    Sp  Mt Value  count
0  MM1  S1     a      3
2  MM1  S3    cb      5
8  MM4  S2   uyi      7
3  MM2  S3    mk      8
4  MM2  S4    bg     10
Consistency answered 4/1, 2019 at 14:55 Comment(5)
Not only is this an order of magnitude faster than the other solutions (at least for my use case), it has the added benefit of simply chaining as part of the construction of the original dataframe. – Pregnant
When you see this answer, you realize that all the others are wrong. This is clearly the way to do it. Thanks. – Strongroom
One should add na_position="first" to sort_values in order to ignore NaNs. – Alejandraalejandrina
I found this to be fast for my DF of several million rows. – Whilom
This doesn't appear to work with ties. – Maliamalice

Having tried the solution suggested by Zelazny on a relatively large DataFrame (~400k rows), I found it to be very slow. Here is an alternative that I found to run orders of magnitude faster on my data set.

df = pd.DataFrame({
    'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    'count' : [3,2,5,8,10,1,2,2,7]
    })

# Get the max count per (sp, mt) group
df_grouped = df.groupby(['sp', 'mt']).agg({'count': 'max'})

# Move the group keys out of the index back into columns
df_grouped = df_grouped.reset_index()

# Rename so the merged frame carries both each row's count and its group max
df_grouped = df_grouped.rename(columns={'count': 'count_max'})

# Attach the group max to every row, then keep only the rows that attain it
df = pd.merge(df, df_grouped, how='left', on=['sp', 'mt'])

df = df[df['count'] == df['count_max']]
Gobbler answered 11/2, 2014 at 17:54 Comment(7)
Indeed, this is much faster. transform seems to be slow for large datasets. – James
Can you add comments to explain what each line does? – Caper
FWIW: I found the more elegant-looking solution from @Alboran took a long time to execute for my set of ~100K rows, but this one ran pretty quickly. (I'm running a now way-obsolete 0.13.0, which might account for the slowness.) – Holierthanthou
But doing df[df['count'] == df['count_max']] will lose NaN rows, as will the answers above. – Trisect
I highly suggest using this approach; for bigger data frames it is much faster than using .apply() or .agg(). – Heinrich
I am using pandas version 0.25.1 and timed everything in Jupyter (extended the lists with *1000000). I found the @Alboran solution to be faster. Did I miss something in the test, or is it due to the new pandas version? – Towe
This seems overcomplicated, but maybe there's some optimization or prior-version limitation I'm not aware of. This works perfectly fine instead: df.groupby(['Sp', 'Mt']).agg({'count': 'max'}).merge(df)[df.columns]. The [df.columns] is only to keep the same column order. – Boni

Use the groupby and idxmax methods:

  1. Convert column date to datetime:

    df['date'] = pd.to_datetime(df['date'])

  2. Get the index of the max of column date, after grouping by ad_id:

    idx = df.groupby(by='ad_id')['date'].idxmax()

  3. Get the wanted data:

    df_max = df.loc[idx]
    
   ad_id  price       date
7     22      2 2018-06-11
6     23      2 2018-06-22
2     24      2 2018-06-30
3     28      5 2018-06-22
Stratus answered 24/7, 2018 at 10:45 Comment(1)
date column??? This seems like the answer to a different question. Otherwise, it's a duplicate of Surya's answer and it has the same problem: in case of a tie, only the first occurrence is kept. – Boni

For me, the easiest solution is to keep rows where count equals the group maximum. The following one-line command is enough:

df[df['count'] == df.groupby(['Sp', 'Mt'])['count'].transform('max')]
Chimney answered 2/7, 2015 at 12:52 Comment(1)
This is the same solution as Zelazny7's answer. Please don't post duplicate answers. – Boni

To summarize: there are many ways, but which one is fastest?

import pandas as pd
import numpy as np
import time

df = pd.DataFrame(np.random.randint(1,10,size=(1000000, 2)), columns=list('AB'))

start_time = time.time()
df1idx = df.groupby(['A'])['B'].transform('max') == df['B']
df1 = df[df1idx]
print("---1 ) %s seconds ---" % (time.time() - start_time))

start_time = time.time()
df2 = df.sort_values('B').groupby(['A']).tail(1)
print("---2 ) %s seconds ---" % (time.time() - start_time))

start_time = time.time()
df3 = df.sort_values('B').drop_duplicates(['A'],keep='last')
print("---3 ) %s seconds ---" % (time.time() - start_time))

start_time = time.time()
df3b = df.sort_values('B', ascending=False).drop_duplicates(['A'])
print("---3b) %s seconds ---" % (time.time() - start_time))

start_time = time.time()
df4 = df[df['B'] == df.groupby(['A'])['B'].transform('max')]
print("---4 ) %s seconds ---" % (time.time() - start_time))

start_time = time.time()
d = df.groupby('A')['B'].nlargest(1)
df5 = df.iloc[[i[1] for i in d.index], :]
print("---5 ) %s seconds ---" % (time.time() - start_time))

And the winner is...

  • ---1 ) 0.03337574005126953 seconds ---
  • ---2 ) 0.1346898078918457 seconds ---
  • ---3 ) 0.10243558883666992 seconds ---
  • ---3b) 0.1004343032836914 seconds ---
  • ---4 ) 0.028397560119628906 seconds ---
  • ---5 ) 0.07552886009216309 seconds ---
Surfboat answered 2/3, 2021 at 11:42 Comment(1)
Great job including the timer, which is missing from all of these suggestions. There are a few more approaches, and importantly, it would also be good to run this on a larger dataset. Using 2.8 million rows with a varying number of duplicates shows some startling figures. In particular, nlargest fails spectacularly (more than 100-fold slower) on large data. The fastest for my data was sort then drop duplicates (dropping all but the last was marginally faster than sorting descending and dropping all but the first). – Hawks

Try using nlargest on the groupby object. The advantage is that it returns the rows from which "the nlargest item(s)" were fetched, and we can get their index.

In this case, we want n=1 for the max, and keep='all' to include duplicate maxes.

Note: we slice the last (-1) element of our index, since our index in this case consists of tuples (e.g. ('MM1', 'S1', 0)).

df = pd.DataFrame({
    'Sp': ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
    'Mt': ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'Val': ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    'count': [3, 2, 5, 8, 10, 1, 2, 2, 7]
})

d = df.groupby(['Sp', 'Mt'])['count'].nlargest(1, keep='all')

df.loc[[i[-1] for i in d.index]]
    Sp  Mt  Val  count
0  MM1  S1    a      3
2  MM1  S3   cb      5
3  MM2  S3   mk      8
4  MM2  S4   bg     10
8  MM4  S2  uyi      7
Trotskyite answered 8/7, 2019 at 16:37 Comment(3)
If the input has a MultiIndex, it might be better to do something more like df.loc[d.droplevel(['Sp', 'Mt']).index]. I'm not sure. – Boni
You could do this more idiomatically with df.loc[d.index.get_level_values(-1)]. – Boni
This is correct, but very slow on large datasets with ~100k rows. – Password

I've been using this functional style for many group operations:

df = pd.DataFrame({
    'Sp': ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'Mt': ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'Val': ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    'Count': [3, 2, 5, 8, 10, 1, 2, 2, 7]
})

(df.groupby(['Sp', 'Mt'])
   .apply(lambda group: group[group['Count'] == group['Count'].max()])
   .reset_index(drop=True))

    Sp  Mt  Val  Count
0  MM1  S1    a      3
1  MM1  S3   cb      5
2  MM2  S3   mk      8
3  MM2  S4   bg     10
4  MM4  S2  uyi      7

.reset_index(drop=True) drops the group index added by the groupby/apply, leaving a fresh default index.

Bleacher answered 14/1, 2019 at 10:3 Comment(1)
Instead of reset_index, you could consider .droplevel([0]), with .groupby(..., as_index=False) – Boni

Applying nlargest to the groupby object works just as well.

An additional advantage: it can also fetch the top n values if required:

In [85]: import pandas as pd

In [86]: df = pd.DataFrame({
    ...: 'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
    ...: 'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    ...: 'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    ...: 'count' : [3,2,5,8,10,1,2,2,7]
    ...: })

## Apply nlargest(1) to get the max-count row per group; nlargest(n) gives the top n rows:
In [87]: df.groupby(["sp", "mt"]).apply(lambda x: x.nlargest(1, "count")).reset_index(drop=True)
Out[87]:
   count  mt   sp  val
0      3  S1  MM1    a
1      5  S3  MM1   cb
2      8  S3  MM2   mk
3     10  S4  MM2   bg
4      7  S2  MM4  uyi
Karp answered 10/4, 2019 at 2:38 Comment(1)
This doesn't appear to work with ties. – Maliamalice

If you sort your DataFrame, that ordering will be preserved in the groupby. You can then just grab the first or last element and reset the index.

df = pd.DataFrame({
    'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
    'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    'count' : [3,2,5,8,10,1,2,2,7]
})

df.sort_values("count", ascending=False).groupby(["sp", "mt"]).first().reset_index()
Shitty answered 20/4, 2021 at 21:51 Comment(1)
This is practically the same as Rani's answer and BENY's answer, just using a slightly different method. – Boni

Many of these are great answers, but to help show scalability: on 2.8 million rows with a varying number of duplicates, there are some startling differences. The fastest for my data was sort then drop duplicates (dropping all but the last was marginally faster than sorting descending and dropping all but the first):

  1. Sort ascending, drop duplicates keeping last (2.22 s)
  2. Sort descending, drop duplicates keeping first (2.32 s)
  3. Transform max within the loc selection (3.73 s)
  4. Transform max storing idx, then using loc selection as a second step (3.84 s)
  5. Groupby using tail (8.98 s)
  6. idxmax with groupby, then using loc selection as a second step (95.39 s)
  7. idxmax with groupby within the loc selection (95.74 s)
  8. nlargest(1), then using iloc selection as a second step (> 35000 s) - did not finish after running overnight
  9. nlargest(1) within iloc selection (> 35000 s) - did not finish after running overnight

As you can see, sort is about a third faster than transform and 75% faster than groupby with tail. Everything else is up to 40x slower. In small datasets this may not matter much, but as you can see, it can significantly impact large ones.
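The two fastest methods above can be compared with a small harness like this (a sketch on synthetic data; the key and val columns and the group count are made-up stand-ins, so absolute timings will differ from the 2.8-million-row figures quoted):

```python
import time

import numpy as np
import pandas as pd

# Hypothetical data: 1M rows spread over 1,000 groups
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'key': rng.integers(0, 1000, size=1_000_000),
    'val': rng.integers(0, 1_000_000, size=1_000_000),
})

def timed(label, fn):
    # Run fn once and report wall-clock time
    start = time.perf_counter()
    result = fn()
    print(f"{label}: {time.perf_counter() - start:.3f}s")
    return result

# Method 1: sort then drop duplicates (keeps one row per group, even on ties)
deduped = timed("sort + drop_duplicates",
                lambda: df.sort_values('val').drop_duplicates('key', keep='last'))

# Method 2: transform('max') mask (keeps all tied rows)
masked = timed("transform mask",
               lambda: df[df['val'] == df.groupby('key')['val'].transform('max')])
```

Note that the two methods only agree when no group has tied maxima; the mask variant can return more rows.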

Hawks answered 2/8, 2022 at 13:4 Comment(1)
Good guide to performance for those who are using one of these methods! – Feigned
df = pd.DataFrame({
'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
'count' : [3,2,5,8,10,1,2,2,7]
})

df.groupby(['sp', 'mt']).apply(lambda grp: grp.nlargest(1, 'count'))
Spiccato answered 8/8, 2018 at 18:25 Comment(0)

Another approach, using rank:

idx = df.groupby(['Sp', 'Mt'])['count'].rank(method="dense", ascending=False)==1
df[idx]

Lastminute answered 21/4 at 15:34 Comment(0)
df.loc[df.groupby(['sp', 'mt'])['count'].idxmax()]

If the df index isn't unique, you may need to run df.reset_index(inplace=True) first.

Fleda answered 7/7, 2022 at 2:52 Comment(1)
This is a duplicate of Surya's answer except for the point about a non-unique index.Boni
