pandas GroupBy columns with NaN (missing) values

I have a DataFrame with many missing values in the columns which I wish to group by:

import pandas as pd
import numpy as np
df = pd.DataFrame({'a': ['1', '2', '3'], 'b': ['4', np.NaN, '6']})

In [4]: df.groupby('b').groups
Out[4]: {'4': [0], '6': [2]}

You can see that pandas has dropped the rows where the grouping value is NaN. I want to include these rows!

Any suggestions?

Antisepsis answered 25/8, 2013 at 13:28 Comment(6)
@PhillipCloud I've edited this question to include just the question, which is actually quite good, relating to Jeff's open pandas enhancement.Prose
Not being able to include (and propagate) NaNs in groups is quite aggravating. Citing R is not convincing, as this behavior is not consistent with a lot of other things. Anyway, the dummy hack is also pretty bad. However, the size (includes NaNs) and the count (ignores NaNs) of a group will differ if there are NaNs. dfgrouped = df.groupby(['b']).a.agg(['sum','size','count']) dfgrouped['sum'][dfgrouped['size']!=dfgrouped['count']] = NoneBradbury
Can you summarize what you are specifically trying to achieve? i.e. we see an output, but what is the "desired" output?Footsie
With pandas 1.1 you will soon be able to specify dropna=False in groupby() to get your desired result. More infoNalchik
Note that as of this writing, there is a bug that makes dropna=False fail with MultiIndex grouping. There are a handful of open issues mentioning this on their github and not a lot of momentum on fixing it yet unfortunately.Jokester
More detail regarding the MultiIndex bug mentioned by @totalhack: still not yet fixed as of pandas 1.5. Closest issue is this one, which was closed as a duplicate of some more obscure issues, which are in progress but still open: github.com/pandas-dev/pandas/issues/36470Sentimentalize

pandas >= 1.1

From pandas 1.1 you have better control over this behavior: NA values are now allowed in the grouper using dropna=False:

pd.__version__
# '1.1.0.dev0+2004.g8d10bfb6f'
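
The example frame used below can be constructed like this (a minimal sketch; the values are taken from the output shown):

import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1, 1, 2, 1],
                   'b': [2, np.nan, 1, 2],
                   'c': [3, 4, 3, 2]})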

# Example from the docs
df

   a    b  c
0  1  2.0  3
1  1  NaN  4
2  2  1.0  3
3  1  2.0  2

# without NA (the default)
df.groupby('b').sum()

     a  c
b        
1.0  2  3
2.0  2  5
# with NA
df.groupby('b', dropna=False).sum()

     a  c
b        
1.0  2  3
2.0  2  5
NaN  1  4
Nalchik answered 20/5, 2020 at 21:10 Comment(6)
It doesn't work for me. kroscek_jupyter_metabase = fromdb_1474_detail.groupby(groupby, dropna = False)[col_to_count].count() returns TypeError: groupby() got an unexpected keyword argument 'dropna'Camus
@Camus please run print(pd.__version__) and let me know what it says.Nalchik
This fails with MultiIndex grouping unfortunately. The most straightforward workaround I've seen so far, albeit ugly, appears to be replacing the NaN value before grouping.Jokester
I would like None to be in the same group!Crackleware
Logically they aren't the same thing though. @IevgenNaida you should ideally have only one way of representing missing data. Anything else is confusing, brittle, and error-prone.Nalchik
@Nalchik Sorry, my comment is misleading. I just need my None values to be grouped, and dropna=False does the job perfectly.Crackleware

This is mentioned in the Missing Data section of the docs:

NA groups in GroupBy are automatically excluded. This behavior is consistent with R

One workaround is to use a placeholder before doing the groupby (e.g. -1):

In [11]: df.fillna(-1)
Out[11]: 
   a   b
0  1   4
1  2  -1
2  3   6

In [12]: df.fillna(-1).groupby('b').sum()
Out[12]: 
    a
b    
-1  2
4   1
6   3
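
If the placeholder must not show up in the final result, the index can be mapped back to NaN afterwards (a minimal sketch, assuming -1 never occurs as a genuine value in 'b'):

result = df.fillna(-1).groupby('b').sum()
result = result.rename(index={-1: np.nan})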

That said, this feels like a pretty awful hack... perhaps there should be an option to include NaN in groupby (see this github issue, which uses the same placeholder hack).

However, as described in another answer, "from pandas 1.1 you have better control over this behavior, NA values are now allowed in the grouper using dropna=False"

Prose answered 25/8, 2013 at 16:55 Comment(14)
This is a logical but sort of funny solution that I'd thought of earlier: Pandas makes NaN fields from the empty ones, and we have to change them back. This is why I'm thinking of looking for other solutions, like running an SQL server and querying the tables from there (looks a bit too complicated), or looking at another library instead of Pandas, or using my own (which I want to get rid of). ThxGuarantee
@GyulaSámuelKarli To me this seems like a small bug (see the bug report above), and my solution is a workaround. I find it strange you write off the entire library.Prose
I don't want to write off Pandas, just look for the tool that fits my requirements the best.Guarantee
But what if you do not want to change the NaNs with different values? Is there no way to use the sum method incorporating NaNs? (for instance, like the dataframe sum method df.sum(skipna=True): pandas.pydata.org/pandas-docs/version/0.17.1/generated/…Acetanilide
@Acetanilide This question is about the groupby key being NaN, so I'm not sure I follow the question.Prose
You are right. I was looking at the target values, which can be NaN too. If they are NaN, they cannot be summed using the proposed method. But I guess that is out of scope for this thread.Acetanilide
Also, do not forget to reassign the df back to another/same variable. new_df = df.replace(np.nan, -1).groupby('b').sum()Slub
Have a look at my answer below, I believe I have found a pretty good (cleaner, and probably faster) solution. https://mcmap.net/q/107477/-pandas-groupby-columns-with-nan-missing-valuesFootsie
The best answer is in the comment on @Tuetschek's answer.Warner
@yuval thanks, I updated that part, it's interesting that .replace(np.nan, x) behavior would have changed!Prose
@AndyHayden Thank you so much for saving me the rest of my day, this issue was driving me crazy.Variola
No, this is not consistent with R. df %>% group_by will give NA summaries too, with a warning that can be avoided by passing the grouping column through fct_explicit_na, which creates a (Missing) level.Lunalunacy
Agree with Ravaging Care. This is not consistent with R. Data manipulation packages like dplyr & data.table in R default to include NA's when grouping. Excluding NA's when grouping, like Pandas does, seems counter-intuitive to me: even in SQL NULLs are included when grouping.Responser
"One workaround is to use a placeholder before doing the groupby (e.g. -1):". It turns out that -1 is treated like an NA value, at least in pandas version 1.4.2.Macready

Ancient topic, but if someone still stumbles over this: another workaround is to convert to string via .astype(str) before grouping. That will preserve the NaNs.

df = pd.DataFrame({'a': ['1', '2', '3'], 'b': ['4', np.NaN, '6']})
df['b'] = df['b'].astype(str)
df.groupby(['b']).sum()
    a
b   
4   1
6   3
nan 2
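
A variant of the same idea that keeps 'a' numeric and converts only the grouping column might look like this (a minimal sketch; note that missing values then end up under the string label 'nan'):

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['4', np.nan, '6']})
df['b'] = df['b'].astype(str)
df.groupby('b')['a'].sum()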
Graeme answered 4/10, 2016 at 16:20 Comment(4)
@K3---rnc: See the comment to your link - the author of the post in your link did something wrong.Foliation
@Thomas, yes, exactly as in the example above. Please edit if you can make the example safe (and as trivial).Malpighiaceous
The sum of a is string concatenation here, not a numeric sum. This only "works" because 'b' consisted of distinct entries. You need 'a' to be numeric and 'b' to be stringBoilermaker
Note that column a is an object dtype, and the resulting mean after groupby may not be what you want!Excrescency

I am not able to add a comment to M. Kiewisch's answer since I do not have enough reputation points (I only have 41 but need more than 50 to comment).

Anyway, I just want to point out that M. Kiewisch's solution does not work as is and may need more tweaking. Consider, for example:

>>> df = pd.DataFrame({'a': [1, 2, 3, 5], 'b': [4, np.NaN, 6, 4]})
>>> df
   a    b
0  1  4.0
1  2  NaN
2  3  6.0
3  5  4.0
>>> df.groupby(['b']).sum()
     a
b
4.0  6
6.0  3
>>> df.astype(str).groupby(['b']).sum()
      a
b
4.0  15
6.0   3
nan   2

which shows that for group b=4.0, the corresponding value is 15 instead of 6. Here it is just concatenating 1 and 5 as strings instead of adding them as numbers.

Stoichiometry answered 25/11, 2016 at 21:22 Comment(3)
That's because you converted the entire DF to str, instead of just the b columnBrisson
Note that this has been fixed in the mentioned answer now.Interlocutory
The new solution is better but still not safe, in my opinion. Consider a case where one of the entries in column 'b' is the same as the stringified np.NaN. Then those things are clubbed together. df = pd.DataFrame({'a': [1, 2, 3, 5, 6], 'b': ['foo', np.NaN, 'bar', 'foo', 'nan']}); df['b'] = df['b'].astype(str); df.groupby(['b']).sum()Stoichiometry

All answers provided thus far result in potentially dangerous behavior as it is quite possible you select a dummy value that is actually part of the dataset. This is increasingly likely as you create groups with many attributes. Simply put, the approach doesn't always generalize well.

A less hacky solution is to use DataFrame.drop_duplicates() to create a unique index of value combinations, each with its own ID, and then group on that ID. It is more verbose but does get the job done:

import numpy as np
import pandas as pd
from collections import OrderedDict


def safe_groupby(df, group_cols, agg_dict):
    # set name of group col to unique value
    group_id = 'group_id'
    while group_id in df.columns:
        group_id += 'x'
    # get final order of columns
    agg_col_order = (group_cols + list(agg_dict.keys()))
    # create unique index of grouped values
    group_idx = df[group_cols].drop_duplicates()
    group_idx[group_id] = np.arange(group_idx.shape[0])
    # merge unique index on dataframe
    df = df.merge(group_idx, on=group_cols)
    # group dataframe on group id and aggregate values
    df_agg = df.groupby(group_id, as_index=True)\
               .agg(agg_dict)
    # merge grouped value index to results of aggregation
    df_agg = group_idx.set_index(group_id).join(df_agg)
    # rename index
    df_agg.index.name = None
    # return reordered columns
    return df_agg[agg_col_order]

Note that you can now simply do the following:

data_block = [np.tile([None, 'A'], 3),
              np.repeat(['B', 'C'], 3),
              [1] * (2 * 3)]

col_names = ['col_a', 'col_b', 'value']

test_df = pd.DataFrame(data_block, index=col_names).T

grouped_df = safe_groupby(test_df, ['col_a', 'col_b'],
                          OrderedDict([('value', 'sum')]))

This will return the successful result without having to worry about overwriting real data that is mistaken for a dummy value.

Gooch answered 25/10, 2018 at 22:11 Comment(1)
This is the best solution for the general case, but in cases where I know of an invalid string / number I can use instead, I'm probably going to go with Andy Hayden's answer below... I hope pandas fixes this behavior soon.Aspinwall

One small point about Andy Hayden's solution: it doesn't work (anymore?) because np.nan == np.nan yields False, so the replace function doesn't actually do anything.

What worked for me was this:

df['b'] = df['b'].apply(lambda x: x if not np.isnan(x) else -1)
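
A variant using pd.isna (pd.isnull in older versions) is a minimal sketch of the same idea that also handles non-numeric columns, where np.isnan would raise a TypeError:

df['b'] = df['b'].apply(lambda x: x if not pd.isna(x) else -1)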

(At least that's the behavior for Pandas 0.19.2. Sorry to add it as a different answer, I do not have enough reputation to comment.)

Depilatory answered 23/1, 2017 at 16:18 Comment(1)
There is also df['b'].fillna(-1).Malpighiaceous

I answered this already, but for some reason the answer was converted to a comment. Nevertheless, this is the most efficient solution:

Not being able to include (and propagate) NaNs in groups is quite aggravating. Citing R is not convincing, as this behavior is not consistent with a lot of other things. Anyway, the dummy hack is also pretty bad. However, the size (includes NaNs) and the count (ignores NaNs) of a group will differ if there are NaNs.

dfgrouped = df.groupby(['b']).a.agg(['sum','size','count'])

dfgrouped['sum'][dfgrouped['size']!=dfgrouped['count']] = None

When these differ, you can set the value back to None for the result of the aggregation function for that group.
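
The same assignment can also be written with .loc, which avoids chained-assignment warnings (a minimal sketch of the same idea):

dfgrouped = df.groupby(['b']).a.agg(['sum', 'size', 'count'])
dfgrouped.loc[dfgrouped['size'] != dfgrouped['count'], 'sum'] = None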

Bradbury answered 23/5, 2017 at 19:7 Comment(1)
This was super helpful to me but it answers a slightly different question than the original one. IIUC, your solution propagates NaNs in the summation, but the NaN items in the "b" column still get dropped as rows.Kingmaker
