How to select rows with one or more nulls from a pandas DataFrame without listing columns explicitly?

I have a dataframe with ~300K rows and ~40 columns. I want to find out whether any rows contain null values, and put these 'null' rows into a separate dataframe so that I can explore them easily.

I can create a mask explicitly:

# Build the mask column by column: mask ends up True for every row
# that has a null in at least one column.
mask = False
for col in df.columns:
    mask = mask | df[col].isnull()
dfnulls = df[mask]

Or I can do something like:

df.ix[df.index[(df.T == np.nan).sum() > 1]]

(Note that this one doesn't actually work: np.nan == np.nan is False, so (df.T == np.nan) is False everywhere, and .ix has since been removed from pandas.)

Is there a more elegant way of doing it (locating rows with nulls in them)?

Disk answered 9/1, 2013 at 22:22 Comment(0)

[Updated to adapt to modern pandas, which has isnull as a method of DataFrames.]

You can use isnull and any to build a boolean Series and use that to index into your frame:

>>> df = pd.DataFrame([range(3), [0, np.nan, 0], [0, 0, np.nan], range(3), range(3)])
>>> df.isnull()
       0      1      2
0  False  False  False
1  False   True  False
2  False  False   True
3  False  False  False
4  False  False  False
>>> df.isnull().any(axis=1)
0    False
1     True
2     True
3    False
4    False
dtype: bool
>>> df[df.isnull().any(axis=1)]
   0   1   2
1  0 NaN   0
2  0   0 NaN
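
If you also want the complement (the rows with no nulls at all), the same pattern works with notnull and all. Continuing the example above (the variable names here are just illustrative):

dfnulls = df[df.isnull().any(axis=1)]   # rows with at least one null
dfclean = df[df.notnull().all(axis=1)]  # rows with no nulls anywhere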

[For older pandas:]

You could use the function isnull instead of the method:

In [56]: df = pd.DataFrame([range(3), [0, np.NaN, 0], [0, 0, np.NaN], range(3), range(3)])

In [57]: df
Out[57]: 
   0   1   2
0  0   1   2
1  0 NaN   0
2  0   0 NaN
3  0   1   2
4  0   1   2

In [58]: pd.isnull(df)
Out[58]: 
       0      1      2
0  False  False  False
1  False   True  False
2  False  False   True
3  False  False  False
4  False  False  False

In [59]: pd.isnull(df).any(axis=1)
Out[59]: 
0    False
1     True
2     True
3    False
4    False

leading to the rather compact:

In [60]: df[pd.isnull(df).any(axis=1)]
Out[60]: 
   0   1   2
1  0 NaN   0
2  0   0 NaN
Civics answered 9/1, 2013 at 22:33 Comment(0)
def nans(df):
    return df[df.isnull().any(axis=1)]

then whenever you need it, you can type:

nans(your_dataframe)
Dermato answered 22/6, 2017 at 14:15 Comment(3)
df[df.isnull().any(axis=1)] works but throws UserWarning: Boolean Series key will be reindexed to match DataFrame index. How does one rewrite this more explicitly, in a way that doesn't trigger that warning? – Aeromechanic
@vishal I think all you would need to do is add loc, like this: df.loc[df.isnull().any(axis=1)] – Hardhack
As an aside: you shouldn't be naming your anonymous (lambda) functions. Always use a def statement instead of an assignment statement that binds a lambda expression directly to an identifier. – Barranquilla
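
Putting those two comments together, a version that uses .loc (which sidesteps the reindexing warning) might look like this; a small sketch, keeping the nans name from the answer:

def nans(df):
    """Return the rows of df that contain at least one null value."""
    return df.loc[df.isnull().any(axis=1)]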

If you want to filter rows by how many null values they contain, you can use this:

df.iloc[df[(df.isnull().sum(axis=1) >= qty_of_nuls)].index]

Here is an example.

Your dataframe:

>>> df = pd.DataFrame([range(4), [0, np.nan, 0, np.nan], [0, 0, np.nan, 0], range(4), [np.nan, 0, np.nan, np.nan]])
>>> df
     0    1    2    3
0  0.0  1.0  2.0  3.0
1  0.0  NaN  0.0  NaN
2  0.0  0.0  NaN  0.0
3  0.0  1.0  2.0  3.0
4  NaN  0.0  NaN  NaN

If you want to select the rows that have null values in two or more columns, run the following:

>>> qty_of_nuls = 2
>>> df.iloc[df[(df.isnull().sum(axis=1) >= qty_of_nuls)].index]
     0    1    2   3
1  0.0  NaN  0.0 NaN
4  NaN  0.0  NaN NaN
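
As an aside, the detour through .index and iloc shouldn't be needed here; assuming the usual case where the index is unique, the boolean mask can be applied directly:

>>> df[df.isnull().sum(axis=1) >= qty_of_nuls]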
Connatural answered 20/8, 2020 at 20:21 Comment(0)

Four fewer characters, but about 2 ms slower:

%%timeit
df.isna().T.any()
# 52.4 ms ± 352 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

%%timeit
df.isna().any(axis=1)
# 50 ms ± 423 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

I'd probably use axis=1.
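
If you want to reproduce a comparison like this outside IPython, the standard-library timeit module works as well; a rough sketch, using a hypothetical test frame with ~1% NaNs:

import timeit

import numpy as np
import pandas as pd

# Hypothetical test data: 300K rows x 40 columns, ~1% of cells set to NaN
df = pd.DataFrame(np.random.rand(300_000, 40))
df = df.mask(np.random.rand(*df.shape) < 0.01)

print(timeit.timeit(lambda: df.isna().T.any(), number=10))
print(timeit.timeit(lambda: df.isna().any(axis=1), number=10))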

Meara answered 21/11, 2020 at 18:7 Comment(0)

.any() and .all() are great for the extreme cases (at least one null, or all nulls), but not when you're looking for a specific number of null values. Here's a simple, if verbose, way to do what I believe you're asking.

import pandas as pd
import numpy as np

# Some test data frame
df = pd.DataFrame({'num_legs':          [2,  4,      np.nan, 0, np.nan],
                   'num_wings':         [2,  0,      np.nan, 0, 9],
                   'num_specimen_seen': [10, np.nan, 1,      8, np.nan]})

# Helper: returns a list with the number of NaNs in each row
def row_nan_sums(df):
    sums = []
    for row in df.values:
        n_nans = 0
        for el in row:
            if el != el:  # np.nan is never equal to itself. This is "hacky", but complete.
                n_nans += 1
        sums.append(n_nans)
    return sums

# Returns a list of indices for rows with k or more NaNs
def query_k_plus_sums(df, k):
    sums = row_nan_sums(df)
    indices = []
    for i, n_nans in enumerate(sums):
        if n_nans >= k:
            indices.append(i)
    return indices

# test
print(df)
print(query_k_plus_sums(df, 2))

Output

   num_legs  num_wings  num_specimen_seen
0       2.0        2.0               10.0
1       4.0        0.0                NaN
2       NaN        NaN                1.0
3       0.0        0.0                8.0
4       NaN        9.0                NaN
[2, 4]
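
For comparison, the same indices can be computed without the explicit loops, using the vectorized isnull/sum approach from the other answers (the function name here is just illustrative):

def query_k_plus_sums_vectorized(df, k):
    # Count NaNs per row, keep the labels of rows with k or more
    return df.index[df.isnull().sum(axis=1) >= k].tolist()

print(query_k_plus_sums_vectorized(df, 2))  # [2, 4]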

Then, if you're like me and want to clear those rows out, you just write this:

# drop the rows from the data frame
df.drop(query_k_plus_sums(df, 2), inplace=True)
# Shuffle the remaining rows and renumber the index
# (it's reset_index(drop=True) that resets the index; the shuffle is just cosmetic)
df = df.sample(frac=1).reset_index(drop=True)
# print data frame
print(df)

Output:

   num_legs  num_wings  num_specimen_seen
0       4.0        0.0                NaN
1       0.0        0.0                8.0
2       2.0        2.0               10.0
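
A built-in alternative for the dropping step is DataFrame.dropna with its thresh parameter, which keeps only the rows that have at least thresh non-null values. Starting from the original df, dropping rows with k or more NaNs then becomes, as a sketch:

k = 2
df = df.dropna(thresh=df.shape[1] - k + 1).reset_index(drop=True)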
Dyanne answered 8/6, 2020 at 23:2 Comment(0)
df1 = df[df.isna().any(axis=1)]

(isna is just the newer alias for isnull, so this is the same approach as the accepted answer.)

Reference: Display rows with one or more NaN values in pandas dataframe

Edging answered 6/6, 2021 at 7:17 Comment(0)
