Pandas: Remove NaN only at beginning and end of dataframe

I've got a pandas DataFrame that looks like this:

       sum
1948   NaN
1949   NaN
1950     5
1951     3
1952   NaN
1953     4
1954     8
1955   NaN

and I would like to cut off the NaNs at the beginning and at the end ONLY (i.e. only the values incl. NaN from 1950 to 1954 should remain). I already tried .isnull() and dropna(), but somehow I couldn't find a proper solution. Can anyone help?

Choli answered 20/7, 2015 at 6:55

Use the built-in first_valid_index and last_valid_index; they are designed specifically for this. Then slice your df:

In [5]:

first_idx = df.first_valid_index()
last_idx = df.last_valid_index()
print(first_idx, last_idx)
df.loc[first_idx:last_idx]
1950 1954
Out[5]:
      sum
1950    5
1951    3
1952  NaN
1953    4
1954    8
Breuer answered 20/7, 2015 at 8:12 Comment(0)
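For completeness, this answer can be run end to end; a minimal, self-contained sketch, rebuilding the question's frame (assuming the year index is plain integers):

```python
import numpy as np
import pandas as pd

# Rebuild the example frame from the question.
df = pd.DataFrame({"sum": [np.nan, np.nan, 5, 3, np.nan, 4, 8, np.nan]},
                  index=range(1948, 1956))

# Labels of the first and last rows holding a non-NaN value.
first_idx = df["sum"].first_valid_index()   # 1950
last_idx = df["sum"].last_valid_index()     # 1954

# .loc slicing is inclusive on both ends, so only the outer NaN runs
# are trimmed; the interior NaN at 1952 survives.
trimmed = df.loc[first_idx:last_idx]
```

Because `.loc` slices by label and includes both endpoints, the interior NaN rows are kept, which is exactly what the question asks for.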

Here is one way to do it.

import pandas as pd

# your data
# ==============================
df

      sum
1948  NaN
1949  NaN
1950    5
1951    3
1952  NaN
1953    4
1954    8
1955  NaN

# processing
# ===============================
idx = df.ffill().dropna().index
res_idx = df.loc[idx].bfill().dropna().index
df.loc[res_idx]

      sum
1950    5
1951    3
1952  NaN
1953    4
1954    8
Heterophony answered 20/7, 2015 at 7:6
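The two fill-and-drop passes can be checked as a self-contained sketch (rebuilding the question's frame, and using the modern `.ffill()`/`.bfill()` spelling of the fill calls):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"sum": [np.nan, np.nan, 5, 3, np.nan, 4, 8, np.nan]},
                  index=range(1948, 1956))

# Pass 1: forward-fill, then dropna. Every NaN except the leading run
# gets filled from above, so dropna() removes only the leading rows.
idx = df.ffill().dropna().index

# Pass 2: on the remainder, backward-fill, then dropna, which now
# removes only the trailing NaN run.
res_idx = df.loc[idx].bfill().dropna().index
res = df.loc[res_idx]
```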

Here is an approach with NumPy:

import numpy as np
import pandas as pd

x    = np.logical_not(pd.isnull(df))
mask = np.logical_and(np.cumsum(x) != 0, np.cumsum(x[::-1])[::-1] != 0)

In [313]: df.loc[mask['sum'].tolist()]

Out[313]:
      sum
1950    5
1951    3
1952  NaN
1953    4
1954    8
Paperback answered 20/7, 2015 at 8:20
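The mask logic deserves a word: the cumulative count of non-NaN values is zero only before the first value, and the reversed cumulative count is zero only after the last one, so ANDing the two keeps exactly the interior. A self-contained sketch of the same idea, written with pandas' own `notna`/`cumsum` (equivalent to the `np.logical_not`/`np.cumsum` calls above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"sum": [np.nan, np.nan, 5, 3, np.nan, 4, 8, np.nan]},
                  index=range(1948, 1956))

x = df.notna()                        # True where a value is present
fwd = x.cumsum() != 0                 # False only before the first value
bwd = x[::-1].cumsum()[::-1] != 0     # False only after the last value
res = df.loc[(fwd & bwd)["sum"]]      # boolean row mask on the frame
```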

One-liner:

df.query('[email protected]().isna().any(axis=1)&[email protected]().isna().any(axis=1)')
Erhart answered 5/2, 2023 at 20:41

Expanding on Jianxun Li's idea and making it method-chainable, we start with,

df = pd.DataFrame(
    {
        "val": [np.nan, 1, np.nan, 2, np.nan] + [np.nan, 1, 2, 3, np.nan]
    }
)

Out[0]:

    val
0   NaN
1   1.0
2   NaN
3   2.0
4   NaN
5   NaN
6   1.0
7   2.0
8   3.0
9   NaN

and then we can get the idx as follows,

(df.ffill().notna() & df.bfill().notna()).all(axis=1)

Out[1]:

0    False
1     True
2     True
3     True
4     True
5     True
6     True
7     True
8     True
9    False
dtype: bool

but we can put it into a chainable object like so,

(
    df
    .loc[lambda df_: (df_.ffill().notna() & df_.bfill().notna()).all(axis=1)]
)

Out[2]:

    val
1   1.0
2   NaN
3   2.0
4   NaN
5   NaN
6   1.0
7   2.0
8   3.0

The beauty here is that it easily extends to multiple columns. Note: switch `.all` to `.any` to keep every outer row that contains a non-NaN value in at least one column.

df = pd.DataFrame(
    {
        # "gp": ["a"] * 5 + ["b"] * 5,
        "val": [np.nan, 1, np.nan, 2, np.nan] + [np.nan, 1, 2, 3, np.nan],
        "val2": [1, 1, 3, 2, np.nan] + [np.nan, 1, 2, 3, np.nan]
    }
)

(
    df
    .loc[lambda df_: (df_.ffill().notna() & df_.bfill().notna()).any(axis=1)]
)

Out[3]:

    val val2
0   NaN 1.0
1   1.0 1.0
2   NaN 3.0
3   2.0 2.0
4   NaN NaN
5   NaN NaN
6   1.0 1.0
7   2.0 2.0
8   3.0 3.0

Lastly, it's groupby-able,

df = pd.DataFrame(
    {
        "gp": ["a"] * 5 + ["b"] * 5,
        "val": [np.nan, 1, np.nan, 2, np.nan] + [np.nan, 1, 2, 3, np.nan],
        "val2": [1, 1, 3, 2, np.nan] + [np.nan, 1, 2, 3, np.nan]
    }
)

(
    df
    .loc[(
        lambda df_: (
            df_
            .groupby("gp", group_keys=False)
            .apply(lambda s: s.ffill().notna() & s.bfill().notna())
            .all(axis=1)
        )
    )]
)

Out[4]:

    gp  val   val2
1   a   1.0   1.0
2   a   NaN   3.0
3   a   2.0   2.0
6   b   1.0   1.0
7   b   2.0   2.0
8   b   3.0   3.0
Petulia answered 1/12, 2023 at 19:20
