Replacing Pandas or NumPy NaN with None to use with MySQLdb
Asked Answered
C

18

274

I am trying to write a Pandas dataframe (or can use a numpy array) to a MySQL database using MySQLdb. MySQLdb doesn't seem to understand 'nan', and my database throws an error saying nan is not in the field list. I need to find a way to convert the 'nan' into a NoneType.

Any ideas?

Cisalpine answered 4/1, 2013 at 18:26 Comment(1)
Is there no setting you can change in Pandas to make it return None for NULL instead of nan?Coxcombry
N
330

@bogatron has it right: you can use where. It's worth noting that you can do this natively in pandas:

df1 = df.where(pd.notnull(df), None)

Note: this changes the dtype of all columns to object.

Example:

In [1]: df = pd.DataFrame([1, np.nan])

In [2]: df
Out[2]: 
    0
0   1
1 NaN

In [3]: df1 = df.where(pd.notnull(df), None)

In [4]: df1
Out[4]: 
      0
0     1
1  None

Note: what you cannot do is recast the DataFrame's dtype to allow all datatypes (using astype) and then use the DataFrame fillna method:

df1 = df.astype(object).replace(np.nan, 'None')

Unfortunately neither this, nor using replace, works with None; see this (closed) issue.


As an aside, it's worth noting that for most use cases you don't need to replace NaN with None, see this question about the difference between NaN and None in pandas.

However, in this specific case it seems you do (at least at the time of this answer).
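For context on why the substitution is needed at all: np.nan is an ordinary Python float rather than a null object, so identity checks fail on it and database drivers treat it as a value instead of SQL NULL. A quick check:

```python
import numpy as np

# np.nan is a float, not a null object
assert np.nan is not None
assert isinstance(np.nan, float)

# NaN is not even equal to itself, which is why drivers
# cannot map it to SQL NULL the way they map None
assert np.nan != np.nan
```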

Northernmost answered 4/1, 2013 at 19:1 Comment(11)
see docs pandas.pydata.org/pandas-docs/stable/…Garton
FWIW..this will also change the dtype of the columns to object, you probably don't care thoughGarton
@Garton Thanks for the link, weirdly I couldn't find it earlier! I figured it had to change the dtype to allow None, definitely worth mentioning!Northernmost
useful to use before inserting with Django to avoid the np.nan being converted to string "nan"Hautrhin
Useful caveat. Makes sense to loop through only those columns that are already dtype of object and do it for those and handle other types differently as needed. Ideally, fillna(None) would be terrific.Symmetrize
An important use case is when converting to JSON. Not all languages support NaNs in JSON (such as PHP), so they need to be converted to None. This is something I've run into quite a bit as a data scientist.Calibrate
Is this method still working? Currently only @Bedroll 's answer below worked without errors for me, at least in pandas version 1.0.3 .Beseech
Using df.where(pd.notnull(df), None) no longer works in 1.3.0 - instead I found the next answer from @Bedroll to still work fine: https://mcmap.net/q/108303/-replacing-pandas-or-numpy-nan-with-a-none-to-use-with-mysqldbEuh
This only works for certain data types; for example, it changes NaN to None in string columns but not in float columns, which are not modified. Going straight to numpy as said above worked for me: df = df.replace({np.nan: None})Toothpick
This works for me. It also preserves my column dtypes for columns that don't have NaNs in them (I'm using pandas 1.2.4)Handbill
Please update this code with replace(np.NaN, None) as the example code no longer works after 1.3.0 see github.com/pandas-dev/pandas/issues/42423Samathasamau
B
350
df = df.replace({np.nan: None})

Note: For pandas versions <1.4, this changes the dtype of all affected columns to object.
To avoid that, use this syntax instead:

df = df.replace(np.nan, None)

Credit goes to this guy here on this Github issue and Killian Huyghe's comment.
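Tying this back to the original MySQLdb question: once the NaNs are converted to None, any DB-API driver will bind them as SQL NULL. Since MySQLdb may not be installed, this sketch uses the stdlib sqlite3 driver as a stand-in (the table name t and the in-memory database are just for illustration):

```python
import sqlite3

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan]})
df = df.replace({np.nan: None})  # NaN -> None, so the driver binds NULL

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a REAL)")
conn.executemany(
    "INSERT INTO t VALUES (?)",
    df.itertuples(index=False, name=None),
)

rows = conn.execute("SELECT a FROM t ORDER BY rowid").fetchall()
# The NaN row comes back as NULL -> None
assert rows == [(1.0,), (None,)]
```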

Bedroll answered 28/1, 2019 at 14:7 Comment(9)
this is the best answer as you can use df.replace({np.nan: None}) as a temp objectRake
if the values in df are already None this answer will toggle them back to np.nanTheomachy
@MaxSegal How is that? I haven't found this in replace() documentation. Can you refer me to where this is mentioned in the docs?Deenadeenya
@AmmarAlyousfi to = {np.nan: None}; assert df.replace(to).replace(to).equals(df)Bedroll
it does not toggle them back for me, nor does the documentation indicate it would.Geesey
I see the same behavior as @MaxSegal. np.nan is transformed into None, while None is transformed into np.nanHeteropterous
for pandas versions <1.3.0 if the values in df are already None this answer will toggle them back to np.nanTheomachy
I had the issue of them being toggled back on version >1.3.0. My column was categorical. When I switched it to object it worked again. Perhaps that may be the cause.Mathamathe
For pandas versions <1.4, there is a bug when using a dict in replace and your column dtypes may change unexpectedly, you should prefer this syntax instead: df = df.replace(np.nan, None). See github.com/pandas-dev/pandas/issues/35268Krouse
T
28

You can replace nan with None in your numpy array:

>>> x = np.array([1, np.nan, 3])
>>> y = np.where(np.isnan(x), None, x)
>>> print(y)
[1.0 None 3.0]
>>> print(type(y[1]))
<class 'NoneType'>
Toms answered 4/1, 2013 at 18:57 Comment(1)
The only potential concern is the change of dtype, x.dtype is dtype('float64') ,while y.dtype is dtype('object').Raleighraley
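The dtype change mentioned in the comment is easy to verify: substituting None forces numpy to fall back to an object array:

```python
import numpy as np

x = np.array([1, np.nan, 3])
y = np.where(np.isnan(x), None, x)

assert x.dtype == np.float64   # original array is float
assert y.dtype == object       # None forces an object array
assert y[1] is None
```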
C
27

After stumbling around, this worked for me:

df = df.astype(object).where(pd.notnull(df),None)
Colonize answered 2/8, 2017 at 19:47 Comment(1)
This seems to be required on newer versions of pandas. The where and replace methods both get converted back to NaN when applied to a pd.Categorical columnElytron
T
13

Replacing np.nan with None is accomplished differently across different versions of pandas:

from packaging import version

if version.parse(pd.__version__) >= version.parse('1.3.0'):
    df = df.replace({np.nan: None})
else:
    df = df.where(pd.notnull(df), None)

This solves the issue that, for pandas versions <1.3.0, if the values in df are already None then df.replace({np.nan: None}) will toggle them back to np.nan (and vice versa).

Theomachy answered 26/1, 2022 at 11:37 Comment(0)
D
9

Another addition: be careful when replacing multiple values and converting the type of the column back from object to float. If you want to be certain that your Nones won't flip back to np.NaN, apply @andy-hayden's suggestion of using pd.where. An illustration of how replace can still go 'wrong':

In [1]: import pandas as pd

In [2]: import numpy as np

In [3]: df = pd.DataFrame({"a": [1, np.NAN, np.inf]})

In [4]: df
Out[4]:
     a
0  1.0
1  NaN
2  inf

In [5]: df.replace({np.NAN: None})
Out[5]:
      a
0     1
1  None
2   inf

In [6]: df.replace({np.NAN: None, np.inf: None})
Out[6]:
     a
0  1.0
1  NaN
2  NaN

In [7]: df.where((pd.notnull(df)), None).replace({np.inf: None})
Out[7]:
     a
0  1.0
1  NaN
2  NaN
Downwards answered 10/10, 2019 at 13:35 Comment(2)
Thanks for adding this. Going over the documentation again, I still can't understand this behavior. Anyway, this can be worked around by chaining yet another .replace({np.nan: None})Bedroll
Yes, you could finish by adding another replace({np.nan: None}). My comment was added to point out the potential pitfall when replacing np.nan's. The above certainly tripped me out for a bit!Downwards
M
8

Just an addition to @Andy Hayden's answer:

Since DataFrame.mask is the opposite twin of DataFrame.where, they have exactly the same signature but opposite meaning:

  • DataFrame.where is useful for replacing values where the condition is False.
  • DataFrame.mask is used for replacing values where the condition is True.

So in this question, using df.mask(df.isna(), other=None, inplace=True) might be more intuitive.

Magneton answered 29/4, 2019 at 4:21 Comment(0)
F
4

Quite old, yet I stumbled upon the very same issue. Try doing this:

df['col_replaced'] = df['col_with_npnans'].apply(lambda x: None if np.isnan(x) else x)
Flashback answered 9/11, 2016 at 14:48 Comment(1)
doesn't work if column data type is numeric because None just gets converted back into nan (pandas 0.23)Hautrhin
B
2

I believe the cleanest way would be to make use of the na_value argument in the pandas.DataFrame.to_numpy() method (docs):

na_value : Any, optional

The value to use for missing values. The default value depends on dtype and the dtypes of the DataFrame columns.

New in version 1.1.0.

You could e.g. convert to dictionaries with NaNs replaced by None using

columns = df.columns.tolist()
dicts_with_nan_replaced = [
    dict(zip(columns, x))
    for x in df.to_numpy(na_value=None)
]
Benue answered 13/7, 2021 at 12:51 Comment(1)
Your code keeps NaN as NaN, but you can fix it if you also pass dtype=object.Bedroll
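Following the comment above, a small sketch showing that adding dtype=object makes na_value=None stick (as the comment notes, without it a float frame keeps NaN):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan]})

# dtype=object lets the array hold a real None for missing values
arr = df.to_numpy(dtype=object, na_value=None)
assert arr[1, 0] is None
```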
A
2

Sometimes it is better to use this code. Note that np refers to numpy:

df = df.fillna(np.nan).replace([np.nan], [None])
Arethaarethusa answered 30/1, 2023 at 16:5 Comment(4)
Why is .fillna(np.nan) needed here?Bedroll
In my case replace doesn't work without it. It seems it's needed to convert na to a numpy object first, then use it in replace method.Arethaarethusa
By "na" which value are you referring to, exactly?Bedroll
I meant Pandas NaNArethaarethusa
H
1

Convert numpy NaN to pandas NA before replacing with the where statement:

df = df.replace(np.NaN, pd.NA).where(df.notnull(), None)
Heeler answered 1/12, 2021 at 7:7 Comment(0)
M
1

Astoundingly, none of the previous answers worked for me, so I had to do it for each column:

for column in df.columns:
    df[column] = df[column].where(pd.notnull(df[column]), None)
Manaker answered 12/5, 2022 at 13:43 Comment(2)
It would be useful if you can explain why the other answers did not work and how this one helps.Journalism
@YuvrajJaiswal I don't know why it didn't work, likewise I don't know exactly why my version works lol. I suppose series.where is more straight forward.Manaker
K
0

Do you have a code block to review by chance?

Using .loc, pandas can access records based on logical conditions (filtering) and act on them (when using =). Setting a .loc mask equal to some value will change the returned array in place (so be a touch careful here; I suggest testing on a copy of the df before using it in a code block).

df.loc[df['SomeColumn'].isna(), 'SomeColumn'] = None

The outer pattern is df.loc[row_label, column_label] = None. We use a boolean mask as the row_label: df['SomeColumn'].isna() returns a boolean array flagging every row where SomeColumn holds one of the null-like values pandas checks for with the .isna() method.

The column_label is used both when masking the dataframe for the row_label and to identify the column we want to act on.

Finally, we set the .loc selection equal to None, so the flagged rows/records are changed to None based on the masked index.
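One caveat worth knowing (not from the original answer): if SomeColumn is a float column, assigning None through .loc stores NaN again, because pandas coerces None to NaN in numeric columns. Casting to object first makes the None stick; a sketch using a hypothetical SomeColumn:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"SomeColumn": [1.0, np.nan]})

# Cast to object first so pandas does not coerce None back to NaN
df["SomeColumn"] = df["SomeColumn"].astype(object)
df.loc[df["SomeColumn"].isna(), "SomeColumn"] = None

assert df["SomeColumn"].iloc[1] is None
```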

Below are links to pandas documentation regarding .loc & .isna().

References:
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isna.html

Kirkwall answered 5/10, 2021 at 19:52 Comment(0)
M
0

After finding that neither the recommended answer nor the suggested alternative worked for my application after a pandas update to 1.3.2, I settled for safety with a brute-force approach:

buf = df.to_json(orient='records')
recs = json.loads(buf)
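The reason this round trip works is that to_json writes NaN as JSON null, and json.loads turns null back into None; a minimal check:

```python
import json

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan]})

# to_json emits null for NaN; json.loads maps null to None
recs = json.loads(df.to_json(orient="records"))
assert recs == [{"a": 1.0}, {"a": None}]
```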
Myeloid answered 2/12, 2021 at 20:16 Comment(0)
M
0

Yet another option, that actually did the trick for me:

df = df.astype(object).replace(np.nan, None)
Martynne answered 4/3, 2022 at 15:35 Comment(0)
B
0

Doing it by hand is the only way that is working for me right now.

This answer from @rodney cox worked for me in almost every case.

The following code sets all columns to the object data type and then replaces any null value with None. Setting the column data type to object is crucial because it prevents pandas from changing the type further.

for col in df.columns:
    df[col] = df[col].astype(object)
    df.loc[df[col].isnull(), col] = None

Warning: This solution is not efficient, because it processes columns that might not have np.nan values.

Buenrostro answered 30/5, 2022 at 16:16 Comment(0)
K
0

This should work: df["column"] = df["column"].apply(lambda x: None if pd.isnull(x) else x)
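This works for object columns; for numeric columns pandas coerces the None back to NaN when it rebuilds the Series (the same caveat noted in a comment on an earlier apply-based answer). A quick check with a string column (the column name is just an example):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"column": ["a", np.nan]})
df["column"] = df["column"].apply(lambda x: None if pd.isnull(x) else x)

# object dtype preserves the None
assert df["column"].iloc[1] is None
```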

Kreda answered 28/3, 2023 at 23:18 Comment(0)
D
-3

This worked for me:

df = df.fillna(0)
Danczyk answered 25/11, 2021 at 13:54 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.