What might be the cause of 'invalid value encountered in less_equal' in numpy
I experienced a RuntimeWarning

 RuntimeWarning: invalid value encountered in less_equal

Generated by this line of my code:

center_dists[j] <= center_dists[i]

Both center_dists[j] and center_dists[i] are numpy arrays.

What might be the cause of this warning?

Kioto answered 22/1, 2016 at 20:14 Comment(2)
Are the numpy arrays of equal length?Scarf
Possible duplicate of inequality comparison of numpy array with nan to a scalarMobile

That's most likely happening because of an np.nan somewhere in the inputs involved. An example is shown below -

In [1]: A = np.array([4, 2, 1])

In [2]: B = np.array([2, 2, np.nan])

In [3]: A<=B
RuntimeWarning: invalid value encountered in less_equal
Out[3]: array([False,  True, False], dtype=bool)

Any comparison involving np.nan outputs False. Let's confirm it for a broadcasted comparison. Here's a sample -

In [1]: A = np.array([4, 2, 1])

In [2]: B = np.array([2, 2, np.nan])

In [3]: A[:,None] <= B
RuntimeWarning: invalid value encountered in less_equal
Out[3]: 
array([[False, False, False],
       [ True,  True, False],
       [ True,  True, False]], dtype=bool)

Notice the third column in the output, which corresponds to comparisons involving the third element of B (np.nan): it is all False values.
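
If you want to tell which comparisons were affected rather than just suppress the warning, np.isnan can locate the offending positions first. A minimal sketch (variable names are mine, not part of the original answer):

import numpy as np

A = np.array([4, 2, 1], dtype=float)
B = np.array([2, 2, np.nan])

nan_mask = np.isnan(A) | np.isnan(B)  # True where a NaN makes the comparison meaningless
print(np.flatnonzero(nan_mask))       # indices that would trigger the warning

valid = ~nan_mask                     # compare only the valid positions: no warning
print(A[valid] <= B[valid])           # [False  True]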

Wisniewski answered 22/1, 2016 at 20:43 Comment(6)
How can I avoid printing the RuntimeWarning? I'm doing a lot of comparisons that involve nan, so I don't want to print them all....Memory
@Memory You don't want to print the RuntimeWarning or you want to tell which comparisons were because of comparing with NaNs?Wisniewski
I don't want to print the RuntimeWarning.Memory
@Memory Use this: warnings.filterwarnings("ignore", category=RuntimeWarning) at the top of the script?Wisniewski
Interestingly, if the arrays contain a single element (one of which is NaN), no warning is issued (which could make one think that NaN comparison is not the real issue with the warning).Altamira
Re: warnings.filterwarnings(), using with np.errstate() is usually better. For details, see my answer.Boiled

As a follow-up to Divakar's answer and his comment on how to suppress the RuntimeWarning, a safer way is to suppress it only locally using with np.errstate() (docs): it is generally good to be alerted when comparisons to np.nan yield False, and to ignore the warning only when that is really what is intended. Here it is for the OP's example:

with np.errstate(invalid='ignore'):
    center_dists[j] <= center_dists[i]

Upon exiting the with block, error handling is reset to what it was before.

Instead of invalid value encountered, one can also ignore all errors by passing all='ignore'. Interestingly, this is missing from the kwargs in the docs for np.errstate(), but not in the ones for np.seterr(). (Seems like a small bug in the np.errstate() docs.)
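
A quick sketch of that scoping behaviour (my example, not part of the original answer):

import numpy as np

a = np.array([1.0, np.nan])

with np.errstate(invalid='ignore'):
    a <= 1.0   # no RuntimeWarning inside the block

a <= 1.0       # the warning is raised again out here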

Boiled answered 5/2, 2019 at 15:37 Comment(1)
A perfect solution, thank you. I usually keep nans on purpose because I know they will eventually fail all comparisons and get masked.Parament

Adding to the above answers, another way to suppress this warning is to call numpy.less explicitly, supplying its where and out parameters:

np.less([1, 2], [2, np.nan])  

outputs array([ True, False]) and causes the runtime warning,

np.less([1, 2], [2, np.nan], where=np.isnan([2, np.nan])==False)

does not calculate a result for the 2nd array element, which according to the docs leaves that value undefined (I got True output for both elements), while

np.less([1, 2], [2, np.nan], where=np.isnan([2, np.nan])==False, out=np.full((1, 2), False))

writes the result into an array pre-initialized to False (and so always gives False in the 2nd element).
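
A slightly tidier variant of the same idea, using ~np.isnan for the mask and an out array matching the input shape (my rewording, not the original answerer's code):

import numpy as np

a = np.array([1.0, 2.0])
b = np.array([2.0, np.nan])

mask = ~(np.isnan(a) | np.isnan(b))  # compare only where both values are real
out = np.full(a.shape, False)        # predefined result for the skipped slots
np.less(a, b, where=mask, out=out)   # no RuntimeWarning
print(out)                           # [ True False]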

Mobile answered 10/11, 2019 at 7:17 Comment(0)

This happens due to NaN values in the dataframe, which is completely fine for a DataFrame.

In PyCharm, this worked like a charm for me:

import warnings

warnings.simplefilter(action="ignore", category=RuntimeWarning)
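
Note this silences every RuntimeWarning in the process. If you only want it silenced in one place, a scoped variant with the standard library (my example, not specific to PyCharm) is:

import warnings
import numpy as np

a = np.array([1.0, np.nan])

with warnings.catch_warnings():
    warnings.simplefilter(action="ignore", category=RuntimeWarning)
    a <= 1.0  # warning suppressed only inside this block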
Apportionment answered 31/7, 2019 at 7:37 Comment(0)

Numpy dtypes are strict, so it doesn't produce an array like np.array([False, True, np.nan]); it returns array([ 0., 1., nan]), which is a float array.

If you try to change a bool array like:

x = np.array([False, False, False])
x[0] = 5

x will become array([ True, False, False]) ... wow

But I think 5 > np.nan cannot be False; it should be nan. False would mean that an actual comparison of data was made and returned a result, like 3 > 5, which I think is a disaster. Numpy produces data that we actually don't have. If it could return nan, then we could handle it with ease.

So I tried to modify the behavior with a function.

def ngrater(x, y):
    with np.errstate(invalid='ignore'):  # suppress the RuntimeWarning locally
        c = x > y
        c = c.astype(object)             # object dtype so elements can hold np.nan
        c[np.isnan(x)] = np.nan          # comparisons against NaN become NaN
        c[np.isnan(y)] = np.nan
        return c
a = np.array([np.nan,1,2,3,4,5, np.nan, np.nan, np.nan]) #9 elements
b = np.array([0,1,-2,-3,-4,-5, -5, -5, -5]) #9 elements

ngrater(a,b)

returns: array([nan, False, True, True, True, True, nan, nan, nan], dtype=object)

But I think the whole memory structure changes that way. Instead of a memory block of uniform units, it produces a block of pointers, with the real data somewhere else, so the function may perform slower, and probably that's why Numpy doesn't do this. We would need a superBool dtype that also holds np.nan, or we just have to use float arrays: +1: True, -1: False, nan: nan.
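
A minimal sketch of that float-coded alternative (a hypothetical helper I am calling ngrater_float), which keeps a compact float array instead of an object array:

import numpy as np

def ngrater_float(x, y):
    # Tristate encoded as floats: +1 = True, -1 = False, nan = unknown
    with np.errstate(invalid='ignore'):
        c = np.where(x > y, 1.0, -1.0)
    c[np.isnan(x) | np.isnan(y)] = np.nan  # comparisons with NaN stay NaN
    return c

a = np.array([np.nan, 1, 2, 3])
b = np.array([0, 1, -2, -3], dtype=float)
print(ngrater_float(a, b))  # [nan -1.  1.  1.]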

Estovers answered 9/8, 2019 at 19:14 Comment(0)
