Short version:
The reason it's different is that pandas uses bottleneck (if it's installed) when calling the mean operation, rather than just relying on numpy. bottleneck is presumably used because it appears to be faster than numpy (at least on my machine), but at the cost of precision. The two happen to match for the 64-bit version, but differ in 32-bit land (which is the interesting part).
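If you just want the more precise number regardless of which library ends up doing the work, you can ask for a float64 accumulator explicitly (ndarray.mean accepts a dtype argument); a quick sketch:

import numpy as np

y32 = np.random.normal(-9., .005, size=900000).astype(np.float32)

# ndarray.mean takes a dtype for the accumulator, so the summation
# happens in double precision even though the data stays float32
print(y32.mean(dtype=np.float64))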
Long version:
It's extremely difficult to tell what's going on just by inspecting the source code of these modules (they're quite complex, even for a computation as simple as mean; it turns out numerical computing is hard). It's best to use the debugger to avoid brain-compiling the code and making those kinds of mistakes. The debugger won't make a mistake in logic; it'll tell you exactly what's going on.
Here's some of my stack trace (values differ slightly since there's no seed for the RNG):
Can reproduce (Windows):
>>> import numpy as np; import pandas as pd
>>> x=np.random.normal(-9.,.005,size=900000)
>>> df=pd.DataFrame(x,dtype='float32',columns=['x'])
>>> df['x'].mean()
-9.0
>>> x.mean()
-9.0000037501099754
>>> x.astype(np.float32).mean()
-9.0000029
Nothing extraordinary going on with numpy's version. It's the pandas version that's a little wacky. Let's have a look inside df['x'].mean():
>>> def test_it_2():
...     import pdb; pdb.set_trace()
...     df['x'].mean()
>>> test_it_2()
... # Some stepping/poking around that isn't important
(Pdb) l
2307
2308         if we have an ndarray as a value, then simply perform the operation,
2309         otherwise delegate to the object
2310
2311         """
2312 ->      delegate = self._values
2313         if isinstance(delegate, np.ndarray):
2314             # Validate that 'axis' is consistent with Series's single axis.
2315             self._get_axis_number(axis)
2316             if numeric_only:
2317                 raise NotImplementedError('Series.{0} does not implement '
(Pdb) delegate.dtype
dtype('float32')
(Pdb) l
2315             self._get_axis_number(axis)
2316             if numeric_only:
2317                 raise NotImplementedError('Series.{0} does not implement '
2318                                           'numeric_only.'.format(name))
2319             with np.errstate(all='ignore'):
2320 ->              return op(delegate, skipna=skipna, **kwds)
2321
2322         return delegate._reduce(op=op, name=name, axis=axis, skipna=skipna,
2323                                 numeric_only=numeric_only,
2324                                 filter_type=filter_type, **kwds)
So we found the trouble spot, but now things get kind of weird:
(Pdb) op
<function nanmean at 0x000002CD8ACD4488>
(Pdb) op(delegate)
-9.0
(Pdb) delegate_64 = delegate.astype(np.float64)
(Pdb) op(delegate_64)
-9.000003749978807
(Pdb) delegate.mean()
-9.0000029
(Pdb) delegate_64.mean()
-9.0000037499788075
(Pdb) np.nanmean(delegate, dtype=np.float64)
-9.0000037499788075
(Pdb) np.nanmean(delegate, dtype=np.float32)
-9.0000029
Note that delegate.mean() and np.nanmean output -9.0000029 with type float32, not -9.0 as the pandas nanmean does. With a bit of poking around, you can find the source of the pandas nanmean in pandas.core.nanops. Interestingly, at first it actually looks like it should match numpy. Let's have a look at the pandas nanmean:
(Pdb) import inspect
(Pdb) src = inspect.getsource(op).split("\n")
(Pdb) for line in src: print(line)
@disallow('M8')
@bottleneck_switch()
def nanmean(values, axis=None, skipna=True):
    values, mask, dtype, dtype_max = _get_values(values, skipna, 0)

    dtype_sum = dtype_max
    dtype_count = np.float64
    if is_integer_dtype(dtype) or is_timedelta64_dtype(dtype):
        dtype_sum = np.float64
    elif is_float_dtype(dtype):
        dtype_sum = dtype
        dtype_count = dtype
    count = _get_counts(mask, axis, dtype=dtype_count)
    the_sum = _ensure_numeric(values.sum(axis, dtype=dtype_sum))

    if axis is not None and getattr(the_sum, 'ndim', False):
        the_mean = the_sum / count
        ct_mask = count == 0
        if ct_mask.any():
            the_mean[ct_mask] = np.nan
    else:
        the_mean = the_sum / count if count > 0 else np.nan

    return _wrap_results(the_mean, dtype)
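Note the elif is_float_dtype(dtype) branch: for a float32 input, both the sum accumulator and the count stay float32. Run outside the debugger, the same arithmetic looks roughly like this (the variable names mirror the source above, but this is my illustrative re-creation, not pandas code):

import numpy as np

vals = np.random.normal(-9., .005, size=900000).astype(np.float32)

dtype_sum = vals.dtype                          # float32 stays float32
count = np.float32(vals.size)                   # dtype_count is float32 too
the_sum = vals.sum(axis=None, dtype=dtype_sum)  # numpy's (pairwise) sum
print(the_sum / count)                          # close to numpy's float32 mean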
Here's a (short) version of the bottleneck_switch decorator:
import bottleneck as bn
...

class bottleneck_switch(object):

    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def __call__(self, alt):
        bn_name = alt.__name__

        try:
            bn_func = getattr(bn, bn_name)
        except (AttributeError, NameError):  # pragma: no cover
            bn_func = None

        ...

            if (_USE_BOTTLENECK and skipna and
                    _bn_ok_dtype(values.dtype, bn_name)):
                result = bn_func(values, axis=axis, **kwds)
This is called with alt as the pandas nanmean function, so bn_name is 'nanmean', and this is the attr that's grabbed from the bottleneck module:
(Pdb) l
 93                 result = np.empty(result_shape)
 94                 result.fill(0)
 95                 return result
 96
 97             if (_USE_BOTTLENECK and skipna and
 98  ->                 _bn_ok_dtype(values.dtype, bn_name)):
 99                 result = bn_func(values, axis=axis, **kwds)
100
101                 # prefer to treat inf/-inf as NA, but must compute the fun
102                 # twice :(
103                 if _has_infs(result):
(Pdb) n
> d:\anaconda3\lib\site-packages\pandas\core\nanops.py(99)f()
-> result = bn_func(values, axis=axis, **kwds)
(Pdb) alt
<function nanmean at 0x000001D2C8C04378>
(Pdb) alt.__name__
'nanmean'
(Pdb) bn_func
<built-in function nanmean>
(Pdb) bn_name
'nanmean'
(Pdb) bn_func(values, axis=axis, **kwds)
-9.0
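Stripped of the _USE_BOTTLENECK flag and the dtype checks, the dispatch boils down to a name-based lookup. Here's a minimal sketch of the pattern (prefer_bottleneck is my simplification, not the real decorator):

import bottleneck as bn

def prefer_bottleneck(alt):
    # Grab the bottleneck function with the same name as the wrapped
    # one, falling back to the original implementation if it's missing.
    bn_func = getattr(bn, alt.__name__, None)

    def f(values, **kwds):
        if bn_func is not None:
            return bn_func(values, **kwds)
        return alt(values, **kwds)

    return f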
Pretend that the bottleneck_switch() decorator doesn't exist for a second. We can see that manually stepping through this function (without bottleneck) gets you the same result as numpy:
(Pdb) from pandas.core.nanops import _get_counts
(Pdb) from pandas.core.nanops import _get_values
(Pdb) from pandas.core.nanops import _ensure_numeric
(Pdb) values, mask, dtype, dtype_max = _get_values(delegate, skipna=skipna)
(Pdb) count = _get_counts(mask, axis=None, dtype=dtype)
(Pdb) count
900000.0
(Pdb) values.sum(axis=None, dtype=dtype) / count
-9.0000029
That never gets called, though, if you have bottleneck installed. Instead, the bottleneck_switch() decorator blasts over the nanmean function with bottleneck's version. This is where the discrepancy lies (interestingly, it matches in the float64 case, though):
(Pdb) import bottleneck as bn
(Pdb) bn.nanmean(delegate)
-9.0
(Pdb) bn.nanmean(delegate.astype(np.float64))
-9.000003749978807
bottleneck is used solely for speed, as far as I can tell. I'm assuming it takes some kind of shortcut in its nanmean function, but I didn't look into it much (see @ead's answer for details on this topic). You can see that it's typically a bit faster than numpy in their benchmarks: https://github.com/kwgoodman/bottleneck. Clearly, the price to pay for this speed is precision.
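The precision loss itself is just float32 rounding under simple accumulation: once the running sum's magnitude passes 2**22 (about 4.2 million), float32 can only represent multiples of 0.5, so the fractional part of each increment gets rounded away. A small demonstration:

import numpy as np

s = np.float32(-8000000.0)      # a running sum late in the 900k array
inc = np.float32(-9.0000037)

# float32 spacing near 8e6 is 0.5, so the sum moves by exactly -9.0
print(np.float32(s + inc) - s)  # -9.0, the .0000037 is lost for good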
Is bottleneck actually faster?
Sure looks like it (at least on my machine).
In [1]: import numpy as np; import pandas as pd
In [2]: x=np.random.normal(-9.8,.05,size=900000)
In [3]: y_32 = x.astype(np.float32)
In [13]: %timeit np.nanmean(y_32)
100 loops, best of 3: 5.72 ms per loop
In [14]: %timeit bn.nanmean(y_32)
1000 loops, best of 3: 854 µs per loop
It might be nice for pandas to introduce a flag here (one setting for speed, another for better precision, with the default being speed, since that's the current implementation). Some users care much more about the accuracy of the computation than the speed at which it happens.
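For what it's worth, depending on your pandas version such a knob may already exist: newer releases expose a compute.use_bottleneck option that turns the fast path off entirely:

import pandas as pd

# Disable the bottleneck fast path; reductions then fall back to the
# numpy-based implementations (available in newer pandas releases)
pd.set_option('compute.use_bottleneck', False)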
HTH.
x contains doubles but your df contains floats. That will always give you a difference, but not as large as the original one. Any chance that you have missing values that mess with how the mean is computed? – Duce

-9.8 is being repeatedly added to something bigger than 2**23, and limited float32 resolution means that the actual sum changes by exactly -10.0 for most of the random samples. Use of pairwise summation or Kahan summation instead of a simple accumulating sum would have improved the result greatly here. But yes, computing the mean in double precision is the obvious quick fix. – Scholarship

What about df['x'].sum() / len(df.index), which gives the correct result even with float32? – Snifter

NumPy uses pairwise summation for sum operations in some (but not all) circumstances; it's possible that for whatever reason this particular use of df['x'].sum() ends up in one of those NumPy cases. – Scholarship
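To make the pairwise/Kahan point in the comment above concrete, here's a minimal compensated-summation sketch carried out entirely in float32 (kahan_mean_f32 is my own illustrative helper, not a pandas or numpy API):

import numpy as np

def kahan_mean_f32(a):
    # Kahan (compensated) summation: track the low-order bits that a
    # plain float32 accumulator would throw away at each step.
    s = np.float32(0.0)
    c = np.float32(0.0)
    for v in a:
        y = v - c
        t = np.float32(s + y)
        c = (t - s) - y
        s = t
    return s / np.float32(a.size)

x32 = np.random.normal(-9., .005, size=100000).astype(np.float32)
print(kahan_mean_f32(x32))            # close to the float64 result
print(x32.astype(np.float64).mean())  # double-precision reference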