How to get the mean for a whole dataframe instead of columns?
How do I get the mean for all of the values (except for NaN) in a pandas dataframe?

pd.DataFrame.mean() only gives the means for each column (or row, when setting axis=1), but I want the mean over the whole thing. And df.mean().mean() isn't really the wisest option (see below).

Note that in my specific real case, the dataframe has a large multiindex, which additionally complicates things. For situations where this does not matter, @EdChum's answer may be deemed more straightforward, and might be preferable to a faster solution in some cases.

Example code

import numpy as np
import pandas as pd

data1 = np.arange(16).reshape(4, 4)
df = pd.DataFrame(data=data1)

df.mean()
0    6.0
1    7.0
2    8.0
3    9.0
dtype: float64

df.mean().mean()
7.5

np.arange(16).mean()
7.5

This works, but if I mask parts of the df (which in reality is a correlation matrix with hundreds of rows/columns that, by its nature, has half of its entries filled with redundant data), it gets funny:

triang = np.triu_indices(4)
data2 = np.arange(4., 20.).reshape(4, 4)
data2[triang] = np.nan
df2 = pd.DataFrame(data=data2)

df2.mean().mean()
15.0

df2.mean().mean() first averages each column ((8+12+16)/3 = 12, (13+17)/2 = 15, and 18; column 3 is all NaN), then averages those three column means, which weights the columns unequally. But the mean of the six remaining values, (8. + 12. + 13. + 16. + 17. + 18.)/6, is 14.

How can I best get the "real" mean, short of writing some kind of loop that does the above by hand?

Reentry asked 22/6, 2018 at 10:25

You can use numpy.nanmean:

triang = np.triu_indices(4)
data2 = np.arange(4., 20.).reshape(4, 4)
data2[triang] = np.nan
df2 = pd.DataFrame(data=data2)

res = np.nanmean(df2)  # 14.0

Also possible via stack, as described by @EdChum, but slower:

df2 = pd.concat([df2]*100000)

%timeit np.nanmean(df2)              # 14.0ms
%timeit df2.stack().dropna().mean()  # 55.7ms

If your data is numeric only, you can also remove the Pandas overhead altogether, as in the sketch below.
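A minimal sketch, assuming the frame holds only numeric dtypes so that to_numpy() yields a plain float array:

arr = df2.to_numpy()   # raw float64 array, NaNs included
res = np.nanmean(arr)  # 14.0, computed without pandas dispatch overhead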

Scholium answered 22/6, 2018 at 10:31
I guess when the doc says "The average is taken over the flattened array by default", it probably really does just use the values and ignores my index. It does result in a single number, which also seems realistic for my real data. I'll play around with it for a bit until I understand it, though, before I accept the answer. – Reentry
@JC_CL, numpy will not use the index here, it'll only use the dataframe values. But do test for yourself. – Scholium

You can stack, then dropna and then call mean:

In[201]:
df2.stack().dropna().mean()

Out[201]: 14.0

stack converts the df into a single-column Series with a (row, column) MultiIndex; dropna then removes the NaN rows, after which mean computes the mean correctly.
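For illustration, a sketch of the intermediate objects (reusing df2 from the question; the explicit dropna keeps this robust across pandas versions, since stack's default NaN handling has changed over time):

s = df2.stack()  # Series of all cell values, indexed by a (row, column) MultiIndex
s = s.dropna()   # explicitly drop the NaN entries
res = s.mean()   # 14.0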

Ga answered 22/6, 2018 at 10:27
This works with the minimal example, but it appears to fail when the df has a large (multi)index. Probably would have to remove that index first. – Reentry
@Reentry Well, you never stated this in your example, so I can only answer what is stated. – Ga
You are right. I just wasn't expecting the indices to play a role when I'm only interested in the values. However, your method still works; I just have to stack, stack upon stack… But I'll take the np.nanmean(df) approach, since it's just 62.9 µs per loop, whereas df.stack().stack().stack().dropna().mean() is quite a bit slower at 19.9 ms per loop. – Reentry
@Reentry I forgot about np.nanmean; that would be my preferred method here also. Not surprising that my method is slower, as there are a lot of intermediate structures being generated. The only difference is that you keep everything in pandas land. – Ga
Still, could come in handy at some point. I added a remark to the original question. – Reentry

A solution similar in spirit to @EdChum's, but using numpy (which by default computes the mean of the flattened array):

df.to_numpy().mean()

If we want to avoid NaNs, possible solutions are

df.to_numpy()[df.notna().to_numpy()].mean()  # boolean-mask the array, not the frame; df[df.notna()] would keep the NaNs

or

np.nanmean(df.to_numpy())

(np.nanmean is similar to np.mean but ignores NaNs; unlike mean, it is not available as an array method, so it cannot be tacked on as a suffix.)
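A self-contained sketch (reconstructing df2 as in the question) checking that both NaN-avoiding variants agree:

import numpy as np
import pandas as pd

data2 = np.arange(4., 20.).reshape(4, 4)
data2[np.triu_indices(4)] = np.nan
df2 = pd.DataFrame(data=data2)

vals = df2.to_numpy()
print(vals[df2.notna().to_numpy()].mean())  # 14.0
print(np.nanmean(vals))                     # 14.0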

Pizor answered 15/1, 2024 at 13:24
