Why does numpy std() give a different result to matlab std()?
I'm trying to convert MATLAB code to NumPy and noticed that NumPy's std function gives a different result from MATLAB's.

In MATLAB:

std([1,3,4,6])
ans =  2.0817

In NumPy:

np.std([1,3,4,6])
1.8027756377319946

Is this normal? And how should I handle this?

Belostok answered 22/12, 2014 at 9:52 Comment(0)

The NumPy function np.std takes an optional parameter ddof: "Delta Degrees of Freedom". By default, this is 0. Set it to 1 to get the MATLAB result:

>>> np.std([1,3,4,6], ddof=1)
2.0816659994661326

To add a little more context, in the calculation of the variance (of which the standard deviation is the square root) we typically divide by the number of values we have.

But if we select a random sample of N elements from a larger distribution and calculate the variance, dividing by N can lead to an underestimate of the actual variance. To fix this, we can lower the number we divide by (the degrees of freedom) to a number less than N (usually N-1). The ddof parameter allows us to change the divisor by the amount we specify.

Unless told otherwise, NumPy will calculate the biased estimator for the variance (ddof=0, dividing by N). This is what you want if you are working with the entire distribution (and not a subset of values which have been randomly picked from a larger distribution). If the ddof parameter is given, NumPy divides by N - ddof instead.
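To make the divisor explicit, here is a short sketch (plain NumPy, using the numbers from the question) that computes both estimators by hand:

```python
import numpy as np

x = np.array([1, 3, 4, 6], dtype=float)
n = x.size
mean = x.mean()
ss = ((x - mean) ** 2).sum()        # sum of squared deviations = 13.0

biased = np.sqrt(ss / n)            # divide by N      -> matches np.std(x)
unbiased = np.sqrt(ss / (n - 1))    # divide by N - 1  -> matches np.std(x, ddof=1)

print(biased)    # 1.8027756377319946
print(unbiased)  # 2.0816659994661326
```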

The default behaviour of MATLAB's std is to correct the bias for sample variance by dividing by N-1. This gets rid of some (but probably not all) of the bias in the standard deviation. This is likely to be what you want if you're using the function on a random sample of a larger distribution.

The nice answer by @hbaderts gives further mathematical details.

Ecliptic answered 22/12, 2014 at 9:54 Comment(3)
I'll add that in Matlab, std([1 3 4 6],1) is equivalent to NumPy's default np.std([1,3,4,6]). All of this is quite clearly explained in the documentation for Matlab and NumPy, so I strongly recommend that the OP be sure to read those in the future.Linnell
At some point this standard has changed: np.std() = np.std( ddof=1) , even though the documentation says that np.std() should default to ddof=0...Soaring
Not sure why numpy doesn't just default to 1 for this param. It's probably too late now, but yikes.Sect

The standard deviation is the square root of the variance. The variance of a random variable X is defined as

    Var(X) = E[(X - E[X])^2]

An estimator for the variance would therefore be

    s_n^2 = (1/n) * sum_{i=1}^{n} (x_i - x_bar)^2

where x_bar = (1/n) * sum_{i=1}^{n} x_i denotes the sample mean. For randomly selected x_i, it can be shown that this estimator does not converge to the real variance sigma^2, but to

    E[s_n^2] = ((n - 1) / n) * sigma^2

If you randomly select samples and estimate the sample mean and variance, you will have to use a corrected (unbiased) estimator

    s^2 = (1 / (n - 1)) * sum_{i=1}^{n} (x_i - x_bar)^2

which does converge to sigma^2. Using n-1 instead of n as the divisor is also called Bessel's correction.
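The bias is easy to check numerically. A quick sketch (the normal distribution, seed, sample size, and trial count here are arbitrary illustrative choices): average the two estimators over many small samples and compare them to the true variance.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0                        # true variance of the distribution
n = 5                               # small samples make the bias visible

# 100,000 independent samples of size n
samples = rng.normal(0.0, np.sqrt(sigma2), size=(100_000, n))

# Average of the biased estimator: close to (n-1)/n * sigma^2 = 3.2
print(samples.var(axis=1, ddof=0).mean())
# Average of the unbiased estimator: close to sigma^2 = 4.0
print(samples.var(axis=1, ddof=1).mean())
```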

Now by default, MATLAB's std calculates the unbiased estimator with the correction term n-1. NumPy, however (as @ajcr explained), calculates the biased estimator with no correction term by default. The parameter ddof lets you set any correction term n-ddof; by setting it to 1 you get the same result as in MATLAB.

Similarly, MATLAB's std accepts a second parameter w, which specifies the "weighting scheme". The default, w=0, results in the correction term n-1 (unbiased estimator), while w=1 uses just n as the divisor (biased estimator).
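Putting the two conventions side by side (NumPy only; the equivalent MATLAB calls are shown as comments, using the values from the question):

```python
import numpy as np

x = [1, 3, 4, 6]

# MATLAB: std(x) or std(x, 0)  -> divisor n-1 (unbiased)
print(np.std(x, ddof=1))   # 2.0816659994661326

# MATLAB: std(x, 1)           -> divisor n (biased, NumPy's default)
print(np.std(x, ddof=0))   # 1.8027756377319946
```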

Enterectomy answered 22/12, 2014 at 10:55 Comment(3)
In the formula for the corrected estimator, the factor n (within the sum) shouldn't be present.Adolfo
The intuition behind the n-1 term in the variance: you already used your samples for estimating the mean which you will use for approximating the variance. This introduces a correlation and thus ddof must be 1.Aldis
@Adolfo I've fixed the typo for posterity. What happened in the original equation was the upper limit of the sum wasn't being rendered properly. Instead of n going at the top of the summation notation, it went inside the sum.Rex

For people who aren't great with statistics, a simplistic guide is:

  • Use ddof=1 if you're calculating np.std() for a sample taken from your full dataset.

  • Use ddof=0 (the default) if you're calculating np.std() for the full population.

The ddof correction is applied to samples to counteract the bias that arises from estimating the variance from only a subset of the data.
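As a concrete sketch of the two cases (the population and sample here are invented for illustration; the distribution parameters and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend this is the entire population: every value is known.
population = rng.normal(10.0, 2.0, size=100_000)

# A small sample randomly drawn from that population.
sample = rng.choice(population, size=30, replace=False)

print(np.std(population))        # full population: ddof=0 (the default)
print(np.std(sample, ddof=1))    # sample: ddof=1 corrects the bias
```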

Pitchstone answered 14/6, 2017 at 9:42 Comment(0)
