Your additions are basically a random walk, and the error you make is a different random walk (because you have roundoff error at almost every step). (Note that `Eigen::MatrixXf::Random` fills the matrix with random values in [-1, 1].)
Let's assume that you are, on average, at a float value of `10.0` (estimated only from that single data point you provided). Your epsilon (how much absolute rounding error you will probably make with any addition) is thus around `10.0 * 6e-8` (the per-addition relative rounding error for floats is the unit roundoff `2^-24`, i.e. half the machine epsilon of `2^-23`, or about `6e-8`), so roughly `6e-7`.
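For reference, here is a minimal sketch (plain C++, nothing Eigen-specific) that prints those constants; the `10.0` partial-sum magnitude is just the assumption made above:

```cpp
#include <iostream>
#include <limits>

int main() {
    const float eps = std::numeric_limits<float>::epsilon(); // 2^-23, about 1.19e-7
    const float u   = eps / 2.0f;                            // unit roundoff, 2^-24, about 6e-8

    const float runningSum = 10.0f; // assumed typical magnitude of the partial sum (see above)

    std::cout << "float epsilon     : " << eps << '\n'
              << "unit roundoff     : " << u << '\n'
              << "per-addition error: " << runningSum * u << '\n'; // roughly 6e-7
}
```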
If you do `N = 1000000` random error-accumulation steps of step size `+6e-7` (or `-6e-7`), you have a good chance of ending up at around `sqrt(N) * stepSize = 1000 * 6e-7 = 6e-4` (see here), which is not-too-coincidentally close to your 0.01%.
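To illustrate that `sqrt(N)` behaviour, you can simulate such a walk directly; this is only a sketch, with the `±6e-7` step size taken from the per-addition error estimate above:

```cpp
#include <cmath>
#include <iostream>
#include <random>

int main() {
    const int    N    = 1000000; // number of additions
    const double step = 6e-7;    // estimated absolute rounding error per addition

    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);

    // Random walk: each addition pushes the accumulated error up or down by ~step.
    double drift = 0.0;
    for (int i = 0; i < N; ++i)
        drift += coin(rng) ? step : -step;

    std::cout << "simulated accumulated error: " << drift << '\n'                // typically a few 1e-4
              << "sqrt(N) * step             : " << std::sqrt(N) * step << '\n'; // 6e-4
}
```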
I would similarly estimate an absolute error of `1000 * 10 * 1e-16 = 1e-12` for the addition of 1 million random doubles between -1 and 1 due to floating point precision (same reasoning: `sqrt(N)` times the assumed sum magnitude of 10 times double's roundoff of roughly `1e-16`).
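Both estimates are easy to check empirically. A possible sketch, assuming the same 1000x1000 setup as in the question and using the double sum as the reference (its own error is negligible by comparison); note that Eigen's reduction order may differ from a plain left-to-right loop, which can shrink the error somewhat:

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main() {
    // 1000 x 1000 = 10^6 single-precision values, uniform in [-1, 1]
    const Eigen::MatrixXf m = Eigen::MatrixXf::Random(1000, 1000);

    const float  sumFloat  = m.sum();                // accumulated in float
    const double sumDouble = m.cast<double>().sum(); // accumulated in double

    std::cout << "float sum     : " << sumFloat << '\n'
              << "double sum    : " << sumDouble << '\n'
              << "absolute error: " << std::abs(sumDouble - sumFloat) << '\n';
}
```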
This is obviously not a rigorous mathematical treatment. It just shows that the error is certainly in the right ballpark.
The common way to reduce this issue is to sort the floats in order of ascending magnitude before adding them, but you can still be arbitrarily imprecise when doing so. (Example: keep adding the number `1.0f` to itself - the sum will stop increasing at `2^24`, where the absolute epsilon (the spacing between adjacent representable floats) becomes larger than `1.0f`; see the sketch below.)
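That saturation is easy to reproduce; a minimal sketch:

```cpp
#include <iostream>

int main() {
    float sum = 0.0f;
    for (int i = 0; i < 20000000; ++i)
        sum += 1.0f; // once sum reaches 2^24, sum + 1.0f rounds back to sum

    // sum ends up at 16777216 (2^24), not 20000000: adjacent floats are 2.0
    // apart at that magnitude, so adding 1.0f no longer changes the value.
    std::cout << std::fixed << sum << '\n';
}
```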
`MatrixXf M` is a matrix of 10^6 single-precision floats. Try with doubles. Also, somewhat related: #6699566 – Engdahl