Tiny numbers in place of zero?

I have been making a matrix class (as a learning exercise) and I have come across an issue whilst testing my inverse function.

I input an arbitrary matrix as such:

2 1 1
1 2 1
1 1 2

I had it calculate the inverse and got the correct result:

0.75 -0.25 -0.25
-0.25 0.75 -0.25
-0.25 -0.25 0.75

But when I multiply the two together to check that I get the identity matrix, I get:

1 5.5111512e-017 0
0 1 0
-1.11022302e-016 0 1

Why am I getting these results? I could understand some rounding error if I were multiplying weird numbers, but the sum it's doing is:

2 * -0.25 + 1 * 0.75 + 1 * -0.25

which is clearly 0, not 5.5111512e-017.

If I manually get it to do the calculation, e.g.:

std::cout << (2 * -0.25 + 1 * 0.75 + 1 * -0.25) << "\n";

I get 0 as expected?

All the numbers are represented as doubles. Here's my multiplication operator overload:

Matrix operator*(const Matrix& A, const Matrix& B)
{
    if(A.get_cols() == B.get_rows())
    {
        // Result has A's row count and B's column count; this relies on the
        // constructor zero-initialising the entries before the += below.
        Matrix temp(A.get_rows(), B.get_cols());
        for(unsigned m = 0; m < temp.get_rows(); ++m)
        {
            for(unsigned n = 0; n < temp.get_cols(); ++n)
            {
                // Accumulate the dot product of row m of A and column n of B.
                // The inner dimension is A.get_cols() (== B.get_rows()).
                for(unsigned i = 0; i < A.get_cols(); ++i)
                {
                    temp(m, n) += A(m, i) * B(i, n);
                }
            }
        }

        return temp;
    }

    throw std::runtime_error("Bad Matrix Multiplication");
}

and the access functions:

double& Matrix::operator()(unsigned r, unsigned c)
{
    return data[cols * r + c];
}

double Matrix::operator()(unsigned r, unsigned c) const
{
    return data[cols * r + c];
}

Here's the function to find the inverse:

Matrix Inverse(Matrix& M)
{
    if(M.rows != M.cols)
    {
        throw std::runtime_error("Matrix is not square");
    }

    int r = 0;
    int c = 0;

    // Build the augmented matrix [M | I].
    Matrix augment(M.rows, M.cols * 2);
    augment.copy(M);

    for(r = 0; r < M.rows; ++r)
    {
        for(c = M.cols; c < M.cols * 2; ++c)
        {
            augment(r, c) = (r == (c - M.cols) ? 1.0 : 0.0);
        }
    }

    // Gauss-Jordan elimination: normalise each pivot row, then subtract
    // multiples of it from every other row.
    for(int R = 0; R < augment.rows; ++R)
    {
        double n = augment(R, R); // pivot (no row swaps, so this must be non-zero)
        for(c = 0; c < augment.cols; ++c)
        {
            augment(R, c) /= n;
        }

        for(r = 0; r < augment.rows; ++r)
        {
            if(r == R) { continue; }
            double a = augment(r, R);

            for(c = 0; c < augment.cols; ++c)
            {
                augment(r, c) -= a * augment(R, c);
            }
        }
    }

    // The right-hand half of the augmented matrix now holds M's inverse.
    Matrix inverse(M.rows, M.cols);
    for(r = 0; r < M.rows; ++r)
    {
        for(c = M.cols; c < M.cols * 2; ++c)
        {
            inverse(r, c - M.cols) = augment(r, c);
        }
    }

    return inverse;
}
Gynecologist answered 3/6, 2011 at 18:0 Comment(7)
search: floating point math inaccuracies – Blackman
All of the numbers you're working with are exactly representable in base 2, so this is indeed curious. – Col
I was under the impression that that only came into effect when you were using silly numbers, not numbers like 1/2 or 3/4? – Gynecologist
Please correct std::cout << (2 * -0.25 + 1 * 0.75 + 1 * -0.25) << "\n"; to std::cout << (2.0 * -0.25 + 1.0 * 0.75 + 1.0 * -0.25) << "\n"; and see if you get the same thing – Cribbage
I do get the same thing. The problem's been found. I hadn't set my stream precision high enough for the true value of the numbers to be shown. Although I just read that might not be the problem. I'm so confused D: – Gynecologist
There's one thing missing here - the code to generate the inverse. Could you add that please? – Col
I have added it for you. – Gynecologist

You've got numbers like 0.250000000000000005 in your inverted matrix; they're just rounded for display, so you see nice little round numbers like 0.25.
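You can see this by raising the stream precision (or switching to hexfloat). A small standalone sketch, where 0.24999999999999997 simply stands in for whatever value the inversion actually produced:

#include <iomanip>
#include <iostream>

int main()
{
    // Not exactly 0.25: this literal rounds to the double just below it (0.25 - 2^-55).
    double computed = 0.24999999999999997;

    std::cout << computed << "\n";                           // "0.25" at the default 6 significant digits
    std::cout << std::setprecision(17) << computed << "\n";  // "0.24999999999999997"
    std::cout << std::hexfloat << computed << "\n";          // exact bits: 0x1.fffffffffffffp-3 (formatting varies by implementation)
}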

Catlaina answered 3/6, 2011 at 18:8 Comment(13)
Ahh yes. I set the stream precision reallllly high and the -0.25s were being displayed as -0.2499999999999999999999997 Thanks :S – Gynecologist
@Rarge, that might be due to the inability of converting an exact -0.25 representation back into decimal. I wouldn't draw any conclusions from it. – Col
@Mark: No, -0.25 has an exact floating-point representation and will be displayed as -0.25000000... The output of the matrix inversion algorithm is clearly the culprit here. – Understandable
@Rarge, if you use printf with the %a specification you'll get the exact representation in hex, which will leave no doubt. – Col
@Rarge: OK, now try %.30a. That will tell you the real representation. – Understandable
@Mark: %a rounds up or down, like all format specifiers. If you want the exact representation, you have to specify a big enough precision. – Understandable
-0x1.fffffffffffff00000000p-3 (Might not be the exact amount of "f"s and "0"s) – Gynecologist
@TonyK, I only just learned about the %a specifier recently here on StackOverflow, and haven't had a chance to use it yet. Still not aware of the nuances, thanks. – Col
@Mark: You said: "which will leave no doubt". As if you knew what you were talking about. – Understandable
@TonyK, I was under the impression that the default for %a would be enough places to show the full internal representation. I thought I did know what I was talking about - stupid assumption on my part I guess. Don't mind being proven wrong, as long as I learn something. – Col
What exactly is this telling me? – Gynecologist
@Rarge: the exact value of -0.25 would display as -0x2.000000000000000000000p-3. So the result of your inversion algorithm is inexact. – Understandable
Actually, by ironic coincidence, 0.25 and 0.75 are two of only three numbers in the 0.01 .. 0.99 sequence that have short and simple exact representations in the floating point format. But your intermediate products do not, and that's why your calculations are only accurate to a finite precision. – Onomatopoeia

Please read this paper: What Every Computer Scientist Should Know About Floating-Point Arithmetic

Cribbage answered 3/6, 2011 at 18:4 Comment(4)
Doesn't apply if your numbers are exactly representable in binary. – Col
@Mark it does apply if the operations done on that number create an intermediate result at some point which is not exactly representable... which is what this question suffers from. – Cribbage
I looked for intermediate results that weren't exactly representable but didn't find any. Can you point me in the right direction? – Col
@Mark try printing out the n you are dividing by during your inversion - it's 1.3333 at some point, which is not going to give an exact result. – Turcotte

You shouldn't have any problems with these particular numbers, since the entries of this inverse (0.75 and -0.25) are exactly representable in binary. In general, though, operations on floating point numbers introduce small errors that may accumulate, and the results may be surprising.

In your case, I'm pretty sure the inverse is inaccurate and you're just displaying the first few digits. I.e., it isn't exactly 0.25 (=1/4), 0.75 (=3/4), etc.
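As a small self-contained illustration of how those errors accumulate (0.1, unlike 0.25, has no exact binary representation):

#include <iomanip>
#include <iostream>

int main()
{
    // 0.1 is already stored inexactly, and each addition rounds again,
    // so ten additions of 0.1 do not land exactly on 1.0.
    double sum = 0.0;
    for(int i = 0; i < 10; ++i)
    {
        sum += 0.1;
    }

    std::cout << std::setprecision(17) << sum << "\n"; // 0.99999999999999989
    std::cout << (sum == 1.0) << "\n";                 // 0
}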

Reynalda answered 3/6, 2011 at 18:10 Comment(0)

You're always going to run into floating point rounding errors like this, especially when working with numbers that do not have exact binary representations (i.e., your numbers are not equal to 2^(N) or 1/(2^N), where N is some integer value).

That being said, there are a number of ways to increase the precision of your results, and you may want to do a Google search for numerically stable Gaussian elimination algorithms using fixed-precision floating point values.

You can also, if you are willing to take a speed hit, incorporate an arbitrary-precision math library that uses rational numbers; if you go that route, just avoid operations like roots, which can create irrational numbers. There are a number of libraries out there that can help you with rational numbers, such as GMP. You can also write a rational class yourself, although beware that it's relatively easy to overflow the results of repeated math operations if you are only using unsigned 64-bit values (plus a sign flag) for the components of your rational numbers. That's where GMP, with its unlimited-length integers, comes in handy.
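For what it's worth, here is a rough sketch of Gauss-Jordan inversion with partial pivoting, written against plain std::vector rather than the Matrix class from the question. Choosing the largest remaining pivot in each column avoids dividing by tiny (or zero) values; it reduces the rounding error but does not eliminate it.

#include <cmath>
#include <stdexcept>
#include <utility>
#include <vector>

// Invert an n x n matrix given as rows of doubles, using Gauss-Jordan
// elimination with partial pivoting. Throws if the matrix is singular.
std::vector<std::vector<double>> invert(std::vector<std::vector<double>> a)
{
    const std::size_t n = a.size();

    // Build the augmented matrix [A | I].
    for(std::size_t r = 0; r < n; ++r)
    {
        a[r].resize(2 * n, 0.0);
        a[r][n + r] = 1.0;
    }

    for(std::size_t col = 0; col < n; ++col)
    {
        // Partial pivoting: pick the row at or below 'col' with the largest |entry| in this column.
        std::size_t pivot = col;
        for(std::size_t r = col + 1; r < n; ++r)
        {
            if(std::fabs(a[r][col]) > std::fabs(a[pivot][col]))
            {
                pivot = r;
            }
        }
        if(a[pivot][col] == 0.0)
        {
            throw std::runtime_error("Matrix is singular");
        }
        std::swap(a[col], a[pivot]);

        // Normalise the pivot row, then eliminate this column from every other row.
        const double p = a[col][col];
        for(std::size_t c = 0; c < 2 * n; ++c)
        {
            a[col][c] /= p;
        }
        for(std::size_t r = 0; r < n; ++r)
        {
            if(r == col) { continue; }
            const double factor = a[r][col];
            for(std::size_t c = 0; c < 2 * n; ++c)
            {
                a[r][c] -= factor * a[col][c];
            }
        }
    }

    // The right-hand half of the augmented matrix now holds the inverse.
    std::vector<std::vector<double>> inv(n, std::vector<double>(n));
    for(std::size_t r = 0; r < n; ++r)
    {
        for(std::size_t c = 0; c < n; ++c)
        {
            inv[r][c] = a[r][n + c];
        }
    }
    return inv;
}

If you go the rational-number route instead, GMP's mpq_class can stand in for double in the same kind of loop and keeps every intermediate result exact, as long as the inputs are rational.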

Manoff answered 3/6, 2011 at 18:29 Comment(1)
Actually, the fractions that have exact representations are the ones that can be formed by a sequence of 1/2 + 1/4 + 1/8 ... – Onomatopoeia

It's just simple floating point error. Even doubles on computers aren't 100% accurate. There just is no way to 100% accurately represent a base-10 decimal number in binary with a finite number of bits.
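For example, printing the stored value of 0.1 with plenty of digits (or with printf's %a) shows that the double holding it is only the nearest approximation:

#include <cstdio>

int main()
{
    // The literal 0.1 is rounded to the nearest double when it is stored;
    // printing with enough digits (or in hex) exposes the difference.
    std::printf("%.20f\n", 0.1); // 0.10000000000000000555
    std::printf("%a\n", 0.1);    // 0x1.999999999999ap-4
}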

Petaliferous answered 3/6, 2011 at 18:4 Comment(2)
That statement applies to many floating point numbers, but not all of them - specifically integers and any fraction with a small power of 2 in the denominator. – Col
There are ways to exactly represent base-10 numbers in binary (most obviously strings or BCD, and also less ineffective types such as fractional or symbolic representations); you just can't do it with the much faster primitive floating point types. – Judiejudith
