PI and accuracy of a floating-point number
Asked Answered
A single/double/extended-precision floating-point representation of Pi is accurate up to how many decimal places?

Sipe answered 3/2, 2009 at 16:24 Comment(3)
This can't be answered without you telling us which language you are using and where you are getting PI from. Are you using a constant or a library function? - Histogen
Or do you mean the time-series database PI? - Vocable
You might want to look at exploringbinary.com/pi-and-e-in-binary - Selfinsurance
27
#include <stdio.h>

#define E_PI 3.1415926535897932384626433832795028841971693993751058209749445923078164062

int main(int argc, char** argv)
{
    long double pild = E_PI;
    double pid = pild;
    float pif = pid;
    printf("%s\n%1.80f\n%1.80f\n%1.80Lf\n",
    "3.14159265358979323846264338327950288419716939937510582097494459230781640628620899",
    pif, pid, pild);
    return 0;
}

Results:

[quassnoi #] gcc --version
gcc (GCC) 4.3.2 20081105 (Red Hat 4.3.2-7)

[quassnoi #] ./test

3.14159265358979323846264338327950288419716939937510582097494459230781640628620899

3.14159274101257324218750000000000000000000000000000000000000000000000000000000000
        ^
3.14159265358979311599796346854418516159057617187500000000000000000000000000000000
                 ^
3.14159265358979311599796346854418516159057617187500000000000000000000000000000000
                 ^
  0000000001111111
  1234567890123456
Grizzled answered 3/2, 2009 at 16:38 Comment(6)
interesting test... unfortunately, I bet it's all sorts of system dependent :P - Electrodialysis
Actually I'd say it depends on the math.h library. - Coelenterate
Sure, that's why I put gcc --version there. - Grizzled
I used math.h only for the M_PI constant; I think it should be the same in every version, it's PI, after all :) Anyway, I updated the code not to use math.h. - Grizzled
This test is invalid for the extended-precision result, because your #define literal for pi is in double precision. You need it to be an extended-precision literal. See this. - Clynes
E_PI must have the L suffix to get long double precision, otherwise it'll be stuck at double precision. - Bellow
20

When I examined Quassnoi's answer, it seemed suspicious to me that long double and double would end up with the same accuracy, so I dug in a little. When I ran his code compiled with clang, I got the same results he did. However, I found that if I specified the long double suffix and used a literal to initialize the long double, it provided more precision. Here is my version of his code:

#include <stdio.h>

int main(int argc, char** argv)
{
    long double pild = 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899L;
    double pid = pild;
    float pif = pid;
    printf("%s\n%1.80f\n%1.80f\n%1.80Lf\n",
        "3.14159265358979323846264338327950288419716939937510582097494459230781640628620899",
        pif, pid, pild);
    return 0;
}

And the results:

3.14159265358979323846264338327950288419716939937510582097494459230781640628620899

3.14159274101257324218750000000000000000000000000000000000000000000000000000000000
        ^
3.14159265358979311599796346854418516159057617187500000000000000000000000000000000
                 ^
3.14159265358979323851280895940618620443274267017841339111328125000000000000000000
                    ^
Erechtheus answered 28/3, 2014 at 18:45 Comment(1)
This appears to be compiler and architecture dependent, however: en.wikipedia.org/wiki/Long_double - Erechtheus
4

6 places and 14 places. One place is used by the leading 3, and the last place, although stored, can't be counted as a precise digit.

And sorry but I don't know what extended means without more context. Do you mean C#'s decimal?

Counterproductive answered 3/2, 2009 at 16:30 Comment(2)
Please see "An Informal Description of IEEE754": cse.ttu.edu.tw/~jmchen/NM/refs/story754.pdf - Sipe
@Sipe The link is dead :( But I have found a working link. - Botello
1

In the x86 floating-point unit (x87) there are instructions for loading certain floating-point constants. "fldz" and "fld1" load 0.0 and 1.0 onto the stack top "st" (aka "st(0)"), for example. Another is "fldpi".

All these values have a mantissa that is 64 bits long, which translates into close to 20 (about 19.2) significant decimal digits. The 64 bits are possible through the 80-bit tempreal floating-point format used internally in the x87. The x87 can load tempreals from, and store them to, 10-byte memory locations as well.

Soloman answered 18/8, 2011 at 19:52 Comment(0)
0

The accuracy of a floating-point type is not related to PI or any specific number. It only depends on how many digits are stored in memory for that specific type.

In the case of IEEE-754, float uses 23 bits of mantissa, so it can be accurate to 23+1 bits of precision (the leading 1 bit is implicit), or ~7 digits of precision in decimal. Regardless of π, e, 1.1, 9.87e9... all of them are stored with exactly 24 bits in a float. Similarly, double (53 bits of mantissa) can store 15-17 decimal digits of precision.

Bellow answered 8/8, 2013 at 0:21 Comment(8)
Your logic/conclusion is actually incorrect. It is related to the specific value; the binary representation of a floating-point number has a fixed number of bits for the mantissa, but depending on the exponent, some of those bits are used on representing the integer portion or the decimals portion. An example that helps visualize this: you store pi in a double and it will be accurate up to the 15th decimal (at least for the gcc that comes with Ubuntu 18, running on an Intel Core i5; I believe it's mapped to IEEE-754). You store 1000*pi, and it will be accurate up to the 12th decimal. - Intricacy
@Intricacy You're mistaking the precision of a type for the error after doing operations. If you do 1000*pi and get a slightly less accurate result, that doesn't mean the precision was reduced. You got it wrong because you don't understand what "significand" means, which isn't counted after the radix point. In fact 1000*pi loses only 1 digit of precision and is still correct to the 15th digit of the significand, not the 12th. You're also confusing 'precision' and 'accuracy'. - Bellow
And if you have the exact 1000pi constant instead of doing the multiplication at runtime, you'll still get exactly 53 bits of precision. - Bellow
You're still getting it wrong. It is a well-known aspect of floating point that the accuracy/error in the representation is unevenly distributed across the range; you can distinguish between 0.1 and 0.1000001, but not between 10^50 and (0.0000001 + 10^50). FP stores a value as x times 2^y, where x uses a given number of bits to represent a value between 1 and 2 (or was it between 0 and 1? I forget now), and y has a range given by the number of bits assigned to it. If y is large, the accuracy of x is mostly consumed by the integer part. - Intricacy
As for the exact 1000pi as a constant: you may get the same 53 bits of precision, but that's not what the thread is about. You get the same 16 correct decimal digits at the beginning, but now three of those 16 are used for the integer part, 3141; the decimal places are correct up to the 89793, exactly as with pi, except that in pi, that 3 in 89793 is the 15th decimal, whereas in 1000pi it is the 12th decimal! - Intricacy
@Intricacy I'm well aware that the error and the distance between consecutive values scale according to the exponent, but it's irrelevant here. And the OP didn't ask about the decimal numbers after 1000pi. - Bellow
"And the OP didn't ask about the decimal numbers after 1000pi" -- no, but it is directly relevant; the OP asked how many decimal places of pi are correctly represented by a FP. You argued that the actual value has no relevance, which is incorrect: for larger values, you get a smaller number of decimal places that are correctly represented. 1000pi is just an example to illustrate this; I'm still focusing, as the OP requested, on the number of decimal places, which is what your argument gets wrong. - Intricacy
For the fraction part of a floating-point number it is mostly incorrect to use the term decimal digits. It is correct sometimes, such as for 0.25, which is exactly representable in base 10 (as we are all familiar with) and in base 2 (2^-2). 0.1 is exact in base 10, but (because it can't be exactly represented) it will be an approximation in base 2, i.e. in the fraction part of an IEEE-754 floating-point number. 1/3 is an example of a number that cannot be exactly represented in either base. - Soloman
0

*EDIT: see this post for an up-to-date discussion: Implementation of sinpi() and cospi() using standard C math library*

The math.h functions __sinpi() and __cospi() (available on Apple platforms) fixed the problem for me for right angles like 90 degrees and such.

cos(M_PI * -90.0 / 180.0) returns 0.00000000000000006123233995736766
__cospi( -90.0 / 180.0 )      returns 0.0, as it should

/*  __sinpi(x) returns the sine of pi times x; __cospi(x) and __tanpi(x) return
the cosine and tangent, respectively.  These functions can produce a more
accurate answer than expressions of the form sin(M_PI * x) because they
avoid any loss of precision that results from rounding the result of the
multiplication M_PI * x.  They may also be significantly more efficient in
some cases because the argument reduction for these functions is easier
to compute.  Consult the man pages for edge case details.                 */
extern float __cospif(float) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
extern double __cospi(double) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
extern float __sinpif(float) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
extern double __sinpi(double) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
extern float __tanpif(float) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
extern double __tanpi(double) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
Brashear answered 24/11, 2015 at 19:2 Comment(1)
__sinpi() and __cospi() are definitely not standard functions; it's easy to see, as they have the __ prefix. Searching for them mostly returns results for macOS and iOS. This question says they were added by Apple (Implementation of sinpi() and cospi() using standard C math library), and the man page also says it's in OS X. - Bellow
0

In C++20 the <format> header was introduced, which can print a given type at full precision without unnecessary decimal places. Unfortunately, this functionality is missing even in compilers supporting C++20, e.g. gcc 12.

Thus it is necessary to use the fmt library (https://fmt.dev/latest/index.html); I have extracted it to the fmt dir.

Now create main.cpp

#include <iostream>
#include <numbers>   // std::numbers::pi_v
//#include <format>
#define FMT_HEADER_ONLY
#include <fmt/format.h>

int main(int argc, char** argv) {
    long double pild = std::numbers::pi_v<long double>;
    long double twopild = pild * 2.0L;
    long double fourpisquaredld = pild * pild * 4.0L;
    double pid  = pild;
    double twopid = twopild;
    double fourpisquaredd = fourpisquaredld;
    float pif  = pild;
    float twopif = twopild;
    float fourpisquaredf = fourpisquaredld;

    std::cout << fmt::format("PIL={}\n", pild);
    std::cout << fmt::format("TWOPIL={}\n", twopild);
    std::cout << fmt::format("FOURPISQUAREDL={}\n", fourpisquaredld);
    std::cout << fmt::format("PID={}\n", pid);
    std::cout << fmt::format("TWOPID={}\n", twopid);
    std::cout << fmt::format("FOURPISQUAREDD={}\n", fourpisquaredd);
    std::cout << fmt::format("PIF={}\n", pif);
    std::cout << fmt::format("TWOPIF={}\n", twopif);
    std::cout << fmt::format("FOURPISQUAREDF={}\n", fourpisquaredf);
    return 0;
}

Compiled using gcc 12.

g++ -I fmt/include -std=c++20 main.cpp

Output

PIL=3.1415926535897932385
TWOPIL=6.283185307179586477
FOURPISQUAREDL=39.478417604357434478
PID=3.141592653589793
TWOPID=6.283185307179586
FOURPISQUAREDD=39.47841760435743
PIF=3.1415927
TWOPIF=6.2831855
FOURPISQUAREDF=39.478416

Interestingly, in Python you get a full-precision print by converting to string, so we can do e.g.

import numpy as np
print("%s"%(float(np.pi)))

obtaining 3.141592653589793, since float in Python is represented as a double-precision number.

Isoagglutinin answered 15/3, 2024 at 13:36 Comment(0)
-1

Print and count, baby, print and count. (Or read the specs.)

Happily answered 3/2, 2009 at 16:36 Comment(0)
-1

World of PI has PI to 100,000,000,000 digits; you could just print and compare. For a slightly easier-to-read version, Joy of PI has 10,000 digits. And if you want to remember the digits yourself, you could try learning the Cadaeic Cadenza poem.

Histogen answered 3/2, 2009 at 17:1 Comment(0)
-1

Since there are digit-extraction ("spigot") formulas for binary representations of pi, one could combine variables to store pieces of the value to increase precision. The only limitation to the precision of this method is the conversion from binary to decimal, but even rational numbers can run into issues with that.

Neils answered 15/3, 2016 at 5:53 Comment(0)
-1

"In IEEE 754, the float data type, also known as single precision, is a 32-bit value that gives you a range of ±1.18×10^-38 to ±3.4×10^38 and about 7 digits of precision. That means that you can only accurately represent pi as 3.141592."

"In this case pi (3.141592653589793) has been encoded into the double-precision floating-point number. Note that the true value of this double-precision number is 3.14159265358979311599796346854. There are multiple ways to store a decimal number in binary with a varying level of precision."

Loney answered 21/11, 2023 at 17:39 Comment(3)
Is this a quote from somewhere? Please provide a reference. - Aelber
Looks like it's from here: embedded.fm/blog/2016/4/12/ew-floating-point - Aelber
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center. - Watchword

© 2022 - 2025 — McMap. All rights reserved.