There are two different issues here. First – as already mentioned in
the comments – a binary floating point number cannot represent the
number 8.7 precisely. Swift uses the IEEE 754 standard for representing
single- and double-precision floating point numbers, and if you assign
let x = 8.7
then the closest representable number is stored in x, and that is
8.699999999999999289457264239899814128875732421875
Much more information about this can be found in the excellent
Q&A Is floating point math broken?.
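To make this first point visible in code, here is a minimal sketch (String(format:) assumes that Foundation is imported):
import Foundation
let x = 8.7
// The literal below is the exact binary64 value, so the comparison holds:
print(x == 8.699999999999999289457264239899814128875732421875)  // true
// Asking for 20 fraction digits reveals the stored value:
print(String(format: "%.20f", x))  // 8.69999999999999928946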
The second issue is: Why is the number sometimes printed as "8.7"
and sometimes as "8.6999999999999993"?
let str = "8.7"
print(Double(str)) // Optional(8.6999999999999993)
let x = 8.7
print(x) // 8.7
Is Double("8.7")
different from 8.7
? Is one more precise than
the other?
To answer these questions, we need to know how the print()
function works:
- If an argument conforms to CustomStringConvertible, the print function
  calls its description property and prints the result to the standard
  output (this case and the next one are sketched in the example below).
- Otherwise, if an argument conforms to CustomDebugStringConvertible,
  the print function calls its debugDescription property and prints the
  result to the standard output.
- Otherwise, some other mechanism is used. (Not important here for our
  purpose.)
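To make the dispatch concrete, here is a rough sketch with a made-up Temperature type (not anything from the standard library) that conforms to both protocols:
struct Temperature: CustomStringConvertible, CustomDebugStringConvertible {
    let celsius: Double
    var description: String { return "\(celsius) °C" }
    var debugDescription: String { return "Temperature(celsius: \(celsius))" }
}
let t = Temperature(celsius: 20.0)
print(t)             // 20.0 °C – print prefers description
debugPrint(t)        // Temperature(celsius: 20.0)
let maybe: Temperature? = t
print(maybe as Any)  // Optional(Temperature(celsius: 20.0)) – Optional only provides debugDescription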
The Double type conforms to CustomStringConvertible, therefore
let x = 8.7
print(x) // 8.7
produces the same output as
let x = 8.7
print(x.description) // 8.7
But what happens in
let str = "8.7"
print(Double(str)) // Optional(8.6999999999999993)
Double(str) is an optional, and struct Optional does not conform to
CustomStringConvertible, but to CustomDebugStringConvertible. Therefore
the print function calls the debugDescription property of Optional,
which in turn calls the debugDescription of the underlying Double.
Therefore – apart from being an optional – the number output is
the same as in
let x = 8.7
print(x.debugDescription) // 8.6999999999999993
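The same two representations can also be obtained without any optional involved: String(describing:) uses description and String(reflecting:) uses debugDescription (output shown for the Swift version discussed here):
let x = 8.7
print(String(describing: x))  // 8.7
print(String(reflecting: x))  // 8.6999999999999993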
But what is the difference between description and debugDescription for
floating point values? From the Swift source code one can see that both
ultimately call the swift_floatingPointToString function in Stubs.cpp,
with the Debug parameter set to false and true, respectively.
This controls the precision of the number to string conversion:
int Precision = std::numeric_limits<T>::digits10;
if (Debug) {
  Precision = std::numeric_limits<T>::max_digits10;
}
For the meaning of those constants, see
http://en.cppreference.com/w/cpp/types/numeric_limits:
- digits10 – number of decimal digits that can be represented without
  change,
- max_digits10 – number of decimal digits necessary to differentiate
  all values of this type (both constants are illustrated in the sketch
  below).
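For Double, digits10 is 15 and max_digits10 is 17. A minimal sketch reproducing the effect in Swift with %g format conversions (assuming Foundation for String(format:)):
import Foundation
let x = 8.7
print(String(format: "%.15g", x))  // 8.7                (digits10 = 15)
print(String(format: "%.17g", x))  // 8.6999999999999993 (max_digits10 = 17)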
So description creates a string with fewer decimal digits. That string
can be converted to a Double and back to a string giving the same result.
debugDescription creates a string with more decimal digits, so that any
two different floating point values will produce a different output.
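A short sketch of what that means in practice (the exact strings are those produced by the conversion described above; newer Swift versions print floating point values differently):
let x = 1.0 / 3.0
print(x.description)                     // 0.333333333333333
print(x.debugDescription)                // 0.33333333333333331
print(Double(x.debugDescription)! == x)  // true  – 17 digits always recover the exact value
print(Double(x.description)! == x)       // false – 15 digits need not identify the exact value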
Summary:
- Most decimal numbers cannot be represented exactly as a binary
  floating point value.
- The description and debugDescription properties of the floating point
  types use a different precision for the conversion to a string. As a
  consequence,
- printing an optional floating point value uses a different precision
  for the conversion than printing a non-optional value.
Therefore in your case, you probably want to unwrap the optional
before printing it:
let str = "8.7"
if let d = Double(str) {
    print(d) // 8.7
}
For better control, use NSNumberFormatter or formatted printing with the
%.<precision>f format.
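For example (a sketch, assuming Foundation is imported; NSNumberFormatter is called NumberFormatter in Swift 3 and later):
import Foundation
let str = "8.7"
if let d = Double(str) {
    // C-style format string with one fraction digit:
    print(String(format: "%.1f", d))  // 8.7

    // NumberFormatter gives more control (rounding mode, locale, ...):
    let formatter = NumberFormatter()
    formatter.locale = Locale(identifier: "en_US_POSIX")
    formatter.minimumFractionDigits = 1
    formatter.maximumFractionDigits = 1
    print(formatter.string(from: NSNumber(value: d)) ?? "")  // 8.7
}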
Another option is to use (NS)DecimalNumber instead of Double (e.g. for
currency amounts), see e.g. Round Issue in swift.
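And a sketch with Decimal, the Swift value type bridged to NSDecimalNumber (assuming Foundation), which represents decimal digits exactly:
import Foundation
if let price = Decimal(string: "8.7"), let tax = Decimal(string: "0.1") {
    print(price)        // 8.7
    print(price + tax)  // 8.8 – exact decimal arithmetic, no trailing ...999 digits
}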
Comments:
- 8.7 can be 8.699999999999999289457264239899814128875732421875000...,
  you need to set the precision as said before:
  print(String(format: "%.1f", Double(str)!)) – Publias
- %.1f. You can test it by putting in X.X for the value and it should
  return 0.0 on error... – Publias