Consider the following Python code:
from decimal import Decimal
d = Decimal("1.23")
print(f"{d = }, {d.shift(1) = }")
When I execute it in Python 3.12.4, I get this output:
d = Decimal('1.23'), d.shift(1) = Decimal('12.30')
This is exactly the output that I expect: shift, given a positive argument of 1, shifted the digits of 1.23 left by one decimal place, producing 12.30.
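For what it's worth, inspecting the value with as_tuple() matches that reading. This is just a sketch of how I picture the coefficient and exponent (as_tuple() is part of the documented Decimal API):
from decimal import Decimal

d = Decimal("1.23")
# The stored coefficient is 123 with exponent -2, i.e. 123 * 10**-2
print(d.as_tuple())  # DecimalTuple(sign=0, digits=(1, 2, 3), exponent=-2)
# shift(1) moves the coefficient digits one place left: 123 -> 1230, exponent unchanged
print(d.shift(1))    # Decimal('12.30')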
Now execute this code:
from decimal import Decimal
d = Decimal(1.23)
print(f"{d = }, {d.shift(1) = }")
The sole difference is that we construct d using a float instead of a str.
When I execute it, I get this output:
d = Decimal('1.229999999999999982236431605997495353221893310546875'), d.shift(1) = Decimal('6.059974953532218933105468750E-24')
The first part of that output is what I expect: a binary float cannot represent 1.23 exactly, so this is ordinary floating-point representation error.
But the output from shift(1) was strange. I was expecting something like Decimal('12.29999999999999982236431605997495353221893310546875') (that is, a value fairly close to 12.30). Instead, it produced a seemingly unrelated result: Decimal('6.059974953532218933105468750E-24').
Why does the shift method produce such radically different results when the Decimal is constructed from a float?
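To make the difference concrete, here is a compact reproduction that prints the size of the coefficient alongside the shift result for both constructions (nothing beyond the documented as_tuple() API is assumed):
from decimal import Decimal

for d in (Decimal("1.23"), Decimal(1.23)):
    digits = len(d.as_tuple().digits)  # number of significant digits in the coefficient
    print(digits, d.shift(1))
# Prints 3 with Decimal('12.30') for the str-constructed value,
# and 52 with Decimal('6.059974953532218933105468750E-24') for the float-constructed one.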
Comments:

Decimal('1.2299999999999999822364316059') produced a similar result, while Decimal('1.229999999999999982236431605') (one digit fewer) did not. Presumably related to Is floating-point math broken? – Ea

Decimal, does it not? – Nonobedience

Decimal, and the pure-Python fallback implementation. Basically, the shift() method is broken by design - the trigger appears to be the number having more significant digits than the current context's precision value (52 vs 28 here). – Larcher
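Following up on Larcher's comment, the digit counts are easy to check, and raising the context precision so that the whole coefficient fits does change the outcome. This is only a sketch illustrating the observation, not a statement about how the shift operation is specified; the precision of 53 is simply the 52 coefficient digits plus room for the shifted-in zero:
from decimal import Decimal, getcontext, localcontext

d = Decimal(1.23)
print(getcontext().prec)         # 28, the default context precision
print(len(d.as_tuple().digits))  # 52, more significant digits than the context holds

# With the default precision, shift(1) appears to operate in a 28-digit window,
# so the leading digits of the coefficient are discarded and only the tail survives:
print(d.shift(1))                # Decimal('6.059974953532218933105468750E-24')

with localcontext() as ctx:
    ctx.prec = 53                # room for all 52 digits plus the appended zero
    print(d.shift(1))            # now prints a value close to 12.3, as originally expected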