I have a large amount of Python code that handles numbers with 4-decimal precision, and I am stuck on Python 2.4 for many reasons. The code does very simplistic math (it's a credit management system that mostly adds or subtracts credits).
It has intermingled usage of float and Decimal (MySQLdb returns Decimal objects for SQL DECIMAL columns). After several strange bugs surfaced, I found the root cause of all of them to be a few places in the code where floats and Decimals are compared.
I got to cases like this:
>>> from decimal import Decimal
>>> max(Decimal('0.06'), 0.6)
Decimal("0.06")
Now my fear is that I might not be able to catch all such cases in the code. (A normal programmer will keep writing x > 0 instead of x > Decimal('0.0000'), and that is very hard to avoid.)
I have come up with a patch (inspired by improvements to the decimal package in Python 2.7):
import decimal
from decimal import Decimal

def _convert_other(other):
    """Convert other to Decimal.

    Verifies that it's ok to use in an implicit construction.
    """
    if isinstance(other, Decimal):
        return other
    if isinstance(other, (int, long)):
        return Decimal(other)
    # Our small patch begins
    if isinstance(other, float):
        return Decimal(str(other))
    # Our small patch ends
    return NotImplemented

decimal._convert_other = _convert_other
I just run this in a library that is loaded very early, and it changes the decimal package's behavior by converting floats to Decimal before comparisons (to avoid falling back to Python's default object-to-object comparison).
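The intended effect can be sketched with a standalone helper that mirrors the patched _convert_other (the name to_decimal is mine, and I have dropped the long check so the sketch runs on modern Pythons; the monkey-patch itself only makes sense on 2.x):

```python
from decimal import Decimal

def to_decimal(other):
    """Standalone mirror of the patched _convert_other.

    Floats go through str() so 0.6 becomes Decimal('0.6')
    rather than the float's exact binary expansion.
    """
    if isinstance(other, Decimal):
        return other
    if isinstance(other, int):
        return Decimal(other)
    if isinstance(other, float):
        return Decimal(str(other))
    raise TypeError("cannot convert %r to Decimal" % (other,))

# With both operands normalized, max() picks the numerically larger value.
print(max(Decimal('0.06'), to_decimal(0.6)))  # prints 0.6
```

In the patched 2.4 interpreter the same normalization happens implicitly inside Decimal's comparison methods, so plain max(Decimal('0.06'), 0.6) gives the numerically correct answer.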
I specifically used "str" instead of "repr" as it avoids some of float's repr rounding artifacts on Python 2. E.g.
>>> Decimal(str(0.6))
Decimal("0.6")
>>> Decimal(repr(0.6))
Decimal("0.59999999999999998")
Now my question is: am I missing anything here? Is this fairly safe, or am I breaking something? (I suspect the authors of the package had very strong reasons to avoid implicit float conversion.)