Python increment float by smallest step possible predetermined by its number of decimals

I've been searching around for hours and I can't find a simple way of accomplishing the following.

Value 1 = 0.00531
Value 2 = 0.051959
Value 3 = 0.0067123

I want to increment each value by one in its last decimal place (however, the number must keep exactly the same number of decimal places it started with, and the number of decimal places varies with each value, hence my trouble).

Value 1 should be: 0.00532
Value 2 should be: 0.051960
Value 3 should be: 0.0067124

Does anyone know of a simple way of accomplishing the above in a function that can still handle any number of decimals?

Thanks.

Fever asked 16/12, 2017 at 19:13 Comment(5)
A floating-point number is not represented internally in decimal, but in binary. As a result, 0.1 for instance is not a representable float; it is approximated by something 0.099999...ish or 0.1000...ish. So this would be terribly unstable.Argosy
Related: #588504Argosy
If you really want this, then you should switch to a non-float representation (e.g., an integer along with a number giving the number of fractional digits).Plenum
@TomKarzes's approach works, but note that just because the decimal representation you see is 0.00532, there might actually be more decimal digits than what you see, so you might get different results than you expect if you're not seeing the whole representation.Matrix
.1 + .2 == 0.30000000000000004. What's the "true" last digit here?Nucleo
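To make the comments above concrete (an illustration added here, not part of the original thread): a float like 0.00531 carries more digits internally than its printed form suggests, and Decimal makes that visible:

from decimal import Decimal

# Constructing Decimal from the float exposes the full binary value;
# constructing it from the string keeps the digits you actually typed.
print(Decimal(0.00531))    # prints a long expansion that is only approximately 0.00531
print(Decimal('0.00531'))  # prints exactly 0.00531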

As the other commenters have noted: you should not operate on floats, because a given number like 0.1234 is converted into an internal binary representation and you cannot process it further in the way you want. This is deliberately vaguely formulated; floating point is a subject in itself, and this article is a good primer on the topic.

That said, what you could do instead is to keep the input as strings (i.e. do not convert it to float when reading the input). Then you could do this:

from decimal import Decimal

def add_one(v):
    # The exponent in the Decimal tuple is negative for fractional numbers;
    # flipping its sign gives the number of digits after the decimal point.
    after_comma = Decimal(v).as_tuple().exponent * -1
    # The smallest step for that many decimal places, e.g. 0.00001 for 5 places.
    add = Decimal(1) / Decimal(10 ** after_comma)
    return Decimal(v) + add

if __name__ == '__main__':
    print(add_one("0.00531"))
    print(add_one("0.051959"))
    print(add_one("0.0067123"))
    print(add_one("1"))

This prints

0.00532
0.051960
0.0067124
2

Update:

If you need to operate on floats, you could try a fuzzy approach to get close to the intended representation. decimal offers a normalize method which lets you reduce the precision of the decimal representation so that it matches the original number:

from decimal import Decimal, Context

def add_one_float(v):
    # Round the binary float to 16 significant digits and strip trailing
    # zeros, hoping to recover the decimal number the user intended.
    v_normalized = Decimal(v).normalize(Context(prec=16))
    # As above: the negated exponent is the number of decimal places.
    after_comma = v_normalized.as_tuple().exponent * -1
    add = Decimal(1) / Decimal(10 ** after_comma)
    return v_normalized + add

But please note that the precision of 16 is purely experimental; you need to play with it to see whether it yields the desired results. If you need guaranteed correct results, you cannot take this path.
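For illustration (an addition of mine, not part of the original answer), calling this on the example values as plain floats could look as follows; the expected output assumes that the prec=16 normalization recovers the intended digits:

print(add_one_float(0.00531))    # expected: 0.00532
print(add_one_float(0.051959))   # expected: 0.051960
print(add_one_float(0.0067123))  # expected: 0.0067124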

Jurkoic answered 16/12, 2017 at 20:54 Comment(1)
@dAJVqgVFsB thanks. Can you upvote and/or accept my answer then? (that's the currency on stackoverflow..)Jurkoic

Have you looked at the standard module decimal?

It circumvents the floating-point behaviour.

Just to illustrate what can be done:

import decimal

my_number = '0.00531'
mnd = decimal.Decimal(my_number)
print(mnd)
# as_tuple() splits the number into sign, digits and exponent.
mnt = mnd.as_tuple()
print(mnt)
# Add 1 to the last digit (no carry handling here; see the note below).
mnt_digit_new = mnt.digits[:-1] + (mnt.digits[-1] + 1,)
dec_incr = decimal.DecimalTuple(mnt.sign, mnt_digit_new, mnt.exponent)
print(dec_incr)
# Build a new Decimal from the modified tuple.
incremented = decimal.Decimal(dec_incr)
print(incremented)

prints

0.00531
DecimalTuple(sign=0, digits=(5, 3, 1), exponent=-5)
DecimalTuple(sign=0, digits=(5, 3, 2), exponent=-5)
0.00532
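A side note added here (not from the original answer): the tuple manipulation above does not handle a carry. For '0.199' the last digit would become 10, which is not a valid digit, so building a Decimal from that tuple raises a ValueError. A quick sketch of the problem:

import decimal

t = decimal.Decimal('0.199').as_tuple()
# Naively adding 1 to the last digit produces (1, 9, 10).
bad_digits = t.digits[:-1] + (t.digits[-1] + 1,)
# decimal.Decimal((t.sign, bad_digits, t.exponent)) would raise ValueError here,
# which is why the next_plus() version below handles the carry instead.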

Or a full version (after the edit it also carries over any digit, so it also works on '0.199')...

from decimal import Decimal, getcontext

def add_one_at_last_digit(input_string):
    dec = Decimal(input_string)
    # Limit the context precision to the number of significant digits of the
    # input, so that next_plus() steps by exactly one unit in the last place.
    getcontext().prec = len(dec.as_tuple().digits)
    return dec.next_plus()

for i in ('0.00531', '0.051959', '0.0067123', '1', '0.05199'):
    print(add_one_at_last_digit(i))

that prints

0.00532
0.051960
0.0067124
2
0.05200
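One caveat added here (not in the original answer): setting getcontext().prec changes the global decimal context for the rest of the program. If that is a concern, decimal.localcontext() keeps the change scoped to a with block; a minimal sketch (the function name is just illustrative):

from decimal import Decimal, localcontext

def add_one_at_last_digit_scoped(input_string):
    dec = Decimal(input_string)
    with localcontext() as ctx:
        # Only this block sees the reduced precision; the previous
        # context is restored automatically on exit.
        ctx.prec = len(dec.as_tuple().digits)
        return dec.next_plus()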
Kielce answered 16/12, 2017 at 20:35 Comment(0)

A bit of an improvement: this takes numbers as input and implements subtraction as well:

import decimal

def add_or_sub_one_at_last_digit(input_number, to_add=True):
    # str() gives the shortest decimal representation of the float,
    # which Decimal then parses exactly.
    dec = decimal.Decimal(str(input_number))
    decimal.getcontext().prec = len(dec.as_tuple().digits)
    # Step one unit in the last place, up or down.
    return dec.next_plus() if to_add else dec.next_minus()

a = 0.225487
# add
print(add_or_sub_one_at_last_digit(a))
# subtract
print(add_or_sub_one_at_last_digit(a,False))

output:

0.225488
0.225486
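A small note added here (not in the original answer): the function returns a Decimal; if you need a plain float again, converting the result back is enough for this use case, though you are then back in binary-float territory:

a = 0.225487
incremented = add_or_sub_one_at_last_digit(a)
print(float(incremented))  # 0.225488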
Ralphralston answered 27/3, 2023 at 9:56 Comment(0)
