If you deal in discrete quantities, use `int`.
Sometimes people use `float` in places where they definitely shouldn't. If you're counting something (like the number of cars in the world) as opposed to measuring something (like how much gasoline is used per day), floating point is probably the wrong choice. Currency is another example where floating-point numbers are often abused: if you're storing your bank account balance in a database, it's really not 123.45 dollars, it's 12345 cents. (But also see below about `Decimal`.)
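A quick REPL sketch of the distinction (the variable name is made up for illustration):

>>> balance_cents = 12345      # discrete quantity: an int is exact
>>> balance_cents + 1
12346
>>> 0.1 + 0.2                  # the classic float representation artifact
0.30000000000000004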
Most of the rest of the time, use `float`.
Floating-point numbers are general-purpose. They're extremely accurate; they just can't represent certain fractions, the same way finite decimal numbers can't represent the number 1/3. Floats are generally suited for any kind of analog quantity where the measurement has error bars: length, mass, frequency, energy. If there's uncertainty on the order of 2^(-52) or greater, there's probably no good reason not to use `float`.
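If you want to see that threshold for yourself, the standard library exposes it, and `math.isclose` is the idiomatic way to compare measured quantities:

>>> import sys, math
>>> sys.float_info.epsilon         # spacing between 1.0 and the next float: 2**-52
2.220446049250313e-16
>>> math.isclose(0.1 + 0.2, 0.3)   # tolerance-based comparison for measurements
True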
If you need human-readable numbers, use `float`, but format it.
"This number looks weird" is a bad reason not to use float
. But that doesn't mean you have to display the number to arbitrary precision. If a number with only three significant figures comes out to 19.99909997918947, format it to one decimal place and be done with it.
>>> from math import e, pi
>>> print('{:0.1f}'.format(e**pi - pi))
20.0
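If you'd rather think in significant figures than decimal places, the `g` presentation type handles that (same REPL session, so `e` and `pi` are already imported):

>>> '{:.3g}'.format(e**pi - pi)    # three significant figures
'20.0'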
If you need precise decimal representation, use `Decimal`.
Sraw's answer refers to the `decimal` module, which is part of the standard library. I already mentioned currency as a discrete quantity, but you may need to do calculations on amounts of currency in which not all numbers are discrete, for example when calculating interest. If you're writing code for an accounting system, there will be rules that say when rounding is applied and to what accuracy various calculations are done, and those specifications will be written in terms of decimal places. In this situation, and in others where the decimal representation is inherent to the problem specification, you'll want to use a decimal type.
>>> from decimal import Decimal
>>> rate = Decimal('0.0345')
>>> principal = Decimal('3412.65')
>>> interest = rate*principal
>>> interest
Decimal('117.736425')
>>> interest.quantize(Decimal('0.01'))
Decimal('117.74')
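Accounting rules usually name the rounding mode explicitly; `Decimal` lets you state it rather than silently inheriting the default banker's rounding. The values below are made up to show a case where the modes actually differ:

>>> from decimal import ROUND_HALF_UP
>>> Decimal('2.665').quantize(Decimal('0.01'))   # default ROUND_HALF_EVEN: ties go to the even digit
Decimal('2.66')
>>> Decimal('2.665').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
Decimal('2.67')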
But most importantly, use data types and operations that make sense in context.
Several of your examples use `math.floor`, which takes a `float` and chops off the fractional part. In any situation where you should use `math.floor`, floating-point error doesn't matter. (If you want to round to the nearest integer, use `round` instead; there's a short demonstration below.) Yes, there are ways to use floating-point operations that produce mathematically wrong results. But real-world quantities usually fall into one of these categories:
- Exact, and therefore should not be put in a `float`;
- Imprecise to a degree far exceeding the likely accumulation of floating-point error.
As a programmer, it's part of your job to know the quantities you're dealing with and choose appropriate data types. So there's no "fix" for floating point numbers, because there's no "problem" really -- just people using the wrong type for the wrong thing.
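To make the earlier floor-versus-round point concrete:

>>> import math
>>> math.floor(3.7)    # drops the fractional part (rounds toward negative infinity)
3
>>> round(3.7)         # rounds to the nearest integer
4
>>> round(2.5)         # note: Python 3 rounds ties to the even integer
2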
If `data_max`, `data_min`, and `width` are `int`, you can avoid the precision issues of `float` entirely by adapting integer floor division (`//`) into integer ceiling division: `total_bins = ((data_max - data_min) + (width - 1)) // width`. By adding one less than the divisor to the dividend, then using floor division, you get ceiling division for purely integer operands, with no floating point involved at all. – Rese

Do you care whether `(1.0/49.0)*49.0` is exactly equal to `1.0`, or do you just want things with finite decimal expansions to behave intuitively? (Do you care about things like `(2**0.2)**5.0`?) – Corncob
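A quick check of the ceiling-division trick from the first comment above, with made-up values:

>>> data_max, data_min, width = 107, 3, 10           # illustrative values only
>>> ((data_max - data_min) + (width - 1)) // width   # integer ceiling division
11
>>> import math
>>> math.ceil((data_max - data_min) / width)         # float-based equivalent, for comparison
11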