Why should I use ints instead of floats?
I'm preparing a lesson for a class I'm teaching, and while trying to anticipate the questions students might ask, I ran into one that I can't answer:

If we have floats, why do we ever use ints at all? What's the point?

I know (or at least I think) that floats take more memory because they carry more precision, but surely the difference is nearly negligible as far as memory usage goes for most non-embedded applications.

And I realize in many cases we actually don't need a float, but honestly, why do we have ints in the first place? What's the point? There's nothing an int can do that a float can't.

So why are they there at all?

Edit: You could argue they're easier to write (3 vs. 3.0) but you could just make all numbers default to float, so 3 would be treated the same as 3.0. Why make it a different type?

Baring answered 8/12, 2013 at 10:11 Comment(9)
"There's nothing an int can do that a float can't." What about a shifting or masking of bits?Dacca
Uh...idk...can floats not do that?Baring
If by float you mean the raw bits representing its value, it's merely a bit string and your could do anything with it (how about a General Purpose Turing Machine on float point numbers?). However if you mean what IEEE754 and its extension specifies, obviously not.Dacca
"Obviously" to someone who has a strong understanding of the nature of a floating point numbers, "not even remotely apparent" to everyone else.Baring
Sorry for using that word. However have you seen an operator for bitwist operations on float point numbers in any languages with strong or weak typing? As far as I know there's none, while typically with integers there does exist some.Dacca
@Aerovistae: not every bit pattern is a valid float value, therefore they're unsuitable for general bitwise operations because the result you want might be outside the set of valid values. Of course, a language could supply a floating-point type and an unrelated type for bit maniuplation that doesn't include integer arithmetic.Taciturn
This stuff you think is negligible isn't actually negligible. A double is a 53-bit integer along with an exponent sitting in 64 bits; that's an overhead of about 17%.Philipines
@starrify: JavaScript. However, the bitwise operators are implemented by internally converting the operands to 32-bit integers.Phosphocreatine
It's funny, I stumbled across this question while I also was preparing an introductory teaching lesson on Python! The most satisfying answer I have found was that a = 1e16; a+1 == a is True. Have you ever came across a more satisfying answer? @BaringAlto

Floating point numbers are approximations in many cases: some integers and decimal fractions can be represented exactly by a float, but most can't. See Floating Point Arithmetic: Issues and Limitations.

>>> a = 1000000000000000000000000000
>>> a+1 == a
False
>>> a = 1000000000000000000000000000.0
>>> a+1 == a
True

Because of this approximate nature of floats, some calculations may yield unexpected results (this isn't directly pertinent to the question, but it illustrates the point quite well):

>>> sum(1.1 for _ in range(9))
9.899999999999999

For money calculations in particular, it's better to use integers (e.g. whole cents), or, if speed is not an issue, the decimal module.
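
A quick illustration, counting in whole cents for the integer version and using the standard decimal module for the last one:

>>> 0.10 + 0.20                          # binary floats are off by a hair
0.30000000000000004
>>> 10 + 20                              # integer cents stay exact
30
>>> from decimal import Decimal
>>> Decimal("0.10") + Decimal("0.20")    # exact decimal arithmetic
Decimal('0.30')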

Cassette answered 8/12, 2013 at 10:16 Comment(7)
Your first argument boils down to "the mathematical value 1.1 cannot be represented as a floating-point number". Considering that 1.1 cannot be represented as an integer either, I do not see how this is an example of something "an int can do that a float can't". – Soothfast
@PascalCuoq: But you can represent 1.1 as the integer fraction 11/10 and do exact calculations with that. – Cassette
Good examples. Actually there is (a + 1234567890 == a) == True. :D – Dacca
I take the question as being "why not store integers in floating point objects?", but 1.1 is not an integer. – Rouble
@DavidHeffernan: Good point; I reordered the paragraphs to better fit the question. – Cassette
@Tim: for what it's worth, you can represent 1.1 as a float fraction using (11.0, 10.0) and do exact calculations with that. So in itself that plan to define a fraction class isn't something an int can do but a float can't. – Taciturn
"When you're dealing with money calculations, it's better to use integers": this is an annoying meme. Doubles are plenty fine for lots of "money calculations", and the hardware support makes them much faster than the equivalent fixed-point calculations would be on modern machines. The disadvantage is that you usually have to be awake when writing floating-point code if you want it to work. – Philipines

There are various historical reasons that apply to most languages:

  • A philosophy of "don't use what you don't need". A lot of programs have no need for non-integer values but use integer values a lot, so an integer type reflects the problem domain.

  • Floating point arithmetic used to be far more expensive than integer. It's still somewhat more expensive, but in a lot of cases in Python you'd hardly notice the difference.

  • A 32-bit IEEE float can represent every integer only up to 2**24; beyond that it loses exactness. A 16-bit float ("half precision") represents every integer only up to 2048. So for 16- and 32-bit computing, where register sizes impose a serious trade-off between performance and value range, float-for-everything makes that trade-off even more serious (see the sketch after this list).

  • An 8-bit integer type (or whatever byte size exists on the platform) is very useful for low-level programming because it maps exactly onto any data representable in memory. The same goes for a register-sized integer type, with some efficiency advantage to working in words rather than bytes. These are the (signed and unsigned) char and int types in C.
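
A minimal sketch of that 2**24 cliff, round-tripping a value through a 32-bit IEEE float with the standard struct module:

>>> import struct
>>> def to_f32(x):
...     # pack into a 32-bit IEEE float and back, forcing float32 rounding
...     return struct.unpack("f", struct.pack("f", x))[0]
...
>>> to_f32(2**24)        # still exact
16777216.0
>>> to_f32(2**24 + 1)    # the +1 is rounded away
16777216.0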

There is an additional reason specifically for Python:

  • The int type automatically promotes to long when a computation goes beyond its range, thereby retaining precision. float doesn't get bigger to remain precise. Both behaviours are useful in different circumstances.
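
For instance (in Python 3, where int and long have merged into a single arbitrary-precision int):

>>> 2**64 * 2**64                        # ints grow as needed; nothing is lost
340282366920938463463374607431768211456
>>> float(2**53) + 1 == float(2**53)     # a float stays 64 bits and rounds instead
True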

Note that Javascript doesn't provide an integer type. The only built-in numbers in Javascript are 64 bit floating-point. So for any reason why an integer type is beneficial, it's instructive to consider how Javascript gets on without it.

Taciturn answered 8/12, 2013 at 11:30 Comment(0)

It's important to use data types that are the best fit for the task at hand, and a data type may fail to fit in different ways. For instance, a single byte is a bad fit for a population count because you cannot count more than 255 individuals. On the other hand, a float is a bad fit because many possible floating point values have no meaning: 1.5, for example, has no meaning as a count. So an appropriately sized integer type gives us the best fit, with no need for sanity checks to weed out meaningless values.

Another reason to favour integers over floats is performance and efficiency. Integer arithmetic is faster. And for a given range integers consume less memory because integers don't need to represent non-integer values.

Another reason is to show intent. When a reader of the code sees that you used an integer, that reader can infer that the quantity is only meant to take integer values.
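
Python itself leans on that intent: quantities that are inherently whole, such as list indices and counts, are required to be ints (tracebacks abbreviated):

>>> [10, 20, 30][1]      # an int index signals a position
20
>>> [10, 20, 30][1.0]    # a float index is rejected outright
TypeError: list indices must be integers or slices, not float
>>> range(5.0)
TypeError: 'float' object cannot be interpreted as an integer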

Rouble answered 8/12, 2013 at 10:26 Comment(3)
Whether integer arithmetic is faster depends on your processor and use case. Floating-point math really is surprisingly fast these days. – Philipines
@Philipines I've never come across a machine where fp is faster than int arithmetic. – Rouble
IIRC, on the original Pentium, floating-point multiply and divide were considerably faster than integer multiply and divide. (Tangentially: on modern machines, when the alternative involves horsing around with fixed-point math, you're usually better off using floating point instead. But I'm pretty sure you knew that already.) – Philipines

There are four reasons which I can currently think of (and I'm sure there are more):

  1. Memory. Choosing data types wisely can dramatically affect memory requirements (large databases, for example).
  2. Speed. Hardware implementation of integer arithmetic is much faster (and simpler) than floating point arithmetic.
  3. Programming practices. Having distinct data types enforces better programming practices, as the programmer must be aware of the kind of data each variable stores. It also allows earlier error detection (compile time vs. runtime), as sketched below.
  4. History. Memory used to be expensive (and still is on some systems for some applications).
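
As a small illustration of point 3: Python's optional type hints let a static checker such as mypy flag an int/float mix-up before the program runs (the function here is made up for illustration):

def nth_item(items: list[str], n: int) -> str:
    # n is annotated as an int: a position, never a fraction
    return items[n]

nth_item(["a", "b", "c"], 1)    # fine
nth_item(["a", "b", "c"], 1.5)  # mypy flags this; it also fails at runtime
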
Gainless answered 8/12, 2013 at 10:32 Comment(1)
The first point is trivially true, but it misleadingly implies that ints are smaller than floats (the size depends on the exact int and float types, which for Python's built-in types depends on the language version and interpreter "bitness", and in Python 3 even on the value of the integer). The second is not true on today's desktop machines, and even if it were true, any difference would be swallowed by Python's general slowness (PyPy notwithstanding). – Lotic

I am just learning Python, and I too believe it seems simpler to just use floats instead of both floats and ints and then try to remember which one we saved a value as. Someone said "don't use what you don't need", but what about the limited storage in my own brain? Isn't that why we use code for computers in the first place? IDK.

Tedmund answered 7/9, 2023 at 19:4 Comment(1)
As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center. – Efficient

I asked ChatGPT why there are rounding errors with floating-point numbers. This is interesting.

The potential for rounding issues in floating-point arithmetic arises from the way computers represent real numbers. Most computers use a binary representation for floating-point numbers, and not all decimal numbers can be precisely represented in binary. This can lead to rounding errors when converting between decimal and binary representations.

Consider the following example in base-10 (decimal):

1/3 = 0.333333...

Now, in base-2 (binary), 1/3 is likewise a repeating fraction:

1/3 = 0.0101(01)...

In this case, the binary representation is not exact, and it repeats indefinitely. This repeating fraction cannot be represented precisely in a finite number of binary digits, leading to a loss of precision.

Similarly, some decimal numbers cannot be represented exactly in binary, and when these numbers are involved in calculations, rounding errors can accumulate. For example:

0.1 (decimal) = 0.0001100110011001100110011001100110011001100110011...(binary)

When working with floating-point numbers, especially in repetitive or extensive calculations, these rounding errors can accumulate and lead to discrepancies between the expected and actual results.

While floating-point representations are highly useful and versatile for a broad range of applications, developers need to be aware of these limitations and take precautions when precision is critical. Techniques such as rounding, using fixed-point arithmetic, or employing specialized numerical libraries can help mitigate some of the issues associated with floating-point arithmetic.
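
In Python you can inspect the exact binary value that actually gets stored for 0.1:

>>> (0.1).hex()                  # the stored bits, written in hexadecimal
'0x1.999999999999ap-4'
>>> from decimal import Decimal
>>> Decimal(0.1)                 # the same stored value, expanded in decimal
Decimal('0.1000000000000000055511151231257827021181583404541015625')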

Here is a small C example: casting one int operand to float makes the division a float division, so the fractional part is kept.

#include <stdio.h>

int main() {
  int num1 = 5;
  int num2 = 2;
  /* cast one operand to float so this is float division, not integer division */
  float quotient = (float) num1 / num2;

  printf("%f\n", quotient);  /* prints 2.500000 */
  return 0;
}

The above works, and the below works:

#include <stdio.h>

int main() {
  float num1 = 5;
  float num2 = 2;
  float quotient = num1 / num2;

  printf("%f\n", quotient);  /* prints 2.500000 */
  return 0;
}

But I think I will go with what W3 suggests.

Lingle answered 10/1 at 3:17 Comment(1)
Posting AI-generated answers is forbidden on this site. – Spermatogonium
