Inconsistency in divide-by-zero behavior between different value types

Please consider the following code and comments:

Console.WriteLine(1 / 0); // will not compile, error: Division by constant zero

int i = 0;
Console.WriteLine(1 / i); // compiles, runs, throws: DivideByZeroException

double d = 0;
Console.WriteLine(1 / d); // compiles, runs, results in: Infinity   

I can understand the compiler actively checking for division by a constant zero, and the DivideByZeroException at runtime, but:

Why would using a double in a divide-by-zero return Infinity rather than throwing an exception? Is this by design or is it a bug?

Just for kicks, I did this in VB.NET as well, with "more consistent" results:

dim d as double = 0.0
Console.WriteLine(1 / d) ' compiles, runs, results in: Infinity

dim i as Integer = 0
Console.WriteLine(1 / i) '  compiles, runs, results in: Infinity

Console.WriteLine(1 / 0) ' compiles, runs, results in: Infinity

EDIT:

Based on kekekela's feedback I ran the following which resulted in infinity:

Console.WriteLine(1 / .0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001);

This test seems to corroborate the idea that a literal double of 0.0 is actually a very, very tiny fraction which will result in Infinity...

Ancilin answered 5/1, 2011 at 22:7 Comment(1)
Here's my article on the subject: blogs.msdn.com/b/ericlippert/archive/2009/10/15/…Venge

In a nutshell: the double type defines a value for infinity while the int type doesn't. So in the double case, the result of the calculation is a value that you can actually express in the given type since it's defined. In the int case, there is no value for infinity and thus no way to return an accurate result. Hence the exception.
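
To see both behaviors side by side, here is a minimal C# sketch (the variable names are mine, not from the question):

double zero = 0.0;
Console.WriteLine(1 / zero);                            // Infinity
Console.WriteLine(double.IsPositiveInfinity(1 / zero)); // True

int i = 0;
try
{
    Console.WriteLine(1 / i);
}
catch (DivideByZeroException ex)
{
    Console.WriteLine(ex.Message); // typically "Attempted to divide by zero."
}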

VB.NET does things a little bit differently: the / operator always performs floating-point division, even when both operands are integers. This is to allow developers to write, e.g., the expression 1 / 2 and have it evaluate to 0.5, which some would consider intuitive. If you want to see behavior consistent with C#, try this:

Console.WriteLine(1 \ 0)

Note the use of the integer division operator (\, not /) above. I believe you'll get an exception (or a compile error--not sure which).

Similarly, try this:

Dim x As Object = 1 / 0
Console.WriteLine(x.GetType())

The above code will output System.Double.

As for the point about imprecision, here's another way of looking at it. It isn't that the double type has no value for exactly zero (it does); rather, the double type is not meant to provide mathematically exact results in the first place. (Certain values can be represented exactly, yes. But calculations give no promise of accuracy.) After all, the value of the mathematical expression 1 / 0 is not defined (last I checked). But 1 / x approaches infinity as x approaches zero. So from this perspective if we cannot represent most fractions n / m exactly anyway, it makes sense to treat the x / 0 case as approximate and give the value it approaches--again, infinity is defined, at least.
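
To illustrate the "approaches infinity" point, here is a small C# sketch of my own (not from the answer): keep halving a positive double and it eventually underflows to exactly 0.0, at which point the quotient becomes the defined infinity value:

double x = 1.0;
while (x > 0)
    x /= 2;               // halving eventually underflows to exactly 0.0

Console.WriteLine(x);     // 0
Console.WriteLine(1 / x); // Infinity, matching the limit of 1 / x as x approaches 0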

Leatherneck answered 5/1, 2011 at 22:34 Comment(3)
Although very usable, this answer is also slightly wrong. The answer is wrong because division by zero is not infinity - it is mathematically undefined. The real answer is that doubles are NOT real numbers (R), as is pointed out in the latter part of this answer. They are floating-point numbers, and OP is trying to apply real-number reasoning to something which isn't a real number. They appear similar a lot of the time because they were designed to be similar, but they are fundamentally different. Doubles define something called NaN, "Not a Number", (continues...)Reposeful
(cont...) which would be a more mathematically correct "result" if these were in fact real numbers. Yet it did not return NaN. The reason is roughly as described in the answer: because in floating-point logic you mostly assume that "0.0" is a very small number. You cannot, however, say that it is a small number; stating that there is an exact representation of zero is misleading, because there is no 1-to-1 mapping from float to R. Rather, each float value maps to a range of real values, and the float "0.0" includes both actual zero and a range of other small values.Reposeful
One more example of floating-point being fundamentally different to real-numbers is that float-numbers define "-0.0" (negative zero), and had OP divided by that the result would have been Negative Infinity. Yet what do you think 0.0 == -0.0 evaluates to?Reposeful
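
The points about NaN, negative zero, and comparison are easy to verify; a quick C# sketch of my own:

double posZero = 0.0;
double negZero = -0.0;

Console.WriteLine(1 / posZero);             // Infinity
Console.WriteLine(1 / negZero);             // -Infinity
Console.WriteLine(posZero == negZero);      // True: +0.0 and -0.0 compare equal
Console.WriteLine(posZero / negZero);       // NaN: 0 / 0 has no meaningful value
Console.WriteLine(double.IsNaN(posZero / negZero)); // True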

A double is a floating point number and not an exact value, so what you are really dividing by from the compiler's viewpoint is something approaching zero, but not exactly zero.

Lens answered 5/1, 2011 at 22:11 Comment(8)
Actually, doubles do have a representation of a value that is exactly zero. The real reason is that by definition a divide by zero of a double results in the value of Inf - that is by design not by accident.Wilhelm
@Wilhelm - If doubles have "a representation of a value that is exactly zero", then why do doubles distinguish between "+0" and "-0" and why do 1/+0 and 1/-0 give different results? The idea of a "signed zero" only makes sense if those values are viewed as positive or negative values that are too small to be represented normally. Note that there is no unsigned zero in IEEE 754 floating point types.Sheath
@Jeffrey: +0 and -0 are still 0. Even if you argue that numbers that are smaller than epsilon are represented by 0 you are still saying that numbers that are too small are effectively 0. Yes, in math -0 makes no sense but when 754 was designed, the physicists running simulations argued that they wanted to know which direction a result came from if the limit of a calculation led to 0.Wilhelm
@Wilhelm - 1/+0 = +INF but 1/-0, on the other hand, = -INF. If they cause different results in a calculation, then how can +0 and -0 both be zero? Also, +0 makes just as little sense as -0. Zero is just zero. It has no sign at all. The behavior of floating point numbers makes no sense unless these values are considered very small non-zero values. But it makes perfect sense if it is so considered.Sheath
Computers don't usually implement "normal" maths like you were taught in school, because that's really hard for a computer to do. Instead, they normally implement the IEEE 754 floating point standard, which is similar but has some subtle differences. One difference is that there are two zeros, +0 and -0, which compare equal. They're both zero. Having two representations of zero apparently makes some computer algorithms faster/simpler.Spermatophore
@Wilhelm You are confusing "The real number zero can be exactly represented in floating-point" with "The floating-point number zero is an exact representation of the real-number zero". These are fundamentally different statements. The transformation from R -> Float is one-way only, and reversing the direction (Float -> R) gives a range of numbers, not a single number.Reposeful
@AnorZaken: OK, I'll restate: doubles CAN represent zero. The real reason is that by definition a divide by zero of a double results in Inf - it is by design. The real point I'm trying to make is that doubles, by design, generate Inf when divided by zero (or by a value that converts to a double's representation of zero). Integers don't have Inf and therefore throw an error instead.Wilhelm
@Wilhelm But that last sentence is where you steer off the road of logic... Even if integers had Inf they probably (pure speculation, since there is no such spec) would not return that value in a divide-by-zero situation, because A) integers are modeled after Z, so NaN would be a more appropriate result (a value ints lack too), and B) why Inf? Why not -Inf? Since there is no negative zero in two's-complement integers, one cannot make any assumption about which side of zero we are approaching from, and thus returning Inf would be bad design.Reposeful

This is by design because the double type complies with IEEE 754, the standard for floating-point arithmetic. Check out the documentation for Double.NegativeInfinity and Double.PositiveInfinity.

The value of this constant is the result of dividing a positive {or negative} number by zero.
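
You can check the runtime result against those documented constants directly (a tiny sketch of my own, not from the answer):

double zero = 0.0;
Console.WriteLine(1 / zero == double.PositiveInfinity);  // True
Console.WriteLine(-1 / zero == double.NegativeInfinity); // True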

Mop answered 5/1, 2011 at 22:28 Comment(0)

Because the "numbers" in floating point are nothing of the kind. Floating-point operations:

  • are not associative
  • are not distributive
  • may not have a multiplicative inverse

(see http://www.cs.uiuc.edu/class/fa07/cs498mjg/notes/floating-point.pdf for some examples)
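
As a concrete illustration of the first point, here is a small C# sketch of my own; the magnitudes are chosen only to expose the rounding:

double a = 1e16;
double b = -1e16;
double c = 1.0;

Console.WriteLine((a + b) + c); // 1
Console.WriteLine(a + (b + c)); // 0: the 1 is lost to rounding when added to -1e16 first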

Floating point is a construct to solve a specific problem, and it gets used all over in places where it shouldn't be. I think floating-point numbers are pretty awful, but that is subjective.

Mazuma answered 18/3, 2013 at 21:59 Comment(0)

This likely has something to do with the fact that IEEE-standard single-precision and double-precision floating-point numbers have a specified "infinity" value. .NET is just exposing something that already exists at the hardware level.

See kekekela's answer for why this makes sense, logically.

Cussedness answered 5/1, 2011 at 22:27 Comment(0)
