Reading through the ECMAScript 5.1 specification, +0 and -0 are distinguished. Why then does +0 === -0 evaluate to true?
JavaScript uses the IEEE 754 standard to represent numbers. From Wikipedia:
Signed zero is zero with an associated sign. In ordinary arithmetic, −0 = +0 = 0. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero). This occurs in some signed number representations for integers, and in most floating point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0.
The IEEE 754 standard for floating point arithmetic (presently used by most computers and programming languages that support floating point numbers) requires both +0 and −0. The zeroes can be considered as a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞, division by zero is only undefined for ±0/±0 and ±∞/±∞.
The article contains further information about the different representations.
So this is the reason why, technically, both zeros have to be distinguished.
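As a small illustration of that rule (my own addition, not part of the original answer), the sign of zero is directly observable in JavaScript by dividing by it:
// Illustration: the sign of zero propagates into the resulting infinity,
// even though both zeros print as "0".
console.log(1 / +0);           // Infinity
console.log(1 / -0);           // -Infinity
console.log((-0).toString());  // "0" – the sign is invisible when printed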
However, +0 === -0 evaluates to true. Why is that (...)?
This behaviour is explicitly defined in section 11.9.6, the Strict Equality Comparison Algorithm (emphasis partly mine):
The comparison x === y, where x and y are values, produces true or false. Such a comparison is performed as follows: (...)
If Type(x) is Number, then
- If x is NaN, return false.
- If y is NaN, return false.
- If x is the same Number value as y, return true.
- If x is +0 and y is −0, return true.
- If x is −0 and y is +0, return true.
- Return false.
(...)
(The same holds for +0 == -0, btw.)
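As an illustration only (my own sketch, not spec or engine code), those Number steps can be transcribed into JavaScript roughly as follows, using Object.is to express "same Number value":
// Rough sketch of the Number branch of the Strict Equality Comparison
// Algorithm quoted above (illustration, not actual engine code).
function numberStrictEquals(x, y) {
  if (Number.isNaN(x) || Number.isNaN(y)) return false;  // NaN steps
  if (Object.is(x, y)) return true;                      // same Number value
  if (Object.is(x, +0) && Object.is(y, -0)) return true; // x is +0, y is -0
  if (Object.is(x, -0) && Object.is(y, +0)) return true; // x is -0, y is +0
  return false;
}

console.log(numberStrictEquals(+0, -0)); // true, matching +0 === -0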
It seems logical to treat +0 and -0 as equal. Otherwise we would have to take this into account in our code, and I, personally, don't want to do that ;)
Note: ES2015 introduces a new comparison method, Object.is. Object.is explicitly distinguishes between -0 and +0:
Object.is(-0, +0); // false
1/0 === Infinity; // true and 1/-0 === -Infinity; // true. – Theological
1 === 1 and +0 === -0 but 1/+0 !== 1/-0. How weird! – Guide
+0 !== -0 ;) That could really create problems. – Seismic
0 !== +0 / 0 !== -0, which would indeed create problems too! – Luckily
I'll add this as an answer because I overlooked @user113716's comment.
You can test for -0 by doing this:
function isMinusZero(value) {
return 1/value === -Infinity;
}
isMinusZero(0); // false
isMinusZero(-0); // true
e±308, your number can be represented only in denormalized form, and different implementations have different opinions about whether to support them at all. The point is, on some machines in some floating-point modes your number is represented as -0 and on others as the denormalized number 0.000000000000001e-308. Such floats, so fun – Perilymph
I just came across an example where +0 and -0 behave very differently indeed:
Math.atan2(0, 0); //returns 0
Math.atan2(0, -0); //returns Pi
Be careful: even when using Math.round on a negative number like -0.0001, the result will actually be -0 and can screw up some subsequent calculations, as shown above.
A quick and dirty way to fix this is to do something like:
if (x==0) x=0;
or just:
x+=0;
This converts the number to +0 in case it was -0.
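A short demo of both the pitfall and the normalization trick (my own sketch, building on the Math.atan2 example above):
// Math.round can produce -0, which then changes the result of Math.atan2.
let x = Math.round(-0.0001);    // -0
console.log(Math.atan2(0, x));  // 3.141592653589793 (Pi)

x += 0;                         // normalizes -0 to +0
console.log(Math.atan2(0, x));  // 0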
Are +0 and -0 the same?
Short answer: it depends on which comparison operator you use.
Long answer:
Basically, we have had four comparison types until now:
- Loose equality (==): console.log(+0 == -0); // true
- Strict equality (===): console.log(+0 === -0); // true
- Same-value equality (Object.is): console.log(Object.is(+0, -0)); // false
- Same-value-zero equality (e.g. Array.prototype.includes): console.log([+0].includes(-0)); // true
As a result, only Object.is(+0, -0) behaves differently from the others.
const x = +0, y = -0;

console.log(x == y);          // true -> using ‘loose’ equality
console.log(x === y);         // true -> using ‘strict’ equality
console.log([x].indexOf(y));  // 0 (true) -> using ‘strict’ equality
console.log(Object.is(x, y)); // false -> using ‘Same-value’ equality
console.log([x].includes(y)); // true -> using ‘Same-value-zero’ equality
In the IEEE 754 standard used to represent the Number type in JavaScript, the sign is represented by a bit (a 1 indicates a negative number).
As a result, there exist both a negative and a positive value for each representable number, including 0. This is why both -0 and +0 exist.
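To make the sign bit concrete (my own illustration, not part of the answer above), the raw IEEE 754 encoding of a Number can be inspected with a DataView; only the most significant bit differs between +0 and -0:
// Illustration: read back the raw IEEE 754 bits of a Number.
function signBit(n) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, n);        // big-endian by default
  return view.getUint8(0) >> 7; // the top bit of the first byte is the sign
}

console.log(signBit(+0)); // 0
console.log(signBit(-0)); // 1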
Answering the original title question, "Are +0 and -0 the same?": brainslugs83 (in the comments of the answer by Spudley) pointed out an important case in which +0 and -0 in JS are not the same, implemented here as a function:
var sign = function(x) {
  return 1 / x === 1 / Math.abs(x);
};
Unlike the standard Math.sign, this correctly distinguishes the sign of +0 and -0.
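For example (my addition), calling it on the two zeros:
sign(+0); // true  (positive sign)
sign(-0); // false (negative sign)
sign(-5); // false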
true or false, but not -1 and 1 – Gurule
I'd blame it on the Strict Equality Comparison method ('==='). Look at section 4d; see 7.2.13 Strict Equality Comparison in the specification.
We can use Object.is to distinguish +0 and -0, and, as one more thing, to test NaN against NaN (which == never treats as equal).
Object.is(+0, -0); // false
Object.is(NaN, NaN); // true
If you need a sign function that supports -0 and +0:
var sign = x => 1/x > 0 ? +1 : -1;
It acts like Math.sign, except that sign(0) returns 1 and sign(-0) returns -1.
sign(Infinity) gives -1 – Irenics
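A possible variant (my own sketch, not from the thread) that also handles ±Infinity is to check the value itself before falling back to the 1/x trick:
// Hypothetical variant: ordinary comparison for non-zero values,
// the 1/x trick only to tell the two zeros apart.
var sign2 = x => (x > 0 || 1 / x > 0) ? +1 : -1;

sign2(Infinity);  // 1
sign2(-Infinity); // -1
sign2(-0);        // -1
sign2(0);         // 1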
There are two possible values (bit representations) for 0. This is not unique; it can occur especially with floating-point numbers, because floating-point numbers are actually stored as a kind of formula.
Integers can be stored in different ways too. You can have a numeric value with an additional sign bit, so in a 16-bit space you can store a 15-bit integer value and a sign bit. In this representation, the values 8000 (hex) and 0000 are both 0, but one of them is +0 and the other is -0.
This could be avoided by subtracting 1 from the negative values, so they ranged from -1 to -2^15, but this would be inconvenient.
A more common approach is to store integers in two's complement, but apparently ECMAScript has chosen not to. In this method, positive numbers range from 0000 to 7FFF; negative numbers run from FFFF (-1) down to 8000 (-32768).
Of course, the same rules apply to larger integers too, but I don't want my F to wear out. ;)
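As a rough illustration of the difference (my own addition), JavaScript's typed arrays show that a two's-complement integer view has only a single zero, while the IEEE 754 float encoding keeps the sign bit:
// A 16-bit two's-complement view has only one zero...
const ints = new Int16Array(1);
ints[0] = -0;
console.log(Object.is(ints[0], -0)); // false – it reads back as plain 0

// ...while the 64-bit float encoding preserves the sign of -0.
const floats = new Float64Array([-0]);
console.log(Object.is(floats[0], -0)); // true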
+0 === -0 is a little weird, because now we have 1 === 1 and +0 === -0 but 1/+0 !== 1/-0 ... – Guide
+0 === -0 despite the two bit representations being different. – Guide
Wikipedia has a good article explaining this phenomenon: http://en.wikipedia.org/wiki/Signed_zero
In brief, both +0 and -0 are defined in the IEEE floating-point specifications. Both of them are technically distinct from an unsigned 0, which is an integer, but in practice they all evaluate to zero, so the distinction can be ignored for all practical purposes.
Object.is to distinguish +0 and -0 – Corenecoreopsis