A point which has not yet been considered in any of the other answers is that `equals` is required to be consistent with `hashCode`, and the cost of a `hashCode` implementation that was required to yield the same value for 123.0 as for 123.00 (but still do a reasonable job of distinguishing different values) would be much greater than that of a `hashCode` implementation that was not required to do so.
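To make the current contract concrete, here is a small illustration (the exact hash values are implementation details, but the `equals`/`compareTo` results follow from the documented behaviour of `java.math.BigDecimal`):

    import java.math.BigDecimal;

    public class PrecisionDemo {
        public static void main(String[] args) {
            BigDecimal a = new BigDecimal("123.0");   // unscaled value 1230, scale 1
            BigDecimal b = new BigDecimal("123.00");  // unscaled value 12300, scale 2

            System.out.println(a.equals(b));     // false: equals() also compares the scale
            System.out.println(a.compareTo(b));  // 0: compareTo() compares numeric value only
            // Since the two objects are not equals(), their hash codes are free to
            // differ; in practice they do, because hashCode() is derived from the
            // unscaled value and the scale.
            System.out.println(a.hashCode() == b.hashCode());  // false for these values
        }
    }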
Under the present semantics, `hashCode` requires a multiply-by-31 and an add for each 32 bits of stored value. If `hashCode` were required to be consistent among values of differing precision, it would either have to compute the normalized form of every value (expensive) or else, at a minimum, do something like compute the base-999999999 digital root of the value and multiply that, mod 999999999, by a factor that depends on the precision. The inner loop of such a method would be:
    temp = (temp + (mag[i] & LONG_MASK) * scale_factor[i]) % 999999999;
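For concreteness, here is a rough sketch of what such a precision-independent hash might look like. Nothing here is real JDK code: the class and method names are invented, and for simplicity the magnitude is reduced a byte at a time via `toByteArray()` rather than a 32-bit word at a time, but the essential cost is the same: a 64-bit modulus on every iteration of the inner loop.

    import java.math.BigDecimal;
    import java.math.BigInteger;

    final class NormalizedHashSketch {
        private static final long M = 999_999_999L;            // 10^9 - 1
        private static final BigInteger MOD = BigInteger.valueOf(M);

        // Hypothetical precision-independent hash: reduce |unscaledValue| mod M,
        // then cancel the scale by multiplying with 10^(-scale) mod M.  Because
        // gcd(10, M) == 1 this is well defined, so "2.0" (unscaled 20, scale 1)
        // and "2.00" (unscaled 200, scale 2) yield the same result.
        static int normalizedHash(BigDecimal bd) {
            long temp = 0;
            for (byte b : bd.unscaledValue().abs().toByteArray()) {
                // the expensive part: a 64-bit modulus on every iteration
                temp = ((temp << 8) + (b & 0xFF)) % M;
            }
            // multiply by 10^(-scale) mod M so that trailing zeros no longer matter
            BigInteger tenToMinusScale =
                    BigInteger.TEN.modPow(BigInteger.valueOf(bd.scale()).negate(), MOD);
            long folded = BigInteger.valueOf(temp)
                                    .multiply(tenToMinusScale)
                                    .mod(MOD)
                                    .longValueExact();
            return bd.signum() * (int) folded;
        }
    }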
That replaces a cheap multiply-by-31 and add with a 64-bit modulus operation, which is much more expensive. If one wants a hash table which regards numerically-equivalent `BigDecimal` values as equivalent, and most keys which are sought in the table will be found, the efficient way to achieve that is to use a hash table which stores value wrappers rather than storing values directly. To find a value, start by looking up the value itself; if nothing is found, normalize the value and look that up; if there is still nothing, create an empty wrapper and store an entry under both the original and the normalized forms of the number.

Looking up something which isn't in the table and hasn't been searched for previously requires an expensive normalization step, but looking up something which has been searched for before is much faster. By contrast, if `hashCode` had to return the same value for numbers which, because of differing precision, are stored completely differently, every hash table operation would become much slower.
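A minimal sketch of that wrapper scheme, assuming that "normalize" means `stripTrailingZeros()` (the `Wrapper` class and the `NumericKeyTable` name are invented here purely for illustration):

    import java.math.BigDecimal;
    import java.util.HashMap;
    import java.util.Map;

    final class NumericKeyTable<V> {
        static final class Wrapper<V> { V value; }

        private final Map<BigDecimal, Wrapper<V>> entries = new HashMap<>();

        Wrapper<V> lookup(BigDecimal key) {
            // Fast path: this exact representation has been seen before.
            Wrapper<V> w = entries.get(key);
            if (w != null) return w;

            // Slow path: normalize and try again.
            BigDecimal normalized = key.stripTrailingZeros();
            w = entries.get(normalized);
            if (w == null) {
                // Never seen in any form: create one shared wrapper.
                w = new Wrapper<>();
                entries.put(normalized, w);
            }
            // Remember this particular representation so that the next lookup
            // with the same precision takes the fast path.
            entries.put(key, w);
            return w;
        }
    }

Every representation that has been looked up once is cached against the shared wrapper, so repeated lookups of previously-seen keys stay on the cheap `entries.get(key)` path; only the first lookup of a new representation pays for normalization.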
new BigDecimal("2.0").compareTo(new BigDecimal("2.00")) == 0
– Dehaven