Why is it that

17m.GetHashCode() == 17d.GetHashCode()

(m = decimal, d = double)?

Additionally, as expected,

17f.GetHashCode() != 17d.GetHashCode()

(f = float).

This appears to be true on both .NET 3.5 and .NET 4.0.

As I understand it, the internal bit representations of these types are quite different. So how can the hash codes of the decimal and double types be equal for equal initialization values? Is there some conversion taking place before the hash is calculated?
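For reference, a minimal console program that reproduces both comparisons (the class name is just for illustration):

using System;

class HashCodeDemo {
    static void Main() {
        // Observed on .NET 3.5 and .NET 4.0
        Console.WriteLine(17m.GetHashCode() == 17d.GetHashCode()); // True
        Console.WriteLine(17f.GetHashCode() != 17d.GetHashCode()); // True
    }
}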
I found that the source code for Double.GetHashCode() is this:
//The hashcode for a double is the absolute value of the integer representation
//of that double.
//
[System.Security.SecuritySafeCritical] // auto-generated
public unsafe override int GetHashCode() {
    double d = m_value;
    if (d == 0) {
        // Ensure that 0 and -0 have the same hash code
        return 0;
    }
    long value = *(long*)(&d);
    return unchecked((int)value) ^ ((int)(value >> 32));
}
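As an aside, the same fold can be reproduced without unsafe code via BitConverter.DoubleToInt64Bits, which yields the same 64-bit representation; this is only a sketch of an equivalent, not the framework source:

// Safe-code equivalent of Double.GetHashCode above: reinterpret the
// double's 64 bits as a long, then XOR the two 32-bit halves.
static int DoubleHash(double d) {
    if (d == 0) {
        // Ensure that 0 and -0 have the same hash code
        return 0;
    }
    long bits = BitConverter.DoubleToInt64Bits(d);
    return unchecked((int)bits ^ (int)(bits >> 32));
}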
I verified that the Double.GetHashCode code returns the desired value. However, I could not find the source code for Decimal.GetHashCode(), so I tried the following method:
public static unsafe int GetHashCode(decimal m_value) {
    decimal d = m_value;
    if (d == 0) {
        // Ensure that 0 and -0 have the same hash code
        return 0;
    }
    // Reinterpret the decimal's 128 bits as four ints and XOR them
    int* value = (int*)(&d);
    return unchecked(value[0] ^ value[1] ^ value[2] ^ value[3]);
}
But this did not match the desired result: it returned the hash corresponding to the int type, which is also expected considering the internal layout of decimal (four 32-bit ints whose XOR, for 17m, is simply 17). So the implementation of Decimal.GetHashCode() currently remains unknown to me.
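Since the question above asks whether a conversion takes place before hashing, one way to probe that hypothesis is to hash the double conversion of the decimal and compare. The following is purely a guess to test, not the actual Decimal.GetHashCode implementation, and DecimalHashViaDouble is a made-up name:

// Hypothesis only: if decimal were converted to double before hashing,
// the double fold shown earlier would apply to the converted value.
static int DecimalHashViaDouble(decimal m) {
    double d = (double)m; // note: this conversion is lossy in general
    if (d == 0) {
        return 0;
    }
    long bits = BitConverter.DoubleToInt64Bits(d);
    return unchecked((int)bits ^ (int)(bits >> 32));
}

For 17m this would return the same value as 17d.GetHashCode(), which is at least consistent with the conversion hypothesis for this input.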
17f.GetHashCode() != 17d.GetHashCode() is as expected? – Chantellechanter

The GetHashCode function only tries to be unique within the same type; whether double and float generate the same hash code is not relevant. – Chantellechanter