Probability of getting a duplicate value when calling GetHashCode() on strings
I want to know the probability of getting duplicate values when calling the GetHashCode() method on string instances. For instance, according to this blog post, blair and brainlessness have the same hashcode (1758039503) on an x86 machine.

Dasie answered 1/11, 2011 at 15:26 Comment(9)
Are you wanting to avoid collisions? It's fairly uncommon, but not impossible of course.Typeface
Yes, I wanted to know the probability of getting collisions.Dasie
The question is under-specified. First off, there are an infinite number of possible strings. You need to say what probability distribution you are using over the space of all possible strings. Second, you don't say what size of set you are interested in. Suppose you have n strings. Are you interested in the probability of hash collision of those n strings with a particular string, or are you interested in whether any string in the set collides with any other? Those are very different analyses.Photoreconnaissance
You are right, I'm interested in the possibility of collisions between any pair of strings. But given the answers, I now know that I can't rely on GetHashCode() to generate a unique value for a given string.Dasie
Oh, absolutely you cannot rely on uniqueness. There are only four billion possible hash values but obviously there are way more than four billion possible strings; the strings "0", "1", ... "5000000000" alone are five billion strings. You can't put an infinite number of pigeons into four billion pigeonholes without one of those pigeonholes having more than one pigeon!Photoreconnaissance
That's what I wanted to know. We were using GetHashCode() as a way to get unique hashes for a set of strings. Thanks!Dasie
@Diego: Pro-Tip: Do not use GetHashCode of any class for anything except hash tables.Dextrin
@Brian: Thanks! Now I know that :)Dasie
Lots of answers about hypothetical probabilities, but no one is mentioning how GetHashCode() is implemented which I find odd. e.g. see #15174977.Magistrate

Large.

(Sorry Jon!)

The probability of getting a hash collision among short strings is extremely large. Given a set of only ten thousand distinct short strings drawn from common words, the probability of there being at least one collision in the set is approximately 1%. If you have eighty thousand strings, the probability of there being at least one collision is over 50%.
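Eric's figures follow from the standard birthday-problem approximation, assuming the hash distributes strings uniformly over the 2^32 possible 32-bit values. A minimal sketch (the function name is illustrative, not from any library):

```python
import math

def collision_probability(k: int, bits: int = 32) -> float:
    """Approximate birthday-problem probability of at least one
    collision among k distinct strings hashed uniformly into 2**bits slots."""
    n = 2 ** bits
    # Standard approximation: P(collision) ~= 1 - exp(-k*(k-1) / (2*n))
    return 1.0 - math.exp(-k * (k - 1) / (2.0 * n))

print(f"{collision_probability(10_000):.4f}")  # ~0.0116, i.e. about 1%
print(f"{collision_probability(80_000):.4f}")  # ~0.5253, i.e. over 50%
```

Ten thousand strings already give roughly a 1% chance of a collision somewhere in the set, and eighty thousand push it past even odds, matching the figures above.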

For a graph showing the relationship between set size and probability of collision, see my article on the subject:

https://learn.microsoft.com/en-us/archive/blogs/ericlippert/socks-birthdays-and-hash-collisions

Photoreconnaissance answered 1/11, 2011 at 16:0 Comment(3)
+1 to both :) - you and Jon have answers to 2 different probabilities - Jon: "narrowing down the possible matches for a particular string" - low prob. of collision, Eric: "at least one collision in the set" - high.Wellinformed
I'm with Alexei: it depends on how you interpret the question. I absolutely agree with everything Eric's said, but stand by my (on the surface entirely contrary) answer too. Will edit to clarify, of course.Jahncke
I'll choose this as the accepted answer since it best suits the scenario I have. Thanks @JonSkeet and Jeremy McGee for your answers. They helped too.Dasie

Small - if you're talking about the chance of any two arbitrary unequal strings having a collision. (It will depend on just how "arbitrary" the strings are, of course - different contexts will be using different strings.)

Large - if you're talking about the chance of there being at least one collision in a large pool of arbitrary strings. The small individual probabilities are no match for the birthday problem.

That's about all you need to know. There are definitely cases where there will be collisions, and there have to be: there are only 2^32 possible hash codes but far more than 2^32 possible strings, so the pigeonhole principle guarantees that at least one hash code is shared by more than one string. However, you can reasonably trust that the hash has been designed to distribute values well.

You can rely on it as a pretty good way of narrowing down the possible matches for a particular string. It would be an unusual set of naturally-occurring strings which generated a lot of collisions - and even when there are some collisions, obviously if you can narrow a candidate search set down from 50K to fewer than 10 strings, that's a pretty big win. But you must not rely on it as a unique value for any string.
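The "narrowing down" pattern is just bucketing by hash code and then confirming candidates with a real equality check. A sketch in Python (using Python's built-in `hash` truncated to 32 bits as a stand-in for .NET's GetHashCode; the function names are made up for illustration):

```python
from collections import defaultdict

def build_index(strings):
    """Bucket strings by a 32-bit hash code. Collisions simply mean
    a bucket holds more than one string, which is harmless here."""
    index = defaultdict(list)
    for s in strings:
        index[hash(s) & 0xFFFFFFFF].append(s)
    return index

def find(index, target):
    """The hash narrows the search to one small bucket; the equality
    comparison then picks out true matches among the candidates."""
    candidates = index.get(hash(target) & 0xFFFFFFFF, [])
    return [s for s in candidates if s == target]

index = build_index(["blair", "brainlessness", "banana"])
print(find(index, "blair"))  # ['blair']
```

Even if two of these strings happened to share a bucket, the final equality check keeps the lookup correct; the hash only has to make the buckets small.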

Note that the algorithm used in .NET 4 differs between x86 and x64, so that example probably isn't valid on both platforms.

Jahncke answered 1/11, 2011 at 15:29 Comment(4)
I tried the example I provided in an x64 machine and the hashes are different, I will edit the question, thanks for pointing that.Dasie
that's better. Fence sit in the face of imprecise specifications ;)Salvo
@ShuggyCoUk: I'm trying to straddle the fence rather than sitting on top of it :)Jahncke
@Jon, watch out. If you get that wrong you can have a splitting... headacheSalvo

I think all that's possible to say is "small, but finite and definitely not zero" -- in other words you must not rely on GetHashCode() ever returning unique values for two different instances.

To my mind, hashcodes are best used when you want to tell quickly if two instances are different -- not if they're the same.

In other words, if two objects have different hash codes, you know they are different and need not do a (possibly expensive) deeper comparison.

However, if the hash codes for two objects are the same, you must go on to compare the objects themselves to see if they're actually the same.
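That asymmetry, in which unequal hashes prove inequality but equal hashes prove nothing, is the whole pattern. A minimal sketch (Python's `hash` stands in for GetHashCode here):

```python
def same_content(a: str, b: str) -> bool:
    # Different hash codes => the values are definitely different,
    # so we can skip the (potentially expensive) full comparison.
    if hash(a) != hash(b):
        return False
    # Equal hash codes prove nothing: fall through to a real comparison.
    return a == b

print(same_content("blair", "blair"))          # True
print(same_content("blair", "brainlessness"))  # False
```

Note that "blair" and "brainlessness" collide under .NET's x86 hash but not necessarily under Python's; either way, the final equality check gives the right answer.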

Subcelestial answered 1/11, 2011 at 15:29 Comment(0)

I ran a test on a database of 466k English words and got 48 collisions with string.GetHashCode(). MurmurHash gives slightly better results. More results are here: https://github.com/jitbit/MurmurHash.net

Laroy answered 9/3, 2018 at 9:39 Comment(1)
Thanks! I will check that.Dasie

Just in case your question is about the probability of a collision within a group of strings:

For n available slots and m occupying items:
Prob. of no collision on first insertion is 1.
Prob. of no collision on 2nd insertion is ( n - 1 ) / n
Prob. of no collision on 3rd insertion is ( n - 2 ) / n
Prob. of no collision on mth insertion is ( n - ( m - 1 ) ) / n

The probability of no collision after m insertions is the product of the above values: (n - 1)! / ((n - m)! * n^(m - 1)),

which equals nPm / n^m: the number of m-permutations of n slots, divided by n^m.
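The closed form can be checked numerically against the per-insertion product it came from, for example with the classic birthday numbers (365 slots, 23 items). A small sketch, with illustrative function names:

```python
import math

def p_no_collision_product(n: int, m: int) -> float:
    """Probability of no collision after m insertions into n slots,
    computed as the product of the per-insertion terms."""
    p = 1.0
    for i in range(m):
        p *= (n - i) / n
    return p

def p_no_collision_closed(n: int, m: int) -> float:
    """Same quantity in closed form: nPm / n^m."""
    return math.perm(n, m) / n ** m

n, m = 365, 23  # the classic birthday-problem numbers
print(round(p_no_collision_product(n, m), 4))  # 0.4927
print(round(p_no_collision_closed(n, m), 4))   # 0.4927
```

Both forms agree, and 1 - 0.4927 recovers the familiar result that 23 people have better-than-even odds of a shared birthday.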

And everybody is right: you can't assume zero collisions, so saying the probability is "low" may be true but doesn't allow you to assume that there will be no collisions. If you're using a hash table, the usual rule of thumb is that collisions start causing significant trouble once your hash table is about two-thirds full.

Ulcerous answered 1/11, 2011 at 16:12 Comment(0)

The probability of a collision between two randomly chosen strings is 1 / 2^(bits in hash code), assuming a perfect hash, which real hash functions can at best approximate.

Tolidine answered 1/11, 2011 at 15:35 Comment(3)
Define "random" in the context of strings. I would contend that in real life, strings of fewer than 10 characters occur more often than strings of more than ten million characters.Jahncke
@JonSkeet, fair enough, I've qualified that it's two randomly chosen strings, rather than truly random strings.Tolidine
But now that means the hash has to be designed knowing what kinds of strings will be involved. I suspect that a line-of-business app and a DNA-profiling app will have very different sets of strings. I'm just trying to highlight that designing a hash to really have uniform distribution for every real life use case is extremely hard or impossible. Having a "pretty good" distribution is probably good enough for most people.Jahncke
