Best hash function for mixed numeric and literal identifiers

For performance reasons I need to split a set of objects, identified by strings, into groups. An object may be identified either by a number or by a string in prefixed (qualified) form, with dots separating the parts of the identifier:

12
323
12343
2345233
123123131
ns1:my.label.one
ns1:my.label.two
ns1:my.label.three
ns1:system.text.one
ns2:edit.box.grey
ns2:edit.box.black
ns2:edit.box.mixed

Numeric identifiers range from 1 to several million. Very many of the text identifiers are likely to start with the same namespace prefix (ns1:) and the same path prefix (edit.box.).

What is the best hash function for this purpose? It would be good if I could somehow predict the bucket sizes from statistics of the object identifiers. Are there any good articles on constructing a good hash function from such statistical information?

There are several million such identifiers, but the purpose is to split them into groups of 1-2 thousand based on the hash function.
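
To illustrate, the kind of grouping I have in mind looks roughly like this (Python sketch; some_hash and the bucket count are placeholders, and the hash function itself is exactly what I'm asking about):

from collections import defaultdict

NUM_BUCKETS = 2048  # placeholder; chosen so groups land around 1-2 thousand entries

def group_objects(identifiers, some_hash):
    # Group identifiers by hash value modulo the number of buckets.
    buckets = defaultdict(list)
    for ident in identifiers:
        buckets[some_hash(ident) % NUM_BUCKETS].append(ident)
    return buckets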

Trine answered 14/12, 2009 at 16:33 Comment(1)
Have you considered using one or more of the general-purpose hash functions at partow.net/programming/hashfunctions/index.html? They are extremely fast and efficient.Vessel

Two good hash functions can both be mapped into the same space of values, and will in general not cause any new problems as a result of combining them.

So your hash function can look like this:

if it's an integer value:
    return int_hash(integer value)
return string_hash(string value)
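
A minimal Python sketch of that dispatch, assuming identifiers arrive as strings; string_hash would be something like the djb2 transcription further down:

def combined_hash(identifier, string_hash):
    # Numeric identifiers can serve as their own hash value (see below);
    # everything else goes through the supplied string hash (e.g. djb2).
    if identifier.isdigit():
        return int(identifier)
    return string_hash(identifier)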

Unless there's clumping of your integers around certain values modulo N, where N is a possible number of buckets, int_hash can just return its input.

Picking a string hash is not a novel problem. Try "djb2" (http://www.cse.yorku.ca/~oz/hash.html) or similar, unless you have obscene performance requirements.
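
For reference, a straightforward Python transcription of djb2 from that page (the original is C; treating the identifier as UTF-8 bytes and masking to 32 bits are my assumptions):

def djb2(s):
    # djb2: hash = hash * 33 + c, starting from 5381.
    h = 5381
    for c in s.encode("utf-8"):
        h = (h * 33 + c) & 0xFFFFFFFF  # emulate C unsigned overflow
    return h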

I don't think there's much point in modifying the hash function to take account of the common prefixes. If your hash function is good to start with, then it is unlikely that common prefixes will create any clumping of hash values.

If you do this, and the hash doesn't unexpectedly perform badly, and you put your several million hash values into a few thousand buckets, then each bucket's population is binomially distributed, which for these numbers is very close to a normal distribution with mean (several million / a few thousand) and variance roughly equal to that mean.

With an average of 1500 entries per bucket, that makes the standard deviation somewhere around 39 (the square root of 1500). 95% of a normal distribution lies within 2 standard deviations of the mean, so 95% of your buckets will contain roughly 1420-1580 entries, unless I've done my sums wrong. Is that adequate, or do you need the buckets to be of more closely similar sizes?
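
If you'd rather check that empirically than trust the back-of-envelope statistics, a quick simulation along these lines (the 3 million / 2048 figures are purely illustrative) shows how bucket sizes spread; a well-behaved hash is statistically indistinguishable from a uniform random assignment:

import random
from collections import Counter

NUM_ITEMS = 3_000_000   # illustrative
NUM_BUCKETS = 2048      # illustrative

counts = Counter(random.randrange(NUM_BUCKETS) for _ in range(NUM_ITEMS))
sizes = list(counts.values())
mean = sum(sizes) / len(sizes)
stddev = (sum((s - mean) ** 2 for s in sizes) / len(sizes)) ** 0.5
print(f"mean {mean:.0f}, stddev {stddev:.1f}, min {min(sizes)}, max {max(sizes)}")
# Expect stddev close to sqrt(mean), i.e. roughly 38 for a mean of about 1465.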

Politicking answered 14/12, 2009 at 17:17 Comment(4)
If the variation is still too much, use two hash functions instead of one and put the item in the bin that currently has fewer items in it. That reduces the variation from O(lg n / lg lg n) to O(lg lg n).Hesperian
@Steve, thanks for your detailed answer. Combining hash functions is a very good idea that I will definitely reuse. I don't really care whether the buckets are of similar size; for performance reasons I'm more concerned that the maximum bucket size is not larger than 1-2 thousand. So, you think that djb2 will give a good distribution for prefixed identifiers, right?Trine
@Keith, I can't put objects into different buckets; the bucket must be identified uniquely from the object identifier.Trine
Do beware that the flipside of 95% being in the range 1420-1580 is that 2.5% of buckets will be larger than about 1580. If your performance requirement is a hard "must not be more than 2000", then you have a different sort of problem from the one I'm solving. djb2 will give as good a distribution for prefixed identifiers as it would if the prefix wasn't there. Effectively the prefix just changes the "starting value" of the hash, so that instead of 5381 it's hash(prefix).Politicking

You would probably be safe going with sha1 and truncating it to whatever size you want.

It wouldn't be extremely efficient, but perhaps the hash function won't be a bottleneck?
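
A minimal sketch of that in Python with hashlib (keeping 16 bits is an arbitrary choice here; keep however many bits give you the bucket count you want):

import hashlib

def sha1_bucket(identifier, bits=16):
    # Hash with SHA-1, then keep only the low `bits` bits as the bucket index.
    digest = hashlib.sha1(identifier.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") & ((1 << bits) - 1)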

Brody answered 14/12, 2009 at 16:42 Comment(0)

I reckon CRC16 would be a reasonable hash to use on these strings, and the groups shouldn't go any bigger than 1-2 thousand.

This should make the hash table about 1MB + however many items you have in it * 4 bytes, so we're talking 50MB, and then you also have all the actual data being stored, which had better be very small.
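
For illustration, a bit-by-bit CRC-16 in Python (this is the CCITT-FALSE variant, polynomial 0x1021, initial value 0xFFFF; in practice you'd use a table-driven or library implementation):

def crc16_ccitt(data):
    # CRC-16/CCITT-FALSE over a bytes object.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# e.g. bucket = crc16_ccitt(b"ns1:my.label.one")  # one of 65536 possible buckets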

Tracheid answered 14/12, 2009 at 16:45 Comment(0)
