I'm not talking about distributed key/value systems, such as those typically used with memcached, which use consistent hashing to make adding or removing nodes a relatively cheap operation.
I'm talking about your standard in-memory hashtable, like Python's dict or Perl's hash.
It would seem like the benefits of consistent hashing would also apply to these standard data structures, by lowering the cost of resizing the hashtable. Real-time systems (and other latency-sensitive systems) would benefit from, or even require, hashtables optimized for low-cost growth, even if overall throughput declines slightly.
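To make the idea concrete, here's a minimal sketch of what I mean, in Python. None of this is any library's API; `RingTable`, `_point`, and `add_bucket` are names I made up. Buckets sit at points on a hash ring, a key belongs to the first bucket point clockwise of its own point, and adding a bucket only has to examine the keys of its clockwise successor, so a growth step touches roughly 1/n of the table instead of all of it:

```python
import bisect
import hashlib

def _point(s):
    # Stable 128-bit position on the ring for a string label.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class RingTable:
    """Toy dict whose buckets sit on a consistent-hash ring (a sketch)."""

    def __init__(self, n_buckets=8):
        self._points = sorted(_point("bucket-%d" % i) for i in range(n_buckets))
        self._buckets = {p: {} for p in self._points}
        self._n = n_buckets

    def _bucket_for(self, key):
        # Owner = first bucket point clockwise from the key's point (wrapping).
        i = bisect.bisect(self._points, _point(key)) % len(self._points)
        return self._buckets[self._points[i]]

    def __setitem__(self, key, value):
        self._bucket_for(key)[key] = value

    def __getitem__(self, key):
        return self._bucket_for(key)[key]

    def add_bucket(self):
        # The growth step: inserting one point splits exactly one existing
        # bucket (its clockwise successor); keys everywhere else stay put.
        p = _point("bucket-%d" % self._n)
        self._n += 1
        i = bisect.bisect(self._points, p)
        donor = self._buckets[self._points[i % len(self._points)]]
        self._points.insert(i, p)
        self._buckets[p] = {}
        for k in list(donor):
            b = self._bucket_for(k)
            if b is not donor:
                b[k] = donor.pop(k)
```

With keys spread evenly, `add_bucket` costs O(total_keys / n_buckets) rather than the O(total_keys) of a full rehash, which is the latency win I'm asking about.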
Wikipedia alludes to "incremental resizing" but essentially describes a hot/cold replacement approach to resizing; there is a separate article on "extendible hashing" that uses a trie for bucket lookup to make rehashing cheap.
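For comparison, the hot/cold incremental approach looks roughly like the following sketch (Redis's dict does something similar; the class and names here are mine, and plain Python dicts stand in for fixed-size bucket arrays). The old and new tables coexist, and each operation migrates a bounded number of entries, so no single insert pays for a full rehash:

```python
class IncrementalDict:
    """Sketch of hot/cold incremental resizing, not a real implementation."""

    MIGRATE_PER_OP = 4  # bound on migration work done per operation

    def __init__(self):
        self._old = None   # cold table being drained, or None when not resizing
        self._new = {}     # hot table receiving all new writes

    def _step(self):
        # Move a few entries from cold to hot on every operation.
        if self._old is None:
            return
        for _ in range(self.MIGRATE_PER_OP):
            try:
                k, v = self._old.popitem()
            except KeyError:
                self._old = None  # cold table drained: resize finished
                return
            self._new[k] = v

    def __setitem__(self, key, value):
        self._step()
        if self._old is not None:
            self._old.pop(key, None)  # don't leave a stale copy behind
        self._new[key] = value

    def __getitem__(self, key):
        self._step()
        if self._old is not None and key in self._old:
            return self._old[key]
        return self._new[key]

    def start_resize(self):
        # A real table would allocate a larger bucket array here; this toy
        # just swaps roles so the drain logic has something to do.
        if self._old is None and self._new:
            self._old, self._new = self._new, {}
```

Note the cost this approach pays: every lookup during a resize may probe two tables, which is the kind of throughput-for-latency trade I alluded to above.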
Just curious if anyone's heard of in-core, single-node hashtables that use consistent hashing to lower growth cost. Or is this requirement better met by some other approach (à la the two Wikipedia techniques listed above)?
Or ... is my whole question misguided? Do memory-paging considerations make the complexity not worth it? That is, the extra indirection of consistent hashing lets you rehash only a fraction of the total keys, but perhaps that doesn't matter, because you'll probably have to read from each existing page anyway; memory latency would then be your primary cost, and whether you rehash some or all of the keys wouldn't matter much compared to the cost of the memory accesses.... But on the other hand, with consistent hashing, all of your remapped keys have the same destination page, so there's going to be less memory thrashing than if your keys could remap to any of the existing pages.
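To quantify the "fraction of the total keys" part of that intuition, here's a small simulation one could run (all names are mine; `VNODES` smooths ownership on the ring with virtual points, and the exact percentages depend on the hash, though they should land near the 8/9 and 1/9 expectations):

```python
import bisect
import hashlib

def point(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

VNODES = 100  # virtual points per bucket, to even out arc sizes on the ring

def build_ring(n_buckets):
    owner_of = {point("b%d-%d" % (b, v)): b
                for b in range(n_buckets) for v in range(VNODES)}
    return sorted(owner_of), owner_of

def owner(ring, owner_of, key):
    return owner_of[ring[bisect.bisect(ring, point(key)) % len(ring)]]

keys = ["key-%d" % i for i in range(20000)]

# mod-N addressing: growing 8 -> 9 buckets remaps ~8/9 of all keys.
moved_mod = sum(point(k) % 8 != point(k) % 9 for k in keys)

# ring addressing: the same growth remaps ~1/9 of the keys in expectation.
r8, o8 = build_ring(8)
r9, o9 = build_ring(9)
moved_ring = sum(owner(r8, o8, k) != owner(r9, o9, k) for k in keys)

print("mod-N remapped: %.1f%%" % (100.0 * moved_mod / len(keys)))
print("ring remapped:  %.1f%%" % (100.0 * moved_ring / len(keys)))
```

So the remap fraction really does drop by roughly a factor of n; the open question is whether that wins anything once memory-access costs dominate, as speculated above.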
EDIT: added "data-structures" tag, clarified final sentence to say "page" instead of "bucket".