Linear probing is actually more memory efficient when the hash table is close to full.
Historically, memory was very scarce, so every byte mattered (and there are still environments where memory is tightly constrained).
Why does it use less memory?
Consider what the tables look like (the separate-chaining variants follow Wikipedia; other variants exist too, but they typically use more memory):
Linear probing   Separate chaining #1   Separate chaining #2
                 (list head in table)   (pointer in table)

|------|         |------|---|           |---|    |------|---|
|Object|         |Object|Ptr|           |Ptr| -> |Object|Ptr|
|------|         |------|---|           |---|    |------|---|
|Object|         |Object|Ptr|           |Ptr| -> |Object|Ptr|
|------|         |------|---|           |---|    |------|---|
| NULL |         | NULL |Ptr|           |Ptr|
|------|         |------|---|           |---|
   .                 .                    .
   .                 .                    .
   .                 .                    .

(Ptr stands for "pointer"; any pointer not pointing to something can be considered NULL.)
Separate chaining #1 always uses more memory than linear probing, since every slot in the table is bigger by the size of a pointer.
Separate chaining #2 might have an advantage when the table is sparsely populated, but when it fills up, it carries roughly two additional pointers per element: one in the table and one in the node.
templatetypedef is probably right that linear probing is typically faster (he's rarely wrong), but separate chaining is usually taught as the faster option, and you see it in major APIs (Java implementations, for example). Perhaps that's because of this belief, or perhaps it's to avoid the pathological cases of linear probing: with a few well-selected values, you can quickly get to O(n) performance with linear probing, where separate chaining would still have been O(1). Or perhaps there's some other reason.