If a dictionary/map is implemented as a HashMap, it has a best-case complexity of O(1), since in the best case a retrieval requires nothing more than computing the hash code of the key, provided there are no key collisions.
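To make that best case concrete, here is a minimal Java sketch (class and key names are made up for illustration): a `HashMap` lookup computes the key's `hashCode()`, maps it to a bucket, and does one equality check, independent of how many entries the map holds.

```java
import java.util.HashMap;
import java.util.Map;

public class BestCaseLookup {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);

        // Best case: one hashCode() call, one bucket-index computation,
        // one equals() check -- independent of the map's size.
        Integer age = ages.get("alice");
        System.out.println(age); // 30
    }
}
```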
A hash map may have a worst-case runtime complexity of O(n) if you have many key collisions or a very bad hash function, since in that case it degrades to a linear scan over all stored entries.
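As a sketch of that degradation (the `BadKey` class is hypothetical, built only to force collisions): its `hashCode()` is constant, so every entry lands in the same bucket. In a classic chaining hash map every lookup then walks the whole collision chain; recent JDKs soften this by turning oversized bins into trees, but the lookup is no longer a cheap constant-time operation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key whose hashCode() always returns the same value,
// so every entry collides into the same bucket.
final class BadKey {
    private final String name;

    BadKey(String name) { this.name = name; }

    @Override
    public int hashCode() { return 42; } // deliberately terrible hash

    @Override
    public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).name.equals(name);
    }
}

public class WorstCaseLookup {
    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(new BadKey("key" + i), i);
        }
        // Every get() now has to search one huge collision bucket
        // instead of finding its entry in a single step.
        System.out.println(map.get(new BadKey("key9999"))); // 9999
    }
}
```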
Also, O(1) doesn't mean instant; it means the cost is constant. So choosing the right implementation for a dictionary may also depend on the number of elements in the collection, since a very high constant cost per operation weighs much more heavily when there are only a few entries.
That's why dictionaries/maps are implemented differently for different scenarios. Java ships multiple implementations, C++'s std::map uses red-black trees while std::unordered_map is a hash table, etc. You choose among them based on the amount of data and on their best/average/worst-case runtime efficiency.
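In Java, for example, the trade-off might look like the sketch below (the sample words are arbitrary): `HashMap` gives average O(1) access with no ordering guarantees, while `TreeMap` gives O(log n) access with keys kept sorted, much like C++'s std::map.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class ChoosingAMap {
    public static void main(String[] args) {
        // HashMap: average O(1) get/put, no ordering guarantees.
        Map<String, Integer> hash = new HashMap<>();

        // TreeMap: O(log n) get/put, keys kept in sorted order
        // (backed by a red-black tree, like C++'s std::map).
        Map<String, Integer> tree = new TreeMap<>();

        for (String word : new String[] {"pear", "apple", "mango"}) {
            hash.put(word, word.length());
            tree.put(word, word.length());
        }

        System.out.println(hash.keySet()); // order unspecified
        System.out.println(tree.keySet()); // [apple, mango, pear]
    }
}
```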
O notation describes the growth of complexity with different inputs; it's not about how many operations you have. For example: with 1 value you need x seconds, with n values you need roughly x*n seconds => O(n). x could be many operations combined. – Slosh
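A rough way to see that growth in practice (a sketch only: the numbers are arbitrary and timings are noisy without JIT warm-up): doubling the input to a linear-time function should roughly double its running time.

```java
public class GrowthSketch {
    // Linear-time work: touches every element exactly once.
    static long sum(int[] data) {
        long total = 0;
        for (int value : data) {
            total += value;
        }
        return total;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1_000_000, 2_000_000, 4_000_000}) {
            int[] data = new int[n];
            long start = System.nanoTime();
            long result = sum(data);
            long elapsedMicros = (System.nanoTime() - start) / 1_000;
            // Doubling n should roughly double the elapsed time => O(n).
            System.out.println(n + " elements: " + elapsedMicros + " us (sum=" + result + ")");
        }
    }
}
```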
Typical dictionary hash functions are far cheaper than SHA hashes, so a dictionary lookup could be done in just a few assembly instructions. – Digged
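For reference, Java's String.hashCode() is documented to use the formula s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1], i.e. one multiply and one add per character; the sketch below (class name is illustrative) recomputes it to show how little work a typical dictionary hash involves.

```java
public class CheapHash {
    // Recomputes the documented String.hashCode() formula:
    // s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
    static int hash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i); // one multiply and one add per character
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.println(hash("key"));      // same value as...
        System.out.println("key".hashCode()); // ...the built-in method
    }
}
```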