Why is the order in dictionaries and sets arbitrary?

I don't understand how looping over a dictionary or set in Python is done in 'arbitrary' order.

I mean, it's a programming language, so everything in the language must be 100% determined, correct? Python must have some kind of algorithm that decides which part of the dictionary or set is chosen first, second, and so on.

What am I missing?

Torus answered 18/3, 2013 at 14:59 Comment(3)
The newest PyPy build (2.5, for Python 2.7) makes dictionaries ordered by default.Basenji
@KarlKnechtel: For dicts anyway. sets remain arbitrarily ordered (and there are no plans to change that; the use cases for dicts made the design that gets insertion-ordering more useful, and the costs are paid less frequently; for sets it's the other way around, so insertion-ordered sets are unlikely to ever become a thing).Escobar
@Escobar The top answer covers it anyway; really don't know what I was thinking with that older comment.Assumptive

Note: This answer was written before the implementation of the dict type changed, in Python 3.6. Most of the implementation details in this answer still apply, but the listing order of keys in dictionaries is no longer determined by hash values. The set implementation remains unchanged.

The order is not arbitrary, but depends on the insertion and deletion history of the dictionary or set, as well as on the specific Python implementation. For the remainder of this answer, for 'dictionary', you can also read 'set'; sets are implemented as dictionaries with just keys and no values.

Keys are hashed, and hash values are assigned to slots in a dynamic table (it can grow or shrink as needed). That mapping process can lead to collisions, meaning that a key will have to be slotted into a subsequent slot based on what is already there.

Listing the contents loops over the slots, and so keys are listed in the order they currently reside in the table.

Take the keys 'foo' and 'bar', for example, and let's assume the table size is 8 slots. In Python 2.7, hash('foo') is -4177197833195190597 and hash('bar') is 327024216814240868. Modulo 8, that means these two keys are slotted into slots 3 and 4:

>>> hash('foo')
-4177197833195190597
>>> hash('foo') % 8
3
>>> hash('bar')
327024216814240868
>>> hash('bar') % 8
4

This informs their listing order:

>>> {'bar': None, 'foo': None}
{'foo': None, 'bar': None}

All slots except 3 and 4 are empty; looping over the table lists slot 3 first, then slot 4, so 'foo' is listed before 'bar'.

'bar' and 'baz', however, have hash values that are exactly 8 apart and thus map to the exact same slot, 4:

>>> hash('bar')
327024216814240868
>>> hash('baz')
327024216814240876
>>> hash('bar') % 8
4
>>> hash('baz') % 8
4

Their order now depends on which key was slotted first; the second key will have to be moved to the next slot:

>>> {'baz': None, 'bar': None}
{'bar': None, 'baz': None}
>>> {'bar': None, 'baz': None}
{'baz': None, 'bar': None}

The table order differs here, because one or the other key was slotted first.

The technical name for the underlying structure used by CPython (the most commonly used Python implementation) is a hash table, one that uses open addressing. If you are curious, and understand C well enough, take a look at the C implementation for all the (well documented) details. You could also watch this PyCon 2010 presentation by Brandon Rhodes about how the CPython dict works, or pick up a copy of Beautiful Code, which includes a chapter on the implementation written by Andrew Kuchling.

Note that as of Python 3.3, a random hash seed is used as well, making hash collisions unpredictable to prevent certain types of denial of service (where an attacker renders a Python server unresponsive by causing mass hash collisions). This means that the order of a given dictionary or set is then also dependent on the random hash seed for the current Python invocation.
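As a rough sketch of the effect, you can launch child interpreters with different PYTHONHASHSEED values and compare the listing order of the same string set (the exact orders you see will vary with your Python version; the seed values here are arbitrary):

import os
import subprocess
import sys

# Each child interpreter is pinned to a different hash seed; string
# hashes, and therefore the set listing order, will usually differ.
code = "print(list({'foo', 'bar', 'baz'}))"
for seed in ('0', '42'):
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.check_output([sys.executable, '-c', code], env=env)
    print('PYTHONHASHSEED=%s: %s' % (seed, out.decode().strip()))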

Other implementations are free to use a different structure for dictionaries, as long as they satisfy the documented Python interface for them, but I believe that all implementations so far use a variation of the hash table.

CPython 3.6 introduces a new dict implementation that maintains insertion order, and is faster and more memory efficient to boot. Rather than keep a large sparse table where each row references the stored hash value and the key and value objects, the new implementation adds a smaller hash array that only references indices in a separate 'dense' table (one that only contains as many rows as there are actual key-value pairs), and it is the dense table that happens to list the contained items in order. See the proposal to Python-Dev for more details. Note that in Python 3.6 this is considered an implementation detail; Python-the-language does not specify that other implementations have to retain order. This changed in Python 3.7, where this detail was elevated to a language specification; for any implementation to be properly compatible with Python 3.7 or newer, it must copy this order-preserving behaviour. And to be explicit: this change doesn't apply to sets, as sets already have a 'small' hash structure.
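For example, on CPython 3.6 (and any implementation of Python 3.7 or newer), listing simply follows insertion order:

>>> d = {}
>>> d['banana'] = 1
>>> d['apple'] = 2
>>> d['cherry'] = 3
>>> list(d)
['banana', 'apple', 'cherry']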

Python 2.7 and newer also provide an OrderedDict class, a subclass of dict that adds an additional data structure to record key order. At the price of some speed and extra memory, this class remembers in what order you inserted keys; listing keys, values or items will then do so in that order. It uses a doubly-linked list stored in an additional dictionary to keep the order up to date efficiently. See the post by Raymond Hettinger outlining the idea. OrderedDict objects have other advantages, such as being re-orderable.
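A quick illustration; move_to_end() (Python 3 only) shows the 're-orderable' part:

>>> from collections import OrderedDict
>>> od = OrderedDict([('b', 1), ('a', 2)])
>>> list(od)
['b', 'a']
>>> od.move_to_end('b')
>>> list(od)
['a', 'b']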

If you want an ordered set, you can install the oset package; it works on Python 2.5 and up.

Broadbrim answered 18/3, 2013 at 15:1 Comment(4)
I don't think other Python implementations can use anything that isn't a hash table in one way or another (though there are now billions of different ways to implement hash tables, so there's still some liberty). The fact that dictionaries use __hash__ and __eq__ (and nothing else) is practically a language guarantee, not an implementation detail.Massorete
@delnan: I wonder if you can still use a BTree with hashes and equality tests.. I am certainly not ruling that out, in any case. :-)Broadbrim
It's certainly correct, and I'd be glad to be proven wrong w.r.t. feasibility, but I don't see any way one could beat a hash table without requiring a broader contract. A BTree wouldn't have better average-case performance and doesn't give you better worst-case either (hash collisions still mean linear search). So you only gain better resistance to many hashes being congruent (mod tablesize), and there are many other great ways to handle that (some of which are used in dictobject.c) and end up with far fewer comparisons than a BTree needs to even find the right subtree.Massorete
@delnan: I agree completely; I most of all didn't want to be bashed for not allowing for other implementation options.Broadbrim

This is more a response to the question 'Python 3.41 A set' before it was closed as a duplicate.


The others are right: don't rely on the order. Don't even pretend there is one.

That said, there is one thing you can rely on:

list(myset) == list(myset)

That is, the order is stable.


Understanding why there is a perceived order requires understanding a few things:

  • That Python uses hash sets,

  • How CPython's hash set is stored in memory and

  • How numbers get hashed

From the top:

A hash set is a method of storing arbitrary data with really fast lookup times.

It has a backing array:

# A C array; items may be NULL,
# a pointer to an object, or a
# special dummy object
_ _ 4 _ _ 2 _ _ 6

We shall ignore the special dummy object, which exists only to make removes easier to deal with, because we won't be removing from these sets.

In order to have really fast lookup, you do some magic to calculate a hash from an object. The only rule is that two objects which are equal have the same hash. (But if two objects have the same hash they can be unequal.)
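For example, equal values of different types must agree on their hash, even though the objects are distinct:

>>> 1 == 1.0
True
>>> hash(1) == hash(1.0)
True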

You then make an index by taking the hash modulo the array length:

hash(4) % len(storage) = index 2

This makes it really fast to access elements.

Hashes are only most of the story, as hash(n) % len(storage) and hash(m) % len(storage) can result in the same number. In that case, several different strategies can be used to resolve the conflict. CPython uses "linear probing" 9 times before doing pseudorandom probing, so it will look to the right of the slot for up to 9 places before looking elsewhere.
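Here is a toy sketch of the idea, using plain linear probing only (CPython's real strategy also mixes higher hash bits into its pseudorandom probes, so this is an illustration, not the actual algorithm):

def toy_insert(table, key):
    # Open addressing: start at hash(key) % len(table) and walk right
    # (wrapping around) until a free slot is found.
    i = hash(key) % len(table)
    while table[i] is not None:
        i = (i + 1) % len(table)
    table[i] = key

table = [None] * 8
for key in (4, 2, 6, 12):   # 4 and 12 collide: both map to slot 4
    toy_insert(table, key)
print(table)   # [None, None, 2, None, 4, 12, 6, None]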

CPython's hash sets are stored like this:

  • A hash set can be no more than 60% full (note: this load factor was previously 66%; it was reduced in Python 3.7). If there are 20 elements and the backing array is 30 elements long, the backing store will resize to be larger. This is because you get collisions more often with small backing stores, and collisions slow everything down.

  • When the backing store becomes too full, it will be automatically resized to increase the ratio of unused space (a higher ratio of unused space means it's faster to find a slot when handling a hash collision). For small sets, storage will be quadrupled in size, and for large sets (>50,000) it will be doubled in size (source).

So when you create a set, the backing store has length 8. Once it holds 4 elements and you add a fifth, 5 > 3/5 · 8 = 4.8, so this triggers a resize, and the backing store quadruples to size 32.

>>> import sys
>>> s = set()
>>> for i in range(10):
...     print(len(s), sys.getsizeof(s))
...     s.add(i)
... 
0 216
1 216
2 216
3 216
4 216
5 728
6 728
7 728
8 728
9 728

Finally, hash(n) just returns n for (small enough) integers, except for hash(-1), which returns -2 because the value -1 is reserved for another usage.
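You can check this directly:

>>> hash(5)
5
>>> hash(-1)
-2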


So, let's look at the first one:

v_set = {88,11,1,33,21,3,7,55,37,8}

len(v_set) is 10, so the backing store is at least 15(+1) after all items have been added. The relevant power of 2 is 32. So the backing store is:

__ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __

We have

hash(88) % 32 = 24
hash(11) % 32 = 11
hash(1)  % 32 = 1
hash(33) % 32 = 1
hash(21) % 32 = 21
hash(3)  % 32 = 3
hash(7)  % 32 = 7
hash(55) % 32 = 23
hash(37) % 32 = 5
hash(8)  % 32 = 8

so these insert as:

__  1 __  3 __ 37 __  7  8 __ __ 11 __ __ __ __ __ __ __ __ __ 21 __ 55 88 __ __ __ __ __ __ __
   33 ← Can't also be where 1 is;
        either 1 or 33 has to move

So we would expect an order like

{[1 or 33], 3, 37, 7, 8, 11, 21, 55, 88}

with the 1 or 33 that isn't at the start somewhere else. This will use linear probing, so we will either have:

       ↓
__  1 33  3 __ 37 __  7  8 __ __ 11 __ __ __ __ __ __ __ __ __ 21 __ 55 88 __ __ __ __ __ __ __

or

       ↓
__ 33  1  3 __ 37 __  7  8 __ __ 11 __ __ __ __ __ __ __ __ __ 21 __ 55 88 __ __ __ __ __ __ __

You might expect the 33 to be the one that's displaced because the 1 was already there, but due to the resizing that happens as the set is being built, this isn't actually the case. Every time the set gets rebuilt, the items already added are effectively reordered.

Now you can see why

{7,5,11,1,4,13,55,12,2,3,6,20,9,10}

might be in order. There are 14 elements, so the backing store is at least 21+1, which means 32:

__ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __

The numbers 1 to 13 (there is no 8 in this set) hash into the matching slots. 20 goes in slot 20.

__  1  2  3  4  5  6  7 __  9 10 11 12 13 __ __ __ __ __ __ 20 __ __ __ __ __ __ __ __ __ __ __

55 goes in slot hash(55) % 32, which is 23:

__  1  2  3  4  5  6  7 __  9 10 11 12 13 __ __ __ __ __ __ 20 __ __ 55 __ __ __ __ __ __ __ __

If we chose 50 instead, we'd expect

__  1  2  3  4  5  6  7 __  9 10 11 12 13 __ __ __ __ 50 __ 20 __ __ __ __ __ __ __ __ __ __ __

And lo and behold:

>>> {1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 20, 50}
{1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 50, 20}

pop is implemented quite simply by the looks of things: it traverses the underlying array and pops the first element, skipping over unused slots and "dummy" entries (tombstone markers from removed elements).
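A small sketch of the consequence: which element pop returns depends on where keys landed in the table, not on insertion order.

# Both keys map to slot 1 (1 % 8 == 33 % 8 == 1), so pop() returns
# whichever was slotted into slot 1 first; expect 1 or 33, not a
# fixed choice.
s = {33, 1}
print(s.pop())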


This is all implementation detail.

Basenji answered 29/9, 2014 at 12:13 Comment(0)

"Arbitrary" isn't the same thing as "non-determined".

What they're saying is that there are no useful properties of dictionary iteration order that are "in the public interface". There almost certainly are many properties of the iteration order that are fully determined by the code that currently implements dictionary iteration, but the authors aren't promising them to you as something you can use. This gives them more freedom to change these properties between Python versions (or even just in different operating conditions, or completely at random at runtime) without worrying that your program will break.

Thus if you write a program that depends on any property at all of dictionary order, then you are "breaking the contract" of using the dictionary type, and the Python developers are not promising that this will always work, even if it appears to work for now when you test it. It's basically the equivalent of relying on "undefined behaviour" in C.

Atkins answered 3/11, 2013 at 1:47 Comment(1)
Note that one part of dictionary iteration is well defined: Iterating over the keys, values or items of a given dictionary will each happen in the same order, as long as no changes have been made to the dictionary in between. That means that d.items() is essentially identical to zip(d.keys(), d.values()). If any items are added to the dictionary however, all bets are off. The order could change completely (if the hash table needed to be resized), though most of the time you'd just find the new item turning up in some arbitrary spot in the sequence.Servais

The other answers to this question are excellent and well written. The OP asks 'how', which I interpret as 'how do they get away with it' or 'why'.

The Python documentation says dictionaries are not ordered because the Python dictionary implements the abstract data type 'associative array'. As the documentation says,

the order in which the bindings are returned may be arbitrary

In other words, a computer science student cannot assume that an associative array is ordered. The same is true for sets in mathematics:

the order in which the elements of a set are listed is irrelevant

and in computer science:

a set is an abstract data type that can store certain values, without any particular order

Implementing a dictionary with a hash table is an implementation detail; it is interesting in that it gives the same properties as an associative array as far as order is concerned.

Luing answered 8/2, 2015 at 2:18 Comment(1)
You're basically right but it would be a little closer (and give a good hint at the reason it's "unordered") to say it's an implementation of a hash table rather than an assoc array.Brey

Python uses a hash table for storing dictionaries, so there is no order in dictionaries or other iterable objects that use a hash table.

But regarding the index of an item in the hash table, Python calculates it based on the following code within hashtable.c:

key_hash = ht->hash_func(key);
index = key_hash & (ht->num_buckets - 1);

Therefore, since the hash value of an integer is the integer itself*, the index is based on the number: ht->num_buckets - 1 is a constant, so the index is calculated by a bitwise AND between (ht->num_buckets - 1) and the number itself* (except for -1, whose hash is -2). For other objects, the same is done with their hash values.

Consider the following example with a set, which also uses a hash table:

>>> set([0,1919,2000,3,45,33,333,5])
set([0, 33, 3, 5, 45, 333, 2000, 1919])

For the number 33 we have:

33 & (ht->num_buckets - 1) = 1

That actually is:

'0b100001' & '0b111'= '0b1' # 1 the index of 33

Note that in this case (ht->num_buckets - 1) is 8 - 1 = 7, or 0b111.

And for 1919:

'0b11101111111' & '0b111' = '0b111' # 7 the index of 1919

And for 333:

'0b101001101' & '0b111' = '0b101' # 5 the index of 333
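The same computation can be reproduced directly in Python (a small check that assumes a table of 8 buckets, i.e. a mask of 0b111):

num_buckets = 8
for key in (33, 1919, 333):
    index = hash(key) & (num_buckets - 1)
    print('%d -> %s' % (key, bin(index)))   # 0b1, 0b111, 0b101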

For more details about Python's hash function, it is good to read the following quotes from the Python source code:

Major subtleties ahead: Most hash schemes depend on having a "good" hash function, in the sense of simulating randomness. Python doesn't: its most important hash functions (for strings and ints) are very regular in common cases:

>>> map(hash, (0, 1, 2, 3))
[0, 1, 2, 3]
>>> map(hash, ("namea", "nameb", "namec", "named"))
[-1658398457, -1658398460, -1658398459, -1658398462]

This isn't necessarily bad! To the contrary, in a table of size 2**i, taking the low-order i bits as the initial table index is extremely fast, and there are no collisions at all for dicts indexed by a contiguous range of ints. The same is approximately true when keys are "consecutive" strings. So this gives better-than-random behavior in common cases, and that's very desirable.

OTOH, when collisions occur, the tendency to fill contiguous slices of the hash table makes a good collision resolution strategy crucial. Taking only the last i bits of the hash code is also vulnerable: for example, consider the list [i << 16 for i in range(20000)] as a set of keys. Since ints are their own hash codes, and this fits in a dict of size 2**15, the last 15 bits of every hash code are all 0: they all map to the same table index.

But catering to unusual cases should not slow the usual ones, so we just take the last i bits anyway. It's up to collision resolution to do the rest. If we usually find the key we're looking for on the first try (and, it turns out, we usually do -- the table load factor is kept under 2/3, so the odds are solidly in our favor), then it makes best sense to keep the initial index computation dirt cheap.


* The (simplified) hash function for class int:

class int:
    def __hash__(self):
        value = self
        if value == -1:
            value = -2
        return value
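A quick check in the REPL; note that this also means -1 and -2 collide on their hash while being unequal:

>>> hash(-1)
-2
>>> hash(-2)
-2
>>> hash(-1) == hash(-2)
True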

Earplug answered 29/5, 2015 at 21:40 Comment(0)
