How do I merge two dictionaries in a single expression in Python?
E

43

6938

I want to merge two dictionaries into a new dictionary.

x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}
z = merge(x, y)

>>> z
{'a': 1, 'b': 3, 'c': 4}

Whenever a key k is present in both dictionaries, only the value y[k] should be kept.

Eagleeyed answered 2/9, 2008 at 7:44 Comment(0)
A
9101

How can I merge two Python dictionaries in a single expression?

For dictionaries x and y, their shallowly-merged dictionary z takes values from y, replacing those from x.

  • In Python 3.9.0 or greater (released 5 October 2020, PEP-584, discussed here):

    z = x | y
    
  • In Python 3.5 or greater:

    z = {**x, **y}
    
  • In Python 2 (or 3.4 or lower), write a function:

    def merge_two_dicts(x, y):
        z = x.copy()   # start with keys and values of x
        z.update(y)    # modifies z with keys and values of y
        return z
    

    and now:

    z = merge_two_dicts(x, y)
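
As a quick check (a minimal sketch, assuming a Python 3.9+ interpreter so that all three forms above are available), every approach produces the same result for the dictionaries in the question:

x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}

assert merge_two_dicts(x, y) == {**x, **y} == {'a': 1, 'b': 3, 'c': 4}
assert (x | y) == {'a': 1, 'b': 3, 'c': 4}   # Python 3.9+ only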
    

Explanation

Say you have two dictionaries and you want to merge them into a new dictionary without altering the original dictionaries:

x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}

The desired result is to get a new dictionary (z) with the values merged, and the second dictionary's values overwriting those from the first.

>>> z
{'a': 1, 'b': 3, 'c': 4}

A new syntax for this, proposed in PEP 448 and available as of Python 3.5, is

z = {**x, **y}

And it is indeed a single expression.

Note that we can merge in with literal notation as well:

z = {**x, 'foo': 1, 'bar': 2, **y}

and now:

>>> z
{'a': 1, 'b': 3, 'foo': 1, 'bar': 2, 'c': 4}

It is shown as implemented in the release schedule for 3.5, PEP 478, and it has made its way into the What's New in Python 3.5 document.

However, since many organizations are still on Python 2, you may wish to do this in a backward-compatible way. The classically Pythonic way, available in Python 2 and Python 3.0-3.4, is to do this as a two-step process:

z = x.copy()
z.update(y) # which returns None since it mutates z

In both approaches, y will come second and its values will replace x's values, thus b will point to 3 in our final result.
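
A quick sanity check (a minimal sketch using the example dictionaries above) shows that z gets the merged result while neither original is modified:

z = x.copy()
z.update(y)

assert z == {'a': 1, 'b': 3, 'c': 4}
assert x == {'a': 1, 'b': 2} and y == {'b': 3, 'c': 4}   # originals untouched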

Not yet on Python 3.5, but want a single expression

If you are not yet on Python 3.5 or need to write backward-compatible code, and you want this in a single expression, the most performant approach that is still correct is to put it in a function:

def merge_two_dicts(x, y):
    """Given two dictionaries, merge them into a new dict as a shallow copy."""
    z = x.copy()
    z.update(y)
    return z

and then you have a single expression:

z = merge_two_dicts(x, y)

You can also make a function to merge an arbitrary number of dictionaries, from zero to a very large number:

def merge_dicts(*dict_args):
    """
    Given any number of dictionaries, shallow copy and merge into a new dict,
    precedence goes to key-value pairs in latter dictionaries.
    """
    result = {}
    for dictionary in dict_args:
        result.update(dictionary)
    return result

This function will work in Python 2 and 3 for all dictionaries. For example, given dictionaries a through g:

z = merge_dicts(a, b, c, d, e, f, g) 

and key-value pairs in g will take precedence over dictionaries a to f, and so on.
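
For instance, with three small throwaway dictionaries (a minimal sketch; any mappings would do), you can see both the precedence and the zero-argument case:

a = {'k': 1}
b = {'k': 2, 'm': 2}
c = {'m': 3}

assert merge_dicts(a, b, c) == {'k': 2, 'm': 3}   # later dicts win on conflicts
assert merge_dicts() == {}                        # zero dictionaries is fine too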

Critiques of Other Answers

Don't use what you see in the formerly accepted answer:

z = dict(x.items() + y.items())

In Python 2, you create two lists in memory for each dict, create a third list in memory with length equal to the length of the first two put together, and then discard all three lists to create the dict. In Python 3, this will fail because you're adding two dict_items objects together, not two lists -

>>> c = dict(a.items() + b.items())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'dict_items' and 'dict_items'

and you would have to explicitly create them as lists, e.g. z = dict(list(x.items()) + list(y.items())). This is a waste of resources and computation power.

Similarly, taking the union of items() in Python 3 (viewitems() in Python 2.7) will also fail when values are unhashable objects (like lists, for example). Even if your values are hashable, since sets are semantically unordered, the behavior is undefined with regard to precedence. So don't do this:

>>> c = dict(a.items() | b.items())

This example demonstrates what happens when values are unhashable:

>>> x = {'a': []}
>>> y = {'b': []}
>>> dict(x.items() | y.items())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'

Here's an example where y should have precedence, but instead the value from x is retained due to the arbitrary order of sets:

>>> x = {'a': 2}
>>> y = {'a': 1}
>>> dict(x.items() | y.items())
{'a': 2}

Another hack you should not use:

z = dict(x, **y)

This uses the dict constructor and is very fast and memory-efficient (even slightly more so than our two-step process), but unless you know precisely what is happening here (that is, the second dict is being passed as keyword arguments to the dict constructor), it's difficult to read, it's not the intended usage, and so it is not Pythonic.
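
To spell out what the constructor actually sees here (a sketch with the example dictionaries; note that 'b' and 'c' arrive as keyword arguments):

x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}

assert dict(x, **y) == dict(x, b=3, c=4) == {'a': 1, 'b': 3, 'c': 4}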

Here's an example of the usage being remediated in Django.

Dictionaries are intended to take hashable keys (e.g. frozensets or tuples), but this method fails in Python 3 when keys are not strings.

>>> c = dict(a, **b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: keyword arguments must be strings

From the mailing list, Guido van Rossum, the creator of the language, wrote:

I am fine with declaring dict({}, **{1:3}) illegal, since after all it is abuse of the ** mechanism.

and

Apparently dict(x, **y) is going around as "cool hack" for "call x.update(y) and return x". Personally, I find it more despicable than cool.

It is my understanding (as well as the understanding of the creator of the language) that the intended usage for dict(**y) is for creating dictionaries for readability purposes, e.g.:

dict(a=1, b=10, c=11)

instead of

{'a': 1, 'b': 10, 'c': 11}

Response to comments

Despite what Guido says, dict(x, **y) is in line with the dict specification, which, by the way, works for both Python 2 and 3. The fact that this only works for string keys is a direct consequence of how keyword parameters work and not a shortcoming of dict. Nor is using the ** operator in this place an abuse of the mechanism; in fact, ** was designed precisely to pass dictionaries as keywords.

Again, it doesn't work in Python 3 when keys are not strings. The implicit calling contract is that namespaces take ordinary dictionaries, while users must only pass keyword arguments that are strings. All other callables enforce it; dict broke this consistency in Python 2:

>>> foo(**{('a', 'b'): None})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: foo() keywords must be strings
>>> dict(**{('a', 'b'): None})
{('a', 'b'): None}

This inconsistency was bad given other implementations of Python (PyPy, Jython, IronPython). Thus it was fixed in Python 3, as this usage could be a breaking change.

I submit to you that it is malicious incompetence to intentionally write code that only works in one version of a language or that only works given certain arbitrary constraints.

More comments:

dict(x.items() + y.items()) is still the most readable solution for Python 2. Readability counts.

My response: merge_two_dicts(x, y) actually seems much clearer to me, if we're actually concerned about readability. And it is not forward compatible, as Python 2 is increasingly deprecated.

{**x, **y} does not seem to handle nested dictionaries. the contents of nested keys are simply overwritten, not merged [...] I ended up being burnt by these answers that do not merge recursively and I was surprised no one mentioned it. In my interpretation of the word "merging" these answers describe "updating one dict with another", and not merging.

Yes. I must refer you back to the question, which is asking for a shallow merge of two dictionaries, with the first's values being overwritten by the second's - in a single expression.
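
To make the distinction concrete, here is a minimal sketch of what a shallow merge does with nested dictionaries - the whole nested value is replaced, not merged:

x = {'a': {'inner': 1}}
y = {'a': {'other': 2}}

assert {**x, **y} == {'a': {'other': 2}}   # the nested dict from x is gone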

Assuming two dictionaries of dictionaries, one might recursively merge them in a single function, but you should be careful not to modify the dictionaries from either source, and the surest way to avoid that is to make a copy when assigning values. As keys must be hashable and are usually therefore immutable, it is pointless to copy them:

from copy import deepcopy

def dict_of_dicts_merge(x, y):
    z = {}
    overlapping_keys = x.keys() & y.keys()
    for key in overlapping_keys:
        z[key] = dict_of_dicts_merge(x[key], y[key])
    for key in x.keys() - overlapping_keys:
        z[key] = deepcopy(x[key])
    for key in y.keys() - overlapping_keys:
        z[key] = deepcopy(y[key])
    return z

Usage:

>>> x = {'a':{1:{}}, 'b': {2:{}}}
>>> y = {'b':{10:{}}, 'c': {11:{}}}
>>> dict_of_dicts_merge(x, y)
{'b': {2: {}, 10: {}}, 'a': {1: {}}, 'c': {11: {}}}

Coming up with contingencies for other value types is far beyond the scope of this question, so I will point you at my answer to the canonical question on a "Dictionaries of dictionaries merge".

Less Performant But Correct Ad-hocs

These approaches are less performant, but they will provide correct behavior. They will be much less performant than copy-and-update or the new unpacking because they iterate through each key-value pair at a higher level of abstraction, but they do respect the order of precedence (latter dictionaries have precedence).

You can also chain the dictionaries manually inside a dict comprehension:

{k: v for d in dicts for k, v in d.items()} # iteritems in Python 2.7

or in Python 2.6 (and perhaps as early as 2.4 when generator expressions were introduced):

dict((k, v) for d in dicts for k, v in d.items()) # iteritems in Python 2

itertools.chain will chain the iterators over the key-value pairs in the correct order:

from itertools import chain
z = dict(chain(x.items(), y.items())) # iteritems in Python 2
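
If you have more than two dictionaries, the same idea extends with chain.from_iterable (a sketch; dicts here stands for any iterable of dictionaries, with later ones taking precedence):

from itertools import chain

dicts = [x, y]   # any iterable of dictionaries
z = dict(chain.from_iterable(d.items() for d in dicts))   # iteritems in Python 2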

Performance Analysis

I'm only going to do the performance analysis of the usages known to behave correctly. (Self-contained so you can copy and paste yourself.)

from timeit import repeat
from itertools import chain

x = dict.fromkeys('abcdefg')
y = dict.fromkeys('efghijk')

def merge_two_dicts(x, y):
    z = x.copy()
    z.update(y)
    return z

min(repeat(lambda: {**x, **y}))
min(repeat(lambda: merge_two_dicts(x, y)))
min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))
min(repeat(lambda: dict(chain(x.items(), y.items()))))
min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))

In Python 3.8.1, NixOS:

>>> min(repeat(lambda: {**x, **y}))
1.0804965235292912
>>> min(repeat(lambda: merge_two_dicts(x, y)))
1.636518670246005
>>> min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))
3.1779992282390594
>>> min(repeat(lambda: dict(chain(x.items(), y.items()))))
2.740647904574871
>>> min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))
4.266070580109954
$ uname -a
Linux nixos 4.19.113 #1-NixOS SMP Wed Mar 25 07:06:15 UTC 2020 x86_64 GNU/Linux


Aristaeus answered 10/11, 2014 at 22:11 Comment(22)
Strings only limitation for keywords expansion is enough to rule out {**x, **y} method. However, the items approach can be made workable by converting dictitems to list like dict(list(x.items()), list(y.items())).Subito
@MohammadAzim "strings only" only applies to keyword argument expansion in callables, not generalized unpacking syntax. To demonstrate that this works: {**{(0, 1):2}} -> {(0, 1): 2}Aristaeus
If dict1 and dict2 have some keys in common, {**dict1, **dict2} raises TypeError: type object got multiple values for keyword argument common_key_name. I guess I'll stay with {dict1, **dict2} and string keys.Childbirth
@Childbirth - that error message is unrelated. Check out this answer: #18950554Aristaeus
@GringoSuave Are you saying that it needs a summary? Because I would characterize the first part as a summary. If you say it's too long, what would you like to cut from this answer, that you think, in the cutting, would create value for users? Cheers!Aristaeus
Hi, the top is a summary, yes. Up to you. The whole thing would be a great blog post. Note Py 3.4 and below are EOL, 3.5 approaching EOL in 2020-09.Harrovian
I agree with Gringo, this answer should be made shorter by removing all mention of Python < 3.5.Dissociable
I agree with the eagerness to leave the old way behind, but sometimes people have to work in environments where they only have the older technology available to them. People also have to update code, and seeing the old way next to the new way allows them to confidently replace the old code with equivalent new code. I am open to suggestions on reorganizing the material, but I think we need to keep the older information.Aristaeus
Here's the expected release schedule: python.org/dev/peps/pep-0596Aristaeus
What x and y did you use in the benchmarks? And I find dict((k, v) ... for k, v in d.items()) somewhat clumsy and artificially inefficient. No need to unpack and repack every item. I think it should be dict(item ... for item in d.items()).Spec
@AaronHall With fresh new benchmarks, thanks :-). You have another dict((k, v) ...) a bit higher in your post, btw. And I just thought of another solution that I think hasn't been mentioned yet: dict([*x.items(), *y.items()]). It's a bit slower for me than the chain one.Spec
While possible, that's using the "new way" except for lists instead of dictionaries - I'm not going to suggest people are doing something we have no evidence they're doing that is clearly suboptimal... you could do even more things that are suboptimal, so I'm not going to introduce the idea.Aristaeus
Wow! That's a thorough answer. One minor comment, though: "the intended usage for dict(**y) is for creating dictionaries for readability purposes". I would argue that it's also to make the code less error prone, because dict(a=17, b=19, a=23) would fail with "SyntaxError: keyword argument repeated", while {"a": 17, "b": 19, "a": 23} would not (with 23 overwriting 17) and your accidental double key "a" would've gone unnoticed. Of course, if you want to allow double keys (I cannot see why, though), then {...} syntax is the way to go. Many linters, of course, warn of double keys.Belfort
@VedranŠego - There are a lot of cases where you would prefer to override. As an example - hierarchical value setup: you have your base config in a dictionary and overrides coming from a different dict. Doing copy & update works, but that's inconsistent API design. list + list works... while dict + dict doesn't.Quiz
@AleksandrPanzin, I said "I cannot see why [you'd want double keys in a single dict definition]". Of course you'd want proper updates between multiple dictionaries, but I don't see the purpose inside a single definition (like two "a" keys in my example).Belfort
huh; I would have expected x | y to be like {**y, **x}, not {**x, **y}Sargassum
I think that the requirement of "single-line" should give way to the clarity given by the two-line "copy + update". This will be readable to almost all, whereas the more concise syntax requires intimate knowledge of semantics.Wreath
x | y is confusing to read, as @Sargassum explained it is counter-intuitive. I prefer x |= y which is going to mutate x with new values from y and reads far clearer!Placement
Having to use 3.6 and needing the deep copy: for the one other person (if that) who is interested, the dict_of_dicts_merge() function took 29 times longer than {**x, **y}. Sigh.Cluny
@Cluny - I think you understand, but for others who might come along later: dict_of_dicts_merge is slower because it's a recursive deep copy, so the comparison is a bit unfair.Aristaeus
my "sigh" is that copying {'foo': {'bar': 'blah'}} takes 29 times longer than {'foo': 'bar', 'bar': 'blah'}Cluny
Nice comprehensive answer - should be using this as an exampleMatsumoto
T
1821

In your case, you can do:

z = dict(list(x.items()) + list(y.items()))

This will, as you want it, put the final dict in z, and make the value for key b be properly overridden by the second (y) dict's value:

>>> x = {'a': 1, 'b': 2}
>>> y = {'b': 10, 'c': 11}
>>> z = dict(list(x.items()) + list(y.items()))
>>> z
{'a': 1, 'c': 11, 'b': 10}

If you use Python 2, you can even remove the list() calls. To create z:

>>> z = dict(x.items() + y.items())
>>> z
{'a': 1, 'c': 11, 'b': 10}

If you use Python 3.9 or greater, you can directly use:

>>> x = {'a': 1, 'b': 2}
>>> y = {'b': 10, 'c': 11}
>>> z = x | y
>>> z
{'a': 1, 'c': 11, 'b': 10}
Tuyere answered 2/9, 2008 at 7:50 Comment(2)
Don't use this as it is very inefficient. (See the timeit results below.) It may have been necessary in the Py2 days if a wrapper function was not an option, but those days are now past.Harrovian
This didn't work. I got keys as values in the combined dict.Sarina
C
755

An alternative:

z = x.copy()
z.update(y)
Cardoon answered 2/9, 2008 at 13:0 Comment(5)
To clarify why this doesn't meet the criteria provided by the question: it's not a single expression and it doesn't return z.Nitza
Put it this way: if you need to put two lines of comments explaining your one line of code to the people you hand your code off to...have you really done it in one line? :) I fully agree Python is not good for this: there should be a much easier way. While this answer is more pythonic, is it really all that explicit or clear? Update is not one of the "core" functions that people tend to use a lot.Kanarese
Well, if people insist on making it a oneliner, you can always do (lambda z: z.update(y) or z)(x.copy()) :PBanquer
@AlexanderOh I am not sure whether this is a joke or not; I see this as a perfectly valid answer (at least in terms of it working), but of course the second comment sets a precedent! Either way, it is indeed Pythonic!Procter
@WilliamMartens it wasn't a joke. But let's face it, if you optimize for single-line expressions, you are optimizing for the wrong thing.Nitza
E
442

Another, more concise, option:

z = dict(x, **y)

Note: this has become a popular answer, but it is important to point out that if y has any non-string keys, the fact that this works at all is an abuse of a CPython implementation detail, and it does not work in Python 3, or in PyPy, IronPython, or Jython. Also, Guido is not a fan. So I can't recommend this technique for forward-compatible or cross-implementation portable code, which really means it should be avoided entirely.

Eagleeyed answered 2/9, 2008 at 15:52 Comment(2)
Works fine in Python 3 and PyPy and PyPy 3, can't speak to Jython or Iron. Given this pattern is explicitly documented (see the third constructor form in this documentation) I'd argue it's not an "implementation detail" but intentional feature use.Bivouac
@Bivouac You missed the key phrase "if y has any non-string keys." That's what doesn't work in Python3; the fact that it works in CPython 2 is an implementation detail that can't be relied on. IFF all your keys are guaranteed to be strings, this is a fully supported option.Eagleeyed
S
259

This probably won't be a popular answer, but you almost certainly do not want to do this. If you want a copy that's a merge, then use copy (or deepcopy, depending on what you want) and then update. The two lines of code are much more readable - more Pythonic - than the single line creation with .items() + .items(). Explicit is better than implicit.

In addition, when you use .items() (pre Python 3.0), you're creating a new list that contains the items from the dict. If your dictionaries are large, then that is quite a lot of overhead (two large lists that will be thrown away as soon as the merged dict is created). update() can work more efficiently, because it can run through the second dict item-by-item.

In terms of time:

>>> timeit.Timer("dict(x, **y)", "x = dict(zip(range(1000), range(1000)))\ny=dict(zip(range(1000,2000), range(1000,2000)))").timeit(100000)
15.52571702003479
>>> timeit.Timer("temp = x.copy()\ntemp.update(y)", "x = dict(zip(range(1000), range(1000)))\ny=dict(zip(range(1000,2000), range(1000,2000)))").timeit(100000)
15.694622993469238
>>> timeit.Timer("dict(x.items() + y.items())", "x = dict(zip(range(1000), range(1000)))\ny=dict(zip(range(1000,2000), range(1000,2000)))").timeit(100000)
41.484580039978027

IMO the tiny slowdown between the first two is worth it for the readability. In addition, keyword arguments for dictionary creation were only added in Python 2.3, whereas copy() and update() will work in older versions.

Steffi answered 8/9, 2008 at 11:16 Comment(0)
T
195

In a follow-up answer, you asked about the relative performance of these two alternatives:

z1 = dict(x.items() + y.items())
z2 = dict(x, **y)

On my machine, at least (a fairly ordinary x86_64 running Python 2.5.2), alternative z2 is not only shorter and simpler but also significantly faster. You can verify this for yourself using the timeit module that comes with Python.

Example 1: identical dictionaries mapping 20 consecutive integers to themselves:

% python -m timeit -s 'x=y=dict((i,i) for i in range(20))' 'z1=dict(x.items() + y.items())'
100000 loops, best of 3: 5.67 usec per loop
% python -m timeit -s 'x=y=dict((i,i) for i in range(20))' 'z2=dict(x, **y)' 
100000 loops, best of 3: 1.53 usec per loop

z2 wins by a factor of 3.5 or so. Different dictionaries seem to yield quite different results, but z2 always seems to come out ahead. (If you get inconsistent results for the same test, try passing in -r with a number larger than the default 3.)

Example 2: non-overlapping dictionaries mapping 252 short strings to integers and vice versa:

% python -m timeit -s 'from htmlentitydefs import codepoint2name as x, name2codepoint as y' 'z1=dict(x.items() + y.items())'
1000 loops, best of 3: 260 usec per loop
% python -m timeit -s 'from htmlentitydefs import codepoint2name as x, name2codepoint as y' 'z2=dict(x, **y)'               
10000 loops, best of 3: 26.9 usec per loop

z2 wins by about a factor of 10. That's a pretty big win in my book!

After comparing those two, I wondered if z1's poor performance could be attributed to the overhead of constructing the two item lists, which in turn led me to wonder if this variation might work better:

from itertools import chain
z3 = dict(chain(x.iteritems(), y.iteritems()))

A few quick tests, e.g.

% python -m timeit -s 'from itertools import chain; from htmlentitydefs import codepoint2name as x, name2codepoint as y' 'z3=dict(chain(x.iteritems(), y.iteritems()))'
10000 loops, best of 3: 66 usec per loop

lead me to conclude that z3 is somewhat faster than z1, but not nearly as fast as z2. Definitely not worth all the extra typing.

This discussion is still missing something important, which is a performance comparison of these alternatives with the "obvious" way of merging two dictionaries: using the update method. To try to keep things on an equal footing with the expressions, none of which modify x or y, I'm going to make a copy of x instead of modifying it in-place, as follows:

z0 = dict(x)
z0.update(y)

A typical result:

% python -m timeit -s 'from htmlentitydefs import codepoint2name as x, name2codepoint as y' 'z0=dict(x); z0.update(y)'
10000 loops, best of 3: 26.9 usec per loop

In other words, z0 and z2 seem to have essentially identical performance. Do you think this might be a coincidence? I don't....

In fact, I'd go so far as to claim that it's impossible for pure Python code to do any better than this. And if you can do significantly better in a C extension module, I imagine the Python folks might well be interested in incorporating your code (or a variation on your approach) into the Python core. Python uses dict in lots of places; optimizing its operations is a big deal.

You could also write this as

z0 = x.copy()
z0.update(y)

as Tony does, but (not surprisingly) the difference in notation turns out not to have any measurable effect on performance. Use whichever looks right to you. Of course, he's absolutely correct to point out that the two-statement version is much easier to understand.

Trapp answered 23/10, 2008 at 2:38 Comment(1)
This does not work in Python 3; items() is not catenable, and iteritems does not exist.Viviparous
T
192

In Python 3.0 and later, you can use collections.ChainMap which groups multiple dicts or other mappings together to create a single, updateable view:

>>> from collections import ChainMap
>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> z = dict(ChainMap({}, y, x))
>>> for k, v in z.items():
        print(k, '-->', v)
    
a --> 1
b --> 10
c --> 11
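
Note that without the dict() call, the ChainMap itself is a live view over the underlying mappings (the empty dict in front catches any writes), so later changes to x or y show through - a minimal sketch, with view as an illustrative name:

>>> view = ChainMap({}, y, x)
>>> view['b']
10
>>> y['b'] = 99          # changes to the underlying dicts are visible in the view
>>> view['b']
99
>>> view['d'] = 12       # writes go into the empty first mapping, not into y or x
>>> y
{'b': 99, 'c': 11}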

Update for Python 3.5 and later: You can use PEP 448 extended dictionary packing and unpacking. This is fast and easy:

>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> {**x, **y}
{'a': 1, 'b': 10, 'c': 11}

Update for Python 3.9 and later: You can use the PEP 584 union operator:

>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> x | y
{'a': 1, 'b': 10, 'c': 11}
Turnbull answered 28/4, 2013 at 3:15 Comment(5)
But one should be cautious while using ChainMap: there's a catch that if you have duplicate keys, the values from the first mapping get used, and calling del on, say, a ChainMap c will delete the first mapping of that key.Unseam
@Prerit What else would you expect it to do? That's the normal way chained namespaces work. Consider how $PATH works in bash. Deleting an executable on the path doesn't preclude another executable with the same name further upstream.Turnbull
@Raymond Hettinger I agree, just added a caution. Most people may not know about it. :DUnseam
@Prerit You could cast to dict to avoid that, i.e.: dict(ChainMap({}, y, x))Hecatomb
@RaymondHettinger We all know you designed ChainMap in the most reasonable way, thank you for it!Bainite
Q
156

I wanted something similar, but with the ability to specify how the values on duplicate keys were merged, so I hacked this out (but did not heavily test it). Obviously this is not a single expression, but it is a single function call.

def merge(d1, d2, merge_fn=lambda x,y:y):
    """
    Merges two dictionaries, non-destructively, combining 
    values on duplicate keys as defined by the optional merge
    function.  The default behavior replaces the values in d1
    with corresponding values in d2.  (There is no other generally
    applicable merge strategy, but often you'll have homogeneous 
    types in your dicts, so specifying a merge technique can be 
    valuable.)

    Examples:

    >>> d1
    {'a': 1, 'c': 3, 'b': 2}
    >>> merge(d1, d1)
    {'a': 1, 'c': 3, 'b': 2}
    >>> merge(d1, d1, lambda x,y: x+y)
    {'a': 2, 'c': 6, 'b': 4}

    """
    result = dict(d1)
    for k, v in d2.iteritems():  # Python 2; use d2.items() in Python 3
        if k in result:
            result[k] = merge_fn(result[k], v)
        else:
            result[k] = v
    return result
Quiff answered 4/9, 2008 at 19:8 Comment(1)
Handy solution when the default behaviour of the shorter and simpler solutions (replacement of values of common keys by the second dictionary) is not wished. For Python 3, iteritems() is not available anymore in dicts, and one can simply use items() instead.Masquer
M
128

Recursively/deep update a dict

def deepupdate(original, update):
    """
    Recursively update a dict.
    Sub-dicts won't be overwritten but updated as well.
    """
    for key, value in original.iteritems():  # Python 2; use .items() in Python 3
        if key not in update:
            update[key] = value
        elif isinstance(value, dict):
            deepupdate(value, update[key]) 
    return update

Demonstration:

pluto_original = {
    'name': 'Pluto',
    'details': {
        'tail': True,
        'color': 'orange'
    }
}

pluto_update = {
    'name': 'Pluutoo',
    'details': {
        'color': 'blue'
    }
}

print deepupdate(pluto_original, pluto_update)

Outputs:

{
    'name': 'Pluutoo',
    'details': {
        'color': 'blue',
        'tail': True
    }
}

Thanks rednaw for edits.

Messner answered 29/11, 2011 at 11:52 Comment(3)
This does not answer the question. The question clearly asks for a new dictionary, z, from original dictionaries, x and y, with values from y replacing those of x - not an updated dictionary. This answer modifies y in-place by adding values from x. Worse, it does not copy these values, so one could further modify the modified dictionary, y, and modifications could be reflected in dictionary x. @Jérôme I hope this code is not causing any bugs for your application - at least consider using deepcopy to copy the values.Aristaeus
@AaronHall agreed this does not answer the question. But it answers my need. I understand those limitations, but that's not an issue in my case. Thinking of it, maybe the name is misleading, as it might evoke a deepcopy, which it does not provide. But it addresses deep nesting. Here's another implementation from the Martellibot: #3233443.Didynamous
In Python 3.8.2 it gives me AttributeError: 'dict' object has no attribute 'iteritems', so I have changed iteritems to items.Gesundheit
K
113

I benchmarked the suggested solutions with perfplot and found that

x | y   # Python 3.9+

is the fastest solution together with the good old

{**x, **y}

and

temp = x.copy()
temp.update(y)

[perfplot benchmark plot: runtime of each merge approach versus dictionary size; see the code below to reproduce it]


Code to reproduce the plot:

from collections import ChainMap
from itertools import chain
import perfplot


def setup(n):
    x = dict(zip(range(n), range(n)))
    y = dict(zip(range(n, 2 * n), range(n, 2 * n)))
    return x, y


def copy_update(x, y):
    temp = x.copy()
    temp.update(y)
    return temp


def add_items(x, y):
    return dict(list(x.items()) + list(y.items()))


def curly_star(x, y):
    return {**x, **y}


def chain_map(x, y):
    return dict(ChainMap({}, y, x))


def itertools_chain(x, y):
    return dict(chain(x.items(), y.items()))


def python39_concat(x, y):
    return x | y


b = perfplot.bench(
    setup=setup,
    kernels=[
        copy_update,
        add_items,
        curly_star,
        chain_map,
        itertools_chain,
        python39_concat,
    ],
    labels=[
        "copy_update",
        "dict(list(x.items()) + list(y.items()))",
        "{**x, **y}",
        "chain_map",
        "itertools.chain",
        "x | y",
    ],
    n_range=[2 ** k for k in range(18)],
    xlabel="len(x), len(y)",
    equality_check=None,
)
b.save("out.png")
b.show()
Kreit answered 9/7, 2020 at 17:35 Comment(2)
Can you add the test conditions? Exact Python version. Python implementation (e.g. CPython). Hardware (CPU type and model number, L2 cache size, clock speed, dynamic clock speed scaling, number of cores, etc.). Operating system (type, edition, and version). Whatever else may be relevant (e.g., warm/cold). (But without "Edit:", "Update:", or similar - the answer should appear as if it was written today.)Duhamel
would be good to have examples with more than two dicts, e.g. x_1 | x_2 | ... | x_nMantoman
A
104

Python 3.5 (PEP 448) allows a nicer syntax option:

x = {'a': 1, 'b': 1}
y = {'a': 2, 'c': 2}
final = {**x, **y} 
final
# {'a': 2, 'b': 1, 'c': 2}

Or even

final = {'a': 1, 'b': 1, **x, **y}

In Python 3.9 you can also use | and |=, with the example below from PEP 584:

d = {'spam': 1, 'eggs': 2, 'cheese': 3}
e = {'cheese': 'cheddar', 'aardvark': 'Ethel'}
d | e
# {'spam': 1, 'eggs': 2, 'cheese': 'cheddar', 'aardvark': 'Ethel'}
Antiperistalsis answered 26/2, 2015 at 21:27 Comment(2)
In what way is this solution better than the dict(x, **y)-solution? As you (@CarlMeyer) mentioned within the note of your own answer (https://mcmap.net/q/36055/-how-do-i-merge-two-dictionaries-in-a-single-expression-in-python) Guido considers that solution illegal.Internationale
Guido dislikes dict(x, **y) for the (very good) reason that it relies on y only having keys which are valid keyword argument names (unless you are using CPython 2.7, where the dict constructor cheats). This objection/restriction does not apply to PEP 448, which generalizes the ** unpacking syntax to dict literals. So this solution has the same concision as dict(x, **y), without the downside.Eagleeyed
C
100
x = {'a':1, 'b': 2}
y = {'b':10, 'c': 11}
z = dict(x.items() + y.items())
print z

For items with keys in both dictionaries ('b'), you can control which one ends up in the output by putting that one last.

Coral answered 2/9, 2008 at 7:49 Comment(2)
In python 3 you would get TypeError: unsupported operand type(s) for +: 'dict_items' and 'dict_items' ... you should encapsulate each dict with list() like: dict(list(x.items()) + list(y.items()))Shool
@Shool itertools.chain(x.items(), y.items()) could also be used.Foretopsail
S
96

The best version I could think of while not using copy would be:

from itertools import chain
x = {'a':1, 'b': 2}
y = {'b':10, 'c': 11}
dict(chain(x.iteritems(), y.iteritems()))

It's faster than dict(x.items() + y.items()) but not as fast as n = copy(a); n.update(b), at least on CPython. This version also works in Python 3 if you change iteritems() to items(), which is automatically done by the 2to3 tool.

Personally I like this version best because it describes fairly well what I want in a single functional syntax. The only minor problem is that it doesn't make it completely obvious that values from y take precedence over values from x, but I don't believe that's difficult to figure out.

Shanghai answered 14/10, 2010 at 18:55 Comment(0)
S
75

While the question has already been answered several times, this simple solution to the problem has not been listed yet.

x = {'a':1, 'b': 2}
y = {'b':10, 'c': 11}
z4 = {}
z4.update(x)
z4.update(y)

It is as fast as z0 and the evil z2 mentioned above, but easy to understand and change.

Sublease answered 14/10, 2011 at 16:12 Comment(7)
but it's three statements rather than one expressionPotentilla
Yes! The mentioned one-expression solutions are either slow or evil. Good code is readable and maintainable. So the problem is the question, not the answer. We should ask for the best solution to a problem, not for a one-line solution.Sublease
Lose the z4 = {} and change the next line to z4 = x.copy() -- better than just good code doesn't do unnecessary things (which makes it even more readable and maintainable).Heist
Your suggestion would change this to Matthew's answer. While his answer is fine, I think mine is more readable and more maintainable. The extra line would only be bad if it cost execution time.Sublease
I suggest you put this into a functionPaquito
This solution works but will it work if let's say one dict has a bunch of nested lists and the other is a simple dict with replacement values?Grishilda
@MurtazaMohsin for sure it "works", but will it do what you want? Think about which merge-strategy you need in your code! My answer just tries to offer a simple solution to the OPs question. For recursive updates you should have a look at Deep merge dictionaries of dictionaries in Python.Sublease
E
68
def dict_merge(a, b):
  c = a.copy()
  c.update(b)
  return c

new = dict_merge(old, extras)

Among such shady and dubious answers, this shining example is the one and only good way to merge dicts in Python, endorsed by dictator for life Guido van Rossum himself! Someone else suggested half of this, but did not put it in a function.

print dict_merge(
      {'color':'red', 'model':'Mini'},
      {'model':'Ferrari', 'owner':'Carl'})

gives:

{'color': 'red', 'owner': 'Carl', 'model': 'Ferrari'}
Epiphora answered 6/8, 2012 at 9:24 Comment(0)
M
63

Be Pythonic. Use a comprehension:

z={k: v for d in [x,y] for k, v in d.items()}

>>> print z
{'a': 1, 'c': 11, 'b': 10}
Maladroit answered 20/1, 2016 at 11:46 Comment(0)
V
60

If you think lambdas are evil then read no further. As requested, you can write the fast and memory-efficient solution with one expression:

x = {'a':1, 'b':2}
y = {'b':10, 'c':11}
z = (lambda a, b: (lambda a_copy: a_copy.update(b) or a_copy)(a.copy()))(x, y)
print z
{'a': 1, 'c': 11, 'b': 10}
print x
{'a': 1, 'b': 2}

As suggested above, using two lines or writing a function is probably a better way to go.

Vote answered 23/11, 2011 at 18:8 Comment(0)
C
47

In python3, the items method no longer returns a list, but rather a view, which acts like a set. In this case you'll need to take the set union since concatenating with + won't work:

dict(x.items() | y.items())

For python3-like behavior in version 2.7, the viewitems method should work in place of items:

dict(x.viewitems() | y.viewitems())

I prefer this notation anyway since it seems more natural to think of it as a set union operation rather than concatenation (as the title shows).

Edit:

A couple more points for python 3. First, note that the dict(x, **y) trick won't work in python 3 unless the keys in y are strings.

Also, Raymond Hettinger's Chainmap answer is pretty elegant, since it can take an arbitrary number of dicts as arguments, but from the docs it looks like it sequentially looks through a list of all the dicts for each lookup:

Lookups search the underlying mappings successively until a key is found.

This can slow you down if you have a lot of lookups in your application:

In [1]: from collections import ChainMap
In [2]: from string import ascii_uppercase as up, ascii_lowercase as lo; x = dict(zip(lo, up)); y = dict(zip(up, lo))
In [3]: chainmap_dict = ChainMap(y, x)
In [4]: union_dict = dict(x.items() | y.items())
In [5]: timeit for k in union_dict: union_dict[k]
100000 loops, best of 3: 2.15 µs per loop
In [6]: timeit for k in chainmap_dict: chainmap_dict[k]
10000 loops, best of 3: 27.1 µs per loop

So about an order of magnitude slower for lookups. I'm a fan of ChainMap, but it looks less practical where there may be many lookups.

Colloquial answered 9/10, 2013 at 18:9 Comment(0)
T
38

Two dictionaries

def union2(dict1, dict2):
    return dict(list(dict1.items()) + list(dict2.items()))

n dictionaries

def union(*dicts):
    return dict(itertools.chain.from_iterable(dct.items() for dct in dicts))

sum has bad performance. See https://mathieularose.com/how-not-to-flatten-a-list-of-lists-in-python/

Trammell answered 17/10, 2012 at 2:9 Comment(0)
R
36

Simple solution using itertools that preserves order (latter dicts have precedence)

# py2
from itertools import chain, imap
merge = lambda *args: dict(chain.from_iterable(imap(dict.iteritems, args)))

# py3
from itertools import chain
merge = lambda *args: dict(chain.from_iterable(map(dict.items, args)))

And its usage:

>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> merge(x, y)
{'a': 1, 'b': 10, 'c': 11}

>>> z = {'c': 3, 'd': 4}
>>> merge(x, y, z)
{'a': 1, 'b': 10, 'c': 3, 'd': 4}
Rheostat answered 4/8, 2015 at 14:54 Comment(0)
P
32

Abuse leading to a one-expression solution for Matthew's answer:

>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> z = (lambda f=x.copy(): (f.update(y), f)[1])()
>>> z
{'a': 1, 'c': 11, 'b': 10}

You said you wanted one expression, so I abused lambda to bind a name, and tuples to override lambda's one-expression limit. Feel free to cringe.

You could also do this of course if you don't care about copying it:

>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> z = (x.update(y), x)[1]
>>> z
{'a': 1, 'b': 10, 'c': 11}
Papuan answered 7/8, 2013 at 21:23 Comment(0)
H
26

If you don't mind mutating x,

x.update(y) or x

Simple, readable, performant. You know update() always returns None, which is a false value. So the above expression will always evaluate to x, after updating it.

Most mutating methods in the standard library (like .update()) return None by convention, so this kind of pattern will work on those too. However, if you're using a dict subclass or some other method that doesn't follow this convention, then or may return its left operand, which may not be what you want. Instead, you can use a tuple display and index, which works regardless of what the first element evaluates to (although it's not quite as pretty):

(x.update(y), x)[-1]

If you don't have x in a variable yet, you can use lambda to make a local without using an assignment statement. This amounts to using lambda as a let expression, which is a common technique in functional languages, but is maybe unpythonic.

(lambda x: x.update(y) or x)({'a': 1, 'b': 2})

Although it's not that different from the following use of the new walrus operator (Python 3.8+ only),

(x := {'a': 1, 'b': 2}).update(y) or x

especially if you use a default argument:

(lambda x={'a': 1, 'b': 2}: x.update(y) or x)()

If you do want a copy, PEP 584 style x | y is the most Pythonic on 3.9+. If you must support older versions, PEP 448 style {**x, **y} is easiest for 3.5+. But if that's not available in your (even older) Python version, the let expression pattern works here too.

(lambda z=x.copy(): z.update(y) or z)()

(That is, of course, nearly equivalent to (z := x.copy()).update(y) or z, but if your Python version is new enough for that, then the PEP 448 style will be available.)

Herder answered 22/9, 2017 at 2:57 Comment(0)
R
23

New in Python 3.9: Use the union operator (|) to merge dicts similar to sets:

>>> d = {'a': 1, 'b': 2}
>>> e = {'a': 9, 'c': 3}
>>> d | e
{'a': 9, 'b': 2, 'c': 3}

For matching keys, the right dict takes precedence.

This also works for |= to modify a dict in-place:

>>> e |= d    # e = e | d
>>> e
{'a': 1, 'c': 3, 'b': 2}
Redwing answered 1/6, 2020 at 21:23 Comment(1)
What does this add that wasn't mentioned already months earlier? https://mcmap.net/q/36055/-how-do-i-merge-two-dictionaries-in-a-single-expression-in-pythonGalluses
T
20

Drawing on ideas here and elsewhere I've comprehended a function:

def merge(*dicts, **kv): 
      return { k:v for d in list(dicts) + [kv] for k,v in d.items() }

Usage (tested in python 3):

assert (merge({1:11,'a':'aaa'},{1:99, 'b':'bbb'},foo='bar')==\
    {1: 99, 'foo': 'bar', 'b': 'bbb', 'a': 'aaa'})

assert (merge(foo='bar')=={'foo': 'bar'})

assert (merge({1:11},{1:99},foo='bar',baz='quux')==\
    {1: 99, 'foo': 'bar', 'baz':'quux'})

assert (merge({1:11},{1:99})=={1: 99})

You could use a lambda instead.

Turd answered 19/7, 2013 at 5:49 Comment(0)
P
19

(For Python 2.7* only; there are simpler solutions for Python 3*.)

If you're not averse to importing a standard library module, you can do

from functools import reduce

def merge_dicts(*dicts):
    return reduce(lambda a, d: a.update(d) or a, dicts, {})

(The or a bit in the lambda is necessary because dict.update always returns None on success.)
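
Usage would then look like this (a quick sketch with throwaway dictionaries; the empty-dict initializer also makes the zero-argument call safe and keeps the inputs unmodified):

x = {'a': 1, 'b': 2}
y = {'b': 10, 'c': 11}

assert merge_dicts(x, y) == {'a': 1, 'b': 10, 'c': 11}
assert merge_dicts() == {}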

Pickings answered 28/3, 2016 at 13:13 Comment(0)
B
18

It's so silly that .update returns nothing.
I just use a simple helper function to solve the problem:

def merge(dict1,*dicts):
    for dict2 in dicts:
        dict1.update(dict2)
    return dict1

Examples:

merge(dict1,dict2)
merge(dict1,dict2,dict3)
merge(dict1,dict2,dict3,dict4)
merge({},dict1,dict2)  # this one returns a new copy
Bax answered 2/3, 2014 at 1:44 Comment(0)
M
18

There is a new option as of Python 3.8 (released 14 October 2019), thanks to PEP 572: Assignment Expressions. The new assignment expression operator := allows you to assign the result of the copy and still use it to call update, leaving the combined code a single expression, rather than two statements, changing:

newdict = dict1.copy()
newdict.update(dict2)

to:

(newdict := dict1.copy()).update(dict2)

while behaving identically in every way. If you must also return the resulting dict (you asked for an expression returning the dict; the above creates and assigns to newdict, but doesn't return it, so you couldn't use it to pass an argument to a function as is, a la myfunc((newdict := dict1.copy()).update(dict2))), then just add or newdict to the end (since update returns None, which is falsy, it will then evaluate and return newdict as the result of the expression):

(newdict := dict1.copy()).update(dict2) or newdict

Important caveat: In general, I'd discourage this approach in favor of:

newdict = {**dict1, **dict2}

The unpacking approach is clearer (to anyone who knows about generalized unpacking in the first place, which you should), doesn't require a name for the result at all (so it's much more concise when constructing a temporary that is immediately passed to a function or included in a list/tuple literal or the like), and is almost certainly faster as well, being (on CPython) roughly equivalent to:

newdict = {}
newdict.update(dict1)
newdict.update(dict2)

but done at the C layer, using the concrete dict API, so no dynamic method lookup/binding or function call dispatch overhead is involved (whereas (newdict := dict1.copy()).update(dict2) is unavoidably identical to the original two-liner in behavior, performing the work in discrete steps, with dynamic lookup/binding/invocation of methods).

It's also more extensible, as merging three dicts is obvious:

newdict = {**dict1, **dict2, **dict3}

where using assignment expressions won't scale like that; the closest you could get would be:

(newdict := dict1.copy()).update(dict2), newdict.update(dict3)

or without the temporary tuple of Nones, but with truthiness testing of each None result:

(newdict := dict1.copy()).update(dict2) or newdict.update(dict3)

either of which is obviously much uglier, and includes further inefficiencies (either a wasted temporary tuple of Nones for comma separation, or pointless truthiness testing of each update's None return for or separation).

The only real advantage to the assignment expression approach occurs if:

  1. You have generic code that needs to handle both sets and dicts (both of them support copy and update, so the code works roughly as you'd expect it to)
  2. You expect to receive arbitrary dict-like objects, not just dict itself, and must preserve the type and semantics of the left hand side (rather than ending up with a plain dict). While myspecialdict({**speciala, **specialb}) might work, it would involve an extra temporary dict, and if myspecialdict has features plain dict can't preserve (e.g. regular dicts now preserve order based on the first appearance of a key, and value based on the last appearance of a key; you might want one that preserves order based on the last appearance of a key so updating a value also moves it to the end), then the semantics would be wrong. Since the assignment expression version uses the named methods (which are presumably overloaded to behave appropriately), it never creates a dict at all (unless dict1 was already a dict), preserving the original type (and original type's semantics), all while avoiding any temporaries.
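
As a concrete sketch of point 2 (assuming Python 3.8+, and using collections.OrderedDict merely as a stand-in for a dict-like type), the assignment-expression form preserves the subclass, while unpacking always builds a plain dict:

from collections import OrderedDict

d1 = OrderedDict(a=1, b=2)
d2 = OrderedDict(b=10, c=11)

merged = (newdict := d1.copy()).update(d2) or newdict
assert type(merged) is OrderedDict            # subclass (and its semantics) preserved
assert type({**d1, **d2}) is dict             # unpacking gives a plain dict
assert merged == {'a': 1, 'b': 10, 'c': 11}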
Mcclary answered 28/2, 2019 at 17:16 Comment(0)
O
17

The problem I have with solutions listed to date is that, in the merged dictionary, the value for key "b" is 10 but, to my way of thinking, it should be 12. In that light, I present the following:

import timeit

n=100000
su = """
x = {'a':1, 'b': 2}
y = {'b':10, 'c': 11}
"""

def timeMerge(f,su,niter):
    print "{:4f} sec for: {:30s}".format(timeit.Timer(f,setup=su).timeit(n),f)

timeMerge("dict(x, **y)",su,n)
timeMerge("x.update(y)",su,n)
timeMerge("dict(x.items() + y.items())",su,n)
timeMerge("for k in y.keys(): x[k] = k in x and x[k]+y[k] or y[k] ",su,n)

#confirm for loop adds b entries together
x = {'a':1, 'b': 2}
y = {'b':10, 'c': 11}
for k in y.keys(): x[k] = k in x and x[k]+y[k] or y[k]
print "confirm b elements are added:",x

Results:

0.049465 sec for: dict(x, **y)
0.033729 sec for: x.update(y)                   
0.150380 sec for: dict(x.items() + y.items())   
0.083120 sec for: for k in y.keys(): x[k] = k in x and x[k]+y[k] or y[k]

confirm b elements are added: {'a': 1, 'c': 11, 'b': 12}
Oshea answered 3/12, 2013 at 18:11 Comment(1)
You might be interested in cytoolz.merge_with (toolz.readthedocs.io/en/latest/…)Overthrust
L
17
from collections import Counter
dict1 = {'a':1, 'b': 2}
dict2 = {'b':10, 'c': 11}
result = dict(Counter(dict1) + Counter(dict2))

This should solve your problem.

Lactate answered 30/11, 2015 at 13:4 Comment(1)
I would recommend using the Counter's .update() instead of +. This is because, if the sum results in a value of 0 for any of the keys, Counter will delete it.Vulnerary
J
14

This can be done with a single dict comprehension:

>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> { key: y[key] if key in y else x[key]
      for key in set(x) | set(y)
    }

In my view the best answer for the 'single expression' part as no extra functions are needed, and it is short.

Juxtapose answered 17/7, 2015 at 14:47 Comment(2)
I suspect performance will not be very good though; creating a set out of each dict then only iterating through the keys means another lookup for the value each time (though relatively fast, still increases the order of the function for scaling)Longsuffering
it all depends on the version of the python we are using. In 3.5 and above {**x,**y} gives the concatenated dictionaryBitthia
W
12
>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> x, z = dict(x), x.update(y) or x
>>> x
{'a': 1, 'b': 2}
>>> y
{'c': 11, 'b': 10}
>>> z
{'a': 1, 'c': 11, 'b': 10}
Wrathful answered 13/11, 2013 at 10:1 Comment(1)
This method overwrites x with its copy. If x is a function argument this won't work (see example)Venenose
H
11

Python 3.9+ only

Merge (|) and update (|=) operators have been added to the built-in dict class.

>>> d = {'spam': 1, 'eggs': 2, 'cheese': 3}
>>> e = {'cheese': 'cheddar', 'aardvark': 'Ethel'}
>>> d | e
{'spam': 1, 'eggs': 2, 'cheese': 'cheddar', 'aardvark': 'Ethel'}

The augmented assignment version operates in-place:

>>> d |= e
>>> d
{'spam': 1, 'eggs': 2, 'cheese': 'cheddar', 'aardvark': 'Ethel'}

See PEP 584

Heliometer answered 9/4, 2020 at 8:20 Comment(0)
P
11

In Python 3.9

Based on PEP 584, the new version of Python introduces two new operators for dictionaries: union (|) and in-place union (|=). You can use | to merge two dictionaries, while |= will update a dictionary in place:

>>> pycon = {2016: "Portland", 2018: "Cleveland"}
>>> europython = {2017: "Rimini", 2018: "Edinburgh", 2019: "Basel"}

>>> pycon | europython
{2016: 'Portland', 2018: 'Edinburgh', 2017: 'Rimini', 2019: 'Basel'}

>>> pycon |= europython
>>> pycon
{2016: 'Portland', 2018: 'Edinburgh', 2017: 'Rimini', 2019: 'Basel'}

If d1 and d2 are two dictionaries, then d1 | d2 does the same as {**d1, **d2}. The | operator is used for calculating the union of sets, so the notation may already be familiar to you.

One advantage of using | is that it works on different dictionary-like types and keeps the type through the merge:

>>> from collections import defaultdict
>>> europe = defaultdict(lambda: "", {"Norway": "Oslo", "Spain": "Madrid"})
>>> africa = defaultdict(lambda: "", {"Egypt": "Cairo", "Zimbabwe": "Harare"})

>>> europe | africa
defaultdict(<function <lambda> at 0x7f0cb42a6700>,
  {'Norway': 'Oslo', 'Spain': 'Madrid', 'Egypt': 'Cairo', 'Zimbabwe': 'Harare'})

>>> {**europe, **africa}
{'Norway': 'Oslo', 'Spain': 'Madrid', 'Egypt': 'Cairo', 'Zimbabwe': 'Harare'}

You can use a defaultdict when you want to effectively handle missing keys. Note that | preserves the defaultdict, while {**europe, **africa} does not.
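
The practical difference shows up when you look up a missing key (a minimal sketch; 'Japan' is just an arbitrary key present in neither dict):

>>> (europe | africa)["Japan"]     # still a defaultdict, so the factory supplies ""
''
>>> {**europe, **africa}["Japan"]  # plain dict, so a missing key raises
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'Japan'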

There are some similarities between how | works for dictionaries and how + works for lists. In fact, the + operator was originally proposed to merge dictionaries as well. This correspondence becomes even more evident when you look at the in-place operator.

The basic use of |= is to update a dictionary in place, similar to .update():

>>> libraries = {
...     "collections": "Container datatypes",
...     "math": "Mathematical functions",
... }
>>> libraries |= {"zoneinfo": "IANA time zone support"}
>>> libraries
{'collections': 'Container datatypes', 'math': 'Mathematical functions',
 'zoneinfo': 'IANA time zone support'}

When you merge dictionaries with |, both dictionaries need to be of a proper dictionary type. On the other hand, the in-place operator (|=) is happy to work with any dictionary-like data structure:

>>> libraries |= [("graphlib", "Functionality for graph-like structures")]
>>> libraries
{'collections': 'Container datatypes', 'math': 'Mathematical functions',
 'zoneinfo': 'IANA time zone support',
 'graphlib': 'Functionality for graph-like structures'}
Polycrates answered 6/10, 2020 at 15:23 Comment(1)
Other answers already covered theseGalluses
D
9

I know this does not really fit the specifics of the question ("one liner"), but since none of the answers above went in this direction, while lots and lots of answers addressed the performance issue, I felt I should contribute my thoughts.

Depending on the use case, it might not be necessary to create a "real" merged dictionary from the given input dictionaries. A view might be sufficient in many cases, i.e. an object which acts like the merged dictionary would, without computing it completely. A lazy version of the merged dictionary, so to speak.

In Python, this is rather simple and can be done with the code shown at the end of my post. This given, the answer to the original question would be:

z = MergeDict(x, y)

When using this new object, it will behave like a merged dictionary but it will have constant creation time and constant memory footprint while leaving the original dictionaries untouched. Creating it is way cheaper than in the other solutions proposed.

Of course, if you use the result a lot, then you will at some point reach the limit where creating a real merged dictionary would have been the faster solution. As I said, it depends on your use case.

If you ever felt you would prefer to have a real merged dict, then calling dict(z) would produce it (but way more costly than the other solutions of course, so this is just worth mentioning).

You can also use this class to make a kind of copy-on-write dictionary:

a = { 'x': 3, 'y': 4 }
b = MergeDict(a)  # we merge just one dict
b['x'] = 5
print b  # will print {'x': 5, 'y': 4}
print a  # will print {'y': 4, 'x': 3}

Here's the straight-forward code of MergeDict:

class MergeDict(object):
  def __init__(self, *originals):
    self.originals = ({},) + originals[::-1]  # reversed

  def __getitem__(self, key):
    for original in self.originals:
      try:
        return original[key]
      except KeyError:
        pass
    raise KeyError(key)

  def __setitem__(self, key, value):
    self.originals[0][key] = value

  def __iter__(self):
    return iter(self.keys())

  def __repr__(self):
    return '%s(%s)' % (
      self.__class__.__name__,
      ', '.join(repr(original)
          for original in reversed(self.originals)))

  def __str__(self):
    return '{%s}' % ', '.join(
        '%r: %r' % i for i in self.iteritems())

  def iteritems(self):
    found = set()
    for original in self.originals:
      for k, v in original.iteritems():
        if k not in found:
          yield k, v
          found.add(k)

  def items(self):
    return list(self.iteritems())

  def keys(self):
    return list(k for k, _ in self.iteritems())

  def values(self):
    return list(v for _, v in self.iteritems())
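
A quick usage sketch with the class above (Python 2, like the class itself): later arguments take precedence, and dict() materializes the view only when you actually need a real dictionary.

x = {'a': 1, 'b': 2}
y = {'b': 10, 'c': 11}

z = MergeDict(x, y)
assert z['b'] == 10                              # later arguments win
assert dict(z) == {'a': 1, 'b': 10, 'c': 11}     # materialize on demand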
Dilks answered 18/5, 2016 at 15:57 Comment(2)
I saw by now that some answers refer to a class called ChainMap which is available in Python 3 only and which does more or less what my code does. So shame on me for not reading everything carefully enough. But given that this only exists for Python 3, please take my answer as a contribution for the Python 2 users ;-)Dilks
ChainMap was backported for earlier Pythons: pypi.python.org/pypi/chainmapReamer
G
7

This is an expression for Python 3.5 or greater that merges dictionaries using reduce:

>>> from functools import reduce
>>> l = [{'a': 1}, {'b': 2}, {'a': 100, 'c': 3}]
>>> reduce(lambda x, y: {**x, **y}, l, {})
{'a': 100, 'b': 2, 'c': 3}

Note: this works even if the dictionary list is empty or contains only one element.

For a more efficient merge on Python 3.9 or greater, the lambda can be replaced directly by operator.ior:

>>> from functools import reduce
>>> from operator import ior
>>> l = [{'a': 1}, {'b': 2}, {'a': 100, 'c': 3}]
>>> reduce(ior, l, {})
{'a': 100, 'b': 2, 'c': 3}

For Python 3.8 or less, the following can be used as an alternative to ior:

>>> from functools import reduce
>>> l = [{'a': 1}, {'b': 2}, {'a': 100, 'c': 3}]
>>> reduce(lambda x, y: x.update(y) or x, l, {})
{'a': 100, 'b': 2, 'c': 3}
Galligan answered 15/4, 2018 at 23:2 Comment(2)
This is very inefficient (as bad as time proportional to the number of keys squared). Use operator.ior instead.Lennyleno
Thanks - I've updated my answer. I'm not usually too concerned with efficiency here, as I'd use something like this for merging configuration once and at initialisation (also, I currently have to maintain compatibility for Python 3.7+)Galligan
S
6

Using a dict comprehension, you can write:

x = {'a':1, 'b': 2}
y = {'b':10, 'c': 11}

dc = {xi:(x[xi] if xi not in list(y.keys()) 
           else y[xi]) for xi in list(x.keys())+(list(y.keys()))}

gives

>>> dc
{'a': 1, 'c': 11, 'b': 10}

Note the syntax for if/else in a comprehension:

{ (some_key if condition else default_key):(something_if_true if condition 
          else something_if_false) for key, value in dict_.items() }
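
Applied to this merge, a tighter sketch of the same comprehension (membership can be tested directly against the dict):

dc = {k: (y[k] if k in y else x[k]) for k in list(x) + list(y)}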
Stomatal answered 27/5, 2013 at 9:4 Comment(1)
I like the idea of using a dict comprehension, but your implementation is weak. It is insane to use ... in list(y.keys()) instead of just ... in y.Galluses
P
5

A union of the OP's two dictionaries would be something like:

{'a': 1, 'b': 2, 10, 'c': 11}

Specifically, the union of two entities (x and y) contains all the elements of x and/or y. Unfortunately, what the OP asks for is not a union, despite the title of the post.

My code below is neither elegant nor a one-liner, but I believe it is consistent with the meaning of union.

From the OP's example:

x = {'a':1, 'b': 2}
y = {'b':10, 'c': 11}

z = {}
for k, v in x.items():
    if k not in z:
        z[k] = [v]
    else:
        z[k].append(v)
for k, v in y.items():
    if k not in z:
        z[k] = [v]
    else:
        z[k].append(v)

{'a': [1], 'b': [2, 10], 'c': [11]}

Whether you want the values wrapped in lists is up to you, but the above also works if either dictionary already contains lists (or nested lists) as values.
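
For comparison, the same union can be built more compactly (a sketch using dict.setdefault):

z = {}
for d in (x, y):
    for k, v in d.items():
        z.setdefault(k, []).append(v)
# z == {'a': [1], 'b': [2, 10], 'c': [11]}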

Provencher answered 30/9, 2014 at 2:36 Comment(2)
I've edited the question to not use the word union, for clarity.Eagleeyed
Perhaps you mean {'a': 1, 'b': (2, 10), 'c': 11} …?Dilks
E
5

You can use toolz.merge([x, y]) for this.
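
For example (a minimal sketch, assuming the third-party toolz package is installed; values from later dictionaries take precedence):

>>> from toolz import merge
>>> x = {'a': 1, 'b': 2}
>>> y = {'b': 10, 'c': 11}
>>> merge(x, y)
{'a': 1, 'b': 10, 'c': 11}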

Efrem answered 18/11, 2016 at 12:53 Comment(1)
why should we use a 3rd party to perform such a trivial task when we can do it in native python?Landahl
M
5

I was curious whether I could beat the accepted answer's time with a one-line stringify approach.

I tried 5 methods, none previously mentioned, all one-liners, all producing correct answers, and I couldn't come close.

So... to save you the trouble and perhaps fulfill curiosity:

import json
import yaml
import time
from ast import literal_eval as literal

def merge_two_dicts(x, y):
    z = x.copy()   # start with x's keys and values
    z.update(y)    # modifies z with y's keys and values & returns None
    return z

x = {'a':1, 'b': 2}
y = {'b':10, 'c': 11}

start = time.time()
for i in range(10000):
    z = yaml.load((str(x)+str(y)).replace('}{',', '), Loader=yaml.SafeLoader)  # newer PyYAML requires an explicit Loader
elapsed = (time.time()-start)
print (elapsed, z, 'stringify yaml')

start = time.time()
for i in range(10000):
    z = literal((str(x)+str(y)).replace('}{',', '))
elapsed = (time.time()-start)
print (elapsed, z, 'stringify literal')

start = time.time()
for i in range(10000):
    z = eval((str(x)+str(y)).replace('}{',', '))
elapsed = (time.time()-start)
print (elapsed, z, 'stringify eval')

start = time.time()
for i in range(10000):
    z = {k:int(v) for k,v in (dict(zip(
            ((str(x)+str(y))
            .replace('}',' ')
            .replace('{',' ')
            .replace(':',' ')
            .replace(',',' ')
            .replace("'",'')
            .strip()
            .split('  '))[::2], 
            ((str(x)+str(y))
            .replace('}',' ')
            .replace('{',' ').replace(':',' ')
            .replace(',',' ')
            .replace("'",'')
            .strip()
            .split('  '))[1::2]
             ))).items()}
elapsed = (time.time()-start)
print (elapsed, z, 'stringify replace')

start = time.time()
for i in range(10000):
    z = json.loads(str((str(x)+str(y)).replace('}{',', ').replace("'",'"')))
elapsed = (time.time()-start)
print (elapsed, z, 'stringify json')

start = time.time()
for i in range(10000):
    z = merge_two_dicts(x, y)
elapsed = (time.time()-start)
print (elapsed, z, 'accepted')

results:

7.693928956985474 {'c': 11, 'b': 10, 'a': 1} stringify yaml
0.29134678840637207 {'c': 11, 'b': 10, 'a': 1} stringify literal
0.2208399772644043 {'c': 11, 'b': 10, 'a': 1} stringify eval
0.1106564998626709 {'c': 11, 'b': 10, 'a': 1} stringify replace
0.07989692687988281 {'c': 11, 'b': 10, 'a': 1} stringify json
0.005082368850708008 {'c': 11, 'b': 10, 'a': 1} accepted

What I did learn from this is that the JSON approach is the fastest of those attempted for turning a string of a dictionary back into a dictionary; it takes roughly a quarter of the time of what I had considered the normal method, ast.literal_eval. I also learned that the YAML approach should be avoided at all costs.

Yes, I understand that this is not the best or correct way to do it. I was curious whether it was faster, and it isn't; I posted the results to prove it.

Mythologize answered 22/3, 2018 at 4:8 Comment(1)
Note that the json approach is faster than ast.literal_eval, but it's also not as comprehensive. It can't handle Python literals not in the JSON spec, so no tuples, sets, frozensets, bools (it can handle JSON bools, but not the result of stringifying a Python bool directly), etc. ast.literal_eval is slower, but at least some of that is a consequence of handling more complex inputs. That said, I'm pretty sure it could be faster if they bothered to optimize it, it's just pretty rare that evaluating strings of Python literals is the chokepoint in code.Mcclary
F
4

I think my ugly one-liners are just necessary here.

z = next(z.update(y) or z for z in [x.copy()])
# or
z = (lambda z: z.update(y) or z)(x.copy())
  1. Dicts are merged.
  2. Single expression.
  3. Don't ever dare to use it.

P.S. These solutions work in both versions of Python. I know that Python 3 has this {**x, **y} thing and that it is the right thing to use (just as moving to Python 3, if you are still on Python 2, is the right thing to do).

Fizgig answered 11/5, 2018 at 10:0 Comment(0)
S
4

Another method is deep merging, making use of the | operator in Python 3.9+ for the use case where dict new is a set of default settings and dict existing is the set of settings currently in use. My goal was to merge in any settings added in new without overwriting existing settings in existing. I believe this recursive implementation lets you upgrade a dict with new values from another dict.

def merge_dict_recursive(new: dict, existing: dict):
    merged = new | existing  # values from existing win at the top level

    for k, v in merged.items():
        if isinstance(v, dict):
            if k not in existing:
                # The key is not in existing dict at all, so add entire value
                existing[k] = new[k]

            if k not in new:
                # The key (and its nested dict) exists only in existing; nothing to merge in
                continue

            merged[k] = merge_dict_recursive(new[k], existing[k])
    return merged

Example test data:

new
{'dashboard': True,
 'depth': {'a': 1, 'b': 22222, 'c': {'d': {'e': 69}}},
 'intro': 'this is the dashboard',
 'newkey': False,
 'show_closed_sessions': False,
 'version': None,
 'visible_sessions_limit': 9999}
existing
{'dashboard': True,
 'depth': {'a': 5},
 'intro': 'this is the dashboard',
 'newkey': True,
 'show_closed_sessions': False,
 'version': '2021-08-22 12:00:30.531038+00:00'}
merged
{'dashboard': True,
 'depth': {'a': 5, 'b': 22222, 'c': {'d': {'e': 69}}},
 'intro': 'this is the dashboard',
 'newkey': True,
 'show_closed_sessions': False,
 'version': '2021-08-22 12:00:30.531038+00:00',
 'visible_sessions_limit': 9999}
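
For completeness, the merged result shown above comes from a call like:

merged = merge_dict_recursive(new, existing)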
Santamaria answered 22/8, 2021 at 16:7 Comment(1)
This is a perfectly valid answer for the question at hand; in my opinion, very good! :)Procter
P
2

Deep merge of dicts:

from typing import Dict
from copy import deepcopy

def merge_dicts(*from_dicts: Dict, no_copy: bool = False) -> Dict:
    """ Deep merge of any number of dicts, without recursion.

    By default creates a fresh Dict and merges everything into it.

    no_copy=True will merge all dicts into the first one in the list, without copying.
    Why? Sometimes I need to combine one dictionary from "layers".
    The "layers" are not used afterwards and are dropped immediately after merging.
    """

    if no_copy:
        xerox = lambda x:x
    else:
        xerox = deepcopy

    result = xerox(from_dicts[0])

    for _from in from_dicts[1:]:
        merge_queue = [(result, _from)]
        for _to, _from in merge_queue:
            for k, v in _from.items():
                if k in _to and isinstance(_to[k], dict) and isinstance(v, dict):
                    # key collision and both values are dicts:
                    # add the pair to the merge queue
                    merge_queue.append((_to[k], v))
                    continue
                _to[k] = xerox(v)

    return result

Usage:

print("=============================")
print("merge all dicts to first one without copy.")
a0 = {"a":{"b":1}}
a1 = {"a":{"c":{"d":4}}}
a2 = {"a":{"c":{"f":5}, "d": 6}}
print(f"a0 id[{id(a0)}] value:{a0}")
print(f"a1 id[{id(a1)}] value:{a1}")
print(f"a2 id[{id(a2)}] value:{a2}")
r = merge_dicts(a0, a1, a2, no_copy=True)
print(f"r  id[{id(r)}] value:{r}")

print("=============================")
print("create fresh copy of all")
a0 = {"a":{"b":1}}
a1 = {"a":{"c":{"d":4}}}
a2 = {"a":{"c":{"f":5}, "d": 6}}
print(f"a0 id[{id(a0)}] value:{a0}")
print(f"a1 id[{id(a1)}] value:{a1}")
print(f"a2 id[{id(a2)}] value:{a2}")
r = merge_dicts(a0, a1, a2)
print(f"r  id[{id(r)}] value:{r}")
Possible answered 26/9, 2021 at 13:20 Comment(1)
Is it really necessary to have explicit for loops nested three levels deep?Duhamel
P
1

The question is tagged python-3x, but taking into account that the tag is a relatively recent addition and that the most-voted, accepted answer deals extensively with a Python 2.x solution, I dare add a one-liner that exploits an irritating feature of Python 2.x list comprehensions, namely name leaking...

$ python2
Python 2.7.13 (default, Jan 19 2017, 14:48:08) 
[GCC 6.3.0 20170118] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> [z.update(d) for z in [{}] for d in (x, y)]
[None, None]
>>> z
{'a': 1, 'c': 11, 'b': 10}
>>> ...

I'm happy to say that the above doesn't work in any version of Python 3.
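
For the record, here is roughly what the same session looks like under Python 3, where the comprehension variable no longer leaks:

$ python3
>>> x = {'a':1, 'b': 2}
>>> y = {'b':10, 'c': 11}
>>> [z.update(d) for z in [{}] for d in (x, y)]
[None, None]
>>> z
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'z' is not defined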

Pier answered 30/5, 2017 at 12:28 Comment(0)
