Safety of std::unordered_map::merge()

While writing some code targeting C++17, I kind of hit a stumbling block determining the exception-safety of an operation merging two compatible std::unordered_maps. Per the current working draft, §26.2.7, table 91 reads, in part, regarding the conditions of a.merge( a2 ):

Requires: a.get_allocator() == a2.get_allocator().

Attempts to extract each element in a2 and insert it into a using the hash function and key equality predicate of a. In containers with unique keys, if there is an element in a with key equivalent to the key of an element from a2, then that element is not extracted from a2.

Postconditions: Pointers and references to the transferred elements of a2 refer to those same elements but as members of a. Iterators referring to the transferred elements and all iterators referring to a will be invalidated, but iterators to elements remaining in a2 will remain valid.

Throws: Nothing unless the hash function or key equality predicate throws.

It's worth noting that these conditions are strongly reminiscent of the requirements given for ordinary associative containers (std::map) in §26.2.6, table 90, for a.merge( a2 ):

Requires: a.get_allocator() == a2.get_allocator().

Attempts to extract each element in a2 and insert it into a using the comparison object of a. In containers with unique keys, if there is an element in a with key equivalent to the key of an element from a2, then that element is not extracted from a2.

Postconditions: Pointers and references to the transferred elements of a2 refer to those same elements but as members of a. Iterators referring to the transferred elements will continue to refer to their elements, but they now behave as iterators into a, not into a2.

Throws: Nothing unless the comparison object throws.

I needed to merge two std::unordered_maps with the same number of elements, whose keys I could ensure were unique across both containers, meaning that the map receiving the merged result would have double the number of elements it previously had, and the merged-from container would be empty. This should be perfectly safe thanks to C++17, right?

This is a huge win in terms of performance… except that I had this nagging doubt. Hilariously, the postcondition statement says nothing about whether the previous max load factor in the merged-into map would be honored, and while that seems like a safe implicit assumption, it seemed to naïvely conflict with the statement about the unordered_map's exception-safety. If you're using a hash table design wherein the buckets are contiguously allocated buffers, maintaining your load factor would seem to imply rehashing, which would in turn seem to imply reallocation of the bucket buffer.
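
For illustration, here is a minimal sketch of that arithmetic; whether bucket_count() actually grows across the merge is, of course, up to the implementation:

#include <iostream>      // for std::cout
#include <string>        // for std::string
#include <unordered_map> // for std::unordered_map<>

int main()
{
  std::unordered_map<std::string, int> m1{ { "Purple", 0 }, { "Green", 3 } };
  std::unordered_map<std::string, int> m2{ { "Blue", 12 }, { "Red", 2 } };

  // If the combined size would push the load factor past its maximum,
  // the merged-into map must rehash, and with contiguously allocated
  // buckets that means reallocating the bucket buffer.
  const bool mustRehash =
    m1.size() + m2.size() > m1.max_load_factor() * m1.bucket_count();

  std::cout << "bucket_count before: " << m1.bucket_count() << '\n'
            << "rehash expected:     " << std::boolalpha << mustRehash << '\n';

  m1.merge( m2 );

  std::cout << "bucket_count after:  " << m1.bucket_count() << '\n';
}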

Already, this seemed like an exercise in extreme navel-gazing and there was good cause to leave well enough alone: a more complicated hash table could conceivably be made as a completely node-based structure, similar to the red-black trees usually underpinning std::map, and in such a case, the spec seemed reasonable as a rehash wouldn't imply allocation.

Perhaps to my benefit, I succumbed to doubt and dug into gcc-7.1's implementation of merge. It's incredibly complicated, but to summarize my findings, I found that the buckets are, indeed, contiguously allocated buffers, and rehashing does imply reallocation. Maybe, I thought, there was some deeper magic I was missing (I stared at the source for nearly an entire day and still felt I had a poor understanding of it) which just disabled rehashing during a merge, meaning that all of the specified conditions would be upheld, but you could get a nasty performance regression after a suitably large merge, since your map would likely be overloaded.

Instead of just questioning my interpretation of libstdc++, I moved on to a practical assessment mirroring my use case (which I would have presented directly if that were possible, I'm sorry):

#include <memory>        // for std::shared_ptr<>
#include <new>           // for std::bad_alloc
#include <utility>       // for std::move(), std::pair<>
#include <type_traits>   // for std::true_type
#include <unordered_map> // for std::unordered_map<>
#include <functional>    // for std::hash<>, std::equal_to<>
#include <string>        // for std::string
#include <iostream>      // for std::cout
#include <cstddef>       // for std::size_t

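// An allocator primed to fail on demand: once trigger() is called, every
// subsequent allocate() throws std::bad_alloc. The flag lives behind a
// shared_ptr so that copies and rebound copies all observe the trigger.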
template<typename T>
class PrimedFailureAlloc
{
public:
  using value_type = T;
  using propagate_on_container_copy_assignment = std::true_type;
  using propagate_on_container_move_assignment = std::true_type;
  using propagate_on_container_swap = std::true_type;

  PrimedFailureAlloc() = default;

  template<typename U>
  PrimedFailureAlloc( const PrimedFailureAlloc<U>& source ) noexcept
    : m_triggered{ source.m_triggered }
  { }

  template<typename U>
  PrimedFailureAlloc( PrimedFailureAlloc<U>&& source ) noexcept
    : m_triggered{ std::move( source.m_triggered ) }
  { }

  T* allocate( std::size_t n )
  {
    if ( *m_triggered ) throw std::bad_alloc{};
    return static_cast<T*>( ::operator new( sizeof( T ) * n ) );
  }

  void deallocate( T* ptr, std::size_t n ) noexcept
  {
    ::operator delete( ptr );
  }

  bool operator==( const PrimedFailureAlloc& rhs ) const noexcept
  {
    return m_triggered == rhs.m_triggered;
  }

  void trigger() noexcept { *m_triggered = true; }

private:
  template<typename U>
  friend class PrimedFailureAlloc;

  std::shared_ptr<bool> m_triggered{ new bool{ false } };
};

template<typename T>
bool operator!=( const PrimedFailureAlloc<T>& lhs,
                 const PrimedFailureAlloc<T>& rhs ) noexcept
{
  return !(lhs == rhs);
}

template< typename Key
        , typename T
        , typename Hash = std::hash<Key>
        , typename KeyEqual = std::equal_to<Key>
        >
using FailingMap = std::unordered_map<
  Key,
  T,
  Hash,
  KeyEqual,
  PrimedFailureAlloc<std::pair<const Key, T>>
>;

template<typename Key, typename T>
void printMap( const FailingMap<Key, T>& map )
{
  std::cout << "{\n";
  for ( const auto& [str, index] : map )
    std::cout << "  { " << str << ", " << index << " }\n";
  std::cout << "}\n";
}

int main()
{
  PrimedFailureAlloc<std::pair<const std::string, int>> a;
  FailingMap<std::string, int> m1{ a };
  FailingMap<std::string, int> m2{ a };

  m1.insert( { { "Purple", 0 }, { "Green", 3 }, { "Indigo", 16 } } );
  m2.insert( { { "Blue", 12 }, { "Red", 2 }, { "Violet", 5 } } );

  // m1.reserve( m1.size() + m2.size() );
  a.trigger();
  m1.merge( m2 );

  std::cout << "map :=\n";
  printMap( m1 );

  return 0;
}

Sure enough, after compiling this code under GCC-7.1, I get:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
[1]    10944 abort      ./a.out

Whereas uncommenting the reserve call (m1.reserve( m1.size() + m2.size() );) results in the expected output:

map :=
{
  { Red, 2 }
  { Violet, 5 }
  { Purple, 0 }
  { Green, 3 }
  { Blue, 12 }
  { Indigo, 16 }
}

Understanding that C++17 is still a draft standard which hasn't yet been finalized, and that gcc's implementation is experimental, I suppose my questions would be:

  1. Have I gravely misconstrued the safety of the merge operation as set out in the standard? Are there best practices for using std::unordered_map::merge() that I've missed? Was I supposed to be implicitly responsible for ensuring the allocation of buckets (see the sketch after this list)? Will using std::unordered_map::reserve() actually be portable when clang, MSVC, and Intel finally support C++17? I mean, my program aborting when an exception-free merge was impossible does, in some sense, adhere to the listed postconditions…
  2. Is this actually a defect in the standard? The similarity of the wording between unordered associative containers and ordinary associative containers might have allowed unintended guarantees to slip in if the text were, say, copy-pasted.
  3. Is this just a gcc bug?
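
As for the bucket-allocation point in question 1, the defensive pattern I have in mind is essentially a one-liner; a sketch (reserveForMerge is just an illustrative name, assuming reserve() behaves per its specification):

#include <unordered_map> // for std::unordered_map<>

// Size the destination's bucket array up front so that merging src into
// dst cannot push the load factor past its maximum, and therefore cannot
// trigger a rehash. Illustrative helper, not part of any library.
template<typename Map>
void reserveForMerge( Map& dst, const Map& src )
{
  dst.reserve( dst.size() + src.size() );
}

Called as reserveForMerge( m1, m2 ); immediately before m1.merge( m2 );, this has exactly the effect of the commented-out reserve() line in my test program.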

It's worth noting that I checked gcc's bug tracker prior to this write-up and found no open bugs matching my description; furthermore, I checked the C++ standard library defect reports and similarly came up empty (admittedly, doing a text search of that site is aggravating and I may have been less than thorough). The latter is unsurprising, since standard defects and their workarounds or consequences are usually noted in gcc's source code, and I found no such notations during my examination. I tried compiling my example code with my most recent checkout of clang (over a week old), but the compiler segfaulted, so I took my examination no further there, and I have not consulted libc++.

Untune answered 13/6, 2017 at 19:47 Comment(4)
Looks like an obvious defect to me.Incumber
Off to LWG we go...Incumber
Thanks, planning to file bugs against GCC to discuss the noexcept which triggers that std::terminate() call, as well as some bad behavior the present implementation engages in.Untune
@Incumber I believe that your bug report was premature. See my post.Intervale

This is just a defect in the standard, i.e., your possibility 2.

LWG has just moved LWG issue 2977 to "Ready" status, which will strike the erroneous Throws clause.
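
In practical terms, with the Throws clause struck, merge() may legitimately propagate std::bad_alloc out of a rehash, so callers who cannot pre-reserve will want to guard it. A sketch, assuming only the basic guarantee (i.e., elements not yet transferred remain in the source container):

#include <iostream>      // for std::cout
#include <new>           // for std::bad_alloc
#include <string>        // for std::string
#include <unordered_map> // for std::unordered_map<>

int main()
{
  std::unordered_map<std::string, int> m1{ { "Purple", 0 } };
  std::unordered_map<std::string, int> m2{ { "Blue", 12 }, { "Red", 2 } };

  try
  {
    m1.merge( m2 ); // may allocate a larger bucket array, and so may throw
  }
  catch ( const std::bad_alloc& )
  {
    // Nothing is lost, but how the elements are split between m1 and m2
    // at this point is unspecified.
    std::cout << "merge failed; " << m2.size() << " elements left behind\n";
  }
}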

Incumber answered 12/7, 2017 at 9:37 Comment(0)

In order to understand whether the wording in the standard is right or not, you need to look into the underlying operations.
Merging itself consists of two operations:

  • setting pointers to the transferred elements; as those elements are already allocated, no allocation is involved here
  • if the bound on the number of elements per bucket is reached, rehash() is called

Calling rehash() is the point where you would expect an exception to be thrown, so let's look into its exception safety.

23.2.5.1 Exception safety guarantees [unord.req.except]
1 For unordered associative containers, no clear() function throws an exception. erase(k) does not throw an exception unless that exception is thrown by the container’s Hash or Pred object (if any).
2 For unordered associative containers, if an exception is thrown by any operation other than the container’s hash function from within an insert or emplace function inserting a single element, the insertion has no effect.
3 For unordered associative containers, no swap function throws an exception unless that exception is thrown by the swap of the container’s Hash or Pred object (if any).
4 For unordered associative containers, if an exception is thrown from within a rehash() function other than by the container’s hash function or comparison function, the rehash() function has no effect.

As you can see, rehash() is specified to have no effect if an exception is thrown by anything other than the container's hash or comparison function. That is, in my view, perfectly in line with the definition for merge:

Throws: Nothing unless the hash function or key equality predicate throws.

My understanding is that when there is no room to grow the underlying bucket array, it is kept in its original form. This may lead to slightly less efficient access to elements, as individual buckets may hold more elements than the load factor allows. The same can happen during insert.
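
If that reading were right, the failure mode would at least be observable after the fact: the merge would complete, but the table would be left overloaded. A sketch (in a normal run the rehash succeeds and the branch is never taken):

#include <iostream>      // for std::cout
#include <string>        // for std::string
#include <unordered_map> // for std::unordered_map<>

int main()
{
  std::unordered_map<std::string, int> m1{ { "Purple", 0 }, { "Green", 3 } };
  std::unordered_map<std::string, int> m2{ { "Blue", 12 }, { "Red", 2 } };

  m1.merge( m2 );

  // Under the reading above, a failed rehash would leave all elements
  // merged but the load factor above its configured maximum.
  if ( m1.load_factor() > m1.max_load_factor() )
    std::cout << "table is overloaded; lookups may degrade\n";
}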

Where is the problem in your case? Possibly in an implementation of rehash() that throws where it shouldn't.

DISCLAIMER: I am not an expert on the topic; this is just what I have found, so feel free to correct me if I am wrong.

Intervale answered 12/7, 2017 at 9:27 Comment(8)
"If rehash() throws it has no effect" doesn't mean "rehash() doesn't throw" and certainly doesn't mean "rehash operations performed by other functions also don't throw".Incumber
@Incumber I meant that something throws inside rehash(); the standard has better wording than mine.Intervale
Better question: if we accept your interpretation of the standard, how does the user know if anything's gone awry in the merge? Do they have to manually inspect the merged-from container? The most reasonable failure behavior is to just leave unmerged elements behind, but how would you then distinguish a failure due to same-key collisions from an allocation failure owing to rehash()?Untune
First, @Incumber was right that rehash() throws an exception when the allocator fails, at least when I test it under gcc or msvc. So how severe is this exception for the merge operation? Will it cause some elements to go unmerged? No, only the reallocation of the bucket array will fail. Will it make the unordered_map less efficient? Yes. But the unordered_map will still work correctly; it may just be slightly inefficient. When do you get hit hardest by this? When merging a very large map into a very small one.Intervale
But these inefficiencies will be present to some extent in your existing code too whenever your data does not distribute evenly. So my opinion is that rehashing during a merge is not a hard error; merge can recover from it and leave the unordered_map in a consistent state. How do you know an allocation failed when no exception is thrown? You can compare load_factor() and max_load_factor(). You can even check before the merge whether rehash() will be called, as the load factor is just size() / bucket_count(). So I will answer your question with a question.Intervale
If an exception is thrown in the middle of the merge, how do you know which elements were already merged? That leads to the question: 1) Should rehash() be called before we start the merge? Seems best, but probably not, as we don't know how many elements will actually be merged in the end. 2) Should it be called when the threshold is reached? Seems good, but it might fail in the middle of the merge and leave the container in an indeterminate state. 3) Should it be called at the end of the merge? Might be a good option, but then merging some elements might be slow when merging a big unordered_map into a small one.Intervale
Still, 3) might be a good option, as the user can easily sidestep the slowdown by calling rehash() beforehand in cases where it might be needed; the other two options don't give you that choice. Moreover, an exception thrown due to an allocation problem under 1) or 3) will not leave the container in an indeterminate state, unlike 2). So if the exception is kept, it might be a good idea to specify the container's state when it is thrown. Also, if the wording in the standard is to be changed, I would rather suggest one similar to 23.3.5, where it would say something likeIntervale
"merge will not throw if (N+n) < z * B, where N is the number of elements in the container prior to the merge operation, n is the number of elements inserted from the other container, B is the container's bucket count, and z is the container's maximum load factor."Intervale

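For reference, the condition proposed in the last comment, expressed as a checkable predicate (mergeWontRehash is an illustrative name):

#include <unordered_map> // for std::unordered_map<>

// The proposed condition "(N + n) < z * B": N = a.size(), n = a2.size(),
// B = a.bucket_count(), z = a.max_load_factor().
template<typename Map>
bool mergeWontRehash( const Map& a, const Map& a2 )
{
  return a.size() + a2.size() < a.max_load_factor() * a.bucket_count();
}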