I implemented an algorithm that makes use of a priority queue. I was motivated by this question: Transform a std::multimap into std::priority_queue
I am going to store up to 10 million elements, each with its own priority value.
I then want to iterate until the queue is empty. Every time an element is retrieved, it is also deleted from the queue.
After that I recalculate the element's priority value, because it can change as a result of previous iterations.
If the value increased, I insert the element into the queue again. How often this happens depends on how far along the iteration is (in the first 25% it does not happen, in the next 50% it does happen, and in the last 25% it happens multiple times).
When I retrieve the next element and do not reinsert it, I process it. For this I do not need the element's priority value, only its technical ID.
This was the reason I intuitively chose a std::multimap to achieve this, using .begin() to get the first element, .insert() to insert an element, and .erase() to remove it.
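Here is a condensed sketch of the multimap version. recalculate() and process() are placeholders standing in for my real code, and the types are simplified (the key is the priority, the mapped value is the technical ID):

```cpp
#include <cstdint>
#include <map>

using ID = std::uint64_t;          // the technical ID of an element

int  recalculate(ID id);           // placeholder: recompute the priority
void process(ID id);               // placeholder: the actual work, needs only the ID

// Key = priority, mapped value = technical ID.
void drain(std::multimap<int, ID>& queue)
{
    while (!queue.empty()) {
        auto it = queue.begin();               // first element (smallest key)
        int priority = it->first;
        ID  id       = it->second;
        queue.erase(it);                       // retrieved elements leave the queue

        int newPriority = recalculate(id);
        if (newPriority > priority)
            queue.insert({newPriority, id});   // priority increased: reinsert
        else
            process(id);                       // otherwise handle it now
    }
}
```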
Also, I did not intuitively choose std::priority_queue directly, because other questions on this topic answered that std::priority_queue is most likely meant for single values only, not for mapped values.
After reading the link above, I reimplemented it using a priority queue, analogous to the answer in the linked question.
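A condensed sketch of the reimplementation, with the same placeholder functions as above. I assume a min-first ordering via std::greater, so .top() matches what .begin() gave me on the multimap; since the pairs are compared lexicographically, ties in priority fall back to the ID:

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

using ID    = std::uint64_t;
using Entry = std::pair<int, ID>;  // (priority, technical ID)

int  recalculate(ID id);           // placeholder: recompute the priority
void process(ID id);               // placeholder: the actual work

using MinQueue = std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>>;

void drain(MinQueue& queue)
{
    while (!queue.empty()) {
        Entry e = queue.top();                    // element with the smallest priority
        queue.pop();                              // removed on retrieval

        int newPriority = recalculate(e.second);
        if (newPriority > e.first)
            queue.push({newPriority, e.second});  // priority increased: reinsert
        else
            process(e.second);
    }
}
```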
My runtimes are not that unequal (about an hour for 10 million elements). Now I am wondering why std::priority_queue is faster at all. I would actually have expected the std::multimap to be faster because of the many reinsertions.
Maybe the problem is that there are too many reorganizations of the multimap?
A std::priority_queue of (int, T) pairs is a container designed to pick the extreme int. It has fewer constraints, so I would expect it to be faster. Is the key in your multimap the priority from the queue, like in the linked question? Do I get it right? – Transect