You can remove the i-th element from a heap quite easily:
h[i] = h[-1]
h.pop()
heapq.heapify(h)
Just replace the element you want to remove with the last element, remove the last element, then re-heapify the heap. This is O(n); if you want you can do the same thing in O(log n), but you'll need to call a couple of the internal heapify functions, or better, as larsmans pointed out, just copy the source of _siftup/_siftdown out of heapq.py into your own code:
h[i] = h[-1]
h.pop()
if i < len(h):
    heapq._siftup(h, i)
    heapq._siftdown(h, 0, i)
Note that in each case you can't just do h[i] = h.pop(), as that would fail if i references the last element. If you special-case removing the last element then you could combine the overwrite and pop.
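Putting that together, the overwrite, pop, and sifts can be folded into one helper; heap_remove is a hypothetical name, and it leans on the undocumented heapq internals:

```python
import heapq

def heap_remove(h, i):
    # Hypothetical helper: remove and return h[i] in O(log n).
    # Relies on the undocumented heapq._siftup/_siftdown internals.
    removed = h[i]
    last = h.pop()       # combined overwrite-and-pop
    if i < len(h):       # skip the sifts if i referenced the last element
        h[i] = last
        heapq._siftup(h, i)
        heapq._siftdown(h, 0, i)
    return removed

h = [1, 3, 2, 7, 4]
heapq.heapify(h)
print(heap_remove(h, 1))  # → 3
```

The `if i < len(h)` test is exactly the special case above: when the element being removed is the last one, the pop alone finishes the job.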
Note that depending on the typical size of your heap you might find that just calling heapify, while theoretically less efficient, could be faster than re-using _siftup/_siftdown: a little bit of introspection will reveal that heapify is probably implemented in C, but the C implementation of the internal functions isn't exposed. If performance matters to you then consider doing some timing tests on typical data to see which is best. Unless you have really massive heaps, big-O may not be the most important factor.
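A rough sketch of such a timing test (the heap size, removal index, and repeat count here are arbitrary assumptions; substitute your own typical data):

```python
import heapq
import random
import timeit

def remove_heapify(h, i):
    # O(n) overall, but heapify is implemented in C
    h[i] = h[-1]
    h.pop()
    heapq.heapify(h)

def remove_sift(h, i):
    # O(log n), but uses the pure-Python internal functions
    h[i] = h[-1]
    h.pop()
    if i < len(h):
        heapq._siftup(h, i)
        heapq._siftdown(h, 0, i)

base = list(range(10_000))
random.shuffle(base)
heapq.heapify(base)

# Time each strategy on a fresh copy of the same heap.
for name, fn in [("heapify", remove_heapify), ("sift", remove_sift)]:
    t = timeit.timeit(lambda: fn(base.copy(), 123), number=200)
    print(f"{name}: {t:.4f}s")
```

Which one wins depends on the heap size and your Python build, which is exactly why measuring beats reasoning from big-O here.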
Edit: someone tried to edit this answer to remove the call to _siftdown, with a comment that:
_siftdown is not needed. New h[i] is guaranteed to be the smallest of the old h[i]'s children, which is still larger than old h[i]'s parent
(new h[i]'s parent). _siftdown will be a no-op. I have to edit since I
don't have enough rep to add a comment yet.
What they've missed in this comment is that h[-1] might not be a child of h[i] at all. The new value inserted at h[i] could come from a completely different branch of the heap, so it might need to be sifted in either direction.
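A small made-up heap shows this happening: the last element 3 replaces h[3]=20 in the other branch, ends up smaller than its new parent 10, and only the extra _siftdown call moves it up.

```python
import heapq

h = [0, 10, 1, 20, 30, 2, 3]  # a valid heap; 3 sits in the right-hand branch
i = 3                          # remove the 20 from the left-hand branch
h[i] = h[-1]                   # the 3 is not a child of h[3]
h.pop()
heapq._siftup(h, i)            # no-op here: index 3 is a leaf, 3 stays under 10
heapq._siftdown(h, 0, i)       # moves 3 up past 10, restoring the invariant
print(h)                       # → [0, 3, 1, 10, 30, 2]
```

Dropping the final _siftdown would leave [0, 10, 1, 3, 30, 2], which violates the heap invariant (3 below 10).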
Also, to the comment asking why not just use sort() to restore the heap: calling _siftup and _siftdown are both O(log n) operations, calling heapify is O(n), and calling sort() is an O(n log n) operation. It is quite possible that calling sort will be fast enough, but for large heaps it is an unnecessary overhead.
Edited to avoid the issue pointed out by @Seth Bruder. When i references the end element the _siftup() call would fail, but in that case popping an element off the end of the heap doesn't break the heap invariant.
Note: someone suggested an edit (rejected before I got to it) changing the heapq._siftdown(h, 0, i) to heapq._siftdown(h, 0, len(h)). This would be incorrect, as the final parameter of _siftdown() is the position of the element to move, not some limit on where to move it. The second parameter, 0, is the limit.
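A small contrived call makes the parameter meanings visible: the third argument is the position of the out-of-place item, and the second is how far toward the root it is allowed to travel.

```python
import heapq

h = [0, 5, 2, 7, 1]        # the 1 at index 4 is smaller than its parent 5
heapq._siftdown(h, 0, 4)   # 4 = position of the item to move; 0 = the limit
print(h)                   # → [0, 1, 2, 7, 5]
```

Passing len(h) as the last argument would ask _siftdown to move a non-existent element one past the end of the list.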
Sifting up temporarily removes the new item at the specified position from the list, then moves the smallest child of that position up, repeating until all smaller children have been moved up and a leaf is left empty. The removed item is inserted in the empty leaf node, then _siftdown() is called to move it below any larger parent node. The catch is that the call to _siftdown() inside _siftup() uses its second parameter to terminate the sift at the original position. The extra call to _siftdown() in the code I gave continues the sift down as far as the root of the heap. It only does something if the new element actually needs to be further down than the position it got inserted at.
For the avoidance of doubt: sift up moves to higher indexes in the list. Sift down moves to lower indexes i.e. earlier in the list. The heap has its root at position 0 and its leaves at higher numbers.
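In terms of index arithmetic (a small illustrative snippet, with the index chosen arbitrarily):

```python
# Binary heap stored in a list, root at index 0. For any index i:
i = 5
parent = (i - 1) // 2             # sifting down walks toward this lower index
left, right = 2 * i + 1, 2 * i + 2  # sifting up walks toward these higher indexes
print(parent, left, right)        # → 2 11 12
```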