The skew binomial heaps described in Okasaki's Purely Functional Data Structures support merge in worst-case $O(\log \max(m,n))$ time, where $m$ and $n$ are the lengths of the queues being merged. This is worse than segmented binomial queues, which support merge in worst-case $O(\log \min(m,n))$ time, and lazy binomial queues, which support it in worst-case $O(\log \max(m,n))$ time but $O(\log \min(m,n))$ amortized time[*]. This seems to be inherent in the restriction that the skew binary number underlying the queue representation is kept in canonical form (at most one 2, and only as the least significant nonzero digit). Would it be possible to relax this restriction somewhat to get more efficient merges? The basic challenge is that a 2 must not be allowed to cascade into another 2.
[*] I've also recently come up with a variant of scheduled binomial queues with the same worst-case bounds as segmented queues; that version is not yet fully implemented.