The speaker is wrong in this case. The actual cost is O(n * log(t)). Heapify is called only on the first t elements of the iterable. That's O(t), but it's insignificant if t is much smaller than n. Then all the remaining elements are added to this "little heap" via heappushpop, one at a time. That takes O(log(t)) time per invocation of heappushpop. The length of the heap remains t throughout. At the very end, the heap is sorted, which costs O(t * log(t)), but that's also insignificant if t is much smaller than n.
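A minimal sketch of that strategy in Python (the real heapq.nlargest also accepts a key= argument and handles a few special cases, so treat this as illustrative only, not the actual implementation):

    import heapq
    import itertools

    def nlargest_sketch(t, iterable):
        # Rough sketch of the approach described above.
        it = iter(iterable)
        # O(t): turn the first t elements into a min-heap.
        heap = list(itertools.islice(it, t))
        heapq.heapify(heap)
        # O(log t) per element: push the new value and pop the current
        # smallest, so the heap always holds the t largest seen so far.
        for x in it:
            heapq.heappushpop(heap, x)
        # O(t * log t): sort the survivors, largest first.
        return sorted(heap, reverse=True)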
Fun with Theory ;-)
There are reasonably easy ways to find the t-th largest element in expected O(n) time; for example, see here. There are harder ways to do it in worst-case O(n) time. Then, in another pass over the input, you could output the t elements >= the t-th largest (with tedious complications in case of duplicates). So the whole job can be done in O(n) time.
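A rough sketch of that expected-O(n) idea (the name t_largest_unordered is made up here; it returns the t largest elements in arbitrary order and makes no attempt at the harder worst-case-O(n) pivot selection):

    import random

    def t_largest_unordered(t, data):
        # Quickselect-style partitioning: expected O(n) total work,
        # because the working list shrinks geometrically on average.
        data = list(data)
        result = []
        while t > 0 and data:
            pivot = random.choice(data)
            greater = [x for x in data if x > pivot]
            if len(greater) >= t:
                data = greater                 # all t winners are in here
                continue
            result.extend(greater)             # everything > pivot is a winner
            t -= len(greater)
            equal = [x for x in data if x == pivot]
            if len(equal) >= t:
                result.extend(equal[:t])       # copies of the pivot fill the rest
                return result
            result.extend(equal)
            t -= len(equal)
            data = [x for x in data if x < pivot]   # keep hunting below the pivot
        return result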
But those ways require O(n) memory too. Python doesn't use them. An advantage of what's actually implemented is that the worst-case "extra" memory burden is O(t), and that can be very significant when the input is, for example, a generator producing a great many values.
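For example, only the running "top 10" ever lives in memory here, even though the generator produces ten million values:

    import heapq

    # heapq.nlargest consumes the generator one value at a time and keeps
    # just 10 candidates, so the extra memory stays O(t) rather than O(n).
    top10 = heapq.nlargest(10, (x * x for x in range(10_000_000)))
    print(top10)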
This doesn't mean you can use nlargest with t=n to comparison sort a list in linear time. If you just want the t largest elements in any order, that can be done in O(n) with quickselect. heapq.nlargest doesn't use quickselect, though; it gives the items in sorted order with a heap-based algorithm. – Skaggs