# -*- coding: Latin-1 -*-

"""Heap queue algorithm (a.k.a. priority queue).

Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for
all k, counting elements from 0.  For the sake of comparison,
non-existing elements are considered to be infinite.  The interesting
property of a heap is that a[0] is always its smallest element.

Usage:

heap = []            # creates an empty heap
heappush(heap, item) # pushes a new item on the heap
item = heappop(heap) # pops the smallest item from the heap
item = heap[0]       # smallest item on the heap without popping it
heapify(x)           # transforms list into a heap, in-place, in linear time
item = heapreplace(heap, item) # pops and returns smallest item, and adds
                               # new item; the heap size is unchanged

Our API differs from textbook heap algorithms as follows:

- We use 0-based indexing.  This makes the relationship between the
  index for a node and the indexes for its children slightly less
  obvious, but is more suitable since Python uses 0-based indexing.

- Our heappop() method returns the smallest item, not the largest.

These two make it possible to view the heap as a regular Python list
without surprises: heap[0] is the smallest item, and heap.sort()
maintains the heap invariant!
"""
# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger

__about__ = """Heap queues

[explanation by François Pinard]

Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for
all k, counting elements from 0.  For the sake of comparison,
non-existing elements are considered to be infinite.  The interesting
property of a heap is that a[0] is always its smallest element.

The strange invariant above is meant to be an efficient memory
representation for a tournament.  The numbers below are `k', not a[k]:

                                   0

                  1                                 2

          3               4                5               6

      7       8       9       10      11      12      13      14

    15 16   17 18   19 20   21 22   23 24   25 26   27 28   29 30

In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'.  In
a usual binary tournament we see in sports, each cell is the winner
over the two cells it tops, and we can trace the winner down the tree
to see all opponents s/he had.  However, in many computer applications
of such tournaments, we do not need to trace the history of a winner.
To be more memory efficient, when a winner is promoted, we try to
replace it by something else at a lower level, and the rule becomes
that a cell and the two cells it tops contain three different items,
but the top cell "wins" over the two topped cells.

If this heap invariant is protected at all times, index 0 is clearly
the overall winner.  The simplest algorithmic way to remove it and
find the "next" winner is to move some loser (let's say cell 30 in the
diagram above) into the 0 position, and then percolate this new 0 down
the tree, exchanging values, until the invariant is re-established.
This is clearly logarithmic in the total number of items in the tree.
By iterating over all items, you get an O(n log n) sort.

A nice feature of this sort is that you can efficiently insert new
items while the sort is going on, provided that the inserted items are
not "better" than the last 0'th element you extracted.  This is
especially useful in simulation contexts, where the tree holds all
incoming events, and the "win" condition means the smallest scheduled
time.  When an event schedules other events for execution, they are
scheduled into the future, so they can easily go into the heap.  So, a
heap is a good structure for implementing schedulers (this is what I
used for my MIDI sequencer :-); a minimal scheduler sketch follows
this docstring.

Various structures for implementing schedulers have been extensively
studied, and heaps are good for this, as they are reasonably speedy,
the speed is almost constant, and the worst case is not much different
from the average case.  However, there are other representations which
are more efficient overall, yet the worst cases might be terrible.

Heaps are also very useful in big disk sorts.  You most probably all
know that a big sort implies producing "runs" (which are pre-sorted
sequences, whose size is usually related to the amount of CPU memory),
followed by merging passes for these runs, which merging is often
very cleverly organised [1].  It is very important that the initial
sort produces the longest runs possible.  Tournaments are a good way
to achieve that.  If, using all the memory available to hold a
tournament, you replace and percolate items that happen to fit the
current run, you'll produce runs which are twice the size of the
memory for random input, and much better for input fuzzily ordered.

Moreover, if you output the 0'th item on disk and get an input which
may not fit in the current tournament (because the value "wins" over
the last output value), it cannot fit in the heap, so the size of the
heap decreases.  The freed memory could be cleverly reused immediately
for progressively building a second heap, which grows at exactly the
same rate the first heap is melting.  When the first heap completely
vanishes, you switch heaps and start a new run.  Clever and quite
effective!

In a word, heaps are useful memory structures to know.  I use them in
a few applications, and I think it is good to keep a `heap' module
around. :-)

--------------------
[1] The disk balancing algorithms which are current, nowadays, are
more annoying than clever, and this is a consequence of the seeking
capabilities of the disks.  On devices which cannot seek, like big
tape drives, the story was quite different, and one had to be very
clever to ensure (far in advance) that each tape movement would be the
most effective possible (that is, would best participate at
"progressing" the merge).  Some tapes were even able to read
backwards, and this was also used to avoid the rewinding time.
Believe me, real good tape sorts were quite spectacular to watch!
From all times, sorting has always been a Great Art! :-)
"""
__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'nlargest',
           'nsmallest']

from itertools import islice, repeat
import bisect

def heappush(heap, item):
    """Push item onto heap, maintaining the heap invariant."""
    heap.append(item)
    _siftdown(heap, 0, len(heap)-1)
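
# A small illustrative sketch (not part of the module API): since comparisons
# drive the heap, pushing (priority, record) tuples gives a simple priority
# queue without any custom comparison methods.  The function name and the
# queued items are hypothetical, invented for this example.
def _priority_queue_example():
    queue = []
    heappush(queue, (2, 'write report'))
    heappush(queue, (1, 'fix bug'))        # lowest priority number wins
    heappush(queue, (3, 'water plants'))
    return heappop(queue)                  # --> (1, 'fix bug')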

def heappop(heap):
    """Pop the smallest item off the heap, maintaining the heap invariant."""
    lastelt = heap.pop()    # raises appropriate IndexError if heap is empty
    if heap:
        returnitem = heap[0]
        heap[0] = lastelt
        _siftup(heap, 0)
    else:
        returnitem = lastelt
    return returnitem

def heapreplace(heap, item):
    """Pop and return the current smallest value, and add the new item.

    This is more efficient than heappop() followed by heappush(), and can be
    more appropriate when using a fixed-size heap.  Note that the value
    returned may be larger than item!  That constrains reasonable uses of
    this routine unless written as part of a conditional replacement:

        if item > heap[0]:
            item = heapreplace(heap, item)
    """
    returnitem = heap[0]    # raises appropriate IndexError if heap is empty
    heap[0] = item
    _siftup(heap, 0)
    return returnitem
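
# A minimal sketch of the conditional-replacement idiom from the docstring
# above, used to keep only the k largest items seen so far in a fixed-size
# heap; this is essentially what nlargest() below does.  Illustration only;
# the helper name and its arguments are made up for the example.
def _k_largest_example(iterable, k):
    it = iter(iterable)
    heap = list(islice(it, k))
    if not heap:
        return heap
    heapify(heap)                    # heap[0] is the smallest of the k kept
    for item in it:
        if item > heap[0]:
            heapreplace(heap, item)  # evict the current smallest
    return sorted(heap, reverse=True)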

def heapify(x):
    """Transform list x into a heap, in-place, in O(len(x)) time."""
    n = len(x)
    # Transform bottom-up.  The largest index there's any point to looking at
    # is the largest with a child index in-range, so must have 2*i + 1 < n,
    # or i < (n-1)/2.  If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so
    # j-1 is the largest, which is n//2 - 1.  If n is odd = 2*j+1, this is
    # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1.
    for i in reversed(xrange(n//2)):
        _siftup(x, i)
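
# A small illustration (not part of the module API) of the invariant that
# heapify() establishes: every parent a[k] is <= its children a[2*k+1] and
# a[2*k+2], so a[0] is the smallest element.  The helper name and the sample
# data are made up for this sketch.
def _heapify_example():
    x = [9, 4, 7, 1, 3, 8]
    heapify(x)
    for k in xrange(len(x)):
        for child in (2*k + 1, 2*k + 2):
            assert child >= len(x) or x[k] <= x[child]
    return x[0]    # --> 1, the smallest element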

def nlargest(n, iterable):
    """Find the n largest elements in a dataset.

    Equivalent to:  sorted(iterable, reverse=True)[:n]
    """
    it = iter(iterable)
    result = list(islice(it, n))
    if not result:
        return result
    heapify(result)
    _heapreplace = heapreplace
    sol = result[0]         # sol --> smallest of the nlargest
    for elem in it:
        if elem <= sol:
            continue
        _heapreplace(result, elem)
        sol = result[0]
    result.sort(reverse=True)
    return result
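
# A short usage sketch (not part of the module API) of the equivalence claimed
# in the docstring above: nlargest() gives the same answer as fully sorting
# and slicing, but only ever keeps n items on the heap.  The helper name and
# sample data are invented for this example.
def _nlargest_example():
    data = [1, 8, 2, 23, 7, -4, 18, 23]
    assert nlargest(3, data) == sorted(data, reverse=True)[:3]   # [23, 23, 18]
    return nlargest(3, data)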

def nsmallest(n, iterable):
    """Find the n smallest elements in a dataset.

    Equivalent to:  sorted(iterable)[:n]
    """
    if hasattr(iterable, '__len__') and n * 10 <= len(iterable):
        # For smaller values of n, the bisect method is faster than a minheap.
        # It is also memory efficient, consuming only n elements of space.
        it = iter(iterable)
        result = sorted(islice(it, 0, n))
        if not result:
            return result
        insort = bisect.insort
        pop = result.pop
        los = result[-1]    # los --> Largest of the nsmallest
        for elem in it:
            if los <= elem:
                continue
            insort(result, elem)
            pop()
            los = result[-1]
        return result
    # An alternative approach manifests the whole iterable in memory but
    # saves comparisons by heapifying all at once.  Also, saves time
    # over bisect.insort() which has O(n) data movement time for every
    # insertion.  Finding the n smallest of an m length iterable requires
    # O(m) + O(n log m) comparisons.
    h = list(iterable)
    heapify(h)
    return map(heappop, repeat(h, min(n, len(h))))

# 'heap' is a heap at all indices >= startpos, except possibly for pos.  pos
# is the index of a leaf with a possibly out-of-order value.  Restore the
# heap invariant.
def _siftdown(heap, startpos, pos):
    newitem = heap[pos]
    # Follow the path to the root, moving parents down until finding a place
    # newitem fits.
    while pos > startpos:
        parentpos = (pos - 1) >> 1
        parent = heap[parentpos]
        if parent <= newitem:
            break
        heap[pos] = parent
        pos = parentpos
    heap[pos] = newitem

# The child indices of heap index pos are already heaps, and we want to make
# a heap at index pos too.  We do this by bubbling the smaller child of
# pos up (and so on with that child's children, etc) until hitting a leaf,
# then using _siftdown to move the oddball originally at index pos into place.
#
# We *could* break out of the loop as soon as we find a pos where newitem <=
# both its children, but turns out that's not a good idea, and despite that
# many books write the algorithm that way.  During a heap pop, the last array
# element is sifted in, and that tends to be large, so that comparing it
# against values starting from the root usually doesn't pay (= usually doesn't
# get us out of the loop early).  See Knuth, Volume 3, where this is
# explained and quantified in an exercise.
#
# Cutting the # of comparisons is important, since these routines have no
# way to extract "the priority" from an array element, so that intelligence
# is likely to be hiding in custom __cmp__ methods, or in array elements
# storing (priority, record) tuples.  Comparisons are thus potentially
# expensive.
#
# On random arrays of length 1000, making this change cut the number of
# comparisons made by heapify() a little, and those made by exhaustive
# heappop() a lot, in accord with theory.  Here are typical results from 3
# runs (3 just to demonstrate how small the variance is):
#
# Compares needed by heapify     Compares needed by 1000 heappops
# --------------------------     --------------------------------
#         1837 cut to 1663               14996 cut to 8680
#         1855 cut to 1659               14966 cut to 8678
#         1847 cut to 1660               15024 cut to 8703
#
# Building the heap by using heappush() 1000 times instead required
# 2198, 2148, and 2219 compares: heapify() is more efficient, when
# you can use it.
#
# The total compares needed by list.sort() on the same lists were 8627,
# 8627, and 8632 (this should be compared to the sum of heapify() and
# heappop() compares): list.sort() is (unsurprisingly!) more efficient
# for sorting.

def _siftup(heap, pos):
    endpos = len(heap)
    startpos = pos
    newitem = heap[pos]
    # Bubble up the smaller child until hitting a leaf.
    childpos = 2*pos + 1    # leftmost child position
    while childpos < endpos:
        # Set childpos to index of smaller child.
        rightpos = childpos + 1
        if rightpos < endpos and heap[rightpos] <= heap[childpos]:
            childpos = rightpos
        # Move the smaller child up.
        heap[pos] = heap[childpos]
        pos = childpos
        childpos = 2*pos + 1
    # The leaf at pos is empty now.  Put newitem there, and bubble it up
    # to its final resting place (by sifting its parents down).
    heap[pos] = newitem
    _siftdown(heap, startpos, pos)

# If available, use C implementation
try:
    from _heapq import heappush, heappop, heapify, heapreplace, nlargest, nsmallest
except ImportError:
    pass

if __name__ == "__main__":
    # Simple sanity test
    heap = []
    data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
    for item in data:
        heappush(heap, item)
    sort = []
    while heap:
        sort.append(heappop(heap))
    print sort
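
    # Additional illustrative checks, added as part of this edit rather than
    # taken from the original test: the selection functions should agree with
    # sorting the same data.
    assert nlargest(4, data) == sorted(data, reverse=True)[:4]
    assert nsmallest(4, data) == sorted(data)[:4]
    print nlargest(4, data), nsmallest(4, data)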
|