# 13.12. Heapsort

## 13.12.1. Heapsort

Our discussion of Quicksort began by considering the practicality of using a BST for sorting. The BST requires more space than the other sorting methods and will be slower than Quicksort or Mergesort due to the relative expense of inserting values into the tree. There is also the possibility that the BST might be unbalanced, leading to a $\Theta(n^2)$ worst-case running time. Subtree balance in the BST is closely related to Quicksort's partition step. Quicksort's pivot serves roughly the same purpose as the BST root value in that the left partition (subtree) stores values less than the pivot (root) value, while the right partition (subtree) stores values greater than or equal to the pivot (root).

A good sorting algorithm can be devised based on a tree structure more suited to the purpose. In particular, we would like the tree to be balanced, space efficient, and fast. The algorithm should take advantage of the fact that sorting is a special-purpose application in that all of the values to be stored are available at the start. This means that we do not necessarily need to insert one value at a time into the tree structure.

Heapsort is based on the heap data structure. Heapsort has all of the advantages just listed. The complete binary tree is balanced, its array representation is space efficient, and we can load all values into the tree at once, taking advantage of the efficient buildheap function. The asymptotic performance of Heapsort when all of the records have unique key values is $\Theta(n \log n)$ in the best, average, and worst cases. It is not as fast as Quicksort in the average case (by a constant factor), but Heapsort has special properties that will make it particularly useful for external sorting algorithms, used when sorting data sets too large to fit in main memory.

A complete implementation is as follows.

```java
static void heapsort(Comparable[] A) {
  // The heap constructor invokes the buildheap method
  MaxHeap H = new MaxHeap(A, A.length, A.length);
  for (int i = 0; i < A.length; i++)  // Now sort
    H.removemax();  // Removemax places max at end of heap
}
```

```cpp
void heapsort(Comparable* A[], int n) {
  // The heap constructor invokes the buildheap method
  MaxHeap H(A, n, n);
  for (int i = 0; i < n; i++)  // Now sort
    H.removemax();  // removemax places max at end of heap
}
```
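The implementations above rely on a `MaxHeap` class defined elsewhere. For readers who want something runnable on its own, here is a minimal self-contained sketch of the same idea: build the max-heap bottom-up with a sift-down pass, then repeatedly swap the root to the end of the shrinking heap. The helper name `siftdown` and the use of a plain `int` array are illustrative assumptions, not part of the book's `MaxHeap` interface.

```java
import java.util.Arrays;

public class HeapsortSketch {
    static void heapsort(int[] A) {
        int n = A.length;
        // Buildheap: sift down each internal node, last one first
        for (int i = n / 2 - 1; i >= 0; i--)
            siftdown(A, i, n);
        // Repeatedly move the current max to the end of the heap
        for (int end = n - 1; end > 0; end--) {
            int tmp = A[0]; A[0] = A[end]; A[end] = tmp;
            siftdown(A, 0, end);
        }
    }

    // Push A[i] down until neither child is larger (heap of size n)
    static void siftdown(int[] A, int i, int n) {
        while (2 * i + 1 < n) {
            int child = 2 * i + 1;                // left child
            if (child + 1 < n && A[child + 1] > A[child])
                child++;                           // right child is larger
            if (A[i] >= A[child]) return;          // heap property holds
            int tmp = A[i]; A[i] = A[child]; A[child] = tmp;
            i = child;
        }
    }

    public static void main(String[] args) {
        int[] data = {73, 6, 57, 88, 60, 42, 83, 72, 48, 85};
        heapsort(data);
        System.out.println(Arrays.toString(data));
        // → [6, 42, 48, 57, 60, 72, 73, 83, 85, 88]
    }
}
```

Note that because each `removemax` (here, the swap plus `siftdown`) places the maximum at the end of the shrinking heap, the array ends up sorted in ascending order with no extra space.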


Here is a warmup practice exercise for Heapsort.

## 13.12.2. Heapsort Proficiency Practice

Now test yourself to see how well you understand Heapsort. Can you reproduce its behavior?

## 13.12.3. Heapsort Analysis

This visualization presents the running-time analysis of Heapsort.

While typically slower than Quicksort by a constant factor (because unloading the heap using removemax is somewhat slower than Quicksort's series of partitions), Heapsort has one special advantage over the other sorts studied so far. Building the heap is relatively cheap, requiring $\Theta(n)$ time. Removing the maximum-valued record from the heap requires $\Theta(\log n)$ time in the worst case. Thus, if we wish to find the $k$ records with the largest key values in an array, we can do so in time $\Theta(n + k \log n)$. If $k$ is small, this is a substantial improvement over the time required to find the $k$ largest-valued records using one of the other sorting methods described earlier (many of which would require sorting all of the array first). One situation where we are able to take advantage of this concept is in the implementation of Kruskal's algorithm for minimal-cost spanning trees. That algorithm requires that edges be visited in ascending order (so, use a min-heap), but this process stops as soon as the MST is complete. Thus, only a relatively small fraction of the edges need be sorted.
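The $\Theta(n + k \log n)$ idea above can be sketched directly: build a max-heap over all $n$ records in $\Theta(n)$ time, then call `removemax` only $k$ times. This is a hedged illustration using an explicit array-based heap; the names `kLargest` and `siftdown` are hypothetical helpers, not part of the book's class library.

```java
import java.util.Arrays;

public class KLargest {
    // Return the k largest values in descending order:
    // Theta(n) buildheap + k removals at Theta(log n) each.
    static int[] kLargest(int[] A, int k) {
        int[] heap = Arrays.copyOf(A, A.length);
        int n = heap.length;
        for (int i = n / 2 - 1; i >= 0; i--)   // buildheap: Theta(n)
            siftdown(heap, i, n);
        int[] result = new int[k];
        for (int j = 0; j < k; j++) {          // k * Theta(log n)
            result[j] = heap[0];               // current maximum
            heap[0] = heap[--n];               // shrink heap
            siftdown(heap, 0, n);
        }
        return result;
    }

    static void siftdown(int[] A, int i, int n) {
        while (2 * i + 1 < n) {
            int c = 2 * i + 1;
            if (c + 1 < n && A[c + 1] > A[c]) c++;
            if (A[i] >= A[c]) return;
            int t = A[i]; A[i] = A[c]; A[c] = t;
            i = c;
        }
    }

    public static void main(String[] args) {
        int[] data = {12, 4, 97, 23, 55, 8, 76, 31};
        System.out.println(Arrays.toString(kLargest(data, 3)));
        // → [97, 76, 55]
    }
}
```

When $k$ is small relative to $n$, only a small fraction of the heap is ever unloaded, which is exactly the situation Kruskal's algorithm exploits (with a min-heap of edges instead of a max-heap).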

Another special case arises when all of the records being sorted have the same key value. This represents the best case for Heapsort, because removing the maximum value then requires only constant time: the value swapped to the top is never pushed down the heap.
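To see why the push-down stops immediately when all keys are equal, consider a sift-down that counts its loop iterations (a toy instrumented version; the counter and class name are purely illustrative). With identical keys, the root is never smaller than a child, so the loop body runs exactly once before returning.

```java
public class EqualKeysDemo {
    static int steps;  // counts siftdown loop iterations (illustrative)

    static void siftdown(int[] A, int i, int n) {
        while (2 * i + 1 < n) {
            steps++;
            int c = 2 * i + 1;
            if (c + 1 < n && A[c + 1] > A[c]) c++;
            if (A[i] >= A[c]) return;  // equal keys: always true here
            int t = A[i]; A[i] = A[c]; A[c] = t;
            i = c;
        }
    }

    public static void main(String[] args) {
        int[] equal = new int[1023];        // a large heap of identical keys
        java.util.Arrays.fill(equal, 7);
        steps = 0;
        siftdown(equal, 0, equal.length);   // the push-down after one removemax
        System.out.println(steps);          // → 1: constant work, not log n
    }
}
```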