12.18. Huffman Coding Trees

12.18.1. Huffman Coding Trees

One can often gain an improvement in space requirements in exchange for a penalty in running time. There are many situations where this is a desirable tradeoff. A typical example is storing files on disk. If the files are not actively used, the owner might wish to compress them to save space. Later, they can be uncompressed for use, which costs some time, but only once.

We often represent a set of items in a computer program by assigning a unique code to each item. For example, the standard ASCII coding scheme assigns a unique eight-bit value to each character. It takes a certain minimum number of bits to provide enough unique codes so that we have a different one for each character. For example, it takes \(\left\lceil \log 128 \right\rceil\) or seven bits to provide the 128 unique codes needed to represent the 128 symbols of the ASCII character set. [1]

The requirement for \(\left\lceil \log n \right\rceil\) bits to represent \(n\) unique code values assumes that all codes will be the same length, as are ASCII codes. These are called fixed-length codes. If all characters were used equally often, then a fixed-length coding scheme is the most space-efficient method. However, you are probably aware that not all characters are used equally often in many applications. For example, the various letters in an English language document have greatly different frequencies of use.

Table 12.18.1 shows the relative frequencies of the letters of the alphabet. From this table we can see that the letter ‘E’ appears about 60 times more often than the letter ‘Z’. In normal ASCII, the words “DEED” and “MUCK” require the same amount of space (four bytes). It would seem that words such as “DEED”, which are composed of relatively common letters, should be storable in less space than words such as “MUCK”, which are composed of relatively uncommon letters.

If some characters are used more frequently than others, is it possible to take advantage of this fact and somehow assign them shorter codes? The price could be that other characters require longer codes, but this might be worthwhile if such characters appear rarely enough. This concept is at the heart of file compression techniques in common use today. The next section presents one such approach to assigning variable-length codes, called Huffman coding. While it is not commonly used in its simplest form for file compression (there are better methods), Huffman coding gives the flavor of such coding schemes. One motivation for studying Huffman coding is that it provides our first opportunity to see a type of tree structure referred to as a search trie.

12.18.1.1. Building Huffman Coding Trees

Huffman coding assigns codes to characters such that the length of the code depends on the relative frequency or weight of the corresponding character. Thus, it is a variable-length code. If the estimated frequencies for letters match the actual frequency found in an encoded message, then the length of that message will typically be less than if a fixed-length code had been used. The Huffman code for each letter is derived from a full binary tree called the Huffman coding tree, or simply the Huffman tree. Each leaf of the Huffman tree corresponds to a letter, and we define the weight of the leaf node to be the weight (frequency) of its associated letter.

The goal is to build a tree with the minimum external path weight. Define the weighted path length of a leaf to be its weight times its depth; the external path weight of a tree is the sum of the weighted path lengths of its leaves. Thus, the tree with minimum external path weight is the one with the minimum sum of weighted path lengths for the given set of leaves. A letter with high weight should have low depth, so that it counts the least against the total path length. As a result, a letter with low weight might be pushed deeper in the tree.
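
For example, suppose there are just three letters, with weights 8, 3, and 1. A full binary tree on three leaves must place one leaf at depth 1 and the other two at depth 2. Placing the weight-8 leaf at depth 1 gives an external path weight of \(8 \cdot 1 + 3 \cdot 2 + 1 \cdot 2 = 16\), while placing the weight-1 leaf at depth 1 gives \(1 \cdot 1 + 8 \cdot 2 + 3 \cdot 2 = 23\). The first arrangement, which the construction process described below produces, is the minimum.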

The process of building the Huffman tree for \(n\) letters is quite simple. First, create a collection of \(n\) initial Huffman trees, each of which is a single leaf node containing one of the letters. Put the \(n\) partial trees onto a priority queue organized by weight (frequency). Next, remove the first two trees (the ones with lowest weight) from the priority queue. Join these two trees together to create a new tree whose root has the two trees as children, and whose weight is the sum of the weights of the two trees. Put this new tree back into the priority queue. This process is repeated until all of the partial Huffman trees have been combined into one.

The following slideshow illustrates the Huffman tree construction process for the eight letters of Table 12.18.2. [2]

Here is the implementation for Huffman tree nodes.

/** Huffman tree node implementation: Base class */
interface HuffBaseNode {
  boolean isLeaf(); 
  int weight();
}


/** Huffman tree node: Leaf class */
class HuffLeafNode implements HuffBaseNode {
  private char element;      // Element for this node
  private int weight;        // Weight for this node

  /** Constructor */
  HuffLeafNode(char el, int wt)
    { element = el; weight = wt; }

  /** @return The element value */
  char value() { return element; }

  /** @return The weight */
  public int weight() { return weight; }

  /** @return True: this is a leaf node */
  public boolean isLeaf() { return true; }
}


/** Huffman tree node: Internal class */
class HuffInternalNode implements HuffBaseNode {
  private int weight;            // Weight (sum of children's weights)
  private HuffBaseNode left;     // Left child
  private HuffBaseNode right;    // Right child

  /** Constructor */
  HuffInternalNode(HuffBaseNode l, HuffBaseNode r, int wt)
    { left = l; right = r; weight = wt; }

  /** @return The left child */
  HuffBaseNode left() { return left; }

  /** @return The right child */
  HuffBaseNode right() { return right; }

  /** @return The weight */
  public int weight() { return weight; }

  /** @return False: this is an internal node */
  public boolean isLeaf() { return false; }
}

This implementation is similar to a typical class hierarchy for implementing full binary trees. There is a base interface, named HuffBaseNode, and two implementing classes, named HuffLeafNode and HuffInternalNode. This implementation reflects the fact that leaf and internal nodes contain distinctly different information.

Here is the implementation for the Huffman Tree class.

/** A Huffman coding tree */
class HuffTree implements Comparable<HuffTree> {
  private HuffBaseNode root;  

  /** Constructors */
  HuffTree(char el, int wt)
    { root = new HuffLeafNode(el, wt); }
  HuffTree(HuffBaseNode l, HuffBaseNode r, int wt)
    { root = new HuffInternalNode(l, r, wt); }

  HuffBaseNode root() { return root; }
  int weight() // Weight of tree is weight of root
    { return root.weight(); }
  /** Compare trees by weight so a min-heap orders them correctly */
  public int compareTo(HuffTree that) {
    if (root.weight() < that.weight()) { return -1; }
    else if (root.weight() == that.weight()) { return 0; }
    else { return 1; }
  }
}

Here is the implementation for the tree-building process.

/** Build the Huffman tree from Hheap, the min-heap of partial Huffman trees */
static HuffTree buildTree() {
  HuffTree tmp1, tmp2, tmp3 = null;

  while (Hheap.heapsize() > 1) { // While two items left
    tmp1 = Hheap.removemin();
    tmp2 = Hheap.removemin();
    tmp3 = new HuffTree(tmp1.root(), tmp2.root(),
                             tmp1.weight() + tmp2.weight());
    Hheap.insert(tmp3);   // Return new tree to heap
  }
  return tmp3;            // Return the tree
}

Function buildTree operates on Hheap, the min-heap of partial Huffman trees, which initially holds one single-leaf tree per letter as shown in Step 1 of the slideshow above. The body of buildTree consists mainly of a while loop. On each iteration of the loop, the two partial trees with lowest weight are removed from the heap and placed in variables tmp1 and tmp2. A new tree (tmp3) is created such that its left and right subtrees are tmp1 and tmp2, respectively, and its weight is the sum of their weights. Finally, tmp3 is inserted back into Hheap. The loop ends when only one tree remains, and that tree is returned.
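
As a rough sketch of how buildTree might be driven, the following hypothetical helper (not part of the book's code) fills the heap with one single-leaf tree per letter and then calls buildTree. It assumes a generic min-heap class named MinHeap that provides the heapsize, removemin, and insert operations used above.

/** Hypothetical driver: build the heap of leaves, then build the tree */
static MinHeap<HuffTree> Hheap;   // Min-heap of partial Huffman trees

static HuffTree buildHuffmanTree(char[] letters, int[] freqs) {
  Hheap = new MinHeap<HuffTree>(letters.length);      // Assumed capacity constructor
  for (int i = 0; i < letters.length; i++) {
    Hheap.insert(new HuffTree(letters[i], freqs[i])); // One leaf tree per letter
  }
  return buildTree();   // Repeatedly join the two lightest trees
}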

Assigning and Using Huffman Codes

Once the Huffman tree has been constructed, it is an easy matter to assign codes to individual letters. Beginning at the root, we assign either a ‘0’ or a ‘1’ to each edge in the tree. ‘0’ is assigned to edges connecting a node with its left child, and ‘1’ to edges connecting a node with its right child. This process is illustrated by the following slideshow.

Now that we see how the edges associate with bits in the code, it is a simple matter to generate the codes for each letter (since each letter corresponds to a leaf node in the tree).

Now that we have a code for each letter, encoding a text message is done by replacing each letter of the message with its binary code. A lookup table can be used for this purpose.
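
As a sketch of how this can be done, the following hypothetical methods (the class and method names are illustrative, not part of the book's code) build such a table by traversing the Huffman tree, appending a '0' when descending to a left child and a '1' when descending to a right child, and then use the table to encode a message.

import java.util.HashMap;
import java.util.Map;

/** Hypothetical helpers for assigning codes and encoding a message */
class HuffCodes {
  /** @return A table mapping each letter to its Huffman code */
  static Map<Character,String> buildCodeTable(HuffTree tree) {
    Map<Character,String> codes = new HashMap<Character,String>();
    assignCodes(tree.root(), "", codes);
    return codes;
  }

  /** Walk the tree, accumulating the code for each leaf */
  private static void assignCodes(HuffBaseNode node, String prefix,
                                  Map<Character,String> codes) {
    if (node.isLeaf()) {   // Reached a letter: record the code built so far
      codes.put(((HuffLeafNode)node).value(), prefix);
    }
    else {                 // '0' for the left edge, '1' for the right edge
      HuffInternalNode in = (HuffInternalNode)node;
      assignCodes(in.left(), prefix + "0", codes);
      assignCodes(in.right(), prefix + "1", codes);
    }
  }

  /** @return The message with each letter replaced by its code */
  static String encode(String msg, Map<Character,String> codes) {
    StringBuilder bits = new StringBuilder();
    for (char c : msg.toCharArray()) {
      bits.append(codes.get(c));
    }
    return bits.toString();
  }
}

With the tree built from Table 12.18.2, encoding "DEED" this way produces the eight-bit string used in the decoding example below.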

12.18.1.2. Decoding

A set of codes is said to meet the prefix property if no code in the set is the prefix of another. The prefix property guarantees that there will be no ambiguity in how a bit string is decoded. In other words, once we reach the last bit of a code during the decoding process, we know which letter it is the code for. Huffman codes certainly have the prefix property because any prefix for a code would correspond to an internal node, while all codes correspond to leaf nodes.
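
For example, the code set {0, 10, 110, 111} has the prefix property: the bit string 1100 can only be read as 110 followed by 0. By contrast, the set {0, 01, 10} does not, and a string such as 010 could be read either as 0 followed by 10 or as 01 followed by 0.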

When we decode a character using the Huffman coding tree, we follow a path through the tree dictated by the bits in the code string. Each ‘0’ bit indicates a left branch while each ‘1’ bit indicates a right branch. The following slideshow shows an example for how to decode a message by traversing the tree appropriately.
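
As a sketch, a hypothetical decode method (not part of the book's code) can perform this traversal directly on the node classes shown earlier, restarting at the root each time a leaf is reached. It assumes the bit string was produced from a code table for this same tree, so that every walk ends at a leaf.

/** Decode a string of '0'/'1' characters back into letters */
static String decode(HuffTree tree, String bits) {
  StringBuilder out = new StringBuilder();
  HuffBaseNode node = tree.root();
  for (int i = 0; i < bits.length(); i++) {
    if (bits.charAt(i) == '0') {        // '0': take the left branch
      node = ((HuffInternalNode)node).left();
    }
    else {                              // '1': take the right branch
      node = ((HuffInternalNode)node).right();
    }
    if (node.isLeaf()) {                // Reached a letter: output it
      out.append(((HuffLeafNode)node).value());
      node = tree.root();               // Start over for the next letter
    }
  }
  return out.toString();
}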

12.18.1.3. How efficient is Huffman coding?

In theory, Huffman coding is an optimal coding method whenever the true frequencies are known, and the frequency of a letter is independent of the context of that letter in the message. In practice, the frequencies of letters in an English text document do change depending on context. For example, while E is the most commonly used letter of the alphabet in English documents, T is more common as the first letter of a word. This is why most commercial compression utilities do not use Huffman coding as their primary coding method, but instead use techniques that take advantage of the context for the letters.

Another factor that affects the compression efficiency of Huffman coding is the relative frequencies of the letters. Some frequency patterns will save no space as compared to fixed-length codes; others can result in great compression. In general, Huffman coding does better when there is large variation in the frequencies of letters.

Huffman coding for all ASCII symbols should do better than this example. The letters of Table 12.18.1 are atypical in that there are too many common letters compared to the number of rare letters. Huffman coding for all 26 letters would yield an expected cost of 4.29 bits per letter. The equivalent fixed-length code would require about five bits. This is somewhat unfair to fixed-length coding because there is actually room for 32 codes in five bits, but only 26 letters. More generally, Huffman coding of a typical text file will save around 40% over ASCII coding if we charge ASCII coding at eight bits per character. Huffman coding for a binary file (such as a compiled executable) would have a very different set of distribution frequencies and so would have a different space savings. Most commercial compression programs use two or three coding schemes to adjust to different types of files.
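
Here, the expected cost per letter is simply the frequency-weighted average of the code lengths: if letter \(i\) occurs with frequency \(f_i\) and is assigned a Huffman code of \(c_i\) bits, the expected cost is \(\left(\sum_{i=1}^{n} f_i c_i\right) / \left(\sum_{i=1}^{n} f_i\right)\) bits per letter.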

In the decoding example, “DEED” was coded in eight bits, a savings of 33% over the twelve bits required by a fixed-length coding. However, “MUCK” requires eighteen bits, more space than required by the corresponding fixed-length coding. The problem is that “MUCK” is composed of letters that are not expected to occur often. If the message does not match the expected frequencies of the letters, then the length of the encoding will not be as expected either.

You can use the following visualization to create a Huffman tree for your own set of letters and frequencies.
