
27.2. Dynamic Programming

27.2.1. Dynamic Programming

Dynamic programming is an algorithm design technique that can improve the efficiency of any inherently recursive algorithm that repeatedly re-solves the same subproblems. Using dynamic programming requires two steps:

  1. Find a recursive solution to the problem in which subproblems are redundantly solved many times.

  2. Optimize the recursive algorithm to eliminate re-solving subproblems. The resulting algorithm may be recursive or iterative. The iterative form is commonly referred to by the term dynamic programming.

We will see first how to remove redundancy with a simple problem, computing Fibonacci numbers. Then we introduce the knapsack problem, and show how it can be solved efficiently with dynamic programming.

27.2.2. Computing Fibonacci Numbers

Consider the recursive function for computing the \(n\)’th Fibonacci number.

/** Recursively generate and return the n'th Fibonacci
    number */
static long fibr(int n) {
  // fibr(91) is the largest value that fits in a long
  if ((n <= 0) || (n > 91)) { return -1; }
  if ((n == 1) || (n == 2)) { return 1; }    // Base case
  return fibr(n-1) + fibr(n-2);      // Recursive call
}

The cost of this recursive algorithm (in terms of function calls) is the size of the \(n\)’th Fibonacci number itself, which is exponential in \(n\) (approximately \(1.62^n\)). Why is this so expensive? Primarily because two recursive calls are made by the function, and the work that they do is largely redundant. That is, each of the two calls is recomputing most of the series, as is each sub-call, and so on. Thus, the smaller values of the function are being recomputed a huge number of times. If we could eliminate this redundancy, the cost would be greatly reduced. The approach that we will use can also improve any algorithm that spends most of its time recomputing common subproblems.

The following slideshow explains the redundancy problem.


Looking at the final tree, we see that there are only seven unique subproblems to solve (for Fibonacci values 0 through 6). The graphical representation below is called a dependency graph, and shows the dependencies for the subproblems.

Note that the dependency graph was laid out in a one-dimensional table of size seven, corresponding to the unique subproblems invoked by the algorithm. This table can simply store the value of each subproblem. In this way, redundant calls can be avoided because the value of a previously computed subproblem can be read from its corresponding cell in the table without the need to recompute it.

The table can be used to derive two alternative, but efficient, algorithms. One way to accomplish this goal is to keep a table of values, and first check the table to see if the computation can be avoided. This technique is called memoization. Here is a straightforward example of doing so. Note that it mirrors the original version of the Fibonacci recursive algorithm.

static long[] Values = new long[92];  // Java initializes all slots to 0

static long fibrt(int n) {
  // Assume Values has at least n+1 slots, all initialized to 0
  if ((n <= 0) || (n > 91)) { return -1; }
  if (n <= 2) { return 1; }             // Base case
  if (Values[n] == 0) {                 // Not yet computed
    Values[n] = fibrt(n-1) + fibrt(n-2);
  }
  return Values[n];
}

This version of the algorithm will not compute a value more than once, so its cost is linear. The corresponding recursion tree is shown below. Note that the first occurrence of each recursive call invokes two recursive calls. However, subsequent occurrences of such a call do not produce additional calls, because they just read the contents of the corresponding cell.

[Figure: the recursion tree for the memoized Fibonacci algorithm, in which repeated calls are answered directly from the table.]

A second technique is called tabulation. Here, the dependency graph must be analyzed to infer an alternative computation order for the subproblems. The only restriction is that a subproblem can be computed only when the subproblems it depends on have already been computed. In addition, the value of each subproblem must be stored in the table. In the case of the Fibonacci series, we reverse the order of computation, calculating the series upward from its starting point, which we can implement with a simple loop. Unfortunately, since the result need not bear much similarity to the original recursive algorithm, there is no mechanical way to get from the original recursive form to the dynamic programming form.
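For Fibonacci numbers, a tabulation version might look like the following sketch (the name fibt and the local table vals are ours, chosen only for illustration). It fills the table in increasing order of the subproblems, so every value it needs has already been computed.

/** Tabulation: generate and return the n'th Fibonacci number
    by filling a table in increasing order of the subproblems.
    (Sketch: the name fibt and the table vals are illustrative.) */
static long fibt(int n) {
  // fibt(91) is the largest value that fits in a long
  if ((n <= 0) || (n > 91)) { return -1; }
  if ((n == 1) || (n == 2)) { return 1; }   // Base case
  long[] vals = new long[n+1];   // vals[i] holds the i'th Fibonacci number
  vals[1] = vals[2] = 1;
  for (int i=3; i<=n; i++) {
    vals[i] = vals[i-1] + vals[i-2];   // Both inputs are already in the table
  }
  return vals[n];
}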

An additional optimization can be made. Of course, we didn’t actually need to use a table storing all of the values, since future computations do not need access to all prior subproblems. Instead, we could build the value by working from 0 and 1 up to \(n\) rather than backwards from \(n\) down to 0 and 1. Going up from the bottom we only need to store the previous two values of the function, as is done by our iterative version.

/** Iteratively generate and return the n'th Fibonacci
    number */
static long fibi(int n) {
  // fibi(91) is the largest value that fits in a long
  if ((n <= 0) || (n > 91)) { return -1; }
  long curr, prev, past;
  if ((n == 1) || (n == 2)) { return 1; }
  curr = prev = 1;     // curr holds current Fib value
  for (int i=3; i<=n; i++) { // Compute next value
    past = prev;             // past holds fibi(i-2)
    prev = curr;             // prev holds fibi(i-1)
    curr = past + prev;      // curr now holds fibi(i)
  }
  return curr;
}

Recomputing of subproblems comes up in many algorithms. It is not so common that we can store only a few prior results as we did for fibi. Thus, there are many times where storing a complete table of subresults will be useful.

The approach shown above to designing an algorithm that works by storing a table of results for subproblems is called dynamic programming when it is applied to optimization algorithms. The name is somewhat arcane, because it doesn’t bear much obvious similarity to the process that is taking place when storing subproblems in a table. However, it comes originally from the field of dynamic control systems, which got its start before what we think of as computer programming. The act of storing precomputed values in a table for later reuse is referred to as “programming” in that field. Dynamic programming algorithms are usually implemented with the tabulation technique described above. Thus, fibi better represents the most common form of dynamic programming than does fibrt, even though it doesn’t use the complete table.

27.2.3. The Knapsack Problem

We will next consider a problem that appears with many variations in a variety of commercial settings. Many businesses need to package items with the greatest efficiency. One way to describe this basic idea is in terms of packing items into a knapsack, and so we will refer to this as the Knapsack Problem. We will first define a particular formulation of the knapsack problem, and then we will discuss an algorithm to solve it based on dynamic programming. There are many other versions of the problem. Some versions ask for the greatest amount that will fit, while others assign values to the items in addition to sizes. We will look at a fairly easy-to-understand variation.

Assume that we have a knapsack with a certain amount of space that we will define using integer value \(K\). We also have \(n\) items, each with a certain size, such that item \(i\) has integer size \(k_i\). The problem is to find a subset of the \(n\) items whose sizes exactly sum to \(K\), if one exists. For example, if our knapsack has capacity \(K = 5\) and the two items are of size \(k_1 = 2\) and \(k_2 = 4\), then no such subset exists. But if we add a third item of size \(k_3 = 1\), then we can fill the knapsack exactly with the second and third items. We can define the problem more formally as: Find \(S \subseteq \{1, 2, ..., n\}\) such that

\[\sum_{i \in S} k_i = K.\]

Example 27.2.1

Assume that we are given a knapsack of size \(K = 163\) and 10 items of sizes 4, 9, 15, 19, 27, 44, 54, 68, 73, 101. Can we find a subset of the items that exactly fills the knapsack? You should take a few minutes and try to do this before reading on and looking at the answer.

One solution to the problem is: 19, 27, 44, 73.

Example 27.2.2

Having solved the previous example for knapsack of size 163, how hard is it now to solve for a knapsack of size 164? Try it.

Unfortunately, knowing the answer for 163 is of almost no use at all when solving for 164. One solution is: 9, 54, 101.

If you tried solving these examples, you probably found yourself doing a lot of trial-and-error and a lot of backtracking. To come up with an algorithm, we want an organized way to go through the possible subsets. Is there a way to make the problem smaller, so that we can apply recursion? We essentially have two parts to the input: The knapsack size \(K\) and the \(n\) items. It probably will not do us much good to try and break the knapsack into pieces and solve the sub-pieces (since we already saw that knowing the answer for a knapsack of size 163 did nothing to help us solve the problem for a knapsack of size 164).

So, what can we say about solving the problem with or without the \(n\)’th item? This seems to lead to a way to break down the problem. If the \(n\)’th item is not needed for a solution (that is, if we can solve the problem with the first \(n-1\) items) then we can also solve the problem when the \(n\)’th item is available (we just ignore it). On the other hand, if we do include the \(n\)’th item as a member of the solution subset, then we now would need to solve the problem with the first \(n-1\) items and a knapsack of size \(K - k_n\) (since the \(n\)’th item is taking up \(k_n\) space in the knapsack).

To organize this process, we can define the problem in terms of two parameters: the knapsack size \(K\) and the number of items \(n\). Denote a given instance of the problem as \(P(n, K)\). Now we can say that \(P(n, K)\) has a solution if and only if there exists a solution for either \(P(n-1, K)\) or \(P(n-1, K-k_n)\). That is, we can solve \(P(n, K)\) only if we can solve one of the subproblems where we use or do not use the \(n\)’th item. Of course, the ordering of the items is arbitrary. We just need to give them some order to keep things straight.

Continuing this idea, to solve any subproblem of size \(n-1\), we need only solve two subproblems of size \(n-2\), and so on, until we are down to only one item that either fills the knapsack or not. Assuming that \(P(i, s)\) represents the problem restricted to the items up through item \(i\), with \(s\) space still free in the knapsack, the following algorithm expresses these ideas.

if \(P(n-1, K)\) has a solution,
then \(P(n, K)\) has a solution
else if \(P(n-1, K-k_n)\) has a solution
then \(P(n, K)\) has a solution
else \(P(n, K)\) has no solution.
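A direct translation of this idea into Java might look like the following sketch. The method name knap and the array sizes are ours, introduced only for illustration; knap(sizes, i, k) reports whether some subset of the first i items sums exactly to k, so the full problem is knap(sizes, n, K).

/** Return true if some subset of the first i items (sizes[0..i-1])
    sums exactly to k. A direct, exponential-time translation of
    the recursive idea; names here are illustrative. */
static boolean knap(int[] sizes, int i, int k) {
  if (k == 0) { return true; }               // Exactly filled: done
  if ((i == 0) || (k < 0)) { return false; } // No items left, or overfilled
  if (knap(sizes, i-1, k)) { return true; }  // Solve without item i
  return knap(sizes, i-1, k - sizes[i-1]);   // Solve with item i included
}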

Although this algorithm is correct, it naturally leads to a cost expressed by the recurrence relation \(\mathbf{T}(n) = 2\mathbf{T}(n-1) + c\), which is \(\Theta(2^n)\). That can be pretty expensive!

But… we should quickly realize that there are only \(n(K+1)\) subproblems to solve! Clearly, there is the possibility that many subproblems are being solved repeatedly. This is a natural opportunity to apply dynamic programming. If we draw the recursion tree of this naive recursive algorithm and derive its corresponding dependency graph, we notice that all the recursive calls can be laid out in an array of size \(n \times (K+1)\) that contains the solutions for all subproblems \(P(i, k)\), \(0 \leq i \leq n-1\), \(0 \leq k \leq K\).

Example 27.2.3

Consider the instance of the Knapsack Problem for \(K=10\) and five items with sizes 9, 2, 7, 4, 1. The recursion tree generated by the recursive algorithm follows, where each node contains the index of the object under consideration (from 0 to 4) and the remaining size available in the knapsack.


The dependency graph for this problem instance, laid out in a table of size \(n \times (K+1)\), follows:

As mentioned above, there are two approaches to actually solving the problem. One is memoization: start with the full problem and make recursive calls to solve the subproblems, each time checking the array to see whether a subproblem has already been solved, and filling in the corresponding cell whenever we get a new subproblem solution. The other is tabulation. Conceivably we could adopt one of several computation orders, although the most “natural” is to start by filling in row 0 (which indicates a successful solution only for knapsacks of size 0 or \(k_0\)), and then fill in the succeeding rows from \(i=1\) to \(n-1\).

Cell \(P(i, k)\) has a solution exactly when \(P(i-1, k)\) or \(P(i-1, k-k_i)\) does. In other words, a new slot in the array gets its solution by looking at no more than two slots in the preceding row. Since filling each slot in the array takes constant time, the total cost of the algorithm is \(\Theta(nK)\).
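A tabulation version along these lines might be sketched as follows (the method and array names are ours). Cell table[i][k] records whether some subset of items 0 through i sums exactly to k, so each row depends only on the row above it.

/** Build the n x (K+1) table bottom-up. table[i][k] is true when
    some subset of items 0..i sums exactly to k. (Illustrative sketch.) */
static boolean[][] knapTable(int[] sizes, int K) {
  int n = sizes.length;
  boolean[][] table = new boolean[n][K+1];
  for (int k = 0; k <= K; k++) {        // Row 0: only sizes 0 and k_0 work
    table[0][k] = (k == 0) || (k == sizes[0]);
  }
  for (int i = 1; i < n; i++) {
    for (int k = 0; k <= K; k++) {
      table[i][k] = table[i-1][k]       // Omit item i, or ...
        || ((k >= sizes[i]) && table[i-1][k - sizes[i]]); // ... include it
    }
  }
  return table;      // table[n-1][K] answers the original problem
}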

Example 27.2.4

Consider again the instance of the Knapsack Problem for \(K=10\) and five items with sizes 9, 2, 7, 4, 1. A tabulation algorithm will fill a table of size \(n \times (K+1)\), starting from the row for object \(i=0\) and working down to the row for object \(i=4\), filling all the cells in each row.

\[\begin{split}\begin{array}{l|ccccccccccc} &0&1&2&3&4&5&6&7&8&9&10\\ \hline k_0\!=\!9&O&-&-&-&-&-&-&-&-&I&-\\ k_1\!=\!2&O&-&I&-&-&-&-&-&-&O&-\\ k_2\!=\!7&O&-&O&-&-&-&-&I&-&I/O&-\\ k_3\!=\!4&O&-&O&-&I&-&I&O&-&O&-\\ k_4\!=\!1&O&I&O&I&O&I&O&I/O&I&O&I \end{array}\end{split}\]
Key:
-: No solution for \(P(i, k)\).
O: Solution(s) for \(P(i, k)\) with \(i\) omitted.
I: Solution(s) for \(P(i, k)\) with \(i\) included.
I/O: Solutions for \(P(i, k)\) with \(i\) included AND omitted.

For example, \(P(2, 9)\) stores value I/O. It contains O because \(P(1, 9)\) has a solution (so, this item is not needed along that path). It contains I because \(P(1,2) = P(1, 9-7)\) has a solution (so, this item is needed along that path). Since \(P(4, 10)\) is marked with I, it has a solution. We can determine what that solution actually is by recognizing that it includes \(k_4\) (of size 1), which then leads us to look at the solution for \(P(3, 9)\). This in turn has a solution that omits \(k_3\) (of size 4), leading us to \(P(2, 9)\). At this point, we can either use item \(k_2\) or not. We can find a solution by taking one valid path through the table. We can find all solutions by following all branches when there is a choice.

Note that the table is first filled with the values of the different subproblems, and only afterwards do we infer the sequence of decisions that yields an actual solution from the values stored in the table. This trace-back phase precludes reducing the size of the table; otherwise, the table for the knapsack problem could have been reduced to a one-dimensional array.
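The trace-back can be sketched in code as well, assuming a boolean table like the one built above. The names and the tie-breaking choice (always preferring to omit an item when both "include" and "omit" are possible) are ours, not fixed by the discussion above.

/** Recover one solution subset from a filled table by tracing back
    from P(n-1, K), as described above. included[i] is set when item i
    belongs to the recovered subset. (Illustrative sketch.) */
static boolean[] oneSolution(boolean[][] table, int[] sizes, int K) {
  int n = sizes.length;
  if (!table[n-1][K]) { return null; }   // No solution exists at all
  boolean[] included = new boolean[n];
  int k = K;                             // Space still to be filled
  for (int i = n-1; i > 0; i--) {
    if (!table[i-1][k]) {    // No "omit" path: item i must be included
      included[i] = true;
      k -= sizes[i];
    }                        // Otherwise omit item i along this path
  }
  if (k != 0) { included[0] = true; }    // Remaining space is exactly sizes[0]
  return included;
}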

27.2.4. Chained Matrix Multiplication

Many engineering problems require multiplying a lot of matrices. Sometimes really large matrices. It turns out that the order in which we do the computation can make a big difference.

First, let’s recall the basics. If we have two matrices (one with \(r\) rows and \(s\) columns, and the other with \(s\) rows and \(t\) columns), then the result will be a matrix with \(r\) rows and \(t\) columns. What we really care about is that the cost of the matrix multiplication is dominated by the number of terms that have to be multiplied together. Here, it would be a total cost of \(r \times s \times t\) multiplications (plus some additions that we will ignore, since the time is dominated by the multiplications).

The other thing to realize is this: Of course it matters whether we multiply \(A \times B\) or \(B \times A\). But let’s assume that we have already determined the order that they go in (that it should be \(A \times B\)). We still have choices to make if there are many matrices to multiply together. The thing that we need to consider is this: If we want to multiply three matrices, and we know the order, we still have a choice of how to group them. In other words, we can multiply three matrices as either \(A(BC)\) or \((AB)C\), and the answer will be the same in the end. However, as we see below, it can matter a lot which way we do this in terms of the cost of getting that answer.
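For a concrete illustration (these dimensions are our own, made up for the example), suppose \(A\) is \(2 \times 10\), \(B\) is \(10 \times 3\), and \(C\) is \(3 \times 20\). Computing \((AB)C\) costs \(2 \cdot 10 \cdot 3 + 2 \cdot 3 \cdot 20 = 180\) multiplications, while computing \(A(BC)\) costs \(10 \cdot 3 \cdot 20 + 2 \cdot 10 \cdot 20 = 1000\), even though both orderings produce the same \(2 \times 20\) result.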


To solve this problem efficiently (of how to group the order of the multiplications), we should notice that there are a lot of duplicate nodes in the recursion tree for the naive recursive approach, but only a relatively limited number of distinct subproblems to solve. For instance, we repeatedly need to decide the best order to multiply ABC. And to solve that, we repeatedly compute AB's cost and BC's cost. One way to speed this up is simply to remember the answers whenever we compute them. This is called memoization. Whenever we ask the question again, we simply use the stored result. This requires a good way to organize the subproblems, so that we can easily check whether a given one has already been solved.

So, how do we organize the subproblems when there are \(n\) matrices to multiply, labeled 1 to \(n\)? One way is to use a table of size \(n \times n\). In this table, the entry at \([i, j]\) is the cost for the best solution of multiplying matrices \(i\) through \(j\). So, the upper right corner (entry \([1, n]\)) is the full solution. Entries on the main diagonal are simply a single matrix (no multiplication). Only the upper right triangle has entries (since there is no meaning to the cost for multiplying matrix 5 through matrix 3, only for multiplying matrix 3 through matrix 5).

Now, when we need to compute a series of matrices from \(i\) to \(j\), we just look in position \([i, j]\) in the table. If there is an answer there, we use it. Otherwise, we do the computation, and note it in the table.
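A memoized sketch of this idea follows. The names are ours, and we adopt the common convention (an assumption, not something fixed by the text above) that matrix \(i\), for \(1 \le i \le n\), has dims[i-1] rows and dims[i] columns, so the dims array has length \(n+1\). A value of 0 in the table marks an entry that has not yet been computed.

/** Memoized cost of the cheapest way to multiply matrices i..j.
    Matrix m has dims[m-1] rows and dims[m] columns. best[i][j] == 0
    means "not yet computed"; cost 0 only occurs on the diagonal,
    which is handled as the base case. (Illustrative sketch.) */
static long bestCost(int[] dims, long[][] best, int i, int j) {
  if (i == j) { return 0; }                    // A single matrix: no work
  if (best[i][j] != 0) { return best[i][j]; }  // Answer already in the table
  long cheapest = Long.MAX_VALUE;
  for (int k = i; k < j; k++) {                // Try each split (i..k)(k+1..j)
    long cost = bestCost(dims, best, i, k)
              + bestCost(dims, best, k+1, j)
              + (long) dims[i-1] * dims[k] * dims[j];  // Final multiplication
    if (cost < cheapest) { cheapest = cost; }
  }
  best[i][j] = cheapest;                       // Note the answer in the table
  return cheapest;
}

The full problem is then solved by calling bestCost(dims, new long[n+1][n+1], 1, n).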
