Laziness and Dynamic Programming

Recently I was thinking about purely functional solutions to dynamic programming problems. The problem with the conventional, imperative technique for solving such problems is that it assumes the existence of O(1) access arrays. So translating the code directly using purely functional data structures is going to add an extra factor of log(n).

Originally this started out as a purely academic exercise, since one can always use mutable arrays in Haskell. However, it turns out that there is a purely functional approach that is just as fast, and really quite elegant.

The Chain Matrix Multiplication Problem

I chose this to start with since it’s standard and simple. (Throughout, I’m going to assume you know the imperative solution; if not, here it is.) My initial idea was to do something like the following. First consider the Fibonacci function:

fib 0 = 1
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

Everyone knows how slow this is. But it’s easy to speed up (Edit: fixed!):

fib n = fst $ foldl (\(a, b) _ -> (a + b, a)) (1, 0) [1..n]

I was wondering if something similar could be done with chain matrix multiplication. The idea is that although they are called random access arrays, the actual order of accesses is far from random! So I wanted to see if, given the recursive solution, I could determine the order of accesses and use that to transform the recursive solution into a fold-like form.

For chain matrix multiplication, the order of accesses is something like this: the value of a cell in the memo table depends on the minimum of the sums of the corresponding elements in the column below it and the row to its left. So you focus on a cell and its row and column: the value of the cell is obtained by zipping the row and column together with (+) and taking the minimum. Then you move to the next cell, and your focus must change to the appropriate row and column (efficiently!), using what is essentially a type of zipper.
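To make the zipping step concrete, here is a minimal sketch of just the cell computation (the names are made up, and I’m ignoring the dim!(i-1) * dim!k * dim!j cost term that the real recurrence adds at each split point):

-- One memo cell, computed from the already-known row and column by
-- zipping them with (+) and taking the minimum.
cell :: [Int] -> [Int] -> Int
cell row col = minimum (zipWith (+) row col)

The hard part is not this step, but keeping the right row and column in focus as you move from cell to cell.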

So I wrote a purely functional program in Haskell that juggled things appropriately so as to be O(n^3) (which is the time complexity of the imperative version). It can be done, and all you need are lists. However, the result is not very nice, because everything must fit together just so. It works, but it’s not really very satisfactory.

Now the juggling required for this problem is bad enough, but to make matters worse, there would have to be a different solution for each DP problem. Actually, things are much worse than that: there will be some problems where the order of accesses depends on the specific instance of the problem (e.g. job scheduling, as long as the jobs aren’t all of unit time). It is of course possible that even these can be transformed into a form where a strict, efficient, purely functional program can be written, but it would certainly be far too messy a solution to what was once a simple problem.

Laziness

Notice how I snuck in the word “strict” in the previous sentence. It turns out that in a lazy language, the solution is simple. Because of the problems mentioned above, I started looking for alternatives. If I wanted to solve the problem efficiently using just lists, I needed to regulate the order of accesses, which seemed to be impossible. So then I wondered if there might be an acceptable data structure for arrays, or rather an alternative data structure that is “good enough” for the problem at hand.

It turns out that no fancy alternative is needed. An immutable array will work just fine! The key here is that even in the imperative world, we are writing to the array just once. An immutable array can be “written to” once – when initializing it. In a strict language this doesn’t help much. But with laziness you can build up an immutable array recursively.

I’ll explain through an example. Let’s take Fibonacci. This is not usually considered a DP problem, but anyway:

import Data.Array

fib n = fib' n
  where
    -- The array's elements are lazy thunks; each one is forced (and then
    -- cached) the first time it is demanded.
    memo = array (0, n) [(i, fib' i) | i <- [0..n]]
    fib' 0 = 1
    fib' 1 = 1
    fib' i = memo!(i-1) + memo!(i-2)

Notice that we never update the array. It starts out with the correct values already! Within fib' I’m using memo in place of the recursive calls to fib'. Each element of memo gets initialized to the correct value the first time it’s accessed; if it’s accessed again, the later lookups of memo!i return immediately, since the value has already been calculated and stored. In effect, laziness is giving us a “one-time-write” array.
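As a quick sanity check (assuming the definition above is loaded into GHCi, with Data.Array in scope), it returns instantly even for arguments where the naive version would take forever:

ghci> fib 30
1346269
ghci> fib 50
20365011074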

Now it’s trivial to extend this example to any recursive function. Coming back to chain matrix multiplication:

matrix dim = matrix' (1, n)
  where
    n = snd $ bounds dim
    -- Only the upper triangle (j >= i) is ever defined or demanded.
    memo = array ((1, 1), (n, n)) [((i, j), matrix' (i, j)) | i <- [1..n], j <- [i..n]]

    matrix' (i, j)
      | i == j    = 0
      | otherwise = foldl1 min
          [memo!(i,k) + (dim!(i-1) * dim!k * dim!j) + memo!(k+1,j) | k <- [i..j-1]]

Here dim is an array containing the matrix dimensions, indexed from 0 to n (so there are n matrices in the chain), such that the dimensions of the i-th matrix are dim!(i-1) x dim!i.
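As a quick check, here is a made-up GHCi session using the textbook chain of a 10x30, a 30x5 and a 5x60 matrix; the minimum number of scalar multiplications should come out as 4500, corresponding to the (AB)C order (10*30*5 + 10*5*60):

ghci> matrix (listArray (0, 3) [10, 30, 5, 60])
4500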

Moral!

When solving a DP problem, the usual technique is to first write a recursive solution, and then convert it into an efficient solution by building up a table of intermediate values. The cool thing about the technique I described above is that the second step of this process becomes trivial. The solution is just a memoized version of the recursive solution, and hence easier to understand than the usual imperative solution (you can compare with the version on Wikipedia). The memoization is almost implicit. There is no need to check whether a value has been computed or not. We just use it, and laziness handles the rest.

The transformation from basic recursive to memoized is also a simple one. In fact it shouldn’t be hard to automate this transformation (perhaps in Template Haskell), but even manually this is easy and gives a nice, clean solution.
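To give a feel for what such a factored-out version might look like, here is a rough sketch (the helper’s name and its Int-only type are my own invention, not a library function): the recursive step takes the memoized function as an argument, and the lazy array ties the knot.

import Data.Array

-- Build a memo table over the given range; each entry is a thunk that
-- calls the step function, which in turn looks things up via (memo !).
memoize :: (Int, Int) -> ((Int -> Integer) -> Int -> Integer) -> (Int -> Integer)
memoize rng step = (memo !)
  where
    memo = array rng [(i, step (memo !) i) | i <- range rng]

-- Fibonacci written against the helper:
fibMemo :: Int -> Integer
fibMemo n = memoize (0, n) step n
  where
    step _    0 = 1
    step _    1 = 1
    step self i = self (i - 1) + self (i - 2)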

Of course, I’m sure people have thought of this before, but it seemed new to me because I think deep down I’m still stuck in a “strict” mindset. Hopefully it was just as non-obvious to people reading this! For me it was a reminder that some very cool things are possible in lazy languages once you start grokking them.


~ by hellfeuer on March 26, 2009.

3 Responses to “Laziness and Dynamic Programming”

  1. fib n = fst $ iterate (\(a, b) -> (a + b, a)) (1, 0) !! n

  2. I agree that this technique is really sweet. Dynamic programming looks like it may be a bad fit for functional languages, but with laziness it suddenly becomes easier than in an imperative language! :)

    It’s a well known idiom in the Haskell circles.

  3. I’m not sure of this, but I tend to think of it as a difference in the type of recursion. The common way is to use function recursion, while what you talk about is data recursion. Laziness allows you to use infinitely recursive data, and just pick out the piece you need.

    It is very sweet indeed!
