Functional Pearls 2.5 – Derivatives of Types

This post is not only extremely late, it’s also a bit of a cop-out. Last time I said I would write about derivatives of types. This is a cop-out in the sense that it’s a bit more “overview”-ish than I would have liked, but I did not find a way around that without making the whole thing unacceptably long. So think of this as a way to get a feel for what’s going on; if you want more details you can always look at the paper.

Zippers again

We’ll start with a simple binary tree:

data BTree = Leaf | BTree (BTree, BTree)

Notice that this is a “bare” tree – only structure, no data. But it will do for our purposes. Now you’ll remember that in a Zipper, we had a subtree which was the focus of attention, and a Path which represented “everything else”. The Path itself was a series of steps, going one level down at a time. It is this notion of “step” that we are going to capture, i.e. how to represent the “context” of an immediate subtree of this tree. You can then repeat this to get the complete Path used in the zipper.

This should become clearer if we refer back to the BTree example. The “step” type here is:

Bool * BTree

The Bool tells us whether we are in the left or the right branch, and the BTree is the other branch i.e. the one we didn’t take. The Path essentially consists of a series of these steps, one for each level that we have descended.
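
To make this concrete, here is one way the step and path types could be written down in Haskell (a sketch with names of my own choosing, not from the paper):

data Step = Step Bool BTree      -- the Bool records whether we went left or right; the BTree is the sibling we did not enter
type Path = [Step]               -- one Step per level we have descended
data Zipper = Zipper BTree Path  -- the focused subtree together with its context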

Now let’s be a little loose and write the definition of BTree as follows:

1 + BTree * BTree

or

1 + BTree^2

Here we are ignoring the data constructors, and just thinking about the types themselves. And here’s the trick: Think of this as an algebraic expression, and differentiate it wrt BTree (which is now a variable in this expression) as you would have in high school.

d(1 + BTree^2)/dBTree = 2*BTree = (1 + 1) * BTree

But Bool = True | False or in our notation 1 + 1. This means that the “derivative” has given us exactly the step type we were looking for!

This was not just a lucky example. McBride found that the rules of differentiation can be applied (unchanged) to types (including rules for products, the chain rule etc.), and they always gave the “correct” step type. That good ol’ Leibniz should have anything to say about type theory is really a remarkable coincidence. Or is it?
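
As a quick sanity check (a worked example of my own, not one from the paper), take the product rule:

d(F*G)/dx = dF/dx * G + F * dG/dx

Read as types, this says that a one-hole context for an x inside a pair (F, G) is either a context for x in the first component together with the untouched second component, or the untouched first component together with a context for x in the second – which is exactly the right notion of “everything else” for a pair.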

Details, details

Now in the above section (and to an extent in this entire post) I’ve glossed over some things which are worth going into. For example, what sort of types can we differentiate? (“Regular” types.) Function types as well? (No.) What exactly does differentiation mean? The notation I am using to describe types also needs to be defined explicitly, we need to worry about variable capture, etc. I cannot go into all that here, but hopefully the notation makes sense intuitively, so I’ll only deal with some of the details.

I had asked you to accept that a BTree could be defined as 1 + BTree^2. This is in fact incorrect, because BTree = 1 + BTree^2 doesn’t make too much sense as an equation (and in fact tempts me to solve the quadratic and conclude that BTree = (1 + \sqrt{3}i)/2 or some such!).

McBride defines such recursive types using a least fixed point

BTree = \mu(x).(1 + x^2)

But then, why did I differentiate 1 + BTree^2 instead of \mu(x).(1 + x^2)?

Here’s why: taking a derivative is more general than calculating a step type (think about non-recursive types: “step type” is meaningless there, but you can still take a derivative). What dF/dx really does is punch a “hole” for an element x in a container F. This gives us the “context” or the surroundings of that element – i.e. the hole tells us where this element “fits” into the larger type. For a recursive type like BTree, suppose we punch a hole at a particular level (in this example, in 1 + x^2). Here, the container is one level in the tree, and the element is the immediate subtree. By punching a hole for this element we get what we’ve been calling the “step type”. So for step types we don’t differentiate the tree, but rather one level of the tree.
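
To see the “one level” view in code, here is a sketch (the names are mine): one level of a BTree is the pattern 1 + x^2 with the immediate subtrees abstracted out as x, and its derivative, 2*x, is the step type – it records which slot the hole sits in and carries the subtree from the other slot. A plug function puts an element back into its hole:

data BTreeF x = LeafF | NodeF x x            -- one level of the tree: 1 + x^2
data BTreeStep x = HoleLeft x | HoleRight x  -- its derivative, 2*x: where the hole is, plus the other subtree

-- Plugging a subtree back into the hole rebuilds one level of the tree.
plug :: BTreeStep x -> x -> BTreeF x
plug (HoleLeft right) x = NodeF x right
plug (HoleRight left) x = NodeF left x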

Just to make this clearer, consider the type [String]. We can represent this with a fixed point:

[String] = \mu(x).(1 + String*x)

Now, I haven’t given you the differentiation rule for \mu (the chain rule is involved here!) so accept for the moment that:

d[String]/dString "is isomorphic to" [String]*[String]

Where “is isomorphic to” means that syntactically differentiating [String] will give us an expression which is “essentially the same as” [String]*[String].

Notice that this result makes sense: The context of a String in a list of Strings should be a pair of strings: the list of its predecessors and the list of its successors.
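
In Haskell this context might look like the following (again a sketch, names mine; a real list zipper would usually keep the predecessors reversed for efficiency, but a plain pair of lists will do for illustration):

type StringCtx = ([String], [String])  -- (predecessors, successors)

-- Plugging a String back into its context recovers the original list.
plugString :: StringCtx -> String -> [String]
plugString (before, after) s = before ++ [s] ++ after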

On the other hand, [String] is a recursively defined type. So I could consider its “step type”. This would be:

d(1+String*x)/dx = String

Again, this makes sense. If a step is a String, then a sequence of these steps will give us back the list of Strings. By the way it makes no sense to differentiate something of the form \mu(x).F with respect to x. Why?
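
Concretely (a sketch of my own): the path is just the list of Strings we have walked past, outermost first, and prepending it to the focused tail gives back the original list:

-- steps are listed outermost first; focus is the sublist we have descended to
rebuild :: [String] -> [String] -> [String]
rebuild steps focus = steps ++ focus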

Doing this for real

I’ve been busy implementing this in Template Haskell (although for a different purpose). McBride’s types don’t map directly to Haskell types, so there are some problems, but they’re quite easy to circumvent. For one thing, you’ll notice that McBride dispenses with data constructors, but we have to keep track of them if we are to “plug” an element back into its context. This is not really a “problem”, though. There are other niggles, such as the fact that McBride does not deal with parametrized types, or with mutually recursive types (directly). On the other hand, something like x*(y+z) cannot be expressed directly in Haskell: it has to be broken up into something like x*a and a = y+z (with data constructors).
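
For instance (my own encoding sketch, not McBride’s): to express x*(y+z) in Haskell you end up introducing named auxiliary types and constructors, roughly

data Aux y z = FromY y | FromZ z    -- plays the role of a = y + z
data Prod x y z = Prod x (Aux y z)  -- plays the role of x * a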

But really the biggest problem in the implementation has been Template Haskell itself. Don’t get me wrong: TH is actually quite nice and very, very powerful. It’s just that Haskell ASTs are complicated, and you need a bit of time to get used to the TH system (by the way, if you’re looking to learn TH, http://www.haskell.org/bz/thdoc.htm and http://www.haskell.org/bz/th3.htm are great).

I guess Lisp’s simpler syntax is a huge gain as far as macro systems are concerned. Also I’m guessing you wouldn’t get stuck inside the quotation monad there since impure functions are OK?

Anyway it’ll be some time before I can put up the derivative code since it is a part of something else, but TH usage has given me some interesting ideas for other projects. It really is quite powerful!

Recommended reading

If you want things to be more precise, you’ll have to look at McBride’s paper. You might also want to look at “dissection” of types, which is in a sense the next logical step. But in any case, I highly recommend this post on sigfpe’s excellent blog. It really makes you think about how deep this connection between calculus and data types might be. Is it all just an artifact of our notation? Which of the results and concepts from differential calculus have meaning in the world of types? Could we assign some meaning to integration (other than just inverse differentiation)? Some pretty interesting questions here – any answers?


~ by hellfeuer on October 22, 2008.
