Monday, August 13, 2012

The Work Horse of PL Theory: Structural Induction

The main idea in the Crash Course post was that we can define infinite sets and relations by giving a list of rules (some of which are recursive), that is, by giving an inductive definition. Such inductive definitions are used everywhere. In the Crash Course we used them to define the syntax of the language, the operational semantics, and the type system.
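
To make the idea concrete, here is a sketch of syntax-as-an-inductive-definition in Lean 4 (the toy grammar and the constructor names are my own, not the Crash Course's):

```lean
-- The syntax of a tiny lambda-calculus language as an inductive
-- definition: every expression is built by finitely many
-- applications of these four rules.
inductive Exp : Type where
  | const : Nat → Exp             -- constants: 0, 1, 2, ...
  | var   : String → Exp          -- variables: x, y, ...
  | lam   : String → Exp → Exp    -- functions: λx. e
  | app   : Exp → Exp → Exp       -- applications: e₁ e₂
```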

Once you define a programming language, the next step is to implement an interpreter or compiler and write lots of programs. The point of this, from the design point of view, is to make sure that your definition actually matches your intuitions about how your language should behave. Once you're reasonably happy with the definition of the language, you might want to gain further confidence in the design by proving that the language has the properties that you think it should. For example, you might want reduction to be deterministic, so you would try to prove that for each program there is one unique reduction sequence. Another property that you might want is type safety, which means that values aren't misused but also includes things like memory safety (no segmentation faults, buffer overruns, etc.).
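
As a formula, determinism of reduction is typically stated for the single-step relation (written ⟶ here): a program can step to at most one thing, which forces every program to have a unique reduction sequence.

```latex
% Determinism of single-step reduction:
\[
e \longrightarrow e_1
\;\text{ and }\;
e \longrightarrow e_2
\quad\Longrightarrow\quad
e_1 = e_2
\]
```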

It may seem rather daunting to consider proving something about all possible programs. After all, there's an infinite number of them! Fortunately, if we use inductive definitions to specify our languages, then there's a proof technique called structural induction that we can use to prove things about them.

A Simple Example

Let's begin with an example of using structural induction on the relation R that we defined in the Crash Course. Recall that this relation was defined by the following two rules.

  • (0, 1) ∈ R.
  • For any natural numbers n and m, if (n, m) ∈ R, then (n + 1, m + 1) ∈ R.
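
As an aside, these two rules transcribe directly into an inductive predicate in a proof assistant. Here is a sketch in Lean 4 (the constructor names base and step are my own):

```lean
-- The relation R as an inductive predicate: rule 1 becomes the
-- constructor `base`, and rule 2 becomes `step`.
inductive R : Nat → Nat → Prop where
  | base : R 0 1
  | step : ∀ n m, R n m → R (n + 1) (m + 1)
```
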
Also recall that we had informally described this relation as mapping each natural number to its successor. The following is the formal version of that statement: for any natural numbers n and m, if (n, m) ∈ R, then m = n + 1. Now how would we go about proving this statement mathematically?

A proof by structural induction creates a recipe for proving the property of interest for any element of the set. The recipe includes a case for each rule in the inductive definition of the set. Because every element in the set is the root of a tree of rule applications (a derivation), the recipe can be applied to each node in the tree, starting at the leaves, to eventually build a proof for the element at the root.

But it's easy to get lost in generalities, so let's get back to our example and build a recipe for generating a proof that m = n + 1 for any pair (n, m) ∈ R.

  1. We need to give a proof that m = n + 1 for the case where (n, m) = (0, 1), so by simple arithmetic we have 1 = 0 + 1.
  2. The second rule in the definition of R is more interesting. We're going to give a recipe for creating a proof of the property for the element in the conclusion of this rule, (n + 1, m + 1), out of the proof of the property for the premise of this rule, (n, m). That is, we need to take a proof of m = n + 1 and create a proof of m + 1 = (n + 1) + 1. Of course, that's really easy: we just add one to both sides!

So we've created a recipe that includes cases for all of the rules that define the relation R. Furthermore, in the second case, we didn't assume anything about n or m, so we can apply the second case in lots of different situations. Now let's see how this recipe can be used to create proofs for elements of R. In the Crash Course, we built a derivation that showed (2, 3) ∈ R. The derivation started with (0, 1) ∈ R (by rule 1), then had (1, 2) ∈ R (by rule 2), and finally (2, 3) ∈ R (by rule 2). Now let's apply our proof recipe to each of these steps. We have 1 = 0 + 1 by case 1 above, then 2 = 1 + 1 by case 2, and finally 3 = 2 + 1 again by case 2. Thus, we've seen that our property holds for one element of the relation, namely for (2, 3). However, this process would have worked for any element of R, so the above two cases are enough to prove that m = n + 1 for every pair (n, m) ∈ R.
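
For readers who like to have such recipes checked mechanically, here is the same structural induction as a sketch in Lean 4, reusing the transcription of R from above; the two cases of the tactic proof correspond exactly to the two cases of our recipe:

```lean
-- m = n + 1 for every pair (n, m) in R, by structural induction
-- (Lean's `induction` tactic) on the derivation of R n m.
theorem R_succ (n m : Nat) (h : R n m) : m = n + 1 := by
  induction h with
  | base => rfl               -- case 1: 1 = 0 + 1 holds by arithmetic
  | step n m _ ih => rw [ih]  -- case 2: add one to both sides
```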

A More Interesting Example

Let's now move on to an example of a proof about programming languages. In the post My new favorite abstract machine, I wrote down a translation that puts the simply-typed lambda calculus (STLC) into A-normal form (ANF) and then defined an abstract machine that operates on the ANF. The proof of type safety for this definition of the STLC includes two parts: the first shows that the translation of a well-typed program always produces a well-typed program in ANF, and the second proves that the ANF is type safe, that is, that the abstract machine doesn't get stuck. Let's prove the first part by structural induction. Formally, what we'd like to prove is that ∅ ⊢ e : T implies ∅ ⊢ₐ ToANF(e) : T, where ⊢ refers to the type system for the STLC (see the definition here) and ⊢ₐ is the type system for ANF statements. We have not yet given a definition of the type system for ANF statements, definitions, or expressions, so we must pause here to do so. We make one small change to the syntax of ANF statements to make the proof go through more easily: a statement may now begin with a sequence of declarations rather than a single declaration.

The ToANF function is defined in terms of a recursive function ANF. It typically helps to prove something separately for each function, so let's try to come up with a lemma about ANF that will help prove that ToANF preserves types. Looking at the definition of ToANF, what we'd like to be true is, roughly, the following property: if an expression is well typed in the empty environment, then ANF produces a sequence of declarations and a result expression that are also well typed, with the result having the same type as the original expression. Let's see if we can prove this by structural induction. (We won't be able to, but the failure will point out what minor modifications need to be made.)

A First Attempt

Our first choice is what to do structural induction on. There are several things in the above statement defined using inductive definitions. The expression e and the typing derivation ∅ ⊢ e : T are both good candidates because they appear to the left of the implication. Let's do induction on the typing derivation because that tends to give us more ammunition than induction on the structure of expressions.

There are five rules that define the typing relation for the STLC, so we'll need five cases.
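
Three of the five are the standard structural rules, displayed below in one common notation (this is the textbook presentation, not a quote from the earlier post); the remaining two assign types to constants and to applications of primitive operators:

```latex
% Variable, lambda, and application rules of the STLC:
\[
\frac{\Gamma(x) = T}{\Gamma \vdash x : T}
\qquad
\frac{\Gamma,\, x : T_1 \vdash e : T_2}
     {\Gamma \vdash \lambda x.\, e \;:\; T_1 \to T_2}
\qquad
\frac{\Gamma \vdash e_1 : T_1 \to T_2 \quad\;\; \Gamma \vdash e_2 : T_1}
     {\Gamma \vdash e_1\; e_2 \;:\; T_2}
\]
```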

  1. Case (variable): the premise of this rule requires the variable to be bound in the type environment, but our environment is empty, so the premise is false and this case is vacuously true. However, this should make us start to worry; something fishy is going on.
  2. Case (lambda): the proof recipe for this case needs to take a proof for the premise and create a proof for the conclusion. However, our property doesn't apply to the premise because the type environment in the premise isn't empty (look again at the lambda rule displayed above). At this point we're stuck, and we must have the good sense to give up this proof attempt.

A Successful Structural Induction

But what have we learned? What we're trying to prove must not fix the type environment as being empty, but needs to work with any type environment. Let's reformulate the property with an arbitrary Γ replacing the ∅. Let's try the proof by induction on the typing derivation Γ ⊢ e : T again.

  1. Case (variable): we have e = x, and ANF leaves a variable unchanged, so we need to show that x itself is well typed in the ANF type system. From the premise of the variable rule we have Γ(x) = T, and therefore x also has type T under Γ in the ANF type system.
  2. Case (lambda): we have e = λx. e′, where Γ, x : T₁ ⊢ e′ : T₂ and T = T₁ → T₂. So we need to prove that the translation of the lambda is well typed, and for that it suffices to prove that the translated body is well typed under Γ, x : T₁. Our recipe for this case takes as input a proof for the premise, so we know that the property holds for e′ under Γ, x : T₁: the declarations and result expression that ANF produces for e′ are well typed, with the result having type T₂. (This statement is called an induction hypothesis. It's always a good idea to write out an induction hypothesis carefully because it's easy to get it wrong. To get an induction hypothesis right, just instantiate the forall in the property you are trying to prove with the appropriate variables from the premise. Do this mechanically; thinking about it too much can lead to errors! Also, always tell your reader when you're using the induction hypothesis; think of it as the climax of the case!)

    Because ANF is a function (in the mathematical sense), we know that the declarations and result expression in the existential must be the same as the ones appearing in our current proof. So our induction hypothesis tells us that the translated body of the lambda is well typed under Γ, x : T₁. We then just apply the typing rule for lambdas to prove our goal, that the translated lambda has type T₁ → T₂ under Γ.

  3. Case (application): we have e = e₁ e₂, where Γ ⊢ e₁ : T′ → T and Γ ⊢ e₂ : T′, and the translation introduces a fresh variable to hold the result of the application. We need to prove that the declarations produced for e₁ and for e₂, concatenated together with the final declaration of the fresh variable, are well typed. We have two induction hypotheses, one for each premise: the declarations and result expression for e₁ are well typed, and likewise for e₂. So we have half of what we need from the first induction hypothesis (IH for short) and the other half from the second IH. Looking back at what we need to prove, there's a bit of a jump that needs to happen to get us from the two declaration sequences being well typed separately to their concatenation being well typed. Let's posit the following concatenation lemma: two well-typed sequences of declarations can be concatenated into one well-typed sequence, under the assumption that the variables on the left-hand sides are all distinct.

    Also, we need to show that each sequence remains well typed in the combined environment. Again there's a bit of a jump that needs to happen. This time, the problem is that we know each sequence is well typed in a smaller environment (missing the variables declared for e₂ in the former case and missing the variables declared for e₁ in the latter). So we need a lemma that tells us it is OK to add more variables to the environment so long as those variables don't interfere with variables that are already there. This lemma comes up so frequently that it has a special name: it is known as the Weakening Lemma (stated in its usual form below). Applying this lemma twice brings both IHs into the combined environment, and then applying the concatenation lemma, we have that the whole declaration sequence is well typed. All that remains is to deal with the final declaration. But it's easy to see that applying e₁'s result to e₂'s result yields the type T, so we can conclude this case, having shown that the entire translation of the application is well typed.
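
For reference, here is the Weakening Lemma in its usual form for typing judgments; the version needed here, for sequences of declarations, is the analogous statement (this rendering is the textbook one, not a quote from the earlier post):

```latex
% Weakening: adding a variable that is not already in the environment
% preserves typing.
\[
\Gamma \vdash e : T
\;\text{ and }\;
x \notin \mathrm{dom}(\Gamma)
\quad\Longrightarrow\quad
\Gamma,\, x : S \vdash e : T
\]
```
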
So far we've handled three of the five cases, so the proof is not yet complete. I'd encourage the reader to try and fill in the two remaining cases (for constants and primitive operators) and the two lemmas that we used. (Hint: use structural induction to prove the lemmas.)

Conclusion

If you define a set using an inductive definition, then you can prove stuff about every element of the set (even if the set is infinite!) by structural induction. To do this, create a recipe that includes a case for each rule in the inductive definition. Your recipe for each case takes as input the proofs for the premises and should output a proof for the conclusion of the rule. That's all it takes! Just make sure not to skip any cases and be extra careful with the induction hypotheses!

1 comment:

  1. Great post! One of my favorite papers on this topic is "An introduction to (co)algebras and (co)induction," which gave a fun detour on the categorical background behind inductively defined structures (and functions defined over them!), along with the (co)universe you get when you stand on your head.
