## Implementing a Correct Type-Checker for the Simply Typed Lambda Calculus

November 25th, 2009 by Darin Morrison

Last Friday I discussed how to implement a “correct” type-checker for the simply typed lambda calculus in a dependently typed metalanguage – in this case Agda.

What does it mean to implement a correct type-checker? Stepping back for a moment, most people have probably at one point or another encountered a buggy type-checker. Such a type-checker will occasionally give the wrong result, either (1) by saying that an ill-typed term does indeed type-check, or (2) by saying that a well-typed term doesn’t type-check.

For example, if we have a typing judgement `Γ ⊢ M : τ` and a type-checking function `check : Ctx → Term → Type → Bool`, a buggy type-checker could give either of the following:

(1) `check Γ M τ = true` even though `¬ (Γ ⊢ M : τ)`

(2) `check Γ M τ = false` even though `Γ ⊢ M : τ`

A *correct* type-checker should be constructed in such a way that both of these cases are always ruled out.

In order to construct such a type-checker, we will leverage the richness of our type system to prevent us from writing buggy code. Looking back at the type of our check function, we see that it returns a `Bool`. The problem is that nothing in this type ensures that the value we return is actually related to our input in any way. Our check function could be a constant function returning `true` for all we know.

We can fix this by refining the result type `Bool` to a sum type of proofs:

    check : (Γ : Ctx) (M : Term) (τ : Type) → (Γ ⊢ M : τ) ⊎ ¬ (Γ ⊢ M : τ)

What’s going on here? We’re saying that if type-checking is successful, we should have a proof in the form of a typing derivation. If type-checking fails, we want a proof that such a derivation can’t exist (or we would have a contradiction).
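In Agda, this refined result type can be sketched as follows. This is only a sketch: `Ctx`, `Term` and the judgement `_⊢_∶_` are postulated placeholders, and the names are illustrative rather than taken from the actual development:

```agda
module CheckSketch where

open import Data.Sum using (_⊎_)
open import Relation.Nullary using (¬_)

postulate
  Ctx Term Type : Set
  -- The typing judgement, represented as a type of derivations
  -- (postulated here; in the real development it is an inductive family).
  _⊢_∶_ : Ctx → Term → Type → Set

-- A successful result carries a typing derivation; a failure carries a
-- refutation, so neither kind of bug described above can be written.
postulate
  check : (Γ : Ctx) (M : Term) (τ : Type) → (Γ ⊢ M ∶ τ) ⊎ (¬ (Γ ⊢ M ∶ τ))
```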

Stepping back again, if we look at this new type for the type-checking function through our classical glasses, we obtain:

    ∀ Γ M τ. (Γ ⊢ M : τ) ∨ ¬ (Γ ⊢ M : τ)

which is completely trivial by the LEM. However, we are working in an intuitionistic metalanguage, and by the disjunction property of intuitionistic logic, an implementation of our well-specified checking function is both a proof that type-checking is decidable and a decision procedure for actually doing so!
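Agda’s standard library packages this “evidence either way” pattern as the type `Dec`, with constructors `yes` and `no`. As a small, self-contained example of a decision procedure in this style (and one we will need later, for the equality rule), here is decidable equality for simple types; the names are my own:

```agda
module TypeEquality where

open import Relation.Binary.PropositionalEquality using (_≡_; refl)
open import Relation.Nullary using (Dec; yes; no)

data Ty : Set where
  base : Ty
  _⇒_  : Ty → Ty → Ty

-- A decision procedure: 'yes' carries a proof of equality,
-- 'no' carries a refutation.
_≟_ : (τ σ : Ty) → Dec (τ ≡ σ)
base ≟ base = yes refl
base ≟ (_ ⇒ _) = no λ ()
(_ ⇒ _) ≟ base = no λ ()
(τ₁ ⇒ τ₂) ≟ (σ₁ ⇒ σ₂) with τ₁ ≟ σ₁ | τ₂ ≟ σ₂
... | yes refl | yes refl = yes refl
... | no τ₁≢σ₁ | _        = no λ { refl → τ₁≢σ₁ refl }
... | _        | no τ₂≢σ₂ = no λ { refl → τ₂≢σ₂ refl }
```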

This is all quite nice so far, except for the fact that the standard typing rules for the STLC are non-algorithmic. By this, I mean one cannot transcribe the typing rules completely straightforwardly as a functional program. This is most apparent by considering the application case:

    Γ ⊢ M : τ → σ    Γ ⊢ N : τ
    ──────────────────────────
    Γ ⊢ M N : σ

This rule must be read bottom-up to be consistent with an algorithmic interpretation realised as a structurally recursive checking function. But where does `τ` come from? We have to more or less pull this type out of thin air to continue the recursive type-checking. Also, in a real implementation, even if we are able to somehow infer some type, say `τ′`, how do we know that `τ = τ′`?

There are ways to get around this, such as by modifying the check function to also *return* a type, and implementing equality for types, but the problem with this approach is that we now have a mismatch between how we understand the typing judgments and how we understand type-checking. Furthermore, the specification of the check function becomes more complicated.

A better solution is to realise that we need to change the typing rules themselves to reflect when a term should be analysed against a given type and when a type should be synthesised from a term (as in the case for the recursive call on M).

We do this by splitting the original judgement into separate analytic and synthetic judgements.

    Γ ⊢ M ⇐ τ    (M checks against τ)

    Γ ⊢ R ⇒ τ    (R synthesises τ)

Now we can rewrite the application rule so that, read bottom-up, it provides an algorithm from which a case arm in a functional program is immediately derivable:

    Γ ⊢ M ⇒ τ → σ    Γ ⊢ N ⇐ τ
    ──────────────────────────
    Γ ⊢ M N ⇒ σ

(As a side note, there is an interesting philosophical argument described by Zeilberger et al. as to why the synthetic judgement should be read top-down and the analytic judgement bottom-up, but I will skip the details here…)

The full set of rules so far becomes as follows:

    Γ ⊢ M ⇐ τ    (M checks against τ)

    Γ, x : τ ⊢ M ⇐ σ
    ─────────────────
    Γ ⊢ λx. M ⇐ τ → σ

    Γ ⊢ M ⇒ τ    (M synthesises τ)

    (x : τ) ∈ Γ
    ───────────
    Γ ⊢ x ⇒ τ

    Γ ⊢ M ⇒ τ → σ    Γ ⊢ N ⇐ τ
    ──────────────────────────
    Γ ⊢ M N ⇒ σ

    τ = σ    (τ is equal to σ)

    Γ ⊢ M ⇒ τ    τ = σ
    ──────────────────
    Γ ⊢ M ⇐ σ

These rules should seem right with the exception of the last one. We need this rule so that we can, for example, check a variable in the body of a lambda. But there’s something wrong with it.

The problem is that, unlike all of the other rules, it is not associated with a particular constructor or destructor, which makes it non-syntactic. If we take it as it is, type-checking will no longer be deterministic and syntax-directed, which would be bad after all we’ve considered so far.

We need to introduce a new constructor to make this rule syntax-directed; let’s call it “syn”, because when we encounter it, we recursively synthesise a type for its argument:

    Γ ⊢ R ⇒ τ    τ = σ
    ──────────────────
    Γ ⊢ syn R ⇐ σ

So we’ve added a constructor to the term language, which makes type-checking syntax-directed again. But careful consideration reveals that we’ve created a new problem. Some terms which *should* be checkable are no longer checkable if they occur under “syn”. Consider `syn (λx. M)`. If we know the type of the codomain for the lambda term, we should be able to check it, but in this case our type-checking algorithm is going to try to infer the type of the lambda term; looking up at our rules, we see there is no synthesis rule for lambda, so checking is going to fail.

Luckily, we can fix this too. We need to break terms M up into two syntactic categories, one for checkable terms and one for synthesisable terms:

    M ::= λx. M | syn R      (checkable terms)
    R ::= x | R M            (synthesisable terms)

Seems good so far, right? Well, not quite. We’ve successfully broken the original term language up into the appropriate syntactic categories, but in the process we’ve restricted the language in such a way that we cannot write any terms containing redexes. Everything we can write is already in normal form. Oops!

The culprit here is the fact that the only thing we can put in the left-hand side of an application is an R term, but lambdas are M terms. We need a way to embed M terms back into R terms. Since the only real checkable term is a lambda term, and we cannot infer types for these, how can we possibly embed them into R terms? By adding a type annotation! Let’s add a constructor of the form `(M : τ)`. If we were to put this on a lambda term, the type annotation for the variable would become superfluous, so we might as well drop that entirely and only use the annotation constructor. Let’s consider the final system with all of our changes in place:

    M ::= λx. M | syn R
    R ::= x | R M | (M : τ)

    Γ ⊢ M ⇐ τ    (M checks against τ)

    Γ, x : τ ⊢ M ⇐ σ          Γ ⊢ R ⇒ τ    τ = σ
    ─────────────────         ──────────────────
    Γ ⊢ λx. M ⇐ τ → σ         Γ ⊢ syn R ⇐ σ

    Γ ⊢ R ⇒ τ    (R synthesises τ)

    (x : τ) ∈ Γ      Γ ⊢ R ⇒ τ → σ    Γ ⊢ M ⇐ τ      Γ ⊢ M ⇐ τ
    ───────────      ───────────────────────────     ───────────────
    Γ ⊢ x ⇒ τ        Γ ⊢ R M ⇒ σ                     Γ ⊢ (M : τ) ⇒ τ

    τ = σ    (τ is equal to σ)
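A sketch of how this final system can be transcribed into Agda, using de Bruijn indices in place of named variables; the datatype and constructor names here are my own choices, not necessarily those of the actual development:

```agda
module BidirectionalSTLC where

open import Data.List using (List; _∷_)
open import Data.Nat using (ℕ; zero; suc)

data Type : Set where
  base : Type
  _⟶_  : Type → Type → Type

Ctx : Set
Ctx = List Type

-- Well-scoped lookup: variable n has type τ in Γ.
data _∋_at_ : Ctx → Type → ℕ → Set where
  here  : ∀ {Γ τ}     → (τ ∷ Γ) ∋ τ at zero
  there : ∀ {Γ τ σ n} → Γ ∋ τ at n → (σ ∷ Γ) ∋ τ at suc n

mutual
  -- Checkable terms M and synthesisable terms R.
  data Chk : Set where
    lam : Chk → Chk          -- λ. M  (the bound variable is index zero)
    syn : Syn → Chk          -- embed a synthesisable term

  data Syn : Set where
    var : ℕ → Syn            -- de Bruijn index
    app : Syn → Chk → Syn    -- R M
    ann : Chk → Type → Syn   -- (M : τ)

mutual
  -- Analytic judgement: Γ ⊢ M ⇐ τ.
  data _⊢_⇐_ : Ctx → Chk → Type → Set where
    t-lam : ∀ {Γ M τ σ} → (τ ∷ Γ) ⊢ M ⇐ σ → Γ ⊢ lam M ⇐ (τ ⟶ σ)
    -- The equality premise is inlined by reusing the same τ.
    t-syn : ∀ {Γ R τ}   → Γ ⊢ R ⇒ τ → Γ ⊢ syn R ⇐ τ

  -- Synthetic judgement: Γ ⊢ R ⇒ τ.
  data _⊢_⇒_ : Ctx → Syn → Type → Set where
    t-var : ∀ {Γ n τ}     → Γ ∋ τ at n → Γ ⊢ var n ⇒ τ
    t-app : ∀ {Γ R M τ σ} → Γ ⊢ R ⇒ (τ ⟶ σ) → Γ ⊢ M ⇐ τ → Γ ⊢ app R M ⇒ σ
    t-ann : ∀ {Γ M τ}     → Γ ⊢ M ⇐ τ → Γ ⊢ ann M τ ⇒ τ
```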

What impact is all of this going to have on our checking function? First, we’re going to have to create a new function corresponding to the synthetic judgement, and refine the types of the arguments slightly to reflect the fact that we are now dealing with two separate syntactic categories:

    check : ∀ Γ M τ → (Γ ⊢ M ⇐ τ) ⊎ ¬ (Γ ⊢ M ⇐ τ)
    synth : ∀ Γ R → (∃ τ. Γ ⊢ R ⇒ τ) ⊎ ¬ (∃ τ. Γ ⊢ R ⇒ τ)

With all of these changes in place, it becomes pretty straightforward to write ‘check’ and ‘synth’ as two mutually recursive functions. The types will prevent us from writing buggy code and our type checker should be correct by construction.

If we plan to use this type-checker in the “real world”, we might want to change the negative evidence type to two separate inductive types:

    Γ ⊢ M ⇍ τ    (M does not check against τ)

and

    Γ ⊬ R ⇒    (R does not synthesise any type)

Then we would want to show that these new inductive types are sound with respect to the negation of the original judgements:

    Γ ⊢ M ⇍ τ  implies  ¬ (Γ ⊢ M ⇐ τ)

and

    Γ ⊬ R ⇒  implies  ¬ (∃ τ. Γ ⊢ R ⇒ τ)

This would allow us to produce error messages based on the negative evidence, since it is inductive whereas the negation (which is a function type) is not.
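In Agda, these negative evidence types and their soundness lemmas would have roughly the following shapes. Everything here is postulated: in the real development the two failure types are inductive, with one constructor per distinct reason a term can fail to check or synthesise, and the names `BadChk`, `BadSyn`, `soundChk`, `soundSyn` are illustrative:

```agda
module NegativeEvidence where

open import Data.Product using (∃)
open import Relation.Nullary using (¬_)

postulate
  Ctx Type Chk Syn : Set
  _⊢_⇐_ : Ctx → Chk → Type → Set   -- positive evidence: M checks against τ
  _⊢_⇒_ : Ctx → Syn → Type → Set   -- positive evidence: R synthesises τ

postulate
  -- Inductive characterisations of failure (constructors omitted in
  -- this sketch; each would record one specific way checking fails).
  BadChk : Ctx → Chk → Type → Set
  BadSyn : Ctx → Syn → Set

  -- Soundness of negative evidence with respect to negation: from the
  -- inductive failure evidence we can produce an actual refutation,
  -- which is what makes informative error messages possible.
  soundChk : ∀ {Γ M τ} → BadChk Γ M τ → ¬ (Γ ⊢ M ⇐ τ)
  soundSyn : ∀ {Γ R}   → BadSyn Γ R → ¬ (∃ λ τ → Γ ⊢ R ⇒ τ)
```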

**UPDATE 1**

I’ve added some screenshots showing an implementation of what I describe above in action:

Type-checking the S combinator:

Type-checking an incorrect version of the S combinator:

Type-checking (and normalising) an arbitrary term:

**UPDATE 2**

If you would like a copy of the source code, feel free to email me.

Also, I should point out that this idea of constructing a correct type-checker is not an original idea. At least one earlier iteration of a very similar idea was described by McBride and McKinna in “The view from the left”. There are two differences in what I do here. The first is that I use a bidirectional type system instead of the traditional one. The other is that I show that the judgement for negative evidence is sound with respect to the negation of the judgement for positive evidence. This step allows me to prove that type-checking and type-inference are decidable.

November 25th, 2009 at 8:10 pm

am i too skeptical if at the end i also want to prove that

Gamma |- M <= t implies Gamma |- M : (stripSyn t)?

though then i’d wonder how to state the correctness of stripSyn.

November 25th, 2009 at 10:20 pm

oops, stripSyn should go on M, to convert it back to the original grammar

November 26th, 2009 at 4:28 pm

Hi,

While I find these bidirectional typing rules interesting, I don’t quite buy that it is better to change the language definition than an intermediate definition of check (which returns a type).

November 27th, 2009 at 3:36 am

Hi Nicolas,

Of course you can write the check function that returns the type, but it doesn’t correspond to what the original typing rules mean because they don’t have a direct algorithmic reading.

You can change your rules, as I did, to have an algorithmic reading that matches what you actually want to do in your type-checking function, but at that point you might as well do it properly and split type-checking and type-inference up into their own judgements and corresponding functions rather than confusing them with a single judgement/function.

You may still disagree, but I feel this leads to a much cleaner and more elegant presentation, not to mention one (bidirectionality) that is crucial for more advanced type systems.

November 27th, 2009 at 4:29 am

though type inference for the STLC is complete, so we could have only the synthesising part, right?

the algorithm would use something like logic variables though.

November 28th, 2009 at 4:16 pm

Can you link to the Zeilberger arguments? thanks!

November 30th, 2009 at 9:14 am

It all depends on whether you have explicitly annotated lambdas. If you do, then in the STLC the type is unique and any strategy will work; thus computing the type of each part of the application can be done, and then one just checks that the types match.

On the other hand if you remove the annotations then a bidirectional presentation helps.

Could you share your agda code?

December 7th, 2009 at 1:11 pm

@maud:

Unfortunately I don’t know of a particularly concise source for this. He discusses this as part of the “verificationist” versus “pragmatist” meaning theories. There’s some background on this in his paper “On the Unity of Duality” and also in his thesis. Frank Pfenning (his advisor) also discussed a little of this in an invited talk he gave for ICFP in Freiburg.

@Nicolas Pouillard:

I’d rather not post the code online quite yet because some bits of it are not as polished as I’d like. If you (or anyone else) would like a copy though, feel free to email me and I’ll send it to you.