## Djinn, monotonic

February 13th, 2009 by Conor

On my way to check the plumbing, I dropped in to see how people were doing and told a bit of a story about proof search in implicational logic. You may have encountered the problem before in the context of Roy Dyckhoff’s contraction-free sequent calculi, which Lennart Augustsson turned into Djinn. Here’s a Haskell rendering of the basic idea. An implicational formula is an atom or an implication.

```haskell
data Fmla = Atom Atom | Fmla :-> Fmla
type Atom = String
```

Let’s write a checker to see if a Fmla follows from a bunch of hypotheses. Step one: introduce the hypotheses, reducing the problem to testing if an atom holds.

```haskell
fmla :: [Fmla] -> Fmla -> Bool
fmla hs (h :-> g) = fmla (h : hs) g
fmla hs (Atom a)  = atom hs a
```

Step two: scan the hypotheses, trying to derive the atom from each.

```haskell
atom :: [Fmla] -> Atom -> Bool
atom hs a = try [] hs where
  try js []       = False
  try js (h : hs) = from h a (js ++ hs) || try (h : js) hs
```

Step three: test whether a given hypothesis (‘in the stoup’) delivers the atom.

```haskell
from :: Fmla -> Atom -> [Fmla] -> Bool
from (Atom b)  a hs = b == a
from (g :-> h) a hs = from h a hs && fmla hs g
```

That’s to say, to use an implicational hypothesis, show the premise holds and use the conclusion. But from which context should we show the premise? If you look carefully, you’ll see that atom passes in the whole of its context, *bar the hypothesis in the stoup*. That stops us getting in a loop when we try to deduce A from (A → A). We haven’t thrown away any theorems, because any proof which uses the same hypothesis twice on a path in the tree can be shortened to cut out the loop.
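As a quick sanity check, here is the whole checker again as one self-contained file (repeating the definitions above, plus a fixity declaration of my own so arrows associate rightward): deducing A from A → A terminates with False rather than looping, while A → A itself checks out.

```haskell
type Atom = String
data Fmla = Atom Atom | Fmla :-> Fmla
infixr 5 :->  -- my addition: right-associative, as arrows usually are

fmla :: [Fmla] -> Fmla -> Bool
fmla hs (h :-> g) = fmla (h : hs) g
fmla hs (Atom a)  = atom hs a

atom :: [Fmla] -> Atom -> Bool
atom hs a = try [] hs where
  try _  []       = False
  try js (h : ks) = from h a (js ++ ks) || try (h : js) ks

from :: Fmla -> Atom -> [Fmla] -> Bool
from (Atom b)  a _  = b == a
from (g :-> h) a hs = from h a hs && fmla hs g

main :: IO ()
main = do
  print (fmla [] (Atom "A" :-> Atom "A"))                 -- True: A |- A
  print (fmla [] ((Atom "A" :-> Atom "A") :-> Atom "A"))  -- False, and it terminates:
                                                          -- A -> A is never reused in the stoup
</main>
```

Note how the second query would loop forever if `from` handed the stoup hypothesis back to `fmla`.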

But is that enough to ensure termination? Sure is. How might we see that? The problem is quite nasty, because if we want to show

(A → B → C) → D, A → C |- D

backchaining actually *increases* the length of the context: we end up showing

A → C, A, B |- C

but even so, you might get the feeling that the problem is getting smaller somehow. And you’d be right: we got rid of a higher-order hypothesis but acquired more at lower order. We can make that clear with a wee bit of indexing.
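Running the Haskell checker on that sequent (the definitions are repeated so the file stands alone, with a fixity declaration of my own) confirms it holds, and terminates even though backchaining grows the context:

```haskell
type Atom = String
data Fmla = Atom Atom | Fmla :-> Fmla
infixr 5 :->  -- my addition, so a :-> b :-> c parses as a :-> (b :-> c)

fmla :: [Fmla] -> Fmla -> Bool
fmla hs (h :-> g) = fmla (h : hs) g
fmla hs (Atom a)  = atom hs a

atom :: [Fmla] -> Atom -> Bool
atom hs a = try [] hs where
  try _  []       = False
  try js (h : ks) = from h a (js ++ ks) || try (h : js) ks

from :: Fmla -> Atom -> [Fmla] -> Bool
from (Atom b)  a _  = b == a
from (g :-> h) a hs = from h a hs && fmla hs g

a, b, c, d :: Fmla
a = Atom "A"; b = Atom "B"; c = Atom "C"; d = Atom "D"

main :: IO ()
main =
  -- (A -> B -> C) -> D, A -> C |- D
  print (fmla [(a :-> b :-> c) :-> d, a :-> c] d)  -- True
```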

Here are formulae, now in Agda:

```agda
data Fmla : Nat -> Set where
  atom : Atom -> {n : Nat} -> Fmla n
  _=>_ : {n : Nat} -> Fmla n -> Fmla (suc n) -> Fmla (suc n)
```

The index is a loose bound on the *order* (arrow-left-nestedness) of the formula. Never use tight bounds when loose bounds will do—it’s too much hassle keeping tight bounds accurate. Now we can collect formulae in buckets according to their order. Let’s use this handy structure
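For the plain Haskell formulae, that order is easy to compute directly. A small sketch (`order` is my name for it, not something from the post): an atom has order 0, and an implication's order must exceed its premise's.

```haskell
type Atom = String
data Fmla = Atom Atom | Fmla :-> Fmla
infixr 5 :->  -- my addition, so a :-> b :-> c parses as a :-> (b :-> c)

-- Order: depth of arrow nesting to the *left* of arrows.
-- An atom has order 0; g :-> h has order at least one more than g's.
order :: Fmla -> Int
order (Atom _)  = 0
order (g :-> h) = max (order g + 1) (order h)

main :: IO ()
main = do
  let a = Atom "A"; b = Atom "B"; c = Atom "C"; d = Atom "D"
  print (order (a :-> c))               -- 1: a first-order hypothesis
  print (order ((a :-> b :-> c) :-> d)) -- 2: a second-order hypothesis
```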

```agda
Tri : (Nat -> Set) -> Nat -> Set
Tri T zero    = One
Tri T (suc n) = T n * Tri T n
```

which I’ve defined recursively today for ease of iso. To see why these are sort-of triangles, instantiate T to vectors. (T = Fin · suc goes with a bang, too! But I digress…) Conversely, vectors can be seen as degenerate triangles.

```agda
Vec : Set -> Nat -> Set
Vec X = Tri \ _ -> X
```

Now we can say what a bucket of formulae of a given order might be.

```agda
List : Set -> Set
List X = Sig Nat (Vec X)  -- Sig is the usual dependent pair type

Bucket : Nat -> Set
Bucket n = List (Fmla n)
```

And now a context is a triangle of buckets:

```agda
Ctxt : Nat -> Set
Ctxt = Tri Bucket
```

So our checker repeatedly makes higher-order buckets smaller, at the cost of making lower-order buckets larger, but that’s a perfectly respectable way to be monotonically decreasing.
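To see that decrease concretely, count the hypotheses in each bucket, highest order first, and compare the counts lexicographically. On the earlier example, backchaining trades the single order-2 hypothesis for two order-0 ones, so the count list strictly drops. A sketch in Haskell terms (`order` and `measure` are illustrative names of mine, not from the post):

```haskell
type Atom = String
data Fmla = Atom Atom | Fmla :-> Fmla
infixr 5 :->  -- my addition, so a :-> b :-> c parses as a :-> (b :-> c)

-- Order of a formula, as in the Agda index: atoms are order 0.
order :: Fmla -> Int
order (Atom _)  = 0
order (g :-> h) = max (order g + 1) (order h)

-- Hypothesis count per order bucket, highest order first, so the
-- ordinary lexicographic order on lists witnesses the decrease.
measure :: Int -> [Fmla] -> [Int]
measure top hs = [ length [ h | h <- hs, order h == n ] | n <- [top, top - 1 .. 0] ]

main :: IO ()
main = do
  let a = Atom "A"; b = Atom "B"; c = Atom "C"; d = Atom "D"
      before = [(a :-> b :-> c) :-> d, a :-> c]  -- context while showing D
      after  = [a :-> c, a, b]                   -- context while showing C
  print (measure 2 before)                   -- [1,1,0]
  print (measure 2 after)                    -- [0,1,2]: more, but lower-order
  print (measure 2 after < measure 2 before) -- True: lexicographic decrease
```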

Agda’s termination checker isn’t so good at spotting that this is what’s going on, but before we had Agda’s *bag on the side*, there was a language called Epigram which resorted to Type Theory as its language for explaining termination. Gasp!

It’s gone 5pm on Friday and I’m still in school. (Thorsten and Graham are away today: they’d never allow this!) I’m just going to link to this inscrutable lump of code and finish this post later.