Compatibility of equalities based on operational semantics

October 29th, 2010 by Nils Anders Danielsson

Today we discussed the kind of issues one can run into when proving that a program equivalence, defined on top of an operational semantics, is a congruence.

Using reflection to improve automation

May 13th, 2010 by gallais

Last week I talked about a solver for propositional logic that uses reflection. This work is a good opportunity to present how one can develop a solver using reflection.


The purpose of reflection is to be able to manipulate terms of the language inside the language itself. It allows you to design certified solvers, whereas the use of a metalanguage (Ltac, for example) doesn’t guarantee anything.

Since AIM XI, the latest version of Agda has a couple of new features. One of them is the possibility for the user to get access to the current goal [1]. From now on, you can use:

  • A datatype Term that represents the terms in Agda
  • A command quoteGoal t in e, which has the typing rule: from e[t := `T] : T we may conclude (quoteGoal t in e) : T
  • A command quote which gives you the internal representation of an identifier

A solver will be designed in three steps. Let’s say that the type MyType will represent the set of goals that you want to deal with and that MyTerm will be the representation of the inhabitants of MyType. We need to:

  • Add a proper quoting function taking a Term and outputting a MyType element (preferably a non-provable one if the Term does not have the right shape)
  • Design the solver, taking a MyType term and outputting a MyTerm element
  • Give the semantics of our datatypes and prove the soundness of our solver

A solver for propositional logic

Proving a formula of propositional logic is the same (thanks to the Curry-Howard isomorphism) as finding a lambda term inhabiting the corresponding type. Our work is based on the deduction rules (said to be “structural”, though not in Agda’s sense) presented in a paper by Roy Dyckhoff and Sara Negri [2].


The “MyType” datatype:

data Type : ℕ → Set where
  atom : ∀ {n} → Fin n → Type n
  ⊥    : ∀ {n} → Type n
  _∩_ _⊃_ _∪_ : ∀ {m} → Type m → Type m → Type m
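
As a taste of step three, here is a sketch of what a semantics for Type could look like. The name ⟦_⟧ and the use of a function Fin n → Set as the environment are my choices for this sketch (the actual solver takes the atoms as a list, as in the example further down), not necessarily the code from the repository.

open import Data.Nat     using (ℕ)
open import Data.Fin     using (Fin)
open import Data.Product using (_×_)
open import Data.Sum     using (_⊎_)
open import Data.Empty   renaming (⊥ to Empty)

-- Interpret a Type in an environment assigning a Set to every atom.
⟦_⟧ : ∀ {n} → Type n → (Fin n → Set) → Set
⟦ atom i ⟧ ρ = ρ i
⟦ ⊥      ⟧ ρ = Empty
⟦ φ ∩ ψ  ⟧ ρ = ⟦ φ ⟧ ρ × ⟦ ψ ⟧ ρ
⟦ φ ⊃ ψ  ⟧ ρ = ⟦ φ ⟧ ρ → ⟦ ψ ⟧ ρ
⟦ φ ∪ ψ  ⟧ ρ = ⟦ φ ⟧ ρ ⊎ ⟦ ψ ⟧ ρ

Soundness of the solver then says that whenever it produces a MyTerm for a given Type, that term denotes an inhabitant of the interpretation above, in every environment.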

The “MyTerm” datatype is more verbose but pretty straightforward, so I won’t include it here. It contains all the basic constructors for this simply-typed lambda calculus with sum and product types (var, lam, app, inj1, inj2, case, proj1, proj2, and).

The only tricky part is the lift function, which lifts all the free variables of a given term: it has to deal with changes to the environment when going under a lambda.
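
To give an idea of what is involved, here is a minimal sketch of such a lift function for a cut-down term language with de Bruijn indices; the names (Tm, insert) and the reduced set of constructors are mine, not the ones from the actual MyTerm development.

open import Data.Nat using (ℕ; suc)
open import Data.Fin using (Fin; zero; suc)

-- A cut-down term language; the real MyTerm also has pairs, sums, etc.
data Tm : ℕ → Set where
  var : ∀ {n} → Fin n → Tm n
  lam : ∀ {n} → Tm (suc n) → Tm n
  app : ∀ {n} → Tm n → Tm n → Tm n

-- Insert a new, unused variable at position i, shifting the indices above it.
insert : ∀ {n} → Fin (suc n) → Fin n → Fin (suc n)
insert zero    j       = suc j
insert (suc i) zero    = zero
insert (suc i) (suc j) = suc (insert i j)

-- lift i t makes room for a fresh free variable at position i;
-- the cut-off i is bumped each time we go under a lambda.
lift : ∀ {n} → Fin (suc n) → Tm n → Tm (suc n)
lift i (var x)   = var (insert i x)
lift i (lam b)   = lam (lift (suc i) b)
lift i (app f a) = app (lift i f) (lift i a)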


The issue of partiality (the formula may not be provable) is solved using dependent types: the solver requires an argument that either has type ⊤ if the proposition is provable (the placeholder tt is then inferred by Agda), or has the same type as the goal if it is not provable.
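
The same trick can be shown in isolation with a toy decision procedure. The following self-contained sketch plays the same game as the solver; the names even?, Even, Hyp and fromHyp are mine, not part of the solver’s code: the extra argument is ⊤, and hence inferable, exactly when the decision procedure succeeds.

open import Data.Bool using (Bool; true; false)
open import Data.Nat  using (ℕ; zero; suc)
open import Data.Unit using (⊤; tt)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

even? : ℕ → Bool
even? zero          = true
even? (suc zero)    = false
even? (suc (suc n)) = even? n

data Even : ℕ → Set where
  ezero : Even zero
  esuc  : ∀ {n} → Even n → Even (suc (suc n))

-- The extra argument: trivial if the decision procedure says yes,
-- otherwise a proof of the goal itself.
Hyp : Bool → Set → Set
Hyp true  _ = ⊤
Hyp false G = G

sound : ∀ n → even? n ≡ true → Even n
sound zero          _  = ezero
sound (suc zero)    ()
sound (suc (suc n)) eq = esuc (sound n eq)

fromHyp : ∀ {G : Set} (b : Bool) → (b ≡ true → G) → Hyp b G → G
fromHyp true  prf _ = prf refl
fromHyp false _   g = g

even : (n : ℕ) → Hyp (even? n) (Even n) → Even n
even n = fromHyp (even? n) (sound n)

-- even? 10 computes to true, so the last argument is ⊤ and tt suffices.
ten : Even 10
ten = even 10 tt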

Example of the use of the solver:

Ex : ∀ {A B C D : Set} → ((A → B) × (C → A)) → (A ⊎ C) → B × (((A → D) ⊎ D) → D)
Ex {A} {B} {C} {D} = quoteGoal t in solve 4 t (A ∷ B ∷ C ∷ D ∷ []) _


[1] See Agda/test/succeed/Reflection.agda and Agda/doc/release-notes/2-2-8.txt

[2] Roy Dyckhoff and Sara Negri, Admissibility of Structural Rules for Contraction-Free Systems of Intuitionistic Logic, Journal of Symbolic Logic, 2000.

You can get the source code on the following darcs repository: darcs get

Solving first order stream equations

May 10th, 2010 by fyb

I presented a solver for first order stream equations. Those are simple equations of the form:

 x = 2 :: head (tail x) :: head (3 :: head x :: x) :: tail (4 :: x)

The solver is parametric in the domain type. It turns out that unless the domain is empty, those equations always have solutions. Given a term representing the right hand side of an equation, it builds a template which is a finite representation of its solutions. A template is instantiated by providing an environment in which the under-constrained elements of the stream get a value.

There are two steps: first reduce the term to a normal form by eliminating redexes:

  • head (d :: s) reduces to d
  • tail (d :: s) reduces to s

Then, solve the problem for normal forms. An equation whose right hand side is in normal form looks like:

 x = d_1 :: ... :: d_n :: tail^j x

where the d_i are either constants from the domain or equal to some element x_k of the stream x.
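
To make the first step concrete, here is a rough sketch of a normaliser along these lines. The datatypes and names (Elt, Stm, NElt, NF) are my own cut-down reconstruction, not the solver’s actual code, and the real development also proves this step correct.

module StreamNF (A : Set) where

open import Data.Nat  using (ℕ; zero; suc)
open import Data.List using (List; []; _∷_)

mutual
  -- Element-level terms: constants, or the head of a stream term.
  data Elt : Set where
    const : A → Elt
    head  : Stm → Elt

  -- Stream-level terms over a single unknown stream x.
  data Stm : Set where
    x    : Stm
    cons : Elt → Stm → Stm
    tail : Stm → Stm

-- Normal-form elements: constants, or the k-th element of x.
data NElt : Set where
  const : A → NElt
  idx   : ℕ → NElt

-- A normal form d₁ :: … :: dₙ :: tail^j x is a list of elements plus j.
record NF : Set where
  constructor nf
  field
    prefix : List NElt
    tails  : ℕ

mutual
  -- head (d :: s) reduces to d; the head of tail^j x is element j of x.
  normE : Elt → NElt
  normE (const a) = const a
  normE (head t)  with normS t
  ... | nf []      j = idx j
  ... | nf (d ∷ _) _ = d

  -- tail (d :: s) reduces to s; leftover tails applied to x are counted.
  normS : Stm → NF
  normS x          = nf [] zero
  normS (cons e t) with normS t
  ... | nf ds j = nf (normE e ∷ ds) j
  normS (tail t)   with normS t
  ... | nf []       j = nf [] (suc j)
  ... | nf (_ ∷ ds) j = nf ds j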

We get n equations: x_i = d_i. We also get a condition on the rest of the stream: if j = n, then there is no constraint on the remaining elements of the solutions (after the n-th); otherwise, there will be a loop of length |n − j|.

Solving the n conditions boils down to extracting the strongly connected components of a graph whose vertices are x_1 … x_n and whose edges are the dependencies between them. The problem is actually simpler than the general one because each vertex is the target of at most one edge.

I implemented such an algorithm in Agda but chickened out of writing the Agda proofs that the algorithm produces a template whose instances are exactly the solutions of the given equation. I wrote the proofs in English instead.

Bag Equality via a Proof-Relevant Membership Relation

March 16th, 2010 by Nils Anders Danielsson

Last week I talked about a definition of bag equality—equality up to permutation—which I have used recently. The definition works for arbitrary containers, but for simplicity I will focus on lists.

The predicate transformer Any can be defined as follows (in Agda):

  data Any {A : Set} (P : A → Set) : List A → Set where
    here  : ∀ {x xs} → P x      → Any P (x ∷ xs)
    there : ∀ {x xs} → Any P xs → Any P (x ∷ xs)

Any P xs means that some element in xs satisfies P. Using Any and an equality relation it is easy to define the membership relation:

  _∈_ : {A : Set} → A → List A → Set
  x ∈ xs = Any (_≡_ x) xs

Finally bag equality can be defined:

  _≈_ : {A : Set} → List A → List A → Set
  xs ≈ ys = ∀ x → x ∈ xs ⇿ x ∈ ys

Here A ⇿ B is the set of invertible functions between A and B. Two lists xs and ys are equal when viewed as bags if, for every element x, the two sets x ∈ xs and x ∈ ys are equipotent; note that if x occurs a certain number of times in xs, then there will be a corresponding number of distinct elements in x ∈ xs.
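
For example (a small check using just the definitions above; the names ones, first-one and second-one are mine): the two occurrences of 1 in the list 1 ∷ 1 ∷ [] give rise to two distinct membership proofs.

  open import Data.List using (List; _∷_; [])
  open import Data.Nat using (ℕ)
  open import Relation.Binary.PropositionalEquality using (refl)

  ones : List ℕ
  ones = 1 ∷ 1 ∷ []

  -- The first occurrence…
  first-one : 1 ∈ ones
  first-one = here refl

  -- …and the second: a different proof of the same proposition.
  second-one : 1 ∈ ones
  second-one = there (here refl)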

The definition above has some nice properties:

  • If _⇿_ is replaced by equivalence (coinhabitance), then we get set equality instead of bag equality. This uniformity can be exploited; one can for instance prove that map preserves bag and set equality using a single, uniform proof.
  • The definition generalises to other containers, including types with infinite values (like streams).
  • The use of Any can give some convenient wiggle room in proofs.

Let me illustrate the last point by proving that bind distributes from the left over append:

  xs >>= (λ x → f x ++ g x) ≈ (xs >>= f) ++ (xs >>= g)

This property does not hold for ordinary list equality (one way to see this is to let xs be true ∷ false ∷ [] and f and g both be return). I will make use of the following properties:
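
Concretely, with xs = true ∷ false ∷ [] and f = g = return, the two sides are different as lists (though, reassuringly, equal as bags):

  xs >>= (λ x → return x ++ return x)  =  true ∷ true ∷ false ∷ false ∷ []
  (xs >>= return) ++ (xs >>= return)   =  true ∷ false ∷ true ∷ false ∷ []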

  (∀ x → P₁ x ⇿ P₂ x) → xs₁ ≈ xs₂ → Any P₁ xs₁ ⇿ Any P₂ xs₂
  A ⇿ B → C ⇿ D → (A ⊎ C) ⇿ (B ⊎ D)

  Any P (xs ++ ys)          ⇿  Any P xs ⊎ Any P ys
  Any (λ x → P x ⊎ Q x) xs  ⇿  Any P xs ⊎ Any Q xs
  Any P (map f xs)          ⇿  Any (P ∘ f) xs
  Any P (concat xss)        ⇿  Any (Any P) xss

From the last two properties we get a derived property for bind:

  Any P (xs >>= f)           ⇿
  Any P (concat (map f xs))  ⇿
  Any (Any P) (map f xs)     ⇿
  Any (Any P ∘ f) xs

Using these properties we can prove left distributivity. The proof is a simple calculation:

  y ∈ (xs >>= λ x → f x ++ g x)                    ⇿
  Any (λ x → y ∈ f x ++ g x) xs                    ⇿
  Any (λ x → y ∈ f x ⊎ y ∈ g x) xs                 ⇿
  Any (λ x → y ∈ f x) xs ⊎ Any (λ x → y ∈ g x) xs  ⇿
  y ∈ (xs >>= f) ⊎ y ∈ (xs >>= g)                  ⇿
  y ∈ (xs >>= f) ++ (xs >>= g)

The “wiggle room” I mentioned above is visible in the calculation: the proof does not refer solely to _∈_ y, i.e. Any (_≡_ y), but also to Any P for other predicates P.

The definitions of Any and bag and set equality are available in Agda’s standard library (Data.List.Any), along with the proof of left distributivity (Data.List.Any.BagAndSetEquality).

Causal Semantics of (A)FRP

March 8th, 2010 by Neil Sculthorpe

Having been inspired by Conal Elliott’s blog post about the inaccuracy of the (Arrowised) FRP semantical model, I was trying to find as simple a model as possible that is inherently causal.

The basic concept in FRP is of time-varying values called signals (or behaviours):

Signal : Set -> Set
Signal A = Time -> A

Here Time is the type of non-negative real numbers.

In Arrowised FRP there are then Signal Functions, conceptually functions between signals:

SF : Set -> Set -> Set
SF A B = Signal A -> Signal B

Signal functions are nicely modular, and AFRP programs are constructed by composing signal functions together.

As an example of a signal function, consider “delay”, which delays an input signal by a specified amount of time:

delay : Time -> SF A A
delay d = \ s t -> s (t - d)

The problem with this semantics of SF is that it isn’t inherently causal. One could define a “future” signal function that is similar to delay, but produces the future input as the current output.

future : Time -> SF A A
future d = \ s t -> s (t + d)

In FRP we want to always be able to produce the current output from the current and past inputs, so this is bad.

Consequently, a causality constraint is usually placed on signal functions. All primitive signal functions must be causal, and all primitive combinators must preserve causality.

Causality can be defined:

Causal : SF A B -> Set
Causal sf = (s1 s2 : Signal A) -> (t : Time) ->
            ((t' : Time) -> (t' <= t) -> (s1 t' == s2 t')) ->
            (sf s1 t == sf s2 t)

So a causal signal function is a product of the signal function and the causality property:

CausalSF A B = Sigma (SF A B) Causal

I was hoping to find a model of AFRP that is inherently causal, and so doesn’t require the addition of a causality property on top of it. Starting from the definition of SF:

SF A B = Signal A -> Signal B
SF A B = (Time -> A) -> (Time -> B)
{flipping arguments}
SF A B = Time -> (Time -> A) -> B
{constraining the input signal}
SF A B = (t : Time) -> ((t' : Time) -> (t' <= t) -> A) -> B

By constraining the input signal in this way, we are only required to know it up to the point in time at which we are sampling the output.

As far as I can tell, this is now inherently causal, and it seems to work out okay for defining primitives. For example, composition is:

_>>>_ : SF A B -> SF B C -> SF A C
sf1 >>> sf2 = \ t sa -> sf2 t (\ t' ltB -> sf1 t' (\ t'' ltA -> sa t'' (transitive ltA ltB)))

However, so far we’ve only considered continuous signals. FRP also contains discrete signals, most notably events. Events only occur at certain points in time, and the event signal is undefined elsewhere. We require that only a finite number of events occur within any finite interval. This can be modelled as a (co)list of time-delta/value pairs:

EventSignal : Set -> Set
EventSignal A = List (Dt × A)

where Dt is the type of positive Time, representing the time since the previous event occurrence.

We can incorporate this into the original model by parametrising signals with a signal descriptor that contains this time-domain information (rather than just a Set):

data SigDesc : Set1 where
  C : Set -> SigDesc
  E : Set -> SigDesc

Signal : SigDesc -> Set
Signal (C A) = Time -> A
Signal (E A) = List (Dt × A)

SF : SigDesc -> SigDesc -> Set
SF x y = Signal x -> Signal y

However, it doesn’t work with the causal version, as we don’t have the same Signal types. Once we’ve put the time constraint on, we really have a signal history. That is, a signal indexed by some finish time:

SignalHistory : SigDesc -> Time -> Set
SignalHistory (C A) t = (t' : Time) -> (t' <= t) -> A
SignalHistory (E A) t = EventList A t

where an EventList is just a list of time-delta/value pairs with an added constraint that the sum of all the time deltas does not exceed the index. (If anyone can suggest a better datatype for this then please do so; this is just the first one that came to mind.)

data EventList (A : Set) : Time -> Set where
  []   : {t : Time} -> EventList A t
  cons : {t t' : Time} -> (d : Dt) -> A -> EventList A t' ->
         (t' + d) <= t -> EventList A t

Returning to our signal function definition, we can now define signal functions for either kind of input signal, but only continuous output signals:

SF : SigDesc -> SigDesc -> Set
SF x (C B) = (t : Time) -> SignalHistory x t -> B
SF x (E B) = ?

And that’s where I got stuck.

I did consider:

SF : SigDesc -> SigDesc -> Set
SF x y = (t : Time) -> SignalHistory x t -> SignalHistory y t

but this isn’t properly causal. It is to some degree (at any given point in time, the output signal history does not depend upon future input), but there’s nothing preventing the past from being rewritten later (because at each point in time a new signal history is produced, which may or may not be consistent with previous histories).

The codensity monad

February 12th, 2010 by Graham

Today I gave an introduction to the codensity monad aimed at the “functional programmer in the street”, rather than the “category theorist in the cupboard”. The emphasis was on understanding how one can arrive at the codensity monad in a natural manner, starting from the simple and familiar example of improving the performance of list reverse.

Implementing a Correct Type-Checker for the Simply Typed Lambda Calculus

November 25th, 2009 by Darin Morrison

Last Friday I discussed how to implement a “correct” type-checker for the simply typed lambda calculus in a dependently typed metalanguage – in this case Agda.

What does it mean to implement a correct type-checker? Stepping back for a moment, most people have probably at one point or another encountered a buggy type checker. Such a type checker will occasionally give the wrong result, either (1) by saying that an ill-typed term does type-check, or (2) by saying that a well-typed term doesn’t type-check.

For example, if we have a typing judgement Γ ⊢ M : T and a type-checking function check, where check Γ M T : Bool, a buggy type-checker could give either of the following:

  check Γ M T = true   even though Γ ⊢ M : T does not hold
  check Γ M T = false  even though Γ ⊢ M : T holds

A correct type-checker should be constructed in such a way that both of these cases are always ruled out.

In order to construct such a type-checker, we will leverage the richness of our type system to prevent us from writing buggy code. Looking back at the type of our check function, we see that it returns a Bool. The problem is that there is nothing in this type that ensures that the value we return is actually related to our input in any way. Our check function could be a constant function returning true for all we know.

We can fix this by refining the Bool result to a sum type of proofs:

  check Γ M T : (Γ ⊢ M : T) ⊎ ¬ (Γ ⊢ M : T)

What’s going on here? We’re saying that if type-checking is successful, we should have a proof in the form of a typing derivation. If type-checking fails, we want a proof that such a derivation can’t exist (or we would have a contradiction).

Stepping back again, if we look at this new type for the type-checking function through our classical glasses, we obtain:

  ∀ Γ M T → (Γ ⊢ M : T) ∨ ¬ (Γ ⊢ M : T)

which is completely trivial by the LEM. However, we are working in an intuitionistic metalanguage and by the disjunction property of intuitionistic logic, an implementation of our well-specified checking function is both a proof that type-checking is decidable and a decision procedure for actually doing so!

This is all quite nice so far, except for the fact that the standard typing rules for the STLC are non-algorithmic. By this I mean that one cannot transcribe the typing rules completely straightforwardly as a functional program. This is most apparent when considering the application case:

  Γ ⊢ M : A → B    Γ ⊢ N : A
  ──────────────────────────
  Γ ⊢ M N : B

This rule must be read bottom-up to be consistent with an algorithmic interpretation realised as a structurally recursive checking function. But where does the type A come from? We have to more or less pull this type out of thin air to continue the recursive type-checking. Also, in a real implementation, even if we are able to somehow infer some type, say A′, how do we know that A = A′?

There are ways to get around this, such as by modifying the check function to also return a type, and implementing equality for types, but the problem with this approach is that we now have a mismatch between how we understand the typing judgments and how we understand type-checking. Furthermore, the specification of the check function becomes more complicated.

A better solution is to realise that we need to change the typing rules themselves to reflect when a term should be analysed against a given type and when a type should be synthesised from a term (as in the case for the recursive call on M).

We do this by splitting the original judgement into separate analytic and synthetic judgements:

  Γ ⊢ M ⇐ T    (M checks against T)

  Γ ⊢ M ⇒ T    (M synthesises T)

Now we can rewrite the application rule so that, read bottom up, it provides an algorithm from which a case arm in a functional program is immediately derivable:

  Γ ⊢ M ⇒ A → B    Γ ⊢ N ⇐ A
  ──────────────────────────
  Γ ⊢ M N ⇒ B

(As a side note, there is an interesting philosophical argument described by Zeilberger et al. as to why the synthetic judgement should be read top-down and the analytic judgement bottom-up, but I will skip the details here…)

The full set of rules so far becomes as follows, where Γ ⊢ M ⇐ T means “M checks against T”, Γ ⊢ M ⇒ T means “M synthesises T”, and T ≡ T′ means “T is equal to T′”:

  Γ, x : A ⊢ M ⇐ B
  ─────────────────
  Γ ⊢ λx. M ⇐ A → B

  (x : T) ∈ Γ
  ───────────
  Γ ⊢ x ⇒ T

  Γ ⊢ M ⇒ A → B    Γ ⊢ N ⇐ A
  ──────────────────────────
  Γ ⊢ M N ⇒ B

  Γ ⊢ M ⇒ T′    T ≡ T′
  ────────────────────
  Γ ⊢ M ⇐ T

These rules should seem right with the exception of the last one. We need this rule so that we can, for example, check a variable in the body of a lambda. But there’s something wrong with it.

The problem is that, unlike all of the other rules, it is not associated with a particular constructor or destructor, which makes it non-syntactic. If we take it as it is, type-checking will no longer be deterministic and syntax directed which would be bad after all we’ve considered so far.

We need to introduce a new constructor to make this rule syntax directed; let’s call it “syn”, because when we encounter it, we recursively synthesise a type for its argument:

  Γ ⊢ M ⇒ T′    T ≡ T′
  ────────────────────
  Γ ⊢ syn M ⇐ T

So we’ve added a constructor to the term language, which makes type-checking syntax directed again. But careful consideration reveals that we’ve created a new problem. Some terms which should be checkable are no longer checkable if they occur under “syn”. Consider a lambda term occurring under syn, such as syn (λx. M). If we know the codomain type of the lambda term, we should be able to check it, but in this case our type-checking algorithm is going to try to infer the type of the lambda term, and looking up at our rules, we see there is no synthesis rule for lambda, so checking is going to fail.

Luckily, we can fix this too. We need to break terms M up into two syntactic categories, one for checkable terms and one for synthesisable terms:

  M ::= λx. M  |  syn R        (checkable terms)
  R ::= x  |  R M              (synthesisable terms)

Seems good so far, right? Well, not quite. We’ve successfully broken the original term language up into the appropriate syntactic categories, but in the process we’ve restricted the language in such a way that we cannot write any terms containing redexes. Everything we can write is already in normal form. Oops!

The culprit here is the fact that the only thing we can put in the left hand side of an application is an R term, but lambdas are M terms. We need a way to embed M terms back into R terms. Since the only real checkable term is a lambda term, and we cannot infer types for these, how can we possibly embed them into R terms? By adding a type annotation! Let’s add a constructor of the form (M : T). If we were to put this on a lambda term, the type annotation for the variable would become superfluous, so we might as well drop that entirely and only use the annotation constructor. Let’s consider the final system with all of our changes in place:

  M ::= λx. M  |  syn R
  R ::= x  |  R M  |  (M : T)

  Γ, x : A ⊢ M ⇐ B
  ─────────────────
  Γ ⊢ λx. M ⇐ A → B

  Γ ⊢ R ⇒ T′    T ≡ T′
  ────────────────────
  Γ ⊢ syn R ⇐ T

  (x : T) ∈ Γ
  ───────────
  Γ ⊢ x ⇒ T

  Γ ⊢ R ⇒ A → B    Γ ⊢ M ⇐ A
  ──────────────────────────
  Γ ⊢ R M ⇒ B

  Γ ⊢ M ⇐ T
  ───────────────
  Γ ⊢ (M : T) ⇒ T

where, as before, Γ ⊢ M ⇐ T means “M checks against T”, Γ ⊢ R ⇒ T means “R synthesises T”, and T ≡ T′ means “T is equal to T′”.

What impact is all of this going to have on our checking function? First, we’re going to have to create a second function, for the synthetic judgement, and refine the types of the arguments slightly to reflect the fact that we are now dealing with two separate syntactic categories.

With all of these changes in place, it becomes pretty straightforward to write ‘check’ and ‘synth’ as two mutually recursive functions. The types will prevent us from writing buggy code and our type checker should be correct by construction.
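
To give a concrete picture of the ingredients, here is a small Agda sketch of how the pieces might be set up. The names (Ty, Cxt, Chk, Inf, the rule names) and the details are my reconstruction rather than the code from the talk, and check and synth are merely postulated to exhibit their specifications.

open import Data.Nat using (ℕ; suc)
open import Data.Fin using (Fin; zero; suc)
open import Data.Sum using (_⊎_)
open import Data.Product using (∃)
open import Relation.Nullary using (¬_)

-- Simple types: a base type and function types.
data Ty : Set where
  base : Ty
  _⇒_  : Ty → Ty → Ty

-- Contexts of a given length.
data Cxt : ℕ → Set where
  ε   : Cxt 0
  _▹_ : ∀ {n} → Cxt n → Ty → Cxt (suc n)

lookup : ∀ {n} → Cxt n → Fin n → Ty
lookup (Γ ▹ T) zero    = T
lookup (Γ ▹ T) (suc i) = lookup Γ i

mutual
  -- Checkable terms …
  data Chk : ℕ → Set where
    lam : ∀ {n} → Chk (suc n) → Chk n
    syn : ∀ {n} → Inf n → Chk n

  -- … and synthesisable terms; ann is the (M : T) annotation.
  data Inf : ℕ → Set where
    var : ∀ {n} → Fin n → Inf n
    app : ∀ {n} → Inf n → Chk n → Inf n
    ann : ∀ {n} → Chk n → Ty → Inf n

mutual
  -- Γ ⊢ M ⇐ T : M checks against T.
  data _⊢_⇐_ : ∀ {n : ℕ} → Cxt n → Chk n → Ty → Set where
    t-lam : ∀ {n} {Γ : Cxt n} {A B M} →
            (Γ ▹ A) ⊢ M ⇐ B → Γ ⊢ lam M ⇐ (A ⇒ B)
    t-syn : ∀ {n} {Γ : Cxt n} {R T} →
            Γ ⊢ R ⇒ T → Γ ⊢ syn R ⇐ T

  -- Γ ⊢ R ⇒ T : R synthesises T.
  data _⊢_⇒_ : ∀ {n : ℕ} → Cxt n → Inf n → Ty → Set where
    t-var : ∀ {n} {Γ : Cxt n} {i : Fin n} →
            Γ ⊢ var i ⇒ lookup Γ i
    t-app : ∀ {n} {Γ : Cxt n} {R M A B} →
            Γ ⊢ R ⇒ (A ⇒ B) → Γ ⊢ M ⇐ A → Γ ⊢ app R M ⇒ B
    t-ann : ∀ {n} {Γ : Cxt n} {M T} →
            Γ ⊢ M ⇐ T → Γ ⊢ ann M T ⇒ T

-- The specifications of the two mutually recursive functions
-- (postulated here; the point is the types, not the implementation).
postulate
  check : ∀ {n} (Γ : Cxt n) (M : Chk n) (T : Ty) →
          (Γ ⊢ M ⇐ T) ⊎ (¬ (Γ ⊢ M ⇐ T))
  synth : ∀ {n} (Γ : Cxt n) (R : Inf n) →
          (∃ λ T → Γ ⊢ R ⇒ T) ⊎ (¬ (∃ λ T → Γ ⊢ R ⇒ T))

An actual implementation would replace the postulates by the mutually recursive definitions, together with a decidable equality on Ty for the application and annotation cases.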

If we plan to use this type-checker in the “real world”, we might want to change the negative evidence type to two separate inductive types: one giving evidence that a term does not check against a given type, and one giving evidence that a term does not synthesise any type.

Then we would want to show that these new inductive types are sound with respect to the negation of the original judgements: from the inductive evidence that M does not check against T we can derive ¬ (Γ ⊢ M ⇐ T), and similarly for the synthesis judgement.

This would allow us to produce error messages based on the negative evidence, since it is inductive whereas the negation (which is a function type) is not.

**UPDATE 1**

I’ve added some screenshots showing an implementation of what I describe above in action:

  • Type-checking the S combinator
  • Type-checking an incorrect version of the S combinator
  • Type-checking (and normalising) an arbitrary term

** UPDATE 2 **

If you would like a copy of the source code, feel free to email me.

Also, I should point out that this idea of constructing a correct type-checker is not an original idea.  At least one earlier iteration of a very similar idea was described by McBride and McKinna in “The view from the left”.  There are two differences with what I do here.  The first is that I use a bidirectional type system instead of the traditional one.  The other difference is that I show that the judgment for negative evidence is sound with respect to negation of the judgement for positive evidence.  This step allows me to prove that type-checking and type-inference are decidable.

Scala: An Object-Oriented Language with Functional Programming Features

November 25th, 2009 by Darin Morrison

A couple of weeks ago I gave an overview of the Scala language.  This was originally prompted by a recent discussion about languages for teaching and what might be a nice replacement for Java.  I suggested Scala and since I was familiar with the language I was asked to give an introduction.

Scala is an object-oriented language with a sophisticated and flexible type system.  It also supports quite a few features which are familiar to functional programmers.  Some of the features of Scala include:

* Bounded (upper and lower) parametric polymorphism

* Pattern matching via case classes

* First-class “functions” via functional objects

* Type constructor polymorphism (quantifying over types with higher kinds)

* A generalized comprehension syntax (can be used for sequence comprehension or for monadic computations)

* Existential types


Although Scala’s type system is rather sophisticated, the basics of the language are easy to learn, and there are quite a few tutorials and references available online.

Parser combinators are as expressive as possible

October 28th, 2009 by Nils Anders Danielsson

What is the expressive strength of parser combinators? Which languages can be expressed using them? The answer depends on the definition of parser combinators used. In our last meeting I showed that a simple parser combinator library which I have implemented in Agda is as expressive as possible: it can be used to recognise exactly those languages (sets of bit strings) for which a decision procedure can be implemented in Agda.

Consider the following language of parser (or perhaps recogniser) combinators:

data P : Bool → Set where
  ∅   : P false
  ε   : P true
  tok : Bool → P false
  _∣_ : ∀ {n₁ n₂} → P n₁ →        P n₂  → P (n₁ ∨ n₂)
  _·_ : ∀ {n₁ n₂} → P n₁ → ∞? n₁ (P n₂) → P (n₁ ∧ n₂)

The parsers are indexed on their nullability; the index is true iff the parser accepts the empty string. The first four combinators are unsurprising: failure, the recogniser for the empty string, the recogniser for a given token, and symmetric choice. The last combinator, sequencing, is more interesting: the second argument is allowed to be coinductive if the first argument does not accept the empty string:

∞? : Bool → Set → Set
∞? true  A =   A
∞? false A = ∞ A

(Read ∞ A as a thunk, i.e. a suspended computation of type A.) This ensures that grammars can be infinite, but only in a controlled way. If you check out the code you will find a formal semantics for these combinators, and a (verified and total) procedure which decides whether a given string is a member of a given language defined using the combinators. This shows that every language definable using the combinators is decidable. (For more details, see Mixing Induction and Coinduction.)
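
As a tiny example of a grammar definable with these combinators (my own toy example, assuming the definitions above are in scope), here is a recogniser for the language of all strings of true tokens; note how the coinductive second argument of sequencing lets the grammar refer to itself:

-- Accepts the empty string, or a true token followed by more of the same.
trues : P true
trues = ε ∣ (tok true · ♯ trues)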

What about the other direction? Given a decision procedure of type List Bool → Bool a corresponding grammar can be produced. The idea is to build an infinite tree with one “node” for every possible input string, and to make this “node” accepting iff the decision procedure accepts the given string:

accept-if-true : ∀ n → P n
accept-if-true true  = ε
accept-if-true false = ∅

grammar : (f : List Bool → Bool) → P (f [])
grammar f = tok true  · ♯ grammar (f ∘ _∷_ true )
          ∣ tok false · ♯ grammar (f ∘ _∷_ false)
          ∣ accept-if-true (f [])

(Read ♯_ as “delay”, i.e. as the constructor of a thunk.) For a formal proof of the correctness of the grammar, see the Agda code. I have also generalised the proof to proper parser combinators, for which the statement is that, for every R, every function of type List Bool → List R can be realised using parser combinators (if set equality is used for the list of results); if the bind combinator is allowed, then the result easily generalises to arbitrary token types.

The language defined by grammar f above can be parsed using deterministic recursive descent with one token of lookahead; grammar f could be viewed as an “infinite LL(1)” grammar. I suspect that this grammar scheme can be implemented using many parser combinator libraries. I also suspect that the result above is already known. It has been known for a long time that a potentially infinite grammar can represent any countable language (I found this mentioned in Marvin Solomon’s PhD thesis from 1977), and given that fact the result above is only a small step away. Pointers to the literature are welcome.

Factorising folds for faster functions

October 16th, 2009 by Graham

When I visited Galois a couple of years ago, I worked with Andy Gill on formalising the worker/wrapper transformation that is used within GHC to improve the performance of programs. The theory that we came up with was simple and general, but was based upon modelling recursive programs in terms of a fixed point operator, and hence wasn’t able to exploit additional structure that may be present in many programs.

In today’s talk (based upon joint work with Mauro Jaskelioff and Andy Gill), I showed how the worker/wrapper theory can be specialised to the structured pattern of recursion encapsulated by fold operators, and discussed the resulting benefits. I gave one simple example, deriving fast reverse, and one more complicated example, verifying the correctness of an efficient implementation of substitution from Voigtlander’s MPC 2008 paper. Further details are in our new paper.