A finite universe in Agda

May 9th, 2008 by Thorsten

In exercise 3 of our lecture notes on generic programming we suggest showing that the universe of finite types is closed under coproducts, products and exponentials, and implementing views that allow pattern matching over the derived structure. I thought that (re)doing this in Agda might be a good way to explore views in Agda and to play with the with-rule (local case). Indeed, I discovered that our remark that it is not possible to implement a view for function types, due to the lack of extensionality, was wrong: we only have to show that every code is generated by a function.

Exercises: show that the categorical properties of coproducts, products and exponentials hold, and implement Π- and Σ-types.

module funi where

  {- Defining a universe of finite types with +, × and =>
     and accompanying views -}

  data ℕ : Set where
    zero : ℕ
    succ : ℕ -> ℕ

  _+_ : ℕ -> ℕ -> ℕ
  zero   + m   = m
  succ n + m   = succ (n + m)

  data Fin : ℕ -> Set where
    fz : forall {n} -> Fin (succ n)
    fs : forall {n} -> Fin n -> Fin (succ n)

  {- coproducts -}

  finl : forall {m n} -> Fin m -> Fin (m + n)
  finl fz     = fz
  finl (fs i) = fs (finl i)

  finr : forall {m n} -> Fin n -> Fin (m + n)
  finr {zero}   {n} i = i
  finr {succ m} {n} i = fs (finr {m} i)

  data PlusView : forall {m n} -> Fin (m + n) -> Set where
    inl : forall {m n} -> (i : Fin m) -> PlusView {m} {n} (finl i)
    inr : forall {m n} -> (i : Fin n) -> PlusView {m} {n} (finr {m} i)

  plusView : forall {m n} -> (i : Fin (m + n)) -> PlusView {m} {n} i
  plusView {zero}   i      = inr i
  plusView {succ m} fz     = inl (fz {m})
  plusView {succ m} (fs i) with plusView {m} i
  plusView {succ m} (fs ._) | (inl j) = inl {succ m} (fs j)
  plusView {succ m} (fs ._) | (inr j) = inr j

  fcase : forall {m n} -> {A : Set}
          -> ((i : Fin m) -> A)
          -> ((j : Fin n) -> A)
          -> ( k : Fin (m + n) ) -> A
  fcase {m} minl minr k with plusView {m} k
  fcase {m} minl minr ._ | inl i = minl i
  fcase {m} minl minr ._ | inr j = minr j

  {- products -}

  _×_ : ℕ -> ℕ -> ℕ
  zero     × n = zero
  (succ m) × n = n + (m × n)

  fpair : forall {m n} -> Fin m -> Fin n -> Fin (m × n)
  fpair {zero}   {n} ()     _
  fpair {succ m} {n} fz     i = finl {n} {m × n} i
  fpair {succ m} {n} (fs j) i = finr {n} {m × n} (fpair j i) 

  data PairView : forall {m n} -> Fin (m × n) -> Set where
     pair : forall {m n} -> (i : Fin m)(j : Fin n) -> PairView {m} {n} (fpair i j)

  pairView : forall {m n} -> (i : Fin (m × n) ) -> PairView {m} {n} i
  pairView {zero}   {n} ()
  pairView {succ m} {n} i with plusView {n} {m × n} i
  pairView {succ m} {n} ._ | inl j =  pair fz j
  pairView {succ m} {n} ._ | inr k  with pairView {m} {n} k
  pairView {succ m} {n} ._ | inr ._  |  pair k0 k1 =  pair (fs k0) k1  

  fst : forall {m n} -> (i : Fin (m × n) ) -> Fin m
  fst {m} i with pairView {m} i
  fst {m} ._ | pair j k = j 

  snd : forall {m n} -> (i : Fin (m × n) ) -> Fin n
  snd {m} i with pairView {m} i
  snd {m} ._ | pair j k = k 

  {- exponentials -}

  _=>_ : ℕ -> ℕ -> ℕ
  zero     => n = succ zero
  (succ m) => n = n × (m => n)

  flam : forall {m n} -> ((Fin m) -> (Fin n)) -> Fin (m => n)
  flam {zero}   {n} f = fz
  flam {succ m} {n} f = fpair {n} {m => n} (f fz) (flam {m} {n} (\ i -> f (fs i) ) ) 

  data FunView : forall {m n} -> Fin (m => n) -> Set where
     lam : forall {m n} -> (f : (Fin m) -> (Fin n)) -> FunView {m} {n} (flam f)

  fzero : {A : Set} -> Fin zero -> A
  fzero () 

  fsucc : forall {m} -> {A : Set} -> A -> ((Fin m) -> A) -> Fin (succ m) -> A
  fsucc a f fz     = a
  fsucc a f (fs i) = f i

  funView : forall {m n} -> (i : Fin (m => n)) -> FunView {m} {n} i
  funView {zero}   {n} fz      = lam fzero
  funView {zero}   {n} (fs ())
  funView {succ m} {n} i       with pairView {n} {m => n} i
  funView {succ m} {n} ._      |    pair f0 ff  with funView {m} ff
  funView {succ m} {n} ._      |    pair f0 ._  |    lam g = lam (fsucc f0 g)

  fapp : forall {m n} -> Fin (m => n) -> Fin m -> Fin n
  fapp {m} {n} f  i with funView {m} {n} f
  fapp {m} {n} ._ i  |   lam g  = g i
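To see what these views compute, it can help to erase the types: on plain numbers, finl/finr, fpair and flam are just the familiar encodings of sums as offsets, products as i·n + j, and functions as base-n numerals. Here is a Haskell sketch of that arithmetic shadow (the names are mine, and the digit order in flamN is chosen for simplicity rather than to match the Agda code exactly):

```haskell
-- Arithmetic shadow of the universe, erasing Fin m to Int (for
-- illustration only; the Agda code above is the real thing)

-- plusView/fcase: an index into Fin (m + n) is either an index into
-- the first summand (k < m) or the second (k - m)
fcaseN :: Int -> (Int -> a) -> (Int -> a) -> Int -> a
fcaseN m f g k | k < m     = f k
               | otherwise = g (k - m)

-- fpair i j corresponds to i * n + j, and pairView inverts it
fpairN :: Int -> Int -> Int -> Int    -- fpairN n i j, with j < n
fpairN n i j = i * n + j

fstN, sndN :: Int -> Int -> Int
fstN n k = k `div` n
sndN n k = k `mod` n

-- a function Fin m -> Fin n is coded as m base-n digits, so m => n
-- has n ^ m elements
flamN :: Int -> Int -> (Int -> Int) -> Int
flamN m n f = sum [ f i * n ^ i | i <- [0 .. m - 1] ]

fappN :: Int -> Int -> Int -> Int
fappN n code i = (code `div` n ^ i) `mod` n
```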

Agda Demo 2

May 6th, 2008 by Nils Anders Danielsson

Code from today’s session:

module LiveDemo2 where

open import Data.Bool
open import Data.List

module Sort {a : Set} (_≤_ : a -> a -> Bool) where

  insert : a -> [ a ] -> [ a ]
  insert x []       = x ∷ []
  insert x (y ∷ ys) = if x ≤ y then x ∷ y ∷ ys
                               else y ∷ insert x ys

  sort : [ a ] -> [ a ]
  sort = foldr insert []

_≤_ : Bool -> Bool -> Bool
false ≤ _     = true
true  ≤ false = false
true  ≤ true  = true

open Sort _≤_

test = sort (true ∷ false ∷ true ∷ [])

open import Relation.Binary.PropositionalEquality

record Monad (M : Set -> Set) : Set1 where
  field
    return : forall {a} -> a -> M a
    _>>=_  : forall {a b} -> M a -> (a -> M b) -> M b

    left-identity : forall {a b} (x : a) (f : a -> M b) ->
                    (return x >>= f) ≡ f x

-- module Monad {M : Set -> Set} (_ : Monad M) : Set1 where
--   return : forall {a} -> a -> M a
--   _>>=_  : forall {a b} -> M a -> (a -> M b) -> M b

--   left-identity : forall {a b} (x : a) (f : a -> M b) ->
--                   (return x >>= f) ≡ f x

open import Data.Function

join : forall {M} -> Monad M -> forall {a} -> M (M a) -> M a
join M x = x >>= id
  where open Monad M

-- Alternatively, without opening the module:
-- join M x = Monad._>>=_ M x id

listMonad : Monad [_]
listMonad = record
  { return        = \x -> x ∷ []
  ; _>>=_         = \xs f -> concat (map f xs)
  ; left-identity = ?
  }

The ? above is left as an exercise.
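For comparison, here are the corresponding definitions in Haskell, where the left-identity law can only be tested, not proved (the names are mine):

```haskell
-- The list monad, written out directly rather than via the Monad class
returnL :: a -> [a]
returnL x = [x]

bindL :: [a] -> (a -> [b]) -> [b]
bindL xs f = concat (map f xs)
```

In Agda the field left-identity demands a proof object; in Haskell we can only check instances of the law.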

module SimplyTyped where

infixl 70 _·_
infix  60 λ_

infixr 50 _→_
infixl 40 _▻_
infix  30 _∋_ _⊢_

data Ty : Set where
  Nat : Ty
  _→_ : Ty -> Ty -> Ty

data Ctxt : Set where
  ε   : Ctxt
  _▻_ : Ctxt -> Ty -> Ctxt

data _∋_ : Ctxt -> Ty -> Set where
  zero : forall {Γ σ}   -> Γ ▻ σ ∋ σ
  suc  : forall {Γ σ τ} -> Γ ∋ σ -> Γ ▻ τ ∋ σ

data _⊢_ : Ctxt -> Ty -> Set where
  var : forall {Γ σ}   -> Γ ∋ σ -> Γ ⊢ σ
  λ_  : forall {Γ σ τ} -> Γ ▻ σ ⊢ τ -> Γ ⊢ σ → τ
  _·_ : forall {Γ σ τ} -> Γ ⊢ σ → τ -> Γ ⊢ σ -> Γ ⊢ τ

open import Data.Nat

Dom : Ty -> Set
Dom Nat     = ℕ
Dom (σ → τ) = Dom σ -> Dom τ

data Env : Ctxt -> Set where
  ε   : Env ε
  _▻_ : forall {Γ σ} -> Env Γ -> Dom σ -> Env (Γ ▻ σ)

lookup : forall {Γ σ} -> Γ ∋ σ -> Env Γ -> Dom σ
lookup zero    (ρ ▻ x) = x
lookup (suc n) (ρ ▻ x) = lookup n ρ

⟦_⟧ : forall {Γ σ} -> Γ ⊢ σ -> Env Γ -> Dom σ
⟦ var x   ⟧ ρ = lookup x ρ
⟦ λ t     ⟧ ρ = \x -> ⟦ t ⟧ (ρ ▻ x)
⟦ t₁ · t₂ ⟧ ρ = ⟦ t₁ ⟧ ρ (⟦ t₂ ⟧ ρ)

t : ε ⊢ Nat → Nat
t = (λ var zero) · (λ var zero)

example = ⟦ t ⟧ ε 6

-- t₂ : ε ▻ ? ⊢ ?
-- t₂ = var zero · var zero
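The same interpreter can be transcribed into Haskell using GADTs, with nested pairs as environments. This is only a sketch (the constructor names are mine), but it shows the typed-evaluator pattern surviving the translation:

```haskell
{-# LANGUAGE GADTs #-}

-- Typed de Bruijn variables: Elem env t is a proof that t occurs in env
data Elem env t where
  Zero :: Elem (t, env) t
  Suc  :: Elem env t -> Elem (s, env) t

-- Well-typed terms over an environment type env
data Term env t where
  V    :: Elem env t -> Term env t
  L    :: Term (s, env) t -> Term env (s -> t)
  (:$) :: Term env (s -> t) -> Term env s -> Term env t

lookupE :: Elem env t -> env -> t
lookupE Zero    (x, _) = x
lookupE (Suc n) (_, e) = lookupE n e

-- A tagless evaluator: the type of the result is the object type itself
eval :: Term env t -> env -> t
eval (V x)    e = lookupE x e
eval (L b)    e = \x -> eval b (x, e)
eval (f :$ a) e = eval f e (eval a e)

-- (λx. x) · (λx. x), mirroring the term t above
t1 :: Term () (Int -> Int)
t1 = L (V Zero) :$ L (V Zero)
```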

Agda Demo

May 2nd, 2008 by Nils Anders Danielsson

Today I ran a hands-on Agda demonstration. The code written during the demo:

module LiveDemo where

data ℕ : Set where
  zero : ℕ
  suc  : ℕ -> ℕ

_+_ : ℕ -> ℕ -> ℕ
zero  + n = n
suc m + n = suc (m + n)

infixr 5 _◅_ _++_

data Vec (a : Set) : ℕ -> Set where
  []  : Vec a zero
  _◅_ : {n : ℕ} -> a -> Vec a n -> Vec a (suc n)

_++_ : forall {a m n} -> Vec a m -> Vec a n -> Vec a (m + n)
[]       ++ ys = ys
(x ◅ xs) ++ ys = x ◅ xs ++ ys

data Last {a : Set} : forall {n} -> Vec a n -> Set where
  []  : Last []
  _▻_ : forall {n} (xs : Vec a n) (x : a) -> Last (xs ++ x ◅ [])

last : forall {a n} (xs : Vec a n) -> Last xs
last []       = []
last (x ◅ xs) with last xs
last (x ◅ .[])              | []       = [] ▻ x
last (x ◅ .(ys ++ y ◅ []))  | (ys ▻ y) = (x ◅ ys) ▻ y
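The non-dependent shadow of this view, on plain Haskell lists, may make the with-pattern clearer: either the list is empty, or it is ys ++ [y], and the view is computed by recursion just as above (a sketch; the names are mine):

```haskell
-- A Last view on ordinary lists: every list is either empty
-- or of the form ys ++ [y]
data LastView a = Empty | Snoc [a] a deriving (Eq, Show)

lastView :: [a] -> LastView a
lastView []       = Empty
lastView (x : xs) =
  case lastView xs of          -- plays the role of the with-clause
    Empty     -> Snoc [] x
    Snoc ys y -> Snoc (x : ys) y
```

What the dependent version adds is that the type checker rewrites the scrutinee to [] or ys ++ y ◅ [] in each branch, which the dotted patterns record.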

Some exercises:

  • Define an inductive family for natural numbers less than a given natural number.
  • Use this family to define a safe lookup function for vectors.
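For readers who would rather attempt these in Haskell, the intended shape of the answer looks roughly like this (a sketch using GADTs; the Agda version, and the proofs it enables, are still left to you):

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

data Nat = Z | S Nat

-- Fin n: natural numbers strictly below n (the first exercise)
data Fin (n :: Nat) where
  FZ :: Fin (S n)
  FS :: Fin n -> Fin (S n)

-- Length-indexed vectors, as in the demo
data Vec a (n :: Nat) where
  Nil  :: Vec a Z
  Cons :: a -> Vec a n -> Vec a (S n)

-- Safe lookup: a Fin n index can never run off a Vec a n (the second exercise)
lookupV :: Fin n -> Vec a n -> a
lookupV FZ     (Cons x _)  = x
lookupV (FS i) (Cons _ xs) = lookupV i xs
```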

If you want more exercises, check out the Agda courses from the Types Summer School 2007 and the LerNET Summer School 2008 (available from the Agda wiki). Conor’s Epigram lecture notes from the AFP Summer School 2004 also contain a bunch of exercises.

The shortest beta-normalizer

April 13th, 2008 by Thorsten

I presented the shortest implementation of a normalizer for beta equality for the untyped lambda calculus, based on Normalisation by Evaluation.

We start with the definition of lambda terms using de Bruijn levels:

\begin{code}
data T = Var Int | Lam T | App T T
\end{code}

The reason for using de Bruijn levels is that weakening comes for free. The other side of the coin is that one has to reindex bound variables when substituting. However, this is not an issue because we never do that.

We define values by solving a negative domain equation:

\begin{code}
data V = Fun (V -> V)
\end{code}

It is easy to define application:

\begin{code}
app :: V -> V -> V
app (Fun f) v = f v
\end{code}

We can evaluate lambda terms in the Value domain using a list of values as the environment:

\begin{code}
eval :: T -> [V] -> V
eval (Var x) e = e !! x
eval (Lam t) e = Fun (\ v -> eval t (e++[v]))
eval (App t u) e = app (eval t e) (eval u e)
\end{code}

All we have to do now is to invert evaluation, i.e. to define a function

\begin{code}
quote :: V -> T
\end{code}

To be able to do this we first have to extend values with variables. Then, to be able to extend app, we also have to represent stuck applications, or neutral values:

\begin{code}
data V = Fun (V -> V) | NVar Int | NApp V V

app :: V -> V -> V
app (Fun f) v = f v
app n       v = NApp n v
\end{code}
The code for eval remains unchanged. Now we can implement quote, though we need an extra parameter to keep track of the free variables:

\begin{code}
quote :: Int -> V -> T
quote x (Fun f)    = Lam (quote (x+1) (f (NVar x)))
quote x (NVar y)   = Var y
quote x (NApp n v) = App (quote x n) (quote x v)
\end{code}
Now nf just evaluates and quotes the result:

\begin{code}
nf :: T -> T
nf t = quote 0 (eval t [])
\end{code}
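For completeness, here is the whole normalizer assembled into one runnable module, with deriving clauses added so that results can be inspected:

```haskell
-- The complete normalizer from the pieces above
data T = Var Int | Lam T | App T T deriving (Eq, Show)
data V = Fun (V -> V) | NVar Int | NApp V V

app :: V -> V -> V
app (Fun f) v = f v
app n       v = NApp n v

eval :: T -> [V] -> V
eval (Var x)   e = e !! x
eval (Lam t)   e = Fun (\v -> eval t (e ++ [v]))
eval (App t u) e = app (eval t e) (eval u e)

quote :: Int -> V -> T
quote x (Fun f)    = Lam (quote (x + 1) (f (NVar x)))
quote _ (NVar y)   = Var y
quote x (NApp n v) = App (quote x n) (quote x v)

nf :: T -> T
nf t = quote 0 (eval t [])

-- (λx. x) (λx. x) normalizes to λx. x; with de Bruijn levels
-- the identity is Lam (Var 0)
example :: T
example = nf (App (Lam (Var 0)) (Lam (Var 0)))
```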

We discussed how to verify such a normalisation function – I suggested using the partiality monad.

Newton with scratch

April 4th, 2008 by Thorsten

I demonstrated how one can simulate Newtonian mechanics using Scratch, a visual, impure language for children developed by people at MIT. My goal was to write a little program showing that orbits are a consequence of Newton’s first law (inertia) and the law of gravity, which can be interpreted as a constant move towards the centre of gravity. However, my first attempt had the problem that the planet always falls into the sun. The reason for this kind of friction is the discretisation of the movement. A little modification solves this problem but leads to an oscillating orbit. Nisse speculated (not entirely seriously) that this may be the real reason for the perihelion precession of Mercury…
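The discretisation effect is easy to reproduce outside Scratch. The following Haskell sketch is my reconstruction, not the Scratch program: naive (forward) Euler integration updates the position with the old velocity and systematically drifts in energy, while the “little modification” of updating the velocity first (symplectic Euler) keeps the orbit bounded, at the price of a small oscillation.

```haskell
-- Orbit in an inverse-square field centred at the origin (units with GM = 1)
type State = ((Double, Double), (Double, Double))   -- (position, velocity)

accel :: (Double, Double) -> (Double, Double)
accel (x, y) = (-x / r3, -y / r3)
  where r3 = (x * x + y * y) ** 1.5

-- Forward Euler: position uses the old velocity
eulerStep :: Double -> State -> State
eulerStep h ((x, y), (vx, vy)) =
  ((x + h * vx, y + h * vy), (vx + h * ax, vy + h * ay))
  where (ax, ay) = accel (x, y)

-- Symplectic Euler: update the velocity first, then the position
symplecticStep :: Double -> State -> State
symplecticStep h ((x, y), (vx, vy)) =
  ((x + h * vx', y + h * vy'), (vx', vy'))
  where (ax, ay)   = accel (x, y)
        (vx', vy') = (vx + h * ax, vy + h * ay)

energy :: State -> Double
energy ((x, y), (vx, vy)) =
  (vx * vx + vy * vy) / 2 - 1 / sqrt (x * x + y * y)

-- A circular orbit at radius 1
circular :: State
circular = ((1, 0), (0, 1))
```

Iterating eulerStep from the circular orbit shows a steady energy drift; symplecticStep merely oscillates about the true energy.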

Productivity

March 14th, 2008 by Nils Anders Danielsson

In today’s FP Lunch I mentioned a problem with Coq’s guardedness checker.

Consider the following simple representation of languages (using Agda-like syntax):

codata L : Set where
  ε   : L
  sym : Char -> L
  _·_ : L -> L -> L
  _∣_ : L -> L -> L

Here ε is the language consisting of the empty string, sym c is the language containing only the character c, _·_ encodes sequencing, and _∣_ takes the union of two languages. I want to be able to represent languages with strings of arbitrary (but finite) length, so L is a coinductive definition.

Fair schedulers

March 8th, 2008 by Hancock

From time to time, I try to report things in this blog. If someone (local) can tell me how to put tags in so stuff like ASCII “art” comes out nice, I’d be grateful. It is beyond me. “code” doesn’t work.

This lunchtime was in 2 bits. First Hank spoke, about what he called “functional operating systems”. He seemed to mean combinations of continuous stream processing functions involving fair stream schedulers.

What is a fair stream scheduler? A function

m : Stream A -> Stream B -> Stream (A + B)

whose value always contains infinitely many inL’s and infinitely many inR’s.
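The simplest example of a fair scheduler is strict alternation. A Haskell sketch, on lists standing in for streams (the names are mine):

```haskell
-- Tagged merge of two streams
data Sum a b = InL a | InR b deriving (Eq, Show)

-- Strict alternation: trivially fair, since every step emits
-- one InL and one InR
alternate :: [a] -> [b] -> [Sum a b]
alternate (a : as) (b : bs) = InL a : InR b : alternate as bs
alternate _        _        = []   -- only reached on truncated input
```

Hank’s point is that the interesting schedulers are exactly the ones that may read unboundedly many elements from one side before switching, yet still switch infinitely often.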

Hank claimed that any such function could be represented by a program of type:

M = nu ((A->)* . (B ->)+. (A->))

where F* denotes the free monad over F, and F+ = F . F*.
He gave the constructors of (A->)* X the names Ret and Rd:

Ret : B -> (A->)* B
Rd : (A -> (A->)* B) -> (A ->)* B

The idea is as follows.

First, define

eat' : (A->)* C -> Str A -> ([A], C, Str A)
eat' (Ret c) as     = ([], c, as)
eat' (Rd f)  (a:as) = let (al, c, op) = eat' (f a) as in (a:al, c, op)

Now, let’s define

schd :  M -> Str A -> Str B -> Str (A +B)
schd m as bs = let
  t = elim m
  (al,f,as') = eat' t as : ([A], (B -> (B ->)* ((A ->) M)), Str A)
  t' = f (hd bs)
  (bl,g,bs') = eat' t' (tl bs) : ([B], (A ->) M, Str B)
  m' = g (hd as')
               in (map inL al)
                  ++ [inR (hd bs)]
                  ++ (map inR bl)
                  ++ [inL (hd as')]
                  ++ schd m' (tl as') bs'

In essence, m eats zero or more A’s, then one or more B’s, then an A, then loops. As m reads things, we pass them through, appropriately tagged, to the output.

Hank asserted, in a voice of thunder, that the output of such a function contained infinitely many inR’s, and infinitely many inL’s; moreover that any such interleaving was the output of such a function. He conjectured that

nu ((A->)* . (B ->)+. (A->))
= nu ((B->)* . (A ->)+. (B->))

and then began to talk rather obscurely about circuits

      /|      _________      |\
     / |---->|    f    |---->| \
    /  |      ---------      |  \
+->    |      _________      | m  >-+
|   \  |---->|    g    |---->|  /   |
|    \ |      ---------      | /    |
|     \|---->out      in---->|/     |
|                                   V
+<----------------------------------+

which "worked properly" for any choice of scheduler m, suggesting that at any rate he had made sense of the "for any", and that "working properly" might be, minimally, something like continuity (from in to out). He said this idea might help understanding delay-insensitive circuitry, or circuitry which is insensitive to buffering.

This all seemed to be in the context of the work Hank, Dirk and Neil had done some while ago on "stream eating", which represented continuous functions of type Str A -> Str B by nu ((A->)* (B \times)). There is a recent draft here.
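That stream-eating representation can be sketched in Haskell as follows (my transcription; Get and Put correspond to the (A ->) and (B ×) layers, and the data type does not enforce that only finitely many Gets occur between Puts):

```haskell
-- A stream processor: read zero or more inputs, then emit an output, repeat
data SP a b = Get (a -> SP a b) | Put b (SP a b)

-- Interpret a stream processor as a function on (lists standing in for) streams
run :: SP a b -> [a] -> [b]
run (Put b sp) as       = b : run sp as
run (Get f)    (a : as) = run (f a) as
run (Get _)    []       = []    -- truncated input; real streams are infinite

-- Example: the continuous function mapping each element to its successor
add1 :: SP Int Int
add1 = Get (\a -> Put (a + 1) add1)
```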

Order Preserving Embeddings

March 7th, 2008 by James Chapman

Last Friday I talked about OPEs, which Peter Morris and I have been thinking about recently. OPEs give you a nice way of implementing weakening for a syntax with de Bruijn variables and binding. Here’s one:

data Ty : Set where
  ι   : Ty
  _→_ : Ty -> Ty -> Ty

data Con : Set where
  ε   : Con
  _,_ : Con -> Ty -> Con

data Var : Con -> Ty -> Set where
  vZ : Var (Γ , σ) σ
  vS : (τ : Ty) -> Var Γ σ -> Var (Γ , τ) σ

data Tm : Con -> Ty -> Set where
  var : Var Γ σ -> Tm Γ σ
  λ   : Tm (Γ , σ) τ -> Tm Γ (σ → τ)
  _$_ : Tm Γ (σ → τ) -> Tm Γ σ -> Tm Γ τ

The simply-typed lambda calculus with one base type. If you throw away the types, your contexts become natural numbers, your variables become Fin, and your terms become the well-scoped terms.

The operation we want is this one:

wk : (τ : Ty) -> Tm Γ σ -> Tm (Γ , τ) σ

But we cannot implement it by induction on the terms:

wk τ (var x) = var (vS τ x)
wk τ (t $ u) = wk τ t $ wk τ u
wk τ (λ t)   = λ ? -- blast!

The type of ? is Tm ((Γ , τ) , σ) ρ where σ and ρ are the domain and range of the lambda.

Clearly we need a more liberal kind of weakening where we can introduce new variables earlier in the context. We could introduce thinning like this:

th : forall Δ τ -> Tm (Γ + Δ) σ -> Tm ((Γ , τ) + Δ) σ

+ is context extension here. We can then define weakening as:

wk = th ε

This works ok until you have to prove properties about thinnings. For example:

wk ρ (th Γ' τ v) == th (Γ' , ρ) τ (wk ρ v)

To prove this you need a more general lemma in which both sides are thinnings, and that lemma is very difficult even to state, because associativity of + does not come for free in type theory.

Ok, that's the problem. What's the solution? Yes: order preserving embeddings. The idea is to give an inductive description of exactly those functions from one context to another that we are interested in. We can then prove the properties we want in a category whose objects are contexts and whose morphisms are OPEs, and lift our results to the category of terms.

What are OPEs then?

data OPE : Con -> Con -> Set where
  done : OPE ε ε
  keep : forall {Γ Δ σ} -> OPE Γ Δ -> OPE (Γ , σ) (Δ , σ)
  skip : forall {Γ Δ σ} -> OPE Γ Δ -> OPE (Γ , σ) Δ

This type classifies exactly those operations from Con to Con where all you can do is say which entries of the target context (the first argument, by the way) to skip.

We define identity and composition like so:

id : OPE Γ Γ
id {ε}     = done
id {Γ , σ} = keep id
_•_ : OPE B Γ -> OPE Γ Δ -> OPE B Δ
done   • done   = done
skip f • g      = skip (f • g)
keep f • keep g = keep (f • g)
keep f • skip g = skip (f • g)

and can easily show that they satisfy the left and right identity laws and associativity of composition if we feel so inclined.

Given identity it is easy to define the weakening OPE:

weak : forall τ -> OPE (Γ , τ) Γ
weak τ = skip id

and I can state my property easily:

weak ρ • f == (keep f) • (weak ρ)

The proof is now one line.
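An untyped Haskell shadow of OPEs, on which the identity and composition laws, and the property above, can at least be tested (the names are mine):

```haskell
-- An OPE as a list of instructions; done = [], and the first (bigger)
-- context has one entry per list element
data Step = Keep | Skip deriving (Eq, Show)
type Ope = [Step]

-- The action on de Bruijn indices: send a variable of the smaller
-- context into the bigger one (untyped vmap)
rename :: Ope -> Int -> Int
rename (Keep : _) 0 = 0
rename (Keep : f) n = 1 + rename f (n - 1)
rename (Skip : f) n = 1 + rename f n
rename []         _ = error "variable not in scope"

-- Composition, clause for clause as in the Agda definition
comp :: Ope -> Ope -> Ope
comp []         []         = []
comp (Skip : f) g          = Skip : comp f g
comp (Keep : f) (Keep : g) = Keep : comp f g
comp (Keep : f) (Skip : g) = Skip : comp f g
comp _          _          = error "ill-matched OPEs"

-- weak on top of an identity-shaped OPE for the rest of the context
weaken :: Ope -> Ope
weaken f = Skip : f
```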

We define functors:

vmap : OPE Γ Δ -> Var Δ σ -> Var Γ σ
tmap : OPE Γ Δ -> Tm Δ σ -> Tm Γ σ

Now we can define weakening:

wk τ = tmap (weak τ) 

And that's it. Ok, so we did have to do some work, but the structure is just the categorical structure you'd expect. In fact, for a dependently-typed version of this (using induction-recursion) you are forced to prove the laws about identity, composition and map as you define the operations. The structure is forced upon you.

A more liberal version of the above, called Order Preserving Functions, is mentioned in Conor's excellent AFP notes (well worth a read for any budding (wilting?) Epigram or Agda programmer). Peter and I plan to keep digging on this subject.

Unimo

February 27th, 2008 by Nils Anders Danielsson

The discussions on Feb 15 concerned correct-by-construction methods for defining monads.

“Programming Monads Operationally with Unimo” (Chuan-kai Lin, ICFP’06) presents a data type Unimo which can be used to construct monads. These monads are supposed to satisfy the monad laws by construction. Furthermore they make use of the associative law to avoid the quadratic behaviour that sometimes affects left-nested uses of bind (analogously to the quadratic complexity of (...(xs_1 ++ xs_2) ++ ...) ++ xs_n).

Unfortunately there are some flaws in the paper; the right unit monad law is not actually satisfied in general, and the termination of the main run function is not established properly. Instead I presented a simplified variant which is less efficient, but perhaps more elegant.

Standard Haskell

February 24th, 2008 by Conor

Spotted this on my way from London to Glasgow last Thursday.

Quite right!