Exponentials of containers (again)

September 5th, 2008 by Thorsten

I presented a simple derivation of Paul’s construction of the exponential of containers. I used the opportunity to discuss the exponential of functors and the Yoneda lemma. My derivation is based on the observation that (P \to) \Rightarrow F \simeq F \circ (P +), writing \Rightarrow for the exponentiation of functors, where (P \to) X = P \to X is the Naperian functor.
This can be shown using the Yoneda lemma. Then, given a container \Sigma s:S.P s \to and a functor F, we can reason as follows:

\begin{eqnarray*}
\lefteqn{ (\Sigma s:S.P s \to) \Rightarrow F }\\
& \simeq & \Pi s:S.(P s \to) \Rightarrow F \\
& \simeq & \Pi s:S.F \circ (P s +)
\end{eqnarray*}

This shows that one can exponentiate any functor by a container (predicatively), and that the exponential of containers is again a container, since we know that (P s +) is a container and containers are closed under composition and products. Expanding the definition gives rise to Paul’s definition.
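Spelling out the key observation (my reconstruction of the Yoneda argument, using the end formula for exponentials in [Set, Set]):

\begin{eqnarray*}
((P \to) \Rightarrow F)\,X & \simeq & \int_Y F\,Y^{(X \to Y) \times (P \to Y)} \\
& \simeq & \int_Y F\,Y^{(X + P) \to Y} \\
& \simeq & F\,(X + P)
\end{eqnarray*}

The middle step uses the universal property of the coproduct, and the last step is the Yoneda lemma in end form.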

Modular Monad Transformers

September 4th, 2008 by Mauro Jaskelioff

Monads are an essential tool for functional programmers. When it comes to combining monads, the most popular technique is to use monad transformers to add an effect layer to an existing monad. Do you want a new state for your monad? Use the state monad transformer on your monad and voilà! It’s done!

Actually, you are not really done. Monads usually come with a set of operations to manipulate the effect they model. For example, the state monad comes with operations for reading and updating the state (get and put), and the environment monad (Reader in Haskell) comes with operations for reading the environment and performing a computation in a modified environment (ask and local). If you want the operations of your monad to be supported by the transformed monad, then you need to show how to lift the operations to the transformed monad. Unless a technology is used for automating these liftings, you need to provide an operation instance for every monad transformer in the library, leading to $\mathcal{O}(n^2)$ operation instances. That’s not modular!

The problem is not in monad transformers (other technologies for combining monads such as distributive laws or coproduct of monads suffer from the same problem). The problem is that many of the operations associated with a monad are not really compatible with the monad.

Monads have axioms that ensure that return and bind are well-behaved, so it is not unreasonable to think that there should be some axioms associated with their operations. So I propose to require operations to be algebraic. In our context this means that an operation h for a monad M should be of type forall a . S (M a) -> M a for some functor S, such that:

join . h = h . fmap join

where join is the multiplication of the monad join = (>>= id).

For example, uncurry (++) :: ([a],[a]) -> [a] is algebraic for the list monad, with S a = (a,a). On the other hand, uncurry local :: (r->r, Reader r a) -> Reader r a has the correct type, with functor S a = (r -> r, a), but is not algebraic, as the condition does not hold. The operation ask :: Reader r r doesn’t seem to have the correct type, but it is isomorphic to a function ask' :: (r -> Reader r a) -> Reader r a, which is algebraic for Reader r with S a = r -> a. It’s convenient to consider algebraic operations up to isomorphism. Then, ask and (++) are algebraic.
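As a quick sanity check (my own code, not from the talk), the algebraicity law for uncurry (++) can be tested on a sample input. Note that S a = (a, a) is the diagonal pair functor, which is not the Prelude Functor instance for pairs, so we define it ourselves:

```haskell
import Control.Monad (join)

-- The diagonal functor S a = (a, a); the Prelude instance for (,)
-- maps only the second component, so we wrap it in a newtype.
newtype Two a = Two (a, a)

instance Functor Two where
  fmap f (Two (x, y)) = Two (f x, f y)

-- uncurry (++), algebraic for the list monad.
h :: Two [a] -> [a]
h (Two (xs, ys)) = xs ++ ys

-- Check join . h == h . fmap join on one sample.
main :: IO ()
main = do
  let sample = Two ([[1, 2], [3]], [[4 :: Int]])
  print (join (h sample) == h (fmap join sample))  -- True
```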

So, what’s good about algebraic operations? They can be lifted by any monad morphism! Every algebraic operation is isomorphic to a natural transformation in the Kleisli category of the monad.



\begin{code}
toK      :: (Functor s, Monad m) =>  (forall a. s (m a) -> m a) ->
                                     (forall a. s a -> m a)
toK f    = f . fmap return

fromK    :: (Functor s, Monad m) =>  (forall a. s a -> m a) ->
                                     (forall a. s (m a) -> m a)
fromK k  = join . k
\end{code}

This means that for every algebraic operation h :: forall a. S (M a) -> M a, monad N, and monad morphism lift :: forall a. M a -> N a, we get an algebraic operation fromK (lift . toK h) :: forall a. S (N a) -> N a, i.e. the lifting of h to N.
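The recipe can be packaged as a combinator (my sketch; liftAlgOp is a hypothetical name). To demonstrate it concretely, note that concat is algebraic for the list monad (it is join itself, with S = []), and that reverse satisfies the monad morphism laws for the list monad:

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Monad (join)

toK :: (Functor s, Monad m) => (forall a. s (m a) -> m a) -> (forall a. s a -> m a)
toK f = f . fmap return

fromK :: (Functor s, Monad m) => (forall a. s a -> m a) -> (forall a. s (m a) -> m a)
fromK k = join . k

-- Lift an algebraic operation on m along a monad morphism m -> n.
liftAlgOp :: (Functor s, Monad m, Monad n)
          => (forall a. m a -> n a)       -- monad morphism
          -> (forall a. s (m a) -> m a)   -- algebraic operation on m
          -> s (n b) -> n b               -- the lifted operation on n
liftAlgOp morph h = fromK (morph . toK h)

main :: IO ()
main = print (liftAlgOp reverse concat [[1, 2], [3 :: Int]])  -- [3,1,2]
```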

Append is algebraic for the list monad, so we get a “liftable” version of append: for every monad morphism out of the list monad, we get a lifted operation.



\begin{code}
appendL           :: Monad m => (forall x. [x] -> m x) -> m a -> m a -> m a
appendL mm x y  = join  $ mm  $ return x ++ return y
\end{code}
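As a concrete instance (my example, not from the post): reverse is a monad morphism from the list monad to itself, so appendL reverse is a flipped append, while appendL id recovers the original:

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Monad (join)

appendL :: Monad m => (forall x. [x] -> m x) -> m a -> m a -> m a
appendL mm x y = join $ mm $ return x ++ return y

main :: IO ()
main = do
  print (appendL id [1, 2] [3, 4 :: Int])       -- [1,2,3,4]
  print (appendL reverse [1, 2] [3, 4 :: Int])  -- [3,4,1,2]
```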

Of course, not all the usual operations are algebraic for their monad. This does not mean that the operation is incorrect, but that the monad should be more powerful. The operations determine the appropriate monad.

The operation local is not algebraic for Reader, but it becomes algebraic if we implement the environment monad in terms of State:



\begin{code}
type Reader' r a  = State r a

reader'           :: (r -> a) -> Reader' r a
reader' e         = State (\r -> (e r, r))

runReader'        :: Reader' r a -> r -> a
runReader' r      = fst . runState r

local'           :: (r -> r) -> Reader' r a -> Reader' r a
local' f a       = join  $ State  $ \r -> (a, f r)

localL           :: Monad t =>  (forall x. Reader' r x -> t x) ->
                                (r -> r) -> t a -> t a
localL lift f t  = join $ lift $ State  (\r -> (t, f r))

\end{code}

Since local' is algebraic for Reader', we get a liftable operation localL. The apparently arbitrary move from Reader to State may seem like a big eureka step. Actually, it’s the result of applying a general technique for finding appropriate monads for a given set of operations (paper in preparation).
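To run the Reader'-as-State example on its own, here is a self-contained version (my sketch: it defines a minimal State with an explicit constructor, since the post assumes the old mtl-style State, and joinState stands in for join):

```haskell
-- A minimal State monad with an explicit constructor.
newtype State s a = State { runState :: s -> (a, s) }

-- Monadic join, written directly to avoid instance boilerplate.
joinState :: State s (State s a) -> State s a
joinState (State f) = State $ \s -> let (m, s') = f s in runState m s'

type Reader' r a = State r a

reader' :: (r -> a) -> Reader' r a
reader' e = State (\r -> (e r, r))

runReader' :: Reader' r a -> r -> a
runReader' r = fst . runState r

-- local' runs its argument in an environment modified by f.
local' :: (r -> r) -> Reader' r a -> Reader' r a
local' f a = joinState $ State $ \r -> (a, f r)

main :: IO ()
main = putStrLn (runReader' (local' (+ 1) (reader' show)) 5)  -- prints 6
```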

Using these ideas, I managed to re-implement the monad transformer library with automatic lifting of operations. I will release the library as soon as I write a minimum of documentation.

Hereditary substitutions over simply typed λ-calculus

August 13th, 2008 by Chantal Keller

Last Friday, I talked about what I did during the second part of my internship: an Agda implementation of hereditary substitutions over the simply-typed λ-calculus by Thorsten Altenkirch, based on ideas from Conor McBride and Andreas Abel.

Hereditary substitutions are substitutions over typed λ-calculus that are structurally recursive. They provide a normalization algorithm whose termination can be proved by a simple lexicographic induction. The main idea of the algorithm is to substitute and re-normalize simultaneously.

We will restrict ourselves to the simply-typed λ-calculus and use de Bruijn notation. We will also use a notation from Conor: if Γ is a context and x a variable in Γ, then Γ-x is the context Γ without the variable x.

Normal forms in the simply-typed λ-calculus are terms that are maximally β-reduced and η-expanded. There are two ways of constructing them:

  • λn : Nf (Γ,σ) τ -> Nf Γ (σ → τ)
  • ne : Var Γ σ -> Sp Γ σ ο -> Nf Γ ο

where spines are lists of normal forms:

  • ε : Sp Γ σ σ
  • _,_ : Nf Γ σ -> Sp Γ τ ρ -> Sp Γ (σ → τ) ρ

We want to define hereditary substitutions over normal forms:

_ [ _ := _ ] : Nf Γ σ -> (x : Var Γ τ) -> Nf (Γ-x) τ -> Nf (Γ-x) σ

in parallel with substitutions over spines:

_ { _ := _ } : Sp Γ σ τ -> (x : Var Γ ρ) -> Nf (Γ-x) ρ -> Sp (Γ-x) σ τ

and application of a normal form to a spine:

_ @ _ : Nf Γ σ -> Sp Γ σ τ -> Nf Γ τ

The definitions are the following ones:

_ [ _ := _ ]

  • (λn t)[x := u] = λn t[x+ := u+]
  • (ne y ts)[x := u] = ne y’ ts{x := u}, if x ≠ y, where y’ is the variable such that y’+x == y
  • (ne x ts)[x := u] = u @ ts{x := u}

_ { _ := _ }

  • ε{x := u} = ε
  • (t , ts){x := u} = t[x := u] , ts{x := u}

_ @ _

  • u @ ε = u
  • (λn u) @ (t , ts) = u[ø := t] @ ts

This is structurally recursive and terminates: as measures, we consider (type of x, length of the normal form) for _ [ _ := _ ]; (type of x, length of the spine) for _ { _ := _ }; and (type of u) for _ @ _. It is easy to see that when each function calls itself, and when _ [ _ := _ ] and _ { _ := _ } call each other, the measures strictly decrease (in the lexicographic order).

The problem is when _ [ _ := _ ] and _ @ _ call each other. In that case, we have:

  • (ne x (t , ts))[x := λn u] = (λn u) @ (t’ , ts’) where (t’ , ts’) == (t , ts){x := λn u}
  • (λn u) @ (t’ , ts’) = (u[ø := t']) @ ts’

When (ne x (t , ts))[x := λn u] calls _ @ _, x and (λn u) have type (σ → τ). When (λn u) @ (t’ , ts’) calls _ [ _ := _ ], ø has type σ, and σ < σ → τ; and (u[ø := t']) has type τ, and τ < σ → τ.
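The equations above can be transcribed into Haskell over an untyped representation (my own sketch; without the type indices of the Agda version, the termination argument is of course invisible to the compiler):

```haskell
-- Normal forms with de Bruijn indices: a neutral term is a variable
-- applied to a spine (list) of normal forms.
data Nf = Lam Nf | Ne Int [Nf] deriving (Eq, Show)

-- Weakening: shift free variables >= the cutoff c up by one.
shift :: Int -> Nf -> Nf
shift c (Lam t)   = Lam (shift (c + 1) t)
shift c (Ne y ts) = Ne (if y >= c then y + 1 else y) (map (shift c) ts)

-- Hereditary substitution; _[_:=_] and _{_:=_} merged into one function.
subst :: Nf -> Int -> Nf -> Nf
subst (Lam t) x u = Lam (subst t (x + 1) (shift 0 u))
subst (Ne y ts) x u
  | y == x    = appSp u ts'                          -- hit: apply u to the spine
  | otherwise = Ne (if y > x then y - 1 else y) ts'  -- miss: strengthen y past x
  where ts' = map (\t -> subst t x u) ts

-- _ @ _ : apply a normal form to a spine.
appSp :: Nf -> [Nf] -> Nf
appSp u []           = u
appSp (Lam u) (t:ts) = appSp (subst u 0 t) ts
appSp (Ne y ss) ts   = Ne y (ss ++ ts)  -- cannot occur for well-typed input

main :: IO ()
main = print (appSp k [Ne 9 [], Ne 8 []])  -- K a b = a, i.e. Ne 9 []
  where k = Lam (Lam (Ne 1 []))            -- K = \x y -> x
```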

Once we have established hereditary substitutions, we can normalize terms:

  • nf (var v) = nf’ v ε where:
  1. nf’ v ts = ne v ts if ts : Sp Γ σ ο
  2. nf’ v ts = λn nf’ v (ts+,(nf’ ø ε)) if ts : Sp Γ σ (τ → ρ)
  • nf (λ t) = λn (nf t)
  • nf (app t u) = t’[ø := nf u] where nf t == λn t’

In reverse, we can easily define an embedding function <_> that transforms a normal form into a term.

During my internship, I intended to prove three properties:

  • t ≈ < nf t > (completeness)
  • t ≈ u => nf t == nf u (soundness)
  • nf <t> == t (stability)

I nearly managed to complete these proofs; only one lemma is missing: the fact that η-equality holds for normal forms:

λn (app’ t (nf’ ø ε)) == t where t has an arrow type and app’ (λn t) u = t[ø := u]

Proving this is similar to proving (t+1)[ø := (nf' ø ε)] == t. Nils has previously carried out this proof, but in another framework, and we did not see how to prove this property in the framework presented above.

An ad-hoc approach to productive definitions

August 1st, 2008 by Nils Anders Danielsson

Last week we talked about methods for defining productive functions in total languages.

In EWD 792 Dijkstra defines (a slight variant of) the stream of Hamming numbers roughly as follows (here I use Haskell syntax, and merge merges two ordered streams into one):

hamming = 1 : merge (map (2 *) hamming) (map (3 *) hamming)

(This definition is attributed to van de Snepscheut.) However, this definition is not guarded by constructors. How, then, can we implement it in a language like Agda, which only allows definitions which are either structurally recursive or guarded by constructors (with some extensions)?
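In Haskell itself the definition is unproblematic, since laziness provides the productivity. Here is a runnable version (my code: I assume merge is intended to discard duplicates, as usual for this example):

```haskell
-- Merge two ordered streams, discarding duplicates.
merge :: Ord a => [a] -> [a] -> [a]
merge xs@(x : xt) ys@(y : yt)
  | x < y     = x : merge xt ys
  | x > y     = y : merge xs yt
  | otherwise = x : merge xt yt
merge xs ys = xs ++ ys  -- only reached on finite inputs

hamming :: [Integer]
hamming = 1 : merge (map (2 *) hamming) (map (3 *) hamming)

main :: IO ()
main = print (take 10 hamming)  -- [1,2,3,4,6,8,9,12,16,18]
```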

CCC-ness of the category of containers

July 11th, 2008 by Hancock

Paul Levy seems to have taken an interest in the category of containers, and in particular in whether or not it is a CCC. (Like [Set,Set], in which it embeds fully and faithfully.)

In his thesis, Pierre Hyvernat proved that (interaction structures, which are essentially) indexed containers provide a model of multiplicative linear logic, which is to say tensor (><), lolly (-o) and bang (!). A while ago, we "back-ported" at least tensor and lolly to the category of (plain 'ole) containers. I'm not 100% sure, but I think we backported the bang. (The details are in the containers blog, which seems to be defunct.) It would follow that if we define the exponential of containers so that (A -> B) = (!A) -o B, we’d get what ought to be the exponential. Some slightly inebriated investigations last night persuaded me that it could well work out. The question arises of relating this construction to Paul L’s, which looks prima facie a bit different: Paul’s centres round partial functions, mine centres round bags.

The discussion was mostly about the details of the bang construction, eg. the right way to deal with multisets constructively. (Pierre used finitely indexed families modulo permutations of the index set.) Thorsten had a vague memory that decidability of equality was needed somewhere. Indeed, this rings some kind of bell. The moral was that one should try to stay out of the pub long enough to nail this stuff down.

Agda compiler and FFI

July 4th, 2008 by Nils Anders Danielsson

Today I discussed Agda’s compiler and FFI.

braincurry

July 3rd, 2008 by Tom Nielsen

Last Friday, I presented the braincurry project, which is my attempt, with assistance from Henrik Nilsson, to create a domain-specific language to define and execute experiments and simulations related to cellular neuroscience. I spent a bit of time introducing neurons, dendrites, axons, synapses and the problem of synaptic integration (given a pattern of inputs to a neuron, can we predict its outputs?). I also talked briefly about the preparation I am currently working on in Tom Matheson’s lab in Leicester, the locust visual system and the influence of locust swarming on agriculture and crop failure.

Experiments with invertebrates, and in particular insects, can include a large number of modalities, including visual, auditory and olfactory sensory stimulation, intracellular or extracellular recording from multiple neurons, and movement observation, all at the same time. I presented a prototype language for defining these experiments and their associated analyses.

I restrict myself to experiments that consist of trials, i.e. finite episodes during which a set of stimuli (visual, waveforms of currents to be injected into neurons, etc.) are presented and responses (neuron spikes, etc.) are recorded. Analog input and output is controlled from a simple FFI binding to the Comedi library, which in turn controls a National Instruments data acquisition board. All aspects of the stimuli must be known before the beginning of the trial, so there are no dynamic or reactive components, although stimuli can be influenced by the outcome of previous trials. This restriction suggests a very simple definition of a trial, which is a list of trial specifications:



> type Trial = [TrialSpecification]
> data TrialSpecification  = RecordWaveform Channel
>                          | PlayVideo …
>                          | PlayAudio …

In addition, I would like to be able to specify analyses that are performed online, ie immediately after a trial has concluded. I can implement this in an untyped manner by giving names to recordings and additionally specify names as inputs and outputs of analyses. First I define a results data type:



> data Result  = ResWaveform [Double]
>              | ResNumber Double
>              | ResBool Bool
>              | …
> type Name = String

And modify TrialSpecification:



> data TrialSpecification
>   = RecordWaveform Channel Name
>   | PlayVideo …
>   | PlayAudio …
>   | Analysis Name (Result -> Result) Name

One advantage of this approach is that I do not have to specify the order in which analyses are carried out if they depend on one another; their dependencies can be resolved automatically, although not in a type-safe manner. Trials are also very easy to compose. We discussed various possibilities for defining a type-safe trial, including monads and GADTs. I talked a bit about how this language may allow me to use parametrised trials to define general higher order trials that can determine, for instance, whether any one trial parameter influences some experimental outcome.
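The automatic dependency resolution mentioned above can be sketched as a worklist topological sort over names (entirely my own sketch, with hypothetical types; the real project would schedule its Analysis entries along these lines):

```haskell
import Data.List (partition)

type Name = String

-- A hypothetical analysis step: consumes one named result, produces another.
data Step = Step { input :: Name, output :: Name } deriving (Eq, Show)

-- Order steps so that every input is available (recorded, or produced by
-- an earlier step) before it is consumed; Nothing on cycles/missing names.
schedule :: [Name] -> [Step] -> Maybe [Step]
schedule _ [] = Just []
schedule avail steps =
  case partition (\s -> input s `elem` avail) steps of
    ([], _)       -> Nothing
    (ready, rest) -> (ready ++) <$> schedule (map output ready ++ avail) rest

main :: IO ()
main = print (schedule ["wave"] [Step "spikes" "rate", Step "wave" "spikes"])
```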

Recursion with boxes

June 21st, 2008 by Thorsten

While recursion in conventional functional languages is well understood, i.e. we know how to typecheck and how to run recursive programs, it is more of an issue for languages with dependent types. The reason is that a typechecker for a dependently typed language has to evaluate terms symbolically (i.e. in the presence of variables) and has to decide definitional equality, i.e. the equality generated by the \alpha- and \beta-rules and by unfolding (recursive) definitions.

The main problem is how to stop the infinite unfolding of recursive definitions. If we accept that \mathrm{let}\,x=t\,\mathrm{in}\,u \equiv \mathrm{let}\,x=t\,\mathrm{in}\,u[x:=t], and t contains occurrences of x, then we can continue to unfold forever. This applies even to the recursive definitions of completely innocent total functions such as addition, e.g.
\mathrm{let}\, add\, =\, \lambda m\, n \to\, \mathrm{case}\, m\, \mathrm{of} 0 \Rightarrow\, n \mid S\, m' \Rightarrow\, S(add\,m'\,n)\, \mathrm{in}\, add\, 3\, 4

This issue is avoided in systems like Coq or Agda by restricting the evaluation of recursive definitions to certain contexts. E.g. while the recursive definition of add is unfolded in add\,(S x)\,y, because add is applied to a term in constructor form, it is not unfolded in the context add\,x\,y.

However, this approach has a number of disadvantages: it means that evaluation and pattern matching are built into the core language. (Another alternative is to translate recursive definitions into the basic combinators of Type Theory; this is the approach taken in Epigram.) If we want to extend the language to corecursion, we have to modify the notion of evaluation contexts, and indeed Coq makes a difference between fix and cofix. Moreover, a different mechanism is used to express the recursive definition of types, which has to be extended further if we want to capture induction-recursion, i.e. the mutually recursive definition of a type and a function.

Nicolas Oury and I have recently proposed a different approach in our draft paper on \Pi\Sigma (recently rejected from ICFP). In \Pi\Sigma we restrict the unfolding of recursion by the use of boxes. In a boxed term, recursive definitions from outside the box are not unfolded. E.g. \mathrm{let}\,x=t\,\mathrm{in}\,[x]\not\equiv \mathrm{let}\,x=t\,\mathrm{in}\,[t]. In his recent work on the calculus of definitions, Thierry Coquand proposes a similar approach, restricting the unfolding of recursive definitions in finite functions corresponding to case-expressions. This behaviour can be simulated in \Pi\Sigma by using boxes in the branches of case-expressions.

If we are not careful when introducing boxes, we may lose desirable properties of our language, such as subject reduction. Nicolas’ example from the recent FP lunch discussion can also be simulated in \Pi\Sigma, it seems. We define recursively a type of streams:

Stream : Type.
Stream = tag : {'Cons} ; Case tag of {'Cons -> $ Stream }.

Here $ A is the ASCII notation for the lifting A_\bot. We define recursively:

ticks : Stream.
ticks = 'Cons, [ticks].

Currently \Pi\Sigma hasn’t got a primitive equality type, but we can just define Leibniz equality:

Eq : (A : Type) -> A -> A -> Type.
Eq = \ A -> \ a -> \ b -> (P : A -> Type) -> P a -> P b.

refl : (A : Type) -> (a : A) -> Eq A a a.
refl = \ A -> \ a -> \ P -> \ p -> p.

We can show:

l1 : Eq Stream ticks ('Cons, [ticks]).
l1 = refl Stream ticks.

We can also show that the recursive functional is a congruence:

l2 : (s : Stream) -> (t : Stream) -> (Eq Stream s t)
-> Eq Stream ('Cons, [s]) ('Cons, [t])
l2 = \ s -> \ t -> \ q -> \ P -> \ p -> q (\ x -> P ('Cons,x)) p.

Putting l1 and l2 together, we seem to be able to unfold inside a box:

l3 : Eq ('Cons, [ticks]) ('Cons, [('Cons, [ticks])]).
l3 = l2 ticks ('Cons, [ticks]) l1.

However, if we reduce l3 to normal form, it is basically an instance of refl, i.e.
\ P -> \ p -> p, and this has not got l3's type, because
the expressions ('Cons, [ticks]) and ('Cons, [('Cons, [ticks])]) are not convertible. Clearly we have lost subject reduction.

In an email, Thierry pointed out that the calculus of definitions doesn’t have this problem, and indeed neither should \Pi\Sigma. The problem is caused by being half-hearted: we say that boxes are opaque for recursive definitions, but that they are transparent for substitutions or non-recursive definitions. If we are consistent, i.e. boxes are opaque, full stop, the problem seems to disappear. I.e. the \beta-rule becomes (\lambda x.t)\,u \equiv \mathrm{let}\,x\,=\,u\,\mathrm{in}\,t. If we adopt this principle, l2 isn’t provable: while equality is a congruence, we cannot reduce let x=s in ('Cons,[x]) to ('Cons,[s]).

Introducing boxes raises some further questions: should boxes be stable under \alpha-equivalence? E.g. should \mathrm{let}\,x\,=\,3\,\mathrm{in}\,[x] \equiv \mathrm{let}\,y\,=\,3\,\mathrm{in}\,[y]? Should it be stable under weakening? E.g.
\mathrm{let}\,x\,=\,3\,\mathrm{in}\,[4] \equiv [4]? And under permutation of definitions? E.g. should \mathrm{let}\,x\,=\,3, y\,=\,4\,\mathrm{in}\,[(x,y)]\equiv \mathrm{let}\,x\,=\,4, y\,=\,3\,\mathrm{in}\,[(y,x)]?
Inspired by the last example, we may even wonder, whether \mathrm{let}\,x\,=\,3, y\,=\,3\,\mathrm{in}\,[(x,y)]\equiv \mathrm{let}\,x\,=\,3\,\mathrm{in}\,[(x,x)]?

We propose that all the equivalences but the last should hold. We say that two local definitions are \alpha-equivalent if there is a partial bijection on the defined variables such that the right-hand sides and also the body are identified. While opaque to local definitions, boxes should be stable under this sort of \alpha-equivalence. The last example fails because the required identification is not bijective. Weakening is included, since the empty relation is a partial bijection.
(I have used partial bijections before to characterize \alpha-equivalence; see the draft paper, but beware: it does contain some bugs.)

Nicolas has already suggested a simple algorithm to decide this equivalence: when comparing the closures of two boxes and we reach two defined variables, we replace them by their definitions, substituting a new generic variable for both. This procedure clearly terminates, because any recursive definition is unfolded at most once. It should also be at least intuitively clear that this precisely implements the \alpha-equivalence sketched above.

Implementing Powerlists

June 13th, 2008 by Wouter

I talked about a short program I wrote, inspired by a really nice paper by Jay Misra, Powerlist: A Structure for Parallel Recursion. The paper describes a common pattern underlying many divide-and-conquer algorithms on lists of length 2^k, for some k. There are two ways to divide such powerlists into two pieces:

  • Split the list down the middle;
  • Or separate the elements at odd and even indices.

There is no clear ‘best’ choice—sometimes you want the one, sometimes you want the other.

I showed how to implement powerlists in Agda. The trick is, of course, to define a clever view. I’ve put the source code on my website, in case anyone wants to have a closer look.
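For comparison, the 2^k-length invariant can be captured in Haskell with a nested datatype (a rough sketch of my own; Wouter's actual implementation is in Agda, with views):

```haskell
-- Values of P a are exactly lists of length 2^k: either a single
-- element, or a powerlist of pairs (here read as adjacent pairs).
data P a = Leaf a | Node (P (a, a)) deriving (Eq, Show)

-- Flatten, unpairing adjacent elements at each level.
toList :: P a -> [a]
toList (Leaf a) = [a]
toList (Node p) = concatMap (\(x, y) -> [x, y]) (toList p)

main :: IO ()
main = print (toList (Node (Node (Leaf ((1, 2), (3, 4 :: Int))))))  -- [1,2,3,4]
```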

Codata in Agda

June 10th, 2008 by Nils Anders Danielsson

Last Friday I mentioned that coinductive types were recently added to Agda. Nicolas Oury almost immediately realised that this leads to a loss of subject reduction. In fact, as it turned out, Coq has had the same problem for a long time.

It was also discussed whether a constructor- or destructor-based view of codata was most appropriate. Arguments were made in favour of both; it might be a good idea to support both views.