## Archive for June, 2005

### Container Drivers

Tuesday, June 28th, 2005

I’ve just got hold of The Fall Peel Sessions. Surely we can figure out what a container driver is…

### What is a Naperian container?

Sunday, June 19th, 2005

Conor and I once talked about ‘Naperian’ containers, which so to speak work well with logarithms.
He reminds me that a Naperian container has one shape, and a set K of positions.
So it represents the endofunctor X ↦ (K → X), i.e. X ↦ X^K.
The logarithm of this container is K.
Every container is a sum of Naperian containers.

To be Naperian is to be a multiplicative, unsummy kind of thing,
all products and exponentials, with no Sigma. Sheer testosterone. [The dependent counterpart is a monotone predicate transformer that commutes with all intersections.]

Conor’s comment to this note dances the McBride with/out-of the subject,
and relates it to derivatives.

There is a connection between lambda abstraction
and some basic laws of logarithms, namely that in some respects
lambda-abstraction behaves like a logarithm, in that it satisfies
laws such as:

log_x(1) = 0,  log_x(a · b) = log_x(a) + log_x(b),  log_x(a^b) = log_x(a) · b.

Linear lambda-abstraction corresponds to the case where the logarithm is Naperian.
To handle log_x(k^x) you need a special constant (^) reflecting ^ in the sense that
b^a^(^) = a ^ b. Then log_x(k^x) is (k^) = k^(^).

As Conor pointed out, Naperian functors are closed under composition
(which, when taking logarithms, translates into ×). In some sense they are also closed under
‘nu’ expressions (greatest fixed points), and he shows how to take the logarithm of these.
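For concreteness, here is a minimal Haskell sketch of the Naperian idea (the class and method names `Naperian`, `Log`, `index`, `tabulate` are my own choices, not anything from the discussion above). It illustrates two of the logarithm laws: the logarithm of a product is a sum, and the logarithm of a composition is a product.

```haskell
{-# LANGUAGE TypeFamilies #-}

-- F is Naperian when F X ≅ (Log F -> X) for a fixed set Log F of
-- positions, the "logarithm" of F.
class Functor f => Naperian f where
  type Log f
  index    :: f a -> Log f -> a
  tabulate :: (Log f -> a) -> f a

-- One shape, two positions: Log Pair = Bool.
data Pair a = Pair a a
instance Functor Pair where
  fmap g (Pair x y) = Pair (g x) (g y)
instance Naperian Pair where
  type Log Pair = Bool
  index (Pair x _) False = x
  index (Pair _ y) True  = y
  tabulate g = Pair (g False) (g True)

-- log(F × G) = log F + log G: a position in a product is a position
-- on one side or the other.
data Prod f g a = Prod (f a) (g a)
instance (Functor f, Functor g) => Functor (Prod f g) where
  fmap h (Prod x y) = Prod (fmap h x) (fmap h y)
instance (Naperian f, Naperian g) => Naperian (Prod f g) where
  type Log (Prod f g) = Either (Log f) (Log g)
  index (Prod x _) (Left i)  = index x i
  index (Prod _ y) (Right j) = index y j
  tabulate h = Prod (tabulate (h . Left)) (tabulate (h . Right))

-- log(F ∘ G) = log F × log G: composition turns into product, as in
-- the text above.
newtype Comp f g a = Comp (f (g a))
instance (Functor f, Functor g) => Functor (Comp f g) where
  fmap h (Comp x) = Comp (fmap (fmap h) x)
instance (Naperian f, Naperian g) => Naperian (Comp f g) where
  type Log (Comp f g) = (Log f, Log g)
  index (Comp x) (i, j) = index (index x i) j
  tabulate h = Comp (tabulate (\i -> tabulate (\j -> h (i, j))))
```

Note that `tabulate` and `index` witness the isomorphism F X ≅ (Log F → X) in both directions; there is no Sigma anywhere in sight.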

The weird similarity between lambda abstraction and logarithms was (according to a manuscript by Felice Cardone and Roger Hindley on the history of the lambda-calculus) known to Böhm in the 1970s. It really has to do with the combinatorial completeness of 4 arithmetical combinators for the untyped lambda calculus, with exponentiation
A^F reflecting application F(A). Lambda abstraction is the undoing of application in the same way as taking the logarithm is the undoing of exponentiation. You get combinatorial completeness for the linear lambda calculus by dropping
0 and +, and for the affine lambda calculus by dropping only +.

### Any mpt is iso to some dc.

Thursday, June 16th, 2005

mpt = monotone predicate transformer. A monotone predicate transformer is a function F : Pow(I) → Pow(O) (for some sets or types I and O) such that for all U, V : Pow(I) with U ⊆ V, we have F(U) ⊆ F(V). Here Pow(I) is the set of functions I → Set. When we are interested in equality between such functions, the previous sentence actually means that F is functorial.
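On finite carriers one can model predicates as boolean-valued functions and test monotonicity by brute force. A toy Haskell sketch (all the names here are mine), reading Pow(S) as S → Bool:

```haskell
import Data.List (subsequences)

-- Predicates over a carrier, and predicate transformers between them.
type Pow s = s -> Bool
type PT i o = Pow i -> Pow o

-- Example transformer: given a "successor" function step : o -> [i],
-- let F(U) hold at o iff U holds at every successor of o (a weakest-
-- precondition flavour of transformer).  This one is monotone.
wp :: (o -> [i]) -> PT i o
wp step u o = all u (step o)

-- U ⊆ V, checked pointwise over a finite carrier.
included :: [s] -> Pow s -> Pow s -> Bool
included carrier u v = all (\s -> not (u s) || v s) carrier

-- All predicates on a finite carrier, as membership in a sublist.
allPreds :: Eq s => [s] -> [Pow s]
allPreds carrier = map (flip elem) (subsequences carrier)

-- Monotonicity, by brute force: U ⊆ V implies F(U) ⊆ F(V).
monotoneOn :: Eq i => [i] -> [o] -> PT i o -> Bool
monotoneOn is os f =
  and [ included os (f u) (f v)
      | u <- allPreds is, v <- allPreds is, included is u v ]
```

Of course this only samples the definition at finite carriers; the type-theoretic statement quantifies over all of Pow(I).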

dc = dependent container. There are two subtly different candidates for this title,
namely objects of the types

O → Fam(Fam(I)), and O → Fam(Pow(I)).

Here Fam(X) = (Σ A : Set) (A → X), and Pow(X) = X → Set.

The types are slightly different, and amount to different presentations of the same information. The differences may be relevant to computing with these presentations.

• In the FamFam one, we make a table where the sections are per output sort, the subsections are per shape at that output sort, and the subsubsections are per position in that shape (and give an input sort).
• In the FamPow one, the subsubsections are per input sort, and give the set of positions in the shape which share that input sort.

There is a choice between having I and O globally fixed, or on the other hand having them as part of the structure, in the same way that in the theory of groups the carrier set is not globally fixed: different groups, or different dependent containers, can have different carrier sets.
I have explored this only for the case I = O.

Any mpt can be expressed in ‘Presbyterian’ disjunctive normal form, which looks like

F(U)(o) = (Σ c : C(o)) (Π r : R(o, c)) U(n(o, c, r))

for some C : O → Set, R : (o : O) → C(o) → Set, and n : (o : O) (c : C(o)) → R(o, c) → I.
In the ‘Pow’ variant we have a slightly different ‘Episcopelian’ structure:

F(U)(o) = (Σ c : C(o)) (Π i : I) (B(o, c, i) → U(i))

for some C : O → Set and B : (o : O) → C(o) → Pow(I).
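Over finite sorts the Presbyterian normal form F(U)(o) = (Σ c : C(o)) (Π r : R(o, c)) U(n(o, c, r)) can be evaluated directly, reading Σ as ‘some’ and Π as ‘every’. A toy Haskell sketch (the field names are mine, and commands and responses are coded as plain `Int`s):

```haskell
-- A (Presbyterian, FamFam-style) dependent container over output
-- sorts o and input sorts i, with everything finite so we can run it.
data DC o i = DC
  { cmds :: o -> [Int]            -- C(o): shapes/commands at o
  , rsps :: o -> Int -> [Int]     -- R(o, c): positions/responses
  , next :: o -> Int -> Int -> i  -- n(o, c, r): input sort at a position
  }

-- The disjunctive normal form as a predicate transformer:
--   F(U)(o) = some c in C(o) such that for every r in R(o, c), U(n(o, c, r)).
eval :: DC o i -> (i -> Bool) -> (o -> Bool)
eval d u o = any (\c -> all (\r -> u (next d o c r)) (rsps d o c)) (cmds d o)
```

The `any`/`all` pair is exactly the angelic-then-demonic shape of the normal form.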

If the powerset of a set is a set, i.e. Pow and/or Fam : Set → Set, then any monotone predicate transformer is extensionally equal to a predicate transformer in disjunctive normal form. (Extensional equality
between predicate transformers F and G means that each includes the other. That F is included in G means that for all U : Pow(I) and o : O, F(U)(o) implies G(U)(o).)

The proof is a piece of vacuous impredicative bollocks, where the idea is that you have an intermediate set which is actually the set of subsets of S.

I think this is interesting, because it means that impredicatively, dependent containers are a complete representation ‘at the level of objects’. (NB, if you have impredicativity, you can define Leibniz equality.)

There is also ‘the level of morphisms’, where you define a notion of morphism between
predicate transformers, which may be given in disjunctive normal form. There are 3 interesting notions of
mapping between mpts and dcs, corresponding to the types
DC -> DC, DC -> MPT, and MPT -> MPT. In the case
DC -> DC, when both isotopes of DC are Presbyterian (FamFam),
we have something very concrete.
Impredicatively, all these notions of mapping coincide.

With dependent containers, it seems that the statement of the representation theorem in your paper(s?)
(that there is, essentially, a full and faithful functor DC -> MPT, i.e. something which is at the level of morphisms) can be strengthened: there is even such a functor which is surjective on objects.

The moral I take to be that if you want concrete representations for mpts, you are looking for some variant
among slightly different notions of dc’s.

### Containers for number-classes

Thursday, June 9th, 2005

The following is an example of an operation on (non-dependent) containers that may be of interest to those of us (me and Dr Altenkirch) with an interest in constructive versions of the higher ‘number classes’.

What’s a number class? Well, the notion and terminology come from Cantor. Traditionally, the natural numbers form the first number class, the countable Brouwer ordinals
form the second, and you can push on further into the so-called ‘finite’ number classes in the obvious way, by chucking in branching over the previous number class to the functor whose initial algebra gave you that class. (You can think of these sets as constructive ‘aleph numbers’.)

Computable versions of these blighters were defined by Church and Kleene in 1937–1938, and by Markwald and Spector in 1954–1955. In fact they extended the series into the transfinite: at ω one takes the union of the finite number classes, and then presses on as in the step from one class to the next. In type theory, the theory of the finite number classes is quite interesting (there are some well-cool ‘collapsing functions’), but somewhat hindered by being full of ‘dot dot dots’. Moreover, as far as I know, there is no decent development of transfinite number-classes in type theory. (One wants in the first place to get up to the first inaccessible ordinal, this being, for those of you who know what ‘regular’ means, a regular
ordinal closed under the step to the next regular.)

Anyway. The best way to approach the finite number classes is via the endofunctors of which they are the initial algebras. And the most natural way to do it is as follows:

Φ_0(X) = X, and Φ_{k+1}(X) = Φ_k(X) + (N_k → X), where N_k = μΦ_k is the initial algebra of Φ_k.

The initial algebras of these chaps are, successively,

N_0 = ∅, N_1 = ℕ, N_2 = the Brouwer ordinals, and so on.

(Now you can maybe see why I wrote Φ_0(X) = X rather than 1 + X.)
Funnily enough, the empty set comes out as the first ‘number-class’, the naturals as the second, etc., so we’re off-by-one from the conventional terminology, but who cares. The point is that by working with the endofunctors rather than their initial algebras, we get a nice closed-form description of the step to the next number-class, without any ‘dot dot dots’ as in 1 + X + (ℕ → X) + ⋯ + (N_k → X).

And of course, these functors are given by containers: Φ_0 has a single shape with a single position,
and Φ_{k+1} adds to Φ_k one new shape, whose set of positions is the previous number class N_k.
Moreover, it is obvious that there are natural transformations from Φ_k to Φ_{k+1} all the way up. Taking the colimit of this series (the formalisation requires a universe to ‘recurse into’), we get a functor Φ_ω whose initial algebra is the first transfinite number class.
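The first couple of steps can be written down directly in Haskell (a sketch; the constructor and function names are mine, and the second class ℕ is given its own datatype rather than reusing `Integer`):

```haskell
-- The second number class (the first, on this off-by-one counting,
-- being empty): the naturals, initial algebra of X ↦ 1 + X.
data N1 = Zero1 | Suc1 N1

-- The third: the Brouwer ordinals, initial algebra of
-- X ↦ 1 + X + (N1 → X).  The new summand branches over the
-- *previous* number class.
data N2 = Zero2 | Suc2 N2 | Lim2 (N1 -> N2)

-- The evident embedding of one class into the next.
embed :: N1 -> N2
embed Zero1    = Zero2
embed (Suc1 n) = Suc2 (embed n)

-- ω, the first ordinal needing the new branching constructor.
omega :: N2
omega = Lim2 embed

-- The finite successor-part on top of an ordinal, just so that we
-- can compute something observable.
finPart :: N2 -> Integer
finPart Zero2    = 0
finPart (Suc2 x) = 1 + finPart x
finPart (Lim2 _) = 0
```

The ‘dot dot dots’ problem is visible even here: nothing in this code hints at the uniform step from `N1` to `N2`, which is exactly what the container presentation supplies.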

In other words, the theory of (non-dependent) containers seems to provide quite a bit of the apparatus one needs to make a type-theoretic development of number-classes beyond the finite ones, whose absence (till now) constitutes a minor scandal in type-theory.

Some further thoughts along/beyond these lines can be found here, and here.

### What are dependent containers (for)?

Wednesday, June 8th, 2005

I am trying to write some kind of paper about dependent containers.
A reasonably up to date version may be found here.
(The sources are nearby in “cp.tar.gz”.)

There are a couple of roughly similar structures that one might call (unary) ‘dependent containers’, to wit:

1. (Σ C : O → Set) ((o : O) → C(o) → Pow(I)), with typical element ⟨C, B⟩, where
C : O → Set, and B : (o : O) → C(o) → Pow(I).
In the case I = O (which I’d call the homogeneous case), these have been used to represent cover-relations between neighbourhoods in formal topology, among other things. (See for example this, section 3.1. The notion of ‘axiom-set’ is due to Peter Aczel.) Dr. Altenkirch likes ‘em too, so I call them Episcopelian.
2. (Σ C : O → Set) (Σ R : (o : O) → C(o) → Set) ((o : O) (c : C(o)) → R(o, c) → I), with typical element ⟨C, R, n⟩,
where C : O → Set, R : (o : O) → C(o) → Set, and n : (o : O) (c : C(o)) → R(o, c) → I.
Me’n’my pals have been using these things (particularly the homogeneous variety) to represent imperative interfaces in type theory. (For example NPS, chapter 16.) I call them Presbyterian.

One can put these things in a more compact form with the aid of some definitions:
Fam(X) = (Σ A : Set) (A → X), and
Pow(X) = X → Set.
(Here X is a type, by which I mean a ‘proper’, or ‘large’, type, such as for example Set, Fam(Set), Pow(ℕ), etc.) Then the two types above are
O → Fam(Pow(I)), and O → Fam(Fam(I)).
[Concerning ‘Fam’: one could call a family of X’s a multiplicity (or even an array) of X’s. There is no requirement that the function A → X is mono. The same X could be listed again and again.]

[Dr. Altenkirch doesn’t like ‘Pow’. The only alternative I know is ‘Pr’, suggesting both PRedicate and PResheaf.
I suggest he installs some equi-expansive macro such as ‘Blam’ in his head, and thinks of it as a tiny explosion. I never, ever use proof irrelevance, and am usually mindful of the implications of functoriality, naturality, and suchlike jazz.]

What are non-dependent containers for? It is to my mind a reasonable answer that they are representations of endo-functors on sets or rather endo-functors on Set which are given by algebraic signatures. As for the morphisms between them, they represent natural transformations between the relevant endo-functors. There may be other reasons to be interested in them, and there may be happier expressions of it, but that seems to be the killer application.

What then about morphisms between these mothers? What are they for? There’s a lot to say about this, but before getting into it, we can ask whether the I and the O are to be globally fixed, or whether they are to count as part of the structure. Furthermore, are we going to be interested particularly in the homogeneous case I = O?
Taking the homogeneous case, if we count the carrier S as part of the structure, then our objects are really of type

1. (Σ S : Set) (S → Fam(Pow(S))), or to put it another way (Σ S : Set) (Σ C : S → Set) ((s : S) → C(s) → Pow(S))
2. (Σ S : Set) (S → Fam(Fam(S))), or to put it another way (Σ S : Set) (Σ C : S → Set) (Σ R : (s : S) → C(s) → Set) ((s : S) (c : C(s)) → R(s, c) → S).

Both these things represent, to put it briefly, monotone predicate transformers on a set. To put it a little more exactly, they represent
pairs (S, F), where F : Pow(S) → Pow(S) preserves inclusions between predicates. They do so in essentially the same way:

1. F(U)(s) = (Σ c : C(s)) (Π s′ : S) (B(s, c, s′) → U(s′)). Remember, B(s, c) : Pow(S).
2. F(U)(s) = (Σ c : C(s)) (Π r : R(s, c)) U(n(s, c, r)). Remember, R(s, c) : Set, and n(s, c, r) : S.

Such an F is a ‘monotone’ operator on S-indexed families of sets; or to put it another way, a monotone relation between S and Pow(S) (‘monotone’ in the argument of type Pow(S)).

Now one thing I have figured out (in the One True logical assistant Agda), which Dr. Altenkirch already knew, is the following. Suppose we globally fix I and O, and regard Pow(I) and Pow(O) as categories. That is to say, we regard Pow(I) as having objects U : I → Set, and take for morphisms families of functions f : (i : I) → U(i) → V(i). Similarly for Pow(O). Humour me, and call such families inclusions, or shminclusions if you like. Then a functor from Pow(I) to Pow(O) is analogous to a monotone function from the powerset of I to the powerset of O. A natural transformation from F to G (same type) is naught but a function having type (U : Pow(I)) (o : O) → F(U)(o) → G(U)(o), or to us (Scandinavian) type-theorists, a proof that the predicate transformer F is ‘pointwise’ included in the predicate transformer G. Oh, which satisfies the naturality condition.

Now, where F and G are represented by objects ⟨C, B⟩ and ⟨C′, B′⟩ of type O → Fam(Pow(I)), we can represent such a natural transformation by a pair of functions (f, g), having types
f : (o : O) → C(o) → C′(o), and
g : (o : O) (c : C(o)) (i : I) → B′(o, f(o, c), i) → B(o, c, i),
which is what Dr Altenkirch proposes as a morphism between type 1 objects.
Should you be interested, here is the Agda file. This doesn’t appear to work with the version.

What if we take for objects in our category pairs of type (Σ S : Set) (S → Fam(Fam(S))) (as I have done with Anton and Pierre)? What should we take for morphisms? Below is the answer we arrived at.

Suppose our domain is the object (S_h, ⟨C_h, R_h, n_h⟩), and the codomain is (S_l, ⟨C_l, R_l, n_l⟩). (The subscripts stand for ‘high-level’ and ‘low-level’.) We then have a certain (monotone) relation-transformer, or if you like, monotone transformer of predicates (or set-valued functions) over S_h × S_l, namely:

Φ(Q)(s_h, s_l) = (Π c : C_h(s_h)) (Σ c′ : C_l(s_l)) (Π r′ : R_l(s_l, c′)) (Σ r : R_h(s_h, c)) Q(n_h(s_h, c, r), n_l(s_l, c′, r′))

For morphisms from the high-level object to the low-level one, one takes relations Q which are post-fixed points (invariants) for the above operator, i.e. Q ⊆ Φ(Q).

Clearly, this is quite a mouthful. Its properties emerge only after a certain amount of study. It may be reassuring to know at the outset that these objects have been studied for some decades. They are known as ‘downwards’, or ‘forwards’, simulations.

The first thing to observe is that by applying the (type-theoretic) axiom of choice, one can massage the relation-transformer into the form of a (Presbyterian, homogeneous) structure ⟨C, R, n⟩. Namely,
S = S_h × S_l,
C(s_h, s_l) = (Σ f : C_h(s_h) → C_l(s_l)) ((c : C_h(s_h)) (r′ : R_l(s_l, f(c))) → R_h(s_h, c)),
R((s_h, s_l), ⟨f, g⟩) = (Σ c : C_h(s_h)) R_l(s_l, f(c)), and
n((s_h, s_l), ⟨f, g⟩, ⟨c, r′⟩) = (n_h(s_h, c, g(c, r′)), n_l(s_l, f(c), r′)).

So here we have a binary operator on things of type (Σ S : Set) (S → Fam(Fam(S))). For reasons that will emerge, I use the notation A ⊸ B for this. You will immediately want to know what the tensor A ⊗ B is. I might as well write it down here:
S_{A ⊗ B} = S_A × S_B,
C_{A ⊗ B}(a, b) = C_A(a) × C_B(b),
R_{A ⊗ B}((a, b), (c, d)) = R_A(a, c) × R_B(b, d), and
n_{A ⊗ B}((a, b), (c, d), (r, r′)) = (n_A(a, c, r), n_B(b, d, r′)).

OK, what does a morphism represent? If we think of ⟨C_h, R_h, n_h⟩ as
representing the monotone predicate transformer F_h, and similarly for the low-level object, then let’s say that a simulation of F_h by F_l is a monotone σ : Pow(S_h) → Pow(S_l) such that σ · F_h ⊆ F_l · σ. (This stands for: σ(F_h(U)) ⊆ F_l(σ(U)) for all U.)
Call a simulation downwards if it is a ‘left adjoint’, meaning that there is a τ such that we have a Galois connection: σ(U) ⊆ V iff U ⊆ τ(V), for all U and V. (Equivalently, σ commutes with arbitrary unions, or equivalently σ coincides with taking the image along a relation, or equivalently … .) Then: what the lollipop represents is a downwards simulation.
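The equivalence between being a left adjoint and commuting with unions can at least be sampled by brute force on finite carriers. A toy Haskell sketch (all names are mine), with the existential image of a relation as the paradigm union-preserving transformer:

```haskell
import Data.List (subsequences)

-- The existential image of a relation r ⊆ S_h × S_l, as a predicate
-- transformer from Pow(S_h) to Pow(S_l).  Transformers of this shape
-- are left adjoints: they commute with arbitrary unions.
image :: [sh] -> (sh -> sl -> Bool) -> (sh -> Bool) -> (sl -> Bool)
image carrier r u t = any (\s -> u s && r s t) carrier

-- On a finite carrier, commuting with arbitrary unions is the same as
-- sending the empty predicate to the empty predicate and commuting
-- with binary unions; check it by brute force.
preservesUnions :: Eq sh => [sh] -> [sl]
                -> ((sh -> Bool) -> (sl -> Bool)) -> Bool
preservesUnions shs sls f =
  all (\t -> not (f (const False) t)) sls &&
  and [ f (\s -> u s || v s) t == (f u t || f v t)
      | u <- preds, v <- preds, t <- sls ]
  where preds = map (flip elem) (subsequences shs)
```

A demonic (all-successors) transformer fails the test, as it should: it commutes with intersections, not unions.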

I should say straightaway that I don’t know if the lollipop gives a complete representation of downwards simulations, though I’d be flabbergasted if not. Hmm — maybe I should try to prove it… .

A certain amount is known about the category of (homogeneous) predicate transformers and downward simulations (though more by Pierre Hyvernat than me). In particular, there are a number of interesting monads and comonads, principally a monad for reflexive and transitive closure of a predicate transformer, and a comonad for a reflexive and transitive ‘interior’. Oh, and there’s a ‘bang’, and other paraphernalia of linear logic.

Having indicated a certain amount of maybe relevant technical stuff about our two basic structures, it’s high time to start some discussion of the question in the title of this note. On you go.

One thing I should have mentioned is that a FamFam structure ⟨C, R, n⟩ is a possible model for multi-sorted signatures:
an element o of O models a sort.
An element c of C(o) models an operator with target/output sort o.
An element r of R(o, c) models an operand of the operator c with target sort o.
n(o, c, r) models the sort of operand r of the operator c.
The FamFam structure (and presumably the other variant of it) is remarkably rich in applications. (I mentioned above that it serves also as a model for command-response interfaces, with S modelling a state space, C commands, R responses, and n the next-state function. Or if I didn’t, I should have.)
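A toy Haskell sketch of the command-response reading (a counter that overflows at ten; the example and all the names in it are mine):

```haskell
-- A command-response interface in the FamFam style: a state space S,
-- commands C(s) available in each state, responses R(s, c), and a
-- next-state function n(s, c, r).
data Cmd  = Incr | Reset deriving (Eq, Show)
data Resp = Ok | Overflow deriving (Eq, Show)

commands :: Int -> [Cmd]            -- C(s)
commands _ = [Incr, Reset]

responses :: Int -> Cmd -> [Resp]   -- R(s, c)
responses s Incr  = if s >= 9 then [Overflow] else [Ok]
responses _ Reset = [Ok]

nextState :: Int -> Cmd -> Resp -> Int  -- n(s, c, r)
nextState _ _     Overflow = 0
nextState s Incr  Ok       = s + 1
nextState _ Reset Ok       = 0

-- Drive the interface: issue commands, take the first legal response.
run :: Int -> [Cmd] -> Int
run = foldl (\s c -> nextState s c (head (responses s c)))
```

Here the same data can equally be read as a two-sorted signature: the states are the sorts, the commands the operators, and the responses the operand places.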