Began dissecting at the weekend. Clowns to the left of me, jokers to the right is the story so far. But there aren’t half still some jokers to the right.
Lucas Dixon and I had a chat on the way to the bus stop, then I wrote this.
Take terms (from some suitable free monad) and compress them, yielding skeletons whose bones are the paths to variables. A Zipper 0 is a path in a term with only closed terms hanging off on either side. At the end of such a path, you either have a variable or a ‘joint’, being a term node with skeletons branching off.
If you’re smart enough to spot that mapping naughtE across anything is a no-op, you get a rather cheaper, yet still purely functional, implementation of substitution.
throws away the structure, giving you your term back. OK, my whiteboard doesn’t have a typechecker, so the second case in explode should yield an exploded node.
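As a concrete sketch of the skeleton idea (the encoding and every name below are mine, not the whiteboard’s), here is a little Python version: explode seals off closed subterms so that substitution never has to walk into them.

```python
# Hypothetical sketch of the skeleton idea (all names mine).
# A term is ("var", v) or ("node", [subterms]).

def is_closed(t):
    tag, x = t
    return tag == "node" and all(is_closed(u) for u in x)

def explode(t):
    """Compress a term into a skeleton: the bones are the paths down to
    variables; closed subterms hanging off to the sides are sealed up."""
    if is_closed(t):
        return ("closed", t)
    tag, x = t
    if tag == "var":
        return ("svar", x)
    return ("joint", [explode(u) for u in x])

def subst(f, sk):
    """Substitution walks only the skeleton; a sealed closed term is
    returned as-is (mapping naughtE across it is a no-op)."""
    tag, x = sk
    if tag == "closed":
        return x
    if tag == "svar":
        return f(x)
    return ("node", [subst(f, u) for u in x])

t = ("node", [("var", "x"), ("node", [("node", [])])])
result = subst(lambda v: ("node", []), explode(t))
```

Here the closed subterm ("node", [("node", [])]) gets sealed by explode, so subst touches it exactly once, however big it is; imploding (not shown) just forgets the seals.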
I was going to upload some whiteboard images of the basic constructions for fixpoints of indexed containers, but it won’t let me. I’ll link to them instead.
- least fixpoint uses W-types for the basic shape, then recursively defined notions of index-matching and input indexing; I didn’t use inductive families because that’s what we’re constructing, right?
- greatest fixpoint uses M-types for the basic shape, then inductively defined notions of O-indexed and I-indexed places. The index-matching condition is the property that in every O-indexed place, the O you want is the O you’ve got.
Of course, there’s some work to do to show that these things really are what I say they are, but I think they look about right.
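For the record, here is my guess at the shape of the least-fixpoint construction (the notation is mine, not the whiteboard’s): for an indexed container with shapes S, positions P, output map q : S → O and input map r, take a W-type for the raw trees, then carve out the well-indexed ones with a recursively defined predicate combining index-matching and input indexing.

```latex
W \;:=\; \mathsf{W}\,s{:}S.\;P\,s
\qquad
\mathit{ok}\,(\sup\,s\,f)\,o \;:=\; (o = q\,s)\,\times\,\Pi p{:}P\,s.\;\mathit{ok}\,(f\,p)\,(r\,(s,p))
\qquad
(\mu F)\,o \;:=\; \Sigma w{:}W.\;\mathit{ok}\,w\,o
```

For the greatest fixpoint, swap the W for an M and (per the post) make the notions of place inductive rather than recursive.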
By the by, we really need to figure out some notational issues or we’re heading for a mess…
As Hank pointed out, the SMCC structure of our “interaction systems” can be backported to plain containers. This is quite interesting because, even if, as an object, a plain container is a particular interaction system (with a singleton set of states), the notions of morphism are quite different. (For interaction systems, we identify morphisms “extensionally”, and as a result all the container morphisms are equal when seen as interaction system morphisms.)
Interaction systems form a model for full linear logic with the following proviso:
* we have classical linear logic only when the ambient category is Set and the meta theory is classical
* there are complications when dealing with the bang of an interaction system since this requires quotienting sets. (I guess we would need a notion of interaction system on a setoid rather than on a simple set…)
There is a small degeneracy in that the product (“with” in the LL terminology) and coproduct (“plus” in the LL terminology) coincide. I don’t mind, since without that, interpreting the differential λ-calculus wouldn’t have been possible!
The question is now to see if we can backport some of those features to plain containers. I first thought the answer was no, but I am not sure any more. More precisely, it looks as though we can get a model of intuitionistic LL with plain containers, but the additive operations (plus and with) do not correspond to the ones on interaction systems!
For example, it is not possible to define my (co)product on containers, but there are products and coproducts for containers. Interestingly enough, those products and coproducts do correspond to operations on interaction systems which I haven’t found applications for.
(Hmmm… That’s not entirely true: the coproduct of containers corresponds to the product of topological spaces, but that is another story.)
Also, product and coproduct are different. (No hope of getting a denotational model for the simply typed differential λ-calculus there…)
Since we have (indexed) coproducts and finite tensors, it shouldn’t be difficult to construct the container ⊕ₙ A⊗ⁿ. Using setoids rather than sets, it should then be possible to add some quotient and obtain !A.
A shape in !A is given by a finite multiset of shapes in A; and a position in such a shape is given by a finite multiset of positions. (Some care is needed to define this correctly.)
Does this make sense to any of you? I would more or less expect this construction to be the free comonoid generated by A, and it wouldn’t be surprising to obtain a model for full intuitionistic LL…
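Spelling out my reading of the construction (the symbols here are my own): first sum the finite tensor powers, then quotient by permutations so that tuples become multisets.

```latex
{!A} \;:=\; \Big(\coprod_{n\in\mathbb{N}} A^{\otimes n}\Big)\Big/\!\sim
```

Since the tensor multiplies both shapes and positions, a shape of A^{⊗n} is an n-tuple (s₁,…,sₙ) of shapes of A and a position therein is a tuple (p₁,…,pₙ) with pₖ : P sₖ; the quotient ∼ identifies tuples up to permutation, which is where the multisets, and the need for setoids, come from.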
One last thing about the container : since I only claim to model ILL, any container can be used as the interpretation of ; however, the container is particularly well-suited, if only because we have (I think) .
(Recall: ⊕ is the LL notation for coproduct and &amp; is the LL notation for product; for the shorthand, see Hank’s posting.)
Of course, the miracle of the extensional, set-based version doesn’t work any more: this object will not be dualizing…
WARNING: This is the second version of this entry. The first was WRONG!
It’s mentioned in a previous note that bits and pieces of linear logic can be defined on dependent containers. The expert on this is Pierre Hyvernat, who will shortly be shaking the foundations of civilisation with his thesis. It occurred to me to see how much of this linear guff can be `back-ported’ to Plain Old containers. The management summary is that (UIAM) you can define lollipop and the tensor, but not bang.
I adopt Conorial notation for Plain Old containers. (Why on earth Dr. Abbot couldn’t see his arrow head was the wrong way round is beyond me.)
This makes a container out of the morphisms; more exactly, take for shapes the container morphisms t from the source container to the target one, and for positions in such a t, the pairs (a, d) where a is a shape of the source and d is a position in the target shape to which t maps a. So a position is really a shape in the high-level container (that gives you a shape in the low-level one) and a position in that low-level one (that gives you a position in the high-level one).
If one adopts an `imperative’ metaphor (`shapes’:`positions’ :: `instructions’:`responses’), then to issue an instruction in the tensor is to issue two instructions in parallel, but proceed only when the two responses have been returned.
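In Conorial notation, writing a container as S ◁ P, my reconstruction of the two definitions reads as follows (the formulas are mine, pieced together from the descriptions above):

```latex
(S \lhd P) \otimes (T \lhd Q) \;=\; (S \times T) \;\lhd\; \lambda(s,t).\;P\,s \times Q\,t
\qquad
(S \lhd P) \multimap (T \lhd Q) \;=\;
  \Big(\Sigma f{:}S\to T.\;\Pi s{:}S.\;Q\,(f\,s)\to P\,s\Big)
  \;\lhd\; \lambda(f,g).\;\Sigma s{:}S.\;Q\,(f\,s)
```

The tensor’s positions being pairs is exactly the ‘proceed only when both responses have been returned’ reading; the lollipop’s shapes are the container morphisms, with positions as described above.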
Now one has to show that:
Ok, let’s write it out. First, the left hand side.
Now, the right hand side. (Fiddling around with latex in this wretched system is beyond the limits of human endurance! The following compiles fine on my computer.)
So both sides of the equation boil down to the same thing.
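Presumably the equation being checked is the tensor/lollipop adjunction; here is my hedged reconstruction of the calculation, with A = S ◁ P, B = T ◁ Q, C = U ◁ R:

```latex
\mathrm{Hom}(A \otimes B,\,C)
  \;\cong\; \Sigma f{:}S\times T \to U.\;
            \Pi(s,t).\;R\,(f\,(s,t)) \to P\,s \times Q\,t
\qquad
\mathrm{Hom}(A,\,B \multimap C)
  \;\cong\; \Sigma f{:}S\times T \to U.\;
            \Pi(s,t).\;\big(R\,(f\,(s,t)) \to Q\,t\big)
            \times \big(R\,(f\,(s,t)) \to P\,s\big)
```

A map into a product is a pair of maps, so (after currying the shape map) both sides amount to the same data: f : S × T → U together with maps R (f (s,t)) → P s and R (f (s,t)) → Q t.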
The left-hand end of the lollipop is a contravariant kind of place, so I wouldn’t go taking fixed-points of container expressions when the bound-variable sits anywhere therein.
not bang. Now, there’s a bang operation in the dependent case. To use the imperative metaphor, the states of the banged container are multisets of the states of the unbanged one. To have a banged dependent container (/interaction structure) is like having a multiset of unbanged dependent containers, which are lock-step synchronised. (Pierre calls this synchronous multithreading.)
Plain Old containers are dependent containers over a singleton state-space; or, if you prefer, signatures that happen to be singly-sorted. If you bang them, they’re no longer singly sorted. Instead, the state-space becomes the free monoid on a single generator, or as old-timers like me prefer to say, the natural numbers. So, AFAICS, you don’t get a bang out of Plain Old containers.
This is Neil’s scribe here.
We have made some progress with taking logarithms of functors, and seem to have a nice categorical analysis. This includes a universal property of the logarithm of a functor, a complete characterisation of those functors which have logarithms, a formula for calculating the logarithm of a functor (when it exists), and a simple derivation of the properties of the logarithm. Don’t get too excited though for reasons which become clear later.
We start by defining a functor Y by Y(A) = C(A, −). So Y(A) is a covariant hom functor and Y(A) is exactly what Conor and Hank have been calling `Naperian containers’. In general Y doesn’t have a right adjoint, but we call a functor F logarithmic iff there is an object log F and a bijection between maps Y(A) → F in the functor category and maps A → log F in C^op, or equivalently maps log F → A in C.
Thm. Every Naperian container is logarithmic, and the log is calculated by the McBridean formula.
Prf. If F is Naperian, it is of the form Y(B), so by Yoneda the maps Y(A) → F are in bijective correspondence with maps B → A. The result then follows by showing that the McBridean formula gives B. 
(We cheat by using Yoneda. Some hygiene is required that we will sort out later.)
Thm. Every logarithmic functor F is a Naperian container.
Prf. By assumption, the maps log F → A in C are in bijection with the maps Y(A) → F in the functor category.
By Yoneda, the latter maps are in bijection with elements of F(A). So F(A) equals C(log F, A). So F ≅ Y(log F), and hence is Naperian. 
This is a nice result. It shows that the logarithmic functors are precisely the extensions of Naperian containers.
The properties we will derive are these:
- , where `1′ stands for the constant 1-valued functor, and likewise `0′ for the constant 0-valued functor. (These are terminal and initial objects in the functor category.)
- , where K is an object in our category.
- (Conor’s law) .
Prf. For the first property, simply calculate . For the rest, note that F and G must be of the form Y(A) and Y(B). Now just use the first result.
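The three laws got eaten somewhere between whiteboard and blog; my best guesses at them, hedged accordingly, are the usual exponent laws in logarithmic clothing:

```latex
\log 1 \;=\; 0,
\qquad
\log\big(Y(K)\big) \;=\; K,
\qquad
\log(F \circ G) \;=\; \log F \times \log G \quad\text{(Conor's law)}
```

The first would hold since Y(0)(X) = C(0, X) ≅ 1; the second is the McBridean theorem again; the third because (X^B)^A ≅ X^{A×B}, so Y(A) ∘ Y(B) = Y(A × B) — which squares with the proof sketch (‘F and G must be of the form Y(A) and Y(B); now just use the first result’).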
Conclusion. This all looked very nice at one point. But these last results are actually pretty darn trivial. Maybe there is more substance to them if we avoid the use of Yoneda. For example the last three laws can be proved using the universal property.
Hank and I whiled away a Friday afternoon, musing on dependent containers. Yet another formulation of them emerged from the flotsam, which we christened Roman containers (after an old joke on our part), but should probably call annotated containers or even refined containers. I’ll stick with Roman for now.
Given sets (I,O) of ‘input and output sorts’, a Roman (I,O)-container (S,P,q,r) is given by a container (S,P) and a pair of functions q:S→O, r:ΣP→I. The idea is just that q explains which output sort each shape delivers and r explains which input sort each position in each shape requests.
The extension of such a thing inhabits (I→Set) → (O→Set) and is computed thus
[(S,P,q,r)] T o := Σs:S. o = q s × Πp:P s. T (r (s,p))
(Aside, for a differentiable container F, we can just say that q : F 1 -> O and r : dF 1 -> I.)
Allow me to present a little example. The finite sets, when defined inductively, are given by the family Fin : ℕ → Set with constructors

fz : Fin (suc n)
fs : Fin n → Fin (suc n)

The Roman (ℕ,ℕ)-container of which this is the least fixed point can be read off directly from this: the output index per constructor is suc n for both fz and fs, and the input index for the payload arg of fs is n.
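For concreteness, here is a Python sketch (the encoding and all names are mine, and I’m guessing the intended family is Fin): the Roman container data for fz/fs, the extension formula from above, and the least fixed point computed by iteration from below.

```python
from itertools import product

# Guessed Roman (Nat, Nat)-container for Fin (names mine):
#   fz : Fin (suc n)             -- no payload
#   fs : Fin n -> Fin (suc n)    -- one payload argument of input index n
N = 4
S = [("fz", n) for n in range(N)] + [("fs", n) for n in range(N)]

def P(s):               # positions: one per payload argument
    return ["arg"] if s[0] == "fs" else []

def q(s):               # output index per constructor: suc n in both cases
    return s[1] + 1

def r(s, p):            # input index for the payload arg of fs_n: n
    return s[1]

def extension(T, o):
    """[(S,P,q,r)] T o := Sigma s:S. o = q s x Pi p:P s. T (r (s,p)),
    where T : I -> list of inhabitants."""
    out = []
    for s in S:
        if q(s) != o:
            continue
        for choice in product(*[T(r(s, p)) for p in P(s)]):
            out.append((s,) + choice)
    return out

# Least fixed point by iteration from below.
fin = {0: []}
for o in range(1, N + 1):
    fin[o] = extension(lambda i: fin.get(i, []), o)
```

Iterating like this only gives the first few rungs of the least-fixpoint construction, of course, but it is enough to see the family come out right: Fin o acquires exactly o inhabitants.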
(For years now, Hank and I have been referring to these Alf/Epigram datatype families as ‘Catholic’, with Agda/Haskell’s more austere notion being ‘Presbyterian’. The difference between the two is that Catholics believe in a general transubstantiative equality. Catholic families are the fixpoints of Roman containers.)
It is now blatantly obvious what a morphism between two Roman (I,O)-containers should be: a container morphism which preserves sorts.
A morphism between (S,P,q,r) and (S′,P′,q′,r′) is given by
- a container morphism (f,g) : (S,P) → (S′,P′)
- a proof that q′ (f s) = q s for all s
- a proof that r (s, g s p′) = r′ (f s, p′) for all s and p′ : P′ (f s)
Moreover, it is now clear how to generalise this to a container driver: the presentation of a ’master‘ (I,O)-container as an interface to a ’slave‘ (I’,O’)-container. Simply replace the equalities above with a pair of relations Q:I→I’→Prop, R:O’→O→Prop which characterise the intended simulation between sorts. The conditions now ensure that the driver correctly implements a command-response interface which respects this simulation.
Can we get away with that?
I’ve just got hold of The Fall Peel Sessions. Surely we can figure out what a container driver is…
Conor and I once talked about `Naperian’ containers, that so to speak work well with logarithms.
He reminds me that a Naperian container has one shape, and a set K of positions.
So it represents an endofunctor X ↦ X^K.
The logarithm of this container is K.
Every container is a sum of Naperian containers.
To be Naperian is to be a multiplicative, unsummy kind of thing,
all products and exponentials, with no Sigma. Sheer testosterone. [The dependent counterpart is a monotone predicate transformer that commutes with all intersections. ]
Conor’s comment to this note dances the McBride with/out-of the subject,
and relates it to derivatives.
There is a connection between lambda abstraction
and some basic laws of logarithms, namely that in some respects
lambda-abstraction behaves like a logarithm, in that it satisfies
laws such as:
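(The laws themselves didn’t survive the trip to the blog; my guesses, hedged, are the familiar ones:)

```latex
\lambda x.\,x \;\sim\; \log_x x = 1,
\qquad
\lambda x.\,k \;\sim\; \log_x k = 0,
\qquad
\log_x (a \cdot b) \;=\; \log_x a + \log_x b,
\qquad
\log_x \big(a^b\big) \;=\; b \cdot \log_x a
```

On this reading the constant law is weakening (it needs 0) and the product law handles a variable occurring in both factors (it needs +), which squares with the remark below that the linear calculus drops 0 and + while the affine one drops only +.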
Linear lambda-abstraction corresponds to the case where the logarithm is Naperian.
To handle log_x(k^x) you need a special constant (^) reflecting ^, in the sense that
b^(a^(^)) = a^b. Then log_x(k^x) is (k^) = k^(^).
As Conor pointed out, Naperian functors are closed under composition
(which when taking logarithms translates into ×). In some sense they are closed under
nu expressions, and he shows how to take the logarithm of these.
The weird similarity between lambda abstraction and logarithms was (according to a manuscript by Felice Cardone and Roger Hindley on the history of the lambda-calculus ) known to Böhm in the 1970’s. It really has to do with the combinatorial completeness for the untyped lambda calculus of 4 arithmetical combinators, with exponentiation
A^F reflecting application F(A). Lambda abstraction is the undoing of application in the same way as taking the logarithm is the undoing of exponentiation. You get combinatorial completeness for the linear lambda calculus by dropping
0 and +, and for affine lambda calculus by dropping only +.
mpt = monotone predicate transformer. A monotone predicate transformer is a function F of type Pow(I) → Pow(O) (for some sets or types I and O) such that for all U ⊆ V, we have F U ⊆ F V. Here Pow(S) is the set of functions S → Prop. When we are interested in equality between such functions, the previous sentence actually means that F is functorial.
dc = dependent container. There are two subtly different candidates for this title,
namely objects of the types O → Fam (Fam I) and O → Fam (Pow I).
The types are slightly different, and amount to different presentations of the same information. The differences may be relevant to computing with these presentations.
- In the FamFam one, we make a table where the sections are per output type, the subsections are per shape of the output sort, and the subsubsections are per position in that shape (and give an input sort).
- In the FamPow one, the subsubsections are per input type, and give the set of positions in the shape which share that input port.
There is a choice between having I and O fixed, or on the other hand having them as carrier sets in our structure, in the same way that in the theory of groups the carrier set is not globally fixed: different groups, or different dependent containers, can have different carrier sets.
I have explored this only for the case I = O.
Any mpt can be expressed in (Presbyterian) disjunctive normal form, which looks like

F U o := Σ s : S o. Π p : P o s. U (r o s p)

for some S : O → Set, P : (o : O) → S o → Set and r : (o : O) (s : S o) → P o s → I.
In the `Pow’ variant we have a slightly different `Episcopelian’ structure:

F U o := Σ s : S o. Π i : I. P o s i → U i

with P : (o : O) → S o → I → Set giving, for each input sort i, the set of positions in the shape which share that input port.
If the powerset of a set is a set, ie Pow and/or Fam : Set → Set, then any monotone predicate transformer is extensionally equal to a predicate transformer in disjunctive normal form. (Extensional equality between predicate transformers F and G means that each includes the other. That F is included in G means that for all U and o, F U o implies G U o.)
The proof is a piece of vacuous impredicative bollocks, where the idea is that you have an intermediate set which is actually the set of subsets of S.
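The impredicative trick can at least be animated on a finite toy (everything below is my own illustrative encoding): take the shapes over o to be the subsets U with F U o, and the positions in shape U to be the elements of U.

```python
from itertools import combinations

I = [0, 1]                      # input sorts; take I = O, as above

def subsets(xs):
    return [frozenset(c) for k in range(len(xs) + 1)
                         for c in combinations(xs, k)]

def F(U):
    """An arbitrary monotone predicate transformer Pow(I) -> Pow(O)."""
    out = set()
    if 0 in U:
        out.add(0)
    if 0 in U and 1 in U:
        out.add(1)
    return out

def dnf(o, V):
    """Sigma U : Pow(I). F U o  x  Pi i : U. V i   -- the disjunctive
    normal form whose intermediate set of shapes at o is the set of
    subsets U of I with F U o."""
    return any(o in F(U) and U <= set(V) for U in subsets(I))

# Extensionally equal to F: one inclusion picks U = V, the other is
# monotonicity (U <= V implies F U <= F V).
ok = all(dnf(o, V) == (o in F(V)) for V in subsets(I) for o in I)
```

This is exactly the vacuous bit: the ‘container’ built this way tells you nothing you didn’t already know, but it does witness that every monotone F has a DNF presentation, impredicatively.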
I think this is interesting, because it means that impredicatively, dependent containers are a complete representation `at the level of objects’. (NB, if you have impredicativity, you can define Leibniz equality.)
There is also `the level of morphisms’, where you define a notion of morphism between
predicate transformers, which may be given in disjunctive normal form. There are 3 interesting notions of
mapping between mpts and dcs, corresponding to the types
DC -> DC, DC -> MPT, and MPT -> MPT. In the case
DC -> DC, when both isotopes of DC are presbyterian (FamFam),
we have something very concrete.
Impredicatively, all these notions of mapping coincide.
With dependent containers, it seems that the statement of the representation theorem in your paper(s?)
(that there is a full and faithful functor DC -> MPT, essentially, ie. something which is at the level of morphisms) can be strengthened to say that there is actually a functor which is surjective on objects.
The moral I take to be that if you want concrete representations for mpts, you are looking for some variant
among slightly different notions of dc’s.