Archive for March, 2010

Bag Equality via a Proof-Relevant Membership Relation

Tuesday, March 16th, 2010 by Nils Anders Danielsson

Last week I talked about a definition of bag equality—equality up to permutation—which I have used recently. The definition works for arbitrary containers, but for simplicity I will focus on lists.

The predicate transformer Any can be defined as follows (in Agda):

  data Any {A : Set} (P : A → Set) : List A → Set where
    here  : ∀ {x xs} → P x      → Any P (x ∷ xs)
    there : ∀ {x xs} → Any P xs → Any P (x ∷ xs)

Any P xs means that some element in xs satisfies P. Using Any and an equality relation it is easy to define the membership relation:

  _∈_ : {A : Set} → A → List A → Set
  x ∈ xs = Any (_≡_ x) xs

Finally bag equality can be defined:

  _≈_ : {A : Set} → List A → List A → Set
  xs ≈ ys = ∀ x → x ∈ xs ⇿ x ∈ ys

Here A ⇿ B is the set of invertible functions between A and B. Two lists xs and ys are equal when viewed as bags if, for every element x, the two sets x ∈ xs and x ∈ ys are equipotent; note that if x occurs a certain number of times in xs, then there will be a corresponding number of distinct elements in x ∈ xs.
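
For example (a small check, assuming Bool, refl and the list constructors from the standard library), true occurs twice in true ∷ true ∷ [], and accordingly true ∈ (true ∷ true ∷ []) has exactly two distinct proofs:

  p₁ p₂ : true ∈ (true ∷ true ∷ [])
  p₁ = here refl
  p₂ = there (here refl)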

The definition above has some nice properties:

  • If _⇿_ is replaced by equivalence (coinhabitance), then we get set equality instead of bag equality (see the sketch after this list). This uniformity can be exploited; one can for instance prove that map preserves bag and set equality using a single, uniform proof.
  • The definition generalises to other containers, including types with infinite values (like streams).
  • The use of Any can give some convenient wiggle room in proofs.
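
To make the first point concrete, here is a minimal sketch of what set equality could look like; the name _≋_ is chosen only for illustration, _×_ comes from Data.Product, and the standard library instead packages the two directions up in an equivalence record:

  _≋_ : {A : Set} → List A → List A → Set
  xs ≋ ys = ∀ x → (x ∈ xs → x ∈ ys) × (x ∈ ys → x ∈ xs)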

Let me illustrate the last point by proving that bind distributes from the left over append:

  xs >>= (λ x → f x ++ g x) ≈ (xs >>= f) ++ (xs >>= g)

This property does not hold for ordinary list equality (one way to see this is to let xs be true ∷ false ∷ [] and f and g both be return). I will make use of the following properties:

  (∀ x → P₁ x ⇿ P₂ x) → xs₁ ≈ xs₂ → Any P₁ xs₁ ⇿ Any P₂ xs₂
  A ⇿ B → C ⇿ D → (A ⊎ C) ⇿ (B ⊎ D)

  Any P (xs ++ ys)          ⇿  Any P xs ⊎ Any P ys
  Any (λ x → P x ⊎ Q x) xs  ⇿  Any P xs ⊎ Any Q xs
  Any P (map f xs)          ⇿  Any (P ∘ f) xs
  Any P (concat xss)        ⇿  Any (Any P) xss
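
In the derivation and the calculation below I use the standard definition of bind for lists, in terms of concat and map:

  _>>=_ : {A B : Set} → List A → (A → List B) → List B
  xs >>= f = concat (map f xs)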

From the last two properties we get a derived property for bind:

  Any P (xs >>= f)           ⇿
  Any P (concat (map f xs))  ⇿
  Any (Any P) (map f xs)     ⇿
  Any (Any P ∘ f) xs

Using these properties we can prove left distributivity. The proof is a simple calculation:

  y ∈ (xs >>= λ x → f x ++ g x)                    ⇿
  Any (λ x → y ∈ f x ++ g x) xs                    ⇿
  Any (λ x → y ∈ f x ⊎ y ∈ g x) xs                 ⇿
  Any (λ x → y ∈ f x) xs ⊎ Any (λ x → y ∈ g x) xs  ⇿
  y ∈ (xs >>= f) ⊎ y ∈ (xs >>= g)                  ⇿
  y ∈ ((xs >>= f) ++ (xs >>= g))

The “wiggle room” I mentioned above is visible in the calculation: the proof does not refer solely to _∈_ y, i.e. Any (_≡_ y), but also to Any P for other predicates P.

The definitions of Any and bag and set equality are available in Agda’s standard library (Data.List.Any), along with the proof of left distributivity (Data.List.Any.BagAndSetEquality).

Causal Semantics of (A)FRP

Monday, March 8th, 2010 by Neil Sculthorpe

Having been inspired by Conal Elliott’s blog post about the inaccuracy of the (Arrowised) FRP semantic model, I was trying to find as simple a model as possible that is inherently causal.

The basic concept in FRP is that of time-varying values called signals (or behaviours):

Signal : Set -> Set
Signal A = Time -> A

where Time is the set of non-negative real numbers.

In Arrowised FRP there are then Signal Functions, conceptually functions between signals:

SF : Set -> Set -> Set
SF A B = Signal A -> Signal B

Signal functions are nicely modular, and AFRP programs are constructed by composing signal functions together.

As an example of a signal function, consider “delay”, which delays an input signal by a specified amount of time:

delay : Time -> SF A A
delay d = \ s t -> s (t - d)

The problem with this semantics of SF is that it isn’t inherently causal. One could define a “future” signal function that is similar to delay, but produces the future input as the current output:

future : Time -> SF A A
future d = \ s t -> s (t + d)

In FRP we want to always be able to produce the current output from the current and past inputs, so this is bad.

Consequently, a causality constraint is usually placed on signal functions. All primitive signal functions must be causal, and all primitive combinators must preserve causality.

Causality can be defined:

Causal : SF A B -> Set
Causal sf = (s1 s2 : Signal A) -> (t : Time) ->
            ((t' : Time) -> (t' <= t) -> (s1 t' == s2 t')) ->
            (sf s1 t == sf s2 t)

So a causal signal function is a dependent pair of a signal function together with a proof of the causality property:

CausalSF A B = Sigma (SF A B) Causal
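
For example, a constant signal function can be packaged together with its (trivial) causality proof roughly as follows (a sketch, with refl standing for the reflexivity proof of ==):

constant : B -> CausalSF A B
constant b = ( (\ s t -> b) , (\ s1 s2 t hyp -> refl) )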

I was hoping to find a model of AFRP that is inherently causal, and so doesn’t require the addition of a causality property on top of it. Starting from the definition of SF:

SF A B = Signal A -> Signal B
{expanding}
SF A B = (Time -> A) -> (Time -> B)
{flipping arguments}
SF A B = Time -> (Time -> A) -> B
{constraining the input signal}
SF A B = (t : Time) -> ((t' : Time) -> (t' <= t) -> A) -> B

With the input signal constrained in this way, we are only required to know it up to the point in time at which we are sampling the output. In particular, the future function above can no longer be written: sampling the input at time t + d would require a proof that t + d <= t.

As far as I can tell, this is now inherently causal, and it seems to work out okay for defining primitives. For example, composition is:

_>>>_ : SF A B -> SF B C -> SF A C
sf1 >>> sf2 = \ t sa ->
  sf2 t (\ t' ltB -> sf1 t' (\ t'' ltA -> sa t'' (transitive ltA ltB)))
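
Some other primitives also seem to work out; for instance (a sketch, where <=-refl is an assumed reflexivity proof for <=):

identity : SF A A
identity = \ t sa -> sa t <=-refl

arr : (A -> B) -> SF A B
arr f = \ t sa -> f (sa t <=-refl)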

However, so far we’ve only considered continuous signals. FRP also contains discrete signals, most notably events. Events only occur at certain points in time, and the event signal is undefined elsewhere. We require that only a finite number of events occur within any finite interval. This can be modelled as a (co)list of time-delta/value pairs:

EventSignal : Set -> Set
EventSignal A = List (Dt , A)

where Dt is the type of positive Time, representing the time since the previous event occurrence.
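
For example, an event with value a at time 1 followed by an event with value b at time 3 would be represented with time deltas 1 and 2 (a and b being placeholder values):

es : EventSignal A
es = (1 , a) ∷ (2 , b) ∷ []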

We can incorporate this into the original model by parametrising signals with a signal descriptor that contains this time-domain information (rather than just a Set):

data SigDesc : Set1 where
  C : Set -> SigDesc
  E : Set -> SigDesc

Signal : SigDesc -> Set
Signal (C A) = Time -> A
Signal (E A) = List (Dt , A)

SF : SigDesc -> SigDesc -> Set
SF x y = Signal x -> Signal y

However, this doesn’t work with the causal version, as we no longer have the same Signal types. Once we’ve put the time constraint on, what we really have is a signal history, that is, a signal indexed by some finish time:

SignalHistory : SigDesc -> Time -> Set
SignalHistory (C A) t = (t' : Time) -> (t' <= t) -> A
SignalHistory (E A) t = EventList A t

where an EventList is just a list of time-delta/value pairs with an added constraint that the sum of all the time deltas does not exceed the index. (If anyone can suggest a better datatype for this then please do so; this is just the first one that came to mind.)

data EventList (A : Set) : Time -> Set where
  []   : {t : Time} -> EventList A t
  cons : {t t' : Time} -> (d : Dt) -> A -> EventList A t' ->
         (t' + d) <= t -> EventList A t

Returning to our signal function definition, we can now define signal functions for either kind of input signal, but only continuous output signals:

SF : SigDesc -> SigDesc -> Set
SF x (C B) = (t : Time) -> SignalHistory x t -> B
SF x (E B) = ?

And that’s where I got stuck.

I did consider:

SF : SigDesc -> SigDesc -> Set
SF x y = (t : Time) -> SignalHistory x t -> SignalHistory y t

but this isn’t properly causal. It is to some degree (at any given point in time, the output signal history does not depend upon future input), but there’s nothing preventing the past from being rewritten later (because at each point in time a new signal history is produced, which may or may not be consistent with previous histories).