Archive for June, 2007

Ords’n’Cards

Friday, June 29th, 2007 by Hancock

A couple of weeks ago I busked a talk on ordinals and cardinals.
Graham asked me to post a reference to a book.

One I learnt a lot from when I was young is

“The theory of sets and transfinite numbers”,
B.Rotman and G.T.Kneebone, Oldbourne Mathematical
Series, 1966.

My copy cost 35 shillings. I doubt very much it’s still obtainable –
except that I still have my copy, and I’ll (try to remember to) bring
it down when I’m back (19th July or so). There may well be better
books to recommend nowadays, but … I haven’t read them.

The next time I give such a talk, to functional programmers, I think
I’ll start off with the hierarchy of process types:

data Proc2 = Stop2 | Beep2 Proc2
           | Read2 (Nat -> Proc2)
data Proc3 = Stop3 | Beep3 Proc3
           | Read31 (Nat -> Proc3)
           | Read32 (Proc2 -> Proc3)
...

Why the weird “2”s and “3”s? Because Proc2 is the second number class
(the countable ordinals), Proc3 is the third, and so on. Exercises:
what’s the first number class? What’s the nought-th? Can you think of
a way to write the omega-th in Haskell? In a different direction, is
there something you are very familiar with that gives you addition?
How ’bout multiplication? Exponentiation?
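As a hedged hint at the addition exercise (all of the names below — `Nat`, `addP2`, `finite`, `count`, `omega` — are mine, for illustration only): addition on the second number class can be defined by recursion on the right-hand argument, with `Read2` supplying suprema.

```haskell
-- A sketch, not part of the talk: ordinal addition on Proc2,
-- by recursion on the right-hand argument.
data Nat = Zero | Succ Nat

data Proc2 = Stop2 | Beep2 Proc2 | Read2 (Nat -> Proc2)

-- a + 0 = a;  a + succ b = succ (a + b);  a + sup f = sup (a + f n)
addP2 :: Proc2 -> Proc2 -> Proc2
addP2 a Stop2     = a
addP2 a (Beep2 b) = Beep2 (addP2 a b)
addP2 a (Read2 f) = Read2 (addP2 a . f)

-- Finite ordinals are the Read2-free ones; count their Beep2s.
finite :: Int -> Proc2
finite 0 = Stop2
finite n = Beep2 (finite (n - 1))

count :: Proc2 -> Int
count Stop2     = 0
count (Beep2 p) = 1 + count p
count (Read2 _) = error "not a finite ordinal"

-- omega, the supremum of the finite ordinals.
omega :: Proc2
omega = Read2 fromNat
  where
    fromNat Zero     = Stop2
    fromNat (Succ n) = Beep2 (fromNat n)
```

On finite ordinals this agrees with addition of naturals, and it exposes non-commutativity: `addP2 omega (finite 1)` is a genuine successor of omega, while `addP2 (finite 1) omega` is again a supremum.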

…………………………………………………

By the way, I recently attended “Computability in Europe 2007”
in Siena: http://www.amsta.leeds.ac.uk/~pmt6sbc/cie07.html.
This is the first time I have attended such a large conference:
about 285 participants. There were 4 parallel “special sessions”,
and 7, yes 7 parallel streams of contributed talks. There were
a lot of talks on biological computation. Educational. I came back with
two breeze-block sized volumes of pre-conference proceedings
that interested parties are welcome to peruse. Actually, Conor
already has one: we should try to get Milli Maietti to come, because
in some sense, she’s in the downstairs flat with Conor and Thorsten
doing the OTT upstairs. (Poor woman…)

It was nice socially, but thin scientifically. (Apart from some
remarks by Andrej Bauer.)


Programming Modular Synthesizers in Haskell

Friday, June 8th, 2007 by George Giorgidze

I gave a very quick introduction to Yampa, a domain-specific language embedded in Haskell for causal modelling of highly dynamic hybrid systems. The term “hybrid” describes models with both continuous-time and discrete-time behaviour.

I then pointed out that music is, in general, a hybrid phenomenon, and that the related physical systems can be modelled nicely in a language like Yampa.

I described the current implementation of a modular synthesizer in Yampa (it’s still under development): I explained each building block of the synthesizer, how the blocks are modelled and interconnected, and argued why a language like Yampa is useful for this task and how it makes this particular system easy to model. I emphasized several features: signal functions (the fact that they are first-class entities), dynamic parallel switching constructs, and several other useful combinators.
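The flavour of first-class signal functions can be sketched without the real Yampa library (whose signal functions are continuous-time and abstract); below is a hedged, discrete-time analogue, in which `SF`, `(>>>>)`, `integral` and `run` are my own illustrative names, not Yampa’s API:

```haskell
-- Not the real Yampa: a discrete-time sketch of first-class signal
-- functions, each step mapping one input sample to one output sample
-- plus a continuation.
newtype SF a b = SF { step :: a -> (b, SF a b) }

-- Serial composition, as when chaining oscillator -> envelope -> filter.
(>>>>) :: SF a b -> SF b c -> SF a c
SF f >>>> SF g = SF $ \a ->
  let (b, f') = f a
      (c, g') = g b
  in (c, f' >>>> g')

-- A stateful block: running sum of the input samples (a crude integrator).
integral :: Double -> SF Double Double
integral acc = SF $ \x -> let acc' = acc + x in (acc', integral acc')

-- Drive a signal function with a finite input signal.
run :: SF a b -> [a] -> [b]
run _      []       = []
run (SF f) (a : as) = let (b, sf') = f a in b : run sf' as
```

Because signal functions are ordinary values, whole synthesizer modules can be passed around, composed, and swapped in at run time — for instance, `run (integral 0 >>>> integral 0) [1,1,1]` chains two stateful blocks and yields `[1,3,6]`.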

There were questions about the current status and the features already available. The current implementation can synthesize and play single-instrument music given in standard MIDI format; support for instrument collections is still under development and will be available soon.

I was not able to satisfy requests for a demonstration of the synthesizer, as the talk was not planned in advance and I did not have my laptop with me. I suggested demonstrating it at a later lunch; I hope that will make lunch more enjoyable.

At the end I received some useful comments and suggestions.

I will post future advances on this weblog, and will probably make the source available for download from my web page http://www.cs.nott.ac.uk/~ggg/ very soon.

Hurry up please it’s time

Friday, June 1st, 2007 by Hancock

I mentioned the problem of representing time in a
(working) mathematical model.

For many purposes, the natural numbers are perfect. This time is discrete,
has an origin, and no end.

For other purposes, we may prefer the (positive and negative) integers
[no beginning]. Or the rational numbers [dense: between two moments,
a third]. Or maybe the real numbers, or some interval, or (who knows)
the irrationals,… whatever.

Yampa takes very seriously the idea of values that are a function of
continuous time – these are called signals, and are in some sense
virtual – the real things are signal transformers. Discrete streams
are captured by maybe-valued signals f : T -> A+1.

Now I think I understand T=natural numbers quite well, as a signal
transformer is then of type Str A -> Str B, and any such function
which is continuous has an implementation by an object of type

nu x. mu y. B * x + (A -> y)

But this depends on Str B being a final coalgebra, at least
weakly. How far can one push eating techniques to time-domains like
the real numbers, or whatever suits a digital-analog interface?
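In Haskell, which conflates data and codata, the nested fixed point above can be transcribed directly; `SP`, `eat` and `sums` below are my own illustrative names, not from any library:

```haskell
-- nu x. mu y. B * x + (A -> y), squashed into one Haskell datatype:
-- an object that reads finitely many inputs before each output.
data SP a b = Put b (SP a b)    -- the B * x summand: emit an output
            | Get (a -> SP a b) -- the (A -> y) summand: demand an input

-- Interpret an SP as a stream transformer (lazy lists standing in for
-- streams; the extra [] case just accommodates finite test inputs).
eat :: SP a b -> [a] -> [b]
eat (Put b sp) as       = b : eat sp as
eat (Get f)    (a : as) = eat (f a) as
eat (Get _)    []       = []

-- Example: the running-sums stream transformer.
sums :: Int -> SP Int Int
sums acc = Get (\x -> Put (acc + x) (sums (acc + x)))
```

The `Put`/`Get` alternation is exactly the continuity constraint: each output may depend on only finitely many inputs.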

Lamport’s notion of stuttering
==============================

I mentioned an attractive and unusual approach to time taken by Leslie
Lamport.

http://research.microsoft.com/users/lamport/pubs/pubs.html#what-good

(Search for the word stuttering throughout the page.)

Lamport restricts his specification language TLA (a form of
linear-time temporal logic) so that the temporal properties it can
state (of streams of states) are all invariant under stuttering:
introducing or deleting finite repetition. The reason he does this is
deeply bound up with his notion of what is a (low-level)
implementation of a (high-level) specification.

Think of it like this: a formula of temporal logic describes a movie
of the system being modelled. The camera’s shutter speed must be fast
enough to capture every state-change that takes place. It can be
faster, in which case there will be repeated frames.

Because the formulas are stuttering-insensitive, they are not so much
properties of movies, as properties of what a movie is of. If you see
what I mean.
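For finite traces, stuttering-invariance can be made concrete: two movies show the same thing when they agree after collapsing consecutive repeated frames. A sketch (`destutter` and `stutterEq` are made-up names):

```haskell
-- Collapse finite repetitions: "aabbc" becomes "abc".
destutter :: Eq s => [s] -> [s]
destutter (x : y : rest)
  | x == y    = destutter (y : rest)
  | otherwise = x : destutter (y : rest)
destutter xs  = xs

-- Two finite traces are stuttering-equivalent iff a
-- stuttering-insensitive property cannot tell them apart.
stutterEq :: Eq s => [s] -> [s] -> Bool
stutterEq xs ys = destutter xs == destutter ys
```

A TLA-style property is then one that respects `stutterEq`: a predicate on traces that never distinguishes `"abc"` from `"aabbcc"`.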

Thorsten held forth for a while about “doubly special relativity”,
or something like that.

The Expression Problem

Friday, June 1st, 2007 by Wouter

Phil Wadler defines the expression problem as:

The goal is to define a data type by cases, where one can add new cases to the data type and new functions over the data type, without recompiling existing code, and while retaining static type safety.

I talked a bit about how one might go about tackling Phil Wadler’s expression problem in Haskell. It requires quite a bit of type class trickery, but nothing too nasty. I’m hoping to write something more substantial about this some time soon: watch this space!
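To give the flavour of the type-class trickery, here is a hedged sketch of one class-per-signature style (the “finally tagless” approach of Carette, Kiselyov and Shan — not necessarily the approach Wouter has in mind; all names are mine): each constructor becomes a class method and each function an instance, so new cases and new functions can both be added without touching existing code.

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- The base data type "by cases": each case is a method.
class ExprSem r where
  lit :: Int -> r
  add :: r -> r -> r

-- One function over the data type: evaluation.
instance ExprSem Int where
  lit n   = n
  add x y = x + y

-- A new function, added without recompiling the above: pretty-printing.
instance ExprSem String where
  lit n   = show n
  add x y = "(" ++ x ++ " + " ++ y ++ ")"

-- A new case, added in a separate class: negation.
class ExprSem r => NegSem r where
  neg :: r -> r

instance NegSem Int where
  neg = negate

instance NegSem String where
  neg x = "(-" ++ x ++ ")"
```

The same term, `add (lit 1) (neg (lit 2))`, evaluates to `-1` at type `Int` and prints as `"(1 + (-2))"` at type `String`, and both extensions preserved static type safety.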