November 21st, 2008 by Wouter

I talked about my recent attempts to understand call-by-push-value, a typed lambda calculus that is very precise about when evaluation occurs.

The idea underlying CBPV is to distinguish between the types of values and the types of computations. You can think of a value as a fully evaluated Int, String, pair of values, etc. Functions, on the other hand, are computations abstracting over values. I’ve put together an overview of all the types, terms, typing judgements, and evaluation rules (forgive me, but I’d much rather edit large formulas in LaTeX directly than in WordPress).
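To give a concrete feel for the distinction, here is a toy shallow embedding in Haskell. The names `Comp`, `Thunk`, `produce`, and `force` are my own illustrative choices; only the underlying U/F idea is Levy’s:

```haskell
-- A toy shallow embedding of the value/computation distinction.
newtype Comp a = Comp { runComp :: a }  -- a computation that returns an a

-- U: suspend a computation as a value (a thunk)
newtype Thunk a = Thunk (Comp a)

-- F: the trivial computation that immediately returns a value
produce :: a -> Comp a
produce = Comp

-- force: resume a suspended computation
force :: Thunk a -> Comp a
force (Thunk c) = c

-- Functions abstract over values and build computations:
addOne :: Int -> Comp Int
addOne x = produce (x + 1)
```

Of course this embedding is degenerate (there are no effects to sequence), but it shows where the thunk/force coercions sit in the types.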

My motivation for this is very different from Paul Levy’s: I’m dreaming of a language with enough laziness to be interesting, but with more predictable memory usage than Haskell. The next step is to figure out how to add (value) data, (computational) codata, effects (no IO monad), and enough syntactic sugar that you don’t need to be explicit about every thunk/force. There’s a relationship with Conor’s ideas about Frank and the recent work on focussing and duality at CMU that I haven’t yet managed to nail down.

10 Responses to “Call-by-push-value”

  1. Jean-Philippe Bernardy Says:

    Care to give pointers for the references you mention? (Frank, focussing)

  2. Noam Says:

    There’s a very close relationship between CBPV and focusing. You can view the types of CBPV as formulas of polarized intuitionistic logic: value types are positive formulas, computation types are negative. At the DTP workshop last spring, I went through some Agda code that shows how to view proofs in a focused sequent calculus for polarized intuitionistic logic as pattern-matching programs. There’s a pretty simple translation guide between this syntax and CBPV. (We came up with it when Paul Levy was visiting CMU this summer, but haven’t written it down anywhere…hmm…) The only significant differences between focused, polarized IL and CBPV are that the former distinguishes a few more syntactic categories, and more importantly, that we build “inversion” constructs (i.e., elimination of positive/value types, or introduction of negative/computation types) by “maximally deep” pattern-matching. For example, to eliminate a positive/value type, we have to pattern-match all the way down to variables of negative/computation type. But this idea also shows up in the semantics of CBPV, with what Levy and Søren Lassen called “ultimate pattern-matching”. It makes it a lot easier to reason about the operational behaviour of programs.

  3. Hank Says:

    Wouter, I don’t think I know what a computation is.

    Is Nat a value or a computation?


  4. Wouter Says:

    Jean-Philippe: You may want to have a look at some of Noam Zeilberger’s papers. He has written quite a lot about focusing. On the unity of duality is probably a good place to start.

    I don’t think Conor’s ever written anything up about Frank, but you might be interested in the slides of his talk at the Effects in Type Theory Workshop.

    Noam: Thanks for your reply! I’ve had “Read Noam’s Agda code” on my “Todo someday” list for about a year now. I guess I got distracted by writing my thesis :)

    Ralf Hinze was visiting when I talked about this, and he asked an interesting question: why does the function space necessarily build computation types? I’m afraid I couldn’t give him a satisfactory answer. Do you have a convincing explanation?

    Hank: Computation types are what Noam calls “negative types” – types characterised by their elimination rules rather than their introduction rules. For example, the function space operator builds computation types. In Paul and Noam’s work, functions abstract over value types but return computation types. Inhabitants of Nat would be values (which is different from a computation that will return a Nat).
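    To put the Nat point in Haskell terms (an illustrative sketch of my own, here modelling “computation” crudely as an IO action):

```haskell
-- Nat as a value type: inhabitants are fully evaluated data.
data Nat = Zero | Suc Nat deriving (Eq, Show)

-- A value: already there, nothing left to run
two :: Nat
two = Suc (Suc Zero)

-- A computation returning a Nat: something you must run first,
-- here modelled (crudely) as an IO action that prints before
-- producing its result.
twoLater :: IO Nat
twoLater = putStrLn "computing..." >> pure two
```

    In CBPV the second would have type F Nat rather than IO Nat, but the contrast is the same: `two` is an inhabitant of a value type, `twoLater` is not.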

  5. Noam Says:

    Wouter/Hank: so I think the terminology “value” vs “computation” is a bit misleading, because in ordinary functional programming, lambda’s are values. The way Paul explains value types vs computation types is in terms of the semantics: value types are modelled by sets, computation types by algebras. The coercions F (turning a value type into a computation type) and U (turning a computation type into a value type) represent an adjunction between sets and algebras (hence Paul’s use of the symbols F and U, by analogy to the free algebra functor, and the underlying set functor). So rather than thinking of computations as modelled by “computation types” as such, it’s better to think of them as modelled by this adjunction.
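    One way to see the adjunction concretely (an editorial sketch, using names FC/UC that are not Levy’s notation): any adjunction F ⊣ U induces a monad U ∘ F, which is how CBPV recovers the usual monadic account of effects. With “printing” as the effect, F pairs a value with accumulated output (standing in for the free algebra) and U forgets the algebra structure, so U ∘ F is the familiar Writer monad:

```haskell
-- F: free "printing" computation on a value type
newtype FC a = FC (a, String)
-- U: underlying value type of a computation
newtype UC b = UC b

-- T = U . F is a monad (here, essentially Writer String)
type T a = UC (FC a)

retT :: a -> T a
retT x = UC (FC (x, ""))

bindT :: T a -> (a -> T b) -> T b
bindT (UC (FC (x, out))) k =
  let UC (FC (y, out')) = k x
  in UC (FC (y, out ++ out'))
```

    The point of CBPV is that the decomposition through F and U is finer than the monad T itself.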

    The unity of duality paper also had some misleading terminology, and these days I like to speak of both positive and negative values, and positive and negative continuations. My favorite way to think about these is in terms of pattern-matching: positive types are eliminated (i.e., we build positive continuations) by pattern-matching, while negative types are introduced (i.e., we build negative values) by pattern-matching. This explains why the ordinary function space of functional programming is negative, because we define functions by pattern-matching. (A positive function space would be weird, because it would be eliminated by pattern-matching…but indeed that’s what the paper on “focusing on binding and computation” is about.) A slightly subtle point, though, is that we build negative values by pattern-matching against patterns of destructors. It so happens that a destructor pattern for A -> B contains a constructor pattern for A (and a destructor pattern for B). You can also think of the negative type of lazy pairs: the way we build a lazy pair is by pattern-matching against the two possible destructors, fst and snd, saying what value to return in each case.
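    To make the lazy-pair example concrete, here is a small Haskell sketch (names are illustrative; Haskell has no real copatterns, so lazy record fields stand in for matching on destructors):

```haskell
-- A lazy pair as a negative type, given entirely by its two
-- destructors. "Building" one means answering both destructors;
-- neither component is evaluated until it is actually projected.
data LazyPair a b = LazyPair { first :: a, second :: b }

-- Answering the fst destructor with a diverging term is fine,
-- as long as nobody projects it:
p :: LazyPair Int String
p = LazyPair { first = error "never forced", second = "ok" }
```

    Projecting `second p` succeeds even though `first p` would diverge, which is exactly the negative-type behaviour: the pair is characterised by how it responds to its destructors, not by constructed contents.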

  6. Andrej Bauer Says:

    You might be interested in looking at the toy language “levy” in my Programming Languages Zoo. It closely follows Paul Levy’s call-by-push-value language and might be a good starting point for experimentation with call-by-push-value.

  7. Paul Says:

    Noam: when you say “in ordinary functional programming, lambdas are values”, does “ordinary” mean lazy or eager? If you mean eager, I agree that lambdas are values, but cbv lambda translates into thunk lambda, so it remains a value in cbpv.

  8. Noam Says:

    Paul: I meant both lazy and eager. Specifically I’m thinking of Haskell, where both lambdas and lazy pairs are considered “values”. You could argue that that is misleading terminology, but in any case it’s a potential source of confusion with cbpv.

  9. Justin Says:

    Why the choice for big-step evaluation rules rather than single-step in your PDF? Just curious.

  10. Wouter Says:

    Justin: No particular reason, I’m afraid. You could just as well give a small-step semantics, I suppose. Most of these rules are from Paul Levy’s book on CBPV – he may have a better reason than I do.

Leave a Reply