As well as the names of relevant faculty members, the wiki includes links to their websites and PhilPapers profiles, where these exist, so that students can find out about the work and research interests of potential supervisors. Emeritus and affiliated faculty can also be listed on the pages for individual programmes.

So far there are almost 70 institutions listed, but there is still a lot to do. In particular, the coverage of some areas (such as continental Europe and Scandinavia) is rather sparse. I also aim to list research specialities for every faculty member included on the site, but this part of the project is still in its infancy. So if you can find a few spare minutes to help out—even just updating your own entry, or that of your department—it would be immensely helpful for prospective graduate students in philosophy of mathematics.

You can view and edit the wiki at philmath.net.

There is an obvious way in which this problem is shared by classical reverse mathematics, and that is its relationship to constructivism. Constructivists reject unrestricted use of the law of excluded middle, but reverse mathematics is developed within classical logic, and as such, proofs relying on the law of excluded middle are commonplace. Consequently there will be reverse mathematical theorems that cannot be countenanced by a constructivist, since they have no constructive proof.

Any attempt to convince a committed constructivist that certain parts of mathematics are beyond the reach of her axioms must employ principles that are themselves constructively acceptable. After all, the point here is to *analyse* foundational programmes—including varieties of constructivism—and not to rule them out before we even begin. The problem is the entailment relation: it’s plain that we can re-run the argument given in §3 of my talk, swapping out computable entailment for classical entailment, and classical proof for constructive proof.

Preserving justification is a key property for any entailment relation: if we are justified in accepting the antecedent, then we are also justified in accepting the consequent. But classical entailment does not, by constructivist standards, preserve justification. Considering just one case, a proof of *φ* ∨ *ψ* does not in all cases constitute a proof of *φ* or a proof of *ψ*. The constructivist is therefore entitled, by her own lights, to ignore classical results demonstrating the limits of her axiom system.

This points to a tension within reverse mathematics. On the one hand there is an obvious and understandable desire to prove all of the results one is able to, and explore precisely which theorems of ordinary mathematics (ordinary mathematics itself being strongly rooted in classical logic) can be shown to be equivalent to one of the major subsystems of second order arithmetic. On the other, as one bakes stronger assumptions into one’s analytical framework one becomes unable to faithfully analyse weaker foundational systems such as constructivism.

Many of the handouts from these seminars have now been made available on the Munich Center for Mathematical Philosophy’s website, in their section on learning mathematical philosophy. In particular, they have my handout on the compactness and Löwenheim–Skolem theorems.

Since the Formal Methods Seminar isn’t a full-on logic course, I didn’t have time to cover all of the material I would have liked to, and consequently the notes are somewhat incomplete. In particular the downwards Löwenheim–Skolem theorem is not proved, although of course the proof is available in any good logic textbook. I hope to correct this omission at some point, and if anyone has corrections or suggestions, please do email me.

During October, I taught this material again to a new set of Master’s students. Instead of proving the upwards direction of Löwenheim–Skolem I taught the more fundamental downwards direction as a simple application of the completeness theorem, albeit not in its full generality, which would have required more time and technical finesse than I could assume.

The updated notes include this proof and have been restructured somewhat, but otherwise they are largely the same as last year’s.

In ordinary mathematical practice, mathematicians prove theorems, reasoning from a fixed set of axioms to a logically derivable conclusion. The axioms in play are usually implicit: mathematicians are not generally in the habit of asserting at the beginning of their papers that they assume ZFC. Given a particular proof, we might reasonably ask which axioms were employed, making explicit the author’s assumptions.

Now that we have a set of axioms Γ which are sufficient to prove some theorem *φ*, we could further inquire whether or not they are *necessary* to prove the theorem, or whether a strictly weaker set of axioms would suffice. To a first approximation, reverse mathematics is the programme of discovering precisely which axioms are both necessary and sufficient to prove any given theorem of ordinary mathematics.

Expanding on this slightly, in reverse mathematics we calibrate the proof-theoretic strength of theorems of ordinary mathematics by proving equivalences between those theorems and systems of axioms in a hierarchy of known strength. This characterisation should provoke at least three immediate questions. Which hierarchy of systems? How do these equivalence proofs work? And just what is “ordinary mathematics”?

For the answers to these questions, and many more, you can read my introduction to reverse mathematics. If you notice any mistakes, please drop me a line via email or Twitter.

A note for the eagle-eyed: at one point I claim that an equivalence between ACA_{0} and some statement *φ* would have to involve a theorem scheme, and thus cannot be formalised at the level of the object language. However, the latter does not strictly follow from the former, as ACA_{0} is finitely axiomatisable. For more details on this, read Peter Smith’s handy little guide to finitely axiomatising ACA_{0}.

One of the intermediate steps is the system *T*, originally developed by Kurt Gödel in his Dialectica interpretation, and reformulated by Tait in 1967. I won’t try to cover the history here—there are plenty of articles out there if you’re interested. (The names of the articles alluded to are given in the references at the end.)

The modern formulation of *T* is as a typed λ-calculus with natural number and boolean types. In chapter 7 of *Proofs and Types*, Girard shows how the natural numbers can be represented in *T*, and provides a definition of addition based on the recursion operator R. He goes on to say that

> Among easy exercises in this style, one can amuse oneself by defining multiplication, exponential, predecessor etc.

So, let’s amuse ourselves! Thinking about these simple examples is helpful because it gives a working demonstration of how the recursion operator adds power back to the language which was lost when the restrictive type system was introduced.

For example, the untyped λ-calculus allows one to use Church encoding to represent the natural numbers, but in the simply typed lambda calculus one is barred from forming the relevant terms by the requirement that every term be well-typed.

During the reading group meeting where we discussed this chapter, we managed to assemble a working definition for multiplication, which I’ll present below along with the other arithmetical functions Girard suggests. However, before we get onto that, I’m going to briefly sketch how the type of natural numbers Nat is defined in *T*; the introduction and elimination rules for terms of that type, on which everything else is built; and how an addition function can be defined in terms of those rules.

A small niggle: Girard actually talks about an *integer type* Int. When one examines the nature of the type it becomes clear that what’s intended is a natural number type, so I’m going to talk solely about natural numbers and call the type in question Nat, not Int.

System *T* is a variant of the simply typed lambda calculus, so it has three sorts of rules: those stating which *types* exist; those stating which *terms* of those types can be formed; and reduction rules stating how some terms can be reduced to others. These are spelled out in §3.1 of *Proofs and Types*, so if any of the following doesn’t make sense, I recommend checking that out (the book is freely available from the link above).

For system *T*, there are also two constant types, Nat and Bool; here we shall only be concerned with Nat so I shall omit the rules mentioning Bool. The additional term formation rules for system *T* specify how one can introduce and eliminate terms of Nat and Bool. The introduction rules are:

- O is a (constant) term of type Nat;
- if *t* is a term of type Nat, then S*t* is a term of type Nat.

These mirror their equivalents in Peano Arithmetic almost exactly—that is, they represent zero and the successor function. There is also the elimination rule for Nat, which introduces the recursion operator R:

- If *u*, *v* and *t* are of types *U*, *U* → (Nat → *U*) and Nat respectively, then R *u* *v* *t* is of type *U*.

The recursion operator allows us to combine the introduction rules to give recursive definitions of arithmetic operations such as addition and multiplication in a form not too far removed from their counterparts in Peano Arithmetic. Its reduction rule is given by two cases:

- R *u* *v* O reduces to *u*.
- R *u* *v* (S*x*) reduces to *v* (R *u* *v* *x*) *x*.

The first is the *base case*, which ensures that the recursion is well-founded. The second rule gives, for the recursive case, a way of eliminating an instance of the recursion operator. The term it reduces to looks more complex than the one we started with, but it’s clear that eventually we’re going to run out of successors and the reduction process will terminate (a proof of this is given in §7.2 of *Proofs and Types*).

With the preamble out of the way, we’re finally ready to start defining the usual arithmetic functions within *T*. The easiest place to start is with the definition of addition.

*x* + 0 = *x*

*x* + S(*y*) = S(*x* + *y*)

There’s no recursion operator in sight in this definition, but of course there is recursion occurring. The first equation gives the base case, defining the result when the addition function is used to combine a number *x* with 0. The second equation defines the recursive case, giving the result when we add the successor of some number *y* to some number *x*. The recursion operator R has three arguments, and the first two map to these cases: the value of the base case (here it would be *x*) and the function to apply in the recursive case (here, the successor function).

With this in mind, we can formulate an addition function in *T* pretty straightforwardly. The variables bound by lambda abstractions have been given type annotations, just to make clear how the types of these terms relate to those specified in the introduction and elimination rules for terms of type Nat. I shall omit these annotations from the definitions in the rest of the article.

add = λx^{Nat}.λy^{Nat}.R x (λz^{Nat}.λw^{Nat}.Sz) y

The second term given to R is just a double lambda abstraction over S*z*, representing a two-argument function; the second argument (the numbers accumulating on the right of the reduction) will always be thrown away. The type of the term must be, by the definition we gave earlier, *U* → (Nat → *U*). In this case we’re saying that the type *U* is in fact the constant type Nat, so the term has the type Nat → (Nat → Nat).

I should note that there is no mechanism in *T* for giving names to definitions (i.e. let binding), so equations like the one given above should simply be seen as expressing abbreviations, not as expressions in the language of *T*, although the lambda term on the right hand side certainly is.

Again, let’s begin with the definition of multiplication we find in Peano Arithmetic.

*x* ⋅ 0 = 0

*x* ⋅ S(*y*) = (*x* ⋅ *y*) + *x*

Here we can see that in the base case, when we combine 0 with *x* the result is 0. In contrast, in the base case for addition when we combine 0 with *x* the result is *x*. This is reflected in the following definition of multiplication in *T*, with O as the base case, and *addition* as the operation applied in the successor case.

mult = λx.λy.R O (λz.λw.(add) x z) y

Expanding the definition of add in mult (with some trivial α-conversion and β-reduction), it’s easy to see the double recursion that we also see in the PA definitions.

mult = λx.λy.R O (λz.λw.R x (λu.λv.Su) z) y

Just as multiplication is built on addition, adding an extra layer of recursion, so exponentiation is built on multiplication. The PA equations are

*x*^{0} = 1

*x*^{S(y)} = *x*^{y} ⋅ *x*

Again, all we need to do to define this new function is state the base case (SO, corresponding to the rule that any number raised to exponent 0 is 1) and the function to apply in the recursive case (multiplication).

exp = λx.λy.R (SO) (λz.λw.(mult) x z) y

The predecessor function is given by the following equations.

Pred(0) = 0

Pred(S(*x*)) = *x*

Compared to the nested recursions given above, the definition of the predecessor function in *T* is strikingly simple.

pred = λx.R O (λy.λz.z) x

The insight which allows one to derive it is similarly simple. An application of the predecessor function to some successor (in the case of O the predecessor will simply be O too) will be of this form: R O (λy.λz.z) (S*x*). When we look at the term after one reduction, we have something like (λy.λz.z) (R O (λy.λz.z) *x*) *x*.

Ignore the new occurrence of the recursion operator; just consider it as another term. Instead, look at the whole formula as an application. Clearly, the first argument to the function on the left will always be discarded: the whole term will always reduce simply to *x*, the predecessor of S*x*, our initial argument.
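To see these reductions in action, here is a hypothetical sketch in JavaScript (purely illustrative, and no part of *T* itself, since JavaScript is untyped): the numeral S…SO is modelled as a plain integer, with `t - 1` playing the role of stripping a successor.

```javascript
// Illustrative sketch only: the recursion operator R and the arithmetic
// functions defined above, with numerals represented as integers.
//   R u v O      reduces to u
//   R u v (S x)  reduces to v (R u v x) x
function R(u, v, t) {
    return t === 0 ? u : v(R(u, v, t - 1), t - 1);
}

// add  = λx.λy.R x (λz.λw.Sz) y
var add = function(x, y) {
    return R(x, function(z, w) { return z + 1; }, y);
};

// mult = λx.λy.R O (λz.λw.(add) x z) y
var mult = function(x, y) {
    return R(0, function(z, w) { return add(x, z); }, y);
};

// exp  = λx.λy.R (SO) (λz.λw.(mult) x z) y
var exp = function(x, y) {
    return R(1, function(z, w) { return mult(x, z); }, y);
};

// pred = λx.R O (λy.λz.z) x
var pred = function(x) {
    return R(0, function(y, z) { return z; }, x);
};
```

Evaluating something like `exp(2, 10)` unwinds through `mult` and `add` down to repeated applications of R, just as the expanded term for mult above suggests, and yields 1024.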

- Girard, Lafont and Taylor’s *Proofs and Types* is the primary reference for this article.
- System *T* was first introduced by Kurt Gödel in his 1958 *Dialectica* article ‘Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes’, which is reproduced (with an English translation) in volume II of his *Collected Works*, edited by Feferman et al.
- The more modern version of *T* which Girard et al. work from was first given by William Tait in his 1967 *JSL* article, ‘Intensional interpretations of functionals of finite type I’.
- From a programming languages perspective, Benjamin Pierce provides a good introduction to the simply typed lambda calculus in chapter 9 of his book *Types and Programming Languages*.
- A more general introduction to the λ-calculus is Hindley and Seldin’s *Lambda-Calculus and Combinators*, while Barendregt’s *The Lambda Calculus: Its Syntax and Semantics* is the definitive reference work in the field.

Firmin is fundamentally very simple: all it does is parse descriptions of animations and then execute them by manipulating the `style` property of the animated element. Here’s an example: moving an element right 200 pixels and down 100 pixels in half a second, while rotating it clockwise through 45°.

```
var elem = document.getElementById("an-element");

Firmin.animate(elem, {
    translate: {x: 200, y: 100},
    rotate: 45
}, 0.5);
```

Firmin was partly born out of a desire to write a lightweight, standalone animation library which didn’t rely on an existing DOM framework like jQuery or YUI. The current version is still missing a number of features and only works on a limited subset of modern browsers, but if you’re developing applications for WebKit-based browsers it’s ready to be used in anger.

There’s a reasonably complete set of documentation available, which introduces the framework and provides a fairly comprehensive set of API docs. If you just want to know how to use Firmin, that’s the place to go.

For the remainder of this article I’m going to discuss using CSS transforms and transitions to create animations, explain some of the problems I’m trying to solve, and address a few of the issues I faced while developing Firmin. If you like what you see (or think I’m missing a trick somewhere), please fork the project on GitHub or just report an issue.

Animation in HTML documents has always been the preserve of JavaScript, purely because there was no other way to accomplish it. The CSS Transitions Module is an attempt to alter that state of affairs by providing native animation primitives through CSS. One can write rules like the following to make an element animate when its `opacity` property changes:
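The rule in question might look something like this (a sketch: the selector is illustrative, and contemporary browsers required vendor-prefixed forms such as `-webkit-transition-property`):

```css
.fades {
    /* Any change to this element's opacity will be animated over 0.5s. */
    transition-property: opacity;
    transition-duration: 0.5s;
}
```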

Since the animation will only run when the `opacity` property is changed, JavaScript will still usually be needed to initiate the transition, although there are some pseudo-class selectors, like `:hover`, which can trigger transitions.

Furthermore, one has to write these rules in advance, and there are few abstractions available to make such code as flexible as one might like. While it might work for small sites with limited functionality, this approach is not likely to scale to larger endeavours.

By modifying the DOM properties of the elements to be animated with JavaScript, we can combine the advantages of speed and simplicity offered by the CSS Transitions Module with the reusability and expressive power of a traditional animation framework.

Animation hasn’t tended to perform well in web browsers. This is partly because of how it’s had to be implemented, as a function which fires every hundredth of a second or so to change the state of the animated element to the next ‘frame’ of the animation. This is not easy for browser engines to optimise: a more promising approach would be built-in primitives for animation which could then be optimised at a lower level, closer to the hardware.

This is what the CSS transitions API promises, and when used in concert with CSS transforms, a number of WebKit-based browsers (such as Safari on the Mac, and Mobile Safari on the iPhone and iPad) will actually hardware accelerate the animation. This results in noticeably smoother animations which don’t slow down other operations in the way that traditional animations do.

The animation of transform properties can be hardware accelerated because modifying them doesn’t force a reflow of the page; the same is true of the `opacity` property, which is why these are the only properties whose animation can currently be hardware accelerated. I used this trick on the Paul Smith homepage to improve the performance of a full-page fade in browsers which supported CSS transitions. We actually published the code in question under a BSD license, so you can have a look at the source code if you’re interested.

Transitions are just CSS properties, which hands an immense amount of power directly to developers, who can start writing quite complex animation routines with barely any JavaScript. One example I saw recently was this fantastic slideshow, which basically works by changing the `className` of the slides. The minimal amount of code needed to get it off the ground is extremely impressive, and a great demonstration of some of the possibilities which this combination of transforms and transitions provides.

There are four transition properties, which I’ll go through briefly before explaining how they can be used to create animations. All the details can be found in §2 of the W3C specification. The first is the `transition-property` property, which specifies those animatable CSS properties of the DOM element in question which are to be animated.

```
.animated1 {
    /* Only the opacity and background-color properties will be animated. */
    transition-property: opacity, background-color;
}

.animated2 {
    /* All of the element's animatable properties will be animated. */
    transition-property: all;
}
```

Then there are the `transition-duration` and `transition-delay` properties, which specify how long the animation should last, and how long to wait before starting it.

```
.animated3 {
    /* Animations should last for 2 seconds. */
    transition-duration: 2s;
}

.animated4 {
    /* Animations should begin after 147 milliseconds. */
    transition-delay: 147ms;
}
```

Finally, there’s the `transition-timing-function` property, which specifies a function describing the evolution of the animatable properties over time. In other words, timing functions give the *rate of change* of the animatable properties. Some of them start slow and end fast; others are fast at the beginning and slow down towards the end. Timing functions are specified using cubic Bézier curves: there are five named functions one can use (`ease`, `linear`, `ease-in`, `ease-out` and `ease-in-out`), as well as a `cubic-bezier` function that can be used to create one’s own timing functions by specifying four control points.

```
.animated5 {
    /* The ease-in-out function starts slow, speeds up, then slows back down
       towards the end of the animation. */
    transition-timing-function: ease-in-out;
}

.animated6 {
    /* The timing function specified here is actually equivalent to the
       ease-in function. */
    transition-timing-function: cubic-bezier(0.42, 0, 1.0, 1.0);
}
```

Of course, the downside to writing all one’s animation code in CSS is that everything must be specified in advance: nothing can be calculated at runtime. Fortunately, there is a DOM interface to the way any element is styled: its `style` property. Manipulating this property is how all existing JavaScript animation frameworks work, and Firmin is no different. Let’s consider a simple example: making an element transparent (“fading it out”). Using Firmin, this is done as follows:
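The call, based on the `Firmin.animate` example at the start of this article, would be along these lines (a sketch; `elem` is the element to be faded):

```javascript
// Fade the element out over 1 second.
Firmin.animate(elem, {opacity: 0}, 1);
```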

All this does is give the element an opacity of 0, and set its `transitionDuration` property to `"1s"`. Let’s translate it into CSS to see how it would work there, assuming we gave the element in question the `fade-out` class at an appropriate moment.
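A sketch of the CSS translation (the `fade-out` class name comes from the text above; real-world use at the time would have needed the `-webkit-` prefixed properties):

```css
.fade-out {
    opacity: 0;
    transition-duration: 1s;
}
```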

I wrote at the beginning of this article that Firmin is fundamentally very simple, and here we can see just how simple. In the library there’s a method, `Firmin.Animation#exec`, which takes the properties to be animated and the transition properties that will animate them, loops over them, and applies them to the relevant element. Since it’s so straightforward, I’m going to show the entire function source, with a few comments added to explain what’s going on.

```
Firmin.Animation.prototype.exec = function(element) {
    // An instance of the Animation prototype represents a single animation:
    // its style property carries any property, such as opacity, which is
    // neither a transition nor a transform. It's aliased as the properties
    // variable for convenience.
    var properties = this.style, property;

    // The properties object is modified by passing it to the transition and
    // transform builder methods, which basically just translate the internal
    // representations of those properties into something that can be copied
    // onto the DOM element being animated.
    if (this.transition) properties = this.transition.build(properties);
    if (this.transform)  properties = this.transform.build(properties);

    // Then this loop simply sets each specified property on the element in
    // question. This is, in the end, all that the execution of the animation
    // consists of.
    for (property in properties) {
        element.style[property] = properties[property];
    }
};
```

I’m going to discuss CSS Transforms in more detail later on, so for now I’ll just mention that they modify the shape and position of elements, and the nature of this modification makes them well-suited to creating certain sorts of animations. In particular, they give the web developer an arsenal of effects that can be deployed to great effect in the creation of web applications which retain much of the feel and responsiveness of native applications on mobile devices, and as such also form a natural replacement for some Flash-based sites.

One of the best things about Ojay is the simplicity of executing actions sequentially: animate this element here, then hide it, then run this function, then do this other thing, and so on. Firmin lets you chain animations in much the same way, although it doesn’t have Ojay’s generality (in Ojay you can change the object which methods are applied to partway through a chain of method calls, and do all the other nifty things MethodChain supports).

```
Firmin.translateY(elem, 200, 1)
    .rotate(45, 0.75, function(el) {
        el.innerHTML = "Something something";
    })
    .animate({opacity: 0.1}, 0.5);
```

Two things are on show in this example: animation chaining, and callbacks. Implementing them was very simple, but they add a lot of power to the library. Animation chaining basically just involves building up a queue of animations with each method call, then firing them in order. For the gory details, have a look at the source code—it’s all commented pretty thoroughly.
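As an illustration of the pattern (a hypothetical sketch, not Firmin’s actual code): each method call pushes an animation onto a queue and returns the chaining object, and a timer fires the next animation once the previous one’s duration has elapsed.

```javascript
// Hypothetical sketch of animation chaining: a queue of animations, run in
// order, with setTimeout used to wait out each animation's duration.
function Chain() {
    this.queue   = [];
    this.running = false;
}

// Add an animation to the queue; returning `this` is what enables chaining.
Chain.prototype.animate = function(duration, run) {
    this.queue.push({duration: duration, run: run});
    if (!this.running) this.next();
    return this;
};

// Run the next queued animation, then schedule the one after it.
Chain.prototype.next = function() {
    var anim = this.queue.shift(), self = this;
    if (!anim) { this.running = false; return; }
    this.running = true;
    anim.run();
    setTimeout(function() { self.next(); }, anim.duration * 1000);
};
```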

It’s worth mentioning that animations are run on a timer: every time an animation is run, the library calls `setTimeout` with a callback function that will run the next animation (if there is one) once the first animation finishes. Originally I added a listener to the animated element for the `transitionend` event (see §5 of the specification), but it’s rather patchily supported, and some people have reported bugs when using it, so in the end I went with a simple timeout.

Every animation function and method that Firmin has accepts a callback function as an optional last argument, so you can run arbitrary code once an animation finishes. With this in mind, let’s run through the example above, this time with comments.

```
// First, the element is moved 200 pixels down, the animation taking 1 second.
Firmin.translateY(elem, 200, 1)
    // Then it's rotated through 45 degrees, taking 0.75 seconds.
    .rotate(45, 0.75, function(el) {
        // Once that animation finishes, the content of the element is set to
        // an arbitrary string.
        el.innerHTML = "Something something";
    })
    // Finally, it's faded to 10% opacity in 0.5 seconds.
    .animate({opacity: 0.1}, 0.5);
```

The important lesson here is that these actions are executed *sequentially*, each animation only being run once the previous one is complete.

From everything discussed above, one might get the impression that transitions are great, and a natural replacement for existing JavaScript animation technology. I’m going to end this section with some reasons why this isn’t the case, at least not yet.

The obvious point is that they’re not widely supported, and won’t supplant the current way of doing things until they are. This is fair enough, although I will say that if you know you don’t need to support older browsers—for instance, if you’re writing web applications targeted at iOS devices—then this won’t be a problem.

There are also some real technical limitations to transitions as an animation mechanism, at least at present. Because the tick-by-tick modification of the DOM elements involved is removed from the animation framework (since it’s being done by the browser, based on the transition properties given to the element), certain sorts of fine-grained control of animations are no longer possible.

Firstly, one can’t create a completely arbitrary timing function: only those specifiable by four control points as a cubic Bézier curve are allowed. Relatedly, one can’t stop an animation halfway through. Doing this would require direct access to the intermediate states of the animation—something the transitions API doesn’t give one.

In order to add something like jQuery’s `stop` method, one would have to write a function to calculate all those intermediate stages, which would need to produce exactly the same output as the browser’s. Apart from being an unnecessary headache, it’s the sort of low-level work that misses the point of *having* these new animation primitives in the first place.

Complementing the Transitions module are the CSS 2D and 3D Transforms Modules. Until now the facilities available to transform the position and shape of an element have been limited to changing the values of various aspects of the box model: its height, width, margins, padding, and position relative to some container.

The Transforms modules offer a range of pure CSS functions to rotate, skew, scale and translate DOM elements, based on the extensive work done on the SVG specification. Scaling an element to twice its initial width and rotating it 45° around its top left-hand corner is simple:
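A sketch of such a rule (the selector is illustrative, and browsers of the time required the `-webkit-transform` and `-webkit-transform-origin` forms):

```css
.scaled-rotated {
    /* Double the width, then rotate 45° about the top left-hand corner. */
    transform: scaleX(2) rotate(45deg);
    transform-origin: top left;
}
```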

Using a transform function creates a new local coordinate system for a given element and its descendants. All transform functions (rotate, skew, translate, scale etc.) are defined in terms of a transformation matrix. Firmin translates each use of these API-level transform functions into an equivalent matrix transformation and then concatenates them to determine the final value. By performing these operations internally rather than deferring them to the browser, it is possible to introduce stateful transforms, where each new state of an element is based on its current state.

Both the CSS Transforms specifications include a requirement to expose an implementation of an object following the CSSMatrix interface. This is important for anyone wanting to write an animation library like Firmin, since it provides a way of accomplishing two key tasks. Firstly, it exposes the internal representation of the transformation state of a DOM element. This enables library authors to deal with all transforms in a simple, unified way rather than having to do messy hacking around with multiple transform properties. Secondly, it provides a way to calculate the effect of combining different transformation states. This is essential to developing support for stateful transforms.

Thus far, only WebKit-based browsers such as Chrome and Safari expose a transformation matrix class to the user, WebKitCSSMatrix. Firefox, despite some limited support for 2D transforms, doesn’t provide an implementation of CSSMatrix. Neither does Opera: a CSSMatrix class was added in Opera Presto 2.5, but it appears to have been removed from more recent versions of their layout engine.

It’s my hope that Firmin can eventually become a genuinely cross-browser library, so in that spirit I’ve written FirminCSSMatrix, an implementation of the CSSMatrix interface in JavaScript. Using this I’m hoping to work around the limitations of Firefox and Opera’s support for CSS transforms, and provide at least some of Firmin’s functionality in those browsers. It’s based on WebKit’s matrix code, and supports both 2D and 3D transformations. However, it’s still a little buggy, so I’ve removed it from the 1.0 release until I have time to resolve the remaining issues.

When the `transform` property of a DOM element is updated, the new transform has no relation to the old one. In a sense this is intuitive—one CSS property is simply being changed to another. However, for many of the things one might want to do with Firmin, it makes less sense. Think about moving a DOM element around with the `translate` functions. To move an element 400 pixels to the right, call `translateX` and pass in the element and the argument `400`.

Then call `translateY` and pass in `200` to move it down 200 pixels. So far, so good.

Now consider moving it back to its starting point. We’ve come 400 pixels along and 200 down, so we should move it 400 to the left and 200 up. But since transforms are carried out relative not to their *current* transformation state but to their *initial* transformation state, we can’t just call `translateX` and `translateY` again with `-400` and `-200` as the arguments: that would end up moving the element 400 pixels to the *left* of and 200 pixels *above* its original position. Instead, we write this:
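With only absolute transforms available, returning to the origin means restating the whole transformation, along these lines (a sketch; it assumes a `translate` function taking both coordinates, in line with the `translate` functions mentioned above):

```javascript
// Return the element to its origin by giving the absolute translation outright.
Firmin.translate(el, {x: 0, y: 0}, 1);
```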

Clearly under many circumstances this won’t be a problem, but working in an absolute coordinate system when a relative one is more appropriate is liable to lead to unnecessary confusion, and there will be circumstances where keeping track of an element’s position will introduce unnecessary complexity into the code.

To overcome this annoyance, Firmin comes with support for *stateful transforms*. The chaining object returned by all animation methods actually stores the previous transforms, so any new transforms applied to the element being animated can be based on its current transformation matrix. This lets us rewrite the above code to better accord with our intuitions for this situation.

```
// Move the element right 400px and bind the anim variable to the current
// transformation state.
var anim = Firmin.translateX(el, 400, 1);
// Move the element down 200px relative to its current position.
anim.translateYR(200, 1);
// Move the element back to its origin.
anim.translateR({x: -400, y: -200}, 1);
```

Note the **R** suffixes: these distinguish the *relative transform functions* from their absolute counterparts. Every normal transform function, which transforms an element relative to its initial position before any transforms are applied, has a relative version which transforms the element relative to its current transformation state.

Of course, the example above can be rewritten in a chained style.
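Since every animation method returns the chaining object, and the chained methods take the same arguments minus the element, it might look like this:

```
Firmin.translateX(el, 400, 1)
      .translateYR(200, 1)
      .translateR({x: -400, y: -200}, 1);
```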

3D transforms are also not yet widely supported—like the CSSMatrix interface, they’re only available in WebKit-based browsers like Chrome and Safari. Since Firmin itself is currently only targeted at those platforms, it already has quite a lot of 3D transform support. In particular, it includes all the 3D transform methods (with one omission, discussed below), so elements can be translated, scaled and rotated in three dimensions.

I won’t spend a lot of time explaining 3D transforms, since there are plenty of articles out there which introduce them in a more comprehensive way than I can manage here. Probably the best place to start is the WebKit blog post which first introduced them, while the Apple developer documentation is also good, and includes some helpful diagrams. This Art of Web article might be helpful, and I have a longer list of related articles saved on Pinboard.

The major missing piece in Firmin’s support of 3D transformations is an interface for manipulating the `perspective` and `perspective-origin` properties, which control the “camera position” for the whole page, and the `perspective` transformation function, which modifies that perspective for a single element. These are vital to a really effective use of 3D transforms, so once I figure out a nice way to introduce them and explain their use, they will be added to Firmin. Good documentation is always important, but when the subject matter is slightly obscure and in all likelihood outside most developers’ experience, it’s even more vital.

Apart from full support for 3D transforms, and potentially adding some more convenience functions, the one major thing Firmin is missing is some great demos. Nothing communicates the power of a library or piece of functionality as well as a demonstration of what can be accomplished with it. Unfortunately, this is not a task I have time for at the moment, but if you build something cool, do get in touch.

Before we get going, a brief note on how Nginx and Passenger fit together. Nginx is a web server, much like Apache. Passenger is an application server for web applications written in Ruby. Passenger provides an Nginx module, and the only way to add a module to Nginx is to compile it with that module, so even if you’re only upgrading or installing Passenger, you still need to recompile Nginx.

This article is a guide to installing these two programs from scratch, but it’s also a guide to upgrading one or both. If you get lost at any point—if something happens which is outside the scope of this guide—then I strongly recommend taking a look at the Passenger users guide, which is admirably clear and comprehensive.

The commands that follow need to be run as root. I’m the only admin on my system, so I just become root; if you prefer, you can prefix commands with `sudo`. Commands will be prefixed with the `$` character; anything else is output provided for expository purposes and should not be typed into your terminal.

`$ su -`

Passenger is a Ruby web server, so the first dependency you’ll need to install is Ruby. It should be available from your package manager; on Ubuntu you can use `apt-get`.

`$ apt-get install ruby`

I prefer to compile my own, but I shan’t cover that here. From here on in I’ll assume that you have a working, Passenger-compatible Ruby installation and Rubygems. You’ll also need `wget` or `curl` to download files.

Both Passenger and Nginx, as one would expect, have various dependencies. The list below is exhaustive for the configuration I outline, but if you compile more modules into Nginx then you may have to install additional prerequisites.

`$ apt-get install build-essential libpcre3-dev libssl-dev zlib1g-dev`

Nginx’s HTTP rewrite module requires the PCRE library sources, so it can parse regular expressions in `location` directives. It also needs the OpenSSL header files for SSL support, and zlib so that responses can be compressed.

Downloading and installing the Passenger library is extremely simple: just run

`$ gem install passenger`

This is a good time to note down just where the Passenger gem is installed. The authors provide a handy command to let the user know just that.

```
$ passenger-config --root
/usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.17
$ PASSENGER_NGINX_DIR=`passenger-config --root`/ext/nginx
```

If you want to install Nginx with modules that depend on the PCRE library, you’ll need to download its source code too. Otherwise, skip this step and move on to the next section.

```
$ cd /usr/local/src
$ wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.31.tar.gz
$ tar -xzvf pcre-8.31.tar.gz
$ cd pcre-8.31
$ PCRE_DIR=`pwd`
```

At this point you’ll need to download the Nginx source files to your server. I keep mine in `/usr/local/src`, but you can do this anywhere. As of the time of writing, the latest stable version of Nginx is 1.2.4, but of course that may well have changed by the time you read this, and so the URL to pass to `wget` will have changed too.

```
$ cd /usr/local/src
$ wget http://nginx.org/download/nginx-1.2.4.tar.gz
$ tar -xzvf nginx-1.2.4.tar.gz
$ cd nginx-1.2.4
$ NGINX_SRC_DIR=`pwd`
```

We’re now at the point where some decisions need to be made. These include where you want to install the `nginx` binary, where your Nginx config file is going to live, where it should write log files, and most importantly in this context, which modules you want to compile with it. I’m going to describe a fairly standard, minimal setup with TLS and Gzip support, the Substitution module for replacing text in responses (good for embedding analytics code), as well as compiling in the Passenger module. The install options page on the Nginx wiki is very comprehensive if you want to deviate from the ones provided here.

```
$ ./configure \
    --prefix=/usr/local \
    --sbin-path=/usr/local/sbin \
    --conf-path=/etc/nginx/nginx.conf \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --with-http_ssl_module \
    --with-http_gzip_static_module \
    --add-module=$PASSENGER_NGINX_DIR \
    --with-pcre=$PCRE_DIR \
    --with-http_sub_module
$ make
$ make install
```

I’m only going to cover the basics of setting up and configuring Nginx and Passenger—there are plenty of other articles out there which provide comprehensive coverage of this topic. For that reason, exposition here will be relatively brief and high level; you can get the details elsewhere should you need them.

Managing a web server process by hand is, of course, not something anyone wants to do—that’s what initialisation scripts are for. Just copy the source of that script to `/etc/init.d/nginx` (depending on the details of your setup, you might need to tweak it a little), then make it executable, and add it to the default run levels so it’ll start automatically on boot. For details, consult the Linode Library articles on installing Nginx. They have their own init script which you might prefer to the one linked above.

```
$ chmod +x /etc/init.d/nginx
$ /usr/sbin/update-rc.d -f nginx defaults
```

The next step is to set up your `nginx.conf` file. The best way to start is to use the configuration that comes with the Nginx source as a basis for your own. Again, Linode have some excellent articles on configuring Nginx.

```
$ cd $NGINX_SRC_DIR
$ cp -R conf /etc/nginx
$ mkdir /etc/nginx/conf.d
```

Then edit `/etc/nginx/nginx.conf` and add the following line before the end of the `http` block.

`include /etc/nginx/conf.d/*.conf;`

That will pull in any configuration files you place in the `/etc/nginx/conf.d/` directory, including the one you’re about to add. Open up `/etc/nginx/conf.d/passenger.conf` in a text editor and add the following two lines:

```
passenger_root /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.17;
passenger_ruby /usr/local/bin/ruby;
```

The value of the `passenger_root` variable should be the output of running `passenger-config --root`, while the second should be the path to your `ruby` binary (you can get it by running `which ruby`). Keeping this information in a separate configuration file from your main Nginx config makes it simpler to update when you upgrade Passenger and need to change the value of `passenger_root` to point to the new location.

All that’s needed now is to let Nginx know about the Rack application you want to run. Again, there are plenty of other articles which explain this process, particularly the relevant sections of the Passenger users guide (§3 for Ruby on Rails applications, §4 for generic Rack applications). Essentially, all that’s needed is to tell Nginx what the path to your Rack application is, and that it should use Passenger. The minimal configuration below should suffice—just put it (suitably modified to reflect the details of your site) somewhere Nginx will find it.

```
server {
    listen            80;
    server_name       myrackapp.net;
    root              /var/www/myrackapp.net/public;
    passenger_enabled on;
    rails_env         production;
}
```

Most of this is common to all Nginx `server` directives—the ones we’re interested in are `passenger_enabled` and `rails_env`. The first makes Nginx enable Passenger for that server, while the latter specifies the environment the relevant Rack application should be run in. Of course, there are many more options available than just these two—for details, consult §5 of the guide. Now you just need to start up the server.

`$ /etc/init.d/nginx start`

If you want to install a new version of Nginx, or a new version of Passenger, you’ll have to recompile Nginx. If it’s the latter, install the new Passenger gem as described above, and then compile Nginx. There are two gotchas to this process, and one handy trick: if you ask nicely, an `nginx` binary will tell you exactly what configure arguments were passed when compiling it.

```
$ cd $NGINX_SRC_DIR
$ nginx -V
nginx version: nginx/1.2.4
built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1)
TLS SNI support enabled
configure arguments: --prefix=/usr/local --sbin-path=/usr/local/sbin --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-pcre=/usr/local/src/pcre-8.31 --add-module=/usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.17/ext/nginx --with-http_ssl_module --with-http_gzip_static_module --with-http_sub_module
```

This is incredibly handy: you don’t need to write down or reconstruct the options you chose when you compiled Nginx, as the program remembers them all for you. Now, if you’ve just installed a new version of Passenger, you can’t just copy and paste the arguments verbatim—you have to change the value of the `add-module` flag to point to the location of the new Passenger gem.

Run `./configure` with the appropriate flags and then `make` as above. Since you’re recompiling, presumably your existing `nginx` program is running at this point, so you need to stop it before installing the new binary. If you’re reinstalling Passenger, you must also update your `passenger.conf` file with the new path to the library. Then stop `nginx`, install the new binary, and restart it.

```
$ /etc/init.d/nginx stop
$ make install
$ /etc/init.d/nginx start
```

As I recall from when I did just such a course, one of the most tedious parts was calculating truth tables for various complex logical expressions. Once one has grasped the basic concept, the rest is purely mechanical, and while it might be useful in the exam, beyond that… well, that’s what computers are for.

Hatt is a command-line program for doing just that: parsing expressions of the propositional calculus and printing their truth tables. Assuming you have the Haskell platform installed, just run the following to install Hatt.

`cabal install hatt`

Then just run `hatt` to enter the interactive mode. Here’s an example session.

```
> (A -> (B | ~C))
A B C | (A -> (B | ~C))
-----------------------
T T T | T
T T F | T
T F T | F
T F F | T
F T T | T
F T F | T
F F T | T
F F F | T
> pretty
Enabling pretty-printing.
> (P <-> (~Q & (R -> S)))
P Q R S | (P ↔ (¬Q ∧ (R → S)))
------------------------------
T T T T | F
T T T F | F
T T F T | F
T T F F | F
T F T T | T
T F T F | F
T F F T | T
T F F F | T
F T T T | T
F T T F | T
F T F T | T
F T F F | T
F F T T | F
F F T F | T
F F F T | F
F F F F | F
```

The `hatt` binary isn’t the only thing you get when you install the package. There’s also the `Data.Logic.Propositional` module, which allows you to plug this functionality—parsing, pretty-printing, truth tables, some simple checks like whether an expression is a tautology or a contradiction—into any Haskell program.

The API isn’t terribly extensive, but it’s enough to play around with, and should be really easy to extend if you wanted, for example, to add the facility to express proofs. I’m always amazed by how much one can accomplish with how little in Haskell: Hatt is under 300 lines of code, and a lot of that is the command-line program which is bloated by help text and the like.

This was, admittedly, a fairly trivial project, but it was a good way to improve my familiarity with Haskell, particularly Parsec. Picking a small, manageable project with precise goals proved to be a fun way to keep my hand in and pick up a few new skills, and I highly recommend it as a technique. If you’re interested in taking a look at the source code, the entire project is available on GitHub under the BSD license.

To recap, currying is a technique for transforming a function which accepts *n* parameters into a nest of partially applicable functions. Consider the function *f = λxyz.M*, which has three parameters, *x*, *y* and *z*. By currying, we obtain a new function *f′ = λx.(λy.(λz.M))*.

One simple example is currying an `add` function which accepts 2 parameters and returns the result of adding them together.

```
var add = function(a, b) {
    return a + b;
};

var curriedAdd = function(a) {
    return function(b) {
        return a + b;
    };
};
```
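Applying the curried version one argument at a time gives partial application for free (the definition is repeated so the snippet stands alone):

```
var curriedAdd = function(a) {
    return function(b) {
        return a + b;
    };
};

var addTwo = curriedAdd(2); // partially applied: a function awaiting `b`

addTwo(3);        // 5
curriedAdd(2)(3); // 5, supplying both arguments at once
```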

A function which returns the result of evaluating a quadratic expression demonstrates more clearly the ‘nesting’ of functions which currying produces.

```
var quadratic = function(a, b, c, x) {
    return a * x * x + b * x + c;
};

var curriedQuadratic = function(a) {
    return function(b) {
        return function(c) {
            return function(x) {
                return a * x * x + b * x + c;
            };
        };
    };
};
```

Given a pattern like this, the obvious question is how to generalise it. Ideally, we would write a `curry` function to automatically transform functions like `quadratic` into ones like `curriedQuadratic`. The simplest approach is to make curried functions always return a single wrapping function:

```
var naiveCurry = function(f) {
    var args = Array.prototype.slice.call(arguments, 1);

    return function() {
        var largs = Array.prototype.slice.call(arguments, 0);
        return f.apply(this, args.concat(largs));
    };
};
```
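What this does give us is one-step partial application (a sketch, repeating the definitions so it stands alone):

```
var add = function(a, b) {
    return a + b;
};

var naiveCurry = function(f) {
    var args = Array.prototype.slice.call(arguments, 1);
    return function() {
        var largs = Array.prototype.slice.call(arguments, 0);
        return f.apply(this, args.concat(largs));
    };
};

var add5 = naiveCurry(add, 5); // fix the first argument to 5

add5(10); // 15
```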

Clearly this is not true currying, except for functions of arity 2. We cannot use it to perform the transformation from `quadratic` to `curriedQuadratic`.

A cleverer approach would be to detect the arity of the function we wish to curry. To do this, we can use the length property of the function, which returns the number of named arguments the function accepts. `Math.tan.length` is 1, while `parseInt.length` is 2.

```
var curryByLength = function(f) {
    var arity = f.length,
        args  = Array.prototype.slice.call(arguments, 1),

        accumulator = function() {
            var largs = args;

            if (arguments.length > 0) {
                // We must be careful to copy the `args` array with `concat`
                // rather than mutate it; otherwise, executing curried
                // functions can have strange non-local effects on other
                // curried functions.
                largs = largs.concat(Array.prototype.slice.call(arguments, 0));
            }

            if (largs.length >= arity) {
                return f.apply(this, largs);
            } else {
                return curryByLength.apply(this, [f].concat(largs));
            }
        };

    return args.length >= arity ? accumulator() : accumulator;
};
```
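A quick check that this produces genuinely curried functions—arguments can be supplied one at a time or in groups (the definitions are repeated so the snippet stands alone):

```
var quadratic = function(a, b, c, x) {
    return a * x * x + b * x + c;
};

var curryByLength = function(f) {
    var arity = f.length,
        args = Array.prototype.slice.call(arguments, 1),
        accumulator = function() {
            var largs = args;
            if (arguments.length > 0) {
                largs = largs.concat(Array.prototype.slice.call(arguments, 0));
            }
            if (largs.length >= arity) {
                return f.apply(this, largs);
            } else {
                return curryByLength.apply(this, [f].concat(largs));
            }
        };
    return args.length >= arity ? accumulator() : accumulator;
};

var q = curryByLength(quadratic); // quadratic.length is 4

q(1)(2)(3)(4); // 27: 1*4*4 + 2*4 + 3
q(1, 2)(3, 4); // 27: arguments may also be supplied in groups
```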

However, the length property of any given JavaScript function can easily mislead. To begin with, we often find it useful to define functions with optional parameters.

```
var someFunction = function(a, flag) {
    if (flag) {
        // Some computation involving a
    } else {
        // Some other computation involving a
    }
};
```

Now consider a variadic function, like `Math.max`, which returns the largest number amongst its arguments. Despite the fact that it can be called with any number of arguments, including 0 and 1, it has a length property of 2. Consequently, our ‘smarter’ curry function will only work with `Math.max` up to a point. This will throw a type error, even though `Math.max` will accept three arguments quite happily:
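For instance (again repeating `curryByLength` so the example runs on its own):

```
var curryByLength = function(f) {
    var arity = f.length,
        args = Array.prototype.slice.call(arguments, 1),
        accumulator = function() {
            var largs = args;
            if (arguments.length > 0) {
                largs = largs.concat(Array.prototype.slice.call(arguments, 0));
            }
            if (largs.length >= arity) {
                return f.apply(this, largs);
            } else {
                return curryByLength.apply(this, [f].concat(largs));
            }
        };
    return args.length >= arity ? accumulator() : accumulator;
};

var max2 = curryByLength(Math.max); // Math.max.length is 2

max2(1)(7); // 7: two arguments complete the call
// max2(1)(7)(5) throws a TypeError: max2(1)(7) has already evaluated to the
// number 7, which is not a function that can accept a third argument.
```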

Currying `Math.max` limits its utility to discriminating between two numbers, not *n* numbers. We can easily think of similar examples—other variadic functions, functions with optional arguments, and similarly clever abuses of JavaScript’s dynamic arguments to create complex APIs. jQuery’s `bind` method could be considered an example of this: the event handler can be passed to the method as either the second or the third argument, depending on whether the user wishes to use the optional `eventData` parameter or not.

It is easy to see that there is no general way of resolving this issue: currying is essentially at odds with variadic functions and the ability to change the number of arguments a function accepts at runtime. However, one’s choices are not limited simply to the approaches discussed above; there are alternatives, even if they do not fully dispose of the problem of dynamic arity.

Firstly, one can simply leave things as they are, with the `curry` function having a known limitation around functions with dynamic arity. The burden is placed on the user to ensure they take care when currying.

Alternatively, one could make the arity an explicit parameter of the `curry` function. This differs from the implicit detection of the arity via the curried function’s length property, although the implementation is almost identical.

```
var curryWithExplicitArity = function(f, n) {
    var args = Array.prototype.slice.call(arguments, 2),

        accumulator = function() {
            var largs = args;

            if (arguments.length > 0) {
                largs = largs.concat(Array.prototype.slice.call(arguments, 0));
            }

            if (largs.length >= n) {
                return f.apply(this, largs);
            } else {
                // Recurse with the same function and explicit arity, rather
                // than `curryByLength`, which would rederive it from f.length.
                return curryWithExplicitArity.apply(this, [f, n].concat(largs));
            }
        };

    return args.length >= n ? accumulator() : accumulator;
};
```
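With the arity passed explicitly, even a variadic function like `Math.max` can be curried over as many arguments as the caller likes (a sketch, repeating the definition so it stands alone):

```
var curryWithExplicitArity = function(f, n) {
    var args = Array.prototype.slice.call(arguments, 2),
        accumulator = function() {
            var largs = args;
            if (arguments.length > 0) {
                largs = largs.concat(Array.prototype.slice.call(arguments, 0));
            }
            if (largs.length >= n) {
                return f.apply(this, largs);
            } else {
                return curryWithExplicitArity.apply(this, [f, n].concat(largs));
            }
        };
    return args.length >= n ? accumulator() : accumulator;
};

var max3 = curryWithExplicitArity(Math.max, 3); // treat Math.max as ternary

max3(1)(7)(5); // 7
```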

Finally, one could have entirely different `curry` functions for each arity. This has the benefit of being explicit, and while it doesn’t solve the problem of functions with dynamic arity, it does mean that one doesn’t have to specify the arity of the function one wishes to curry each time as an additional parameter. Instead of writing `curry(f, 3)`, one can simply write `curry3(f)`.

In fact, there is a way to combine these last two approaches, by writing a function which generates curry functions for any given arity.

```
var ncurry = function(n) {
    var _curry = function(f) {
        var args = Array.prototype.slice.call(arguments, 1);

        return function() {
            var largs = args.concat(Array.prototype.slice.call(arguments, 0));

            if (largs.length < n) {
                largs.unshift(f);
                return _curry.apply(null, largs);
            } else {
                return f.apply(null, largs);
            }
        };
    };

    return _curry;
};
```

For common use cases such as functions which accept 2 or 3 arguments, one can write simple aliases using `ncurry`, while one can always use `ncurry` ‘inline’ where necessary.

```
var curry  = ncurry(2),
    curry3 = ncurry(3);

// Presumably `f7` is a function which accepts 7 arguments
var fc7 = ncurry(7)(f7);
```

However, oftentimes something along the lines of `curryByLength` is preferable. If the library of functions one is working with consists of a set of functions with well-defined lists of parameters, then implicit rather than explicit conversion can be more convenient and more natural; it is, after all, rather nicer to be able to write `curry(f)` than `curry(f, n)` or even `ncurry(n)(f)`.

Ultimately, which approach one decides to take must be based on an understanding of the properties of the functions one is working with. A choice of currying function will then arise naturally—and after all, one can always use several. Both of these approaches are available in Udon, my library for functional programming in JavaScript, as `Udon.curry` and `Udon.ncurry`.

- Curried JavaScript functions by Svend Tofte;
- Partial Application in JavaScript by John Resig;
- Cleaner Callbacks With Partial Application by Dan Webb;
- Functional JavaScript, a library by Oliver Steele;
- Currying (from the comp.lang.functional FAQ);
- Currying Schönfinkelling (from the c2 wiki);
- Currying in Haskell (from the Haskell wiki);
- Curried Scheme by Edward Kmett.