This article was originally published on the Programlabbet Blog
2013-12-08. The text has been slightly altered in order to
make more sense in a text-centric setting.
As you may already know, functional programming has its roots in
a formal system called the Lambda calculus (or λ-calculus),
which is a model of computation based on the ideas of function
abstraction and application through substitution of bound variables.
(Don't worry, you'll get the idea soon enough.)
This system was created by the American mathematician and logician
Alonzo Church as a framework for answering the Entscheidungsproblem
(German for "decision problem") posed by the German mathematician
David Hilbert in 1928. This problem essentially asks whether there
exists an effective procedure that, given a statement in a formal
logic, can decide whether that statement is provable.
Sidenote: λ-calculus was one of two frameworks used to answer the
Entscheidungsproblem, the other being the Turing machine. That model
was created by the British mathematician and cryptanalyst Alan
Turing, and it is also the theoretical model on which our computers
are built. Church and Turing gave their answers independently of
each other, but it was later proven that the two models are
equivalent. That is, you can build a Turing machine in λ-calculus
and express λ-calculus on a Turing machine.
So what was the answer to the Entscheidungsproblem?
Well, "no". Hilbert is said to have been pretty bummed out by this.
There are numerous introductions to λ-calculus (e.g. {1}, {2},
and {3}), but most of them appear to be geared towards the
theoretical crowd. In this (very brief) introduction I'll only focus
on the parts that are the most relevant to you as a programmer.
In order to have a foot in the real world, I'll include examples in
Lisp as well. (Mind you, most of the stuff I cover here can be
expressed in any language where functions are first-class values.
If your favorite language doesn't have that, you're fresh out of
luck, sonny.)
## The λ
The fundamental building block is the lambda (λ), which is an
operator that yields a functional object, a closure, that can be
applied to a value.
Let's create a function that adds 1 to its parameter x. Not terribly
exciting, but it will serve us well as an example:
λ x . 1 + x
Easy enough. The parameter resides to the right of the 'λ' and to
the left of the period. To the right of the period we have the
expression where we may refer to the parameter, a function body.
You can also view λ-expressions as templates for transforming
expressions.
This is expressed in Lisp as:
(lambda (x) (+ 1 x))
; => #<function (lambda (x) (+ 1 x))>
## Applying
Now, there are a number of things we can do with this expression.
For one, we can pass it around like any other value. Another is
to put it to work by applying it to a value:
(λ x . 1 + x) 2 -β-> 1 + 2
What we have done here is a computational step, a β-reduction,
which substitutes the value 2 for the bound variable x in the body
of the λ-expression and returns the resulting expression, 1 + 2.
Note that we got back an expression and not the value 3. This is
because a β-reduction is only one step, much like an instruction of
a conventional computer. To get the expected answer, we need to
perform another β-reduction:
1 + 2 -β-> 3
Excellent! This reduction business doesn't have a direct equivalent
in Lisp:
((lambda (x) (+ 1 x)) 2)
; => 3
The reason for this is that Lisp employs a different evaluation
strategy. (That, and doing a β-reduction for every step is tedious,
even though there are languages that explicitly do that.)
## Parameters
You may have noticed that we only have one parameter to work with.
How can we get more parameters? Easy, by currying:
λ x y . x + y == λ x . (λ y . x + y)
What happened here? Closure happened here. When applying the outer
lambda, it will reduce to a lambda where the x is bound to the given
value. Applying that lambda will give us a complete expression:
(λ x y . x + y) 1 2 -β-> (λ y . 1 + y) 2 -β-> 1 + 2 -β-> 3
Or as we can say in Lisp:
(((lambda (x) (lambda (y) (+ x y))) 1) 2)
; => 3
Since functions introduce a lexical context (a block where variables
live, more or less), the inner function captures x, which is thus
still known once the inner function is applied. The reason for the
convoluted way of passing the parameters is that Lisp doesn't
actually curry its functions.
## Partial application
What happens if we omit the 2 from the previous example?
(λ x y . x + y) 1 -β-> λ y . 1 + y
((lambda (x) (lambda (y) (+ x y))) 1)
; => #<closure (lambda (y) (+ 1 y))>
We got back a function that adds 1 to a given parameter! The curried
function λ x y . x + y can, in other words, be partially applied;
that is, the function won't be evaluated in full until we have
supplied all of its parameters. All the intermediary steps can be
passed on to be used elsewhere. Handy!
But! As I said, functions in Lisp are not curried, and thus can't
be partially applied, unless you curry them manually yourself.
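Since that begs the question, here is what manual currying might look
like in a Scheme-flavored Lisp. Mind that `curry2`, `add`, and `add1`
are names I've made up for this sketch, not standard functions:

```lisp
;; Turn a two-argument function into a chain of one-argument
;; functions, mirroring λ x y . x + y == λ x . (λ y . x + y).
(define (curry2 f)
  (lambda (x)
    (lambda (y)
      (f x y))))

(define add (curry2 +))    ; a curried version of +
((add 1) 2)                ; => 3

(define add1 (add 1))      ; partial application: an add-1 function
(add1 41)                  ; => 42
```

Some Lisps ship helpers in this spirit (e.g. `cut` from Scheme's
SRFI 26), but the hand-rolled version above works anywhere lambdas
are first-class values.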
## Summary
Now this has been a very, very brief overview of λ-calculus,
though I hope that it has given you some insight into what this
whole thing is about.
You may think this is a bit silly, playing around with tiny functions.
What's the gain in doing that? Depends on who you ask, I'd say.
If you'd ask me, I'd say that there is tremendous power in building
functions much like you'd build your strings or numbers.
Metaprogramming is a breeze, since you can have your code treated as
data and your data treated as code. And besides, why should you write
the programs when the programs can write themselves for you?
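To make the code-as-data point concrete, here is a minimal sketch in
a Scheme-flavored Lisp. Note that the exact `eval` interface varies
between dialects; R7RS Scheme wants an explicit environment, while
some Lisps accept a single argument:

```lisp
;; A program is just a list, so we can build it like data...
(define expr (list '+ 1 2))         ; the list (+ 1 2)
;; ...rewrite it like data...
(define doubled (list '* 2 expr))   ; the list (* 2 (+ 1 2))
;; ...and then run it as code.
(eval doubled (interaction-environment))  ; => 6
```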
{1}: Introduction to Lambda Calculus
{2}: A Tutorial Introduction to the Lambda Calculus
{3}: A short introduction to the lambda calculus