## Programming in a finite world – The parable of limited resources

### November 3, 2010

I am currently running a tutorial as part of a Programming 101 course for physics students. The primary objective of the course is for the students to gain the ability to analyze experimental data and perform numerical calculations, and we have only three lessons for the actual C stuff. I’m summarizing the lectures in the tutorials and trying to boil everything down as much as possible (e.g. the only datatypes I introduce are `int` and `double`), because it’s the first contact with programming for most freshmen.

They’re mostly doing well, but some are having trouble because of their mathematical background. They feel uneasy about statements like

`b = b + 1;`

because they read this as a truth statement instead of an assignment. How can one explain the difference in a didactically constructive manner? I’ll try this:

#### Mathematics is the same as programming…

The common ground of mathematics and programming is that we’re attaching names to things and studying the behavior of these things. For example, if I write in C:

`double f(double x) { return x * x; }`

Then I can refer to the function that squares its input by the name “f”. The equivalent procedure in mathematics is to write down:

f: ℝ → ℝ, x ↦ x²

(Read: f is a function mapping from real numbers to real numbers, which maps x to x².) Actually, the mathematical function f has already existed for an infinite time before I came along and attached a name to it. It existed inside the set of all ℝ → ℝ functions, and more generally in the set of all functions, and in the set of all objects. The function x ↦ x³ also exists, regardless of whether I give it a name or not.

#### …yet the previous heading was a lie

After the main similarity of mathematics and programming comes the main difference: the limited resources of a computer. No matter how large my memory is, it will never be large enough to store all the digits of π, even though π can be defined in finite time using mathematical constructions. These constraints change how everything works: We cannot have everything available all the time; we need to create and give names to what we need, when we need it. This applies to everything from variable declarations and functions to libraries.

The next difference is that the type of named objects changes. Mathematicians give names to mathematical objects which exist only inside their theory, while a programmer operates on actual hardware and gives names to existing computer resources. While both languages, C and mathematics, allow the statement `x = 5` to be made, the statement has a different meaning in each language: The mathematical expression `x = 5` attaches the name `x` to the mathematical object “real number 5”, which is the same as the mathematical object “integer number 5”. The C statement `x = 5`, on the other hand, is only valid when a variable `x` exists, because `x` is not the name of the mathematical object “number 5”, but the name of the physical resource “memory block no. 23421337 (or whatever)”, to which the system has granted you access.

While a mathematician may define a function `f` which works on all sets that define a multiplication operation, a programmer has to state the datatypes of each function explicitly. (All Pythonistas out there, please remember that this is about C programming.) In his world, `f` is not the name of a function that squares everything to which it is applied; it is the name of a chunk of code that calculates the square of an integer number (for instance). If he also wants a function that handles double-precision numbers, he has to take care of implementing that function, too.

#### Mathematics is timeless

The limited-resource theme also explains immediately how the time axis comes into play, i.e. the order of execution. Mathematical objects exist as results of fully logical derivations, which are by their nature independent from their context and therefore from time. If there exists a root of the Riemann zeta function with non-half real part, its existence is not dependent on our knowledge of its exact value or even our knowledge of its existence.

Logical derivations are so versatile that they bring into existence infinitely many mathematical objects. So many that we cannot count them. The limited resources of computer technology do not allow us to replicate this structure in programming. And we do not need to: Those objects which are not relevant for our computations need not be created. This generates the time axis: Objects are brought to life when we need them.

For example, let the computer know about a `double` variable `a` and the function `f` as defined above. Now it is given the C statement `a = f(5)`. At this point in time, the computer decides that the object `f(5)` (precisely: the physical representation of the mathematical object which is equal to `f(5)`) has to come to life inside the memory block represented by the name `a`. To achieve this, it has to call the function `f`. If you like, you can imagine that some chunk of memory is allocated, and the name `x` is attached to the first memory block, which is filled with `5`. Also, the name “return value” is attached to the second memory block, and the function is instructed that it shall put the result of its computation in there.

`return x * x;`

The function starts by retrieving the value of `5` from the memory block called `x`:

`return 5 * 5;`

It calculates the result of the multiplication:

`return 25;`

and finishes by writing `25` into the memory block called “return value”, from which it is finally copied into the memory block called `a`.

By the way, this thought experiment also explains variable scopes: The variable `x` is only available inside the function `f`, of which it is a formal argument, because the variable is just the name of a memory block. The memory block might be reused for some different calculation after the function has been evaluated, so it is not safe to refer to the memory cell by the old name. If you like, think of the memory cell as an apartment: `f` pays the bills for `x`, so when `f` dies, `x` is forced to move out, and a new renter may move into the memory cell at any time.

#### Beyond single variables and simple expressions

After you know how to explain variables and functions, all other features of the C language can either be derived similarly, or are (from this perspective) mere convenience features. For example, control structures express nonlinear and conditional time flows (where the time is defined as the execution time, i.e. the time when the computer is occupied with evaluating statements). Arrays are a way to give a name to multiple similar memory blocks at once. And so on.

If you found this parable helpful to get started with C programming, feel free to leave a note below.

P.S. Be sure to take a look at the comments to this article, for some nice additions, and other parables that can be useful for beginners.

November 3, 2010 at 22:27

Hmm… I’m not sure I’d describe it like that.

The thing is, the equivalent of “x = 5” in maths is actually “x == 5” in C. This is why languages like Pascal use “=” for comparison and “:=” for assignment, which is much closer to mathematical notation (where “:=” would be read as “is defined as”). It’s really a notational thing.

Mathematically (and this is roughly how you’d implement global variables in a functional programming language like Haskell or ML), the statement “x = 5” in C is a function on the state space of the computer, taking the existing state (where x is *something*) to a new state where the value of x is 5. You can, in this manner, read a program as a series of composed functions taking the program through a state space.

Of course, when you introduce while loops, this gets trippy (and tricky).

November 3, 2010 at 22:32

If you’re dealing with physicists, though, another way of looking at it is algorithmically. Consider a computer program as a series of instructions, such as you might have for an experiment. The “person” who will read these instructions is amazingly fast but completely unintelligent, so you need to spell out every little step and write it in a very particular way.

Variables are simply labels for the data. You could imagine them indicating places in a filing cabinet, or locations on the table, or things labelled with post-it notes. “x = 5” would be “put the value 5 into the box/folder called ‘x'”, or “transfer the ‘x’ post-it to something indicating the value ‘5’”.

This, of course, is quite close to reality.

November 3, 2010 at 22:36

I remember the box parable from my first programming course (in Turbo Pascal). It is very suitable for beginners, but the limited-resources parable has its 15 minutes of fame when a student asks “But why does that not work like in maths?” because it explains not only the “how”, but also the “why” of the difference.

November 3, 2010 at 23:27

While reading your post one thing that popped to mind is why create a function to handle squaring an integer and squaring a real (double) when you can just create one to handle the real number which includes integers as a subset.

This is an important question, and the implications should definitely be covered in an introductory course like this, at least on a basic level. It even ties into your limited-resources theme. Real numbers on a computer are approximations. A computation like 0.1 + 0.2 may not actually end up being a perfect 0.3. The implications (e.g. with regard to comparisons) should not be overlooked. Precision matters.

November 4, 2010 at 08:36

Exactly. I forgot this in this write-up, but I think I mentioned it in class.

November 3, 2010 at 23:58

This is a great way of looking at it. It will require some time to explain and students need to be able to look up e.g. this post to read again the explanation. It is great for getting them into an exact way of thinking.

November 4, 2010 at 09:07

Hello Mr. Majewsky,

Just wanted to drop by and say “thank you” — your article is a very good starting point for beginners.

And yet, it shows how awkward a programming language like C is for newbies, for it forces you to learn concepts like memory access, pointers and different types of data even in the first levels of gameplay. But it is for the better, as it teaches you to think the way a computer does. And to be able to control it, as in any other part of mathematical theory, you’ll have to learn the axioms first; after all, they are the foundation on which all of information theory is built.

I find it harder to explain to mathematicians precisely because of the differences in thinking, which you have also highlighted. I always tend to explain that writing programs is NOT about defining mathematical concepts; it is about approximating them to a certain degree. You have done very well in outlining the finite-time and finite-resource constraints.

I always go further, hinting at how different number types are stored in memory, and use the analogy of a digital calculator. It has a limited range of numbers, and that range is guaranteed to hold just enough for any *normal programming usage*. This is fundamentally different from the abstract objects on which even the simplest mathematical concepts are built (like the natural numbers, as defined by Giuseppe Peano here: http://en.wikipedia.org/wiki/Peano_axioms). The fundamental difference always comes down to the fact that, in computing, there is no such concept as “infinity”, as it is present nowhere in the world we live in. I’d suggest it will never be possible to build something that can support the notion of “infinity” naturally, given the finite amount of time and resources available.

The first thing to mention to mathematicians is usually that the definition of a computer is “a machine that can calculate and can memorize a finite set of data.” Always think of it as a person who has a calculator at hand and a big notebook in which they can write numbers down. It takes time to look at the notebook, figure out the algorithm, then look at different parts of the book, figure out the input, then perform the calculations using the calculator, then write down the result in the same notebook. It’s a huge oversimplification, but it tends to work.

Best regards,

Hristo

November 6, 2010 at 13:42

Can we not just say that assignments are preceded (in the imagination) by the word “let” or “set”? Then it becomes quite clear. Well, that is what I was taught 46 years ago and it has worked for me ever since.

My degree was in maths and physics, and there was no Programming 101 and no C back then, although there was a language called Basic in which the “let” keyword was optional. I was also fortunate in learning an assembly language on a rather small machine soon after Fortran. When you write “ld b; inc; st b;” rather than “b = b + 1;”, it is quite clear what is happening, in a practical sense.

BTW, how do you explain to your 101 guys “if (b = c) b = b + 1;” and its odd behavior in C? That is a real trap for young players.

November 11, 2010 at 00:22

We encourage them to compile with -Wall, so they get a warning when they try this.

May 2, 2011 at 22:18

Very interesting post! 🙂