OddThinking

A blog for odd things and odd thoughts.

Lamport on Buridan’s Principle

My “Blog Articles To Do” pile is full of blog ideas that started simple, but then grew to add another item, and then another item, until they became too long to write in one or two sessions; I got bored of the topic, and they died on the vine. Here I present a number of connected ideas, deliberately stilted, so I can get to the real point faster.

When Chaos Theory was making the news in the late 80’s, I was left confused.

Poor science reporting was part of the problem. I remember a TV special leaving me bewildered about the connections between the different aspects, until I watched it a second time and noticed the segue along the lines of “Talking about non-linear systems…” The two topics it introduced in one section were completely unrelated; trying to link them in my mind was giving me headaches.

Once you got past the “really good patterns that you could put on a t-shirt” (to quote Terry Pratchett), and the desperate attempts to make them at all relevant, what was really there?

Some surprising result about period doubling? Nice, but what predictions can you make with this? How is it going to change the world as much as they claimed?

The realisation that in some systems tiny changes in input could lead to big changes in output? Whoopee. That’s news? No, it is pretty bloody obvious, as anyone who has tried putting a ping-pong ball in the clown’s mouth at a fairground should know.


Leslie Lamport is a living legend of Computer Science. I’ve read Distributed Computing papers by him. I’ve used his software. I’ve quoted his aphorism: “A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.”

He’s a man who has had a great impact on Computer Science.


One way to make a successful fad is to take an old idea that people should already know (but which is perhaps too boring for them to learn about), add an irrelevant piece of (preferably proprietary) junk, bundle it with a ribbon (and perhaps some terminology and the claim that it is revolutionary), and Bob’s your aunt’s de facto.

Weight-loss supplements (to be consumed with a balanced diet and regular exercise) are one example.

Agile development (to be consumed with high-quality unit-testing, peer-review, iterative development, good communication with an interested customer and small teams sitting near each other) is another.


One day, an Economics student explained to me the different theories about what drove stockmarket fluctuations. If I recall correctly, he explained that people thought that either (a) there was a hugely complicated function taking lots of inputs that no-one could completely fathom, or (b) there was randomness, or (c) some combination of the two.

When I suggested that there was, of course, another option: a relatively simple repeated function which was sensitive to small fluctuations in a large number of inputs, he looked at me strangely, and patiently explained that that could not be the case, because of the complexity of the output.
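
To make that option concrete, here is a minimal sketch (in Python, my choice of language; nothing the student or I actually worked through) of just such a simple repeated function – the logistic map – whose output diverges wildly for a change of one part in a billion in its input:

# Logistic map: the repeated function x -> r * x * (1 - x), which is
# famously sensitive to its starting value when r is close to 4.
def iterate(x, r=3.9, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = iterate(0.400000000)  # two starting points differing by
b = iterate(0.400000001)  # one part in a billion...
print(a, b, abs(a - b))   # ...end up nowhere near each other after 50 steps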

Suddenly, I wondered if Chaos Theory was like these other fads. The ideas may seem old and obvious if you have the background, but to many people they may seem new. Maybe the excitement about Chaos Theory was a whole lot of people understanding the “old and obvious” at the same time?


I remember seeing a cartoon of a donkey in an early book for children about computers. The donkey was stuck between two stacks of hay, and couldn’t choose between them. I didn’t get it at the time.

Today, I learnt the donkey was Buridan’s Ass, an illustration of a paradox in philosophy. Apparently, the idea goes back a few thousand years: if the donkey were placed exactly halfway between two identical haystacks, it would be unable to choose between them, and would remain where it was, starving to death.

Apparently, this “paradox” is useful in discussing free will, but also in discussing electronics – particularly when trying to make analog circuits act digital, and finding that they are “meta-stable” and don’t damp down into one state quickly enough.


Leslie Lamport wrote an article about Buridan’s Principle.

In it, he discusses some of the real implications of the principle for software development, but I don’t want to discuss that.

In the first section, he formalises the principle, and shows that the assumptions lead to the unexpected result that there is always some position between the haystacks from which the ass cannot choose (within a given period of time) which stack to head towards.

He defines At(x) as the position of the ass at time t, given it was initially placed at position x between the two stacks. He then explains:

The ass is a physical mechanism subject to the laws of physics. Any such mechanism is continuous, so At(x) is a continuous function of x.

Unsurprisingly, given Lamport’s background, his paper’s typography handles such mathematical expressions far more beautifully than WordPress.
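
As I read it, the gist of the argument is this (my paraphrase in rough LaTeX, not a quote from the paper), with the bales at positions 0 and 1 and the usual assumption that an ass placed at a bale stays there:

% My paraphrase of the continuity argument, not Lamport's exact wording.
% $A_t(x)$ is the position at time $t$ of an ass initially placed at $x$.
\[
  A_t(0) = 0, \qquad A_t(1) = 1, \qquad A_t \text{ is continuous in } x .
\]
% By the intermediate value theorem, for every finite $t$ there is some
% starting position $x$ with
\[
  A_t(x) = \tfrac{1}{2},
\]
% i.e. an ass that, at time $t$, still sits exactly between the bales,
% having committed to neither.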

Lamport explains:

When first told of Buridan’s Principle, people usually find it unbelievable and propose mechanisms to circumvent it.

He examines some of these mechanisms, such as adding noise or quantum effects to try to break these physical implications, and make the function discontinuous.

However, I guess I am one of the people who find this paradox unbelievable, and I, too, am immodest enough to propose a mechanism. But I am happy to continue to work in the physics of a classical universe.

[Update: Andrew, in the comments, gives me some insight into why my following argument is bogus, and that Lamport was right all along. You’ll find my apology to Lamport in the comments too.]

I do so by rejecting the claim that physical mechanisms must exhibit continuity in all directions. I submit that physics requires At(x) to be a continuous function of time (t) – that is, after being placed, the ass may not be seen to teleport between any two positions. However, I do not believe that physics requires At(x) to be continuous in x.

If you move the animal slightly to the right before revealing the haystacks, and replay the scenario, its position at time t may be completely different (and discontinuous) from where it would have been if the animal had been left in the original starting position.

In effect, I am claiming “fractals” are a legitimate answer here to the paradox. More accurately, I am denying that physics requires the location of an object to be continuous when comparing different timelines branching at slightly different starting conditions.


A small part of me is thinking I sound like a crackpot to be taking on a well-known computer scientist (and PhD in Mathematics) talking about a subject he has written a paper about, while I just heard about it today, and I am doing so by waving my hands and saying “Chaos Theory”, “Butterflies flapping their wings can cause storms” and “Watch what happens when I move my ass discontinuously!”

Especially because last week, I smirked at a Fields Medalist for wild inaccuracies in his maths. He was dabbling with a mathematical model of the likelihoods of some sports results. It is an area I have been studying for a while, and his model contained the logical equivalent of assuming that “race-horses are perfect spheres running in a vacuum”.

Furthermore, I have made no effort to see if Lamport has changed his views in the intervening two decades (apart from noting that I am referring to an updated version of the paper) or seeing whether other people have weighed in in the meantime. I feel a little guilty about that laziness. Sorry.

Nonetheless, I stand by my crackpottery: I think that Lamport is wrong in justifying the use of an uncomfortable assumption with a physical claim.

(Nothing here invalidates the use of Buridan’s Ass as an analogy for challenging explanations of free will or for real-life effects in electrical circuits.)


Comments

  1. Isn’t Hawking radiation proof that At(x) is not continuous? I’m not really sure about the physics for this (and I’ve done even less research than you), but for any given x, a particle created at that point can either: (a) go back into the black hole and be destroyed by its anti-particle, (b) escape the black hole together with its anti-particle and be destroyed, or (c) escape and become Hawking radiation, in which case the value for At(x) is very much not ‘nil’.

  2. The dread Wikipedia apostrophe strikes again: Try this instead of exposing a naked apostrophe within the link. [Ed: Fixed. Cheers.]

  3. configurator,

    I think you are right, that shows At(x) is discontinuous with respect to x in real physics.

    It relies on “quantum”, which I was hoping to avoid – I wanted to show even in the classical physics they still used when this paper was written ( 🙂 ) the assumption was false.

    That said, I wonder if Buridan’s principle does hold even in this situation. If created in just the right position and velocity, will one of the pair of particles (or even both), continue to sit exactly upon the event horizon, unable to “decide” which way to go for a long period?

  4. I think you will find if you investigate such systems that no matter how chaotic and fractal your decision system is, as you approach the tipping point it takes longer to decide between the two options, sufficiently so that At(x) really is continuous. IIRC, all those pretty colors on the Mandelbrot set represent how long it takes for that starting point to jump away to infinity.
    An actual ass may never take a long enough time to decide between hay bales for it to have an effect on the ass, but the relevant neurons will still take longer to settle on a decision near the boundary point.
    Mathematically there’s a big difference between “really steep” and “discontinuous”.
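
    [Ed: A rough sketch of the escape-time idea Andrew is describing – my own toy Python, not anything from the paper or the comment – where points creeping up on the boundary of the Mandelbrot set take ever longer to “decide” to jump away:]

    def escape_time(c, limit=2000):
        # Iterate z -> z*z + c and count the steps until |z| exceeds 2
        # (the standard escape-time colouring of the Mandelbrot set).
        z = 0j
        for n in range(limit):
            z = z * z + c
            if abs(z) > 2:
                return n
        return limit  # still hasn't "decided" after `limit` steps

    # The boundary passes through c = 0.25; approaching it from outside,
    # the decision takes longer and longer.
    for c in (0.26, 0.251, 0.2501, 0.25001):
        print(c, escape_time(c))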

  5. Amazingly, I used TeX as an example of non-trivial bug-free software. Shortly afterwards I discussed Chaos Theory, just before your article.

    I have a feeling most people don’t understand Chaos Theory properly. It was certainly not obvious to me, and I don’t know that the consequences are obvious to people. Some people link Chaos and Quantum in their minds (clearly, you do not, as you’re talking about a classical system and invoking Chaos). The only similarity I see is that they’re both incredibly unintuitive. If chaos is so obvious, make an interesting chaotic system that isn’t already on Wikipedia. Specifically, something like the Lorenz attractor, which is still stable, but chaotic. I find it incredibly difficult to think of these systems, and to identify them in nature.

    I think the Ass thing is also a metaphor for decision-making, which he’s modelling using continuous systems. Maybe the physical Ass doesn’t fit into the model, but this doesn’t change the decision-making process for the metaphorical Ass in a “real” continuous system.

  6. Sunny,

    Is it incredibly difficult to think of Chaotic Systems? You are a gamer; how about a first-person shooter? I’ll keep it simple and suggest Unreal Tournament. A very slight difference in initial conditions – say, a teensy bit of extra lag in the network – causes your rocket to miss its target, which causes you to be killed by the other player, which causes you to spawn in a better position, which causes you to get a better weapon, which causes you to last longer, which causes several people to team up to take you out, which causes you to re-spawn in a different area again. A totally different (and discontinuous) outcome based on a tiny change to the starting point.

    No quantum. No imaginary numbers. Just a system sensitive to initial conditions in non-linear ways.

    Yes, the donkey is a metaphor, but in his paper he tries to show it applies to other real-life systems too (albeit, in a well-designed system, the danger point is infinitesimally small).

  7. Andrew,

    Maybe I have been playing with digital technology too long, and the analogue aspects are hidden from me, but I don’t see why a neuron should take longer to reach a decision near a tipping point.

    The expression “x > 3.0” takes a constant time on a digital computer, no matter what the value of (floating point) x. Why should it necessarily take longer for some values of x in the analogue world?

    (I am sure that this is an unfair dismissal, but the claim seems like saying that if a butterfly flaps its wings, we get a storm on Thursday; if it stays still, it will be fine on Thursday; but if it just unfurls its wings, we won’t find out about Thursday’s weather until the weekend.)

    Actually, this is getting me drawn into the wrong part of the argument. My claim is that the donkey’s decisions can be discontinuous across x. If it was a robotic donkey, we could easily have a discontinuous function which evaluates in a constant time. Why isn’t it fair to say that the hypothetical ass’s brain could have similar techniques?

  8. If you are playing with digital technology, your inputs have already been through an a/d converter. Each output bit of an a/d converter is powered by a tiny ass and two itty-bitty bales of hay.

    Neurons are controlled by threshold voltages triggering cascading movement of electrons… which take longer when the input is very close to the threshold.

    Lamport’s claim is that there is always a small range of x for which the decision takes more than some arbitrary length of time. However, the size of this range rapidly shrinks below your ability to keep still (for example) and it is rarely observed in practice unless you are an electronics designer (or in his example, trying to cross a railroad crossing safely and quickly).

    I now have the background music for Frogger in my head, and I blame you for that.
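
    [Ed: A toy illustration (entirely mine, in Python; not from Lamport or Andrew) of that “takes longer near the threshold” point: a simple bistable system settles into one of its two states, but the closer it starts to the balance point, the longer the settling takes:]

    def settle_time(x, dt=0.001, decided=0.9):
        # Bistable system dx/dt = x - x**3: stable states near +1 and -1,
        # with an unstable balance point at 0. Integrate (crudely, with
        # Euler steps) until the state is clearly on one side or the other.
        t = 0.0
        while abs(x) < decided:
            x += (x - x**3) * dt
            t += dt
        return t

    # Starting ever closer to the balance point, the "decision" takes
    # longer and longer, without bound.
    for x0 in (0.1, 0.01, 0.001, 0.0001):
        print(x0, round(settle_time(x0), 2))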

  9. I understand your point about the A/D converter and its bales of hay. I get that an A/D converter can have trouble stabilising…

    I understand such things would be rare to see in practice.

    I understand why you have Frogger music in your head, and why you blame me.

    But what I also (think I) understand is that in my same digital technology there is a vibrating crystal. The A/D converter is required to have an answer before the crystal reaches its leftmost point in its cycle. (I hope my simple model of the electronic internals is accurate enough not to invalidate my argument.) If the A/D converter hasn’t stabilised on the “right” solution by then, the computer will take whichever arbitrary solution it has managed to reach in that time. And because there is no “right” answer to which haystack the ass should choose, just so long as it chooses one, an arbitrary solution is a correct solution.

    My sock puppet Lamport then argues “Ah, but it will take a long time for the computer to decide what the arbitrary reading might be” and I argue back “No, it can’t. The results are always interpreted and passed on to the next part of the process as a 0 or a 1 by the time the crystal makes it back to its original position, and the crystal isn’t slowing down its vibrations just because some circuit is indecisive.”

  10. I must be missing something.

    As best I can tell, the ass will be unable to decide only if it is exactly half-way between the bales – on a mathematical line. That is to say, the location at which the ass is unable to decide is infinitely narrow. Now, aside from the fact that no infinitely narrow asses exist, there is also no way to place anything on an infinitely narrow spot in the real universe. Furthermore, the length of time it will need to decide should fall off rapidly with increasing distance from that mathematical line, so to cause it to take a finite but arbitrarily long time to decide, you would have to place it arbitrarily close to the exact half-way point.

    If all that is correct and I’m not missing something, then it would seem to me that this is a problem as insightful (in relation to the real world) as Zeno’s paradoxes. It faintly conjures images of spherical cows in vacuum…

    (As for the question of whether you can do this with particles at the boundary of a black hole in any meaningful way, I don’t think so, because in order for their position to be arbitrarily precisely defined, their velocity would have to be imprecise by a corresponding factor. So either way, the arrangement would soon be destroyed.)

  11. So, which value is it that is passed on when the crystal says “time’s up”? A 0, or a 1? Your sock puppet Lamport is right – you need another ass and two more bales of hay for that.

    You seem to be arguing that you can get around the problem of a system that may not be able to choose in any finite time, by making it choose in a specific finite time.

    An A/D converter that’s required to have an answer before the crystal reaches its leftmost point is the problem, not the solution.

  12. Andrew,

    I have to admit to being a little excited that my intuition here about the physics could be wrong, Lamport could have been right all along and I could actually be learning something. Thank you!

    But I am not there yet. 🙁

    Of course, my real answer to your question is “It doesn’t matter. That’s not the point.” But suppose I turned this around, and said “0”. If you can’t choose between chocolate and vanilla before I count to 3, you are getting vanilla. Does that help? I am guessing you will say ‘no’ (now I have a sock-puppet Andrew! Which is good, because at least I am getting an intuition of what the objection will be) because now there is the question of whether the answer arrived in time, which will somehow take a long time to answer.

    I think I can escape from this mire by counting individual electrons, but that’s using “quantum” again and feels like cheating.

    But I still haven’t understood, in the middle of my circuit with the oscillating crystal, how this problematic A/D conversion will manifest. I have written code that reads from a microphone. I have written code that says if amplitude >= 128. I never had to write an exception handler for AMPLITUDE_NOT_YET_DETERMINED. I get that the risk of the problem is microscopically small, but I am having trouble imagining what failure might look like here. (Or is “quantum” saving us?)

  13. Let’s say you have a door between the ass and bale #1. The door closes automatically in 3 minutes. Is At(x) still continuous?

  14. Aristotle,

    [I am sorry your comment got delayed in the spam queue. Not sure why it picked on you, unless the comment was sitting exactly on the boundary of spam and non-spam 🙂 ]

    Yes, your argument is correct; this is just a thought experiment about something that would never happen in real life (except that I dispute that the ass itself needs to be infinitely narrow; just its position).

    However Zeno’s paradoxes were paradoxes precisely because the mathematics at the time had a gap, which has since been corrected; they highlighted a flaw.

    Buridan’s paradox is (so I’m told) useful in highlighting a problem in some electronic circuits, in which the width of the vague-zone is non-trivial.

    (Allegedly, it is also useful in discussions about free-will, which strikes me as a paradoxical statement in itself.)

  15. configurator,

    According to Lamport, yes it is still continuous. He gives the example of a person driving towards a railway crossing, racing against a train, and claims there is always a situation where they can’t decide in time whether to stop, and so hit the train, and that this is true EVEN if there is a boom-gate. (I didn’t read this section with the same care as the first one.)

  16. Well, if the ass isn’t infinitely narrow, then it still needs to have an infinitely well defined outline before you can place it on a mathematical line with infinite precision.

    Zeno’s paradoxes are actually correct in all their premises. The only problem with their paradoxical-seeming statements is that they miss the fact that as distances approach zero, under constant velocities, so do the corresponding time periods. (It’s this approaching-zero/infinity behaviour that Buridan’s Ass reminded me of. Not an infinitely exact analogy, but close enough.)

    As for using the Ass in discussing free will: Wittgenstein would spin in his grave. (Which is to say, I think there’s a lot of confusion of the level of meaning of terms required to make the principle seem relevant.)

    I don’t think electronic circuits are a real-life example of the Ass either, although it’s an illustrative thought experiment. (You cannot keep a circuit in a metastable state forever, just for a practically significant time.)

  17. Julian,

    Leslie Lamport already knows about defaults:

    The most common suggestion is that the “ass” take some specific action when he finds himself unable to make a decision; for example, he might stop at the railroad crossing when he cannot decide if it is safe to cross. However, this merely pushes the decision back one level; the driver still must decide whether or not he can decide if it is safe to cross.

    So instead of an indecision exactly at the halfway point, you now have one at whatever x-value you consider to be the limit of “definitely made a choice”.

    He also explains why you don’t see AMPLITUDE_NOT_YET_DETERMINED:

    The problem is solved in modern computers by allowing enough time for deciding so the probability of not reaching a decision soon enough is much smaller than the probability of other types of failure.

    In other words, your computer probably crashes first. He does work for Microsoft, after all.

    I am having trouble imagining what failure might look like here.

    Your computer is designed to handle logic 1’s and 0’s. Logic half might be interpreted as 1 by some gates and 0 by others, resulting in unexpected program flows.

    And in section 6 he discusses (far from exhaustively) why quantum mechanics still won’t help you. I haven’t thought hard about whether I agree with him on that.
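
    [Ed: A toy model (entirely made up by me, in Python) of what Andrew’s “logic half” could look like: one metastable, half-way voltage, read by two gates whose effective thresholds differ ever so slightly, so different parts of the circuit can disagree about the same bit:]

    import random

    def gate_reads(voltage, threshold=0.5, tolerance=0.02):
        # Each real gate has its own slightly different effective threshold,
        # so a voltage stuck near "logic half" may be read either way.
        return voltage > threshold + random.uniform(-tolerance, tolerance)

    v = 0.5                         # a metastable, half-way voltage
    bit_for_branch = gate_reads(v)  # the gate feeding a branch decision...
    bit_for_store = gate_reads(v)   # ...and the gate latching the value
    print(bit_for_branch, bit_for_store)  # these can disagree: "unexpected program flow"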

  18. Andrew,

    The idea that a “logic half” might be passed around my computer internals, crashing my code, is a challenge to my world view, but I think I am getting some insight.

    I owe you many thanks for explaining this to me, and I hope you understand why I will now blame all unexpected software bugs on the Electrical Engineers.

    I also owe Leslie Lamport an apology; it seems there wasn’t a small hole in his logic but a small (and alas, not infinitesimal) hole in my understanding of physics. Sorry, Dr. Lamport. I hope you won’t punish me by causing the failure of a computer I have never heard of.

  19. The main conundrum may be solved, but I’m still going to raise the point of Chaos Theory. You mention games, specifically Unreal Tournament. The problem with this is that the agents in the game are non-deterministic (unless you take the player out of the equation and only have deterministic bots, in which case I’m unsure you’ll see chaos). The system itself, I suspect, is not chaotic: even though the changes may be quite significant, overall I bet you’ll see almost identical outcomes out of (reasonable and deterministic) bots, even with slightly varying initial conditions.

    The other thing is that the system is quite complicated. Nothing like Mandelbrot’s set, and it yields relatively little “chaos”. With Mandelbrot, you could generate literally an infinite number of unique and interesting pictures. Variations on a theme, if you will. I cannot come up with a simple formula for doing that.


Web Mentions

  1. OddThinking » Some of My Best Friends are Donkeys