# -8

[Crossposted]

The medieval philosopher Buridan reportedly constructed a thought experiment to support his view that human behavior was determined rather than “free”—hence rational agents couldn’t choose between two equally good alternatives. In the Buridan’s Ass Paradox, an ass finds itself between two equal, equidistant bales of hay, noticed simultaneously; the bales’ distance and size are the only variables influencing the ass’s behavior. Under these idealized conditions, the ass must starve, its predicament indistinguishable from a physical object suspended between opposite forces, such as a planet that neither falls into the sun nor escapes into outer space. (Since the ass served Buridan as metaphor for the human agent, in what follows, I speak of “ass” and “agent” interchangeably.)

Computer scientist Leslie Lamport formalized the paradox as “Buridan’s Principle,” which states that the ass will starve if it is situated in a range of possibilities that includes midpoints where two opposing forces are equal and it must choose in a sufficiently short time span. We assume, based on a principle of physical continuity, that the larger one bale of hay is compared to the other, the faster the ass will be able to decide. Since this is true on the left and on the right, at the midpoint, where the bales are equal, symmetry requires an infinite decision time. Conclusion: within some range of bale comparisons, the ass will require decision time greater than a given bounded time interval. (For rigorous treatment, see Buridan’s Principle (1984).)
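The principle can be illustrated with a toy model (my own sketch, not Lamport’s formalism): a continuous decision process that accumulates evidence in proportion to the imbalance between the bales until the evidence crosses a fixed threshold. All constants are illustrative assumptions.

```python
def decision_time(difference, threshold=1.0, dt=0.001, max_time=1_000.0):
    """Return the time needed for accumulated evidence to reach the
    threshold, or None if it never does within max_time (i.e. the
    ass is still 'deciding')."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        if t >= max_time:
            return None              # effectively unbounded decision time
        evidence += difference * dt  # evidence grows with the imbalance
        t += dt
    return t

# The closer the bales are to equal, the longer the decision takes:
# decision_time(0.5) < decision_time(0.1) < decision_time(0.01),
# and at the exact midpoint the process never terminates:
# decision_time(0.0) is None.
```

The continuity assumption is what forces the divergence: any continuous model in which decision time shrinks as the imbalance grows must blow up at the symmetric midpoint.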

Buridan’s Principle is counterintuitive, as Lamport discovered when he first tried to publish. Among the objections to Buridan’s Principle summarized by Lamport, the main objection provides an insight into the source of the mind-projection fallacy, which treats probability as a feature of the world. The most common objection is that when the agent can’t decide, it may use a default metarule. Lamport points out that this substitutes another decision subject to the same limits: the agent must decide that it can’t decide. My point differs from Lamport’s. He proves that binary decisions in the face of continuous inputs are unavoidable and that, with minimal assumptions, they preclude deciding in bounded time; I draw a stronger conclusion: no decision is substitutable when you adhere strictly to the problem’s conditions specifying that the agent be equally balanced between the options. Any inclination to substitute a different decision is a bias toward making the decision that the substitute decision entails. In the simplest variant, the ass may use the rule “turn left when you can’t decide,” potentially entrapping it in limbo while deciding whether it can’t decide. If the ass has a metarule resolving conflicts in favor of the left, it has an extraneous bias.

Lamport’s analysis discerns a kind of physical law; mine elucidates the origins of the mind-projection fallacy. What’s psychologically telling is that the most common metarule is to decide at random. But if by random we mean only apparently random, the strategy still doesn’t free the ass from its straitjacket. If it flips a coin, an agent is, in fact, biased toward whatever the coin will dictate; bias, here, means an inclination to use means causally connected with a certain outcome, and the coin flip’s apparent randomness is due only to our ignorance of microconditions. Truly random responding would allow the agent to circumvent the paradox’s conditions. The theory that the agent might use a random strategy expresses the intuition that the agent could turn either way. It seems a route by which the opposites of functioning according to physical law and acting “freely” in perceived self-interest are reconciled.

This false reconciliation comes through confusing two kinds of symmetry: the epistemic symmetry of “chance” events and the dynamic symmetry in the Buridan’s ass paradox. If you flip a coin, the symmetry of the coin (along with your lack of control over the flip) is what makes your reasons for preferring heads and tails equivalent, justifying assigning each the same probability. We encounter another symmetry with Buridan’s ass, where we also have the same reason to think the ass will turn in either direction. Since the intuition of “free will” precludes impossible decisions, we construe our epistemic uncertainty as describing a decision that’s possible but inherently uncertain.

When we conceive of the ass as a purely physical process subject to two opposite forces (which, of course, it is), it’s obvious that the ass can be “stuck.” What miscues intuition is that the ass need not be confined to one decision rule. But if by hypothesis it is confined to one rule, the rule may preclude decision. This hypothetical is made relevant by the necessity of there being some ultimate decision rule.

The intuitive physics of an agent that can’t get stuck entails: a) two equal forces act on an object, producing an equilibrium; b) without breaking the equilibrium, an additional natural law is added specifying that the ass will turn. Rather than conclude this is impossible, intuition “resolves” the contradiction by conceiving that the ass will go in each direction half the time: the probability of either course is deemed .5. Confusion of kinds of symmetry, fueled by the intuition of free will, makes Buridan’s Principle counterintuitive and objective probabilities intuitive.

How do we know that reality can’t be like this intuitive physics? We know because realizing (a) and (b) would mean that the physical forces involved don’t vary continuously. It would make an exception, a kind of singularity, of the midpoint.



Moved to Discussion.

How many of the commenters have actually read Lamport's paper? From what they say, my impression is, not many.

[anonymous]:

The medieval philosopher Buridan reportedly constructed a thought experiment to support his view that human behavior was determined rather than “free”—hence rational agents couldn’t choose between two equally good alternatives.

Just an historical nitpick: this contradicts the wiki article on Buridan's ass, which reports that it was used to satirize Buridan's moral deterministic philosophy, but that the paradox itself (including the ass) goes back further than Aristotle (i.e. Aristotle mentions it in such a way that the reader is expected to be familiar with it).

The problem with this post is that it conflates two issues. One is Buridan's principle about the impossibility of mapping a continuous parameter to a discrete decision in a bounded amount of time. The other is the issue of breaking the symmetry.

The problem with the Problem is that it simultaneously assumes a high cost of thinking (gradual starvation) and an agent that completely ignores the cost of thinking. An agent who does not ignore this cost would solve the Problem as Vaniver says.

The Problem only assumes the universe is continuous. If you move a particle by a sufficiently small amount, you can guarantee an arbitrarily small change at any finite time in the future. Thanks to the butterfly effect, it has to be an absurdly tiny amount, but it's only necessary that it exists.
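A standard toy chaotic system, the logistic map, illustrates both halves of this claim; the map and all constants below are illustrative stand-ins, not anything from Lamport's paper.

```python
def divergence_after(steps, delta0, r=4.0, x0=0.3):
    """Distance between two logistic-map trajectories that start
    delta0 apart, after `steps` iterations. The logistic map at r=4
    is a standard toy chaotic system, used here as a stand-in for
    continuous physics."""
    a, b = x0, x0 + delta0
    for _ in range(steps):
        a, b = r * a * (1 - a), r * b * (1 - b)
    return abs(a - b)

# Continuity: for a fixed horizon, each step amplifies the difference
# by at most a factor of 4, so a small enough delta0 keeps the final
# difference arbitrarily small. The butterfly effect only means the
# required delta0 shrinks exponentially as the horizon grows.
```

So the perturbation needed to steer a far-future outcome exists, even though it is absurdly tiny.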

Also, it assumes that the Ass will eventually die, but that's really more for effect. The point is that it can't make the decision in bounded time.

Also, it assumes that the Ass will eventually die,

I'm not convinced this is actually true for the same reason of continuity.

That it could survive is not guaranteed by continuity. It is possible in real life, but it takes more than continuity to prove that.

I know. I was thinking that it might be possible for the ass to guarantee it won't die by having an interrupt based on how hungry it is.

If you could do an interrupt, you could just make it go to the left if it takes too long to decide.

You can make it so that it gets more left-biased as it gets hungrier, but this just means that the equilibrium has it slowly moving to the right, thereby increasing the pull to the right enough to cancel out the increased pull to the left from hunger.

My idea is the following:

As it stands the ass will (after an indefinite amount of time) wind up in one of three positions:
a) eating from the left bale,
b) eating from the right bale, or
c) standing between the bales until it starves.

I'm trying to arrange it so that it always winds up in one of (a) or (b).

If it can pick (a) or (b) then it can also pick something somewhere in between. The only way to get around this is to somehow define (a) and (b) so that they border on each other.

For example, if you talk about which bale it eats first, and it only needs to eat some of it, then you could have something where it walks to the right bale, is about to take a bite, but then changes its mind and goes to the left bale. If you change it by epsilon, it takes an epsilon-sized bite, and eats from the right bale first instead of the left bale.

If it can pick (a) or (b) then it can also pick something somewhere in between. The only way to get around this is to somehow define (a) and (b) so that they border on each other.

Only if you assume bounded time. A ball unstably balanced on a one-dimensional hill will after an indefinite amount of time fall to one side or the other, even though the two equilibria aren't next to each other.
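Under a linearized model of the hilltop (an assumption for illustration: displacement obeys x'' = x, so x(t) = x0·cosh(t) starting from rest), the fall time grows without bound as the starting offset shrinks:

```python
import math

def fall_time(x0, fallen_at=1.0):
    """Time for a ball starting at rest at offset x0 from the hilltop
    to reach |x| = fallen_at, under the linearized dynamics x'' = x,
    where x(t) = x0 * cosh(t)."""
    if x0 == 0:
        return math.inf          # balanced exactly on top: it never falls
    return math.acosh(fallen_at / abs(x0))

# Halving the offset adds roughly ln(2) to the fall time, so the time
# grows without bound as x0 -> 0, yet any nonzero offset falls eventually.
```

This is why "indefinite amount of time" matters: the two outcomes are separated not by adjacent states but by a boundary point whose fall time is infinite.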

No. It almost certainly will fall eventually, but there is at least one possibility where it never does.

Just because you can think of a possibility does not make it possible. Even in classical mechanics, finite temperature will cause it to fall in very finite time. With quantum mechanics, quantum zero-point fluctuations will cause it to fall in finite time even at zero temperature.

Finite temperature will cause it to fall in a finite time if you start with it balanced perfectly. You just need to tilt it a little to counter that. This is an argument by continuity, not an argument by symmetry.

There's some set of starting positions that result in it falling to the left, and another set that result in it falling to the right. If you start it at a boundary point and it falls right after time t, that means you can get a starting point arbitrarily close to it that will eventually fall left, and so at time t is nowhere near having fallen right. That means that physics isn't being continuous.

Finite temperature will cause it to fall in a finite time if you start with it balanced perfectly. You just need to tilt it a little to counter that.

You cannot counter it by tilting it because the thermal perturbation is random. At one moment it is being pushed to the left, at another to the right.

If you start it on a boundary point and it falls right after time t, that means that you can get a point arbitrarily close to it that will eventually fall left, so is clearly nowhere near that at time t. That means that physics isn't being continuous.

To be clear, I don't want to argue against the hypothesis. As long as you are NOT talking about the real world, which is what I take "physics" to mean, you can talk about continuity and balance points, and arbitrarily long times to fall one way or the other. The point is that in the real world, any real world experiment analyzed in detail will have "noise" which causes it to fall sometimes one way and sometimes the other in finite time when placed at its balance point. That noise is usually dominated by thermal fluctuations, but even in the absence of thermal fluctuations, there are still quantum fluctuations which behave very much like thermal noise, in very many ways as though you can't get the temperature below some limit.

In principle, you can build a system where the thing is cooled enough, and where it is designed so that the quantum fluctuations are small enough, that you can have a relatively long time for it to fall one way or the other off its balance point. That is, the amount of time it takes to be pushed by quantum noise is large on human or intuitive scales. However, you can't make it arbitrarily large without building an arbitrarily large system. So if you are concerned about the difference between "really big" and "infinite," you don't get to claim a physical system balanced on an unstable equilibrium point can have an infinitely long time to fall.

Of course this doesn't mean that in a non-physical "world" with no quantum and rigid objects (another approximation that can't be realized in the real world) you couldn't do it. So the math is safe, it's just not physics.

You cannot counter it by tilting it because the thermal perturbation is random.

Randomness is an attribute of the map, not the territory. You do not know how much to tilt it, but there is still a correct position.

This isn't something you can feasibly do in real life. It's not hard to make it absurdly difficult to find the point at which you have to position the needle. It's just that there is a position.

You do not know how much to tilt it, but there is still a correct position.

The amount of tilt it takes is changing in time. Thermal and/or quantum fluctuations continue to start the thing falling in one direction or another, and you have to keep seeing how it's going and move whatever you're balancing it on to catch it and stop it from falling.

You have created a dynamic equilibrium instead of a static one by using active feedback based on watching what the random and thermal noise are doing. You have not created a situation where an unstable equilibrium takes an arbitrarily long time to be lost. You have invented active feedback to modify the overall system into a stable equilibrium.

If you tilt it the correct way, it will not just stand there. It will fall almost perfectly into every breeze that comes its way. Almost, because it has to be off a little so that it will fall almost perfectly into the next breeze.

You can't keep it from ever tilting, but that doesn't mean that you can't keep it from falling completely.

Your argument now boils down to "the physical world is not both continuous and deterministic".

With probability zero.

Falling to the left with t==1 second also has probability zero. Remaining balanced for a period of time between a googol and 3^^^3 times the current age of the universe, then falling left, has positive probability.

There is no upper bound to the amount of time that the ball can remain balanced in a continuous deterministic universe.

Sorry, I'm not sure I understand what you mean. What particle should we move to change the fact that the ass will eventually get hungry and choose to walk forward towards one of the piles at semi-random? It seems to me like you can move a particle to guarantee some arbitrarily small change, but you can't necessarily move one to guarantee the change you want (unless the particle in question happens to be in the brain of the ass).

If you slowly move the particles one at a time from one bale to the other, you know that once you've moved the entire bale the Ass will change its decision. At some point before that it won't be sure.

There might not actually be a choice where the Ass stands there until it starves. It might walk forward, or split in half down the middle and have half of it take one bale of hay and half take the other, or any number of other things. It's really more that there's a point where the Ass will eventually take a third option, even if you make sure all third options are worse than the first two.

Thanks (and I actually read the other new comments on the post before responding this time!) I still have two objections.

The first one (which is probably just a failure of my imagination and is in some way incorrect) is that I still don't see how some simple algorithms would fail. For example, the ass stares at the bales for 15 seconds, then it moves towards whichever one it estimates is larger (ignoring variance in estimates). If it turns out that they are exactly equal, it instead picks one at random. For simplicity, let's say it takes the first letter of the word under consideration (h), plugs the corresponding number (8) as a seed into a pseudorandom integer generator, and then picks option 1 if the result is even, option 2 if it's odd. It does seem like this might induce a discontinuity in decisions, but I don't see where it would fail (so I'd like someone to tell me =)).
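The proposed rule can be sketched literally (names and constants are illustrative, taken from the comment's own description):

```python
import random

def choose_bale(left_estimate, right_estimate):
    """The commenter's proposed rule: pick the bale estimated to be
    larger; on an exact tie, seed a PRNG with 8 (from 'h', the first
    letter of 'hay', the 8th letter of the alphabet) and pick by the
    parity of a pseudorandom integer. A sketch, not from Lamport."""
    if left_estimate > right_estimate:
        return "left"
    if right_estimate > left_estimate:
        return "right"
    rng = random.Random(8)  # deterministic tiebreak seed
    return "left" if rng.randint(0, 1_000_000) % 2 == 0 else "right"
```

The discontinuity lives in the exact-equality test: choose_bale jumps between "left" and "right" as the estimate difference crosses zero, and that branch is exactly what an analog system cannot implement exactly in bounded time.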

The second objection is that our world is, in fact, not continuous (with the Planck length and whatnot). My very mediocre grasp of QM suggests to me that if you try to use continuity to break the ass's algorithm (and it's a sufficiently good algorithm), you'll just find the point where its decisions are dominated by quantum uncertainty and get it to make true random choices. Or something along those lines.

For example, the ass stares at the bales for 15 seconds, then it moves towards whichever one it estimates is larger (ignoring variance in estimates). If it turns out that they are exactly equal, it instead picks one at random.

Your problem is that you're using an algorithm that can only be approximated on an analog computer. You can't do flow control like that. If you want it to do A if it has 0 as an input and B if it has 1 as an input, you can make it do A+(B-A)x where x is the input, but you can't just make it do A under one condition and B under another. If continuity is your only problem, you can make it do A+(B-A)f(x), where f(x)=0 for 0<=x<=0.49 and f(x)=1 for 0.51<=x<=1, but f(x) still has to come out to 1/2 when x is somewhere between 0.49<x<0.51.
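A minimal sketch of this construction (the 0.49/0.51 ramp is the comment's own example):

```python
def f(x):
    """A continuous 'soft switch': 0 on [0, 0.49], 1 on [0.51, 1],
    linearly interpolated in between. By the intermediate value
    theorem it must equal 1/2 somewhere in (0.49, 0.51)."""
    if x <= 0.49:
        return 0.0
    if x >= 0.51:
        return 1.0
    return (x - 0.49) / 0.02  # linear ramp across the gap

def analog_decide(a, b, x):
    """Continuous analogue of 'do A if x==0, B if x==1': always the
    blend a + (b - a) * f(x), never a true branch."""
    return a + (b - a) * f(x)
```

No matter how steep you make the ramp, any continuous f joining 0 to 1 takes the value 1/2 somewhere in the gap.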

If you tried to do your algorithm, after 15 seconds, there'd have to be some certainty level where the Ass will end up doing some combination of going left and choosing at random, which will keep it in the same spot if "random" was right. If "random" is instead left, then it stops if it's halfway between that and right.

The second objection is that our world is, in fact, not continuous (with the Planck length and whatnot).

I'm not really sure where that idea came from. Quantum physics is continuous. In fact, derivatives are vital to it, and you need continuity to have them. The position of an object is spread out over a waveform instead of being at a specific spot like a billiard ball, but the waveform is a continuous function of position. The waveform has a center of mass that can be specified however much you want. Also, the Planck length seems kind of arbitrary. It means something if you have an object with size one Planck mass (about the size of a small flea), but a smaller object would have a more spread out waveform, and a larger object would have a tighter one.

get it to make true random choices.

That would make it so you can't purposely fool the Ass, but it won't keep that from happening by accident. For example, if you try to balance a needle on its tip outside when there's a little wind, you're (probably) not going to be able to do it by making it stand up perfectly straight. It's going to have to tilt a little so it leans into every gust of wind. But there's still some way to get it to balance indefinitely.

The second objection is that our world is, in fact, not continuous (with the Planck length and whatnot).

I'm not really sure where that idea came from. Quantum physics is continuous. [...]

The Planck length is irrelevant, but quantization isn't. Specifically, with quantum mechanics it's possible to get the ass to be in a superposition of eating from one bale or the other (but not in the middle) in bounded time.

Okay, thanks for the explanation. It does seem that you're right*, and I especially like the needle example.

*Well, assuming you're allowed to move the hay around to keep the donkey confused (to prevent algorithms where he tilts more and more left or whatever from working). Not sure that was part of the original problem, but it's a good steelman.

You don't have to move the hay during the experiment. The donkey is the one that moves.

If he goes left as he gets hungry, you move the bale to his right a tad closer, and he'll slowly inch towards it. He'll slow down instead of speed up as he approaches it because he's also getting hungrier.

Does that really work for all (continuous? differentiable?) functions? For example, if his preference for the bigger/closer one is linear with size/closeness, but his preference for the left one increases quadratically with time, I'm not sure there's a stable solution where he doesn't move. I feel like if there's a strong time factor, either a) the ass will start walking right away and get to the size-preferred hay, or b) he'll start walking once enough time has passed and get to the time-preferred hay. I could write down an equation for precision if I figure out what it's supposed to be in terms of, exactly...

I'm not sure there's a stable solution where he doesn't move.

Like I said, the hay doesn't move, but the donkey does. He starts walking right away to the bigger pile, but he'll slow down as time passes and he starts wanting the other one.

Interestingly, that trick does get the ass to walk to at least one bale in finite time, but it's still possible to get it to do silly things, like walk right up to one bale of hay, then ignore it and eat the other.

I'm not sure there's a stable solution

The solutions are almost certainly unstable. That is, once you find some ratio of bale sizes that will keep the donkey from eating, an arbitrarily small change can get it to eat eventually.

Interestingly, that trick does get the ass to walk to at least one bale in finite time, but it's still possible to get it to do silly things, like walk right up to one bale of hay, then ignore it and eat the other.

Okay, sure, but that seems like the problem is "solved" (i.e. the donkey ends up eating hay instead of starving).

It can also use the "always eat the left bale first" strategy, although that gets kind of odd if it does it with a bale of size zero.

There is a problem if you want to make it make an actual binary decision, like go to one bale and stay.

See Daniel's comment here.

Lamport points out this substitutes another decision subject to the same limits: the agent must decide that it can’t decide.

When the Ass computes the expected utility of going each direction, it finds that they are equal. This is a decision subject that the Ass can decide in finite time, and furthermore that computation shows that it is obvious there is no value to be gained by spending further time deciding. It's not even worth flipping a coin over- a right-hoofed Ass should go to the right bale, and a left-hoofed Ass should go to the left bale, or some other suitable default that saves cognitive resources.

Basically, it looks like a lot of the assumptions in Lamport's argument are questionable,* and applying basic decision theory dissolves the problem immediately.

*Why does it take appreciably longer to visually measure larger bales of hay? All the Ass is doing is looking at them.

When the Ass computes the expected utility of going each direction, it finds that they are equal.

How is the ass to determine that they're equal, as opposed to one being ε larger than the other in finite time?

How is the ass to determine that they're equal, as opposed to one being ε larger than the other in finite time?

Again, this is the reverse of how you should go about things. The amount of time to spend making a decision is proportional to the difference between the value of the actions- once you're sure the difference is smaller than ε, you stop caring. You might as well flip a coin.

The amount of time to spend making a decision is proportional to the difference between the value of the actions- once you're sure the difference is smaller than ε, you stop caring.

Aw, but how does one determine that the difference is smaller than ε? What if the difference is arbitrarily close to ε?

You assign a disutility to spending time t further evaluating, and when the disutility of further evaluation is equal to the expected increase in utility of the selected bale (because one of them is/may be ε larger than the other), you select randomly between them.

If you adjust the disutility of marginal time t to increase with the total amount of time spent deciding, you break the deadlock that exists when you are uncertain whether the disutility of spending more time evaluating is greater or less than the expected utility increase from doing so; if that return on investment is ever within ε of zero, then some time later it must be more than ε below zero (because the disutility of the time t will have increased by more than 2ε).
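A minimal sketch of this stopping rule, with an illustrative linear model for the marginal cost of deliberation (the specific cost and gain curves are assumptions, not from the comment):

```python
def stopping_time(expected_gain, cost_rate=0.1, dt=0.01, max_t=1_000.0):
    """Deliberate while the expected gain from another moment of
    thinking exceeds that moment's marginal cost. The marginal cost
    grows with total time already spent (cost_rate * t), so even a
    constant, never-resolving expected gain gets outweighed
    eventually -- the deadlock-breaker described above."""
    t = 0.0
    while expected_gain > cost_rate * t and t < max_t:
        t += dt
    return t

# With a constant expected gain g and marginal cost c*t, deliberation
# stops near t = g / c, no matter how finely balanced the bales are.
```

The growing cost term is what guarantees termination: a flat cost of thinking could stay forever balanced against a flat expected gain.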

I'm a bit confused, so please take this as exploratory rather than expository. What prevents the ass (or the decision process in general) from:

1) Having a pre-computed estimate of (a) how long it has before it would starve, (b) how the error of its size determinations depends on how long it spends observing, and (c) how much error in those estimates it cares about; and then,

2) Stopping observing/deciding when the first limit is close (but far enough to still have time to eat!) or when the error of the difference between the two directions falls below the limit it cares about. (In the strictest interpretation of the question, this second step is not necessary.)

When I say "estimate" in step one, I mean a very wide pre-computed interval, not some precise computation. I don't know exactly how long it'll take me to die from hunger, but it's clear that in a similar situation at some point I'd be hungry enough that I can anticipate without needing any complicated logic that I would die from not eating and stop comparing the choices. In that case you just need a way to distinguish them to pick one (i.e., left and right, not bigger and smaller), and you do so with any arbitrary rule (e.g., lexicographic ordering).

In effect, the ass has not just the binary problem of choosing left or right; it has the ternary (meta-)problem of choosing between going left, going right, or searching for a better method of picking which direction is better. The first two may remain symmetrical, but at some point the third choice ("keep thinking") will reach negative expected utility (trivially, when you anticipate starving, but in real life you might also decide that spending another hungry hour deliberating over some very small amount of hay is not worth it).

I'm sure similar decision problems can be posed, where the tradeoffs are balanced such that you still have an issue, but this particular formulation seems almost as silly as claiming the ass will starve because of Zeno's arrow paradox.

Run your decision procedure for a constant time. If it doesn't halt, abort it and break the symmetry - e.g. by choosing the option that sorts first lexically.

The constant time part could work, but is hardly the only escape valve you should have. You have a utility estimate for each action- the estimates will have some variance, and you can run the procedure until either the variance is below a certain amount or the variance has decreased by less than some threshold in the last iteration or you've run out of time.
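Those escape valves can be sketched together, assuming a stream of successively refined utility estimates for two options (the protocol and names here are hypothetical):

```python
import time

def decide(options, evaluate, budget_seconds=0.01):
    """Run a possibly non-terminating evaluation under a time budget;
    if no strict winner emerges in time, break the symmetry lexically.
    `evaluate` yields successively refined utility estimates (a dict
    per refinement); two options are assumed. An illustrative sketch,
    not Lamport's setup."""
    deadline = time.monotonic() + budget_seconds
    best = None
    for estimates in evaluate(options):        # refinement stream
        ranked = sorted(options, key=lambda o: -estimates[o])
        if estimates[ranked[0]] > estimates[ranked[1]]:
            best = ranked[0]                   # a strict winner so far
        if time.monotonic() >= deadline:
            break
    # Timeout with no strict winner: choose the lexically first option.
    return best if best is not None else min(options)
```

Note that this is a digital procedure; the comparison and the timeout are both discrete branches of the kind the continuity objection targets.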

The Ass is not a digital computer. It's an analog computer. It's subject to continuity. That's important.

If you look at the Ass's center of mass five seconds after the experiment starts, and vary the relative sizes of the bales of hay continuously, the Ass's position must also change continuously. If you found some ratio of hay where the Ass ends up at the left bale, but if you add any amount, no matter how tiny, it ends up at the right bale, the Ass is violating the laws of physics.

It gets a bit more complicated because you can't add less than one particle to the bale of hay, but there are other things you can do, such as slowly move one piece of straw between the bales, or move the bales closer and further from the Ass.

satt:

(I wonder why Lamport's paper ended up in Foundations of Physics after so long. He finished a first draft by October 31, 1984 and a revised version on January 21, 1986, at which point he seems to've let it sit around until he submitted it to FoP in 2011!)

One of your better posts, even if you'd need a highly unrealistic assumption (e.g. confined to one rule) to actually have a stuck-up a...gent.

But, similarly contrived scenarios (edit: but not exactly analogous!) can happen. To all sorts of asses, such as dining philosophers:

Five silent philosophers sit at a table around a bowl of spaghetti. A fork is placed between each pair of adjacent philosophers.

Each philosopher must alternately think and eat. However, a philosopher can only eat spaghetti when he has both left and right forks. Each fork can be held by only one philosopher and so a philosopher can use the fork only if it's not being used by another philosopher. After he finishes eating, he needs to put down both forks so they become available to others. A philosopher can grab the fork on his right or the one on his left as they become available, but can't start eating before getting both of them.

Eating is not limited by the amount of spaghetti left: assume an infinite supply. An alternative problem formulation uses rice and chopsticks instead of spaghetti and forks.

The problem is how to design a discipline of behavior (a concurrent algorithm) such that each philosopher won't starve, i.e. can forever continue to alternate between eating and thinking, assuming that any philosopher cannot know when others may want to eat or think.

We can come up with easy ways to solve such deadlocks; the same applies to the donkey.
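One of those easy ways is Dijkstra's resource-hierarchy solution: number the forks and always pick up the lower-numbered one first, so a circular wait can't form. A minimal sketch:

```python
import threading

# Five forks, one between each pair of adjacent philosophers.
forks = [threading.Lock() for _ in range(5)]
meals_eaten = [0] * 5

def philosopher(i, meals=10):
    """Resource-hierarchy solution: always acquire the lower-numbered
    fork first, which makes a circular wait (deadlock) impossible."""
    first, second = sorted((i, (i + 1) % 5))
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                meals_eaten[i] += 1  # "eating" while holding both forks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All five philosophers finish every meal; nobody starves.
```

The asymmetry is the whole trick: four philosophers reach left-first, one reaches right-first, and the deadlock cycle is broken.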

The dining philosophers are digital. They can make whatever crazy exceptions they want. Buridan's Ass is analog. If it has an exception to a rule, it has to be able to be halfway into the exception. It's like how you can make a continuous function that looks a lot like a square wave, but no matter how close it gets, there's always a point where it's 0 instead of ±1.
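A concrete instance of that last point: a steep tanh approximates a square wave as closely as you like, yet continuity forces it through 0.

```python
import math

def soft_square(x, steepness=1000.0):
    """A continuous approximation of the sign/square wave: very close
    to -1 or +1 almost everywhere, but forced through 0 at x = 0
    precisely because it is continuous. The steepness constant is an
    arbitrary illustrative choice."""
    return math.tanh(steepness * x)

# soft_square(0.01) is within 1e-8 of 1, soft_square(-0.01) is within
# 1e-8 of -1, yet soft_square(0.0) == 0.0 exactly: the 'exception'
# always has a halfway point.
```

Raising the steepness narrows the transition region but can never eliminate it.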

it’s obvious that the ass can be “stuck.”

...seriously?

shev:

Well, that's the point. It's absurd.

[anonymous]:

I am not sure what your point is. Consider adding a summary upfront.

If you simplify your system and replace an agent with, say, an inverted pendulum, then this is a standard physical phenomenon of spontaneous symmetry breaking, well studied in classical and quantum situations. Is agency important for your point (your (mis)use of "free will" seems to indicate so), whatever it is? Are you trying to solve the issue of bounded time before making a decision?

[This comment is no longer endorsed by its author]