I was rereading the *fantastic* encyclopedia *Mathematics 1001: Absolutely Everything that Matters in Mathematics in 1001 Bite-Sized Explanations* over Christmas break. It really lives up to its title! And it's much more accessible than reading random Wikipedia articles.

There’s an interesting puzzle, The Unexpected Hanging, posed in the final section *Games and Recreation* — more specifically, in the chapter on *Puzzles and Perplexities*. The problem is as follows:

A logician was condemned to be hanged. The judge informed him that the sentence was to be carried out at noon, one day next week between Monday and Friday. But it would be unexpected: he would not know which day it would be, before it happened. In his cell, contemplating his doom, the logician reasoned as follows:

‘Friday is the final day available for my hanging. If I am still alive on Thursday afternoon, then I can be certain that Friday must be the day. Since it is to be unexpected, this is impossible. So I can rule out Friday.

Therefore Thursday is the last possible day on which the sentence can be carried out. If I am still here on Wednesday afternoon, I must expect to die on Thursday. Again, this conflicts with the execution’s unexpectedness, and is therefore impossible. I can rule out Thursday!..’

Repeating the same argument the logician was able to rule out Wednesday, Tuesday, and Monday, and went to sleep slightly [sic] cheered. On Tuesday morning the hangman arrived at his cell, completely unexpectedly! As he stood at the gallows, the logician reflected that the dreadful sentence was being carried out exactly as the judge had promised.

Where oh where was the flaw in the logician’s reasoning? Please feel free to puzzle over the problem yourself before reading my solution :).

.

.

.

.

.

.

.

.

Ok, here’s how I would approach the problem. One correct but unsatisfactory answer is to say that the judge’s warning ("the sentence was to be carried out at noon, one day next week between Monday and Friday. But it would be unexpected: he would not know which day it would be, before it happened") is somehow logically impossible or logically meaningless. It’s like saying “this sentence is false” or “the flibbertygibbets gobswallow” or “green is the best color” or “the water is very hot”: it isn’t the kind of unambiguous proposition that takes a definite truth value. I mean, this is true, and it explains the logician’s mistake, but we can give a much more satisfying account of what’s going on.

“Unexpected” actually *does* have a formal mathematical definition, unlike “flibbertygibbets” or “best color”. An outcome is unexpected or surprising in proportion to how many bits of information you gained from learning it. A more formal term is “information content” or “surprisal”. This is an extremely fundamental concept in information theory and machine learning. In fact the most common objective that machine learning models try to minimize, e.g. a language model that’s trained to predict the next word in a sentence, or an image classifier that tries to determine what category a picture belongs to, is how surprised they are by the correct answer!
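To make this concrete, here's a minimal sketch of the surprisal of an outcome with probability p. The function name `surprisal` is my own label for illustration:

```python
import math

def surprisal(p: float) -> float:
    """Bits of information gained from observing an outcome of probability p."""
    return -math.log2(p)

# A fair coin flip carries exactly 1 bit of surprisal;
# a 1-in-5 outcome carries log2(5) ~= 2.32 bits.
print(surprisal(0.5))  # 1.0
print(surprisal(0.2))  # ~2.32
```

Note that rarer outcomes are more surprising, and a certain outcome (p = 1) carries zero bits.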

There are thus a few ways to formalize the judge’s warning. A slightly dumb way might be if the judge is really saying, “On the day you are killed, you will have *cumulatively* gained lg(5) = 2.32 bits of information, and thus be that surprised overall” (which is just to say that the logician will be killed on any of Monday–Friday with equal probability). This is an unsatisfying solution, since we want to respect the logician’s understanding that he must be gaining a lot of surprisal on the day of the execution itself.

My favorite solution is if the judge is really saying, “On the day you are killed, you will gain at least 1 bit (or half a bit — whatever counts as sufficiently surprising) of information... with high probability”. The “with high probability” is because, in the spirit of the logician’s reasoning, there is no info to be gained on the last day, so the logician can only be surprised on one of the earlier days. One way the judge could randomly choose the day in accordance with his warning is to decide each morning Monday–Thursday to kill the logician with probability 50%, and always kill the logician if he’s survived to Friday. If he’s killed on any of Monday–Thursday, then the logician will gain lg(2) = 1 bit of surprisal. There’s only a one in sixteen chance that the logician will make it all the way to Friday, which means he’ll be surprised with high probability, namely 15/16.
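The judge's coin-flipping scheme above is easy to simulate. A quick sanity check (the helper `hanging_day` is my own name for this sketch) that the logician only reaches the unsurprising Friday about 1/16 of the time:

```python
import random

def hanging_day(rng: random.Random) -> int:
    # Each morning Monday(0) through Thursday(3), hang with probability 1/2;
    # if the logician has survived to Friday(4), hang with certainty.
    for day in range(4):
        if rng.random() < 0.5:
            return day
    return 4

rng = random.Random(0)
trials = 100_000
fridays = sum(hanging_day(rng) == 4 for _ in range(trials))
# Surviving to Friday requires four coin flips to come up "spare":
# probability (1/2)^4 = 1/16, so the logician gains >= 1 bit
# of surprisal with probability 15/16.
print(fridays / trials)  # ~0.0625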

Those last two ways of choosing the probability distribution (uniform across the five days, or an exponential where the conditional probability of hanging on each of the first four days is one half) kind of assume the judge shares his decision process exactly with the logician. That is, if the judge says he will kill the logician with probability one fifth on Monday, then the logician will gain lg(5) = 2.32 bits of info if he is killed that day (and lg(5/4) = 0.32 bits of info if he’s spared).

Of course, the logician could hallucinate that the judge said he would be killed with certainty on Monday, in which case the logician would gain zero information if he was killed that day (and infinite information if he survived!). So in general, given a possible deviation between the logician’s beliefs and the true distribution, we might want to formalize the judge’s warning as, “You will be surprised on average by what day you are killed”. The technical name for this average is cross-entropy, which is again a central concept in optimization in machine learning: a language model wants to minimize the cross-entropy between its predictions and the true distribution. The cross-entropy is bounded below by the true distribution’s entropy, so the logician will be at least as surprised on average as the judge.

In the context of the original story, the logician believed that there was *zero* chance of being killed on any of the five days. Soooooo, he would have always been infinitely surprised when inevitably hung. Technically, the logician could avoid being surprised on the day of his hanging by being maximally pessimistic, and always assuming that today was the day. In that case his average cumulative surprisal would still be extremely high (more precisely, cross-entropy is the expected cumulative surprisal — that’s the quantity bounded below by the judge’s entropy).
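A minimal sketch of the cross-entropy between the judge's true distribution over days and the logician's beliefs (names are my own for illustration), showing the lower bound in action:

```python
import math

def cross_entropy(p, q):
    """Average surprisal under beliefs q when days are drawn from true dist p."""
    return sum(pi * -math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

true_dist = [1/5] * 5  # judge: uniform over Mon-Fri
beliefs = [1/5] * 5    # logician happens to share the judge's distribution
print(cross_entropy(true_dist, beliefs))  # equals the entropy, ~2.32 bits

# Gibbs' inequality: any mismatched belief only increases average surprisal.
skewed = [0.9, 0.025, 0.025, 0.025, 0.025]
print(cross_entropy(true_dist, skewed))   # > 2.32 bits
```

If the logician assigns probability zero to a day the judge might actually pick, the cross-entropy diverges to infinity — which is exactly the story's logician.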

I looked at the Wikipedia page for this problem, and it looks like the epistemological school proposes a solution based entirely on assuming that the logician has a different distribution than the judge for when he will be hung. For a minimal example, say the judge’s warning was, “You will be hung *tomorrow*, and that will surprise you”. The judge knows that he will hang the logician with probability 1. The logician assigns a different probability, and holds onto some hope that he will be spared. Hence he is, sadly, surprised.

Finally, the judge could always just be probabilistically wrong in a boring way. Maybe half his offenders are optimists like our story’s logician and the other half are pessimists, and only the optimists are ever surprised by their hanging.

Taking a cue from the wiki article, consider the statement “you will be hanged tomorrow, and you will not be able to derive from this statement whether or not you will be hanged tomorrow”. It seems kind of weird because it is self-contradictory, and yet true.

Friday would still be a surprise, because one would not expect to live until Friday in the first place. So you cannot bootstrap backwards from it, and you’re left with something like a uniform distribution.