Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Consider the Sleeping Beauty problem. What do we mean by a fair coin? It is meant that the coin will have a 50-50 probability of heads or tails. But that is fake. It will either come up heads or tails, because the real world is deterministic. It is true that I don't know the outcome. I don't know whether I am in a world of the type "the coin will come up heads" or a world of the type "the coin will come up tails". But in this situation I should be allowed to put whatever prior I want on the coin's behavior.

Consider the Born rule of quantum mechanics. If I measure the spin of an electron, then I will entangle the large apparatus that is my measuring equipment with the spin of the electron. We say that there are now two Everett branches, one where the apparatus measured spin up and one where the apparatus measured spin down. Before I read off the result, I don't know which Everett branch I am in. I could be in either, and I should be allowed to have whatever prior I want. So why the Born rule? Why do I believe that the squared amplitude is the correct way of assigning probability to which Everett branch I am in?

I believe in the Born rule because of the frequency of experimental outcomes in the past. The distribution of galaxies in the sky can be traced back to the Born rule. I don't have a gears-level model of what causes the Born rule, but there is something undeniably real about galaxies that trumps mere philosophical Bayesian arguments about freedom of priors.
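As a minimal sketch of what "believing the Born rule because of frequencies" cashes out to (my own illustration, not from the post; the amplitudes are made-up numbers): each branch gets probability equal to the squared magnitude of its amplitude, and repeated sampling reproduces exactly those frequencies.

```python
import random

# Hypothetical real amplitudes for the two branches; 0.6**2 + 0.8**2 = 1.
amplitudes = {"up": 0.6, "down": 0.8}

# Born rule: probability of a branch is the squared magnitude of its amplitude.
born_probs = {branch: a ** 2 for branch, a in amplitudes.items()}
assert abs(sum(born_probs.values()) - 1.0) < 1e-12

# Sampling branches with these weights reproduces the Born frequencies.
random.seed(0)
n = 100_000
counts = {branch: 0 for branch in born_probs}
for _ in range(n):
    branch = random.choices(list(born_probs), weights=list(born_probs.values()))[0]
    counts[branch] += 1

# The observed frequency of "up" approaches |amplitude|^2 = 0.36.
print(counts["up"] / n)
```

The point of the sketch is only that squared amplitudes, unlike an arbitrary prior, are pinned down by the observed long-run frequencies.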

Imagine that you are offered a bet. Should you take it or not? There are several arguments about what you should do in different situations. For example, if you have a finite amount of money, you should maximize E(log(money)) for each bet (see e.g. the Kelly criterion). However, every such argument I have ever seen assumes that you will be confronted with a large number of similar bets. This is because probabilities only really make sense if you sample enough times from the distribution you are considering.
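To make the Kelly point concrete (my own sketch, assuming a repeated even-money bet with win probability p, which is not a setup taken from the post): the Kelly fraction is the stake that maximizes E(log(money)) per bet.

```python
import math

def kelly_fraction(p, b=1.0):
    """Fraction of bankroll to stake on a repeated bet paying b-to-1
    with win probability p (the Kelly criterion)."""
    q = 1.0 - p
    return (b * p - q) / b

def expected_log_growth(f, p, b=1.0):
    """Expected log-growth of wealth per bet when staking fraction f."""
    q = 1.0 - p
    return p * math.log(1 + b * f) + q * math.log(1 - f)

p = 0.6
f_star = kelly_fraction(p)  # 0.2 for an even-money bet at p = 0.6
print(f_star)

# The Kelly fraction beats nearby fractions in expected log-growth:
print(expected_log_growth(f_star, p) > expected_log_growth(0.1, p))  # True
print(expected_log_growth(f_star, p) > expected_log_growth(0.3, p))  # True
```

Note that "expected log-growth per bet" only has teeth because the bet repeats; that is exactly the post's point.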

The notion of a "fair coin" does not make sense if the coin is flipped only once. The right way to view the Sleeping Beauty problem is to view it in the context of Repeated Sleeping Beauty.

Next: Repeated (and improved) Sleeping Beauty problem

7 comments

Seems you have a category error, at least in the title. Probability is a model, frequency is an observation. A model cannot be fake; it can only have a certain degree of usefulness in explaining and predicting observations.

What do we mean by a fair coin? It is meant that the coin will have a 50-50 probability of heads or tails. But that is fake. It will either come up heads or tails, because the real world is deterministic.

A small nit to pick perhaps, but it doesn't follow that the world is deterministic because the coin comes up either heads or tails. Perhaps the coin has libertarian free will and decides at the last moment which side to come up; we'd make the same observation that the coin was either heads or tails in this case, and would be wrong to say the world is deterministic. To make a claim about determinism you need to be saying something about the future being determined by (caused by) the past. When we talk about fair coins and probabilities we are instead expressing something about our uncertainty about the future, not an expression of whether or not we think the future is determined by the past.

I don't know what you mean by "should be allowed to put whatever prior I want". I mean, I guess nobody will stop you. But if your beliefs are well approximated by a particular prior, then pretending that they are approximated by a different prior is going to cause a mismatch between your beliefs and your beliefs about your beliefs.

[Nitpick: The Kelly criterion assumes not only that you will be confronted with a large number of similar bets, but also that you have some base level of risk-aversion (concave utility function) that repeated bets can smooth out into a logarithmic utility function. If you start with a linear utility function then repeating the bets still gives you linear utility, and the optimal strategy is to make every bet all-or-nothing whenever you have an advantage. At least, this is true before taking into account the resource constraints of the system you are betting against.]
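The commenter's nitpick can be checked directly (my own sketch, not the commenter's code, assuming repeated independent even-money bets with win probability p): with linear utility, expected wealth is maximized by betting everything every time, even though that strategy almost surely goes broke.

```python
def expected_wealth(f, p, n, w0=1.0):
    """Exact expected wealth after n independent even-money bets with win
    probability p, staking fraction f of the bankroll each time.
    Each bet multiplies wealth by (1+f) with prob p and (1-f) with prob 1-p,
    so E[wealth] = w0 * (p*(1+f) + (1-p)*(1-f))**n."""
    return w0 * (p * (1 + f) + (1 - p) * (1 - f)) ** n

p, n = 0.6, 10
print(expected_wealth(1.0, p, n))  # all-in: 1.2**10, about 6.19
print(expected_wealth(0.2, p, n))  # Kelly stake: 1.04**10, about 1.48
# All-in maximizes the mean, but you end with nothing unless you win all
# n bets, which happens with probability p**n (about 0.006 here).
```

So a linear-utility bettor really does prefer all-or-nothing, exactly as the nitpick says; only a concave utility turns repetition into Kelly behavior.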

I agree that "want" is not exactly the correct word. What I mean by prior is an agent's actual a priori beliefs, so by definition there will be no mismatch there. I am not trying to say that you choose your prior exactly.

What I am gesturing at is that no prior is wrong, as long as it does not assign zero probability to the true outcome. And I think that much of the confusion in anthropic situations comes from trying to solve an under-constrained system.

the real world is deterministic

We don't know that. Even if the way we quantify probability is a "map" feature, probability is not just a set of quantities. 0.5 is not intrinsically a probability. Probability is the quantification of some kind of indeterminism or lack of information. Under conditions of perfect determinism and perfect information, there is nothing for probability to do.

I think you would be right if we lived in a classical universe. But given many worlds, there is a principled way in which a coin flip can be random, and a principled difference between flipping a coin and checking the trillionth digit of the decimal expansion of π.

Edit: I know you acknowledge this, but you don't seem to draw the above conclusion.

We can talk about single-shot events, so long as we are allowed to include multi-shot elements. For example, Sleeping Beauty includes a coin. We can insist that Sleeping Beauty is only run once, but when we say the coin has 50/50 chance of being heads we are talking about the long-run frequency.
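The "long-run frequency" reading of the 50/50 claim is easy to illustrate (my own sketch; the simulated coin stands in for the multi-shot element the commenter describes):

```python
import random

# "This coin has a 50/50 chance of heads" cashes out as a statement about
# the frequency of heads under many repeated flips.
random.seed(42)

def heads_frequency(n):
    """Simulate n fair-coin flips and return the fraction that were heads."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, heads_frequency(n))  # the frequency tends toward 0.5 as n grows
```

A single run of Sleeping Beauty can then borrow this long-run fact about the coin, even though the experiment itself happens once.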