Some Variants of Sleeping Beauty

Sami Petersen, Eric Chen, JBlack, Sylvester Kollin, Martín Soto, Jim Buhler, avturchin

These are great. Though Sleeping Mary can tell that she's colourblind on any account of consciousness. Whether or not she learns a *phenomenal fact* when going from 'colourblind scientist' to 'scientist who sees colour', she does learn the *propositional fact* that she isn't colourblind.

So, if she sees no colour, she ought to believe that the outcome of the coin toss is Tails. If she does see colour, both SSA and SIA say P(Heads)=1/2.

Yeah great point, thanks. We tried but couldn't really get a set-up where she just learns a phenomenal fact. If you have a way of having the only difference in the 'Tails, Tuesday' case be that Mary learns a phenomenal fact, we will edit it in!

I did particularly like the "Sleeping Loop" version, which manages to even confuse the question of how many times you've been awakened: just once, or infinitely many times? Congratulations!

My follow-up question for almost all of them, though, is based on the use of the word "should" in the question. Since it presumably is not any moral version of "should", it presumably means something in the direction of "best achieves a desired outcome".

What outcome am I trying to maximize here? Am I trying to maximize some particular metric of prediction accuracy? In which case, which metric, and how is it applied? If I give the same answer twice based on the same information, is that scored differently from giving that answer once? If some p-zombie answers the same way that I would have if I were conscious, does that count towards my prediction score, or is it considered irrelevant? (Although this comment ends here, don't worry - I have a lot more questions!)

> My follow-up question for almost all of them, though, is based on the use of the word "should" in the question. Since it presumably is not any moral version of "should", it presumably means something in the direction of "best achieves a desired outcome".

The 'should' only designates what you think epistemic rationality requires of you in the situation. That might be something consequentialist (which is what I think you mean by "best achieves a desired outcome"), like maximizing accuracy[1], but it need not be; you could think there are other norms[2].

To see why epistemic consequentialism might not be the whole story, consider the following case from Greaves (2013) where the agent seemingly maximises accuracy by ignoring evidence and believing an obviously false thing.

> Imps. Emily is taking a walk through the Garden of Epistemic Imps. A child plays on the grass in front of her. In a nearby summerhouse are n further children, each of whom may or may not come out to play in a minute. They are able to read Emily's mind, and their algorithm for deciding whether to play outdoors is as follows. If she forms degree of belief 0 that there is now a child before her, they will come out to play. If she forms degree of belief 1 that there is a child before her, they will roll a fair die, and come out to play iff the outcome is an even number. More generally, the summerhouse children will play with chance (1 − q/2), where q is the degree of belief Emily adopts in the proposition that there is now a child before her. Emily's epistemic decision is the choice of credences in the proposition that there is now a child before her, and, for each i = 1, …, n, the proposition that the ith summerhouse child will be outdoors in a few minutes' time.
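To make the trade-off concrete, here is a small numeric sketch (my own illustration, not from the thread). Assuming Emily also sets her credence in each summerhouse child's playing equal to its chance 1 − q/2, her total expected Brier inaccuracy is (1 − q)² + n·(1 − q/2)(q/2); for n > 4 this is minimized by the obviously false credence q = 0, even though the child is plainly there.

```python
# Hypothetical sketch of Greaves' Imps case (function names and the
# calibration assumption are mine). Emily reports credence q that there
# is a child before her (there is); each of n summerhouse children plays
# with chance 1 - q/2, and Emily's credence in each child playing is
# assumed to equal that chance, contributing c(1 - c) expected Brier loss.

def inaccuracy(q, n):
    """Total expected Brier inaccuracy of adopting credence q."""
    chance = 1 - q / 2                      # chance each child plays
    return (1 - q) ** 2 + n * chance * (1 - chance)

qs = [i / 1000 for i in range(1001)]        # grid search over credences
for n in (2, 10):
    best = min(qs, key=lambda q: inaccuracy(q, n))
    print(n, best)                          # n=2 favors q=1; n=10 favors q=0
```

So with enough imps, "maximize accuracy" recommends full disbelief in an obvious truth, which is the anti-consequentialist worry.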

See Konek and Levinstein (2019) for a good discussion, though.

> If I give the same answer twice based on the same information, is that scored differently from giving that answer once?

Once again, this depends on your preferred view of epistemic rationality, and specifically on how you want to formulate the accuracy-first perspective. Whether you want to maximize individual, average, or total accuracy is up to you! The problems formulated here are supposed to be agnostic about such things; indeed, these are exactly the sorts of discussions one wants to motivate by formulating philosophical dilemmas.

[1] This is plausibly cashed out by tying your epistemic utility function to a proper scoring rule, e.g. the Brier score.

[2] See e.g. Sylvan (2020) for a discussion of what non-consequentialism might look like in the general, non-anthropic, case.
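For concreteness, here is a minimal sketch (my own illustration, not from the comment) of what "proper" means for the Brier score: if the true chance of an event is p, your expected Brier penalty is uniquely minimized by reporting q = p, so honesty is optimal.

```python
# Sketch: the Brier score is a strictly proper scoring rule.
# Expected penalty of reporting credence q when the chance is p:
#   p * (1 - q)^2 + (1 - p) * q^2,  minimized exactly at q = p.

def expected_brier(p, q):
    """Expected Brier penalty for reporting q when the chance is p."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.42                                    # an arbitrary true chance
qs = [i / 1000 for i in range(1001)]        # candidate reports
best_q = min(qs, key=lambda q: expected_brier(p, q))
assert abs(best_q - p) < 1e-9               # honest report wins
```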

Regarding Sleeping Counterfact: there seem to be two updates you could make, and thus there should be conceptual space for two interesting ways of being updateless in this problem: you could be 'anthropically updateless', i.e., not update on your existence in the standard Thirder way, and you could be updateless with respect to the researchers asking for money (just as in counterfactual mugging). And it seems like these two variants will make different recommendations.

Suppose you make the first update, but not the second. Then the evidentialist value of paying up would plausibly be .

Suppose, on the other hand, that you are updateless with respect to both variables. Then the evidentialist value of paying up would be .

Interesting! Did thinking about those variants make you update your credences in SIA/SSA (or else)?

(Btw, maybe it's worth adding the motivation for thinking about these problems in the intro of the post.) :)

I expected to see a Sleeping Beauty trolley problem:

One beauty is on the Monday track and 5 beauties are on the Tuesday track. All beauties are exact copies of each other. Should you change the direction of the trolley, given that none of them will ever love you?

The Sleeping Beauty problem is a classic conundrum in the philosophy of self-locating uncertainty; see Elga (2000). Here are some variants of the problem, not to be taken all too seriously.

## Sleeping Logic

## Sleeping Counterfact

## Sleeping Nested

Solution. The expected number of wakings satisfies

E[W] = 1/2 ⋅ 2 + 1/2 ⋅ (1 + E[W]),

which implies that E[W] = 3, with E[W|H] = 4 and E[W|T] = 2. This means that the problem at hand is (a reversed version of) the standard Sleeping Beauty.
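The recursion can be checked by simulation. A minimal Monte Carlo sketch, under my reading of the setup: Tails ends the experiment after two wakings, while Heads yields one waking and then the whole experiment is rerun with a fresh toss (conditioning on the first toss for the conditional expectations):

```python
import random

# Monte Carlo check of E[W] = 3, E[W|H] = 4, E[W|T] = 2, assuming:
# Tails -> two wakings and stop; Heads -> one waking, then restart.

def wakings(rng):
    """Return (first_toss_is_heads, total number of wakings)."""
    first_heads = None
    total = 0
    while True:
        heads = rng.random() < 0.5
        if first_heads is None:
            first_heads = heads
        if heads:
            total += 1                       # one waking, then restart
        else:
            return first_heads, total + 2    # Tails: two wakings, stop

rng = random.Random(0)
runs = [wakings(rng) for _ in range(200_000)]
mean = sum(w for _, w in runs) / len(runs)
mean_h = sum(w for h, w in runs if h) / sum(1 for h, _ in runs if h)
mean_t = sum(w for h, w in runs if not h) / sum(1 for h, _ in runs if not h)
print(round(mean, 2), round(mean_h, 2), mean_t)
```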

## Sleeping Newcomb

Solution.

|                                      | Thirder/SIA            | Halfer/SSA              |
| ------------------------------------ | ---------------------- | ----------------------- |
| Deliberational epistemic EDT         | Fixed points: p=0, p=1 | Fixed point: every p[1] |
| Epistemic CDT (with a uniform prior) | p=1/3                  | p=1/2                   |

Solution.

|                                      | Thirder/SIA                | Halfer/SSA         |
| ------------------------------------ | -------------------------- | ------------------ |
| Deliberational epistemic EDT         | Fixed point: p=√2−1[2][3]  | Fixed point: p=1/2 |
| Epistemic CDT (with a uniform prior) | p=1/3                      | p=1/2              |

## Sleeping Past

## Sleeping Loop

## Sleeping Mary

## Sleeping Zombie

## Sleeping Parfit

## Sleeping Collapse

## Sleeping Cardinalities

## Sleeping Grim Wakers

## Sleeping Rosswood

Hint. Suppose the coin lands Tails: for any day n, is Sleeping Beauty awoken on that day?

[1] Since every credence p works here, one should arguably go with zero or one.

[2] Does rationality require you to have irrational credences?

[3] Proof. Suppose I say 'p'. Then I have uncentred credences Cru(H)=1−p and Cru(T)=p. So the Thirder Rule/SIA says that my centred credence should be Cr@(H) = (1−p)/[(1−p)+2p] = (1−p)/(1+p). Setting Cr@(H) = p (to find the fixed point) gives (1−p)/(1+p) = p ⇔ p²+2p−1 = 0 ⇔ p = −1±√2, where p = √2−1 ≈ 0.41 is the only positive value. (See Briggs (2010) for details on the Thirder Rule/SIA.)
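As a numerical sanity check on the fixed point (my own sketch, not part of the proof): the map f(p) = (1−p)/(1+p) is an involution (f(f(p)) = p), so naive iteration just oscillates between two values; damped iteration p ← (p + f(p))/2 converges to √2 − 1.

```python
import math

# Find the fixed point of the Thirder-Rule map f(p) = (1 - p)/(1 + p).
# f is an involution, so plain iteration oscillates; average each step
# with its image instead, which converges rapidly to the fixed point.

def f(p):
    """Centred credence in Heads given uncentred credence p in Tails."""
    return (1 - p) / (1 + p)

p = 0.5
for _ in range(60):
    p = (p + f(p)) / 2                      # damped iteration

assert abs(p - (math.sqrt(2) - 1)) < 1e-9   # fixed point is sqrt(2) - 1
```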