# The Solution to Sleeping Beauty

This is the eighth post in my series on Anthropics. The previous one is Lessons from Failed Attempts to Model Sleeping Beauty Problem. The next one is Beauty and the Bets.

## Introduction

Suppose we take the insights from the previous post, and directly try to construct a model for the Sleeping Beauty problem based on them.

We expect a halfer model, so:

P(Heads) = P(Tails) = 1/2

On the other hand, in order not to repeat Lewis' Model's mistake:

P(Heads|Monday) = P(Tails|Monday) = 1/2

But both of these statements can only be true if:

P(Monday) = 1

And, therefore, apparently, P(Tuesday) has to be zero, which sounds obviously wrong. Surely the Beauty can be awakened on Tuesday!

At this point, I think, you wouldn't be surprised if I told you that there are philosophers who are eager to bite this bullet and claim that the Beauty should, indeed, reason as if she can't possibly be awoken on Tuesday. I applaud their dedication to brute-forcing the problem, and leave them to it, preferring to follow less bizarre approaches.

As I mentioned in the previous post, we need to find the core flawed assumption shared by all three models and fix it before we can construct the correct model. That's what we are going to do now.

On the other hand, this dead end may create the impression that my analysis from the previous post is wrong, and that's why my insights lead nowhere - that one of the three models is actually sound for the Sleeping Beauty problem, despite all the issues mentioned previously.

Thankfully, there is a way to kill two birds with one stone.

## Statistical Analysis

Let's simulate the Sleeping Beauty experiment multiple times and write down the day and the coin side on every awakening. Here is an implementation in Python:

```python
import random

n = 100000  # number of iterations of the experiment

def sleepingBeauty():
    days = ['Monday']
    if random.random() >= 0.5:  # result of the coin toss
        coin = 'Tails'
        days += ['Tuesday']
    else:
        coin = 'Heads'
    return days, coin

ListSB = []
for i in range(n):
    days, coin = sleepingBeauty()
    for day in days:
        ListSB.append(coin + '&' + day)
```

And then do the same with all three models:

```python
def modelElga():
    rand = random.random()
    if rand <= 0.33:
        return 'Heads&Monday'
    elif rand <= 0.66:
        return 'Tails&Monday'
    else:
        return 'Tails&Tuesday'

ListE = []
for i in range(int(1.5 * n)):
    outcome = modelElga()
    ListE.append(outcome)
```

```python
def modelLewis():
    if random.random() >= 0.5:  # result of the coin toss
        coin = 'Tails'
        day = 'Monday' if random.random() >= 0.5 else 'Tuesday'
    else:
        coin = 'Heads'
        day = 'Monday'
    return coin + '&' + day

ListL = []
for i in range(int(1.5 * n)):
    outcome = modelLewis()
    ListL.append(outcome)
```

```python
def modelUpdating():
    day = 'Monday' if random.random() >= 0.5 else 'Tuesday'
    coin = 'Heads' if random.random() >= 0.5 else 'Tails'
    outcome = coin + '&' + day
    if outcome != 'Heads&Tuesday':  # reject the impossible outcome
        return outcome

ListU = []
for i in range(int(1.5 * n)):
    outcome = modelUpdating()
    if outcome:
        ListU.append(outcome)
```

As a result, we've got four lists of outcomes: ListSB, ListE, ListL and ListU.

If one of the models accurately represents the Sleeping Beauty problem, we shouldn't be able to easily distinguish between ListSB and the list produced by this model. They wouldn't be identical, of course, since we are talking about random events, but their statistical properties have to be similar, so that a person given either of them couldn't guess which is which better than chance.

It's fairly easy to see that ListL immediately fails this test. The frequency of Tails&Monday and Tails&Tuesday outcomes in ListSB is around 1/3, while in ListL it's 1/4 for both of them. But ListE and ListU are not much better. Yes, their frequencies for individual outcomes are correct. And yet, these lists are still easily distinguishable from the target.

In ListE and ListU, all three outcomes - Heads&Monday, Tails&Monday, Tails&Tuesday - are spread randomly. But in ListSB, the outcome Tails&Tuesday is always preceded by Tails&Monday. This is because all three models assume that Beauty's previous and next awakenings happen independently of each other, while in the experiment itself, there is an order between them.
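
This ordering difference is itself a simple statistical test. Below is a minimal self-contained sketch (the helper `preceded_by_tails_monday` is my own hypothetical name, not from the original code) that measures what fraction of Tails&Tuesday entries are immediately preceded by Tails&Monday:

```python
import random

n = 100_000

def sleepingBeauty():
    days, coin = ['Monday'], 'Heads'
    if random.random() >= 0.5:  # Tails: two sequential awakenings
        coin = 'Tails'
        days += ['Tuesday']
    return days, coin

def modelElga():
    rand = random.random()
    if rand <= 1/3:
        return 'Heads&Monday'
    elif rand <= 2/3:
        return 'Tails&Monday'
    return 'Tails&Tuesday'

def preceded_by_tails_monday(outcomes):
    # fraction of Tails&Tuesday entries immediately preceded by Tails&Monday
    hits = [i for i, o in enumerate(outcomes) if o == 'Tails&Tuesday']
    return sum(outcomes[i - 1] == 'Tails&Monday' for i in hits if i > 0) / len(hits)

ListSB = []
for _ in range(n):
    days, coin = sleepingBeauty()
    ListSB += [coin + '&' + day for day in days]

ListE = [modelElga() for _ in range(int(1.5 * n))]

print(preceded_by_tails_monday(ListSB))  # always 1.0: the ordering is deterministic
print(preceded_by_tails_monday(ListE))   # around 1/3: the predecessor is random
```

The test exposes exactly the structure that per-outcome frequencies miss.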

So, of course, these models are applicable to some different problems, where the condition of independence is satisfied. And now we should be able to see the wrong assumption in all three models. They treat the current awakening as somehow sampled, assuming that Monday and Tuesday, and therefore Heads&Monday, Tails&Monday and Tails&Tuesday, are mutually exclusive outcomes from which a sample space can be constructed. However, they are not. Monday and Tuesday awakenings are ordered and thus not mutually exclusive.

## Probability and Time

It may not be immediately clear why Monday and Tuesday are not mutually exclusive in Sleeping Beauty. It intuitively feels that they have to be exclusive because they are separated by time. But this is a confusion between a layman's understanding of mutual exclusivity and the probability-theoretic one. And the core difference is in how our intuition and probability theory treat the concept of time.

For us, time is a continuous stream, where the next moment depends on the previous one due to causality.

For probability theory, time is discrete and every moment is independent from another. It's a purely logical entity without any physical connotations. "Next moment in time" simply means "next iteration of the experiment".

Mutually exclusive events, according to probability theory, are events that can't happen in the same iteration of the experiment - exclusive in "logical time" and not "physical time".

Consider the Single-Awakening problem from the previous post. We can reformulate it so that instead of Monday and Tuesday awakenings there were awakenings in two different rooms:

If the coin comes Heads, Sleeping Beauty is awakened in Room 1. If the coin comes Tails, Sleeping Beauty is either awakened in Room 1 with 50% chance, or in Room 2, otherwise.

Even if we make sure to awaken the Beauty at the same physical time on Heads and on Tails, the outcomes Heads&Room1, Tails&Room1 and Tails&Room2 are still mutually exclusive for the purposes of probability theory. Uncertainty about physical time is treated by probability theory exactly the same way as uncertainty about physical space. What really matters is that only one particular outcome happens in a particular iteration of the experiment.
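
A quick sketch of the rooms reformulation (my own illustration, using the probabilities stated above) confirms the point: exactly one of the three outcomes occurs per iteration, which is all that mutual exclusivity requires:

```python
import random
from collections import Counter

def singleAwakeningRooms():
    # exactly one of the three outcomes happens per iteration
    if random.random() >= 0.5:  # Tails
        return 'Tails&Room1' if random.random() >= 0.5 else 'Tails&Room2'
    return 'Heads&Room1'

n = 100_000
freq = Counter(singleAwakeningRooms() for _ in range(n))
# frequencies approach P(Heads&Room1) = 1/2, P(Tails&Room1) = P(Tails&Room2) = 1/4
for outcome, count in freq.items():
    print(outcome, count / n)
```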

Likewise, consider this elementary case:

If the coin comes Heads, you get five dollars on the next day.

Is getting five dollars mutually exclusive with the coin being Heads? Of course not! On the contrary, it's a direct consequence of it. The fact that there is a time delay is obviously irrelevant from a probability-theoretic standpoint.

So why the confusion with the Sleeping Beauty problem? Why do we suddenly decide that Tails&Monday and Tails&Tuesday are mutually exclusive?

## Effects of Amnesia

Some level of confusion is definitely added by the fact that Sleeping Beauty is experiencing amnesia between awakenings. I hope that no one seriously assumes that it somehow changes the statistical properties of the setting. But it may seem that the Beauty has to reason as if Monday and Tuesday are exclusive, anyway, based on the knowledge available to her.

After all, the Beauty doesn't know which awakening is which; she can't distinguish them better than chance. And probability is in the mind; it's about modelling uncertainty. Lacking any information, we default to an equiprobable prior.

Such reasoning would be correct if Beauty actually didn't have any information about her awakenings, except the fact that there are three distinct, though indistinguishable for her, possibilities. Then the situation would be isomorphic to the No-Coin-Toss problem, and using Elga's Model to describe Beauty's knowledge state would be appropriate.

But in our case, the Beauty is well aware of the setting of the experiment. She knows that her awakening routine is determined by the coin toss. She knows that the Tails&Monday and Tails&Tuesday awakenings follow sequentially. In spite of the memory loss, this information is fully available to the Sleeping Beauty. And as a good Bayesian, she has to use all the relevant available information and not delude herself into thinking that her current awakening happens independently from the next/previous one.

## Failure of Centred Possible Worlds

How come philosophers missed all this for decades? Well, they didn't exactly. As always, they figured out a clever-sounding reason to disregard these concerns. Citing a footnote from Bayesian Beauty by Silvia Milano:

In a sense, speaking about refining the state space may seem suspicious: after all, ws1 and ws2 both happen (sequentially) if the result of the coin toss is Heads. So, from an atemporal point of view, they are not mutually exclusive. However, here we are not interested in the atemporal viewpoint, but in the temporally located viewpoint that Beauty occupies at the time that she considers the problem. From this temporally located perspective, ws1 and ws2 are indeed mutually exclusive.

Suspicious is an understatement. The jump to such a conclusion immediately raises several serious questions.

How come our interest in something else is affecting the statistical properties of events? Why do we think that Beauty has to consider the problem from a "temporally located viewpoint"? Where does all this talk about temporal and atemporal points even come from, considering that probability theory doesn't have any special way to deal with physical time? Thankfully, the answers can be found in the next footnote of the same paper.

The indexical states within the state space Ω′ can be interpreted as centred worlds. Using centred worlds to capture the content of self-locating propositions is a standard move in the philosophical literature Lewis (1979).

As far as I can tell, David Lewis' Attitudes De Dicto and De Se, which Bayesian Beauty is citing, is the source of all our troubles. There is a lot I could say about this paper, arguing against its individual points one by one, and maybe someday I will; after all, I can definitely see how many confusions in anthropics originate from it.

But whatever its general philosophical merits, right now we are interested in something very specific: the justification of the probability theory manipulations that have since become a "standard move in the philosophical literature".

And there is not much of it. If, just like me, you hoped to see at least some math in the paper, you'd be disappointed. David Lewis' reasoning on this matter can be abridged to a simple "Why not?":

We can have beliefs whereby we locate ourselves in logical space. Why not also beliefs whereby we locate ourselves in ordinary time and space? We can self-ascribe properties of the sort that correspond to propositions. Why not also properties of the sort that don't correspond to propositions? We can identify ourselves as members of subpopulations whose boundaries follow the borders of the worlds. Why not also as members of subpopulations whose boundaries don't follow the borders of the worlds?

Why not? No reason! We can and we do have beliefs whereby we locate ourselves in ordinary time and space; whereby we self-ascribe properties that don't correspond to propositions; and whereby we identify ourselves as members of subpopulations
whose boundaries don't follow the borders of the worlds.

Don't get me wrong, that's a fair question that deserves a fair answer. Which at this point should be clear: by doing so you may end up contradicting probability theory.

And if you want to somehow expand probability theory, to lawfully be able to do this new exciting thing, well then, you are expected to engage with the math, state new axioms, prove theorems, and generally be more substantial than asking "Why not?"

But this wasn't the goal of David Lewis. He was writing a philosophy paper, not a math one. Maybe he expected that the formal mathematical justifications would easily be made later. Maybe he didn't even understand that he was contradicting probability theory as it is. The closest he comes to reasoning about mathematics is this:

Then it is interesting to ask what happens to decision theory if we take all attitudes as de se. Answer: very little. We replace the space of worlds by the space of centered worlds, or by the space of all inhabitants of worlds. All else is just as before. Whatever the points of the space of possibilities may be, we have probability distributions over the space and assignments of utility values to the points. For any rational agent at any time there is a pair of a probability distribution and a utility assignment. The probabilities change under the impact of his perception; the probabilities and utilities jointly govern his action. His degrees of belief at a time are got by taking the total probability of regions of the space; his degrees of desirability are got by integrating the point-by-point utilities, weighted by probability, over regions of the space. But since the space of possibilities is no longer the space of worlds, its regions to which degrees of belief and desirability attach are no longer

There is some ambiguity here.

Probability theory isn't based on the philosophical concept of "possible worlds" and the metaphysical reality that philosophers speculate about. It is based on a probability space (Ω, F, P), where Ω is the sample space of elementary outcomes, F is a sigma-algebra over it, and P is the measure function whose domain is F.

As long as the outcomes in Ω are mutually exclusive, everything goes. Once again, remember the two versions of the Single-Awakening problem - the one where the awakenings happen on different days and the one where they happen in different rooms.

The same mathematical model describes both of them. Philosophers may say that the first type of uncertainty is "temporal" while the second is "spatial". But for the sake of probability theory such semantics doesn't matter in the slightest. Math stays correct regardless of what you call your variables.
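
As a minimal illustration (hypothetical dictionaries of my own, using the Single-Awakening probabilities that Lewis' model assigns), the two variants are literally the same measure under a renaming of outcomes:

```python
# Single-Awakening problem: one probability space, two labelings
space_days  = {'Heads&Monday': 0.5, 'Tails&Monday': 0.25, 'Tails&Tuesday': 0.25}
space_rooms = {'Heads&Room1': 0.5, 'Tails&Room1': 0.25, 'Tails&Room2': 0.25}

def rename(outcome):
    # "temporal" labels -> "spatial" labels
    return outcome.replace('Monday', 'Room1').replace('Tuesday', 'Room2')

# renaming every outcome maps one space exactly onto the other
assert {rename(k): v for k, v in space_days.items()} == space_rooms
```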

So, if by "replace the space of worlds by the space of centered worlds" Lewis meant being able to talk about such cases of "temporal uncertainty" then of course nothing really changes. In this case no rewriting of probability theory is required and everything is quite trivial.

You are to awake on Monday and on Tuesday. You are currently awake. What is the probability that it's Monday?

You are to be put in Room 1 and then Room 2. You are currently in a room. What is the probability that it's Room 1?

Do you see the difference? Here both outcomes happen, not only one of them, so they are not mutually exclusive. In order to be able to apply probability theory we need to specify what currently means. Are we talking about the first moment in time? The second one? A random one? The one selected according to some specific rule? The answer is different, depending on how we remove this ambiguity, and there is no answer until we do it.

Another way to interpret "replace the space of worlds by the space of centered worlds" - the kind of interpretation that we can observe in all the failed attempts to model the Sleeping Beauty problem - is to replace the sample space Ω with a set Ω′ which consists of non-mutually-exclusive outcomes, and yet treat them as if they were mutually exclusive. To claim that in case of "temporal uncertainty" one always has to assume that the current time moment is a random time moment.

I'm not quite sure that philosophers generally understand the difference between these two cases. When you talk about them in philosophical terms, both are just "temporal uncertainty", so it's very easy to confuse them and decide that they are one and the same. And if we can use probability theory in one case, then indeed, why not in the other?

Well, here is why not. Because in the latter case you are making baseless assumptions and contradicting the axioms of probability theory, even if you say such magic words as "centred possible worlds", "de se attitudes" and "temporal perspective". And when this happens you are bound to encounter paradoxes. Which is exactly what happened.

In a saner world, people would treat the Sleeping Beauty paradox as a clear demonstration that Lewis' ideas do not work. A classic case of proof by contradiction. As soon as we accept the premise, we observe that a fair coin becomes unfair, or that we update a probability estimate despite receiving no new information, or that we gain the ability to predict the outcome of a fair coin better than chance.

In our world people just can't stop begging the question. They accept the framework and then assume that Sleeping Beauty is the kind of problem that has to be solved using it, that Beauty has to reason from a "temporal perspective", despite there being no justification for it, despite all the absurdity that follows. Some claim that the Beauty has to treat her Monday and Tuesday awakenings as two different people. Some go as far as to claim that we should reason as if our every observer moment is randomly sampled!

## Math vs Intuition

Why are people so eager to trust that we can use probability theory with such a temporal perspective? Once again, because it fits our naive intuitions.

If I forget what the current day of the week is in my regular life, well, it's only natural to start from a 1/7 prior per day and work from there. I can do it because the causal process that leads to me forgetting such information can be roughly modeled as a low-probability occurrence which can happen to me on any day.

It wouldn't be the case if I were guaranteed to also forget the current day of the week on the next six days as well, after I forgot it on the first one. This would be a different causal process, with different properties - causation between the forgettings - and it has to be modeled differently. But we do not actually encounter such situations in everyday life, and so our intuition is caught completely flat-footed by them.

Consider the assumption that on an awakening Sleeping Beauty learns that "she is awoken today". What does it actually mean? A natural interpretation is that Beauty is awoken on Monday xor Tuesday. It's easy to see why it's true for the Single-Awakening and No-Coin-Toss problems. In every iteration of the experiment, if the Beauty is awakened on Monday she is not awakened on Tuesday, and vice versa.

But it doesn't hold for the Sleeping Beauty problem, where individual awakenings do not happen independently. On Tails, both the Monday and the Tuesday awakenings happen, so the Beauty can't possibly learn that she is awoken on Monday xor Tuesday - this statement is wrong in 50% of cases. What the Beauty actually learns is that "she is awoken at least once" - on Monday and (Tuesday or not Tuesday).
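
A small sketch (my own illustration, not code from the post) makes this concrete: per iteration of the experiment, "Monday xor Tuesday" fails exactly when the coin is Tails, while "awoken at least once" always holds:

```python
import random

def awakenings():
    tails = random.random() >= 0.5
    monday = True    # the Monday awakening always happens
    tuesday = tails  # the Tuesday awakening happens only on Tails
    return monday, tuesday

n = 100_000
results = [awakenings() for _ in range(n)]

xor_rate = sum(m != t for m, t in results) / n            # fails on every Tails iteration
at_least_once_rate = sum(m or t for m, t in results) / n  # holds in every iteration
print(xor_rate)            # around 0.5
print(at_least_once_rate)  # exactly 1.0
```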

And yet, the assumption that the Beauty should be able to reason about her individual awakenings feels so natural, so intuitive! It doesn't even seem right to question it. We are so used to situations where different physical moments in time can lawfully be modeled by probability theory that we assume it always has to be the case. That math has to care about our identity, our first-person perspective, our feeling of the passage of time.

But math doesn't care. It's just a truth preserving mechanism and nothing more. The moment you've bent the rules, even a tiny bit, even when it really feels like the rules have to be different, you are not preserving the truth anymore. Not arriving at the correct conclusion. There is no arguing around that. Just give up, find the mistake and fix it.

## Correct Model for the Sleeping Beauty Problem

Now, with all these things considered, we are finally ready to construct the correct model.

As previous and future awakenings are not independent, we can't use awakenings as elementary outcomes. But we can talk about the experiment as a whole. And as soon as we do it - everything adds up to normality.

Previously we were confused how P(Monday) can be equal to 1. We thought that it means that P(Tuesday) = 0 - an obviously wrong statement.

But now we understand that Monday and Tuesday are not mutually exclusive events. More specifically, events "This awakening happens during Monday/Tuesday" are ill-defined, but events "In this experiment Monday/Tuesday awakening happens" have coherent probabilities.

We can talk about conditional probabilities and intersections between Monday and Tuesday events:

P(Tuesday|Monday) = P(Tuesday) = 1/2

P(Monday|Tuesday) = 1

P(Monday&Tuesday) = P(Tuesday) = 1/2

Meanwhile, the sample space for the coin toss is obvious:

Ω_coin = {Heads, Tails}, with P(Heads) = P(Tails) = 1/2

We can combine it with the Monday event. Monday awakening happens regardless of the result of the coin toss, so:

P(Heads&Monday) = P(Heads) = 1/2

P(Tails&Monday) = P(Tails) = 1/2

Likewise we can combine the coin toss and the Tuesday event. Tuesday awakening happening in the experiment means that the coin is definitely Tails, and vice versa:

P(Tails|Tuesday) = P(Tuesday|Tails) = 1

P(Heads&Tuesday) = 0, P(Tails&Tuesday) = P(Tails) = 1/2

Which, by the way, is the reason why Elga's proof for thirdism doesn't work.

By combining all the sample spaces together:

Ω = {Heads&Monday, Tails&Monday&Tuesday}, with P(Heads&Monday) = P(Tails&Monday&Tuesday) = 1/2

Which is equal to the sample space of the coin toss, just with different names for the outcomes. And no wonder - the outcome of the coin is the only unknown element in the setting.

Heads and Heads&Monday are the exact same outcome. Likewise, Tails, Tails&Monday, Tails&Tuesday and Tails&Monday&Tuesday are all the same.

On every awakening the Beauty always learns that she is awake and that Monday awakening has happened, just as she has initially expected.

Whether Monday is today or was yesterday is irrelevant - it is the same outcome, anyway. There is no new information and so the Beauty doesn't update her probability estimate for the coin toss outcome.

Once again with an "anthropic problem", as long as we are actually following probability theory faithfully, without trying to manipulate it in favor of our flawed intuitions, everything is absolutely obvious. No fair coin suddenly becoming unfair. No precognitive powers. No contradiction of conservation of expected evidence. No mystery to write philosophical papers about for decades.

And no two valid answers either. We can now strictly say that a per-experiment scoring rule produces valid probabilities for the Sleeping Beauty setting, while a per-awakening one does not, because it counts the same elementary outcome twice.

The correct model represents the only lawful way to reason about the Sleeping Beauty problem without smuggling unjustified assumptions that the current awakening is randomly sampled. It passes the statistical test with flying colors, as it essentially reimplements the sleepingBeauty() function:

```python
def modelCorrect():
    if random.random() >= 0.5:  # result of the coin toss
        return ['Tails&Monday', 'Tails&Tuesday']
    else:
        return ['Heads&Monday']

ListC = []
for i in range(n):
    outcome = modelCorrect()
    ListC += outcome
```
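
The per-experiment and per-awakening scoring rules can be compared directly on such a list. A self-contained sketch (redefining the function so it runs on its own):

```python
import random

def modelCorrect():
    if random.random() >= 0.5:  # Tails
        return ['Tails&Monday', 'Tails&Tuesday']
    return ['Heads&Monday']

n = 100_000
experiments = [modelCorrect() for _ in range(n)]

# per experiment: every elementary outcome is counted once
per_experiment = sum(exp[0].startswith('Heads') for exp in experiments) / n

# per awakening: the single Tails outcome is counted twice
awakenings = [a for exp in experiments for a in exp]
per_awakening = sum(a.startswith('Heads') for a in awakenings) / len(awakenings)

print(per_experiment)  # around 1/2
print(per_awakening)   # around 1/3
```

The per-awakening rate lands near 1/3 only because double-counting inflates the Tails outcome, not because the coin is any less fair.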

It also correctly deals with every betting scheme, be it per day or per awakening. And in the next post we will specifically explore the topic of betting in regards to the Sleeping Beauty problem.

The next post in the series is Beauty and the Bets.

## Comments

Wei Dai:

I think I had this same (or similar) train of thought up to this point, and it was one of the paths that eventually led me to UDT:

1. Indexical uncertainty / centered possible worlds seems incompatible with probabilities.
2. In a big universe/multiverse (like the many-worlds interpretation, an infinite universe, or Tegmark level IV), all empirical uncertainty (as opposed to logical uncertainty) is indexical uncertainty, because everything that logically can happen does happen somewhere, just with different measure.
3. Given this, it seems impossible to update (non-trivially) on observations, because "I observe X" seemingly just tells me that in this universe/multiverse some observer with my memories observed X, but I already knew that.
4. How am I supposed to make decisions then? One idea (named UDT1.1) is to think, "What input/output map, if I were to implement it, would optimize the universe/multiverse according to my values?" Then when I have to make a decision, feed my current memories/observations into this input/output map to get the decision.

I'm curious if you're going somewhere else instead, since I'm far from satisfied with UDT, including as a solution to anthropic reasoning.

Ape in the coat:
Yes, I see the similarity. I'm specifically trying not to touch decision theory unless I absolutely have to, solving problems without invoking utilities. My core hypothesis is that whenever people try to solve "anthropic problems" they make mistakes in the basics and then build a tower of assumptions on a poor foundation. That's why I'm going to talk about betting in Sleeping Beauty only as a next step, after the problem itself is already solved in terms of probability theory.

My current stance can be described as:

1. Indexical uncertainty / centered possible worlds often contradicts basic probability theory.
2. So indexical uncertainty / centered possible worlds is total nonsense and we should stop trying to reason this way and go back to the basics.
3. When we actually follow probability theory, everything adds up to normality and all confusing anthropic problems reduce to basic probability theory ones.
4. Eventually we will be able to build the correct decision theory on this sound foundation.

So I think I disagree with your 2. If you can't treat something as logical uncertainty then you can't reason about it. Period. It doesn't mean that we can't use probability theory in many worlds. I can reason about a quantum coin toss exactly the same way I would reason about a logical coin toss.
Wei Dai:
I guess I'm not seeing how everything adds up to normality in big worlds. For example, suppose Sleeping Beauty is taking place in a quantum universe, where the coin lands both heads and tails (in different Everett branches), and due to quantum tunneling there is a branch with a tiny measure where Beauty wakes up on Tuesday even though the coin landed heads. And then the only coherent probabilities (that do not involve indexicals or centered possible worlds) seem to be P(Monday)=P(Tuesday)=P(Heads)=P(Tails)=P(Heads|Tuesday)=P(Tails|Tuesday)=1, where Monday means "Monday awakening happened somewhere in the universe" and so on. But I don't think many people would call this "normal" or useful? Do you see a way to get around this?
Ape in the coat:
Hmm. So instead of two outcomes we have three: Heads&!Tuesday, Tails&Tuesday and Heads&Tuesday, where the last one is very improbable. Or do we also have a very improbable Tails&!Tuesday as well? Still no update on awakening or on knowing about Monday. But now learning about a Tuesday awakening is a tiny bit less strong evidence in favor of Tails.

What about "Monday awakening happens in this Everett branch"? The problem seems to be equivalent to picking a random branch from several possible ones, according to their known probabilities. Which is modelled as logical uncertainty without much problem, as far as I see.

The point isn't that everything that can be treated as indexical uncertainty is automatically invalid. The point is that everything that can't be treated as logical uncertainty is. We should simply abolish the category of "indexical uncertainty" as unhelpful and keep using standard probability theory wherever we lawfully can.

I suppose you encounter problems if you think that branching has to be modelled the same way as memory loss? That if we can't lawfully reason about separate awakening states of the Beauty, we shouldn't be able to reason about separate Beauties in different branches? But I don't see why it has to be the case. Monday and Tuesday awakenings in one branch are causally connected. Different branches are not.
Wei Dai:
What if, as part of the thought experiment, we assume that the people running Sleeping Beauty make sure that Monday and Tuesday awakening are causally disconnected to the best of their abilities? (I.e., they try to ensure that nothing that Beauty does during each awakening can affect the world outside the experiment or persist to the next awakening.) Would that change your answers? (I.e., why can't we the define P(Monday) to mean P(Monday awakening happens in this causal bubble) and so on?) Maybe you reply that they can't enforce causal disconnectedness between Monday and Tuesday with certainty, so Beauty still has to treat them as causally connected. But then we also can't be sure that different Everett branches are causally disconnected with absolute certainty (that's just what our current best theory says), so the two situations still seem analogous. See also "preferential interaction" from indexical uncertainty and the Axiom of Independence: Subsequent to writing that post, people also came up with the idea of "acausal interactions" as in acausal trade and extortion, which similarly violates the axiom of independence.
Ape in the coat:
As long as previous and next awakenings on Tails are not statistically independent, it wouldn't. This is what matters here. By the definition of the experimental setting, when the coin is Tails, what Beauty does on Monday - awakens - always affects what she does on Tuesday - awakens the second time. Sequential events are definitely not mutually exclusive and thus can't be elements of a sample space.

Now, we can remove this causality/correlation by doing a different experiment, where some number of SB experiments are simulated, then we get a list of awakenings, select a random one of them, and put the Beauty through it. Then the thirder model, which implicitly assumes that the current awakening is randomly sampled, would be correct.

The whole point of acausal trade is that it's, well, acausal. Branches are mutually exclusive in probability-theoretic terms, and yet you may choose to care about a different branch in terms of your utilities. This is not a problem for probability theory, because it doesn't have utilities and the complications they add. Which is, once again, why it's helpful to disentangle probability theory from decision theory and first solve the former before engaging with the latter.
Signer:
But who does the picking? The problem is that all branches exist, so objective statistics shows them always existing simultaneously. On the fundamental level, they are. And if you are fine with approximations, then you can treat Elga's model as approximation too.
Ape in the coat:
The math stays the same, regardless. That's the whole point. I don't see how it's a problem. We deal with such cases all the time in probability theory. Suppose there are n students and n exam question sheets. Every sheet may include several questions and some questions are asked more often than others. "Objective statistics" shows that all the sheets are spread among students and all the questions are asked. And yet there is a meaningful way to say that for a particular student there is a specific probability to receive a particular question in the exam.

I don't think I understand what you mean here. Can you elaborate? I'm talking about the difference in causal graphs.

Or, it works as an approximation, to some degree, no argument here. But what's the point in using an imperfect approximation when there is a better model?
Signer:
I mean that different branches are causally connected - there is some level of interference between them. In practice you would be approximating it differently - the coin toss causing all branches, as opposed to Monday causing Tuesday, yes. But it's basically the same causal graph as if we copy Beauty instead of reawakening her, so I don't get why such causality matters.

You said in another comment that copying changes things, but I assume (from the OP) that you would still say that Elga's model is not allowed, because both rooms exist simultaneously? Well, branches also exist simultaneously.

It doesn't - if all branches exist, then P of everything is 1. Even if you believe in Born probabilities, they are probably indexical too.

...or do you accept Elga's model for copies and it is really all about awakenings being sequential? Why wouldn't the same arguments about changing the probability of questions apply here? Or "if two models are claiming that you get different amounts of knowledge while observing the same data, one is definitely wrong"?
Ape in the coat:
Can't we model interference as separate branches? My QM is a bit rusty; what kind of causal behaviour is implied? It's not that we can actually jump from one branch to the other.

Simultaneity of existence has nothing to do with it. Elga's model is wrong here because, unlike in Sleeping Beauty, learning that you are in Room 1 is evidence for Heads, as you could not be sure to find yourself in Room 1 no matter what. Here Lewis' model seems a better fit.

I think some cloning arrangement can work according to Elga's model; it fully depends on the specifics of the cloning procedure. Whether the process that led to your existence can be correctly approximated as random sampling or not. Though I need to think more about it, as these cases still feel a bit confusing to me. There definitely are settings where SIA-like reasoning is valid, like when there is a limited set of souls that are randomly picked to be instantiated in bodies; it just doesn't really seem to be the way our universe works.
1Signer
Don't know the specifics, as usual, but as far as I know, the amplitudes of the branch would be slightly different from what you get by evolving this branch in isolation, because the other branch would also spread everywhere. The point is just that they all exist, so, as you say, why use an imperfect approximation? I meant the experiment where you don't know which room it is, but anyway - wouldn't Lewis' model fail the statistical test, because it doesn't generate both rooms on Tails? I don't get why modeling coexistence in one timeline is necessary, but coexistence in space is not. What do you mean by "can be correctly approximated as random sampling"? If all souls are instantiated, then Elga's model still wouldn't pass the statistical test.
1Ape in the coat
I'm afraid I won't be able to address your concerns without the specifics. Currently I'm not even sure that they are true. According to Wei Dai in one of the previous comments, our current best theory claims that Everett branches are causally disconnected, and I'm more than happy to stick to that until our theories change. If you participate in a Fissure experiment you do not experience being in two rooms on Tails. You are in only one of the rooms in any case, and another version of you is in the other room when it's Tails. You can participate in a thousand Fissure experiments in a row and accumulate a list of rooms and coin outcomes corresponding to your experience, and I expect them to fit Lewis' model: 75% of the time you find yourself in Room 1, 50% of the time the coin is Heads. Because coexistence in space happens separately to different people who are not causally connected, while coexistence in one timeline happens to the same person, whose past and future are causally connected. I really don't understand why everyone seems to have so much trouble with such an obvious point.  Suppose in Sleeping Beauty it's Tails and the participant eats a big meal on Monday. On Tuesday they will likely need to visit the toilet as a result. But in Fissure on Tails, if the person in one room eats a big meal, it doesn't affect in any way the person in the other room.  Probability is in the map. And this map may or may not correspond to the territory. When someone throws a coin it can usually be treated as a random sample from two outcomes. But it's not some inherent law of the universe about coin tossing. It's possible to make a robot arm that throws coins in such a way as to always produce Tails.
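The claimed frequencies are easy to check with a quick simulation in the style of the post's code. This is only a sketch of one reading of the setup (the function and variable names are mine), assuming the tracked thread of experience continues as a uniformly random one of the two copies on Tails:

```python
import random

def fissure():
    # One run of the Fissure experiment, following a single thread of
    # experience. On Heads "you" wake in Room 1; on Tails you are split,
    # and the tracked thread continues as one of the two copies
    # (modelled here, as an assumption, by a uniformly random choice).
    if random.random() < 0.5:
        return 'Heads', 'Room 1'
    return 'Tails', random.choice(['Room 1', 'Room 2'])

n = 100_000
results = [fissure() for _ in range(n)]
heads_share = sum(coin == 'Heads' for coin, room in results) / n
room1_share = sum(room == 'Room 1' for coin, room in results) / n
print(heads_share)  # ≈ 0.50
print(room1_share)  # ≈ 0.75
```

These long-run frequencies are exactly what Lewis' model assigns: P(Heads&Room 1) = 1/2, P(Tails&Room 1) = P(Tails&Room 2) = 1/4.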
1Signer
They are approximately disconnected according to our current best theory. Like your clones in different rooms are approximately disconnected, but still gravitationally influence each other. Still don't get how it's consistent with your argument about the statistical test. It's not about multiple experiments starting from each copy, right? You would still object to simulating multiple Beauties starting from each awakening as random? And would be ok with simulating multiple Fissures from one original as random? I understand that there is a difference. The trouble is with the justification for why this difference is relevant. Like, you based your modelling of Monday and Tuesday as both happening on how we usually treat events when we use probability theory. But the same justification is even more obvious when both the awakening in Room 1 and the awakening in Room 2 happen simultaneously. Or you say that the Beauty knows that she will be awake both times, so she can't ignore this information. But both copies also know that they both will be awake, so why can they ignore it? Is this what it is all about? It depends on the definition of "you". Under some definitions the Beauty also doesn't experience both days. Are you just saying that the distinction is that no sane human would treat different moments as distinct identities?
1Ape in the coat
I think this level of accuracy is good enough for now. It very much is. Every copy is its own person who can then participate in whatever experiments they choose independently from the other copy.  I don't see how it is possible in principle. If the Beauty is in the middle of an experiment, how can she start participating in another one without breaking the setting of the current one? In what sense is she the same person anyway, if you treat any waking moment as a different person? No, they are not. Events that happen to the Beauty on Monday and Tuesday are not mutually exclusive, because they are sequential. On Tails, if an awakening happened to her on Monday it necessarily means that an awakening will happen to her on Tuesday in the same experiment. But the same argument isn't applicable to Fissure, where awakenings in different rooms are not sequential and truly are mutually exclusive. If you are awakened in Room 1 you definitely are not awakened in Room 2 in this experiment, and vice versa. Well, if there was some probability-theoretic reason why copies could not reason independently, then that would be the case. This is indeed an interesting situation and I'll dedicate a separate post or even multiple of them to a comprehensive analysis of it. Of course it depends on definitions. Everything does. But not all definitions are made equal. Some carve reality at its joints and some do not. Some allow us to construct theories that add up to normality and some - theories that lead to bizarre conclusions. Well, it's a bit too late for that, because there definitely are otherwise sane people who are eager to bite the bullet, no matter how ridiculous.  What I'm saying is that to carve reality at its joints we need to base our definitions on the causal graphs. And as an extra bonus, it indeed seems to fit the naive intuition of personal identity and adds up to normality.
1Signer
Somehow every time people talk about joints, it turns out to be more about naive intuitions of personal identity than reality^^. If you insist on Monday and Tuesday being in the same week, then we can break that by backing up her memory: after each awakening we save her memory and schedule the memory loading and a new experiment for a later free week. Or we can start a new experiment after each awakening and schedule the Tuesdays for later. Does either of these allow you to change your model? You can treat every memory sequence as a different person. I'm not saying the arguments are literally identical. Your argument is:

1. The awakening on Tuesday happens always and only after the awakening on Monday.
2. Therefore !(P(Monday) = 0 & P(Tuesday) = 1) & !(P(Monday) > 0 & P(Tuesday) < 1).
3. Therefore they are not exclusive.

The argument about copies is:

1. The awakening in Room 1 always happens and the awakening in Room 2 always happens.
2. Therefore !(P(Room 1) < 1) & !(P(Room 2) < 1).
3. Therefore they are not exclusive.

Why doesn't the second one work? I agree, some are more preferable. Therefore probabilities depend on preferences.

Which is equal to the sample space of the coin toss, just with different names for the outcomes.

Well then by your arguments it can't be describing the Sleeping Beauty problem, when it is a much better match for the Just a Coin Toss problem.

Whether Monday is today or was yesterday is irrelevant—it is the same outcome, anyway.

But what if you actually want to know?

Well, here is why not. Because in the latter case you are making baseless assumptions and contradicting the axioms of probability theory.

Again, Elga’s model doesn't contradict the axioms ...

1Ape in the coat
My argument isn't just "There is another problem to which a model is applicable, therefore it is not applicable to Sleeping Beauty". It is "When we apply a model to Sleeping Beauty we can notice some weirdness going on, while when we apply it to a different problem, this weirdness disappears - therefore it is not applicable to Sleeping Beauty". Elga's model doesn't have an explanation for the change of probabilities of the coin, and it doesn't pass the statistical test. Thus it's not a good match for the Sleeping Beauty problem. My model doesn't have such issues. It is just as good a match for the Sleeping Beauty problem as for the Just a Coin Toss problem. Too bad. Math doesn't always let us do the things that we want, which is what makes it useful in the first place.  It does not when it's solving problems where Heads&Monday, Tails&Monday and Tails&Tuesday are mutually exclusive.  But when applied to the Sleeping Beauty problem, where Tails&Monday and Tails&Tuesday happen sequentially and, therefore, are not mutually exclusive, it does, because by definition a sample space has to consist of mutually exclusive events. We are supposed to finally stop ignoring the mathematical fact that we can't - for which, I think, I've given a quite comprehensive explanation. Have I missed something? What is the justification that makes you believe that we should be able to do it, despite everything that I've written in this post?
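The order test being referenced can be condensed into a few lines, in the spirit of the code from the Statistical Analysis section (the generators below are my own simplified versions, not the post's exact code):

```python
import random

def sleeping_beauty_list(n):
    # Ordered awakenings from n runs of the actual experiment: on Tails,
    # the Tuesday awakening always directly follows Monday's.
    out = []
    for _ in range(n):
        if random.random() < 0.5:
            out.append('Heads&Monday')
        else:
            out += ['Tails&Monday', 'Tails&Tuesday']
    return out

def elga_list(n):
    # Elga-style model: each awakening drawn independently from three
    # equiprobable outcomes, with no order between them.
    outcomes = ['Heads&Monday', 'Tails&Monday', 'Tails&Tuesday']
    return [random.choice(outcomes) for _ in range(n)]

def tuesday_follows_monday(lst):
    # Fraction of Tails&Monday entries immediately followed by Tails&Tuesday.
    idx = [i for i, x in enumerate(lst[:-1]) if x == 'Tails&Monday']
    return sum(lst[i + 1] == 'Tails&Tuesday' for i in idx) / len(idx)
```

On the Sleeping Beauty list the fraction is exactly 1 by construction, while on the i.i.d. Elga-style list it comes out near 1/3 - which is how the statistical test tells the model apart from the experiment it is supposed to describe.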
3Signer
There is no change of probabilities, because there are no probabilities without outcome space. And why "the model doesn't represent "today is Monday"" is not weird, when that was what you wanted to know in the first place? Wouldn't it fail the statistical test if we simulated only subjective experience of Beauty? But it's not math, it's your objectivity biased "no-weirdness" principle. Without it you can use Elga's model to get more knowledge for yourself in some sense. It's not a theorem of probability theory that Sleeping Beauty is a problem, where Tails&Monday and Tails&Tuesday happen sequentially and, therefore, are not mutually exclusive. You've shown that there is a persuasive argument for treating Monday and Tuesday as both happening simultaneously, that it is possible to treat them like this. But you haven't shown that they definitely can't be treated differently.
2Ape in the coat
Are you saying that the probability of Heads is in principle not defined before the Beauty awakens in the experiment? Or just that it can't be defined if we assume that Elga's model is true? Because if it's the latter - it's not a point in favor of Elga's model. Because such an event doesn't have a well-defined probability in the setting of Sleeping Beauty. I've shown it in the Math vs Intuition section, but probably this wasn't clear enough. Let's walk through it once more. Try rigorously specifying the event "today is Monday" in the Sleeping Beauty problem. What does "today" mean?  For example, in the No-Coin-Toss problem it means Monday xor Tuesday, or, in other words, it's a variable from the set: today ∊ {Monday, Tuesday}. But in Sleeping Beauty we can't define "today" this way, because on Tails both Monday and Tuesday happen. The variable "today" would have to take two different values during the experiment. Or you may define today as Monday or Tuesday. But then the event "today is Monday" always happens and P(Monday)=1. The main question of the Sleeping Beauty problem is what her credence for Heads should be when she is awakened while participating in the experiment. This is the question my model is answering. People just mistakenly assume that it means "What is your credence specifically today", because they think that "today" is a coherent variable in Sleeping Beauty, while it's not. We are simulating only the subjective experience of the Beauty. We are not adding to the list Heads&Tuesday, during which the Beauty is asleep, for instance - only the states when she is awake and thus able to subjectively experience whatever is going on. And these subjective experiences still exist in the setting of the experiment, where Tuesday follows Monday. I suppose you mean something else by "subjective experience of Beauty"? What is it? Are you under the impression that the Beauty subjectively experiences her awakenings in random order due to the amnesia? I deal with this argument in the Effects of Amnesia section.
1Signer
It can’t be usefully defined if we assume that Elga’s model is true. I agree that it is not a point in favor. Doesn't mean we can't use it instead of assuming it is true. What do you mean by "rigorously"? "Rigorously" as "using probability theory" it is specified as Monday in Elga's model. "Rigorously" as "connected to reality" today is specified as Monday on physical Monday, and Tuesday on physical Tuesday. We do! You are using wrong "happens" and "then" in the definition - the actual definition uses words connected to reality, not parts of probability theory. It's not a theorem of probability theory, that if event physically happens, it has P > 0. And "awakening happens" is not even directly represented in Elga’s model. Yes, it's all unreasonable pedantry, but you are just all like "Math! Math!". On wiki it's "When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?" - notice the "ought"^^. And the point is mostly that humans have selfish preferences. Nah, I was just wrong. But... Ugh, I'm not sure about this part. First of all, Elga’s model doesn't have "Beauty awakened on Monday" or whatever you simulate - how do you compare statistics with different outcomes? And what would happen, if Beauty performed simulation instead of you? I think then Elga's model would be statistically closest, right? Also what if we tell Beauty what day it is after she tells her credence - would you then change your simulation to have 1/3 Heads? No, that's the point - it means they are using different definitions of knowledge. You can use Elga’s model without assuming randomness of an awakening, whatever that means. You'll need preferred definition of knowledge instead, but everyone already has preferences. "Default" doesn't mean "better" - if extra assumptions give you what you want, then it's better to make more assumptions.
2Ape in the coat
No disagreement here, then. Indeed, we can use wrong models as some form of approximation; we just have to be aware of the fact that they are wrong and not insist on their results when they contradict the results of correct models. As in: what do you mean by "today" in logical terms? I gave you a very good example of how it's done with the No-Coin-Toss and Single-Awakening problems. It's not unreasonable pedantry. It's an isolated demand for rigor on your part.  I do not demand from Elga's model anything my model doesn't do. I'm not using vaguer language while describing my model than the language I used while describing Elga's.  You, on the other hand, in an attempt to defend it, suddenly pretend that you don't know what "event happens" means and demand a formal proof that events that happen have probability more than zero. We can theoretically go this route. Wikipedia's article on probability space covers the basics. But do you really want to lose more time on obvious things that we do not actually disagree about? First awakened? Then even Elga's model agrees that P(Heads|Monday)=1/2. No, the question is about how she is supposed to reason any time she is awakened, not just the first one. Thank you for noticing it. I'd recommend taking some time to reflect on the new evidence that you didn't expect. What else does the event "Monday" that has 2/3 probability mean, then? According to Elga's model there are three mutually exclusive outcomes: Heads&Monday, Tails&Monday, Tails&Tuesday, corresponding to three possible awakening states of the Beauty. What do you disagree with here? I do not understand what you mean here. Beauty is part of the simulation. Nothing prevents any person from running the same code and getting the same results. Why would it? The simulation shows which awakening the Beauty is going through on a repetition of the experiment as it is described, so that we can investigate the statistical properties of these awakenings. How is definition of knowledge r...
1Signer
It means "today is Monday". I mean, what will happen if Beauty runs the same code? Like you said, "any person" - what if this person is Beauty during the experiment? If we then compare combined statistics, which model will be closer to reality? My thinking is: because then Beauty would experience more Tails, and the simulation would have to reproduce that. The point of using probability theory is to be right. That's why your simulations have persuasive power. But a different definition of knowledge may value average knowledge of the awake moments of Beauty instead of knowledge of an outside observer.
1Ape in the coat
And Beauty is awakened, because all the outcomes represent Beauty's awakened states. Which is "Beauty is awakened today, which is Monday", or simply "Beauty is awakened on Monday", just as I was saying. Nothing out of the ordinary. The Beauty will generate a list with the same statistical properties. Two lists if the coin is Tails. The simulation already reproduces that. Only 1/3 of the elements of the list are Heads&Monday. You should probably try running the code yourself to see how it works, because I have a feeling that you are missing something.
1Signer
Oh, right, I missed that your simulation has 1/3 Heads. Thank you for your patient cooperation in finding mistakes in your arguments, by the way. So, why is it ok for a simulation of an outcome with 1/2 probability to have 1/3 frequency? That sounds like a more serious failure of the statistical test. I imagined that the Beauty would sample just once. And then, if we combine all samples into a list, we will see that if the Beauty uses your model, the list will fail the "have the correct number of days" test. They are not the same thing? The first one is false on Tuesday. (I'm also interested in your thoughts about copies in the other thread.)
1Ape in the coat
There are only two outcomes and both of them have 1/2 probability and 1/2 frequency. The code saves awakenings in the list, not outcomes. People mistakenly assume that three awakenings mean three elementary outcomes. But as the simulation shows, there is an order between awakenings, and so they can't be treated as individual outcomes. Tails&Monday and Tails&Tuesday awakenings are parts of the same outcome. If this still doesn't feel obvious, consider this. You have a list of Heads and Tails. And you need to distinguish between two hypotheses. Either the coin is unfair and P(Tails)=2/3, or the coin is fair but whenever it came up Tails, the outcome was written into the list twice, while for Heads - only once. You check whether the outcomes are randomly spread or whether pairs of Tails follow together. In the second case, even though the frequency of Tails in the list is twice as high as that of Heads, P(Tails)=P(Heads)=1/2.
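This pairing check is straightforward to code up. A minimal sketch (the function names are mine), using the observation that under the doubling hypothesis Tails entries always arrive in adjacent pairs, so every run of consecutive Tails has even length:

```python
import random

def unfair_list(n):
    # Hypothesis A: unfair coin, P(Tails) = 2/3, each toss written once.
    return ['Tails' if random.random() < 2 / 3 else 'Heads' for _ in range(n)]

def doubled_fair_list(n):
    # Hypothesis B: fair coin, but every Tails is written into the list twice.
    out = []
    for _ in range(n):
        if random.random() < 0.5:
            out += ['Tails', 'Tails']
        else:
            out.append('Heads')
    return out

def tails_frequency(lst):
    return lst.count('Tails') / len(lst)

def runs_of_tails_all_even(lst):
    # Under hypothesis B every run of consecutive Tails must be even-length.
    run = 0
    for x in lst + ['Heads']:  # sentinel flushes a trailing run
        if x == 'Tails':
            run += 1
        else:
            if run % 2:
                return False
            run = 0
    return True
```

Both kinds of list show a Tails frequency near 2/3, but in practice only the doubled fair-coin list passes the even-runs check (a genuinely unfair coin produces an odd-length run almost surely at any decent sample size) - which is what licenses concluding P(Tails)=1/2 despite the 2/3 frequency.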
1Ape in the coat
On reflection, I'd like to clarify a thing. I do talk about it in the post, but I suppose some people may still be missing it. When I say that "we can't use probability theory with subjective decomposition of outcomes", I do not mean that we can never do it, or that the Bayesian interpretation of probability theory is wrong.  The point is that in the general case the fact that you subjectively perceive something doesn't necessarily mean that it can be treated as an elementary outcome in a probability-theoretic model. It is often the case, but not all the time, and this is the sort of confusion that leads us astray with the Sleeping Beauty problem.
[-]Ben30

This is a very nice post, that has clarified my understanding a lot.

Previously I thought that it was just "per experiment" vs "per awakening" being underspecified in the problem. But you are completely correct that when we consider "per awakening" then it's not really acceptable to treat it as random when consecutive awakenings are correlated.

I assume that the obvious extension to some of the anthropic thought experiments where I am copied also holds?  For example: a coin is flicked, on heads I wake up in a room, on tails 1E6 identical copies of me wak...

2Ape in the coat
You are most welcome! Broadly, yes. I've been briefly talking about such cases in this post and the next one. But be mindful: the experiment where you may or may not be separated into multiple people is not exactly isomorphic to Sleeping Beauty, despite what the traditional discourse about anthropics might make you think. In Sleeping Beauty on Tails the same participant goes through both awakenings, while here different people experience the awakenings in different rooms. The causal graphs are different. So you actually are able to reason about a specific instance of you in a particular room.  Suppose that on Heads you awaken in Room 1, but on Tails you are split into two people who awaken in Room 1 and Room 2. Being awakened is no evidence one way or the other. But knowing that you are in Room 1 is evidence in favor of Heads. Here Lewis' model actually is a good fit.
[-]robo31

I don't understand what return ['Tails&Monday','Tails&Tuesday'] and ListC += outcome mean.  Can you explain it more?  Perhaps operationalize it into some specific way Sleeping Beauty should act in some situation?

For example, if Sleeping Beauty is allowed to make bets every time she is woken up, I claim she should bet as though she believes the coin came up with probability 1/3 for Heads and 2/3 for Tails (≈because over many iterations she'll find herself betting in situations where Tails is true twice as often as in situations where Heads...

4Ben
Whether Beauty should bet each time she wakes up depends very critically on the rules of the wager. Some examples:

Rule 1: Independent awakening bets: On each awakening Beauty can bet $1 on the outcome of the coin. The bets at each awakening all stand independently. - In this case she should bet as if there was a 2/3 chance of Tails. After 100 coin tosses she has awoken 150 times, and for 100 of them it was Tails.

Rule 2: Last bet stands: On each awakening Beauty can bet $1 on the outcome of the coin. Only Beauty's final decision for each toss is taken into account; for example, any bet she makes on Tuesday replaces anything decided on Monday. - She treats it as 50/50.

Rule 3: Guess Right You Live (GRYL): On each awakening Beauty must guess the coin's outcome. If she has made no correct guess by the end of the run, she is killed. - For a fair coin she picks randomly between Heads and Tails, but for an unfair coin it's a bit weird: https://www.lesswrong.com/posts/HQFpRWGbJxjHvTjnw/?commentId=BrvGnFvpK3fpndXGB

Rule 4: Guess Wrong You Die (GWYD): On each awakening Beauty must guess the coin's outcome. If she has made any incorrect guesses by the end of the run, she is killed. - She should pick either Heads or Tails beforehand and always pick that. Picking Heads is just as good as picking Tails.

The above set gives one "thirder's game", two "halfer's games" and one that I can't classify (GRYL). She will certainly find herself betting in twice as many Tails situations as Heads ones (hence the Rule 1 solution), but whether that should determine her betting strategy depends on the rules of the bet. As Ape in the coat has said, Rule 1 can be interpreted as "50/50 coin, but your deposit and winnings are both doubled on Tails" (because on Tails Beauty makes two wagers).
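Rules 1 and 2 can be compared directly with a small simulation. A sketch under my own naming, assuming Beauty always bets $1 on Tails at even odds:

```python
import random

def per_toss_profit(n, rule):
    # Average profit per coin toss when Beauty always bets $1 on Tails
    # at even odds on every awakening.
    #   rule='independent': every awakening's bet stands (Rule 1).
    #   rule='last':        only the final bet per toss counts (Rule 2).
    total = 0
    for _ in range(n):
        tails = random.random() < 0.5       # result of the coin toss
        awakenings = 2 if tails else 1      # Tails means two awakenings
        counted = awakenings if rule == 'independent' else 1
        total += counted * (1 if tails else -1)
    return total / n

print(per_toss_profit(100_000, 'independent'))  # ≈ +0.5: betting Tails pays
print(per_toss_profit(100_000, 'last'))         # ≈ 0: the bet is fair
```

Under Rule 1, always betting Tails earns about +$0.50 per toss, which matches the "50/50 coin with doubled stakes on Tails" reading; under Rule 2 it is break-even, so the betting argument by itself doesn't settle the credence question.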
3Daniel Munro
It seems to me that Rule 1 is a direct translation of the Sleeping Beauty problem into a betting strategy question, while the other rules correspond to different questions, where a single outcome depends on some function of the two guesses in the case of Tails. Doing the experiment 100 times under that rule, Beauty will have around 150 identical awakening experiences. The payout for each correct guess is the same, $1, and the correct guess would be Tails 2/3 of the time. So surely the probability that the coin had landed Tails prior to these events is 2/3? Not because it's an unfair coin or there was an information update (neither is true), but because the SB problem asks the probability from the perspective of someone being awakened, and 2/3 of these experiences happen after flipping Tails. It seems a stretch to say the bet is 50/50 but the second 50% happens twice as often.
1Ape in the coat
There is no such thing as a direct translation of a problem into a betting strategy question. A model for a problem should be able to deal with any betting scheme, no matter how extravagant. And the scheme where the Beauty can bet on every awakening is quite extravagant. It's an asymmetric bet on a coin toss, where the Tails outcome is rewarded twice as much as the Heads outcome. If there is no information update then the probability of the coin being Tails can't change from 1/2 to 2/3. It would contradict the law of conservation of expected evidence. As I've written in the Effects of Amnesia section, from the Beauty's perspective the Tails&Monday and Tails&Tuesday awakenings are still part of the same elementary outcome, because she remembers the setting of the experiment. If she didn't know that Tails&Monday and Tails&Tuesday necessarily follow each other, if all she knew was that there are three states in which she can awaken, then yes, she should have reasoned that P(Tails)=2/3. Alternatively, if the question was about a random awakening of the Beauty among multiple possible experiments, then, once again, P(Heads) would be 1/3. But in the experiment as stated, the Beauty isn't experiencing a random awakening; she is experiencing an ordered awakening, determined by a coin toss.
1Ape in the coat
1robo
You don't have to reply, but FYI I don't understand what ListC represents (a total ordering of events defined by a logical clock?  A logical clock ordering Beauty's thoughts, or a logical clock ordering what causally can affect what, or logically affect what allowing for Newcomb-like situations?  Why is there a clock at all?), how ListC is used, what concatenating multiple entries to ListC means in terms of beliefs, etc.  If it's important for readers to understand this you might have to step us through (or point us to an earlier article where you stepped us through).
1Ape in the coat
ListC represents the same thing as ListE, ListL and ListU. It's a list of ordered outcomes of multiple runs of a particular model, which we can then compare to ListSB - a list of ordered awakenings of the Beauty in multiple iterations of the experiment - and see whether their statistical properties are the same. The nature of the test is described in the Statistical Analysis part of the post, to which I provide an inner hyperlink here.
1robo
Right, I read all that.  I still don't understand what it means to append two things to the list. Here's how I understand modelLewis, modelElga, etc.: "This model represents the world as a probability distribution.  To get a more concrete sense of the world model, here's a function which generates a sample from that probability distribution." Here's how I understand your model: "This model represents the world as a ????, which is like a probability distribution but different.  To get a concrete sense of the world model, here's a function which generates a sample from that probability distribution JUST KIDDING here's TWO samples." Why can you generate two samples at once?  What does that even mean??  The world model isn't quite just a stationary probability distribution, fine - what is it then?  Your model isn't structured like other models, fine - but how is it structured?  I'm drowning in type errors. EDIT: and I'm suggesting to be really concrete, if you can, if that will help.  Like come up with some concrete situation where Beauty makes a bet, or says a thing ("Beauty woke up on Monday and said 'I think there's a 50% chance the coin came up heads, and I refuse to say there's a state of affairs about what day it presently is'"), and explain what in her model made her make that bet or say that thing.  Or maybe draw a picture of what her brain looks like under that circumstance compared to other circumstances.
1Markvy
Here’s how I think of what the list is. Sleeping Beauty writes a diary entry each day she wakes up. (“Nice weather today. I wonder how the coin landed.”). She would like to add today’s date, but can’t due to amnesia. After the experiment ends, she goes back to annotate each diary entry with what day it was written, and also the coin flip result, which she also now knows. The experiment is lots of fun, so she signs up for it many times. The Python list corresponds to the dates she wrote in her diary.

I've started at your latest post and recursively tried to find where you made a mistake (this took a long time!). Finally, I got here and I think I've found the philosophical decision that led you astray.

Am I understanding you correctly that you reject P(today is Monday) as a valid probability in general (not just in sleeping beauty)? And you do this purely because you dislike the 1/3 result you'd get for Sleeping Beauty?

Philosophers answer "Why not?" to the question of centered worlds because nothing breaks and we want to consider the question...

1Ape in the coat
I think you'd benefit more if you read them in the right order, starting from here. Sure, we want a lot of things. But apparently we can't always have everything we want. To preserve the truth statements we need to follow the math wherever it leads and not push it where we would like it to go. And where the math goes - that's what we should want. This post refers to several alternative problems where P(today is Monday) is a coherent probability, such as the Single-Awakening and No-Coin-Toss problems, which were introduced in the previous post. And here I explain the core principle: when there is only one day that is observed in one run of the experiment, you can coherently define what "today" means - the day from this iteration of the experiment. A random day. Monday xor Tuesday. This is how the wrong models try to treat Monday and Tuesday in Sleeping Beauty. As if they happen at random. But they do not. There is an order between them, and so they can't be treated this way. Today can't be Monday xor Tuesday, because on Tails both Monday and Tuesday do happen. As a matter of fact, there is another situation where you can coherently talk about "today", which I initially missed. "Today" can mean "any day". So, for example, in Technicolor Sleeping Beauty from the next post, you can have a coherent expectation to see red with 50% and blue with 50% on the day of your awakening, because for every day it's the same. But you still can't talk about "the probability that the coin is Heads today", because on Monday and Tuesday these probabilities are different. So in practice, the limitation is only about Sleeping-Beauty-type problems, where there are multiple awakenings with memory loss in between per one iteration of the experiment, and no consistent probabilities for every awakening. But generally, I think it's always helpful to understand what exactly you mean by "today" in any probability theory problem. I do not decide anything axiomatically. But I notice that the existent axioms of probabili...
1Malentropic Gizmo
1Ape in the coat
I'll start from addressing the actual crux of our disagreement. As I've written in this post, you can't just say the magical word "centredness" and think that you've solved the problem. If you want a model that can have an event that changes its truth predicate with the passage of time during the same iteration of the probability experiment - you need to formally construct such a model, rewriting all of probability theory from scratch, because our current probability theory doesn't allow that. In probability theory, one outcome of a sample space is realized per iteration of the experiment. And so for this iteration of the experiment, every event which includes this outcome is considered True. All the "centred" models, therefore, behave as if Sleeping Beauty consists of two outcomes of a probability experiment. As if Monday and Tuesday happen at random, and as if, to determine whether the Beauty has another awakening, the coin is tossed anew. And because of it they contradict the conditions of the experiment, according to which the Tails&Tuesday awakening always happens after Tails&Monday. Which is shown in the Statistical Analysis section. It's a model for a random awakening, not for the current awakening, because the current awakening is not random. So no, I do not make this mistake in the text. This is the correct way to talk about Sleeping Beauty. The event "The Beauty is awakened in this experiment" is properly defined. The event "The Beauty is awake on this particular day" is not, unless you find some new clever way to do it - feel free to try. I must say, this problem is very unhelpful to this discussion. But sure, let's analyze it regardless. I suppose? Such questions are usually about ideal rational agents, so yes, it shouldn't matter what a specific non-ideal agent does - but then why even add this extra complication to the question if it's irrelevant? Well, that's his problem, honestly; I thought we agreed that what he does is irrelevant to the question. Also his behavior here is not as bad as wh...
1Malentropic Gizmo
1Ape in the coat
I didn't start by believing that "centred worlds don't work". I suspect you got this impression mostly because you were reading the posts in the wrong order. I started by trying the existing models, noticed that they behave weirdly if we assume they are describing Sleeping Beauty, and then noticed that they are actually talking about different problems - for which their behavior is completely normal. And then, while trying to understand what was going on, I stumbled upon the notion of centred possible worlds and their complete lack of mathematical justification, and it opened my eyes. And then I was immediately able to construct the correct model, which completely resolves the paradox, adds up to normality and has no issues whatsoever. But in hindsight, if I had started from the assumption that centred possible worlds do not work, that would have been the smart thing to do, and it would have saved me a lot of time.

Well, you didn't. All this time you've just been insisting on privileged treatment for them: "Can work until proven otherwise". Now, that's not how math works. If you come up with some new concept, be so kind as to prove that it is a coherent mathematical entity and establish its properties. I'm more than willing to listen to such attempts. The problem is - there are none. People just seem to think that saying "first person perspective" allows them to build a sample space from non-mutually-exclusive outcomes.

It's like you didn't even read my posts or my comments. By the definition of a sample space, it can be constructed only from elementary outcomes, which have to be mutually exclusive. Tails&Monday and Tails&Tuesday are not mutually exclusive - they happen to the same person in the same iteration of the probability experiment during the same outcome of the coin toss. The "centredness" framework attempts to treat them as elementary outcomes regardless. Therefore, it contradicts the definition of a sample space.

This is what the statistical analysis clearly demonstrates. If a mathem...
-1Malentropic Gizmo
This whole conversation isn't about math. It is about philosophy. Math is proving theorems in various formal systems. If you are a layman, I imagine you might find it confusing that you can encounter mathematicians who seem to have conversations about math in plain English. I can assure you that every mathematician in such a conversation is able to translate their comments into the simple language of the formal system they are working in; they are simply so much of an expert that they can transmit and receive the information more efficiently by speaking at a higher level of abstraction. It is not possible to translate the conversation that we're having into a simple formal system, as it's about how we should/can model some aspect of reality (which is famously dirty and complicated) with some specific mathematical object.

To be more concrete: I want to show you that we can (and later that we should indeed) model a person's beliefs at some given point in time with probability spaces. This is inherently a philosophical and not a mathematical problem, and I don't see how you don't understand this concept; I would appreciate it if you could elaborate on this point as much as possible.

You keep insisting that... If we are being maximally precise, then NO: the math of probability spaces prescribes a few formal statements which (this is very important), in some cases, can be used to model experiments and events happening or not happening in reality, but the mathematical objects themselves have no concept of 'experiment' or 'time' or anything like those. I won't copy it here, but you can look these up on the net yourself if you want: here is one such source. Don't be confused by the wiki sometimes using English words; rest assured, any mathematician could translate it into any sufficiently expressive, simple formal system using variable names like a1, x3564789, etc. (If you really think it would help you and you don't believe what I'm saying otherwise, I can transla...
1Ape in the coat
The tragedy of the whole situation is that people keep thinking that. Everything is "about philosophy" until you find a better way to formalize it. Here we have a better way to formalize the issue, which you keep ignoring. Let me spell it out for you once more: if a mathematical probabilistic model fits some real-world process, then the outcomes it produces have to have the same statistical properties as the outcomes of the real-world process. If we agree on this philosophical statement, then we have reduced the disagreement to a mathematical question, which I've already resolved in the post. If you disagree, then bring up some kind of philosophical argument which we will be able to explore.

I'm not. And frankly, it baffles me that you think you need to explain that it's possible to talk about math using natural language to a person who has been doing it for multiple posts in a row. https://en.wikipedia.org/wiki/Experiment_(probability_theory)

The more I post about anthropics, the clearer it becomes that I should've started with posting about probability theory 101. My naive hope that the average LessWrong reader is well familiar with the basics and just confused about more complicated cases has been crushed beyond salvation.

This question is vague in a similar manner to what I've seen in Lewis's paper. Let's specify it, so that we both understand what we are talking about. Did you mean to ask 1 or 2:

1. Can a probability space at all model some person's belief in some circumstance at some specific point in time?
2. Can a probability space always model any person's belief in any circumstances at any unspecified point in time?

The way I understand it, we agree on 1 but disagree on 2. There are definitely situations where you can correctly model uncertainty about time via probability theory. As a matter of fact, that covers most cases. You won't be able to resolve our disagreement by pointing to such situations - we agree on them. But you seem to have generalized tha...
1Malentropic Gizmo
1Ape in the coat
1Markvy
2Ape in the coat
I understand that it all may be somewhat counterintuitive. I'll try to answer whatever questions you have. If you think you have some way to formally define what "Today" means in Sleeping Beauty - feel free to try.

Seems very much in accordance with what I've been saying. Throughout the series I keep repeating the point that all we need to solve anthropics is to follow probability theory where it leads, and then there will be no paradoxes. This is exactly what I'm doing here. There is no formal way to define "Today is Monday" in Sleeping Beauty, so I simply accept this, as the math tells me to, and then the "paradox" immediately resolves.

Good question. First of all, as we are talking about betting, I recommend you read the next post, where I explore it in more detail, especially if you are not fluent in expected utility calculations. Secondly, we can't ignore the breach of the protocol. You see, if anything breaks the symmetry between awakenings, the experiment changes in a substantial manner. See Rare Event Sleeping Beauty, where the probability that the coin is Heads can actually be 1/3. But we can construct a similar situation without breaking the symmetry. Suppose that on every awakening a researcher comes to the room and offers the Beauty a bet on which day it currently is. At what odds should the Beauty accept? This is essentially the same betting scheme as the ice-cream stand, which I deal with at the end of the previous comment.
2Markvy
I tried to formalize the three cases you list in the previous comment. The first one was indeed easy. The second one looks "obvious" from symmetry considerations, but actually formalizing it seems harder than expected. I don't know how to do it. I don't yet see why the second should be possible while the third is impossible.
1Ape in the coat
2Markvy
3Ape in the coat
Where does the feeling of wrongness come from? Were you under the impression that probability theory promised to always assign some measure to any statement in natural language? It just so happens that most of the time we can construct an appropriate probability space. But the actual rule is about whether or not we can construct a probability space, not whether or not something is a statement in natural language. Is it really so surprising that a person who is experiencing amnesia and the repetition of the same experience, while being fully aware of the procedure, can't meaningfully assign credence to "this is the first time I have this experience"? Don't you think there has to be some kind of problem with the Beauty's knowledge state? The situation where, due to memory erasure, the Beauty loses the ability to coherently reason about some statements makes much more sense than the alternative proposed by thirdism - according to which the Beauty becomes more confident in the state of the coin than she would've been if she hadn't had her memory erased.

Another intuition pump is that "today is Monday" is not actually True xor False from the perspective of the Beauty. From her perspective it's True xor (True and False). This is because on Tails, the Beauty isn't reasoning just for some one awakening - she is reasoning for both of them at the same time. When she awakens the first time, the statement "today is Monday" is True, and when she awakens the second time, the same statement is False. So the statement "today is Monday" doesn't have a stable truth value throughout the whole iteration of the probability experiment. Suppose that the Beauty really does not want to make false statements. Deciding to say out loud "Today is Monday" leads to making a false statement in 100% of the iterations of the experiment in which the coin is Tails.

Here you are describing Lewis's model, which is appropriate for the Single Awakening Problem. There the Beauty is awakened on Monday if the coin is Heads, and if the coin
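The "false statement in 100% of Tails iterations" claim above is easy to check by simulation. Here is a minimal sketch (the function name and structure are mine, not from the post): the Beauty adopts the policy of always saying "Today is Monday", and we count the fraction of Tails iterations that contain at least one false statement.

```python
import random

random.seed(2)  # reproducibility

# Sketch of the claim above: on Tails the Beauty is awakened on both
# Monday and Tuesday, so the policy "always say 'Today is Monday'"
# produces a false statement in every Tails iteration.
def false_statement_rate_on_tails(n=10_000):
    false_iters = tails_iters = 0
    for _ in range(n):
        tails = random.random() < 0.5
        if tails:
            tails_iters += 1
            days = ['Monday', 'Tuesday']  # both awakenings happen
            # the policy asserts "Monday" on every awakening of the iteration
            if any(day != 'Monday' for day in days):
                false_iters += 1
    return false_iters / tails_iters

rate = false_statement_rate_on_tails()
print(rate)  # 1.0
```

The result is trivially 1.0, which is the point: the falsity is guaranteed by the structure of the Tails branch, not by chance.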
2Markvy
2Ape in the coat
Well, I think this one is actually correct. But, as I said in the previous comment, the statement "Today is Monday" doesn't actually have a coherent truth value throughout the probability experiment. It's not either True or False. It's either True, or True and False at the same time!

We can answer every coherently formulated question. Everything that is formally defined has an answer. Being careful with the basics allows us to understand which questions are coherent and which are not. This is the same principle as in every probability theory problem.

Consider the Sleeping Beauty experiment without memory loss. There, the event Monday xor Tuesday also can't be said to always happen. And likewise, "Today is Monday" also doesn't have a stable truth value throughout the whole experiment. Once again, we can't express the Beauty's uncertainty between the two days using probability theory. We just don't pay attention to it, because by the conditions of the experiment the Beauty is never in such a state of uncertainty: if she remembers a previous awakening, then it's Tuesday; if she doesn't, then it's Monday. All the pieces of the issue are already present. The addition of memory loss just makes it obvious that there is a problem with our intuition.
2Markvy
Re: no coherent "stable" truth value: indeed. But still... if she wonders out loud "what day is it?", at the very moment she says that, it has an answer. An experimenter who overhears her knows the answer. It seems to me that the way you "resolve" this tension is that the two of them are technically asking different questions, even though they are using the same words.

But still... how surprised should she be if she were to learn that today is Monday? It seems that taking your stance to its conclusion, the answer would be "zero surprise: she knew for sure she would wake up on Monday, so no need to be surprised that it happened". And even if she were to learn that the coin landed Tails, so she knows that this is just one of a total of two awakenings, she should have zero surprise upon learning the day of the week, since she now knows both awakenings must happen. Which seems to violate conservation of expected evidence - except you already said that there are no coherent probabilities here for that particular question, so that's fine too.

This makes sense, but I'm not used to it. For instance, I'm used to these questions having the same answer:

1. P(today is Monday)?
2. P(today is Monday | the sleep lab gets hit by a tornado)

Yet here, the second question is fine (assuming tornadoes are rare enough that we can ignore the chance of two on consecutive days) while the first makes no sense, because we can't even define "today". It makes sense, but it's very disorienting - incompleteness-theorem level of disorientation, or even more.
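Why the second question is "fine" can be seen as a well-defined frequency. A hedged sketch (the per-day tornado probability EPS and the independence assumption are mine, not from the thread): if a tornado hits the lab on any given day independently with small probability, then among tornadoes that strike during an awakening, the fraction striking on a Monday is a perfectly ordinary statistic.

```python
import random

random.seed(3)  # reproducibility

EPS = 0.01  # assumed per-day tornado probability (rare event)

# Among tornadoes that strike the lab during an awakening, what fraction
# strike on Monday? Heads gives one awakening day, Tails gives two.
def monday_fraction_given_tornado(n=1_000_000):
    monday_hits = total_hits = 0
    for _ in range(n):
        days = ['Monday'] if random.random() < 0.5 else ['Monday', 'Tuesday']
        for day in days:
            if random.random() < EPS:  # tornado during this awakening
                total_hits += 1
                monday_hits += (day == 'Monday')
    return monday_hits / total_hits

frac = monday_fraction_given_tornado()
print(frac)  # about 2/3
```

The conditional question has a stable answer (about 2/3 under these assumptions) precisely because the rare event breaks the symmetry between awakenings, as discussed upthread for Rare Event Sleeping Beauty.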
2Ape in the coat
2Markvy
Thanks :) the recalibration may take a while… my intuition is still fighting ;)
1Malentropic Gizmo
Consider that in the real world, Tuesday always happens after Monday. Do you agree or disagree: it is incorrect to model a real-world agent's knowledge about today being Monday with probability?
1Ape in the coat
Again, that depends. I think I'm talking about something like what you point to here:

This is the Sleeping Beauty Problem:

"Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you to back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?"

Unfortunately, it doesn't describe how to implement the wakings. Adam Elga tried to implement it by adding s...

1Ape in the coat
As I've told you multiple times, your "Two-Coin Sleeping Beauty" is fully isomorphic to the regular Sleeping Beauty problem, and so the thirder model of it has all the same issues. It treats sequential events as mutually exclusive, and therefore unlawfully constructs a sample space, contradicting the fundamentals of probability theory. Your elimination argument has all the same flaws as the elimination argument from the updating model, which I explored in the previous post. But sure enough, let's look specifically at the two-coin version of the problem and see how your updating model fails. Let's start with the statistical test.

Your model treats HH, HT, TH and TT as four individual mutually exclusive outcomes that define a sample space, where each outcome has probability 1/4, and conditional on awakening we have three mutually exclusive outcomes HT, TH and TT, each with probability 1/3. So according to it, running the two-coin experiment multiple times and writing down the states of the coins on every awakening of the Beauty should produce a list of outcomes HT, TH and TT in random order, where all of them have frequency 1/3. However, when you actually do it, you get a different list. The frequency is 1/3, but the order is not random: TH and TT always go in pairs, and you can use this knowledge to predict the next token in the list better than chance. Therefore, your model can't possibly be describing the Two-Coin Sleeping Beauty problem.

By analogy with the regular updating model, it actually describes the Observer Two-Coin Sleeping Beauty problem: an observer who arrives on a random day may very well catch the Beauty asleep, so when you see her awake you actually receive new evidence about the state of the first coin and lawfully update. For an observer, HH, HT, TH and TT are indeed mutually exclusive outcomes that do not have any order. If we repeat the observer two-coin experiment multiple times, documenting the outcomes of the coins every time the Beauty is awake, we indeed get a list w
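The statistical test described above can be sketched in the same style as the simulation in the post. This assumes the two-coin protocol as described in the thread (flip two coins; the Beauty is awakened unless the state is HH; the second coin is turned over between the two passes); the function names are mine.

```python
import random

random.seed(0)  # reproducibility

# Simulate the two-coin experiment, logging the state of the coins on
# every awakening of the Beauty.
def two_coin_awakenings(n=10_000):
    log = []
    for _ in range(n):
        c1, c2 = random.choice('HT'), random.choice('HT')
        for _ in range(2):                  # two passes per iteration
            if c1 + c2 != 'HH':             # the Beauty is awake unless HH
                log.append(c1 + c2)
            c2 = 'H' if c2 == 'T' else 'T'  # turn the second coin over
    return log

log = two_coin_awakenings()

# Frequencies of HT, TH and TT are all about 1/3 ...
freqs = {s: log.count(s) / len(log) for s in ('HT', 'TH', 'TT')}

# ... but the order is not random: every TH sits immediately next to a
# TT produced by the same iteration, so the next token is predictable
# better than chance.
th_paired = all(
    'TT' in (log[max(i - 1, 0)], log[min(i + 1, len(log) - 1)])
    for i, s in enumerate(log) if s == 'TH'
)
print(freqs, th_paired)
```

The frequencies come out near 1/3 as the updating model predicts, but the pairing check always passes, which is the order structure that the model fails to capture.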
1JeffJo
And as I've tried to get across, if the two versions are truly isomorphic, and also have faults, one should be able to identify those faults in either one without translating them to the other. But if those faults turn out to depend on a false analysis specific to one, you won't find them in the other. The Two-Coin version is about what happens on one day. Unlike the Always-Monday-Tails-Tuesday version, the subject can infer no information about coin C1 on another day, which is the mechanism for fault in that version. Each day, in the "world" of the subject, is a fully independent "world" with a mathematically valid sample space that applies to it alone.

"It treats sequential events as mutually exclusive." No, it treats an observation of a state, when that observation bears no connection to any other, as independent of any other.

"... therefore unlawfully constructs sample space." What law was broken? Do you disagree that, on the morning of the observation, there were four equally likely states? Do you think the subject has some information about how the state was observed on another day? That an observer from the outside world has some impact on what is known on the inside? These are the kinds of details that produce controversy in the Always-Monday-Tails-Tuesday version. I personally think the inferences about carrying information over between the two days are all invalid, but what I am trying to do is eliminate any basis for making them. Yes, each outcome on the first day can be paired with exactly one on the second. But without any information passing to the subject between these two days, she cannot do anything with such pairings. To her, each day is its own, completely independent probability experiment. One where "new information" means she is awakened to see only three of the four possible outcomes.

"Your model treats HH, HT, TH and TT as four individual mutually exclusive outcomes." No, it treats the current state of the coins as four mutually exclusive st
1Ape in the coat
Let it be not two different days but two different half-hour intervals. Or even two milliseconds - this doesn't change the core of the issue: sequential events are not mutually exclusive.

It very much bears a connection. If you are observing state TH, it necessarily means that you either have already observed or will observe state TT.

The definition of a sample space - it's supposed to be constructed from mutually exclusive elementary outcomes.

Disagree on both accounts. You can't treat HH, HT, TT and TH as individual outcomes, and the term "morning of the observation" is underspecified. The subject knows that some of them happen sequentially.

I noticed, and I applaud your attempts. But you can't do that, because you still have sequential events anyway; the fact that you call them differently doesn't change much.

Exactly. And the Beauty knows it. Case closed. She knows that they do not happen at random. This is enough to be sure that each day is not a completely independent probability experiment. See the Effects of Amnesia section. Call them "states" if you want. It doesn't change anything.

I've specifically explained how. We write down outcomes when the researcher sees the Beauty awake - when they have updated on the fact of the Beauty's awakening. The frequency for the three outcomes is 1/3; moreover, they actually go in random order, because the observer witnesses only one random awakening per experiment.

Yep, no one is arguing with that. The problem is that the order isn't random, as your model predicts - TH and TT always go in pairs.

No, I'm not complicating this with two lists for each day. There is only one list, which documents all the awakenings of the subject while she goes through the series of experiments. The theory that predicts that the two awakenings are "completely independent probability experiments" expects the order of the awakenings to be random, and it's proven wrong, because there is an order between awakenings. Easy as that.

You are mistaken about what the amnes
3JeffJo
OUTCOME: a measurable result of a random experiment.
SAMPLE SPACE: a set of exhaustive, mutually exclusive outcomes of a random experiment.
EVENT: any subset of the sample space of a random experiment.
INDEPENDENT EVENTS: if A and B are events from the same sample space, and the occurrence of event A does not affect the chances of the occurrence of event B, then A and B are independent events.

The outside world certainly can name the outcomes {HH1_HT2, HT1_HH2, TH1_TT2, TT1_TH2}. But the subject has knowledge of only one pass. So to her, only the current pass exists, because she has no knowledge of the other pass. What happens in that interval can play no part in her belief. The sample space is {HH, HT, TH, TT}. To her, these four outcomes represent fully independent events, because she has no knowledge of the other pass. To her, the fact that she is awake means the event {HH} has been ruled out. It is still a part of the sample space, but it is one she knows is not happening. That's how conditional probability works: the sample space is divided into two subsets; one is consistent with the observation, and one is not.

What you are doing is treating HH (or, in Elga's implementation, H&Tuesday) as if it ceases to exist as a valid outcome of the experiment. So HH1_HT2 has to be treated differently than TT1_TH2, since HH1_HT2 only "exists" in one pass, while TT1_TH2 "exists" in both. This is not true. Both exist in both passes, but one is unobserved in one pass. And this really is the fallacy in any halfer argument. They treat the information in the observation as if it applies to both days. Since H&Tuesday "doesn't exist", H&Monday fully represents the Heads outcome. So to be consistent, T&Monday has to fully represent the Tails outcome. As does T&Tuesday, so they are fully equivalent. You are projecting the result you want onto the process.

Say I roll a six-sided die and tell you that the result is odd. Then I administer the amnesia drug, and tell you that I prev
3JeffJo
The link I used to get here only loads the comments, so I didn't find the "Effects of Amnesia" section until just now. Editing it: "But in my two-coin case, the subject is well aware of the setting of the experiment. She knows that her awakening was based on the current state of the coins. It is derived from, but not necessarily the same as, the result of flipping them. She only knows that this awakening was based on their current state, not a state that either precedes or follows from another. And her memory loss prevents her from making any connection between the two. As a good Bayesian, she has to use only the relevant available information that can be applied to the current state."
1Ape in the coat
-3JeffJo

A Lesson in Probability for Ape in the Coat

First, some definitions. A measure in Probability is a state property of the result of a probability experiment, where exactly one value applies to each result. Technically, the values should be numbers so that you can do things like calculate expected values. That isn't so important here; but if you really object, you can assign numbers to other kinds of values, like 1=Red, 2=Orange, etc.

An observation (my term) is a set of one or more measure values. An outcome is an observation that discriminates a result suffi...

-1Ape in the coat
Yes! I'm so glad you finally got it! And the fact that you simply needed to remind yourself of the foundations of probability theory validates my suspicion that this is indeed the solution to the problem. You may want to reread the post and notice that this is exactly what I've been talking about the whole time. Now, I ask you to hold in mind the fact that "the SB problem is one random experiment with a single result". We are going to use this realization later.

This is false, but not crucial. We can postpone this for later.

No, what I call sequential events are the pairs HH and HT, TT and TH, corresponding to exact awakenings, which can't be treated as individual outcomes. On the other hand, as soon as you connect these pairs and get HH_HT, HT_HH, TT_TH and TH_TT, they totally can create a sample space, which is exactly what I told you in this comment. As soon as you've switched to this sound sample space, we are in agreement.

You are describing a situation where the Beauty was told whether or not she is experiencing an awakening before the second coin was turned. If the Beauty awakens and learns that it's the awakening before the coin was turned, she indeed can reason that she has observed the event {HT1_HH2, TH1_TT2, TT1_TH2} and that the probability that the first coin is Heads is 1/3. This, mind you, is not the sneaky thirder notion of probability, where P(Heads) can be 1/3 even though the coin is Heads in 1/2 of the experiments. This is the actual probability that the coin is Heads in this experiment. Remember the thing I asked you to hold in mind: our mathematical model doesn't attempt to describe the individual awakening anymore, as you may be used to; it describes the experiment as a whole. Let this thought sink in. The Beauty who learned that she is awakened before the coin was turned can bet on Tails and win with 66% chance per experiment. So she should accept per-experiment betting odds up to 1:2 - which isn't usually a good idea in Sleeping Beauty when you do
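The 1/3 figure in this comment is a per-experiment frequency and can be checked directly. A minimal sketch, assuming the two-coin protocol from the thread (the Beauty has a pre-turn awakening unless the initial state is HH); the function name is mine:

```python
import random

random.seed(1)  # reproducibility

# Among iterations that have a pre-turn (first-pass) awakening at all,
# how often is the first coin Heads? A bet on Tails wins the rest.
def heads_fraction_on_first_pass(n=100_000):
    heads = total = 0
    for _ in range(n):
        c1, c2 = random.choice('HT'), random.choice('HT')
        if c1 + c2 != 'HH':       # pre-turn awakening happens
            total += 1
            heads += (c1 == 'H')  # first coin Heads, i.e. state HT
    return heads / total

frac = heads_fraction_on_first_pass()
print(frac)  # close to 1/3
```

The fraction comes out near 1/3, so betting on Tails wins about 66% of such experiments, matching the per-experiment odds discussed above.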
-1JeffJo
Too bad you refuse to "get it." I thought these details were too basic to go into: a probability experiment is a repeatable process that produces one or more unpredictable result(s). I don't think we need to go beyond coin flips and die rolls here. But "probability experiment" refers to the process itself, not to an iteration of it. All of those things I defined before are properties of the experiment: the process. An "outcome" is any potential result of an iteration of that process, not the result itself. We can say that a result belongs to an event, even an event of just one outcome, but the result is not the same thing as that event. THE OBSERVATION IS NOT AN EVENT. For example, an event for a simple die roll could be EVEN={2,4,6}. If you roll a 2, that result is in this event. But it does not mean you rolled a 2, a 4, and a 6.

So, in ... ... you are describing one iteration of a process that has an unpredictable result. A coin flip. Then you observe it twice, with amnesia in between. Each observation can have its own sample space - remember, experiments do not have just one sample space. But you can't pick, say, half of the outcomes defined in one observation and half from the other, and use them to construct a sample space. That is what you describe here, by comparing what SB does on Monday and on Tuesday as if they are in the same event space. The correct "effect of amnesia" is that you can't relate either observation to the other. They each need to be assessed by a sample space that applies to that observation, without reference to another. And BTW, what she observes on Monday may belong to an event, but it is not the same thing as the event.

A common way to avoid rebuttal is to cite two statements and make one ambiguous assertion about them, without support or specifying which you mean. It is true that remaining asleep is a possible result of the experiment - that is, an outcome - since Tuesday exists whether or not SB is awake. What SB observes tells her
1Ape in the coat
On every iteration, exactly one outcome from the sample space is realized. And every event from the event space which includes this outcome is also assumed to be realized. When I say "experiment" I mean a particular iteration of it, yes, because one run of the Sleeping Beauty experiment corresponds to one iteration of the probability experiment. I hope this clears up the possible misunderstanding.

An event is not an outcome; it's a set of one or more outcomes from the sample space, which itself has to belong to the event space. What you mean by "observation" is a bit of a mystery. Try tabooing it - after all, a probability space consists of only a sample space, an event space and a probability function; no need to invoke this extra category for no reason.

It's also a common way to avoid unnecessary tangents. Don't worry, we will come back to it as soon as we deal with the more interesting issue, though I suspect that by then you will be able to resolve your confusion yourself. I don't think that correcting your misunderstanding about my position can be called "strawmanning". If anything, it is unintentional strawmanning from your side, but don't worry, no offence taken.

Yes, the one-coin version has the exact same issue, where the sequential awakenings Tails&Monday and Tails&Tuesday are often treated as disconnected mutually exclusive outcomes. But anyway, it's kind of pointless to talk about it at this point, when you've already agreed to the fact that the correct sample space for the two-coin version is {HT_HH, TT_TH, TH_TT, HH_HT}. We agree on the model; let's see where it leads.

It means that you've finally done the right thing, of course! You've stopped talking about individual awakenings as if they are themselves mutually exclusive outcomes, and realized that you should be talking about pairs of sequential awakenings, treating each pair as a single outcome of the experiment. Well done! But apparently you still don't exactly understand the full consequences of this. But that's okay; you've already done t