This is the seventh post in my series on Anthropics. The previous one is Why Two Valid Answers Approach is not Enough for Sleeping Beauty. The next one is The Solution to Sleeping Beauty.

Introduction

As the Sleeping Beauty problem has been investigated for a long time, there have been multiple attempts to apply various mathematical models to it. And since we are still arguing about this problem, these attempts were not quite successful.

Figuring out where these models failed is crucial for understanding the Sleeping Beauty problem and solving it. The last thing we want is to repeat someone else's mistakes yet another time and get attached to a wrong solution. 

In this post I'll investigate several such models, point out the issues due to which none of them manages to correctly capture the setting of the problem, and thus collect valuable insights about the properties of the correct model.

Updateless Models

Updateless models are those according to which no update is supposed to happen on awakening, because regardless of the coin toss outcome the Beauty can be certain to find herself awake.

Intuitively it may look as if such models should necessarily be in favor of answering 1/2; however, this is not the case, and historically the first updateless model is a thirder one.

Elga's Model

This model was originally applied to the Sleeping Beauty problem in the Self-locating belief and the Sleeping Beauty problem paper by Adam Elga. According to it, there are three possible outcomes with equal probability:

P(Heads&Monday) = P(Tails&Monday) = P(Tails&Tuesday) = 1/3

Every outcome describes an awakened state of the Beauty:

P(Awake) = P(Heads&Monday) + P(Tails&Monday) + P(Tails&Tuesday) = 1

And most of the awakenings happen on Monday:

P(Monday) = P(Heads&Monday) + P(Tails&Monday) = 2/3

If the Beauty is awakened on Monday and knows about it, she can't guess Heads or Tails better than chance:

P(Heads|Monday) = P(Tails|Monday) = 1/2

And likewise, if she knows that the coin is Tails, her credence for Monday and Tuesday is the same:

P(Monday|Tails) = P(Tuesday|Tails) = 1/2

The model is elegant, simple and wrong. The issue is, of course, that according to it the coin isn't fair:

P(Heads) = P(Heads&Monday) = 1/3 ≠ 1/2

Therefore, it can't possibly be describing the Sleeping Beauty problem.

No-Coin-Toss Problem

The thing is, Elga's model is describing a subtly different problem, which I'm going to call the No-Coin-Toss problem:

A random number from 0 to 2 is generated. On 0, the coin is put Heads and Sleeping Beauty is awakened on Monday. On 1, Sleeping Beauty is also awakened on Monday but the coin is put Tails. On 2 the coin is also put Tails and Sleeping Beauty is awakened on Tuesday.

Attentive readers may notice that this problem is quite similar to the setting that Groisman used with the Balls in a Box Machine, described in a previous post. There, a random number generator was implemented by picking a ball from a box in which only a third of the balls were green.

While in Sleeping Beauty the awakening routine is determined by the result of the coin toss, in No-Coin-Toss it's the opposite: the awakening is randomly sampled and no coin toss happens at all.
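To see that Elga's numbers do fit this setting, here is a minimal simulation sketch of the No-Coin-Toss problem (the function and variable names are my own, not anything from the cited papers):

```python
import random

def simulate_no_coin_toss(iterations=100_000):
    """Simulate the No-Coin-Toss problem and count frequencies among awakenings."""
    heads_awakenings = 0
    monday_awakenings = 0
    total_awakenings = 0
    for _ in range(iterations):
        outcome = random.randint(0, 2)  # uniformly pick one of three outcomes
        if outcome == 0:
            coin, day = "Heads", "Monday"
        elif outcome == 1:
            coin, day = "Tails", "Monday"
        else:
            coin, day = "Tails", "Tuesday"
        # exactly one awakening happens per iteration
        total_awakenings += 1
        heads_awakenings += (coin == "Heads")
        monday_awakenings += (day == "Monday")
    print("P(Heads|Awake) ~", heads_awakenings / total_awakenings)    # ~1/3
    print("P(Monday|Awake) ~", monday_awakenings / total_awakenings)  # ~2/3

simulate_no_coin_toss()
```

Running this gives roughly 1/3 and 2/3, exactly the values of Elga's model.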

How did Elga manage to accidentally model a different problem? Well, he started from the core thirder assumption that "this awakening" is randomly sampled from three possible outcomes, plus the belief that the Beauty doesn't get any new information on awakening. So he modeled a problem where exactly that happens, which, as it turned out, isn't the Sleeping Beauty problem.

But why didn't he notice it? Well, he did notice that something weird was going on:

Let H be the proposition that the outcome of the coin toss is Heads. Before being put to sleep, your credence in H was 1/2. I’ve just argued that when you are awakened on Monday, that credence ought to change to 1/3. This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday. (We may even suppose that you knew at the start of the experiment exactly what sensory experiences you would have upon being awakened on Monday.) Neither is this belief change the result of your suffering any cognitive mishaps during the intervening time — recall that the forgetting drug isn’t administered until well after you are first awakened. So what justifies it? 

 

Thus the Sleeping Beauty example provides a new variety of counterexample to Bas Van Fraassen’s ‘Reflection Principle’ (1984:244, 1995:19), even an extremely qualified version of which entails the following:

Any agent who is certain that she will tomorrow have credence x in proposition R (though she will neither receive new information nor suffer any cognitive mishaps in the intervening time) ought now to have credence x in R.

But Elga framed it as a curious observation, a new property of probability theory dealing with "centered possible worlds" and "de se evidence", a question for future research.

David Lewis once asked ‘what happens to decision theory if we [replace the space of possible worlds by the space of centered possible worlds]?’ and answered ‘Not much.’ (Lewis 1983:149) A second lesson of the Sleeping Beauty problem is that something does happen. Namely: at least one new question arises about how a rational agent ought to update her beliefs over time.

Of course, when we understand what this model is actually about, we can see that nothing weird is happening and everything perfectly adds up to normality. In the No-Coin-Toss problem the Beauty can't be certain that she will be awakened on Monday, so the Reflection Principle is not contradicted.

Likewise, Adam Elga's proof for thirdism, which he provides in the same paper, is valid for the No-Coin-Toss problem but not applicable to Sleeping Beauty. However, this raises an interesting question: which of the assumptions that Elga makes for his proof are not true for the Sleeping Beauty problem?

At first glance it's not at all clear where exactly the error is. All the assumptions seem quite reasonable. And this makes the answer to this question a very valuable insight - a key to the correct model for the Sleeping Beauty problem. I'm going to explore it in a future post; for now let's just outline the insights we got from Elga's model:

  1. Either Thirdism is wrong or there is an update on awakening
  2. Either 1/3 is the correct answer or one of the assumptions of Elga's proof is wrong

Lewis' Model

So if updateless thirdism doesn't work, then maybe 1/2 is the correct answer? After all, the base logic for the updateless stance seems sound. Let's explore this possibility and an appropriate model, introduced in Sleeping Beauty: reply to Elga by David Lewis. 

Once again, we have three possible outcomes, representing Beauty's awakened states: Heads&Monday, Tails&Monday and Tails&Tuesday.

However, in this case the coin is indeed fair. And, as there always is an awakening on Heads:

P(Heads) = P(Heads&Monday) = 1/2

As previously:

P(Monday|Tails) = P(Tuesday|Tails) = 1/2

However:

P(Tails&Monday) = P(Tails&Tuesday) = 1/4

and thus:

P(Heads|Monday) = P(Heads&Monday) / (P(Heads&Monday) + P(Tails&Monday)) = 2/3

Now, at first, this may look reasonable. As we remember, either the answer is 1/3 or one of the assumptions from Elga's proof has to be wrong. Elga assumes that P(Heads|Monday) = 1/2, so of course Lewis' model assigns a different value to it.

The problem is that according to Lewis' model, if the Beauty is awakened on Monday and knows about it, she is supposed to be able to guess the outcome of the coin toss better than chance - a toss that might not even have happened yet.

Just like Elga, Lewis acknowledges that his model produces counterintuitive results, and just like Elga, he bites the bullet anyway, wishing to follow where the argument leads.

Imagine that there is a prophet whose extraordinary record of success forces us to take seriously the hypothesis that he is getting news from the future by means of some sort of backward causation. Seldom does the prophet tell us outright what will happen, but often he advises us what our credences about the outcome should be, and sometimes his advice disagrees with what we would get by setting our credences equal to the known chances. What should we do? If the prophet’s success record is good enough, I say we should take the prophet’s advice and disregard the known chances.

Now when Beauty is told during her Monday awakening that it’s Monday, or equivalently not-T2 , she is getting evidence – centred evidence – about the future: namely that she is not now in it. That’s new evidence: before she was told that it was Monday, she did not yet have it. To be sure, she is not getting this new evidence from a prophet or by way of backward causation, but neither is she getting it just by setting her credences equal to the known chances.

 

I admit that this is a novel and surprising application of the proviso, and I am most grateful to Elga for bringing it to my attention. Nevertheless I find it fairly convincing, independently of wishing to follow where my argument leads.

However, here it may appear a bit more problematic, because the question of which value of P(Heads|Monday) is correct is experimentally testable. We can simulate the setting a large number of times and see for ourselves that 1/2 is the correct value. Thus Lewis' model produces incorrect betting odds for the Sleeping Beauty problem, which is a huge deal breaker.
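Here is what such a simulation might look like (a sketch, with my own naming): among the experiments in which the Beauty is awakened on Monday, which is all of them, the coin comes up Heads about half the time, not 2/3 of the time as Lewis' model predicts.

```python
import random

def simulate_sleeping_beauty(iterations=100_000):
    """Count how often the coin is Heads among Monday awakenings, per experiment."""
    monday_awakenings = 0
    heads_on_monday = 0
    for _ in range(iterations):
        coin = random.choice(["Heads", "Tails"])
        # The Beauty is awakened on Monday regardless of the coin...
        monday_awakenings += 1
        heads_on_monday += (coin == "Heads")
        # ...and additionally on Tuesday if the coin is Tails,
        # which has no effect on the Monday count.
    print("P(Heads|Monday) ~", heads_on_monday / monday_awakenings)  # ~1/2, not 2/3

simulate_sleeping_beauty()
```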

Single-Awakening Problem

Now, wouldn't it be curious if, just like Elga, David Lewis accidentally developed a model for a different problem instead of Sleeping Beauty? Because that is indeed the case.

I'm calling it the Single-Awakening problem:

If the coin comes Heads, Sleeping Beauty is awakened on Monday. If the coin comes Tails, Sleeping Beauty is awakened either on Monday, with 50% chance, or otherwise on Tuesday.

In the Sleeping Beauty problem, the Monday awakening is bound to happen regardless of the side of the coin, so we could just as well wake the Beauty before the coin is even tossed. In the Single-Awakening problem, on Tails there is a 50% probability that the Monday awakening will not happen. So being awakened on Monday is valid evidence that the coin is Heads, and the statement P(Heads|Monday) = 2/3 is correct.

For the Single-Awakening problem Lewis' model assigns appropriate betting odds, nothing paradoxical is going on and everything is quite intuitive.
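A quick simulation sketch of the Single-Awakening problem (the naming is mine) confirms Lewis' value for this setting:

```python
import random

def simulate_single_awakening(iterations=100_000):
    """In the Single-Awakening problem, count Heads among Monday awakenings."""
    monday_awakenings = 0
    heads_on_monday = 0
    for _ in range(iterations):
        coin = random.choice(["Heads", "Tails"])
        if coin == "Heads":
            day = "Monday"
        else:
            day = random.choice(["Monday", "Tuesday"])  # only one awakening on Tails
        if day == "Monday":
            monday_awakenings += 1
            heads_on_monday += (coin == "Heads")
    print("P(Heads|Monday) ~", heads_on_monday / monday_awakenings)  # ~2/3

simulate_single_awakening()
```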

And it also becomes clear what exactly Lewis did wrong while attempting to model the Sleeping Beauty problem. He treated the Tails awakenings as if only one of them could happen, while according to the conditions of the Sleeping Beauty experiment both of them necessarily happen when the coin comes Tails. Let's add this to our insights:

  1. Either Thirdism is wrong or there is an update on awakening
  2. Either Thirdism is correct or one of the assumptions of Elga's proof is wrong
  3. The correct model has to account for the fact that there are twice as many awakenings on Tails as on Heads

Updating Model

As Updateless Models fail to model the Sleeping Beauty problem correctly, maybe there has to be an update on awakening? Our insights also seem to point in this direction. Let's investigate a model that does exactly that, and two arguments justifying such an update.

Elimination Argument

The idea is to describe the experiment as a whole, not just the outcomes where the Beauty is awake, and therefore to include Heads&Tuesday in the outcome space.

As both Monday and Tuesday happen during the experiment, their probabilities have to be equal:

P(Monday) = P(Tuesday) = 1/2

And the coin is fair:

P(Heads) = P(Tails) = 1/2

Therefore:

P(Heads&Monday) = P(Heads&Tuesday) = P(Tails&Monday) = P(Tails&Tuesday) = 1/4

What happens when the Beauty awakens? She learns that it's not Heads&Tuesday, of course! This outcome is eliminated from the probability space and she updates to:

P(Heads&Monday|Awake) = P(Tails&Monday|Awake) = P(Tails&Tuesday|Awake) = 1/3

A lot of people find this argument quite persuasive. It invokes intuitions from a simple probability theory problem like:

Two coins are tossed, and a specific event happens unless both of them come Heads. The event has happened. What is the probability that the first coin came Heads?

And so it may give a deceptive sense of clarity, a sense that the Sleeping Beauty problem is just like that. However, if this argument were applicable in such a setting, it would prove too much. More specifically, it would follow that 1/3 is also the correct answer in the Single-Awakening problem.

Indeed, all we need to construct the Elimination Argument for Sleeping Beauty is a fair coin and the fact that both Monday and Tuesday definitely happen during the experiment. Both of these conditions are also satisfied by the Single-Awakening problem, and yet 1/3 is clearly a wrong answer to it.

Frequency Argument

To give a better justification for the Updating Model we need to be more rigorous. Let's construct a proper Bayesian update for an awakening. As we remember, Lewis' model doesn't account for the fact that there are twice as many awakenings on Tails as on Heads. Let's try to account for that by setting P(Awake|Tails) = 2 · P(Awake|Heads):

P(Heads|Awake) = P(Awake|Heads) · P(Heads) / (P(Awake|Heads) · P(Heads) + P(Awake|Tails) · P(Tails)) = P(Awake|Heads) / (P(Awake|Heads) + 2 · P(Awake|Heads)) = 1/3

This reasoning definitely doesn't apply to the Single-Awakening problem. But does it apply to Sleeping Beauty?

It's clear where such an update on awakening works perfectly: when there are Heads outcomes in which the Beauty is not awakened at all, for example in a setting such as:

If the coin is Tails you are to be awakened; if the coin is Heads there is only a 50% chance that you will be awakened.

There, being awakened is evidence in favor of Tails. But this is not the case for the Sleeping Beauty problem, where the Beauty is awakened in every iteration of the experiment, even when the coin is Heads. This brings us back to the core problem with the Updating Model applied to Sleeping Beauty: we are not supposed to add outcomes that the Beauty cannot possibly observe to the sample space with non-zero probability.

Otherwise, we get a situation where the Beauty is always a bit surprised to be awakened and updates in a predictable manner. She always receives the same evidence, she knows that she receives this particular evidence in every awakening of every experiment, and yet she is surprised anyway. Which is an obvious violation of Conservation of Expected Evidence.
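To make the contrast concrete, here is a simulation sketch (naming mine) of the modified setting above, where the Beauty is awakened with only a 50% chance on Heads, next to the actual Sleeping Beauty setup, where an awakening happens in every iteration:

```python
import random

def modified_setting(iterations=100_000):
    """Awakened for sure on Tails, with only 50% probability on Heads."""
    awake_iterations = 0
    heads_when_awake = 0
    for _ in range(iterations):
        coin = random.choice(["Heads", "Tails"])
        awake = True if coin == "Tails" else (random.random() < 0.5)
        if awake:
            awake_iterations += 1
            heads_when_awake += (coin == "Heads")
    return heads_when_awake / awake_iterations  # ~1/3: a lawful update

def sleeping_beauty(iterations=100_000):
    """In Sleeping Beauty the Beauty is awakened in every iteration, whatever the coin."""
    awake_iterations = 0
    heads_when_awake = 0
    for _ in range(iterations):
        coin = random.choice(["Heads", "Tails"])
        awake_iterations += 1  # the awakening happens regardless of the coin
        heads_when_awake += (coin == "Heads")
    return heads_when_awake / awake_iterations  # ~1/2: no iteration is eliminated

print("Modified setting, Heads frequency among iterations with an awakening:", modified_setting())
print("Sleeping Beauty, Heads frequency among iterations with an awakening:", sleeping_beauty())
```

In the first case, conditioning on an awakening eliminates some iterations and the update to 1/3 is lawful; in the second, no iteration is ever eliminated.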

Observer Problem

Once again we have an attempt to model the Sleeping Beauty problem. Once again it produces a counter-intuitive result, either contradicting the setting of the experiment or outright violating probability theory. Once again, philosophers notice that something is wrong and come up with a clever-sounding argument for why it's fine. Here is a recent example from Bayesian Beauty by Silvia Milano:

Before we move on, I want to consider two worries that might arise in relation to my construction of the state space Ω′ to represent the Sleeping Beauty problem. One worry is that one of the elements of Ω′, namely ws2, is not compatible with Beauty's evidence at any point in time. Since Beauty is asleep in ws2, she could never consciously experience being in that state. In other words, we could say that ws2 is a ‘blind’ state: Beauty could never learn that this is her current state, and any evidence that she might have while she is awake automatically rules out ws2. But, then, maybe we should not consider ws2 to be a genuine epistemic possibility for Beauty, and it should not be included in the state space Ω′ at all. Another, perhaps related, worry is that the state space Ω′ is only available to Beauty when she wakes up, but not before, as none of the states in Ω′ are compatible with Beauty's evidence before being put to sleep, when she is certain about the current time. This is a serious worry because, if Ω′ is not always available to Beauty, then it would be unclear in what sense we can speak of her having prior probabilities relative to Ω′, upon which she may conditionalize after learning new evidence during the course of the experiment.

Let us start by addressing the first worry, that ws2 may not be a genuine epistemic possibility for Beauty, since she could never learn that she is in that state. First of all, there are some prima facie objections to the claim that we could not assign positive probability to ‘blind’ states that are logically possible. Consider the following case. Suppose you are preparing to leave for a journey, and buying travel insurance brings you to mind the possibility that your flight will crash. Given what you believe about the chances of an accident, you assign a positive probability to the possibility of your own death. This, however, is not a proposition that you are in a position to ever learn. So, in some cases at least, it seems plausible to assign a positive probability to ‘blind’ states.

Further to this, whether we should include ‘blind’ states such as ws2 in the state space and whether they should be assigned a positive probability are two issues that can be kept separate. In the construction of Ω′, and in what follows, I assume that the state space contains all the logically possible states, including those that (like ws2) may never represent live epistemic possibilities for Beauty.

I believe I've already resolved this particular confusion in Conservation of Expected Evidence and Random Sampling in Anthropics. The fact that you can't observe yourself dead doesn't mean that you can't expect yourself to die. You can assign non-zero probability to a plane crash because planes do, very rarely, crash, so you can expect it to happen. But, by the definition of the experiment, Sleeping Beauty never awakens during Heads&Tuesday, so she can't expect it to happen and shouldn't assign non-zero probability to such an outcome.

Anyway, at this point, I hope you can guess where all of this is going. Once again, the model in question is totally valid, just for a different probability theory problem, which I'm calling the Observer problem:

You were hired to work as an observer for one day in a laboratory which is conducting the Sleeping Beauty experiment. You don't know whether it's the first day of the experiment or the second, but you can see whether the Beauty is awake or not.

Here, it's immediately obvious that the model is sound because nothing counter-intuitive is happening.

Unlike the Beauty, such an observer can expect the outcome Heads&Tuesday. In a repeated experiment, when an observer arrives either on Monday or on Tuesday, there will naturally be iterations where the Beauty is awake and iterations where she is not. The observer, contrary to the Beauty herself, can expect that the Beauty will not be awake. Therefore, when the observer learns that the Beauty is awake, they receive new evidence and lawfully update in favor of Tails.
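A simulation sketch of the Observer problem (naming mine), where the observer arrives on a random day and conditions on finding the Beauty awake:

```python
import random

def simulate_observer(iterations=100_000):
    """Observer arrives on a random day; update on seeing the Beauty awake."""
    observations_awake = 0
    tails_when_awake = 0
    for _ in range(iterations):
        coin = random.choice(["Heads", "Tails"])
        day = random.choice(["Monday", "Tuesday"])
        awake = (day == "Monday") or (coin == "Tails")  # asleep only on Heads&Tuesday
        if awake:
            observations_awake += 1
            tails_when_awake += (coin == "Tails")
    print("P(Tails|Beauty awake) ~", tails_when_awake / observations_awake)  # ~2/3

simulate_observer()
```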

No need to come up with clever arguments about plane crashes, no need for extra justifications. When the model actually fits the setting it's immediately clear.

And it's also clear what the Updating Model is doing wrong with Sleeping Beauty. It makes the opposite mistake to the one Lewis' model makes: it accounts for two awakenings on Tails but doesn't account for the fact that the Beauty is awakened on Heads no matter what, claiming that P(Awake|Heads) = 1/2, which is true for the Observer problem but not for Sleeping Beauty. Let's add this to our list of insights.

  1. Either Thirdism is wrong or there is an update on awakening
  2. Either Thirdism is correct or one of the assumptions of Elga's proof is wrong
  3. The correct model has to account for the fact that there are twice as many awakenings on Tails as on Heads
  4. The correct model has to account for the fact that there is always one awakening on Heads

Towards the Correct Model

Now it may look as if we've just walked ourselves into a corner.

On the one hand, Sleeping Beauty is awakened in every experiment where the coin lands Heads, so:

P(Awake|Heads) = 1

and, since she is likewise awakened in every experiment where the coin lands Tails, P(Awake) = 1 and no update can happen:

P(Heads|Awake) = P(Awake|Heads) · P(Heads) / P(Awake) = P(Heads) = 1/2

On the other hand, there are twice as many awakenings on Tails, so:

P(Awake|Tails) = 2 · P(Awake|Heads)

and a Bayesian update on awakening gives:

P(Heads|Awake) = P(Awake|Heads) · P(Heads) / (P(Awake|Heads) · P(Heads) + P(Awake|Tails) · P(Tails)) = 1/3

Our insights claim that both initial statements have to be true at the same time. Yet the chains of reasoning they produce are obviously incompatible. Is it a true paradox, then? No wonder we can't come up with the right model! Is Sleeping Beauty just an inherently unsolvable problem within the realm of probability theory?

Beware this way of thinking: it produces mysterious answers.

This is definitely a crux, though: an actual probability-theoretic disagreement between updateless and updating models, and thus between halfism and thirdism. It can't be dismissed just by proclaiming two valid answers. There has to be a mistake in one of these chains of reasoning.

And there is. A very simple one, actually. As probability can't exceed 1, the statement P(Awake|Tails) = 2 · P(Awake|Heads) is only valid when P(Awake|Heads) ≤ 1/2, which isn't the case for the Sleeping Beauty problem, where P(Awake|Heads) = 1. So the whole second chain of reasoning, which justifies the update on awakening, crumbles.

Applying an Updating Model to Sleeping Beauty requires a particular sleight of hand: to ignore Kolmogorov's third axiom for a moment and then hide this fact by renormalizing the probabilities. Which is obviously unlawful, but for some reason people tend not to notice. 

This doesn't mean that we should remove insight 3 from our list, mind you. It means that the correct model should account for it in some other way, not by assuming that P(Awake|Tails) = 2 · P(Awake|Heads). But in any case, this is a serious blow to thirdism in Sleeping Beauty. Apparently, it can be justified neither via an updateless model, nor via an update on awakening. Which gives us a good hint that the correct answer to the Sleeping Beauty problem is 1/2. Just a hint - not a definitive proof - but a hint nonetheless. And if we accumulate all these hints, we will be able to arrive at the actual solution.

But for that we will have to fix some core misconception that plagues all these failed models. An implicit assumption that all of them share and which is not true for the Sleeping Beauty problem.

Conclusion

The story of Sleeping Beauty is a story of the importance of noticing your own confusion. If your model gives you contradictory results, it means that you are doing something wrong - that some of the assumptions you made are not actually true.

In a saner world, Adam Elga, David Lewis and all the other philosophers would not have rationalized their confusion and would have noticed that the models they tried to apply to Sleeping Beauty do not actually fit the setting. Just imagine: what if, instead of coming up with reasons why probability theory works differently in this case and thus inventing the whole field of "anthropic reasoning", they had simply found the errors in their assumptions? How much less trouble we all would have.

In our world, we are stuck in a weird limbo. On one hand, there is thirdism, with no sound theoretical justification for updating, but somehow providing correct betting odds. On the other, there is updatelessness and halfism, which is sound in theory, but naive attempts at which produce testably wrong results. Both have their own weird implications. Then these two flawed approaches were generalized as SIA and SSA - their own ways to deal with "anthropic problems", both constantly leading to ridiculous conclusions. And so people just somehow choose between these two bad options, arguing about which one is even worse than the other, and deciding which issues they are ready to ignore.

Let's do something else instead. Let's see all these attempts to model Sleeping Beauty as what they are - educational failures. Let's figure out the wrong assumption in them, fix it and thus be able to construct the correct model. We already know a lot about what properties it's supposed to have. 

This is what I'm going to do in the next post.

The next post in the series is The Solution to Sleeping Beauty.

Comments

These are my thoughts on this.

  1. If you have N situations, it does not automatically mean they have the same probabilities. I call the mistake of not recognising this the Equiprobability mistake.

  2. Outcomes have to be mutually exclusive. So people make the mistake at the very beginning - at constructing the Ω set. Two of those situations are not mutually exclusive: one of them literally guarantees with 100% certainty that the other will happen. When you have a correct Ω, the probability of one outcome given any other is zero. To check whether outcomes are mutually exclusive, draw the branching-universe graph, imagine a single slice at a much later point in time (Sunday), and count how many parallel universes reached that point. You will find that there are only two, but thirders count the second one twice. No matter what situation you research, the nodes which you take as outcomes CAN NEVER BE CONSECUTIVE. If this were not an axiom, then I would be able to add "I throw a die" into the set of possible numbers that the die shows at the end, and I would get nonsense which is not an Ω: {I throw the die, die shows 1, die shows 2, shows 3, 4, 5, 6}. Thirders literally construct such an Ω and thus get 1/3 for an outcome, just like I would get a "1/7 chance" of getting a 6 if I were also using a corrupted Ω set.

  3. There is a table. I place on it two apples, a jar, a bin and a box. I put the first apple into the jar. I put the jar into the box. I put the second apple into the bin. A thirder comes along and starts counting: "How many apples in the jar? One. How many apples in the box? One. How many apples in the bin? One. So, there are 3 apples." And forgets that the apple in the jar and the apple in the box are THE SAME apple.

P(Monday|Tails)=P(Tuesday|Tails) is technically true, not "because two entities are equal", but because an entity is compared to itself! It is a single outcome, which is phrased differently by using consecutive events of the single outcome.

When the apple is in the jar, it is guaranteed to also be in the box, the same way the <Monday and Tails> situation guarantees <Tuesday and Tails>.

In terms of graphs, both situations are literally just the node sliding along the branch, not reaching any branching points.

Yes, you are completely correct.

Frankly, it's a bit bizarre to me that the absolute majority of people do not notice it, that we still do not have a consensus. As if people mysteriously lose the ability to apply basic probability theory reasoning when talking about "anthropical problems".

From my point of view, the problem statement mixes probabilities with credences. The coin flip is stated in terms of equal unconditional probability of outcomes. An awakening on Monday is not probabilistic in the same sense: it always happens. Sleeping Beauty has some nonzero subjective credence upon awakening that it is Monday now.

Let's use P(event) to represent the probabilities, and C(event) to represent subjective credences.

We know that P(Heads) = P(Tails) = 1/2. We also know that P(awake on Monday | Heads) = P(awake on Monday | Tails) = P(awake on Tuesday | Tails) = 1, and P(awake on Tuesday | Heads) = 0. That fully resolves the probability space. Note that "awake on Tuesday" is not disjoint from "awake on Monday".

The credence problem is underdetermined, in the same way that Bertrand's Paradox is underdetermined. In the absence of a fully specified problem we have symmetry principles to choose from to fill in the blanks, and they are mutually inconsistent. If we specify the problem further such as by operationalizing a specific bet that Beauty must make on awakening, then this resolves the issue entirely.

I think you are on the right path. Indeed, awakening on Monday is somehow different from the coin coming up a particular side. But the separation between probabilities and credences is not helpful. They have to be one and the same, otherwise something unlawful is going on.

If we specify the problem further such as by operationalizing a specific bet that Beauty must make on awakening, then this resolves the issue entirely.

No, it doesn't work like that. The halfer scoring rule counts per experiment, while the thirder one counts per awakening, but regardless of the bet proposed, both produce correct betting scores (unless we are talking about halfers who subscribe to Lewis' model, which is just wrong).

If there is only one bet per experiment, a thirder would think that while the probability of Tails is twice as large as Heads, it is compensated by the utilities of the bets: only one of the Tails outcomes is rewarded, not both. So they use equal betting odds, just as a halfer, who would think that both probability and utility are completely fair.

Likewise, if there is a bet on every awakening, a halfer will think that while the probabilities of Heads and Tails are the same, the Tails outcome is rewarded twice as much, so they will have betting odds favouring Tails, just as a thirder, for whom utilities are fair but the probability is in favour of Tails.

Betting just adds another variable to the problem; it doesn't make the existing variables more precise.
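For what it's worth, the equivalence of the resulting betting odds is easy to check numerically. A sketch (the encoding of the two betting protocols is my own):

```python
import random

def expected_profit(per_awakening: bool, tails_payout: float, iterations=100_000):
    """Average profit per experiment from a 1-unit bet on Tails with the given net payout.

    per_awakening=True: the bet is made and resolved at every awakening.
    per_awakening=False: the bet is made and resolved once per experiment.
    """
    total = 0.0
    for _ in range(iterations):
        coin = random.choice(["Heads", "Tails"])
        awakenings = 1 if coin == "Heads" else 2
        bets = awakenings if per_awakening else 1
        profit_per_bet = tails_payout if coin == "Tails" else -1.0
        total += bets * profit_per_bet
    return total / iterations

# One bet per experiment: even odds (net payout 1 per unit staked) are fair, ~0 profit.
print(expected_profit(per_awakening=False, tails_payout=1.0))
# One bet per awakening: even odds are not fair...
print(expected_profit(per_awakening=True, tails_payout=1.0))
# ...but 2:1 odds on Tails (net payout 0.5 per unit staked) are, ~0 profit.
print(expected_profit(per_awakening=True, tails_payout=0.5))
```

Both protocols pin down the same break-even odds (1:1 per experiment, 2:1 on Tails per awakening) without settling which probability-utility decomposition is the "right" one.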

We know that P(Heads) = P(Tails) = 1/2. We also know that P(awake on Monday | Heads) = P(awake on Monday | Tails) = P(awake on Tuesday | Tails) = 1, and P(awake on Tuesday | Heads) = 0. That fully resolves the probability space. Note that "awake on Tuesday" is not disjoint from "awake on Monday".

I'm going to present a correct model in the next post with all the detailed explanations, but I suspect you should be able to deduce it on your own. Suppose that all that you've written here is correct. What's stopping you from filling the blanks? What is P(Heads|awake on Monday)? If you know it, you should be able to calculate P(awake on Monday) and so on.


But the separation between probabilities and credences is not helpful. They have to be one and the same, otherwise something unlawful is going on.

I don't see why. If someone is messing with you, e.g. by wiping your memory, then your subjective credences could depart from objective probabilities.

In situations where you don't know about a (potential) memory wipe or, more generally, where you were lied to or not given all the necessary information about a setting, your probability estimate differs from the probability estimate of a person who knows all the relevant information. But I don't see a reason to call one "subjective credence" and the other "objective probability". They are just different estimates based on different available information.

In any case, it's not what is happening in the Sleeping Beauty problem, where the Beauty is fully aware that her memories are to be erased, so the point is moot.

Yes, I have filled in all the blanks, which is why I wrote "fully resolves the probability space". I didn't bother to list every combination of conditional probabilities in my comment, because they're all trivially obvious. P(awake on Monday) = 1, P(awake on Tuesday) = 1/2, which is both obvious and directly related to the similarly named subjective credences of which day Beauty thinks it is at a time of awakening.

By the way, I'm not saying that credences are not probabilities. They obey probability space axioms, at least in rational principle. I'm saying that there are two different probability spaces here, that it is necessary to distinguish them, and the problem makes statements about one (calling them probabilities) and asks about Beauty's beliefs (credences) so I just carried that terminology and related symbols through. Call them P_O and P_L for objective and local spaces, if you prefer.

Your frequentism is showing. Bayesian probabilities are subjective credences, not objective features of the universe.

I don't think the Elimination approach gives P(Heads|Awake) = 1/3 or P(Monday|Awake) = 2/3 in the Single Awakening problem. In that problem, there are 6 possibilities:

P(Heads&Monday) = 0.25

P(Heads&Tuesday) = 0.25

P(Tails&Monday&Woken) = 0.125

P(Tails&Monday&Sleeping) = 0.125

P(Tails&Tuesday&Woken) = 0.125

P(Tails&Tuesday&Sleeping) = 0.125

Therefore:

P(Heads|Awake)

= P(Heads&Monday) / (P(Heads&Monday) + P(Tails&Monday&Woken) + P(Tails&Tuesday&Woken))

= 0.5

And:

P(Monday|Awake)

= (P(Heads&Monday) + P(Tails&Monday&Woken)) / (P(Heads&Monday) + P(Tails&Monday&Woken) + P(Tails&Tuesday&Woken))

= 0.75

The Elimination argument is the logic of "there are four equiprobable mutually exclusive outcomes, but only 3 of them are compatible with the observation, so we update to 3 equiprobable outcomes", applied to the Sleeping Beauty problem. This logic itself is correct when there actually are four equiprobable mutually exclusive outcomes, like in a situation where two coins were tossed and you know that the resulting outcome is not Heads Heads.

So the argument tries to justify why the same situation holds for the Sleeping Beauty problem. It does so based on these facts:

  1. P(Heads) = P(Tails) - because the coin is fair
  2. P(Monday) = P(Tuesday) - because the experiment lasts for two days
  3. Days happen independently of the outcomes of the coin tosses

The thing is, it's not actually enough. And to show it, I apply this reasoning to another problem where all the same conditions are satisfied, yet the answer is definitely not 1/3.

What you've done is not an application of the Elimination argument to the Single-Awakening problem. You are trying to apply a version of the Updating model to the Single-Awakening problem. And just as the Updating model, when applied to Sleeping Beauty, actually solves the Observer problem, you are solving the Observer-Single-Awakening problem:

You were hired to work as an observer for one day in a laboratory which is conducting the Single-Awakening experiment. You don't know whether it's the first day of the experiment or the second, but you can see whether the Beauty is awake or not.

You seem to be doing it correctly. No update happens because P(Heads|Awake)=P(Tails|Awake). 

Therefore, it can't possibly be describing the Sleeping Beauty problem.

Why not? It accounts for the coin being fair in some other way, as you say.

If according to a mathematical model the unconditional probability of the coin being Heads equals 1/3, it requires quite some suspension of disbelief to claim that this model accounts for the coin being fair. And if it requires suspension of disbelief to justify that a model describes the problem, it's a pretty good hint that it actually doesn't.

One of the points of this post is to provide an opportunity to look at situations where a model clearly describes a problem (Elga's model and the No-Coin-Toss problem, Lewis' model and the Single-Awakening problem, the Updating model and the Observer problem) and then compare them with awkward attempts to stretch these models to fit the setting of Sleeping Beauty, which require you to constantly avert your eyes from all kinds of weirdness and inconsistencies.

Math doesn't have a formal way to prove when a model fits a problem, so in theory people can still cling to these models regardless. But I hope we know better than this.