Chris_Leong

Sequences

Linguistic Freedom: Map and Territory Revisited
INVESTIGATIONS INTO INFINITY

Comments

Dissolving Sleeping Beauty

I'm confused. I don't think I understand what it means to regard them as irreducibles instead of as a sample?

Dissolving Sleeping Beauty

I think halfers would say that, from Beauty's perspective, P(Heads) and P(Heads|I am awake now) mean the same thing. Any reasoning done by Beauty is based on the fact that "I am awake now".

Yes, at first glance it seems natural to assume this, but I see rejecting that claim as the only consistent way of developing the halfer view.

Any reasoning done by Beauty is based on the fact that "I am awake now"

There's a sense in which that's true: the fact that Beauty is awake is available for Beauty to use in whatever calculations they perform. However, this isn't the same as claiming they have to use it in a particular way.

P(Heads|I am awake now) doesn't just mean that "I am awake now" is available in Beauty's databank; it also indicates that instead of sampling coin flips we're sampling times when Beauty is awake. That's the subtle sleight of hand: the invisible shift in assumptions.
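For concreteness, here's a minimal simulation sketch (not part of the original comment; the function and variable names are illustrative) of the two sampling procedures, one per coin flip and one per awakening:

```python
import random

def sleeping_beauty(n_runs=100_000, seed=0):
    """Compare sampling per coin flip with sampling per awakening."""
    rng = random.Random(seed)
    heads_flips = 0        # heads counted once per run of the experiment
    awakenings = []        # the coin result recorded at every awakening
    for _ in range(n_runs):
        heads = rng.random() < 0.5
        if heads:
            heads_flips += 1
            awakenings.append("Heads")             # one awakening (Monday)
        else:
            awakenings.extend(["Tails", "Tails"])  # two awakenings (Monday, Tuesday)
    p_heads_per_flip = heads_flips / n_runs
    p_heads_per_awakening = awakenings.count("Heads") / len(awakenings)
    return p_heads_per_flip, p_heads_per_awakening

print(sleeping_beauty())  # approximately (0.5, 0.333)
```

Both numbers are computed with full knowledge that Beauty is awake; they differ only in what gets sampled.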

In the end I do agree they are answering different questions. But I still think halfers are, in some sense, more subjective.

A poorly developed halfer position is more subjective. Although maybe most halfers do develop it that way; I don't know.

Dissolving Sleeping Beauty

Okay, well given that, I agree that if Alex is woken up alongside Beauty he should change his probability of heads to 1/3. Of course, this isn't P(Heads) anymore, but P(Heads|Beauty awake).

So I guess your issue, then, is that Beauty also says that P(Heads)=1/2 whilst P(Heads|Beauty awake) = 1/3, when we might expect them to be the same thing. After all, Beauty's value of P(Heads) is based on all the information Beauty possesses, and Beauty knows that they are awake.

Well, I guess if we adopt what I've called the subjectivist intuitions, then the above argument follows. But if we're following the more objectivist intuitions, then probability is only defined when there is no double counting going on; or, if double counting is going on, we add in an adjustment factor to undo it.

The thing to notice above is that P(Heads) and P(Heads|Beauty awake) are constructed over different sample spaces. The first is over just {Heads, Tails}, while the latter is over {(Heads, Mon), (Heads, Tues), (Tails, Mon), (Tails, Tues)} (note: (Heads, Tues) never actually occurs).

And that's why, from the objectivist perspective, they result in different probabilities: they're asking different questions! In the first, knowledge of Beauty's awakeness state is taken to be relevant only insofar as it helps us remove the distortions of Beauty sometimes being asleep. While in the second, we're intentionally ignoring the cases when Beauty is asleep.
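To spell the two questions out as a calculation (this worked example is mine, not part of the original exchange), weight each coin result by the expected number of awakenings it produces:

```python
from fractions import Fraction

# Expected number of awakenings per run of the experiment, by coin result.
expected_awakenings = {
    "Heads": Fraction(1, 2) * 1,  # P(Heads) = 1/2, one awakening (Monday)
    "Tails": Fraction(1, 2) * 2,  # P(Tails) = 1/2, two awakenings (Monday, Tuesday)
}

# First question: sample coin flips, so P(Heads) is just the coin's bias.
p_heads = Fraction(1, 2)

# Second question: sample awakenings, so each coin result is weighted by how
# many awakenings it contributes before renormalising.
p_heads_given_awake = expected_awakenings["Heads"] / sum(expected_awakenings.values())

print(p_heads)              # 1/2
print(p_heads_given_awake)  # 1/3
```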

Dissolving Sleeping Beauty

Thanks, this comment is useful; if I had read it before writing the post, I would have been able to write a better post :-).

I agree that at first glance the halfer position appears more "subjective" and the thirder more "objective", but I would also say that surface-level appearances can be deceiving.

Before I respond, I'd like to ask you to clarify how Alex is woken up. If the coin is heads, is he always woken up on Monday? And if the coin is tails, does he have a 50% chance of being woken up on Monday and a 50% chance of being woken up on Tuesday?

Or do you mean that whenever Sleeping Beauty is woken up, he has a 1/3 chance of waking as well, so that on some iterations Alex is never woken up at all?

Or does it work some other way?

Dissolving Sleeping Beauty

"And only one of them is relevant to rational decision-making" - okay, that's a significant difference!

Dissolving Sleeping Beauty

From your description here, it sounds like we're arguing essentially the same thing, but you think it's different? How?

Covid 1/21: Turning the Corner

Will all comments by you be green or just a special class of mod comments?

Excerpts from a larger discussion about simulacra

Given all of the discussion around simulacra, I would be disappointed if this post wasn't updated in light of it.

mAIry's room: AI reasoning to solve philosophical problems

I've already written a comment suggesting that this post needs a summary, so that you can benefit from it even if you don't feel like wading through a bunch of technical material.
