All of florijn's Comments + Replies

I don't think you fully understand my argument. It is not about being offered a wager or not, because that certainly would alter the experiment and make it very easy to decide whether halfer or thirder reasoning is the way to go.

Instead, it is about the fundamental principle the thirder's argument is based on: the anthropic principle Elga calls his Principle of Indifference. It is the key element used to justify Beauty's credence drop from 1/2 to 1/3 on waking up. This credence drop is in serious need of justification, because Beauty learns nothing new when…

I agree with the author of this article. After having done a lot of research on the Sleeping Beauty Problem, which was the topic of my bachelor's thesis (philosophy), I came to the conclusion that anthropic reasoning is wrong in the Sleeping Beauty Problem. I will briefly explain my argument below:

The principle that Elga uses in his first paper to validate his argument for 1/3 is an anthropic principle he calls the Principle of Indifference:

"Equal probabilities should be assigned to any collection of indistinguishable, mutually exclusive and exhaustive events."

The Principle of Indiffer…

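For reference, here is a sketch of how Elga's principle is standardly used to get from the statement above to the 1/3 answer (a reconstruction of the textbook thirder derivation, not a quotation from the truncated comment). Write H1 for "Heads and Monday", T1 for "Tails and Monday", T2 for "Tails and Tuesday":

```latex
% Sketch of the standard thirder derivation via Elga's Principle of Indifference.
\begin{align*}
P(T_1) &= P(T_2)
  && \text{(indifference: the two Tails awakenings are indistinguishable)} \\
P(H_1 \mid \text{Monday}) &= P(T_1 \mid \text{Monday})
  && \text{(given Monday, the toss is a fair coin, possibly not yet made)} \\
\Rightarrow\; P(H_1) &= P(T_1)
  && \text{(both conditioned on the same event)} \\
P(H_1) + P(T_1) + P(T_2) &= 1
  && \text{(mutually exclusive and exhaustive)} \\
\Rightarrow\; P(H_1) = P(T_1) &= P(T_2) = \tfrac{1}{3}
\end{align*}
```
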
0 · JeffJo · 10y
No, they aren't. "Indistinguishable" in that definition does not mean "can't tell them apart." It means that the cases arise through equivalent processes. That's why the PoI applies to things like dice, whether or not what is printed on each side is visually distinguishable from the other sides. To make your cases equivalent, so that the PoI applies to them, you need to flip the second coin after the first lands on Heads as well. But you wake SB at 8:00 regardless of the second coin's result. You now have six cases to which the PoI applies, counting the "8:00 Monday" case twice, and each has probability 1/6.
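
If it helps to see JeffJo's six cases laid out, here is a minimal sketch in Python. It reconstructs the setup from his comment, which is an assumption: a second coin is flipped regardless of how the first lands, Beauty wakes at 8:00 Monday either way, and Tails on the first coin adds a Tuesday awakening.

```python
from fractions import Fraction

# The six awakening cases JeffJo describes, reconstructed from his comment
# (an assumption about the setup): (first coin, second coin, day).
# First coin Heads -> Beauty wakes at 8:00 Monday only, whatever the second coin shows.
# First coin Tails -> Beauty wakes on both Monday and Tuesday.
cases = [
    ("H", "H", "Mon"),
    ("H", "T", "Mon"),
    ("T", "H", "Mon"),
    ("T", "H", "Tue"),
    ("T", "T", "Mon"),
    ("T", "T", "Tue"),
]

# The PoI assigns each case probability 1/6; the "8:00 Monday" case under
# Heads is counted twice, once per second-coin result.
p = Fraction(1, len(cases))
p_first_heads = sum(p for first, _, _ in cases if first == "H")
print(p_first_heads)  # 1/3
```
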
1 · pianoforte611 · 11y
I don't think that's the same experiment. Suppose that each time Sleeping Beauty is woken up, she is offered the opportunity to wager on whether the first coin ended up heads. If she wins the wager she gets $1.00. Clearly her expected winnings from betting tails are still $1.00, and her expected winnings from betting heads are still $0.50.

This strikes me as a rather Dark Artsy/debater-type strategy. I thought that LessWrong had a social norm against that sort of thing.

I don't understand your incredulity. Suppose that in each one of those million events, she is offered the wager. Clearly wagering tails is the winning move by far.
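
A quick simulation makes the payoff asymmetry concrete (a sketch assuming the wager as described above: $1 per winning wager, one awakening on heads, two on tails):

```python
import random

# Monte Carlo estimate of Beauty's expected winnings per run of the
# experiment, under the wager described above: $1 for each awakening at
# which her bet matches the first coin. Heads -> 1 awakening, Tails -> 2.
def expected_winnings(bet: str, trials: int = 100_000) -> float:
    total = 0.0
    for _ in range(trials):
        coin = random.choice("HT")
        awakenings = 1 if coin == "H" else 2
        if bet == coin:
            total += awakenings  # $1 per winning wager
    return total / trials

print(expected_winnings("T"))  # ~1.00
print(expected_winnings("H"))  # ~0.50
```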

2 · ike · 9y
This is actually correct, assuming her memory is wiped after each time.

No! Her credence in "this is the first (or xth) time I've been awakened in Tails" is 1/1,000,000. Her credence in "this is Tails" is ~1.

What probability would you put on it being the xth question, x ranging from 1 to 1,000,000?
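
To make the numbers ike cites explicit, here is the thirder arithmetic for this variant (a sketch assuming one awakening on Heads and N = 1,000,000 awakenings on Tails, all treated as equiprobable by the PoI):

```latex
% Thirder credences in the million-awakening variant (assumes one awakening
% on Heads, N = 10^6 on Tails, all awakenings equiprobable).
\begin{align*}
P(\text{Tails} \mid \text{awake}) &= \frac{N}{N+1} = \frac{10^{6}}{10^{6}+1} \approx 1 \\
P(\text{this is the $x$th awakening} \mid \text{awake}) &= \frac{1}{N+1} \approx \frac{1}{10^{6}}
\end{align*}
```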