Regarding cloning, we have very good reason to think that good-enough memory erasure is possible, because this sort of thing happens in reality - we do forget things, and we forget all events after some traumas. Moreover, there are plausible paths to creating a suitable drug. For example, it could be that newly-created memories in the hippocampus are stored in molecular structures that do not have various side-chains that accumulate with time, so a drug that just destroyed the molecules without these side-chains would erase recent memories, but not older...
I don't think the issue of whether "cloning" is possible is actually crucial for this discussion, but since this relates to a common lesswrong sort of assumption, I'll elaborate on it. I do think that making a sufficiently accurate copy is probably possible in principle (but obviously not now, and perhaps never, in practice). However, I don't think this has been established. It seems conceivable that quantum effects are crucial to consciousness - certainly physics gives us no fundamental reason to rule this out. If this is true...
"The probability of me being a man" in the anthropic sense means the probability of me being born into this world as a human male. Or it can be seen as the probability of my soul getting embodied as a human male. ... even though "I'm a man" is a valid statement, "the probability of me being a man" does not exist.
Here, you have imported some highly questionable ideas, which would seem to be not at all necessary for analysing the Sleeping Beauty problem. This is my core objection to how Sleeping Beauty is used - it's ...
I'm afraid this makes no sense to me. I think this comes from my not understanding how the concept of a "reference class" can possibly work. So I have no idea what it could mean to "observe the world from the perspective of any human that is male", if observing from that "perspective" is supposed to change the probability (or render the probability meaningless) of some statement that I would take to be about the actual, real, world.
As I've pointed out before, the Sleeping Beauty problem is only barely a thought exp...
This is because in the context of the Sleeping Beauty problem the probabilities of “today being Monday/Tuesday” do not exist. In other words, “what’s the probability of today being Monday/Tuesday?” is an invalid question.
I think here you depart from common-sense realism, in favour of what I am not sure.
From a common-sense standpoint, it is meaningful for Beauty to consider the probability of it being Monday because she can decide to batter down the door to her room, go outside, and ask someone she encounters what day of the week it is. That she actually has no desire to do this does not render the question meaningless.
What I mean by "someone with those memories exists" is just that there exists a being who has those memories, not that I in particular have those memories. That's the "non-indexical" part of FNC. Of course, in ordinary life, as ordinarily thought of, there's no real difference, since no one but me has those memories.
I agree that one could imagine conditioning on the additional piece of "information" that it's me that has those memories, if one can actually make sense of what that means. But one of the points ...
I can sort of see what you're getting at here, but to me needing to ask "what question was being asked?" in order to do a correct analysis is really a special case of the need to condition on all information. When we know "the older child in that family is a boy", we shouldn't condition on just that fact when we actually know more, such as "I asked a neighbour whether the older child is a boy or girl, and they said 'a boy'", or "I encountered a boy in the family and asked if they were the older one, a...
I'm not sure what you're saying in this reply. I read your original post as using the island problem to try to demonstrate that there are situations in which using probabilities conditional on all the available information gives the wrong answer - that to get the right answer, you must instead ignore "ad hoc" information (though how you think you can tell which information is "ad hoc" isn't clear to me). My reply was pointing out that this example is not correct - that if you do the analysis correctly, you do get the ri...
As you may know, my Full Nonindexical Conditioning (FNC) approach (see http://www.cs.utoronto.ca/~radford/anth.abstract.html) uses the third-person perspective for all inference, while emphasizing the principle that all available information should be used when doing inference. In everyday problems, a third-person approach is not distinguishable from a first-person approach, since we all have an enormous amount of perceptions, both internal and external, that are with very, very high probability not the same as those of any other person. This approach lead...
But the thing is, you can't call it "0.5 credence" and have your credence be anything like a normal probability. The Halfer will assign probability 1/2 to Heads and Monday, 1/4 to Tails and Monday, and 1/4 to Tails and Tuesday. Since only the guess on Monday is relevant to the payoff, we can ignore the Tuesday possibility (in which the action taken has no effect on the payoff), and see that a halfer would have a 2:1 preference for Heads. In contrast, a Thirder would give 1/3 probability to Heads and Monday, 1/3 to Tails and Monday, and ...
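As a sanity check on this arithmetic, here is a minimal sketch (the dictionaries and function names are illustrative, not from any post in the thread) of the expected payoff of each guess when only the Monday guess pays a dollar:

```python
from fractions import Fraction as F

# Credences over (coin, day) pairs that Beauty might hold on waking.
halfer = {("H", "Mon"): F(1, 2), ("T", "Mon"): F(1, 4), ("T", "Tue"): F(1, 4)}
thirder = {("H", "Mon"): F(1, 3), ("T", "Mon"): F(1, 3), ("T", "Tue"): F(1, 3)}

def expected_payoff(credence, guess):
    # $1 if the guess matches the coin, but only the Monday answer pays;
    # the Tuesday guess contributes nothing to the expectation.
    return sum(p for (coin, day), p in credence.items()
               if day == "Mon" and coin == guess)

for name, cred in [("halfer", halfer), ("thirder", thirder)]:
    print(name, expected_payoff(cred, "H"), expected_payoff(cred, "T"))
```

The halfer credences give 1/2 for guessing Heads versus 1/4 for Tails, the 2:1 preference described above, while the thirder credences give 1/3 either way, i.e. indifference.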
Well of course. If we know the right action from other reasoning, then the correct probabilities better lead us to the same action. That was my point about working backwards from actions to see what the correct probabilities are. One of the nice features about probabilities in "normal" situations is that the probabilities do not depend on the reward structure. Instead we have a decision theory that takes the reward structure and probabilities as input and produces actions. It would be nice if the same nice property held in SB-type problems, ...
A big reason why probability (and belief in general) is useful is that it separates our observations of the world from our decisions. Rather than somehow relating every observation to every decision we might sometime need to make, we instead relate observations to our beliefs, and then use our beliefs when deciding on actions. That's the cognitive architecture that evolution has selected for (excepting some more ancient reflexes), and it seems like a good one.
The linked post by ata is simply wrong. It presents the scenario where
Each interview consists of Sleeping Beauty guessing whether the coin came up heads or tails. After the experiment, she will be given a dollar if she was correct on Monday.
In this case, she should clearly be indifferent (which you can call “.5 credence” if you’d like, but it seems a bit unnecessary).
But this is not correct. If you work out the result with standard decision theory, you get indifference between guessing Heads or Tails only if Beauty's subjective probability of Heads is...
The standard question is what probability Beauty should assign to Heads after being woken (on Monday or Tuesday), and not being told what day it is, given that she knows all about the experimental setup. Of course if you change the setup so that she's asked a question on Monday that she isn't on Tuesday, then she will know what day it is (by whether the question was asked or not) and the answer changes. That isn't an interesting sense in which the answer 1/2 is correct. Neither is it interesting that 1/2 is the answer to the question of what probability the person flipping the coin should assign to Heads, nor to the question of what is seven divided by two minus three...
We agree about what the right actions are for the various reward structures. We can then try to work backwards from what the right action is to what probability Beauty should assign to the coin landing Heads after being wakened, in order that this probability will lead (by standard decision theory) to her taking the action we've decided is the correct one.
For your second scenario, Beauty really has to commit to what to do before the experiment, which means this scheme of working backwards from correct decision to probability of Heads after wakening ...
Your second scenario introduces a coordination issue, since Beauty gets nothing if she guesses differently on Monday and Tuesday. I'm still thinking about that.
If you eliminate that issue by saying that only Monday guesses count, or that only the last guess counts, you'll find that Beauty has to assign probability 1/3 to Heads in order to do the right thing by using standard decision theory. The details are in my comment on the post at https://www.lesswrong.com/posts/u7kSTyiWFHxDXrmQT/sleeping-beauty-resolved#aG739iiBci9bChh5D
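That working-backwards step can be sketched in a few lines, under the assumption that Beauty splits whatever credence she gives to Tails equally between Monday and Tuesday (the function names are illustrative): solve for the credence p in Heads at which standard decision theory makes her indifferent between the two guesses when only the Monday guess counts.

```python
from fractions import Fraction as F

def payoff_heads(p):
    # All of the Heads credence p sits on Monday, the only day that pays.
    return p

def payoff_tails(p):
    # The Tails credence 1 - p is split equally between Monday and Tuesday;
    # only the Monday half contributes to the payoff.
    return (1 - p) / 2

# Scan exact candidate credences for the break-even point.
candidates = [F(k, 12) for k in range(13)]
indifferent = [p for p in candidates if payoff_heads(p) == payoff_tails(p)]
print(indifferent)  # [Fraction(1, 3)]
```

The unique break-even credence is p = 1/3 (solving p = (1 - p)/2), matching the figure above.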
Or you can say tha...
Probability is meant to be a useful mental construct, that helps in making good decisions. There's a standard framework for doing this. If you apply it, you find that Beauty makes good decisions only if she assigns a probability of 1/3 to Heads when she is woken. There is no sense in which 1/2 is the correct answer, unless you choose to redefine what probabilities mean, along with the method of using them to make decisions, which would be nothing but a pointless semantic distraction.
I agree that the word "Paradox" was some sort of hype. But I don't think anyone believed it. Nobody plugged in their best guesses for all the factors in the Drake equation, got the result that there should be millions of advanced alien races in the galaxy, of which we see no sign, and then said "Oh my God! Science, math, and even logic itself are broken!" No. They instead all started putting forward their theories as to which factor was actually much smaller than their initial best guess.
I've noticed something that may explain some of the confusion. You say above:
...halvers don't believe that you can answer: "If Sleeping Beauty is awake, what are the chance that the coin came up heads?" without de-indexicalising the situation first.
But in the Sleeping Beauty problem as usually specified, the question is what probability Beauty should assign to Heads, not what some external observer should think she should be doing. Beauty is in no doubt about who she is (eg, she's the person who just stubbed her toe on this bedpost here) even though she doesn't know what day of the week it is.
Sleeping Beauty with cookies is an almost-realistic situation. I could easily create an analogous situation that is fully realistic (e.g., by modifying my Sailor's Child problem). Beauty will decide somehow whether or not to eat a cookie. If Beauty has no rational basis for making her decision, then I think she has no rational basis for making any decision. Denial of the existence of rationality is of course a possible position to take, but by its nature it is not one that can profitably be discussed rationally.