If you believe you're a Boltzmann brain, you shouldn't even be asking what you should do next, because you believe that in the next microsecond you won't exist. If you survive any longer than that, that is extremely strong evidence that you're not a Boltzmann brain; so conditional on your actually being able to choose what to do in the next hour, it still makes sense to choose to lift weights.
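The update being described can be sketched as a toy Bayes calculation. The prior and the likelihoods below are made-up illustrative numbers, not claims about the actual probabilities:

```python
# Toy Bayesian update: how surviving for an hour affects P(Boltzmann brain).
# All numbers here are illustrative assumptions.

prior_bb = 0.5                  # assumed prior that you are a Boltzmann brain
p_survive_given_bb = 1e-12      # a Boltzmann brain almost surely dissolves at once
p_survive_given_not_bb = 1.0    # an ordinary embodied brain survives the hour

# Bayes' rule: P(BB | survived) = P(survived | BB) P(BB) / P(survived)
p_survived = (p_survive_given_bb * prior_bb
              + p_survive_given_not_bb * (1 - prior_bb))
posterior_bb = p_survive_given_bb * prior_bb / p_survived

print(posterior_bb)  # ~1e-12: surviving is overwhelming evidence against being a BB
```

Almost any non-extreme prior gives the same qualitative result, since the likelihood ratio for survival is astronomical.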
LessWrong consists of people who like to think deeply about what the world is like, and how we can understand it better; what goals we should have and how we can change ourselves to achieve them; and what goals humanity should have, and how to build an AI that helps humanity achieve them.
On the contrary, the probability that Beauty has some particular set of Monday/Tuesday experiences is twice as great if she is woken on both days as it is if she is woken only on Monday.
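One way to see the doubling is a Monte Carlo sketch of the standard setup (heads: one Monday awakening; tails: Monday and Tuesday awakenings), counting what fraction of awakenings follow heads:

```python
import random

# Monte Carlo sketch of the Sleeping Beauty problem.
# Heads: Beauty is woken once (Monday). Tails: woken twice (Monday and Tuesday).
random.seed(0)

heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    wakings = 1 if heads else 2   # tails produces twice as many awakenings
    total_awakenings += wakings
    if heads:
        heads_awakenings += wakings

print(heads_awakenings / total_awakenings)  # ≈ 1/3: heads per awakening
```

Per coin flip, heads has probability 1/2; per awakening, it has probability 1/3, because tails generates twice as many awakenings.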
Ok, this sentence made everything snap into place for me. Thanks. The Sailor's Child problem is also helpful. This has been an interesting journey. I was originally a one-thirder based on betting arguments, and then became convinced from the original post that that is indeed a red herring, and so momentarily became a halfer, and now that you've clarified this I'm back to being a thirder to within epsilon.
I'm having a really hard time pinpointing the error in this analysis, but something is still just not right. There is no indexical uncertainty in locating the event of the coin being flipped on Tuesday. Anything Beauty receives can only count as evidence about that event if the probability of receiving it differs depending on whether the coin lands heads or tails. But any stream of bits that Beauty receives has the same probability either way, so her probability for that event simply cannot be updated in any direction. Where does this reasoning go wrong?
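The "same probability either way" claim amounts to a likelihood ratio of 1, under which Bayes' rule leaves the posterior equal to the prior. A minimal sketch, assuming the bit is generated independently of the coin:

```python
# If a received bit has the same probability under heads and tails,
# Bayes' rule cannot move the probability of heads.

prior_heads = 0.5
p_bit_given_heads = 0.5   # assumption: the bit is uniform regardless of the coin
p_bit_given_tails = 0.5

posterior_heads = (p_bit_given_heads * prior_heads) / (
    p_bit_given_heads * prior_heads + p_bit_given_tails * (1 - prior_heads)
)

print(posterior_heads)  # 0.5: unchanged, since the likelihood ratio is 1
```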
Both of the "former" views are positions I held in the past (the one about light when I was very young, and the one about morality much more recently). I agree that the latter ones are right; I'm not sure what I was thinking when I wrote for the morality example that the latter is less correct than the former.
This is a very enlightening post. But something doesn't seem right. How can receiving a random bit cause Beauty to update her probability, as in the case where Beauty is an AI? If Beauty already knows that she will update her probability no matter what bit she receives, then shouldn't she already update her probability before receiving the bit?
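The tension raised here is the law of conservation of expected evidence: averaged over the possible bits, the expected posterior must equal the prior, so an agent cannot coherently expect every possible bit to push her beliefs the same way. A small numeric check, with made-up likelihoods under which bit 1 and bit 0 push in opposite directions:

```python
# Conservation of expected evidence: E[posterior] over observations = prior.
# The likelihood numbers are illustrative assumptions.

prior = 0.5
p_bit1_given_h = 0.8      # hypothetical: bit 1 is more likely under H
p_bit1_given_not_h = 0.4

def posterior(p_obs_given_h, p_obs_given_not_h):
    return p_obs_given_h * prior / (
        p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    )

p_bit1 = p_bit1_given_h * prior + p_bit1_given_not_h * (1 - prior)  # P(bit = 1)
post1 = posterior(p_bit1_given_h, p_bit1_given_not_h)               # update on bit 1
post0 = posterior(1 - p_bit1_given_h, 1 - p_bit1_given_not_h)       # update on bit 0

expected_posterior = p_bit1 * post1 + (1 - p_bit1) * post0
print(expected_posterior)  # equals the prior, 0.5 (up to float rounding)
```

So if Beauty knows in advance that every possible bit would raise her credence, something has gone wrong in the model: the updates over all possible observations must cancel out in expectation.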
The quote at the beginning of the post is from Jesus' Parable of the Talents.
The problem with quasars is that they emit that much power only along their axes, not in every direction.
There is at least one situation in which you might expect something different under MWI than under pilot-wave: quantum suicide. If you rig a gun so that it kills you if a photon passes through a half-silvered mirror, then under MWI (and some possibly reasonable assumptions about consciousness) you would expect the photon to never pass through the mirror no matter how many experiments you perform, but under pilot-wave you would expect to be dead after the first few experiments.
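On the single-world reading, survival probability halves with each trial; a quick sketch of how unlikely surviving n rounds is without anthropic selection:

```python
# Probability of surviving n rounds of the quantum suicide experiment
# if each round independently kills you with probability 1/2 (single-world view).

for n in (1, 5, 10, 20):
    print(n, 0.5 ** n)

# After 20 rounds, survival probability is below one in a million, which is
# why continued survival would look like evidence favoring MWI (given the
# assumptions about consciousness mentioned above).
```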