You're looking at it all wrong: "you" are not "in" any simulation or universe. There exist instantiations of the algorithm — including the fact that it remembers winning the lottery — which is you, with certainty (for our purposes), across various universes, simulations, Boltzmann brains, and other things. What you need to do depends on what you want ALL instances to do. It doesn't matter how many simulations of you are run, or what measure they have, or anything else like that, if your decisions within them don't matter for the multi...

Yes, I really despise non-decision-theoretic approaches to anthropics. I know how to write a beautiful post that explains where almost all anthropic theories go wrong — the key point is a combination of double-counting evidence and only ever considering counterfactual experiences that logically couldn't be factual — but it'd take a while, and it's easier to just point people at UDT. Might give me some philosophy cred, which is cred I'd be okay with.

This post is for sacrificing my credibility!

by Will_Newsome · 1 min read · 2nd Jun 2012 · 347 comments
Thank you for your cooperation and understanding. Don't worry, there won't be future posts like this, so you don't have to delete my LessWrong account, and anyway I could make another, and another.

But since you've dared to read this far:

Credibility. Should you maximize it, or minimize it? Have I made an error?


Don't be shallow, don't just consider the obvious points. Consider that I've thought about this for many, many hours, and that you don't have any privileged information. Whence our disagreement, if one exists?