Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is a simple summary of how to "do anthropics":

  • If there are no issues of exact copies or advanced decision theory, and the questions you're asking aren't weird, then use SIA. And by "use SIA", I mean "ignore the definition of SIA, and just do a conventional Bayesian update on your own existence". Infinite universes won't be a problem (any more than they are with conventional probabilities). And this update might not increase your expectation of the universe's population by much.

First of all, there was the realisation that different theories of anthropic probability correspond to correct answers to different questions - questions that are equivalent in non-anthropic situations.

We can also directly answer "what actions should we take?", without talking about probability. This anthropic decision theory gave behaviours that seem to correspond to SIA (for total utilitarianism) or SSA (for average utilitarianism).
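
As a hedged sketch of how that correspondence comes out in a bet (the incubator setup, prices, and Python below are my own illustration, not taken from the post): a fair coin creates one copy of you on heads and two on tails, and every copy is offered the same ticket that pays 1 if the coin landed tails. Because the copies are identical, they all choose alike; a total utilitarian then accepts any price below 2/3, matching SIA's betting odds, while an average utilitarian accepts any price below 1/2, matching SSA's.

```python
# Toy "incubator" bet (my own setup, not the post's): a fair coin creates
# 1 copy of you on heads and 2 copies on tails. Every copy is offered the
# same ticket: pay `price`, receive 1 if the coin landed tails.
# Since the copies are identical, they all make the same choice.

def total_utilitarian_value(price: float) -> float:
    """Expected *total* winnings across all copies if everyone accepts."""
    heads = 0.5 * 1 * (0 - price)      # 1 copy, ticket loses
    tails = 0.5 * 2 * (1 - price)      # 2 copies, tickets win
    return heads + tails

def average_utilitarian_value(price: float) -> float:
    """Expected *average* winnings per copy if everyone accepts."""
    heads = 0.5 * (0 - price)          # average payoff in the heads world
    tails = 0.5 * (1 - price)          # average payoff in the tails world
    return heads + tails

# Break-even prices (accept whenever the ticket costs less than these):
# total utilitarian:   1 - 1.5 * price = 0  =>  price = 2/3  (acts like SIA's P(tails) = 2/3)
# average utilitarian: 0.5 - price = 0      =>  price = 1/2  (acts like SSA's P(tails) = 1/2)
for price in (0.45, 0.55, 0.70):
    print(price,
          "total:", round(total_utilitarian_value(price), 3),
          "average:", round(average_utilitarian_value(price), 3))
```

At a price of 0.55, for instance, the total utilitarian still accepts while the average utilitarian declines, which is the betting version of the 2/3-versus-1/2 split.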

My personal judgement, however, is that the SIA-questions are more natural than the SSA-questions (ratios of totals rather than averages of ratios), and the same goes for the decision-theory situation (total utilitarianism rather than average utilitarianism). Thus, in typical situations, using SIA is generally the way to go.
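
To make that parenthetical concrete, here is a minimal sketch in Python, using the same toy incubator numbers as above (mine, not the post's). The "ratio of totals" question asks what fraction of all observers, averaged over worlds weighted by their prior, live in the two-observer world; the "average of ratios" question asks, within each world, what fraction of that world's observers live in the two-observer world, and then averages those fractions.

```python
# Toy setup (my own numbers): a fair coin creates 1 observer on heads, 2 on tails.
worlds = [
    {"name": "heads", "prior": 0.5, "observers": 1, "tails_observers": 0},
    {"name": "tails", "prior": 0.5, "observers": 2, "tails_observers": 2},
]

# Ratio of totals (the SIA-style question):
# expected number of tails-world observers / expected number of observers overall.
ratio_of_totals = (
    sum(w["prior"] * w["tails_observers"] for w in worlds)
    / sum(w["prior"] * w["observers"] for w in worlds)
)  # = 1.0 / 1.5 = 2/3

# Average of ratios (the SSA-style question):
# within each world, the fraction of its observers in the tails world,
# averaged over worlds using the prior.
average_of_ratios = sum(
    w["prior"] * w["tails_observers"] / w["observers"] for w in worlds
)  # = 0.5 * 0 + 0.5 * 1 = 1/2

print(f"ratio of totals   (SIA-style): {ratio_of_totals:.3f}")    # 0.667
print(f"average of ratios (SSA-style): {average_of_ratios:.3f}")  # 0.500
```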

And if we ignore exact duplicates, Boltzmann brains, and simulation arguments, SIA is simply standard Bayesian updating on our existence. Anthropic probabilities can then be computed in exactly the same way as non-anthropic probabilities.
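
Spelled out as a formula (the notation $D$ and $w$ is mine, not the post's): write $D$ for your total evidence and $w$ for a hypothesis about the world, and condition on the bare fact that someone with evidence $D$ exists:

$$P(w \mid \text{someone with evidence } D \text{ exists}) \;\propto\; P(\text{someone with evidence } D \text{ exists} \mid w)\, P(w).$$

SIA's general rule instead weights each hypothesis by the expected number of observers with that evidence,

$$P_{\mathrm{SIA}}(w) \;\propto\; \mathbb{E}\big[\#\{\text{observers with evidence } D\} \,\big|\, w\big]\, P(w),$$

and the two coincide whenever no hypothesis contains more than one exact copy of you - which is exactly the "ignore exact duplicates" caveat above.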

And there are fewer problems than you might suspect. This doesn't lead to problems with infinite universes - at least, no more than standard probability theory already has. And anthropic updates tend to increase the probability of larger populations in the universe, but that effect can be surprisingly small, given the data we have.

Finally, note that anthropic effects are generally much weaker than Fermi observation effects: the fact that we don't see life on so many planets tells us a lot more than the fact that we see life on this one.
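
For a rough sense of the relative sizes, here is a toy comparison with entirely made-up numbers (the two per-planet chances of life and the survey size are placeholders of mine, not figures from the post): compare the likelihood ratio supplied by "life arose on this planet" with the one supplied by looking at many planets and seeing no life.

```python
import math

# Hedged toy comparison with made-up numbers (mine, not the post's).
# Two rival hypotheses about the chance that life arises on a given planet:
f_high, f_low = 1e-3, 1e-6
# Number of planets checked with no (detectable) life found - a placeholder:
n_surveyed = 100_000

# Anthropic-style evidence: life arose on this planet.
# Likelihood ratio in favour of "life is common" (f_high):
lr_existence = f_high / f_low                                  # 1000

# Fermi-style evidence: no life seen on any of the surveyed planets.
# Likelihood ratio in favour of f_high (far below 1, so strong evidence against it):
lr_empty_survey = ((1 - f_high) / (1 - f_low)) ** n_surveyed   # ~ 4e-44

print(f"our existence: {math.log(lr_existence):+8.1f} nats toward 'life is common'")
print(f"empty survey:  {math.log(lr_empty_survey):+8.1f} nats toward 'life is common'")
# Roughly +6.9 nats versus -100 nats: the silent survey dominates.
```

With these placeholder numbers, the silent survey shifts the log-odds by more than ten times as much as our own existence does, and in the opposite direction.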

Comments:

"If there are no issues of exact copies, or advanced decision theory, and the questions you're asking aren't weird, then use SIA. "

So practically FNC? I understand that FNC and SIA converge when the reference class is so restrictive that it only contains the one observer. But I find counter-arguments like this quite convincing.

If there are no exact duplicates, FNC=SIA whatever the reference class is.