
Practical Guide to Anthropics
Anthropics, World Modeling, Rationality

Practical anthropics summary

by Stuart_Armstrong
8th Jul 2021
AI Alignment Forum

Previous: Anthropics and Fermi: grabby, visible, zoo-keeping, and early aliens
3 comments
Christopher King:

Related: Anthropically Blind: the anthropic shadow is reflectively inconsistent

dadadarren:

"If there are no issues of exact copies, or advanced decision theory, and the questions you're asking aren't weird, then use SIA."

So practically FNC? I understand that FNC and SIA converge when the reference class is so restrictive that it contains only the one observer. But I find counterarguments like this quite convincing.

Stuart_Armstrong:

If there are no exact duplicates, FNC=SIA whatever the reference class is.


This is a simple summary of how to "do anthropics":

  • If there are no issues of exact copies, or advanced decision theory, and the questions you're asking aren't weird, then use SIA. And by "use SIA", I mean "ignore the definition of SIA, and just do a conventional Bayesian update on your own existence". Infinite universes won't be a problem (any more than they are with conventional probabilities). And this update might not increase your expected population by much.
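To illustrate "just do a conventional Bayesian update on your own existence", here is a minimal sketch with made-up numbers: two candidate universes, equally likely a priori, containing different numbers of observers in your epistemic situation. The update on "I exist" simply weights each hypothesis by its observer count.

```python
# Illustrative SIA update (all numbers hypothetical): two candidate
# universes with equal prior but different observer counts.
priors = {"small": 0.5, "big": 0.5}
observers = {"small": 1, "big": 10}  # observers in your epistemic situation

# Updating on "I exist" weights each hypothesis by its observer count,
# then renormalises - ordinary Bayesian mechanics, nothing exotic.
unnormalised = {h: priors[h] * observers[h] for h in priors}
total = sum(unnormalised.values())
posterior = {h: unnormalised[h] / total for h in priors}

print(posterior)  # {'small': 0.0909..., 'big': 0.9090...}
```

The "big" universe ends up ten times as likely as the "small" one, in proportion to its observer count.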

First of all, there was the realisation that different theories of anthropic probability correspond to correct answers to different questions - questions that are equivalent in non-anthropic situations.

We can also directly answer "what actions should we do?", without talking about probability. This anthropic decision theory gave behaviours that seem to correspond to SIA (for total utilitarianism) or SSA (for average utilitarianism).
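A toy version of that correspondence, with hypothetical payoffs: a fair coin creates one copy of you on heads and two copies on tails, and each copy is offered the same bet. Summing payoffs over all copies (total utilitarianism) recommends SIA-like betting, while averaging the payoff per copy (average utilitarianism) recommends SSA-like betting.

```python
# Sketch of an incubator-style anthropic bet (payoffs made up for
# illustration): a fair coin creates 1 copy on heads, 2 copies on tails.
# Each copy may accept a bet paying +1 if tails, -1.5 if heads.
from fractions import Fraction

p_heads = Fraction(1, 2)
copies = {"heads": 1, "tails": 2}
payoff = {"heads": Fraction(-3, 2), "tails": Fraction(1)}

# Total utilitarian: sum the payoff over every copy -> SIA-like betting.
ev_total = (p_heads * copies["heads"] * payoff["heads"]
            + (1 - p_heads) * copies["tails"] * payoff["tails"])

# Average utilitarian: average payoff per copy -> SSA-like betting.
ev_avg = p_heads * payoff["heads"] + (1 - p_heads) * payoff["tails"]

print(ev_total, ev_avg)  # 1/4 -1/4
```

The total utilitarian accepts the bet (expected value 1/4 > 0), while the average utilitarian declines it (-1/4 < 0), mirroring the SIA "thirder" and SSA "halfer" answers respectively.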

My personal judgement, however, is that the SIA-questions are more natural than the SSA-questions (ratios of totals rather than averages of ratios), including in the decision theory situation (total utilitarianism rather than average utilitarianism). Thus, in typical situations, using SIA is generally the way to go.
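To make that parenthetical concrete, here is a hypothetical two-world example where the two questions come apart: SIA weights each world by the number of observers with your evidence (a ratio of totals), while SSA weights it by the fraction of that world's reference class with your evidence (an average of ratios).

```python
# Hypothetical two-world setup contrasting the SIA question ("ratio of
# totals") with the SSA question ("average of ratios"). Numbers invented.
from fractions import Fraction

p = {"A": Fraction(1, 2), "B": Fraction(1, 2)}  # prior over worlds
like_me = {"A": 9, "B": 1}                      # observers with my evidence
total = {"A": 10, "B": 2}                       # whole reference class

# SIA: weight each world by the *number* of observers like me.
sia_w = {w: p[w] * like_me[w] for w in p}
sia_A = sia_w["A"] / sum(sia_w.values())        # 9/10

# SSA: weight each world by the *fraction* of its reference class like me.
ssa_w = {w: p[w] * Fraction(like_me[w], total[w]) for w in p}
ssa_A = ssa_w["A"] / sum(ssa_w.values())        # 9/14

print(sia_A, ssa_A)
```

The two answers (9/10 versus 9/14) differ precisely because world A's larger total population counts in its favour under SIA but washes out under SSA.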

And if we ignore exact duplicates, Boltzmann brains, and simulation arguments, SIA is simply standard Bayesian updating on our existence. Anthropic probabilities can be computed exactly the same way as non-anthropic probabilities can.

And there are fewer problems than you might suspect. This doesn't lead to problems with infinite universes - at least, no more than standard probability theory does. And anthropic updates tend to increase the probability of larger populations in the universe, but that effect can be surprisingly small - an update factor of roughly 7 to 32, given the data we have.

Finally, note that anthropic effects are generally much weaker than Fermi observation effects. The fact that we don't see life on so many planets tells us far more than the fact that we do see life on this one.