Epistemic status: I just thought this up

There is a well-known style of reasoning called the anthropic argument (which has nothing to do with the AI frontier lab of the same name). It goes something like this:
Scientist 1: “X seems really unlikely! How come I'm observing it?” (where X is usually something like the Hoyle resonance and the resulting triple-alpha process)
Scientist 2: “Your mistake is that you’re estimating P(X) — you should instead be estimating P(X | I'm here to ask this question). If the triple-alpha process didn’t work, there wouldn’t be enough carbon for you to exist. Consider all the places in the String Theory Landscape that don’t have intelligent life in them asking a question like that right now. The few that do are almost all going to have some sort of fluke like this that makes intelligent life probable there. It’s just a sampling effect.”
This is all well and good, and I agree with the second scientist — questions only get asked if there’s someone around to ask them, and you should include everything you already know about the situation in your priors. That’s kind of obvious, once you stop and think about it. And once the String Theory Landscape comes into the discussion, 10^200 is a very large number.
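To make the sampling effect concrete, here is a minimal Monte Carlo sketch. It is entirely my own toy model with invented numbers (CARBON_BAND and has_observers are illustrative stand-ins, not anything from the fine-tuning literature): unconditionally, the "fine-tuned" setting looks like a 1-in-1000 fluke, but conditional on anyone being around to ask, it is a certainty.

```python
import random

# Toy model of observation selection, with made-up numbers.
# Each "universe" draws a physics dial uniformly in [0, 1); observers can only
# arise if the dial lands in a narrow carbon-friendly band.
CARBON_BAND = (0.500, 0.501)   # a 1-in-1000 fluke, not any real physical constant
N_UNIVERSES = 1_000_000

def has_observers(dial: float) -> bool:
    lo, hi = CARBON_BAND
    return lo <= dial < hi

universes = [random.random() for _ in range(N_UNIVERSES)]
inhabited = [u for u in universes if has_observers(u)]

# P(carbon-friendly dial), over all universes: about 0.001, "really unlikely!"
print(len(inhabited) / len(universes))

# P(carbon-friendly dial | someone exists to ask): exactly 1 by construction,
# since every universe containing an asker is in the band.
print(sum(has_observers(u) for u in inhabited) / max(len(inhabited), 1))
```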
However, if you apply this sort of thinking too much, you find yourself starting to be surprised whenever your viewpoint is sufficiently atypical in any way, compared to a random sample drawn from all sapient beings, or at least all humans, ever. For example, on the fairly plausible assumption that the human race will eventually colonize the stars (as long as we manage not to go extinct first), it seems rather likely that there will be something of the rather rough order of O(10^~10) times as many people in our current forward lightcone as in our backward lightcone. That’s a rather large coincidence — why is our current viewpoint that atypical? Now, obviously, somebody did get to be the very first member of Homo sapiens born, right after we speciated, but that’s just a fluke; flukes do happen, just very rarely — so I continue to ask: why me? Why am I one of the lucky (or at least early) ones, by a factor as large as 1 in O(10^~10)? That just doesn’t seem very plausible…
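For concreteness, here is the back-of-the-envelope arithmetic behind that factor. The 10^11 figure for people in our backward lightcone is my own rough placeholder, and the 10^10 ratio is just the rough order used above; nothing here is a real forecast.

```python
# Back-of-the-envelope version of the "why am I so early?" worry, using rough
# orders of magnitude only (1e11 is a crude placeholder for the backward-lightcone
# headcount, and 1e10 is the ratio used in the text).
past_people   = 1e11                  # people in our backward lightcone, very roughly
ratio         = 1e10                  # assumed future-to-past ratio, ~10^10
future_people = ratio * past_people

# Under naive self-sampling (treat yourself as a uniform random draw from
# everyone, past and future), the chance of landing this early is ~1 in 10^10.
p_this_early = past_people / (past_people + future_people)
print(f"P(being this early) ≈ {p_this_early:.1e}")   # ≈ 1.0e-10
```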
I have seen this argument cited as evidence that there must be a Great Filter or that we should all have a high P(DOOM), because it’s just so implausible that our generation could be missing out on going to the stars. However, I think we also need to consider the Meta-Anthropic Principle:
Scientist 3: “No, no, no, you should actually be estimating: P(X | I'm here, asking this question, and applying the Anthropic Principle to it). Obviously asking the question in the first place, and especially not having any better way to answer it than the Anthropic Principle, is going to be a lot rarer later on, once we know a lot more. First wondering about this sort of stuff is strongly correlated with you living not that long after the development of the Scientific Method. It’s just a sampling effect.”
So the clue they forgot to include in their priors was right there all along, on Scientist 1’s name-tag.
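A similarly toy simulation of Scientist 3’s point (all numbers invented for illustration; in particular, the early-observer fraction is shrunk from the ~10^-10 above to 10^-3 so the sample actually contains early observers): being early looks wildly improbable unconditionally, but conditional on being someone who answers fine-tuning questions with the Anthropic Principle, it becomes the typical case.

```python
import random

# Toy model of the Meta-Anthropic point, with invented numbers.
# "Early" observers live shortly after the Scientific Method; "late" observers
# vastly outnumber them but almost never need the Anthropic Principle, because
# they have better tools for settling fine-tuning questions.
P_EARLY = 1e-3                 # stand-in for the post's ~1e-10, kept large enough to sample
P_ASKS_ANTHROPICALLY = {"early": 0.5, "late": 1e-6}   # assumed, not measured
N_SAMPLES = 1_000_000

def sample_observer():
    era = "early" if random.random() < P_EARLY else "late"
    asks = random.random() < P_ASKS_ANTHROPICALLY[era]
    return era, asks

observers = [sample_observer() for _ in range(N_SAMPLES)]

# P(early), unconditioned: tiny, so being early looks like a huge coincidence.
print(sum(era == "early" for era, _ in observers) / N_SAMPLES)

# P(early | reasons anthropically about fine-tuning): close to 1, because almost
# everyone who has to reason that way lives early.
askers = [era for era, asks in observers if asks]
print(sum(era == "early" for era in askers) / max(len(askers), 1))
```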
My actual point (for anyone wondering whether I have one) is that the most honest way for a Bayesian to look at a counterfactual is P(X | everything else I know), which is generally very near 1 — certainly it is for the Hoyle resonance. Once you start doing counterfactuals more rarefied than that, you’re on increasingly thin epistemic ice, and probably shouldn’t be surprised if you start getting odd-looking results as you leave out almost everything else you know, such as when you are living, or even everything other than the fact that you’re sapient. Which one might call the meta-meta-anthropic principle.