I think it's important to be clear about what SIA says in different situations, here. Consider the following 4 questions:
A) Do we live in a simulation?
B) If we live in a simulation, should we expect basement reality to have a large late filter?
C) If we live in basement reality, should we expect basement reality (ie our world) to have a large late filter?
D) If we live in a simulation, should we expect the simulation (ie our world) to have a large late filter?
In this post, you persuasively argue that SIA answers "yes" to (A) and "not necessarily" to (B). However, (B) is almost never decision-relevant, since it's not about our own world. What about (C) and (D)? (It's easier to see how those could be decision-relevant for someone who buys SIA. I personally agree with you that something like Anthropic Decision Theory is the best way to reason about decisions, but responsible usage of SIA+CDT is one way to get there, in anthropic dilemmas.)
To answer (C): If we condition on living in basement reality, then SIA favors hypotheses that imply many observers in basement reality. The simulated copies are entirely irrelevant, since we have conditioned them away. (You can verify this with Bayes' theorem.) So we are back with the SIA doomsday argument again, and we face large late filters.
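As a minimal sketch of that calculation (notation mine, not from the post): write $N_{\text{total}}(H)$ for the number of observers in our epistemic situation under hypothesis $H$, and $N_{\text{base}}(H)$ for those among them in basement reality. SIA weights $H$ by $N_{\text{total}}(H)$, and the chance that a given such observer is in basement reality is $N_{\text{base}}(H)/N_{\text{total}}(H)$, so

$$P(H \mid E, \text{basement}) \propto P(H) \cdot N_{\text{total}}(H) \cdot \frac{N_{\text{base}}(H)}{N_{\text{total}}(H)} = P(H) \cdot N_{\text{base}}(H).$$

The simulated observers cancel out, and only the number of basement observers like us matters, which is exactly the quantity the SIA doomsday argument runs on.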
To answer (D): Detailed simulations of civilisations that spread to the stars are vastly more expensive than detailed simulations of early civilisations. This means that the latter are likely to be far more common, and we're almost certainly living in a simulation where we'll never spread to the (simulated) stars. (This is plausibly because the simulation will be turned off before we get the chance.) You could discuss what terminology to use for this, but I'd be inclined to call this a large late filter, too.
So my preferred framing isn't really that the simulation hypothesis "undercuts" the SIA doomsday argument. It's rather that the simulation hypothesis provides one plausible mechanism for it: that we're in a simulation that will end soon. But that's just a question of framing/terminology. The main point of this comment is to provide answers to questions (C) and (D).
I disagree that (B) is not decision-relevant and that (C) is. I'm not sure, haven't thought through all this yet, but that's my initial reaction at least.
Ha, I wrote a comment like yours but slightly worse, then refreshed and your comment appeared. So now I'll just add one small note:
To the extent that (1) normatively, we care much more about the rest of the universe than our personal lives/futures, and (2) empirically, we believe that our choices are much more consequential if we are non-simulated than if we are simulated, we should in practice act as if there are greater odds that we are non-simulated than we have reason to believe for purely epistemic purposes. So in practice, I'm particularly interested in (C) (and I tentatively buy SIA doomsday as explained by Katja Grace).
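As a toy version of that weighting with made-up numbers (not from the comment): suppose SIA says $P(\text{non-simulated}) = 10^{-3}$, but the stakes of your choices are $10^{6}$ times larger if you are non-simulated. Then the decision weights are roughly $10^{-3} \cdot 10^{6} : 1 \cdot 1 \approx 1000 : 1$ in favor of acting as if you are non-simulated, even though epistemically you are almost certainly simulated.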
Edit: also, isn't the last part of this sentence from the post wrong:
SIA therefore advises not that the Great Filter is ahead, but rather that we are in a simulation run by an intergalactic human civilization, without strong views on late filters for unsimulated reality.
Re your edit: That bit seems roughly correct to me.
If we are in a simulation, SIA doesn't have strong views on late filters for unsimulated reality. (This is my question (B) above.) And since SIA thinks we're almost certainly in a simulation, it's not crazy to say that SIA doesn't have strong views on late filters for unsimulated reality. SIA is very ok with small late filters, as long as we live in a simulation, which SIA says we probably do.
But yeah, it is a little bit confusing, in that we care more about late filters in unsimulated reality if we live in unsimulated reality. And in the (unlikely) case that we do, then we should ask my question (C) above, in which case SIA does have strong views on late filters.
Ah, I agree. I misread that bit as about filters for us given that we are non-simulated, but really it's about filters for non-simulated civilizations, which under the simulation argument our existence doesn't tell us much about. Thanks.
Regarding (D), it has been elaborated further in this paper (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization).
If 99.99 per cent of all civilizations go extinct, SIA Doomsday is true in basement reality, despite the fact that the surviving 0.01 per cent of civilizations create trillions of simulations, and we are likely to be in one of them.
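To put hypothetical numbers on that: if $10^{4}$ basement civilizations reach our stage, one survives the late filter, and that survivor runs $10^{12}$ simulations of civilizations at our stage, then a random observer at our stage is simulated with probability about $10^{12}/(10^{12}+10^{4}) \approx 1 - 10^{-8}$, while the basement late filter is still 99.99 per cent.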
I'll try not to be too grumpy, here.
Let me make the case why filter-ology isn't so exciting by extending part of your own model: the fact that you take evidence into account.
The basic doomsday argument model asks us to pretend that we are amnesiacs - that we don't know any astronomy or history or engineering, all we know is that we've drawn some random number in a list of indeterminate length. If that numbered list is the ordering of human births, we're halfway through. If we consign the number of humans to the memory hole and say that the list is years in the lifespan of our species, we're halfway through that, too. Easy and simple.
Once you start talking about evidence, though - having observations of our universe, like the lack of aliens in the night sky, that we condition on when predicting the future - you've started down a slippery slope. Why not also condition on the fact that nuclear weapons have been invented, or condition on the UN being formed? Condition on life being carbon-based, or that octopuses are pretty smart, or the kinds of exoplanets we've detected, or the inferred frequency of gamma-ray bursts in our local group? Some of these factors will change our expectations, just like how the negative evidence of aliens penalizes hypotheses where life is both likely to arise and to spread.
Some of this info is pretty important! If we'd had nuclear war in 1980 (or dealt with greenhouse gases better), I'd be more pessimistic (or optimistic) about humanity's future.
At the bottom of this slippery slope, you're dumped in the murky water of trying to predict the future based on lots of useful information, rather than a tractable but woefully incomplete model of the world.
I see nothing grumpy here.
I think supporters of the doomsday argument are saying you should consider all evidence, but the doomsday argument still stands. So we should use all the information available to make a prediction of the future and then, on top of all that, apply the doomsday argument so that the future looks bleaker. And that should be the case unless we find a logical error in the argument.
I think the error in the doomsday argument is to try to find an explanation for why I am this person, living in this time. It should be regarded as something primitively given, a reasoning starting point. Instead, the argument treats it as a sampling outcome. That is why I am against both SSA and SIA.
Sleeping Beauty and other anthropic problems considered in terms of bets illustrate how most ways of assigning anthropic probabilities are not about beliefs of fact in a general sense; their use is more of an appeal to consequences. At the very least the betting setup should remain a salient companion to these probabilities whenever they are produced. Anthropic probabilities make no more sense on their own, without the utilities, than whatever arbitrary numbers you get after applying Bolker-Jeffrey rotation. The main difference is that the utilities of anthropics are not as arbitrary, so failing to carefully discuss what they are in a given setup makes the whole construction ill-justified.
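For concreteness, here is a small Monte Carlo sketch of the standard betting illustration for Sleeping Beauty (my own toy setup, not from the comment): the break-even price of a bet on heads is about 1/3 if bets settle once per awakening and about 1/2 if only one bet per experiment counts, so the "fair" probability depends on the betting/utility structure.

```python
import random

def avg_profit(per_awakening: bool, price: float, trials: int = 200_000) -> float:
    """Average profit of buying, at `price`, a ticket paying $1 if the coin was heads.
    If per_awakening is True, Beauty buys one ticket at every awakening
    (two awakenings on tails); otherwise a single ticket per experiment counts."""
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        n_bets = (1 if heads else 2) if per_awakening else 1
        total += n_bets * ((1.0 if heads else 0.0) - price)
    return total / trials

# Both of these come out close to zero: the break-even price (and hence the
# implied probability of heads) depends on how the bets are settled.
print(avg_profit(per_awakening=True, price=1 / 3))   # per-awakening bets: break-even near 1/3
print(avg_profit(per_awakening=False, price=1 / 2))  # per-experiment bets: break-even near 1/2
```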
Interestingly, if we are in a simulation, it could still simulate the end of the world, and the share of such simulations should be large for the reasons I explain below. Thus on the observable level, SIA Doomsday still holds, despite the fact that we are likely in a simulation.
The reason for the large number of "Doomsday simulations" is that any aliens will try to solve the Fermi paradox numerically, as they want to learn the probability of meeting other aliens. So they will simulate many civilisations' histories around the time of high x-risks (that is, the 20th-21st centuries in our world) - not only their own past but the history of any other possible civilisations. So we could be simulated by aliens who are exploring the ways worlds tend to end.
If there is no other reason to create simulations, such Fermi-paradox-solving simulations will be the most numerous. Other reasons, like gaming, may also include the end of the world as a bad outcome in a "world-saving game".
This post was written by Mark Xu based on interviews with Carl Shulman. It was paid for by Open Philanthropy but is not representative of their views.
Summary
Note that we are not endorsing the underlying SIA decision-making framework here, only discussing whether certain conclusions follow from it. In dealing with such anthropic problems we would prefer approaches closer to Armstrong’s Anthropic decision theory, which we think is better for avoiding certain sorts of self-destructive anthropic confusions.
Introduction
The search for extraterrestrial intelligence has not yet yielded fruit. Hanson (1998) argues that this implies the probability of life evolving on a planet and becoming visible must be extremely low, a so-called Great Filter. Such a filter could lie in any of several difficult steps: abiogenesis, the evolution of intelligent life, interstellar colonization, etc. Humanity’s future prospects may depend on whether the difficulties have already been passed or lie ahead. Thus we are left with a troubling question: how far along the filter are we?
Sandberg et al. (2018) observe that current scientific uncertainty is compatible with a high chance that we are alone in the universe. For example, Sandberg et al. suggest over 200 orders of magnitude of uncertainty over the frequency of abiogenesis. Since there is substantial prior probability on early Great Filters, the lack of visible extraterrestrial life can’t provide a very large likelihood ratio regarding late filters.
However, on the Self-Indication Assumption (SIA) the fact that we find ourselves to exist should provide overwhelming reason to reject theories of very large early filters, and purportedly favor late filters.
We will discuss how the Great Filter interacts with SIA. We will first introduce the assumption and then present Grace (2010)’s and Olson and Ord (2021)’s arguments that SIA implies the Great Filter is ahead, which we will call the “SIA Doomsday Argument”. We will then argue that Bostrom’s simulation argument reverses the conclusion of the SIA Doomsday Argument.
Self-Indication Assumption
The self-indication assumption: given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses according to which few observers exist.
To illustrate applications of SIA, we will discuss two examples due to John Leslie and Nick Bostrom.
To answer this question using SIA, one must take the prior probabilities of each world existing, then weight them by the number of people that are in the first ten rooms. Since both worlds have ten people who are in the first ten rooms and the worlds were equally probable, SIA advises that you think the coin was equally likely to have fallen heads or tails.
Since super-duper symmetry considerations are indifferent between T1 and T2, they are equally probable. However, T2 implies a trillion times as many observers as T1. Under SIA, an observer is thus a trillion times more likely to find themselves in a world where T2 is true than a world in which T1 is true. [1]
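A compact way to write the SIA update used in both examples (notation mine, not from the post): if $N_E(W)$ is the number of observers with evidence $E$ in world $W$, then $P_{\text{SIA}}(W \mid E) \propto P(W) \cdot N_E(W)$. In the rooms example both worlds contain ten observers in the first ten rooms, so the 1:1 prior odds are unchanged; in the Presumptuous Philosopher case $N_E(T_2)/N_E(T_1) = 10^{12}$, so the posterior odds become a trillion to one in favor of T2.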
SIA Doomsday Argument
We will simplify discussion by supposing that there are only two stages of the Great Filter: getting to where humanity is now, and getting from now to an intergalactic civilization. Let $p_{\text{develops}}$ be the probability that any given star develops life that reaches humanity’s current level of technological maturity. Let $p_{\text{expands}}$ be the probability that a civilization with Earth’s current level of technology expands into an intergalactic civilization. The absence of evidence for other intergalactic civilizations suggests that $p_{\text{develops}} \cdot p_{\text{expands}} \ll 1$.
Grace (2010) explains the SIA Doomsday Argument:
Olson and Ord (2021) illustrate the results of a similar argument:
In our two-step filter model, the SIA Doomsday Argument is the observation that SIA favors higher numbers of intelligent observers at our technological stage, which provides pressure for $p_{\text{develops}}$ to be as large as possible. However, the lack of evidence for extraterrestrial life puts pressure on $p_{\text{develops}} \cdot p_{\text{expands}}$ to be very small, suggesting that $p_{\text{expands}}$ is small.
SIA places linear pressure on $p_{\text{develops}}$ to be large (and thus also on $p_{\text{expands}}$ to be small): SIA assigns ten times higher probability to worlds in which $p_{\text{develops}}$ is ten times as large. If, for example, one requires that $p_{\text{expands}} \cdot p_{\text{develops}} \approx 10^{-10}$ and one’s uncertainty over $p_{\text{develops}}$ is log-uniform from $10^{-10}$ to $10^{0}$, SIA would exponentially favor $p_{\text{develops}}$ falling into higher orders of magnitude, concentrating almost all probability mass on relatively high values of $p_{\text{develops}}$ (and thus on low values of $p_{\text{expands}}$). Here is the update illustrated graphically: (https://www.desmos.com/calculator/8baeummm7y)
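To make the shape of that update concrete, here is a rough numerical sketch under a discretization of my own choosing (ten decade-wide buckets), which should qualitatively match the linked graph:

```python
import numpy as np

# Log-uniform prior on p_develops over ten decade-wide buckets spanning 1e-10 to 1e0,
# reweighted linearly by p_develops (SIA's preference for more observers at our stage).
decades = np.arange(-10, 0)                    # bucket j stands for p_develops ~ 10**j
prior = np.full(decades.size, 1.0 / decades.size)
sia_weight = 10.0 ** decades                   # observers at our stage scale with p_develops
posterior = prior * sia_weight
posterior /= posterior.sum()

for j, mass in zip(decades, posterior):
    # The constraint p_develops * p_expands ~ 1e-10 then pins down p_expands.
    print(f"p_develops ~ 1e{j}: posterior mass {mass:.3g}, implied p_expands ~ 1e{-10 - j}")
```

Roughly 90% of the posterior mass lands in the top decade, about 9% in the next, and so on, which is the exponential concentration described above.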
The above model is slightly simplified. In reality, very high values of $p_{\text{develops}}$ imply there are many intelligent civilizations close to our own, contradicting observation. SIA pushes $p_{\text{develops}}$ as high as possible without making it extremely unlikely that we observe an empty galaxy.
The implication in this model seems to be that under SIA we should expect that the bulk of the filter is still ahead. Given that it seems like object-level considerations about the probability of existential risk and the technological feasibility of intergalactic colonization do not warrant that expectation, this would mean we were adopting surprising conclusions about the physical world based on this presumptuous philosophical argument.
However, a hidden premise in these arguments is that observers who seem to find themselves at our stage of technological maturity exist only in between abiogenesis and intergalactic colonization. The possibility of computer simulations plausibly invalidates this assumption.
The Simulation Argument
Under certain plausible assumptions, intergalactic civilizations might be able to create computer simulations containing many orders of magnitude more observers than natural primitive civilizations could support. In particular, Bostrom’s Simulation Argument argues that at least one of the following is true: (1) almost all civilizations at our level of technological development go extinct before reaching technological maturity; (2) almost no technologically mature civilizations are interested in running computer simulations of minds like ours; or (3) we are almost certainly living in a computer simulation.
Since the various disjuncts in the Simulation Argument have implications for the expected number of observers that are “like you”, SIA favors some over others. If humanity develops into an intergalactic civilization and devotes a small fraction of resources (an immense amount by today’s standards) to producing simulations with observations like ours, there will be many orders of magnitude more observers in our apparent situation. Since SIA favors worlds where the absolute number of such observers is high, SIA vastly favors such a world. [2]
More specifically, if (1) or (2) were true, there would be no computer simulations of observers in our apparent situation. However, since such simulations could vastly outnumber the possible quantity of biological humans, (1) and (2) are both extremely heavily penalized by SIA. Indeed, even if you suspect that simulations will be difficult, will not be conscious, etc., there would be trillions and trillions of observers if you were mistaken. Since the SIA Doomsday Argument already embraces Presumptuous Philosopher-style reasoning on the Great Filter, it is difficult to see why it would inconsistently abandon that practice with respect to the simulation argument.
Considering our two-step filter model, we are interested in the values of $p_{\text{develops}}$ and $p_{\text{expands}}$ in non-simulated reality. A low value of $p_{\text{expands}}$ suggests that (1) is true, which is improbable under SIA. For example, suppose an intergalactic civilization would produce a trillion times more simulated observers in our apparent situation per galaxy than the SIA Doomsday scenario of frequent primitive civilizations that get filtered without colonization. Since there are currently around 10 billion such observers, SIA favors the expansion+simulation hypothesis over SIA Doomsday by a trillion to one, other things equal.
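In odds form, with the illustrative numbers above (and no other differences between the hypotheses), that update is simply: posterior odds = prior odds $\times$ ratio of observers in our apparent situation $\approx 1 \times 10^{12}$.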
In general, SIA puts pressure on $p_{\text{develops}} \cdot p_{\text{expands}}$ to be high enough that nearly all resources in bedrock reality can be used for simulations. Since the speed of intergalactic colonization is fast relative to the amount of time it takes civilizations to mature, $p_{\text{develops}} \cdot p_{\text{expands}}$ needs to be close (on the log scale) to the level that produces roughly one intergalactic civilization per affectable region of the universe. However, the SIA Doomsday scenario, with much more frequent primitive civilizations that ~uniformly fail to colonize, decreases the amount of colonization, and is thus penalized by SIA.
SIA therefore advises not that the Great Filter is ahead, but rather that we are in a simulation run by an intergalactic human civilization, without strong views on late filters for unsimulated reality.
Also see Radford Neal on getting similar conclusions in finite worlds with no duplicate observers. Note that this approach is self-undermining in that it requires no duplicates to drive reasoning, but also predicts overwhelmingly that many duplicates will exist. ↩︎
Shulman and Bostrom make a similar argument in section 4 of How Hard is Artificial Intelligence? ↩︎