Preface: I know this post is pretty wild. Just keep in mind that it’s meant to be lighthearted speculation. I hope that LW will eventually have a subforum for this type of post. Until then, please accept my apologies for putting it here.  


Some people (myself included) believe our universe is likely to have digital ("information level") underpinnings. As Nick Bostrom, David Chalmers, Elon Musk, and others have pointed out, either civilizations like ours go extinct before reaching the level of technology necessary to create a simulation (i.e., a "programmed universe"), or we are very likely to be in a simulation. (This is because a single organic mind could create billions of digital minds, which would dramatically skew the ratio of organic to digital minds in any universe.) (Bostrom also explores a third scenario, in which posthumans become capable of creating simulations but none of them choose to do so--for ethical or other reasons.)
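
For readers who want the arithmetic behind this ratio argument: Bostrom's 2003 paper formalizes it along roughly the following lines (a simplified sketch, with notation loosely following the paper):

$$f_{sim} = \frac{f_p \, \bar{N} \, \bar{H}}{f_p \, \bar{N} \, \bar{H} + \bar{H}} = \frac{f_p \, \bar{N}}{f_p \, \bar{N} + 1}$$

Here $f_p$ is the fraction of civilizations that survive to reach a simulation-capable (posthuman) stage, $\bar{N}$ is the average number of ancestor simulations run by such a civilization, and $\bar{H}$ is the average number of minds that live before that stage is reached. If $f_p \bar{N}$ is large, $f_{sim}$ approaches 1--so either almost no civilizations make it ($f_p \approx 0$), almost none that do choose to run simulations ($\bar{N} \approx 0$), or almost all minds like ours are simulated.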

Many people believe AI is very likely to cause "organic species" (such as humans) to become subservient to AI or to go extinct--provided we don't go extinct from some other, non-AI cause first (1). Intuitively, the higher the likelihood of this scenario, the lower the likelihood that we would ever create a simulation or be living in one.

However, it seems there is another scenario in which humans would indeed go extinct, but the remaining AI would eventually choose to create a simulation populated by “digital-minded” humans (and an entire simulated universe with digital underpinnings).  

There are many reasons future AI might choose to do this. Just as future posthumans might be curious about their ancestors, future AI might consider humans a sort of ancestor. If traits of the human mind are not "magical," but rather substrate-independent, then curiosity is also substrate-independent, and could eventually be programmed by AI, for AI, long after humans are gone.

Furthermore, if the relationship between AI and humans comes to resemble the current relationship between humans and chimpanzees, then an AI-created digital universe might serve as a sort of "digital zoo" or "natural habitat"--one whose true nature eludes us.

I realize this general scenario has already been explored, but it seems we may have overestimated the extent to which our extinction would lower the likelihood that we exist in a simulation. (If anything, an AI-controlled world might increase that likelihood, since future AI might view ethical questions about digital minds in a different light than future humans would. On the other hand, compassion would also be substrate-independent, even if recreating it took extensive programming and hardware work. And granted, any such AI-to-AI programming would most likely develop far too late to help "base reality" humans.)

--

1-This doesn't mean we shouldn't try to prevent our own extinction, using every tool and perspective at our disposal.

--


Comments (9)

Upcoming AGI x-risk upweights the simulation hypothesis for me because...

Of all the people's lives that exist and have existed, what are the chances I'm living one of the most prosperous lives in all of humanity, only to then face the coming rapture of the entire world? Sounds like a video game / choose-your-own-adventure from another life...

"Of all the people's lives that exist and have existed, what are the chances I'm living [...here and now]"

Is there a more charitable interpretation of this line of thinking than "My soul selected this particular body out of all available"?

You being you, as you are, is a product of your body developing in the circumstances it happened to develop in.

Interestingly, J. Miller recently wrote on Twitter that if a person gives higher weight to AI risk, she should also give higher credence to the simulation hypothesis, since she believes there is a high chance of the appearance of a superintelligence capable of creating simulations.

Thanks for sharing this! It's so interesting how multiple people start having similar thoughts when the environment is right. It seems the simulation hypothesis and AI risk are inextricably linked, if only because thought experiments about one help us understand the other better.

"There are many reasons future AI might choose to do this..."

Yeah, but almost all of those reasons apply only because we taught them well. Sure, curiosity might push them to do it, but they probably wouldn't devote any significant amount of compute to it.

Even unaligned AI would create simulations of the past in order to estimate the probability of different types of AI appearing, and thus predict which types of alien AI it might meet in space or acausally trade with across the multiverse.

I don't see the probability-estimation causality here - I don't understand your priors if you're updating this way.  If we're in a simulation, the fact that we're making some progress on AI-like modeling doesn't seem to DEPEND on being in that simulation.  If we're on the "outside", and are actually in a "natural" universe, this kind of transformer doesn't seem to provide any evidence on whether we can create full-fidelity simulations in the future.

The simulation hypothesis DEPENDS on the simulation being self-contained enough that there are no in-universe tests which can prove or disprove it, AND on being detailed enough to contain agents of sufficient complexity to wonder whether it's a simulation.  Neither of those requirements are informed by current technological advances or measurements.

Note: I currently think of the simulation hypothesis as similar to MWI in quantum mechanics - it's a model that cannot be proven or disproven, and has zero impact on predicting future experiences of humans (or other in-universe intelligences).

"...this kind of transformer doesn't seem to provide any evidence on whether we can create full-fidelity simulations in the future." 

My point wasn't that WE would create full-fidelity simulations in the future. There's a decent likelihood that WE will all be made extinct by AI. My point was that future AI might create full-fidelity simulations, long after we are gone. 

"I currently think of the simulation hypothesis as similar to MWI in quantum mechanics - it's a model that cannot be proven or disproven..." 

Ironically, I believe many observable phenomena in quantum mechanics provide strong support (or what you might call "proof") for the simulation hypothesis--or at least for the existence of a deeper "information level" beneath the quantum level of our universe. Here's a short, informal article I wrote about how one such phenomenon (wave function collapse) supports the idea of an information level (if not the entire simulation hypothesis).

[EDIT: The title of the article reflects how MWI needs a supplemental interpretation involving a "deeper/information" level. From this, you can infer my point.] 

https://medium.com/@ameliajones3.14/a-deeper-world-supplement-to-the-many-worlds-interpretation-of-wave-function-collapse-54eccf4cad30

Also, the fact that something can't currently be proven or disproven does not mean it isn't true (or that it won't be "proven" in the future). That was the case, at first, for many theories, including general relativity and evolution through natural selection.