Preface: This is a rewrite of a previous post. The new version is meant to be agnostic as to the cause of a potential extinction event; instead, it focuses only on how future AI superintelligence and agency would affect the likelihood of the creation of (and our existence in) a simulation. Like the previous post, it is meant to be lighthearted and informal.

 

As Nick Bostrom, David Chalmers, Elon Musk and others have pointed out, either civilizations fail before reaching the level of technology necessary to create a full and convincing simulation (i.e., a “programmed universe”), or we are very likely to be in a simulation. (This is because a single programmer mind could create billions of programmed minds, which would skew the ratio of programmer minds to programmed minds in any universe that contains simulations; a randomly selected mind would then almost certainly be a programmed one. Bostrom also explores a third scenario in which posthumans become capable of creating simulations but none of them choose to do so, for ethical or other reasons.)
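
(A rough, back-of-the-envelope version of that ratio argument, using my own notation and a simple principle of indifference: let N_real be the number of non-simulated “programmer” minds and N_sim the number of simulated “programmed” minds. Then P(a given mind is simulated) = N_sim / (N_sim + N_real). If each programmer mind gives rise to billions of programmed minds, say N_sim = 10^9 × N_real, that probability works out to about 0.999999999.)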

Many people believe there is a high likelihood that humans will go extinct in the near future. One might assume that a higher likelihood of extinction implies a lower likelihood that we exist in a simulation, since an extinct humanity could never build one. However, there is another scenario: humans do go extinct, but the remaining superintelligent AI eventually creates a simulation populated by “digital-minded” humans (and an entire simulated universe with digital underpinnings).

There are many reasons future AI might choose to do this. Just as future posthumans might be interested in their ancestors, future AI might consider humans a sort of ancestor. If traits of the human mind are not “magical” but rather substrate-independent, then curiosity and even compassion would also be substrate-independent, and could eventually be implemented by AI programmers after humans are gone (1).

Since the relationship between AI and humans might come to resemble the current relationship between humans and chimpanzees, we might think of an AI-created digital universe as a sort of “digital zoo” or “nature preserve,” the essence of which eludes its inhabitants. If the cause of our extinction turns out to be AI-related, the “digital zoo” metaphor would be the more accurate one. If instead our extinction is caused by ourselves (nuclear war, bioweapons, climate change...), or by some other non-AI, non-human cause (meteors, cosmological events...), the “nature preserve” metaphor would be more apt, especially if Earth’s environments can no longer sustain human life. (We could also read the “nature preserve” metaphor as AI saving us from ourselves, since the server could simply be rebooted every time we manage to destroy ourselves.)

I realize these general scenarios have already been explored, but perhaps we have overestimated the extent to which our extinction would lower the likelihood of our existence in a simulation, given that future AI systems with superintelligence and agency might create simulations after humans are extinct (2).
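
(One rough way to put this, again with my own made-up notation: let E be the event that humans go extinct before building simulations, and S the event that simulations of human-like minds eventually exist, built either by us or by a surviving superintelligent AI. By the law of total probability, P(S) = P(S | not-E) × P(not-E) + P(S | E) × P(E). The usual worry implicitly treats P(S | E) as near zero, so a higher P(E) drags P(S) down. The scenario above suggests P(S | E) could be well above zero, in which case a higher chance of extinction lowers P(S), and with it the likelihood that we are in a simulation, by less than we might have assumed.)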


1-I’m not saying transformers will ever be capable of feelings like curiosity or compassion. I’m just saying future hardware and algorithms might allow for these traits. Autonomous AI communities would benefit from creating systems with these traits, just as human communities have benefited from them (they presumably confer some advantage, since evolution through natural selection produced them). Of course, a crucial question is whether humans will go extinct before or after AI systems are capable of agency, hardware creation, etc., assuming humans go extinct in the near future at all.

2-I still believe we should do as much as we can to minimize possible causes of human extinction, including AI risk. Even if we are in a simulation, there would undoubtedly be unimaginable suffering and unacceptable loss in the event of our extinction. This post is only meant to analyze probabilities having to do with human existence and the simulation hypothesis. It is not intended to advocate a particular course of action.  
