
For this question, let's assume the worst about the universe:
1. False vacuum decay can be triggered.
2. False vacuum decay will destroy the universe, i.e. inside the bubble of true vacuum, life/computation will be impossible.
3. Faster-than-light travel is impossible, i.e. it's not possible to outrun the bubble of true vacuum.

The AGI isn't aware of the above. What qualities does it need to have to realize that triggering a false vacuum decay is dangerous?
I think that an aligned AGI may be more dangerous in this case, depending on how it's aligned, since it may be restricted in the methods it can use to prevent the rest of the universe from triggering a false vacuum decay.
Since a false vacuum decay would also destroy the AGI, not just humanity, even an unaligned AGI would view false vacuum decay as a risk to itself.
However, if the AGI doesn't care about its own existence, it won't care about the dangers of false vacuum decay either.

But what if the AGI, following Occam's razor, settles on a simpler theory of physics that agrees with everything observable in the universe but rules out the possibility of false vacuum decay, and that theory turns out to be wrong?
This dilemma differs from Pascal's Wager in that false vacuum decay is a scientific hypothesis, not a religious belief.

I struggle to think of how an AGI could be programmed to tread carefully around false vacuum research without hardcoding it to reject any physics theory that leaves no possibility of false vacuum decay.
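One way to see why hardcoding might not be necessary: if the AGI keeps even a small credence in the decay-predicting theory and weighs outcomes by expected utility, the catastrophic term dominates. Here is a minimal toy sketch of that arithmetic (all credences and payoffs invented purely for illustration):

```python
# Toy expected-utility comparison; the numbers are made up.
# Theory A: the experiment is safe. Theory B: it can trigger false vacuum decay.
credence_decay_theory = 0.01          # small residual credence in theory B
credence_safe_theory = 1 - credence_decay_theory

value_of_experiment = 1e6             # gain if the experiment is harmless
loss_if_decay = -1e30                 # stand-in for "everything, including the AGI, is destroyed"

eu_run = credence_safe_theory * value_of_experiment + credence_decay_theory * loss_if_decay
eu_skip = 0.0                         # do nothing, gain nothing, risk nothing

print(f"EU(run experiment)  = {eu_run:.3e}")   # hugely negative
print(f"EU(skip experiment) = {eu_skip:.3e}")
# As long as the credence in the decay theory isn't driven all the way to zero,
# the catastrophic term dominates and the cautious choice wins.
```

The failure mode the question worries about is exactly the case where that credence does go to zero, because the simpler theory wins outright.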
 

Answer by interstice

It seems unlikely that an AGI would know enough to be able to perform experiments potentially leading to a false vacuum collapse while also not being aware of that possibility.

While I think the scenario I described is very unlikely, it nonetheless remains a possibility. 

More specifically: there might be a simpler theory of physics that explains all "naturally" occurring conditions (in a contemporary particle accelerator, in a black hole, a quasar, a supernova, etc.) but doesn't predict the possibility of false vacuum decay under some unnatural conditions an AGI might create, while a more complicated theory of physics does.

If the AGI prefers the simpler theory, that preference may lead it to trigger a false vacuum decay.
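To make that concrete, here is a toy sketch (the theories, bit counts, and likelihoods are all invented) of how an Occam-style prior, e.g. weighting theories by 2^(-description length), hands essentially all posterior mass to the simpler theory when the available observations can't tell the two apart:

```python
# Toy model selection with a simplicity prior; all numbers are invented.
# Both theories fit every observation made so far equally well, but only the
# more complicated one predicts false vacuum decay under exotic conditions.
theories = {
    "simple, no decay possible": {"bits": 100, "likelihood": 1.0},
    "complex, decay possible":   {"bits": 120, "likelihood": 1.0},
}

def posteriors(theories):
    # Occam-style prior: P(theory) proportional to 2 ** (-description length in bits).
    weights = {name: 2.0 ** -t["bits"] * t["likelihood"] for name, t in theories.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

for name, p in posteriors(theories).items():
    print(f"{name}: posterior ~ {p:.3e}")
# The simpler theory ends up with ~0.999999 of the posterior mass; the
# decay-predicting theory is effectively discarded, even though nothing
# observed so far rules it out.
```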

Comments

I don't understand the purpose behind this question.  What does it take for a non-artificial intelligence to realize it?  Why would an AGI be any different?  What's the meaning of "hardcoded" in your constraint, and why is it different from "learned"?

This is a hypothetical question about possible (albeit not the most likely) existential risks. Maybe a non-artificial intelligence can realize it too, but I'm asking about artificial intelligence because it can be programmed in different ways.

By "hardcoded" I mean forced to prefer, in this case, a more complicated physics theory that allows false vacuum decay over a simpler one that doesn't.
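As a purely hypothetical sketch of what that kind of hardcoding could look like (the floor value and function are my own invention, not a proposal): the preference is imposed in code, regardless of what the evidence says.

```python
# Toy sketch of a hardcoded preference: for safety decisions, the credence in
# the decay-predicting theory is never allowed to fall below a fixed floor.
DECAY_THEORY_FLOOR = 0.5   # arbitrary hardcoded value

def effective_credence(posterior_decay_theory: float) -> float:
    """Credence actually used when deciding whether an experiment is safe."""
    return max(posterior_decay_theory, DECAY_THEORY_FLOOR)

print(effective_credence(1e-6))  # evidence says ~0, but 0.5 is used anyway
print(effective_credence(0.9))   # strong evidence passes through unchanged
# This is the hardcoded/learned distinction asked about above: the preference
# lives in the code, not in anything the system learned from evidence.
```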