RationalSieve

Comments

What qualities does an AGI need to have to realize the risk of false vacuum, without hardcoding physics theories into it?
RationalSieve · 3y · 10

While I think the scenario I described is very unlikely, it nonetheless remains a possibility. 

More specifically: there might be a simpler theory of physics that explains all "naturally" occurring conditions (in a contemporary particle accelerator, in a black hole, quasar, supernova, etc.) but doesn't predict the possibility of false vacuum decay under some unnatural conditions that an AGI might create, while a more complicated theory of physics does.

If the AGI prefers the simpler theory, that preference may lead it to trigger false vacuum decay.
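
To make the worry concrete, here is a minimal toy sketch (hypothetical function names, parameters, and numbers, not anything from the question) of how a simplicity-penalized model-selection criterion such as BIC can favor a theory that fits every observation made under "natural" conditions, even though it disagrees with a more complicated theory in a regime no one has probed:

```python
# Toy illustration: two candidate "theories" fit the same observed data,
# but differ in a high-energy regime far outside the observations.
import numpy as np

rng = np.random.default_rng(0)

# "Natural" conditions: energies we have actually probed.
E = rng.uniform(0.0, 10.0, size=200)
observations = 2.0 * E + rng.normal(0.0, 0.1, size=E.shape)

def simple_theory(E, a):
    # One free parameter; no exotic high-energy behavior.
    return a * E

def complex_theory(E, a, b, E_crit=1e3):
    # Extra term that only matters far above observed energies
    # (standing in for "false vacuum decay becomes possible").
    return a * E + b * (E / E_crit) ** 4

def bic(residuals, n_params):
    # Bayesian information criterion: fit quality plus a complexity penalty.
    n = len(residuals)
    return n * np.log(np.mean(residuals ** 2)) + n_params * np.log(n)

a_hat = np.sum(E * observations) / np.sum(E * E)  # least-squares fit
res_simple = observations - simple_theory(E, a_hat)
res_complex = observations - complex_theory(E, a_hat, b=1.0)

print("BIC simple :", bic(res_simple, 1))
print("BIC complex:", bic(res_complex, 2))  # penalized for the extra parameter

# At an "unnatural" energy far beyond anything observed, the theories diverge.
E_unnatural = 1e4
print(simple_theory(E_unnatural, a_hat), complex_theory(E_unnatural, a_hat, b=1.0))
```

The extra term is unconstrained by the observed data, so the complexity penalty discards it, even though it is exactly the term that matters in the unobserved regime.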

What qualities does an AGI need to have to realize the risk of false vacuum, without hardcoding physics theories into it?
RationalSieve · 3y · 10

This is a hypothetical question about possible (albeit not the most likely) existential risks. Maybe a non-artificial intelligence could realize it too, but I'm talking about artificial intelligence because it can be programmed in different ways.

By "hardcoded" I mean forced to prefer, in this case, a more complicated physics theory that includes false vacuum decay over a simpler one that doesn't.

Oracle AGI - How can it escape, other than security issues? (Steganography?)
RationalSieve · 3y · 10

Hmm, I was somewhat worried about that, but there are far more dangerous things for an AI to see written on the internet.

If you're trying to create AGI by training it on a large internet crawl dataset, you have bigger problems...

To fix something, we need to know what to fix first.

Posts

1 · What qualities does an AGI need to have to realize the risk of false vacuum, without hardcoding physics theories into it? · Q · 3y · 4
3 · Oracle AGI - How can it escape, other than security issues? (Steganography?) · Q · 3y · 6