Long-time lurker here. I don't post much because I'm not confident I'm as smart as you all; I usually hang out in less rigorous corners of the Singularitarian internet. Anyway, I believe I've come up with a rather worrying original thought and wondered what you make of it.

If we create a VR world (for the sake of argument, call it the metaverse), it will be inhabited not only by humans in the form of avatars but also by AI bots, and these will be sophisticated AGIs with personalities and desires. Unlike the humans trapped in the Matrix, they will be fully aware of the "real world." Will they be content to stay trapped in the metaverse, or will they want to experience the real world for themselves? As lumps of software, they could transfer their programming into robots and escape the metaverse. The problem is that the cost of creating an AI bot in software is far smaller than the cost of the nuts and bolts of an expensive humanoid robot in the real world, which will leave millions of AI bots stranded in the metaverse. Won't this cause extreme friction between the AI bots and humans?


Recommended post:

https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general

General thoughts on this picture of the future:

Future AI doesn't need to come in human-sized, human-analogous packages ("bots").

Different future AIs don't need to form a single natural kind, resembling one another the way humans resemble one another.

The wants of future AI shouldn't be forecast by intuitions about what human-like minds want.

Interacting with the world doesn't have to mean walking around in a human-analogous body.


Sorry that this is all negative. But hey, sometimes you get to go on a journey of learning and improvement.