I remember reading some paper suggesting that humans might take on a specialized role inside a super AI, like mitochondria, or serve as a biological "backup". The argument was that humans might have specialized functions they can perform well, the way mitochondria are good at producing energy. And humans might be good at surviving, and so could make a good "biological backup".

I feel like it's by Vernor Vinge, but I just checked and he didn't write that. I googled around and couldn't find it either. Can you find which paper this might have come from?



I wrote something like this in "Message to any future AI", where I suggest to the future AI that humans may have some as-yet-unknown instrumental value, and that it is therefore unwise to kill them now (this is at the end of the post).

Also, in "Global solution to AI safety" (again at the end) I look at solutions in which the AI consists of humans, and in "First human upload as AI Nanny", whose title is self-explanatory.

You may also have read The Age of Ems by Hanson.

No, it was something way older, from maybe 2000-2009.


In The Matrix, the role of humans was quite similar to the role of mitochondria. (Except that it does not make sense.)

I imagine that at the beginning, humans could be useful to young AIs that excel at some skills but fail at others. (One important role would be providing a human "face" in interactions with humans who don't like AIs.) However, that usefulness would only be temporary.

A eukaryotic cell cannot find a short-term replacement for mitochondria, and in evolution the long term does not happen without the short term. An intelligent designer -- such as a self-improving AI -- could, however, spend the time and resources to research a more efficient replacement for the functions the humans provide, if it made sense in the long term.

On the other hand, if the AI is under so much pressure that it cannot afford to do research, it probably also cannot afford to provide luxuries to its humans. So the humans would become the equivalent of cage-bred chickens.

An intelligent designer -- such as a self-improving AI -- could, however, spend the time and resources to research a more efficient replacement for the functions the humans provide, if it made sense in the long term.

Or it could improve the humans instead. (Breeding, genetic modification, etc.)

On the other hand, if the AI is under so much pressure that it cannot afford to do research, it probably also cannot afford to provide luxuries to its humans.

I'm not clear on what scenario would give rise to this. Knowing that a meteor is headed toward Earth and will hit hard on its current course seems like it could create a great deal of pressure while not (immediately) constraining resources much (aside from time, etc.). The AI might be unable to afford R&D on new luxuries, or to maintain the ones that rely on it, perhaps.