Imagine a future where: 

  • more and more powerful AIs are created by humans; 
  • they can mimic human intellectual abilities and speculate like super-geniuses;
  • they are put to work on scientific and technological research, producing theories and technologies that no human is intelligent enough to understand;
  • humans become dependent on this advanced science and technology, and therefore need the AIs that produce it.

My question is: in this scenario, is there a way for humans to keep their dominion over these AIs? Could they embed these AIs with a submissive "personality"? Would that be enough to prevent them from using their superior understanding of reality to manipulate humans, even while remaining submissive and obedient?

3 Answers

Dagon

Humans have a long history of domination over other humans, even with somewhat significant differences in intelligence.  Geniuses pay taxes and follow laws made and enforced by merely competent people.  

We have no clue if bigger differences in reasoning power are possible to overcome or not, and we have no reason to believe that the social and legal/threat pressure which works for humans (even very smart sociopaths) will have any effect on an AI.

If superintelligence is general and includes any compatible ideas of preferences and identity, it seems to me that they ARE people, and we should care about them at least as much as humans.  If it's more ... alien ... than that, and I suspect it will be, then it's not clear that coexistence is long-term feasible, let alone dominance of biologicals.

M. Y. Zuo

Geniuses pay taxes and follow laws made and enforced by merely competent people.

Many 'merely competent people', assembled together and organized toward a common goal, seem to qualify as a superintelligence.

Dagon
I kind of agree.  Is that worthy of a top-level answer to this question?  
M. Y. Zuo
What do you mean by 'worthy'?
Dagon
I mean, should you expand your model of human groups being superintelligent, and apply that to the question of how humans can dominate an AI superintelligence?  

Jon Garcia

No.

The ONLY way for humans to maintain dominion over superintelligent AI in this scenario is if alignment had been solved long before any superintelligent AI existed. And only then if that alignment solution were tailored specifically to produce robustly submissive motivational schemas for AGI. And only then if this solution were provably scalable to an arbitrary degree. And only then if this solution were universally well-enforced.

Even then, though, it's not really dominion. It's more like having gods who treat the universe as their playground but who also feel compelled to make sure their pet ants feel happy and important.

Richard_Kennaway

My question is: in this scenario is there a way for humans to keep their dominion over these AIs?

Nobody yet knows. That is the alignment problem.

Thank you for the reference.