LESSWRONG
Technological Forecasting · AI
Frontpage

-5

[ Question ]

How could humans dominate over a super intelligent AI?

by Marco Discendenti
27th Jan 2023
1 min read


3 Answers, sorted by top scoring

Dagon

Jan 27, 2023


Humans have a long history of dominating other humans, even across fairly significant differences in intelligence. Geniuses pay taxes and follow laws made and enforced by merely competent people.

We have no clue whether much larger gaps in reasoning power can be overcome, and no reason to believe that the social and legal/threat pressure that works on humans (even very smart sociopaths) will have any effect on an AI.

If a superintelligence is general and has anything we would recognize as preferences and identity, it seems to me that it IS a person, and we should care about it at least as much as we care about humans. If it's more ... alien ... than that, and I suspect it will be, then it's not clear that coexistence is feasible long-term, let alone dominance by biologicals.

M. Y. Zuo · 3y

Geniuses pay taxes and follow laws made and enforced by merely competent people.  

Many 'merely competent people', assembled and organized toward a common goal, seem to qualify as a superintelligence.

Dagon · 3y
I kind of agree. Is that worthy of a top-level answer to this question?
M. Y. Zuo · 3y
What do you mean by 'worthy'?
Dagon · 3y
I mean, should you expand your model of human groups being superintelligent, and apply that to the question of how humans can dominate an AI superintelligence?  

Jon Garcia

Jan 28, 2023


No.

The ONLY way for humans to maintain dominion over superintelligent AI in this scenario is if alignment was solved long before any superintelligent AI existed. And only then if this alignment solution were tailored specifically to produce robustly submissive motivational schemas for AGI. And only then if this solution were provably scalable to an arbitrary degree. And only then if this solution were well-enforced universally.

Even then, though, it's not really dominion. It's more like having gods who treat the universe as their playground but who also feel compelled to make sure their pet ants feel happy and important.


Richard_Kennaway

Jan 28, 2023


My question is: in this scenario is there a way for humans to keep their dominion over these AIs?

Nobody yet knows. That is the alignment problem.

Marco Discendenti · 3y

Thank you for the reference.


Imagine a future where: 

  • more and more powerful AIs are created by humans; 
  • they can mimic human intellectual abilities and speculate like super-geniuses;
  • they are put to work on scientific and technological research, producing theories and technologies that no human is intelligent enough to understand;
  • humans become dependent on this advanced science and technology and need AIs to keep it running.

My question is: in this scenario is there a way for humans to keep their dominion over these AIs? Could they possibly embed these AIs with a submissive "personality"? Would this be enough to prevent them from using their superior understanding of reality to manipulate humans, even while remaining submissive and obedient?