It is reasonably clear that not just a super-human AGI but "just" a human-level AGI that can "clone itself" could defeat humanity through concerted effort and determination, should it choose to. I wonder how much less intelligent than an average human an AGI can be and still have no trouble making short work of humanity? An army of e-chimps? An army of e-chipmunks?


7 Answers

JBlack

Jun 12, 2022


I don't think literal chimp minds could do it, as they probably don't have enough forethought to predict and avoid bad consequences of their actions. Suppose they could somehow freely propagate through all computer networks and infest everything more complex than a toaster: they could certainly collapse civilization and kill most of us, but would almost certainly kill all of themselves in the process.

At least individually, they won't be devising ways to run all the processes required to make more hardware to support their existence. There might be some way to combine e-chimp minds into collectives that can make long-term plans and learn how to use technology outside any inbuilt interfaces they might have, but I think it would not be a simple structure, and don't think they'd be able to find out how to do it themselves. They certainly wouldn't be able to use our structures, so it would require quite some trial and error without being able to employ many of the more powerful optimization methods that humans can.

I think there is a threshold of intelligence below which an army of AI agents will be very much less effective than humans, and it's probably within the low human range. I can imagine everything at least as complex as a phone suddenly acquiring copies of an emulated homicidal human mind with IQ 60, with more powerful computing devices just running more of them and somewhat faster (up to, say, 10x speed). I don't think that would extinguish humanity even if they had some inherent coordination advantages.

I'm not 100% sure that IQ 100 would - it really depends upon how well they can manage to coordinate with each other. If they can do so vastly better than average humans usually coordinate with each other (even when they have common goals), then I'm pretty sure that would suffice. I'm just not sure how much extra coordination capability you can grant them and still consider them to be at a cognitively average human level. Effective coordination is a cognitive task that most groups of humans find very difficult.

Thank you for engaging with the actual question, unlike the other comments! What you seem to be gesturing at is a phase transition from "too dumb to be x-risk dangerous, even in a large group" to "x-risk-level dangerous". I think this phase transition, or lack thereof, would be worth studying, for two reasons:

  • it is something we CAN study effectively, because we don't have to reason about intelligences smarter than ourselves.
  • it is something that can become an x-risk WAY EARLIER than a super-intelligent AGI.

Additionally, there is a fair chance of staving off "dying without dignity" from accidentally unleashing something that could have been prevented.

Lone Pine

Jun 13, 2022


Having a one-dimensional IQ model is really limiting here, in my opinion. Let me propose a three-dimensional model:

  1. Technical IQ: How sophisticated is the AI in its ability to design novel technologies (nanotech or an extinction plague) or hack into semi-secured systems?
  2. Competence: How competent is the AI in its ability to make and execute plans in complex domains, and adapt when things don't go to plan?
  3. Hubris: How competent does the AI perceive itself to be, relative to its actual abilities?

Since it is impossible to estimate your own competence when you're a bright young model fresh off the GPU, it seems likely that we could have highly incompetent, hubristic systems that try and fail to take over the world. Hubris would presumably not be due to ego, as in humans, but a misaligned AI might decide that it must act 'now or never' and just hope its competence is sufficient.

It's also possible to imagine a system that has effectively zero technical IQ, but is able to take over the world using only existing technology and extreme competence. In order for this to be possible, I think we need more automation in the environment. If there were a fully automated factory to produce armed drones, that would be sufficient.

Right, thinking of intelligence as one-dimensional is quite limiting. I wonder if there are accepted dimensions of, and ways of measuring, general intelligence that can be applied to AI.

avturchin

Jun 12, 2022


Even a narrow AI that designs weapons very effectively could kill us all.

trevor

Jun 12, 2022


I recommend watching All the President's Men and Captain America 2: Winter Soldier (idk which order, a coin flip would be best). Those two films fold together to do a pretty good job of illustrating how vulnerable human institutions are to complex internal conflict.

It's better to look at the human angle than the AI angle, since a semi-intelligent "dumb" AGI would probably have random capabilities (e.g. deepfaked emails that pass the Turing test for posing as that particular sender and recipient) and random comprehension of the internal and external world (e.g. accidentally believing that animals count as humans due to weak models of non-human minds).

Viliam

Jun 12, 2022


A virus is less intelligent than a human, and the right kind of virus could destroy humanity. You need something that spreads quickly by air and surface contact, seems perfectly harmless for a few months, and then kills with 90% probability.

Not sure if the virus itself is the answer to your question, or a narrow AI that develops it.

trevor

Jun 12, 2022


I think history books have the answer here. Human institutions are somewhat likely to stumble over each other, and/or hyperprioritize internal threats (sometimes both at once).

This is one of the reasons why SIGINT and HUMINT are such a big deal: if an enemy agent or asset ended up in the wrong place, all they'd have to do is give a little push and watch the adversary's institutions go into anaphylactic shock.

burmesetheater

Jun 12, 2022


How much real power does the AI have access to, and what can humans do about it?

To reframe your question, even relatively small differences in human intelligence appear to be associated with extraordinary performance differences in war: consider the Rhodesian Bush War, or the Arab-Israeli conflict. Both sides of each conflict were relatively well-supplied and ideologically motivated to fight. In both cases there was also a serious intellectual giftedness gap (among other things) between the competing populations, and the more intelligent side won battles easily and with very lopsided casualties, although in the case of Rhodesia the more intelligent population eventually lost the war due to other externalities associated with the political realities of the time.

If humanity is aligned, and the less-smart-than-human AI doesn't have access to extraordinary technology or other means to grant itself invulnerability and/or quickly kill a critical mass of humans before we can think our way around it, then humans should win easily. It is difficult to imagine a less-intelligent-than-human AI reliably obtaining such hedges without human assistance.

I see where you are coming from, but I don't think comparing an adversarial AI/human interaction with a human/human interaction is fruitful. Even a "stupid" AI thinks differently than a human would, in that it considers options no human would ever think of or take seriously. Self-cloning and not having to worry about losses are further advantages humans don't have.

I would start by asking a question that no one at MIRI or elsewhere seems to be asking:

What might an irate e-chimp do if its human handler denied it a banana?

(I.e., what are the dangers of gain-of-function research on sub-human A[G]I?)

burmesetheater
A stupid AI that can generate from thin air things that both have useful predictive power and can't be thought of by humans, AND that can reliably employ the fruits of these ideas without humans getting suspicious or mounting a defense... isn't that stupid. This AI is now a genius. Who cares? For one, if we're talking about an AI and not a chimp em, this is an obvious engineering failure: creating something with all the flaws of an evolved entity, with motivational pressures extraneous and harmful to its users. In other words, this is a (very) light alignment problem that can be foreseen and fixed.