Consensus seems to be that there is no fire alarm for artificial intelligence. We may see the smoke and think nothing of it. But at what point do we acknowledge that the fire has entered the room? To put it less metaphorically: what would have to happen to convince you that a human-level AGI has been created?

I ask this because it isn’t obvious to me that there is any level of evidence which would convince many people, at least not until the AGI is beyond human levels. Even then, it may not be clear to many that superintelligence has actually been achieved. For instance, I can easily imagine the following hypothetical scenario:


OpenAI announces a future GPT-N that scores a perfect 50% on a digital Turing test (meaning nobody can tell whether a given sample was written by a human or by GPT-N). Let’s imagine they do the responsible thing and don’t publicly release the API. My intuition is that most people will not enter panic mode at that point, but will instead do one of the following:

  1. Assume that this is merely some sort of publicity stunt, with the test being artificially rigged in some way.
  2. Say something like “yes, it passed the Turing test, but that doesn’t really count because [insert reason x], and even if it did, that doesn’t mean it will generalize to domains outside of [domain y that GPT-N is assumed to be confined to].”
  3. Claim that being a good conversationalist does not fully capture what it means to be intelligent, and thereby dismiss the news as being yet another step in the long road towards “true” AGI.

The next week, OpenAI announces that the same model has solved a major open problem in mathematics, one that a number of human mathematicians had previously claimed wouldn’t be solved this century. I predict that a large majority of people (though probably few in the rationalist community) would not view this as indicative of AGI, either.

The next week, GPT-N+1 escapes and takes over the world. Nobody has an opinion on this, because they’re all dead.


This thought experiment leads me to ask: at what point would you be convinced that human-level AGI has been achieved? What about superhuman AGI? Additionally, at what point would you expect the average (non-rationalist) AI researcher to accept that they’ve created an AGI?
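(A brief technical aside on the “perfect 50%” in the scenario above: a 50% score just means that judges do no better than chance when labelling samples as human-written or model-written. Below is a minimal, purely illustrative sketch of how such a result might be checked statistically; the trial counts are made up, and the use of a simple binomial test is my own assumption rather than part of any actual evaluation protocol.)

```python
# Illustrative sketch only: checking whether Turing-test judges beat chance.
# The numbers below are hypothetical, not from any real evaluation.
from scipy.stats import binomtest

n_verdicts = 1000   # hypothetical total judge verdicts ("human" vs. "GPT-N")
n_correct = 503     # hypothetical number of correct identifications

# Two-sided test of H0: judges identify the author correctly with p = 0.5.
result = binomtest(n_correct, n_verdicts, p=0.5, alternative="two-sided")

print(f"judge accuracy: {n_correct / n_verdicts:.1%}")   # ~50.3%
print(f"p-value vs. chance: {result.pvalue:.3f}")
# A large p-value means the judges are statistically indistinguishable from
# coin-flippers -- i.e. "a perfect 50%" in the sense used above (for these
# judges, prompts, and sample size; it is not proof of indistinguishability).
```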


3 Answers

p.b.

Mar 31, 2022

60

I think a Turing test grounded in something visual would be very convincing to me and many other people. 

I.e. not just the ability to do small talk or talk about things that have been discussed extensively on the internet, but to talk sensibly about a soccer game in progress, a live chess game, a movie being shown for the first time, a fictional map of a subway system, a painting, reality TV, etc.

I think visual grounding would cut out a lot of faking. 

Viliam

Mar 31, 2022

50

Some "superpowers" are an existential risk, such as designing a virus that quickly kills all humans.

Some "superpowers" are horrifying on a System 1 level, but not actually an existential risk; like building dozen giant robots that will stomp on buildings and destroy a few cities, but will ultimately be destroyed by an army.

If we get lucky and the AI develops and uses the latter kind of "superpowers" first, we might get scared before we die.

burmesetheater

Mar 30, 2022

-20

at what point would you expect the average (non-rationalist) AI researcher to accept that they’ve created an AGI?

Easy answers first: the average AI researcher will accept it when others do.

at what point would you be convinced that human-level AGI has been achieved?

When the preponderance of evidence is heavily weighted in this direction. In one simple class of scenarios, this would involve unprecedented progress in areas limited by things like human attention, memory, I/O bandwidth, etc. Some of these would likely not escape public attention. But there are a lot of directions AGI can go.

Could you give a specific hypothetical if you have the time? What would be a specific example of a scenario that you’d look at and go “welp, that’s AGI!”? Asking since I can imagine most individual accomplishments being brushed away as “oh, guess that was easier than we thought.”

2 comments
I ask this because it isn’t obvious to me that there is any level of evidence which would convince many people, at least not until the AGI is beyond human levels.

Yep. I expect the point of no return to come and go whilst even many EAs/longtermists/AI-risk-reducers etc. are still saying AGI is years away. This is because I expect the point of no return to come when AI reaches superhuman performance on some world-takeover-ability metric which we won't be tracking, not when it reaches superhuman performance in all domains. It'll still be worse than humans at several important tasks when the point of no return is crossed, and so there will still be smart humans pointing to those deficiencies and saying AGI is far away.

You bring up a really interesting point here, and one I don't think I've seen discussed explicitly before. We certainly all know the stereotype of fellow humans who are geniuses in some specific domains but surprisingly incompetent in others. It stands to reason the same may happen with AGI (and already has happened, in the domain of tool AI).

[The following is somewhat rambling and freeform, feel free to ignore past this point]

Thinking about this a bit, the one skillset that I think will matter most to an AI's potential to cause harm is agency, or creativity of a certain type. Maybe another way to put it (idk if the concepts here are identical) is the ability to "think outside the box" in a particularly fundamental manner, perhaps related to what we tend to think of as "philosophizing". In other words, trying to figure out the shape of the box we're in, so as to either escape, reshape, or transcend it (nirvana/heaven/transhumanism). This may also require some understanding of the self, or the ability to locate the map in the territory.

A potential consequence of this line of thought is that non-self-aware AIs are less likely to escape their boxes, or vice versa (does escape often require understanding of the self?). If an advanced AI never thinks to think (or cannot think) beyond the box, then even if its perceived box is different from what we intended it to be, it will still be limited, and therefore controllable.

Something like Deep Blue, no matter how much computing power you give it, will never create a world-model beyond the chess game, even though it is smarter than us at chess, and could play more chess if it could figure out how to disable the "off" button. I'm much less confident that would be the case with GPT-N, or really any model aiming for a sufficiently generalizable skillset (including skillsets not optimized to pass the Turing test, but generalist enough in other directions).

What could be done to test this hypothesis, outside of building AGI? The most obvious direction to start looking is in animals, as an animal which has no model of itself but is still clearly intelligent would argue strongly against the above idea. I'm pretty sure that's an open problem, though...