Calls for tabooing terms can perhaps become a bit inflationary. Still, talking about whether and when AI will reach or surpass “human-level intelligence” frames the discussion in a misleading way, so I recommend avoiding the term.

What is "human-level intelligence" and when will AI surpass it? 

My understanding is that human intelligence spans a wide spectrum, and when people discuss AI capabilities and threats, it is important to know where in that spectrum the AI in question is supposed to sit.

Moreover, intelligence is not a one-dimensional concept. Yes, there is the g factor and the like, but those describe human brains. Even an ordinary calculator can, well, calculate much faster than humans. And to be dangerous, an AI does not need to be better than all people in every discipline. Does the AI have to be a great composer, or an art historian? No.

Yet the vagueness of the concept of "human-level intelligence" invites further misleading ways of assessing AI capabilities. First, dismissing domain-specific abilities: "Sure, this AI is great at explaining jokes, but it cannot invent new salsa recipes, which is something humans can do." True, but the human brain is itself quite modular, with subsystems doing different things, so why should an AI not likewise combine different specialized systems? Second, comparing against the best human experts in some domain: "Look, this AI may win a game of Diplomacy, but not against the best players." Fine, but if human-level intelligence means being the best at everything, then no human has human-level intelligence. Besides, measuring against the best experts would only be the relevant bar if humanity coordinated itself perfectly against the AI.
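To make the multidimensionality concrete, here is a toy sketch in Python. Every domain name and percentile number is invented for illustration; the point is only that once capability is a profile over domains rather than a scalar, "surpasses human-level" has no single answer.

```python
# Capabilities as percentiles relative to the human distribution
# (all domain names and numbers invented for illustration).
median_human = {"arithmetic": 50, "composing": 50, "persuasion": 50, "biochemistry": 50}
ai_system    = {"arithmetic": 99, "composing": 10, "persuasion": 95, "biochemistry": 98}

ahead  = [d for d in median_human if ai_system[d] > median_human[d]]
behind = [d for d in median_human if ai_system[d] < median_human[d]]

print("ahead of the median human in:", ahead)   # arithmetic, persuasion, biochemistry
print("behind the median human in:", behind)    # composing
# Neither profile dominates the other, so no scalar threshold settles
# which one is "smarter"; the mixed profile can still be the dangerous one.
```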

Additionally, to be dangerous, an AI does not have to be conscious. But consciousness is something else that the term "human-level intelligence" may suggest. Why, when the term says nothing about consciousness? Because the lack of concreteness about what human intelligence is invites people to substitute something they believe exists in every human brain, regardless of intelligence, and that is consciousness. Saying that AI transcending human-level intelligence is the relevant threshold may then sound to people as if the real danger were AIs developing consciousness.

So avoid the term "human-level intelligence" if you can.


Thanks to Justis Mills for feedback on a draft of this post.


The question "Human/Machine Intelligence Parity by 2040?" on Metaculus sets a pretty high bar for human-level intelligence:

Assume that prior to 2040, a generalized intelligence test will be administered as follows. A team of three expert interviewers will interact with a candidate machine system (MS) and three humans (3H). The humans will be graduate students in each of physics, mathematics and computer science from one of the top 25 research universities (per some recognized list), chosen independently of the interviewers. The interviewers will electronically communicate (via text, image, spoken word, or other means) an identical series of exam questions of their choosing over a period of two hours to the MS and 3H, designed to advantage the 3H. Both MS and 3H have full access to the internet, but no party is allowed to consult additional humans, and we assume the MS is not an internet-accessible resource. The exam will be scored blindly by a disinterested third party.

Question resolves positively if the machine system outscores at least two of the three humans on such a test prior to 2040.
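The quoted rule reduces to a simple check. A minimal sketch, assuming the scores come from the blinded third-party grader (the function name and example numbers are mine):

```python
def resolves_positively(ms_score: float, human_scores: list[float]) -> bool:
    """True if the machine system outscores at least two of the three humans."""
    return sum(ms_score > h for h in human_scores) >= 2

print(resolves_positively(71.0, [65.0, 80.0, 68.0]))  # True: outscores two of three
```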

(I graduated in physics from a top-25 research university, and I'm not at all confident I'd pass this test myself.)

In any case, I wonder if it's better not to focus overly on the question of "the right operational definition of human-level intelligence" and instead adopt Holden's approach of talking about PASTA, in particular the last two sentences:

By "transformative AI," I mean "AI powerful enough to bring us into a new, qualitatively different future." The Industrial Revolution is the most recent example of a transformative event; others would include the Agricultural Revolution and the emergence of humans.2

This piece is going to focus on exploring a particular kind of AI I believe could be transformative: AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement. I will call this sort of technology Process for Automating Scientific and Technological Advancement, or PASTA. (I mean PASTA to refer to either a single system or a collection of systems that can collectively do this sort of automation.) ... [some paragraphs on what PASTA can do]

By talking about PASTA, I'm partly trying to get rid of some unnecessary baggage in the debate over "artificial general intelligence." I don't think we need artificial general intelligence in order for this century to be the most important in history. Something narrower - as PASTA might be - would be plenty for that.

The Metaculus definition is very interesting, as it is quite different from what M. Y. Zuo suggested as the natural interpretation of "human-level intelligence".

I like the PASTA suggestion, thanks for quoting that! However, I wonder whether that bar is a bit too high.

I've always understood the term to be shorthand for "median human-level intelligence", or perhaps "mean human-level intelligence".

My interpretation is pretty similar, though perhaps unimportantly broader and more task-based: something like performing between the 10th and 100th percentile of human capability at cognitive tasks in the context of discussion. A calculator certainly doesn't qualify; it misses in both directions. It performs worse than humans at almost every cognitive task, yet in contexts restricted to certain very narrow arithmetic tasks it performs superhumanly.
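As a rough sketch of this task-based reading (the 10th-percentile floor is the one proposed above; the function, scoring, and sample are otherwise illustrative):

```python
def classify(ai_score: float, human_scores: list[float]) -> str:
    """Place an AI's performance on one task, in one context, against the human range."""
    ranked = sorted(human_scores)
    floor = ranked[int(0.10 * (len(ranked) - 1))]  # rough 10th-percentile human
    if ai_score > ranked[-1]:
        return "superhuman"   # beats every human in the sample
    if ai_score < floor:
        return "subhuman"     # below the proposed 10th-percentile floor
    return "human-level"

# A calculator misses in both directions: "superhuman" when the context is
# narrow arithmetic, "subhuman" on almost every other cognitive task.
```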

In a narrow Diplomacy-playing context, the recent bots are definitely human-level. Better than many humans (including the bottom 10% of people who have ever played the game), but not better than the best. Good chess programs are superhuman at their narrow domain, but utterly subhuman at everything else.

State-of-the-art LLMs broadly display pretty much human-level intelligence within their context, but with certain strengths and weaknesses somewhat outside the human range.

This multidimensionality is exactly why I think the term "human-level intelligence" should not be used. My impression is that it suggests a one-dimensional kind of ability with a threshold at which the quality changes drastically; and the term even seems to place that threshold at a level that is, in fact, not decisive.

Yes, that's fair enough. It's not like we have any examples of systems that have human-level intelligence in a broad context for the term to apply to anyway.

I do still think it's a useful term for hypothetical discussions, referring to systems that are neither obviously subhuman nor obviously superhuman in broad capabilities. It is possible that such systems will never exist. If we develop superintelligence, it may be via systems that are always obviously subhuman in some respects and superhuman in others, via a discontinuity in capability, or via other, even stranger paths.

I considered adding this possible interpretation. However, I do not see why creating an AI that crosses this particular threshold would be especially meaningful, as opposed to, say, one with an IQ of 95, 107, or 126. Being "smarter" than 50% of humanity does not seem to constitute a discrete jump in risk.