This is a special post for quick takes by Simon Möller. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.


"Human-level AGI" is no longer a useful concept. Many people define both "human-level AGI" and AGI as a system (or a combination of systems) that can accomplish any cognitive task at least as well as a human.

That's reasonable, but the "human-level" qualifier seems misleading to me. It anchors us to the idea that the system will be "somewhat like a human", which it won't be. So let's drop the qualifier and just talk about AGI.

Comparing artificial intelligence to human intelligence was a somewhat meaningful way to gesture in a general direction while we were still far from it along many dimensions.

But large language models are already superhuman on several dimensions (e.g. they know more about most topics than any single human and think "faster") and inferior on others (e.g. strategic planning, long-term coherence). By the time they reach human level on all dimensions, they will be superhuman overall.