Simon Möller's Shortform

by Simon Möller
4th Mar 2023

"Human-level AGI" is not a useful concept (any more). I think many people equate human-level AGI and AGI (per definition) as a system (or a combination of systems) that can accomplish any (cognitive) task at least as well as a human.

That's reasonable, but the "human-level" qualifier seems misleading to me. It anchors us to the idea that the system will be "somewhat like a human", which it won't be. So let's drop the qualifier and just talk about AGI.

Comparing artificial intelligence to human intelligence was a meaningful way to gesture in a general direction while AI was still far below human ability along many dimensions.

But large language models are already superhuman on some dimensions (e.g. they know more about most topics than any single human and "think" faster) and inferior on others (e.g. strategic planning, long-term coherence). By the time they reach human level on all dimensions, they will be superhuman overall.
