Today's post, Above-Average AI Scientists, was originally published on 28 September 2008. A summary (taken from the LW wiki):

A lot of AI researchers aren't really all that exceptional. This is a problem, though most people don't seem to see it.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Competent Elites, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

3 comments

Is there a difference between what AI researchers are trying to create and what EY calls AGI?

I would be happy to have a piece of equipment that can learn a language by listening to it; an AI doesn't need to be a fully realized AGI to be good at any particular task we can imagine.

It would be a breakthrough in AI development if we could program a robot to be as useful as a below-average minimum-wage worker. The singularity is not needed to create programs that can fly aircraft with no more than 10 times the failure rate of human pilots, and software that can understand and adapt to a limited range of situations need not be a step towards the singularity.

Yes, what "AGI" refers to on LW is typically something far more powerful than what you describe. And yes, this causes confusion from time to time, when people conflate it with artificial intelligence in general (I've made this error more than once), which is one reason I often try to talk about "superhuman optimizers" rather than "AIs" in the LW context. (I often fail.)

The post also implies that you can't necessarily extrapolate future researchers' comprehension of FAI theory from present researchers, if a breakthrough occurs that repopulates the field with Norvig-class minds.

Maybe one thing to work on, then, is figuring out what breakthrough might do that, and then working toward it.