This is a linkpost to a list of skeptical takes on AI FOOM. I haven't read them all and probably disagree with some of them, but it's valuable to put these arguments in one place. 


It seems nice to have these in one place, but I'd love it if someone highlighted a top 10 or something.

Thanks, this is quite useful.

Still, it is rather difficult to imagine that they are right, since the standard argument for foom seems quite compact:

Consider an ecosystem of human-equivalent artificial software engineers and artificial AI researchers. Take a population of them and set them to work on producing a better, faster, more competent next generation of artificial software engineers and AI researchers. Repeat with that population of better, faster, more competent entities, and so on... If this process saturates, it would probably saturate very far above human level...
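To make the dynamics concrete, here is a minimal toy model of the loop (the starting capability, per-generation gain, and decay rate are all made-up illustrative numbers, not estimates):

```python
# Toy model of the recursive-improvement loop sketched above.
# capability = 1.0 means "human-equivalent"; gain and decay are
# hypothetical parameters, not estimates of anything real.

def run_loop(capability=1.0, gain=0.5, decay=0.9, generations=50):
    """Each generation of artificial researchers builds the next.

    The per-generation improvement is proportional to current
    capability but shrinks geometrically (decay < 1), so the loop
    saturates instead of diverging.
    """
    for g in range(generations):
        capability *= 1.0 + gain * decay**g
        if g % 10 == 0:
            print(f"generation {g:2d}: {capability:7.2f}x human level")
    return capability

print(f"saturates near {run_loop():.0f}x human level")
```

Even with returns diminishing this aggressively, the fixed point lands well above 1x; the substantive question is whether real AI R&D has anything like these dynamics.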

(Of course, if people still believe that human-equivalent artificial software engineers and human-equivalent artificial AI researchers are a tall order, then skepticism is quite justified. But it's getting more and more difficult to believe that...) 

> If this saturates, it would probably saturate very far above human level...

Foom is a much stronger claim than this. It says there will be an incredibly fast, localized intelligence explosion driven by a single AI system improving itself. Your scenario of an "ecosystem" of independent AI researchers working together sounds more like the "slow" takeoff of Christiano or Hanson than EY-style fast takeoff.

That depends on the dynamics, not on whether it is localized or distributed. E.g. if it includes a takeover of a large part of the Internet, it will end up very distributed, so presumably a successful foom will get more distributed as it unfolds... But initially a company will have it on its own local cluster, so it might be fairly localized for a while, depending on how they structure it...

(The monolithic abstractions, like a "singleton", are very questionable. Even a single human is fruitfully decomposed into a "society of mind", following Minsky. It might look "monolithic" or like a "singleton" from the outside, but it will have all kinds of non-trivial internal dynamics, internal discourse, internal disagreements, and so on; this rich internal structure might be somewhat observable from the outside, or it might be hidden.)

The real uncertainty is time: what timeframe for an "intelligence explosion" are people ready to call "foom" vs. slow takeoff? https://www.lesswrong.com/tag/ai-takeoff puts this boundary between months and years:

> A soft takeoff refers to an AGI that would self-improve over a period of years or decades.
>
> A hard takeoff (or an AI going "FOOM" [2]) refers to AGI expansion in a matter of minutes, days, or months.

I certainly don't think the scheme I described would work in minutes; I am less certain about days, and I am mostly thinking in terms of weeks (months do feel a bit too long to me, although who knows).
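For a rough sense of why weeks feel like the natural scale, here is a back-of-the-envelope sketch (both the per-generation gain and the generation length are hypothetical numbers, chosen only to show how they combine):

```python
import math

def takeoff_days(target=10.0, gain_per_gen=0.2, days_per_gen=3.0):
    """Days until capability multiplies by `target`, given a fixed
    fractional gain per generation. All numbers are hypothetical."""
    generations = math.log(target) / math.log(1.0 + gain_per_gen)
    return generations * days_per_gen

# 20% gain per 3-day generation: ~13 generations, ~38 days (weeks, not months)
print(f"{takeoff_days():.0f} days to a 10x capability jump")
```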