Today's post, Hard Takeoff, was originally published on 02 December 2008. A summary (taken from the LW wiki):


It seems likely that there will be a discontinuity in the process of AI self-improvement around the time when AIs become capable of doing AI theory. A lot of things have to go exactly right in order to get a slow takeoff, and there is no particular reason to expect them all to happen that way.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Whither Manufacturing?, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
