AI Takeoff

Edited by Ruby, Multicore, et al. last updated 30th Dec 2024

AI Takeoff is the process of an Artificial General Intelligence going from a certain threshold of capability (often discussed as "human-level") to being superintelligent and capable enough to control the fate of civilization. There has been much debate about whether AI takeoff is more likely to be slow ("soft") or fast ("hard").

See also: AI Timelines, Seed AI, Singularity, Intelligence explosion, Recursive self-improvement

AI takeoff is sometimes casually referred to as AI FOOM.

Soft takeoff

A soft takeoff refers to an AGI that would self-improve over a period of years or decades. This could be because the learning algorithm is too computationally demanding for available hardware, or because the AI relies on feedback from the real world that would have to play out in real time. Possible methods that could deliver a soft takeoff, by slowly building on human-level intelligence, are Whole brain emulation, Biological Cognitive Enhancement, and software-based strong AGI [1]. By maintaining control over the AGI's ascent, it should be easier for a Friendly AI to emerge.

Vernor Vinge and Hans Moravec have expressed the view that a soft takeoff is preferable to a hard takeoff, as it would be both safer and easier to engineer.

Hard takeoff

A hard takeoff (or an AI going "FOOM" [2]) refers to AGI expansion in a matter of minutes, days, or months. It is a fast, abrupt, local increase in capability. This scenario is widely considered much more precarious, as it involves an AGI rapidly ascending in power without human control. This may result in unexpected or undesired behavior (i.e., Unfriendly AI). It is one of the main ideas supporting the Intelligence explosion hypothesis.

The feasibility of hard takeoff has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom, and Michael Anissimov. It is widely agreed that a hard takeoff is something to be avoided due to the risks involved. Yudkowsky points out several factors that would make a hard takeoff more likely than a soft takeoff, such as the existence of large resource overhangs, or the observation that small improvements can have a large impact on a mind's general intelligence (e.g., the small genetic difference between humans and chimps led to huge gains in capability) [3].

Notable posts

  • Hard Takeoff by Eliezer Yudkowsky

External links

  • The Age of Virtuous Machines by J. Storrs Hall, President of The Foresight Institute
  • Hard Takeoff Hypothesis by Ben Goertzel
  • Extensive archive of hard takeoff essays from Accelerating Future
  • Can we avoid a hard takeoff? by Vernor Vinge
  • Robot: Mere Machine to Transcendent Mind by Hans Moravec
  • The Singularity is Near by Ray Kurzweil

References

  1. http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html
  2. http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/
  3. http://lesswrong.com/lw/wf/hard_takeoff/
Posts tagged AI Takeoff

  • AlphaGo Zero and the Foom Debate by Eliezer Yudkowsky
  • New report: Intelligence Explosion Microeconomics by Eliezer Yudkowsky
  • Arguments about fast takeoff by paulfchristiano
  • Discontinuous progress in history: an update by KatjaGrace
  • Quick Nate/Eliezer comments on discontinuity by Rob Bensinger
  • Will AI See Sudden Progress? by KatjaGrace
  • Will AI undergo discontinuous progress? by Sammy Martin
  • Soft takeoff can still lead to decisive strategic advantage by Daniel Kokotajlo
  • Takeoff Speeds and Discontinuities by Sammy Martin and Daniel_Eth
  • Against GDP as a metric for timelines and takeoff speeds by Daniel Kokotajlo
  • Yudkowsky and Christiano discuss "Takeoff Speeds" by Eliezer Yudkowsky
  • Modelling Continuous Progress by Sammy Martin
  • Conjecture internal survey: AGI timelines and probability of human extinction from advanced AI by Maris Sala
  • Distinguishing definitions of takeoff by Matthew Barnett
  • Why all the fuss about recursive self-improvement? by So8res