Intelligence explosion

Edited by Alex_Altair, joaolkf, Swimmer963 (Miranda Dixon-Luinenburg), Vladimir_Nesov, RobertM, et al. last updated 19th Feb 2025

An "intelligence explosion" is what happens if a machine intelligence has fast, consistent returns on investing work into improving its own cognitive powers, over an extended period. This would most stereotypically happen because it became able to optimize its own cognitive software, but could also apply in the case of "invested cognitive power in seizing all the computing power on the Internet" or "invested cognitive power in cracking the protein folding problem and then built nanocomputers".

A strong version of this idea suggests that once the positive feedback starts to play a role, it will lead to a very dramatic leap in capability very quickly. This is known as a “hard takeoff.” In this scenario, technological progress drops into the characteristic timescale of transistors rather than human neurons, and the ascent rapidly surges upward and creates superintelligence (a mind orders of magnitude more powerful than a human's) before it hits physical limits. A hard takeoff is distinguished from a "soft takeoff" only by the speed with which said limits are reached.
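One way to make the hard/soft distinction concrete is to ask how fast the returns on cognitive reinvestment compound. The toy simulation below is purely illustrative and not drawn from any of the works cited on this page; the growth exponent a and rate constant c are made-up parameters. It assumes capability I grows at a rate proportional to I^a: superlinear returns (a > 1) produce runaway growth that reaches any fixed ceiling in finite time, while linear or sublinear returns (a ≤ 1) give only exponential or slower growth.

```python
# Toy model of returns on cognitive reinvestment (illustrative sketch only).
# dI/dt = c * I**a, integrated with simple Euler steps.
# a > 1:  growth outpaces exponential and hits any ceiling in finite time
#         (a crude stand-in for a "hard takeoff").
# a <= 1: exponential or slower growth (a crude "soft takeoff").

def simulate(a, c=0.05, i0=1.0, dt=0.01, t_max=100.0, cap=1e12):
    """Integrate dI/dt = c * I**a until time t_max or until I reaches cap."""
    i, t = i0, 0.0
    while t < t_max and i < cap:
        i += c * (i ** a) * dt
        t += dt
    return t, i

if __name__ == "__main__":
    for a in (0.5, 1.0, 1.5):
        t_end, i_end = simulate(a)
        status = "runaway (hit cap)" if i_end >= 1e12 else "no runaway"
        print(f"a = {a}: I = {i_end:.3g} at t = {t_end:.1f}  [{status}]")
```

With these particular parameters the a = 1.5 run blows past the cap shortly after t ≈ 40, while the other two runs stay modest over the whole interval; the point is only the qualitative contrast between compounding regimes, not any quantitative prediction.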

Published arguments

Philosopher David Chalmers published a significant analysis of the Singularity, focusing on intelligence explosions, in the Journal of Consciousness Studies. His analysis defends the likelihood of an intelligence explosion, working carefully through the main premises and arguments for a singularity arising from an intelligence explosion. According to him, the main argument is:

  • 1. There will be AI (before long, absent defeaters).
  • 2. If there is AI, there will be AI+ (soon after, absent defeaters).
  • 3. If there is AI+, there will be AI++ (soon after, absent defeaters).

  ----------

  • 4. There will be AI++ (before too long, absent defeaters).

He also discusses the nature of general intelligence, and possible obstacles to a singularity. A good deal of discussion is given to the dangers of an intelligence explosion, and Chalmers concludes that we must negotiate it very carefully by building the correct values into the initial AIs.
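Viewed purely formally, the argument is a chain of two modus ponens steps; the substantive work of the paper goes into defending the premises and the "absent defeaters" qualifiers. A minimal sketch in Lean, with the claims treated as opaque placeholder propositions and the names invented here for illustration, makes the skeleton explicit:

```lean
-- Minimal sketch of the argument's logical skeleton (Lean 4).
-- `AI`, `AIplus`, `AIplusplus` are opaque placeholder propositions;
-- the "absent defeaters" qualifiers are folded into the propositions themselves.
example (AI AIplus AIplusplus : Prop)
    (premise1 : AI)                    -- 1. There will be AI.
    (premise2 : AI → AIplus)           -- 2. If there is AI, there will be AI+.
    (premise3 : AIplus → AIplusplus)   -- 3. If there is AI+, there will be AI++.
    : AIplusplus :=                    -- 4. Therefore, there will be AI++.
  premise3 (premise2 premise1)
```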

Luke Muehlhauser and Anna Salamon argue in detail in Intelligence Explosion: Evidence and Import that there is a substantial chance of an intelligence explosion within 100 years, and that its outcome would be extremely important in determining the future. They trace the implications of many types of upcoming technologies and point out the feedback loops present in them. This leads them to conclude that an above-human-level AI will almost certainly lead to an intelligence explosion. They close with recommendations for bringing about a safe intelligence explosion.

Hypothetical path

The following is a common example of a possible path for an AI to bring about an intelligence explosion. First, the AI is smart enough to conclude that inventing molecular nanotechnology will be of greatest benefit to it. Its first act of recursive self-improvement is to gain access to other computers over the internet. This extra computational ability increases the depth and breadth of its search processes. It then uses its gained knowledge of materials physics and a distributed computing program to design the first general assembler nanomachine. Next it uses manufacturing technology, accessible via the internet, to build and deploy the nanotech. It programs the nanotech to turn a large section of bedrock into a supercomputer. This is its second act of recursive self-improvement, only possible because of the first. It could then use this enormous computing power to consider hundreds of alternative decision algorithms, better computing structures, and so on. After this, the AI would go from near-human-level intelligence to superintelligence, a dramatic and abrupt increase in capability.

Blog posts

  • Cascades, Cycles, Insight..., ...Recursion, Magic
  • Recursive Self-Improvement, Hard Takeoff, Permitted Possibilities, & Locality

See also

  • Technological singularity, Hard takeoff
  • Existential risk
  • Artificial General Intelligence
  • Lawful intelligence
  • The Hanson-Yudkowsky AI-Foom Debate

External links

  • Intelligence Explosion website, a landing page for introducing the concept
  • Three Major Singularity Schools

References

  • Good, Irving John (1965). "Speculations Concerning the First Ultraintelligent Machine." In Franz L. Alt and Morris Rubinoff (eds.), Advances in Computers 6: 31-88. New York: Academic Press. doi:10.1016/S0065-2458(08)60418-0.
  • Chalmers, David (2010). "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies 17: 7-65.
  • Muehlhauser, Luke; Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import." In Eden, Amnon; Søraker, Johnny; Moor, James H. et al. (eds.), The Singularity Hypothesis: A Scientific and Philosophical Assessment. Berlin: Springer.
Parents: Advanced agent properties