Computing Overhang

Edited by pedrochaves, Alex_Altair, Kaj_Sotala, et al. last updated 30th Dec 2024

Computing overhang refers to a situation where new algorithms can exploit existing computing power far more efficiently than before. This can happen when previously used algorithms were suboptimal, leaving spare capacity that improved algorithms can suddenly put to use.

In the context of Artificial General Intelligence, this signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an existential risk.

Examples

In 2010, the President's Council of Advisors on Science and Technology reported that a benchmark production planning model had become faster by a factor of 43 million between 1988 and 2003. Of this improvement, only a factor of roughly 1,000 was due to better hardware, while a factor of about 43,000 came from algorithmic improvements. This clearly reflects a situation where new programming methods were able to use available computing power more efficiently.
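The decomposition above can be checked with simple arithmetic: the hardware and algorithmic factors multiply to give the combined speedup. A minimal sketch (the rounded factors are taken from the report as cited above):

```python
# Decomposition of the ~43-million-fold speedup reported by PCAST (1988-2003)
# for a benchmark production planning model. Factors are rounded.
hardware_factor = 1_000      # speedup attributable to faster hardware
algorithm_factor = 43_000    # speedup attributable to better algorithms

total_factor = hardware_factor * algorithm_factor
print(f"Combined speedup: ~{total_factor:,}")  # Combined speedup: ~43,000,000
```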

Today, enormous amounts of computing power are available in the form of supercomputers and distributed computing. Large AI projects can grow to fill these resources, either by searching deeper and deeper trees, as high-powered chess programs do, or by performing large amounts of parallel operations over extensive databases, as IBM's Watson did when playing Jeopardy. While the extra depth and breadth are helpful, a simple brute-force extension of existing techniques is unlikely to be the optimal use of the available computing resources. This leaves room for improvement on the algorithmic side, where most current work is focused.
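The diminishing returns of brute-force search can be made concrete. In a game tree with branching factor b, searching to depth d visits roughly b**d nodes, so multiplying compute by some factor k buys only log base b of k extra plies. A minimal illustrative sketch (the branching factor of 35 for chess is a commonly cited rough estimate, not a figure from this article):

```python
import math

def extra_depth(branching_factor: float, compute_multiplier: float) -> float:
    """Additional plies of brute-force search afforded by a
    `compute_multiplier`-fold increase in compute, assuming the
    number of nodes visited grows as branching_factor ** depth."""
    return math.log(compute_multiplier, branching_factor)

# Chess has an effective branching factor of roughly 35; even a
# 1,000x increase in compute buys fewer than 2 extra plies.
print(f"{extra_depth(35, 1_000):.2f}")  # 1.94
```

This is why algorithmic improvements (better pruning, better evaluation) tend to dominate raw hardware scaling in search-based systems.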

Though estimates place the computing power needed for whole brain emulation at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient way to produce AI. This is mainly because evolution had no insight and no deliberate plan in creating the human mind; our intelligence did not develop with the goal of eventually being modeled in software. As Yudkowsky puts it, human intelligence, created by evolution, carries that design signature, and it is poorly adapted to deliberate redesign. By contrast, when designing complex systems in which the designer (ourselves) collaborates with the system being constructed, we are faced with a different design signature and a different route to AGI.

See also

  • Optimization process
  • Optimization