Computing Overhang

Edited by pedrochaves, Alex_Altair, Kaj_Sotala, Swimmer963 (Miranda Dixon-Luinenburg), et al. last updated 30th Dec 2024

Computing Overhang is a situation where new algorithms can exploit existing computing power far more efficiently than before. This can happen when the previously used algorithms were suboptimal.

In the context of Artificial General Intelligence, this refers to a situation in which it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could easily be copied to run on countless computers. This could make AGIs far more powerful than before, and present an existential risk.

Examples

In 2010, the President's Council of Advisors on Science and Technology reported that a benchmark production planning model had become faster by a factor of roughly 43 million between 1988 and 2003. Of this improvement, only a factor of roughly 1,000 was due to better hardware, while a factor of about 43,000 came from algorithmic improvements. This is a clear case of new programming methods using the available computing power far more efficiently.
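As a quick sanity check of how those two factors combine (a minimal sketch; the 1,000× and 43,000× values are the report's rounded figures, and the percentage below is simply derived from them):

```python
import math

# Rounded factors from the PCAST (2010) benchmark for a production
# planning model, 1988-2003.
hardware_speedup = 1_000    # attributed to faster hardware
algorithm_speedup = 43_000  # attributed to better algorithms

total_speedup = hardware_speedup * algorithm_speedup
print(f"Combined speedup: ~{total_speedup:,}")  # ~43,000,000

# Fraction of the total improvement (in orders of magnitude)
# contributed by algorithms alone.
share = math.log10(algorithm_speedup) / math.log10(total_speedup)
print(f"Algorithmic share: {share:.0%}")  # roughly 61%
```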

Today, enormous amounts of computing power are available in the form of supercomputers and distributed computing. Large AI projects can grow to fill these resources, either by searching deeper and deeper game trees (as high-powered chess programs do) or by performing large numbers of parallel operations over extensive databases (as IBM's Watson did when playing Jeopardy!). While the extra depth and breadth are helpful, a simple brute-force extension of existing techniques is unlikely to be the optimal use of the available computing resources. The remaining room for improvement lies on the algorithmic side, which is where most current work is focused.
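To see why simply searching deeper is a poor way to absorb extra compute, here is a minimal sketch; the branching factor of roughly 35 is a commonly cited approximation for chess, not a figure from this article:

```python
# Illustrative only: cost of brute-force lookahead in a game tree with a
# fixed branching factor. Each additional ply multiplies the work by the
# branching factor, so large hardware gains buy only a few extra plies.
BRANCHING_FACTOR = 35  # rough average for chess (assumed for illustration)

def nodes_to_depth(depth: int, branching: int = BRANCHING_FACTOR) -> int:
    """Total positions in a complete search tree of the given depth."""
    return sum(branching ** d for d in range(depth + 1))

for depth in (6, 8, 10):
    print(f"depth {depth:2d}: ~{nodes_to_depth(depth):.3e} positions")
```

On these numbers, a thousand-fold increase in hardware speed buys only about two extra plies of lookahead (35² ≈ 1,225), which is why algorithmic improvements matter so much.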

Though estimates place the computing power required for whole brain emulation at least a decade away, it is very unlikely that the algorithms used by the human brain are the most computationally efficient way to produce AI. This is mainly because our brains are the product of natural selection, and were not deliberately designed with computational efficiency, or with ease of being modeled in software, in mind.

As Yudkowsky puts it, human intelligence, created by this "blind" evolutionary process, has only recently developed the capacity for planning and forward thinking: deliberation. Almost all of our other cognitive tools are the product of ancestral selection pressures, and they form the roots of almost all our behavior. By contrast, when we design complex systems, the designer (us) collaborates with the system being constructed. This is a new design signature, and a route to AGI completely different from the process that gave birth to our brains.

References

  • Muehlhauser, Luke; Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import". In Eden, Amnon; Søraker, Johnny; Moor, James H.; et al. (eds.). The Singularity Hypothesis: A Scientific and Philosophical Assessment. Berlin: Springer.

See also

  • Optimization process
  • Optimization
Posts tagged Computing Overhang
  • Taboo "compute overhang" (Zach Stein-Perlman)
  • Are we in an AI overhang? (Andy Jones)
  • Measuring hardware overhang (hippke)
  • Brain-inspired AGI and the "lifetime anchor" (Steven Byrnes)
  • Thoughts on hardware / compute requirements for AGI (Steven Byrnes)
  • A closer look at chess scalings (into the past) (hippke)
  • How Much Computational Power Does It Take to Match the Human Brain? (habryka)
  • Relevant pre-AGI possibilities (Daniel Kokotajlo)
  • Against "argument from overhang risk" (RobertM)
  • GPT-2005: A conversation with ChatGPT (featuring semi-functional Wolfram Alpha plugin!) (Lone Pine)
  • AI overhangs depend on whether algorithms, compute and data are substitutes or complements ([anonymous])
  • Sam Altman's Chip Ambitions Undercut OpenAI's Safety Strategy (garrison)
  • The 0.2 OOMs/year target (Cleo Nardo)
  • Are There Examples of Overhang for Other Technologies? (Jeffrey Heninger)
  • Before smart AI, there will be many mediocre or specialized AIs (Lukas Finnveden)