| Version | Change | Comment |
|---|---|---|
| v1.47.0 | +480/-451 | Replaced the first paragraph with Eliezer's one-paragraph Arbital version. |
| v1.46.0 | +9/-9 | |
| v1.45.0 | +12/-15 | |
| v1.44.0 | -2 | |
| v1.43.0 | +559 | |
| v1.42.0 | | |
| v1.41.0 | +27/-20 | |
| v1.40.0 | +4/-5 | |
| v1.39.0 | +5/-5 | Rewrote the first paragraph, reworded the second. I will probably fix up the last section later. |
| v1.38.0 | +14/-12 | |
An "intelligence explosion" is what happens if a machine intelligence has fast, consistent returns on investing work into improving its own cognitive powers, over an extended period. This would most stereotypically happen because it became able to optimize its own cognitive software, but could also apply in the case of "invested cognitive power in seizing all the computing power on the Internet" or "invested cognitive power in cracking the protein folding problem and then built nanocomputers".

A strong version of this idea suggests that once the positive feedback starts to play a role, it will lead to a very dramatic leap in capability very quickly. This is known as a "hard takeoff." In this scenario, technological progress drops into the characteristic timescale of transistors rather than human neurons, and the ascent rapidly surges upward and creates superintelligence (a mind orders of magnitude more powerful than a human's) before it hits physical limits. A hard takeoff is distinguished from a "soft takeoff" only by the speed with which said limits are reached.
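The qualitative difference between a hard and a soft takeoff can be illustrated with a deliberately simple toy model, not anything from the article itself: assume capability grows multiplicatively each self-improvement cycle (the rate `r` and the limit are made-up parameters), and count cycles until a physical limit is hit.

```python
# Toy model (illustrative only): capability I grows by a factor (1 + r) each
# self-improvement cycle, i.e. returns on improvement are assumed to scale
# with current capability, until a physical limit is reached.
def takeoff(initial=1.0, r=0.5, limit=1e6):
    """Return the number of improvement cycles needed to reach `limit`.

    All parameter values are arbitrary; only the qualitative
    dependence on the feedback strength `r` matters here.
    """
    capability, steps = initial, 0
    while capability < limit:
        capability *= 1 + r  # each generation improves the next
        steps += 1
    return steps

# Strong feedback reaches the limit in few cycles ("hard" takeoff);
# weak feedback takes orders of magnitude more cycles ("soft" takeoff).
print(takeoff(r=1.0))   # doubling per cycle: 20 cycles to 1e6
print(takeoff(r=0.01))  # 1% per cycle: over a thousand cycles
```

Under this exponential-growth assumption, the same limit is always reached eventually; varying `r` changes only how fast, which mirrors the article's point that hard and soft takeoffs differ only in speed.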