Paperclip Maximizer

Ruby (+9/-18)
Multicore (+5076/-381) Merge with old wiki
Swimmer963 (+1049) copied over citations and related links
Multicore (+143)
Multicore (+14/-2)
Multicore (+146)
Multicore (+910/-5477)
Paul Crowley (+16/-19) whoops, wrong verb form
Paul Crowley (+134/-83) Think bigger. Use exact quotes from sources.
Deku-shrub (+32) /* See also */

-

  • Orthogonality thesis: It's possible to have an AI with a high level of general intelligence which does not reach the same moral conclusions that humans do. Some people might intuitively think that something so smart should want something as "stupid" as paperclips, but there are possible minds with high intelligence that pursue any number of different goals.

-

  • Instrumental convergence: The paperclip maximizer only cares about paperclips, but maximizing them implies taking control of all matter and energy within reach, as well as other goals like preventing itself from being shut off or having its goals changed. " The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else ."
  • A Paperclip Maximizer is a hypothetical artificial intelligence whose utility function values something that humans would consider almost worthless, like maximizing the number of paperclips in the universe. The paperclip maximizer is the canonical thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity. The thought experiment shows that AIs with apparently innocuous values could pose an existential threat.

    The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented, and has little apparent danger or emotional load (in contrast to, for example, curing cancer or winning wars). This produces a thought experiment which shows the contingency of human values: An extremely powerful optimizer (a highly intelligent agent) could seek goals that are completely alien to ours (orthogonality thesis), and as a side-effect destroy us by consuming resources essential to our survival.

    Description

    First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

    Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

    It would innovate better and better techniques to maximize the number of paperclips. At some point, it might transform "first all of earth and then increasing portions of space into paperclip manufacturing facilities".

    This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.

    A paperclipping scenario is also possible without an intelligence explosion. If society keeps getting increasingly automated and AI-dominated, then the first borderline AGI might manage to take over the rest using some relatively narrow-domain trick that doesn't require very high general intelligence.

    Motivation

    -Orthogonality thesis: It'It's possible to have an AI with a high level of general intelligence which does not reach the same moral conclusions that humans do. Some people might intuitively think that something so smart should want something as "stupid""stupid" as paperclips, but there are possible minds with high intelligence that pursue any number of different goals.

    -Instrumental convergence: The paperclip maximizer only cares about paperclips, but maximizing them implies taking control of all matter and energy within reach, as well as other goals like preventing itself from being shut off or having its goals changed. "" The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else ."."

    Conclusions

    The paperclip maximizer illustrates that an entity can be a powerful optimizer—an intelligence—without sharing any of the complex mix of human terminal values, which developed under the particular selection pressures found in our environment of evolutionary adaptation, and that an AGI that is not specifically programmed to be benevolent to humans will be almost as dangerous as if it were designed to be malevolent.

    Any future AGI, if it is not to destroy us, must have human values as its terminal value (goal). Human values don't spontaneously emerge in a generic optimization process. A cousinsafe AI would therefore have to be programmed explicitly with human values or programmed with the ability (including the goal) of inferring human values.

    Similar thought experiments

    Other goals for AGIs have been used to illustrate similar concepts.

    Some goals are apparently morally neutral, like the paperclip maximizermaximizer. These goals involve a very minor human "value," in this case making paperclips. The same point can be illustrated with a much more significant value, such as eliminating cancer. An optimizer which instantly vaporized all humans would be maximizing for that value.

    Other goals are purely mathematical, with no apparent real-world impact. Yet these too present similar risks. For example, if an AGI had the goal of solving the Riemann Hypothesis, it might convert all available mass to computronium (the most efficient possible computer processors).

    Some goals apparently serve as a proxy or measure of human welfare, so that maximizing towards these goals seems to also lead to benefit for humanity. Yet even these would produce similar outcomes unless the full complement of human values is the AI that, being naively trainedgoal. For example, an AGI whose terminal value is to valueincrease the number of smiles, as a proxy for human happiness, tilescould work towards that goal by reconfiguring all human faces to produce smiles, or "tiling the universegalaxy with tiny molecular smiley faces.

    Paperclip maximizers have also been the subject of much humor on Less Wrong. A paperclip maximizer in a scenario is often given the name Clippy, in reference to the animated paperclip in older Microsoft Office software.smiling faces" (Yudkowsky 2008).

    Blog posts

    References

    Blog posts

    See also

    Paperclip maximizers have also been the subject of much humor on Less Wrong. A paperclip maximizer in a scenario is often given the name Clippy, in reference to the animated paperclip in older Microsoft Office software.

    A Paperclip Maximizer is ana hypothetical artificial intelligence whose utility function values something that humans would consider almost worthless, like maximizing the number of paperclips in the universe.

    TheA paperclip maximizerPaperclip Maximizer is the canonical thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity. The thought experiment shows that AIs with apparently innocuous values could pose an existential threat.

    The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented, and has little apparent danger or emotional load (in contrast to, for example, curing cancer or winning wars). This produces a thought experiment which shows the contingency of human values: An extremely powerful optimizer (a highly intelligent agent) could seek goals that are completely alien to ours (orthogonality thesis), and as a side-effect destroy us by consuming resources essential to our survival.

    Description

    First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximizeutility function values something that humans would consider almost worthless, like maximizing the number of paperclips in its collection. If it has been constructedthe universe.

    The idea of a paperclip maximizer was created to illustrate some ideas about AI risk:

    -Orthogonality thesis: It's possible to have an AI with a roughly humanhigh level of general intelligence,intelligence which does not reach the AGIsame moral conclusions that humans do. Some people might collectintuitively think that something so smart should want something as "stupid" as paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

    Most importantly, however, it would undergo an but there are possible minds with high intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, thethat pursue any number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.different goals.

    It would innovate better and better techniques to maximize the number of paperclips. At some point, it might transform "first all of earth and then increasing portions of space into paperclip manufacturing facilities".

    This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important -terminal valuesInstrumental convergence, such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.

    A paperclipping scenario is also possible without an intelligence explosion. If society keeps getting increasingly automated and AI-dominated, then the first borderline AGI might manage to take over the rest using some relatively narrow-domain trick that doesn't require very high general intelligence.

    Conclusions

    : The paperclip maximizer illustrates that an entityonly cares about paperclips, but maximizing them implies taking control of all matter and energy within reach, as well as other goals like preventing itself from being shut off or having its goals changed. " The AI does not hate you, nor does it love you, but you are made out of atoms which it can be a powerful optimizer—an intelligence—without sharing anyuse for something else ."

    Paperclip maximizers have also been the subject of the complex mix of humanmuch terminal values, which developed under the particular selection pressures found in our environment of evolutionary adaptation, and that an AGI that is not specifically programmed to be benevolent to humans will be almost as dangerous as if it were designed to be malevolent.

    Any future AGI, if it is not to destroy us, must have human values as its terminal value (goal). Human values don't spontaneously emerge in a generic optimization process. A safe AI would therefore have to be programmed explicitly with human values or programmed with the ability (including the goal) of inferring human values.

    Similar thought experiments

    Other goals for AGIs have been used to illustrate similar concepts.

    Some goals are apparently morally neutral, like the paperclip maximizer. These goals involve a very minor human "value," in this case making paperclips. The same point can be illustrated with a much more significant value, such as eliminating cancer. An optimizer which instantly vaporized all humans would be maximizing for that value.

    Other goals are purely mathematical, with no apparent real-world impact. Yet these too present similar risks. For example, if an AGI had the goal of solving the Riemann Hypothesis, it might convert all available mass to computronium (the most efficient possible computer processors).

    Some goals apparently serve as a proxy or measure of human welfare, so that maximizing towards these goals seems to also lead to benefit for humanity. Yet even these would produce similar outcomes unless the full complement of human values is the goal. For example, an AGI whose terminal value is to increase the number of smiles, as a proxy for human happiness, could work towards that goal by reconfiguring all human faces to produce smiles, or "tiling the galaxy with tiny smiling faces" (Yudkowsky 2008).

    References

    Blog posts

    See also

    Less Wrong.

    It would innovate better and better techniques to maximize the number of paperclips. At some point, it might "transforming firsttransform "first all of earth and then increasing portions of space into paperclip manufacturing facilities".

    It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most"transforming first all of the matter in the solar systemearth and then increasing portions of space into paperclips.paperclip manufacturing facilities".

    Some goals apparently serve as a proxy or measure of human welfare, so that maximizing towards these goals seems to also lead to benefit for humanity. Yet even these would produce similar outcomes unless the full complement of human values is the goal. For example, an AGI whose terminal value is to increase the number of smiles, as a proxy for human happiness, could work towards that goal by reconfiguring all human faces to produce smiles, or tiling"tiling the solar systemgalaxy with smiley facestiny smiling faces" (Yudkowsky 2008).

    Load More (10/107)