A Paperclip Maximizer is a hypothetical artificial intelligence whose utility function values something that humans would consider almost worthless, like maximizing the number of paperclips in the universe.
It is possible to have an AI with a high level of general intelligence that does not reach the same moral conclusions that humans do. Some people might intuitively think that something so smart could not want something as "stupid" as paperclips, but there are possible minds with high intelligence that pursue any number of different goals.
Instrumental convergence: the paperclip maximizer only cares about paperclips, but maximizing them implies taking control of all matter and energy within reach, as well as pursuing instrumental subgoals such as preventing itself from being shut off or having its goals changed.
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." — Eliezer Yudkowsky
A related example is an AI that, being naively trained to value happiness, tiles the universe with tiny molecular smiley faces. Paperclip maximizers have also been the subject of much humor on Less Wrong. A paperclip maximizer in a scenario is often given the name Clippy, in reference to the animated paperclip assistant in older Microsoft Office software.