LESSWRONG
LW

Wikitags

Unfriendly Artificial Intelligence

Edited by Vladimir_Nesov, Swimmer963 (Miranda Dixon-Luinenburg), et al. last updated 22nd Sep 2020

An Unfriendly artificial intelligence (or UFAI) is an capable of causing to humanity, and having goals that make it useful for the AI to do so. The AI's goals don't need to be antagonistic to humanity's goals for it to be Unfriendly; there are to expect that almost any powerful AGI not explicitly programmed to be benevolent to humans is lethal. A is often imagined as an illustrative example of an unFriendly AI indifferent to humanity. An AGI specifically designed to have a positive effect on humanity is called a Friendly AI.

See also

  • ,
  • Friendly AI

References

  • Eliezer S. Yudkowsky (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". Global Catastrophic Risks. Oxford University Press. (PDF)
  • Stephen M. Omohundro (2008). "The Basic AI Drives". Frontiers in Artificial Intelligence and Applications (IOS Press). (PDF)
Really powerful optimization process
great harm
Existential risk
artificial general intelligence
paperclip maximizer
Paperclip maximizer
Mind design space
magical categories
Discussion1
Discussion1
strong reasons
Basic AI drives