History of AI Risk Thought

This article contains a summary of the history of AI risk thought, based on Luke Muehlhauser’s AI Risk & Opportunity: A Strategic Analysis. It covers the development and evolution of ideas and concepts regarding AI risk, from the early Industrial Revolution to the present day, and the people involved, from computer scientists to sci-fi writers.

Early history

The first recorded thoughts on the possibility of artificial, machine intelligence becoming a risk to humanity stem from the late Industrial Revolution. In 1863, Samuel Butler suggested in his Darwin Among the Machines that machines could eventually replace humans as the dominant agents on Earth. The idea was picked up by science fiction writers, most notably Karel Čapek with “R.U.R.” (1921) and John W. Campbell with “The Last Evolution” (1932) and “The Machine” (1935). They were followed by Isaac Asimov and his famous “Runaround” (1942), which stated the Three Laws of Robotics and, with them, the first concerns about AI safety and the creation of rules for dealing with AI agents.

Deeply involved in the first steps of AI development, Alan Turing predicted in 1950 that we should expect machines to hold conversations indistinguishable from those of humans. His colleague I.J. Good coined the term intelligence explosion in 1965, when discussing the moment a machine could start creating other, better machines, though Good arguably built on von Neumann’s earlier speculations regarding complexity (1948, 1949). Curiously, despite so many authors embracing the idea of a sudden explosion in the development of AI and recognizing the risks it might bring, it was only in 1970 that Good made an explicit statement of those risks. He hoped that within a decade the matter would have been thoroughly discussed. It was not, as we now know, and in 1982 Good again expressed his views on the possible design of a machine ethics framework.

From the 1980s, concern with AI safety increased, even among critics. Jack Schwartz, for instance, speculated that a new economic, sociological and historical era could prove overwhelming to humanity, a view much in line with Solomonoff's thoughts. Moravec, on the other hand, held that although AI could represent an existential risk, it was probably one society should face in order to solve other threats. This early era also saw the emergence of Marvin Minsky’s worry (1984) that we might find it hard to make AI do what we want, due to our difficulty in expressing our true desires. His ideas resonate closely with the value extrapolation and CEV problem.

Modern days

The modern era of thought on AI risk was brought on mainly...

All these factors have led AI risk to become increasingly mainstream, reaching a wider audience of both scientists - including peer-reviewed journals with special issues on the theme - and the general population. The field has expanded over the last two decades, in both quantity and quality, but still leaves room for unanswered problems and serious difficulties to tackle.

Further Reading & References

See Also