AGI Skepticism

AGI skepticism involves objections to the possibility of Artificial General Intelligence being developed in the near future. Skeptics include various technology and science luminaries such as Douglas Hofstadter, Gordon Bell, Steven Pinker, and Gordon Moore:

"It might happen someday, but I think life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries." -- Douglas Hofstadter

A typical argument is that we currently only have narrow AI, and that there is no sign of progress towards general intelligence: despite great strides in narrow AI, researchers are still no closer to understanding how to build AGI (IEEE Spectrum 2008). Some critics have gone as far as to argue that predictions of near-term AGI belong to the realm of religion, not science or engineering; Bringsjord et al. (2012) hold that belief in AGI being developed within any time short of a century is fideistic.

Some skeptics go even further, saying that discussion of AGI risk is a dangerous waste of time that diverts attention from more important issues. Daniel Dennett (2012) considers AGI risk an "imprudent pastime" because it distracts our attention from a more immediate threat: being enslaved by the internet. Likewise, the philosopher Alfred Nordmann holds that ethical concern is a scarce resource, not to be wasted on unlikely future scenarios (Nordmann 2007, 2009).

There are also skeptics who think that the prospect of near-term AGI seems remote, but who don't go so far as to dismiss the issue entirely. An AAAI presidential panel on long-term AI futures concluded that:

"There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems. Nevertheless, there was a shared sense that additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes. Some panelists recommended that more research needs to be done to better define “intelligence explosion,” and also to better formulate different classes of such accelerating intelligences. Technical work would likely lead to enhanced understanding of the likelihood of such phenomena, and the nature, risks, and overall outcomes associated with different conceived variants." (Horvitz & Selman 2009)
