
AGI skepticism involves objections to the possibility of Artificial General Intelligence being developed in the near future. One common argument grants that AGI is possible in principle but holds that there is no reason to expect it soon: despite great strides in narrow AI, researchers are purportedly still no closer to understanding how to build AGI. Distinguished computer scientists such as Gordon Bell and Gordon Moore, as well as cognitive scientists such as Douglas Hofstadter and Steven Pinker, have expressed the opinion that AGI is remote (IEEE Spectrum 2008). Bringsjord et al. (2012) argue outright that belief in AGI being developed in anything less than a century is fideistic: appropriate within the realm of religion, but not within science or engineering.

Some skeptics not only doubt that AGI is near, but also object to any discussion of AGI risk in the first place, on the grounds that such discussion diverts attention from more important issues. Dennett (2012) considers AGI risk an "imprudent pastime" because it distracts attention from a more immediate threat: being enslaved by the internet. Likewise, the philosopher Alfred Nordmann holds that ethical concern is a scarce resource, not to be wasted on unlikely future scenarios (Nordmann 2007, 2009).

Others agree that AGI is still far away and not yet a major concern, but grant that the issue may deserve some attention. An AAAI presidential panel on long-term AI futures concluded that there was overall skepticism about AGI risk, but that additional research into the topic and related subjects would be valuable (Horvitz & Selman 2009).

See Also