A number of objections have been raised to the possibility of Artificial General Intelligence being developed any time soon. Many of these arguments stem from critics directly comparing AI to human cognition, yet human cognition may have little to do with how AGIs are eventually engineered. Objections range from non-materialist models of the mind, to the supposed lack of progress in artificial intelligence over the last 60 years, to philosophical arguments that set fundamental limits on what digital computers can process.

Since the 1950s there have been several cycles of large investment (from both government and private enterprise) followed by disappointment, caused by unrealistic predictions made by those working in the field. Critics point to these failures when attacking the current generation of AGI researchers. These periods of reduced funding and apparent lack of progress are often referred to as "AI winters".

The philosopher John Searle, in his thought experiment “The Chinese Room”, argues that digital computers cannot possess a “mind” no matter what program they run. He asks you to imagine a computer program that can take part in a conversation in written Chinese by recognizing symbols and responding with suitable “answer” symbols according to a rule book. A human who speaks only English could follow the same instructions: they would still be able to carry out the Chinese conversation, but they would have no understanding of what was being said. Equally, Searle argues, a computer running the same program would not understand the conversation, since it manipulates symbols purely syntactically, without any grasp of their meaning.
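The purely syntactic symbol manipulation Searle has in mind can be illustrated with a minimal sketch; the rule table and phrases below are invented for illustration and stand in for Searle's much larger hypothetical rule book:

```python
# Minimal sketch of Searle's rule-following scenario: responses are
# produced by pure symbol lookup, with no model of meaning anywhere.
# The entries below are a hypothetical, toy-sized "rule book".
RULE_BOOK = {
    "你好": "你好！",            # greeting -> greeting
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(symbols: str) -> str:
    """Return the "answer" symbols the rule book pairs with the input.

    Neither this function nor whoever executes it by hand needs to
    understand Chinese; matching the input string against the table
    is a purely syntactic operation.
    """
    return RULE_BOOK.get(symbols, "？")  # unknown input -> placeholder symbol

if __name__ == "__main__":
    print(chinese_room("你好"))  # prints 你好！ without "knowing" it is a greeting
```

To an outside observer the exchange can look competent, which is exactly Searle's point: behavioural success at symbol manipulation does not by itself establish understanding.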

Stuart Hameroff and Roger Penrose have suggested that consciousness and learning in humans may rely on fundamental quantum phenomena unavailable to digital computers. Although quantum phenomena have been studied in the brain, there is no evidence that their absence would be a barrier to general intelligence.

See Also

[Artificial General Intelligence]