An artificial general intelligence, or AGI, is a machine capable of behaving intelligently over many domains. The term can be taken as a contrast to narrow AI: systems that do things that would be considered intelligent if a human were doing them, but that lack the sort of general, flexible learning ability that would let them tackle entirely new domains. Though modern computers have drastically more ability to calculate than humans, this does not mean that they are generally intelligent, as they have little ability to invent new problem-solving techniques, and their abilities are targeted at narrow domains.
The term "Artificial General Intelligence," introduced by Shane Legg and Mark Gubrud, is often used to refer more specifically to a design paradigm which mixes modules of different types: "neat" and "scruffy", symbolic and subsymbolic. Ben Goertzel is the researcher most commonly associated with this approach, but others, including Peter Voss, are also pursuing it. This design paradigm, though eclectic in adopting various techniques, stands in contrast to other approaches to creating new kinds of artificial general intelligence (in the broader sense), including brain emulation, artificial evolution, Global Brain, and pure "neat" or "scruffy" AI.
Reasons for expecting an AGI's creation in the near future include the continuation of Moore's law, larger datasets for machine learning, progress in the field of neuroscience, increasing population and collaborative tools, and the massive incentives for its creation. A survey of experts on machine intelligence taken at a 2011 Future of Humanity Institute conference found a median estimate of 2050 for the creation of an AGI at 50% confidence, and of 2150 at 90% confidence. A significant minority of the AGI community, however, views the prospects of an intelligence explosion, or of losing control over an AGI, very skeptically.
Directly comparing the performance of AI to human performance is often an instance of anthropomorphism. The internal workings of an AI need not resemble those of a human; an AGI could have a radically different set of capabilities than those we are used to seeing in our fellow humans. A powerful AGI capable of operating across many domains could achieve competency in any domain that exceeds that of any human. On the other hand, today's electronic calculators far exceed the ability of humans to calculate, but this observation in no way suggests that calculators are generally intelligent.
By comparing an AGI's preferences to those of humans, AGIs are classified as Friendly or Unfriendly. An Unfriendly AGI would pose a large existential risk.
Related: AI (the main AI wiki-tag page)
See Also