The extensibility argument for greater-than-human intelligence holds that once we reach a human-level AGI, greater-than-human AGI will also be feasible, by extending the method that produced the human-level AGI. David Chalmers identifies it as one of the main premises of the singularity and intelligence explosion hypothesis [1]. One intuitive ground for the argument is that information technologies have consistently developed towards greater computational capacity. Chalmers divides the argument as follows:

  • (i) If there is AI, AI will be produced by an extendible method.
  • (ii) If AI is produced by an extendible method, we will have the capacity to extend the method (soon after).
  • (iii) Extending the method that produces an AI will yield an AI+.

—————

  • (iv) Absent defeaters, if there is AI, there will (soon after) be AI+. [1]
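
Read as a chain of conditionals, the argument's validity is easy to check. Below is a minimal sketch of that logical skeleton in Lean; the proposition names (AI, ExtMethod, CanExtend, AIPlus) are placeholders introduced here rather than Chalmers's notation, and the sketch compresses the step from having the capacity to extend the method to actually extending it, as well as the "absent defeaters" qualifier:

    -- A toy rendering of the argument's logical skeleton.
    -- All names are illustrative placeholders, not Chalmers's notation.
    axiom AI : Prop         -- there is (human-level) AI
    axiom ExtMethod : Prop  -- AI is produced by an extendible method
    axiom CanExtend : Prop  -- we have the capacity to extend the method
    axiom AIPlus : Prop     -- there is AI+ (greater-than-human AI)

    axiom p1 : AI → ExtMethod         -- premise (i)
    axiom p2 : ExtMethod → CanExtend  -- premise (ii)
    axiom p3 : CanExtend → AIPlus     -- premise (iii), capacity exercised

    -- Conclusion (iv): composing the premises yields AI → AI+.
    theorem conclusion : AI → AIPlus :=
      fun h => p3 (p2 (p1 h))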

He says premises (i) and (ii) follow directly from most definitions of 'extendible method': a method which can be improved so as to yield more intelligent systems. One possible extendible method would be programming an AGI, since all known software seems improvable. One known non-extendible method is biological reproduction: it produces human-level intelligence and nothing more. There could also be methods of achieving greater-than-human intelligence without first creating a human-level AGI, for example through biological cognitive enhancement or genetic engineering. If the resulting greater-than-human intelligence is itself produced by an extendible method, and likewise at each subsequent level, then an intelligence explosion would follow; a toy model of this iteration is sketched below. It could be argued that we are at a ceiling, or at a very-hard-to-surpass local optimum, for intelligence, but there seems to be little to no basis for this claim.
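
As a toy illustration of that iteration (nothing here is drawn from the sources; the growth factor is an arbitrary assumption), consider:

    # Toy model of iterated extensibility: AI -> AI+ -> AI++ -> ...
    # The gain factor is an arbitrary illustrative assumption.
    def extend(intelligence: float, gain: float = 1.25) -> float:
        """One application of an extendible method: returns a strictly
        more intelligent system (gain > 1 encodes 'extendible')."""
        return intelligence * gain

    level = 1.0  # human-level AGI, normalized to 1.0
    for step in range(5):
        level = extend(level)
        print(f"generation {step + 1}: intelligence {level:.2f}")

    # So long as each generation remains extendible (gain stays above 1),
    # the sequence grows without bound; a ceiling or hard local optimum
    # would correspond to the gain collapsing to 1 at some level.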

Luke Muehlhauser and Anna Salamon [2] list several features of an artificial human-level intelligence that suggest it would be easily extendible: increased computational resources, increased communication speed, increased serial depth, duplicability, editability, goal coordination, and improved rationality. They also agree with Omohundro [3][4][5] and Bostrom [6] that most advanced intelligences would have the instrumental goal of increasing their own intelligence, since this would help achieve almost any other goal. This is a strong ground for the extensibility argument for greater-than-human intelligence.


References


  1. Chalmers, David (2010). "The Singularity: A Philosophical Analysis". Journal of Consciousness Studies 17 (9-10): 7-65. http://consc.net/papers/singularity.pdf
  2. Muehlhauser, Luke & Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import". In Singularity Hypotheses. Springer. http://singularity.org/files/IE-EI.pdf
  3. Omohundro, Stephen M. (2007). "The Nature of Self-Improving Artificial Intelligence". Paper presented at Singularity Summit 2007, San Francisco, CA, September 8-9. http://singularity.org/summit2007/overview/abstracts/#omohundro
  4. Omohundro, Stephen M. (2008). "The Basic AI Drives". In Wang, Goertzel, and Franklin 2008, 483-492.
  5. Omohundro, Stephen M. (forthcoming). "Rational Artificial Intelligence for the Greater Good". In Eden, Søraker, Moor, and Steinhart, forthcoming.
  6. Bostrom, Nick (2012). "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents". In Theory and Philosophy of AI, ed. Vincent C. Müller. Special issue, Minds and Machines 22 (2): 71-85. doi:10.1007/s11023-012-9281-3. http://www.nickbostrom.com/superintelligentwill.pdf