The extensibility argument for greater-than-human intelligence holds that once we reach human-level AGI, greater-than-human AGI becomes feasible by extending whatever method produced it. David Chalmers identifies it as one of the main premises behind the singularity and intelligence explosion hypothesis [1]. One intuitive ground for this argument is that information technologies have consistently shown continuous development toward greater computational capacity. Chalmers divides the argument as follows:

(i) If there is AI, it will be created by an extendible method.
(ii) If AI is created by an extendible method, we will have the capacity to extend the method (soon after).
(iii) If we have the capacity to extend the method, extending it will yield greater-than-human AI (AI+).
Therefore: if there is AI, there will be AI+ (soon after).
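The logical structure of the argument can be sketched as a chain of implications. This is a paraphrase, not Chalmers's own notation: here $A$ stands for "human-level AI exists", $E$ for "AI is created by an extendible method", $C$ for "we have the capacity to extend the method", and $A^{+}$ for "greater-than-human AI exists".

```latex
\begin{align*}
&\text{(i)}   &&A \rightarrow E \\
&\text{(ii)}  &&E \rightarrow C \\
&\text{(iii)} &&C \rightarrow A^{+} \\
&\therefore   &&A \rightarrow A^{+} && \text{(chained implication)}
\end{align*}
```

Each premise hands its consequent to the next premise's antecedent, so the conclusion follows by repeated application of hypothetical syllogism; disputing the conclusion therefore requires disputing at least one of the three premises.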
He says premises (i) and (ii) follow directly from most definitions of an 'extendible method': a method that enables improvements yielding more intelligent systems. One possible extendible method would be programming an AGI, since all known software seems improvable. One known non-extendible method is biological reproduction: it produces human-level intelligence and nothing more. There could also be methods of achieving greater-than-human intelligence without creating human-level AGI, for example through biological cognitive enhancement or genetic engineering. If the resulting greater-than-human intelligence is also extendible, and so are the subsequent levels, then an intelligence explosion would follow. It could be argued that we are at a ceiling, or at a very hard-to-surpass local optimum, for intelligence, but there seems to be little to no basis for this claim.
Luke Muehlhauser and Anna Salamon [3] list several features of an artificial human-level intelligence that suggest it would be easily extendible: increased computational resources, increased communication speed, increased serial depth, duplicability, editability, goal coordination, and improved rationality. They also agree with Omohundro [4, 5, 6] and Bostrom [7] that most advanced intelligences would have the instrumental goal of increasing their own intelligence, since this would help achieve almost any other goal. This is a strong ground for the extensibility argument for greater-than-human intelligence.