A possible scenario is that the first confirmed AGI(s) are completely unimpressive, i.e. with capacities equivalent to or less than those of an average 10-year-old child, and that tremendous effort is then put into ’growing’ their capacities, with the best results yielding growth only along a sub-exponential curve, probably with some plateaus as well, so that any AGI may take generations to become truly superhuman.

I’m asking this as I haven’t come across any serious prior discussion other than this: https://www.lesswrong.com/posts/77xLbXs6vYQuhT8hq/why-ai-may-not-foom, though admittedly my search was pretty brief.

Is there any serious expectation for this kind of scenario? 

Logan Zoellner

Here are some plausible ways we could be trapped at a "sub-adult-human" AGI:

  1. There is no such thing as "general intelligence".  For example, profoundly autistic humans have the same size brains as normal human beings, but their ability to navigate the world we live in is limited by their weaker social skills.  Similarly, even an AI with many super-human skills could still fail to influence our world.
  2. Artificial intelligence is possible, but it is extremely expensive.  Perhaps the first AGI requires an entire power plant's worth of electricity to run.  Biological systems are much more efficient than manmade ones.  If Moore's law "stops", we may be trapped in a future where only sub-human AI is affordable enough to be practical.
  3. Legal barriers.  Just as you are not legally allowed to carry a machine gun wherever you please, AI may be regulated such that human-level AI is only allowed under very controlled circumstances.  Nuclear power is a classic example of an industry where innovation stopped because of regulation.
  4. Status Quo Bias.  Future humans may simply not care as much about building AGI as present humans do.  Modern humans could undoubtedly build pyramids much taller than those in Egypt, but we don't, because we aren't all that interested in pyramid-building.
  5. Catastrophe.  Near-human AGI may trigger a catastrophe that prevents further progress.  For example, the perception that "the first nation to build AGI will rule the world" may lead to an arms race that ends in catastrophic world war.
  6. Unknown Unknowns.  Predictions are hard, especially about the future.

#5 is an interesting survival possibility...

#1 resonates with me somehow. Perhaps because I’ve witnessed a few people in real life (profoundly autistic, disturbed, or on drugs) who spoke somewhat like an informal spoken variant of GPT-3, or is it the other way around?

JBlack

I don't think anyone really expects this sort of scenario, but it does make for some nice safe science fiction stories where humans get to play a meaningful role in the outcome of the plot.

Personally I think there are a few pretty major things working against it.

It seems likely that if we can get to chimpanzee-equivalent capability at all (about the minimum I'd call AGI), scaling up by a factor of 10 with only relatively few architectural tweaks will give something at least as generally capable as a human brain. Human brains are only about 4x the size of, and not apparently very much more complex per unit mass than, those of the other great apes. Whatever the differences are, they developed in something like 1/1000th of the total history of our species. We're far too homogeneous in ability (on an inter-species absolute intelligence scale), and our evolution away from the other apes too recent, for our brains to be fundamentally more complex than theirs. If the apes had stagnated in intelligence for a billion years before making some intelligence breakthrough to us in a much shorter time, I might have a different opinion. The evidence seems to point toward a change in neuron scaling in primates that meant a cheaper increase in neuron counts and not much else. As soon as this lowered the marginal cost of supporting more neurons below the benefits of having more neurons, our ancestors fairly steadily increased in both brain size and intelligence from there to here, or at least as steadily as evolution ever gets.

If there are fundamental barriers, then I expect them to be at least as far above us as we are above chimpanzees, because there are no signs that we're anywhere near as good as it gets. We're most likely the stupidest possible species capable of nontrivial technology. If we weren't, then we'd most likely have found evidence of earlier, stupider species on Earth that did it before us.

While I am not certain, I suspect that even otherwise chimpanzee-equivalent AGIs enhanced with the narrow superhuman capabilities we have already built today might be able to outsmart us, even while being behind us in some ways. While humans too can make use of narrow superhuman AI capabilities, we still have to use them as external tools limited by external interfaces, rather than having them integrated into our minds from birth and as automatic to us as focusing our eyes. There is every reason to expect that the relative gains for the AGIs would be very much greater.

Even if none of those are true, and general intelligence stops at 10-year-old human capability, and they can't directly use our existing superhuman tools better than we can, I wouldn't bet heavily against the possibility that merely scaling up speed 100 times - studying and planning for up to a subjective century each year - could let them work out how to get through the barrier to better capabilities in a decade or two. Similarly, if they could learn in concert with others, all of them benefiting in some way directly rather than via comparatively slow linear language. There may be many other ways in which we are biologically limited but don't think of those limits as important, because everything else we've ever known shares them. Some AGIs might not share those limits, and could work around their deficiencies in some respects by using capabilities that nothing else on Earth has.

Thanks JBlack, those are some convincing points, especially that even a chimpanzee-level intelligence directly interfaced to present-day supercomputers would likely yield tangible performance greater than any human's in many ways. Though perhaps the danger is lessened if, for the first few decades, the energy and space requirements are at a minimum equal to those of a present-day supercomputing facility. It’s a big and obvious enough ’bright line’, so to speak.

Dagon

IMO, the best argument that it won't be exponential (for very long) is that almost nothing is.  Many things that appear exponential are actually sigmoid, and even things that ARE exponential for a time hit a limit and either plateau or collapse.
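
A minimal numerical sketch of that point, with purely illustrative parameters (not a model of AI progress): in its early phase, a logistic (sigmoid) curve is practically indistinguishable from a pure exponential with the same growth rate, so early data alone can't tell you which curve you're on.

```python
import math

# Illustrative parameters only: growth rate k, midpoint t0, ceiling L.
k, t0, L = 0.5, 20.0, 1000.0

for t in range(0, 31, 5):
    exponential = L * math.exp(k * (t - t0))         # unbounded growth
    logistic = L / (1.0 + math.exp(-k * (t - t0)))   # same early growth rate, but saturates at L
    # The ratio stays close to 1 well before the midpoint t0, then the
    # logistic flattens out while the exponential keeps climbing.
    print(f"t={t:2d}  exponential={exponential:12.2f}  logistic={logistic:8.2f}  ratio={logistic / exponential:.3f}")
```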

The question isn't "is it exponential forever?", but "is it superlinear for long enough to foom?".  I don't think I've heard compelling data on either side of that question.

I don't think "exponential" vs "superlinear" or even "sublinear" matters much. Those are all terms for asymptotic behaviour, in the far future, and all the problems are in the relatively short term after the first AGI.

For FOOM purposes, how long it takes to get from usefully human level capabilities to as far above us as we are above chimpanzees (let's call it 300 IQ for short, despite the technical absurdity of the label) is possibly the most relevant timescale.
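
As an aside on why that label is absurd: IQ is normed to a normal distribution with mean 100 and standard deviation 15, so "300" sits so far out in the tail that it no longer describes any human population. A quick back-of-the-envelope check (the ~1e11 figure for humans ever born is a rough standard estimate):

```python
import math

# IQ is normed to mean 100, standard deviation 15.
mean, sd = 100.0, 15.0
iq = 300.0

z = (iq - mean) / sd                    # about 13.3 standard deviations above the mean
p = 0.5 * math.erfc(z / math.sqrt(2))   # probability a random human scores this high

print(f"z-score: {z:.1f}")
print(f"tail probability: {p:.1e}")     # on the order of 1e-40
print(f"expected such people among ~1e11 humans ever born: {p * 1e11:.1e}")
```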

Could a few hundred humans wipe out a world full of chimpanzees in the long term? I'm pretty su...

So it seems that even ‘fooming’ would be a coin toss as it stands?

delton137

It's hard to imagine a "general intelligence" getting stuck at the level of a 10-year-old child in all areas -- certainly it will have an ability to interface with hardware that allows it to perform rapid calculations or run other super-human algorithms.

But there are some arguments suggesting that intelligence scaling at an exponential rate can't go on indefinitely, and that in fact the limits to exponential growth ("foom") may be hit very soon after AGI is developed, so that foom is essentially impossible. For instance, see this article by Francois Chollet:
https://medium.com/@francois.chollet/the-impossibility-of-intelligenceexplosion-5be4a9eda6ec

He makes a number of interesting points. For instance, he notes the slow development of science despite exponentially more resources going into it. He also notes that science and other areas of human endeavor do involve recursive self-improvement, yet they seem to be growing linearly, not exponentially.

Another point is that some (e.g. chaotic) physical systems are simply impossible to predict over time scales of days or longer, even for a superintelligent AI with vast computational resources. So there are some limitations there, at least.
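
As a toy numerical illustration of that chaos point (the numbers here are purely illustrative and have nothing to do with AI itself): the classic logistic map is fully chaotic at r = 4, so two starting points differing by 1e-10, far below any realistic measurement error, soon produce unrelated trajectories.

```python
# Logistic map x -> r * x * (1 - x), fully chaotic at r = 4.
def logistic_map(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.3, 50)
b = logistic_map(0.3 + 1e-10, 50)   # perturbation far below measurement precision

for n in range(0, 51, 10):
    # The gap roughly doubles each step on average, so after a few dozen
    # steps the forecast is worthless no matter how much compute runs it.
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.3e}")
```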

The other related reference I would recommend is this interview with Robin Hanson: https://aiimpacts.org/conversation-with-robin-hanson/

Thanks for the links. It may be that the development of science, and of technical endeavours in general, follows a pattern of punctuated equilibrium: sub-linear growth, or even regression, for the vast majority of the time, interspersed with brief periods of tremendous change.

Jon Garcia

I think that the mere development of an AGI with 10-year-old-human intelligence (or even infant-level) would first require stumbling across crucial generalizable principles of how intelligence works. In other words, by this time, there would have to be a working theory of intelligence that could probably be scaled up pretty straightforwardly. Then the only limit to an intelligence explosion would be limitations in hardware or energy resources (this may be more of a limitation while the theory of intelligence is still in its infancy; future designs might be more resource-efficient). I would expect economic pressure and international politics to create the perfect storm of unaligned incentives such that once a general theory is found, even if it is resource-intensive, you will see exponential growth (actually sigmoidal as [temporary] hard limits are approached) in the intelligence of the biggest AGI systems.

You might find a rate-limiting step in the time it takes to train an AGI system, though. This would extend the window of opportunity for making any course corrections before superintelligence is reached. However, once it's trained, it might be easy to make a bunch of copies and combine them into a collective superintelligence, even if training a singleton ASI from scratch would take a much longer time on its own. Let's hope that a working theory of general alignment comes no later than a working theory of general intelligence.

Thanks for the in-depth answer. The engineer side of me gets leery whenever ‘straightforward real-world scaling following a working theory’ is a premise; the likelihood of there being no significant technical obstacles at all, other than resources and energy, seems vanishingly low. A thousand and one factors could impede the realization of even the most perfect theory, much as with other complex engineered systems: possible surprises such as some dependence on the substrate, on the specific arrangement of hardware, on other emergent factors, on software factors, and so on.

Lone Pine
If there is a general theory of intelligence and it scales well, there are two possibilities. Either we are already in a hardware overhang, and we get an intelligence explosion even without recursive self improvement. Or the compute required is so great that it takes an expensive supercomputer to run, in which case it’ll be a slow takeoff. The probability that we have exactly human intelligence levels of compute seems low to me. Probably we either have way too much or way too little.

James_Miller

As discussed on this podcast I did with Robin Hanson, the UFOs seen by the US Navy might be aliens. If this is true, the aliens would seem to have a preference for keeping the universe in its natural state, and so probably wouldn't let us create a paperclip maximizer. These aliens might stop us from developing AI that is too powerful.

Although every sentence here is technically correct, I still feel I should share a link to a nice video explaining how you get the Navy observations without any aliens being involved.

James_Miller
Sam Altman seems to take UFOs seriously. See 17:14 of this talk.
Charlie Steiner
Okay? Is your implied point that Sam Altman, or Tyler Cowen, is such an epistemic authority figure that I too should take UFOs seriously?
James_Miller
Yes. I don't know you, so please don't read this as an insult. But if Sam Altman and Tyler Cowen take an idea seriously, don't you have to as well? Remember that disagreement is disrespect, so your saying that UFOs should not be taken seriously amounts to saying that you have a better reasoning process than either of those two men.
Charlie Steiner
I don't take it as an insult, I just think it's a wrong line of reasoning. I'm a pretty smart guy myself, but I'm sure I'm wrong about things too! (Even though for any specific thing I believe, I of course think I'm probably right.) If someone else is right about something and I'm wrong, I don't want them to deferentially "take me seriously." I want them to be right and show me why I'm wrong - though not to the extent of spamming my email inbox. Makes me want to go re-read Guided By The Beauty Of Our Weapons.
James_Miller
Normally this is a good approach, but a problem with the UFOs-are-aliens theory is that there is a massive amount of evidence (much of it undoubtedly crap), the most important of which is likely classified top secret, so you have to put a lot of weight on what other people (especially those with direct access to people who hold top-secret security clearances) say they believe.
Charlie Steiner
The best photos of my house from space are also classified, yet I don't live in total ignorance about what my house looks like from above. If the classified photos of my house did show something surprising (maybe I've got some graffiti that I've never noticed), I would update my beliefs, but because I have pretty good evidence about the top of my house already, I don't need to wait around for the best possible photos, and in fact I can give pretty good predictions of what those photos will look like. Suppose my house is in some declassified Navy photos and there's a bird over my roof. If someone claims that my roof has a picture of a bird on it, this is very bad evidence for my house actually having a picture of a bird on it, and very good evidence for this person having made a mistake.

Thanks, that does seem to be a possible motive for constant observation, and interference, if such aliens were to exist.