Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Epistemic status: free speculation

How intelligent should we expect AI to be, compared to humans, and how quickly should we expect it to reach this level of intelligence? These are of course important strategic questions, which have a large impact on our AI threat models. At the moment, the most common answers to these questions are "much more intelligent", and "very quickly". In this post, I will describe an arguably realistic scenario in which this would not be the case.

First of all, why should we expect AI to end up much more intelligent than humans? A common argument goes roughly like this: humans are much, much smarter than animals. We can build computers, and send things to space, but they can't. This proves that the range of possible intelligence is very wide. Moreover, it would be prima facie very surprising if human intelligence is at the top of this range. Therefore, we should expect it to be possible for AI systems to get much smarter than humans. Moreover, there is no reason to think that AI progress would slow down around human intelligence in particular. Therefore, we should expect AI intelligence to quickly far outstrip our intelligence.

This argument relies very crucially on the assumption that humans are much smarter than animals. But is this actually true? I'm not entirely convinced. First of all, there have been "feral" humans that grew up surrounded by animals. As far as I know, these humans are not obviously much more intelligent than animals (in terms of their ability to solve problems). This already casts some doubt on the notion that humans are much, much smarter than animals.

It is important to remember that humans, unlike all other species, are able to use complex language. This is a huge confounding factor, when we try to compare the intelligence of humans and animals. It is obviously very powerful to be able to exchange complex ideas, and build up knowledge intergenerationally. This would probably be enough to give humans a very large advantage, even if our intelligence was otherwise exactly the same as that of other primates.

Therefore, consider the following hypothesis: humans have an innate ability to use complex, combinatorial language, but all other species lack this ability (in the way Noam Chomsky thinks). In addition to this, humans are also somewhat (but not hugely) more intelligent than other primates (say, 1.5x as intelligent).

As far as I can tell, this hypothesis roughly fits all our observations (?). However, if it is the case that the difference between humans and monkeys is mostly due to a one-shot discrete difference (ie language), then this cannot necessarily be repeated to get a similar gain in intelligence a second time. If that is the case, then we should perhaps expect AI to still end up quite a bit more intelligent than humans, but perhaps not to an incomprehensible extent (ie, we end up with AI geniuses, but not AI gods).

There are obviously a lot of ways that this conclusion could be wrong, and a lot of counter-arguments one could offer (eg, in most board games, AI has quickly gone from below human performance, to far above human performance, yadda yadda). However, I don't know of any knock-down arguments, and so I put some weight on something like this being true.


19 comments

My stock counterargument to this: insofar as humans' advantage over other animals stems primarily from our ability to transmit knowledge/memes/etc across individuals and through generations, we should expect AI to have a much larger advantage, because they can do the same thing far, far better. This doesn't even require the AI to be all that "smart" - even just the ability to copy minds directly would allow transmission from "parent" to "child" with far less knowledge-loss than humans can achieve. (Consider, for instance, the notorious difficulty of training a student to the same level as an unusually accomplished human - and then imagine how much more effective it would be to just copy the accomplished human.) And that's just copying. The ability to merge two minds does not necessarily come for free with any AI (as copying does), but will likely be far easier for AI than for humans, and would provide a much more extreme version of what humans achieve by passing information via language. (Consider, for instance, the notorious difficulty of communicating well and avoiding misunderstanding - and then imagine how much easier that would be if two minds or components-of-minds could merge.)

What these arguments suggest is that humans' language capabilities are themselves relatively primitive ways to transmit knowledge/memes/etc between individuals, and it is not in-principle hard to do better once one starts to optimize directly for that goal without the constraints of biological wetware. More generally, we should expect that whatever did give humans such a large advantage (be it language or something else), humans have a relatively primitive version of it, because otherwise humans would have taken off sooner in our evolutionary history. As the saying goes, "humans are as stupid as they can be while still taking over the world, because otherwise they would have done so earlier".

Yes, I agree with this. I mean, even if we assume that the AIs are basically equivalent to human simulations, they still get obvious advantages from the ability to be copy-pasted, the ability to be restored to a checkpoint, the ability to be run at higher clock speeds, and the ability to make credible pre-commitments, etc etc. I therefore certainly don't think there is any plausible scenario in which unchecked AI systems wouldn't end up with most of the power on earth. However, there is a meaningful difference between the scenario where their advantages mainly come from overwhelmingly great intelligence, and the scenario where their advantages mainly (or at least in large part) come from other sources. For example, scalable oversight is a more realistic possibility in the latter scenario than it is in the former scenario. Boxing methods are also more realistic in the latter scenario than the former scenario, etc.

I think this scenario is still strategically isomorphic to "advantages mainly come from overwhelmingly great intelligence". It's intelligence at the level of a collective, rather than the individual level, but the conclusion is the same. For instance, scalable oversight of a group of AIs which is collectively far smarter than any group of humans is hard in basically the same ways as oversight of one highly-intelligent AI. Boxing the group of AIs is hard for the same reasons as boxing one. Etc.

I think the broad strokes are mostly similar, but that a bunch of relevant details are different.

Yes, a large collective of near-human AI that is allowed to interact freely over a (subjectively) long period of time is presumably roughly as hard to understand and control as a Bostrom/Yudkowsky-esque God in a box. However, in this scenario, we have the option to not allow free interaction between multiple instances, while still being able to extract useful work from them. It is also probably much easier to align a system that is not of overwhelming intelligence, and this could be done before the AIs are allowed to interact. We might also be able to significantly influence their collective behaviour by controlling the initial conditions of their interactions (similarly to how institutions and cultural norms have a substantial long-term impact on the trajectory of a country, for example). It is also more plausible that humans (or human simulations or emulations) could be kept in the loop for a long time period in this scenario. Moreover, if intelligence is bottle-necked by external resources (such as memory, data, CPU cycles, etc) rather than internal algorithmic efficiency, then you can exert more control over the resulting intelligence explosion by controlling those resources. Etc etc.

It is important to remember that humans, unlike all other species, are able to use complex language. This is a huge confounding factor, when we try to compare the intelligence of humans and animals. It is obviously very powerful to be able to exchange complex ideas, and build up knowledge intergenerationally. This would probably be enough to give humans a very large advantage, even if our intelligence was otherwise exactly the same as that of other primates.


Communication is an aspect of intelligence. It takes place in the brain, not the kidneys. Now you could argue that communication is a special extra boost above and beyond the normal gains of intelligence, that humans are near the top of the communication sigmoid, and that there are no other special extra boosts out there.

Do monkeys have a mind capable of understanding calculus internally and just lack any language capable of learning it? (Such that a monkey given perfect communication but no other increases in intelligence would be able to learn calculus.) 

I am not convinced the question is meaningful. I doubt that "communication" is a clear boundary on the neurochemical level, with a sharp divide between communication neurons and other neurons.

First of all, there have been "feral" humans that grew up surrounded by animals. As far as I know, these humans are not obviously much more intelligent than animals (in terms of their ability to solve problems).


Think of that like a modern supercomputer being used to play pong. 

(well not that big of a gap, but you get the picture)

Animal brains have a relatively simple and limited range of pieces of software they can run. 

Human brains are able to run a much wider range of much more complicated programs. 

In other words, human intelligence shows up in the fact that, with the right training, we are able to do all sorts of complicated things, whereas there are plenty of things most humans can do that animals can't be trained to do.

To actually be able to do useful stuff, humans need the right training, both in the specific technical details and in more general stuff like the scientific method (with some ability to figure those things out from large amounts of trial and error).

Train a human in a nutty cult, and their intelligence is useless. But the point is that humans can be trained to do physics. Not that every human crawls out of the womb doing calculus. 


I agree that if your only examples of humans were feral humans, then you would have no reason to think humans were much smarter. And then you would be very surprised by basically any educated human. 

Or, to put it another way (using a slightly different definition of the word "intelligence"): humans can be much more intelligent than animals, given the right environment.

I agree with your thesis that language is critical to human intelligence.

I'm curious if you have specific references for the claim that humans raised without language are far less smart. I've tried to find the best evidence, and the only decently documented cases I could find might have been developmentally disabled. The caretakers of severely neglected children (e.g., no one talks to them) universally claim that those children were nonverbal, and that's why they were treated as nonhuman. Stories of literal wolf children are much less well documented.

This is an academic question; I'd like to have better evidence. I strongly suspect that a developmentally normal human would be vastly less intelligent if raised without language. I'd like to have stronger evidence for this claim.

I remember reading about a zoologist couple who tried to raise their child together with a baby gorilla. The gorilla's development stopped at a certain age, and that stalled the human child's development, so they had to be separated.

gwern:

That would be the Kelloggs, I suspect (not to be confused with Meshie or Nim Chimpsky).

I don't have any good evidence that humans raised without language per se are less intelligent (if we understand "intelligence" to refer to a general ability to solve new problems). For example, Genie was raised in isolation for the first 13 years of her life, and never developed a first language. Some researchers have, for various reasons, guessed that she was born with average intelligence, but that she, as a 14-year-old, had a mental age "between a 5- and 8-year-old". However, here we have the confounding factor that she was also severely abused, and that she got very little mental stimulus in general for the first 13 years of her life, which would presumably obstruct mental development independently of a lack of language. This makes it hard to draw any strong conclusions (and we would regardless have a very small number of data points).

However, just to clarify, the argument I'm making doesn't crucially rely on the assumption that a human with language is significantly more intelligent than a human without language, but rather on the (presumably much less controversial) assumption that language is a significant advantage regardless of whether or not it is also paired with an increase in intelligence. For example, it would not surprise me if orangutans with language (but orangutan-level intelligence) over time would outcompete humans without language (but otherwise human-level intelligence). This, in turn, makes it difficult to infer how intelligent humans are compared to animals, based on what we have achieved compared to animals. 

For example, one might say 

"Humans have gone to space, but no other species is anywhere close to being able to do that. This proves that humans are vastly more intelligent than all other species."

However, without access to language, humans can't go to space either. Moreover, we don't know if orangutans would eventually be able to go to space if they did have access to language. This makes it quite non-trivial to make a direct comparison.

Cleo Nardo:

However, if it is the case that the difference between humans and monkeys is mostly due to a one-shot discrete difference (ie language), then this cannot necessarily be repeated to get a similar gain in intelligence a second time.

Perhaps language is a zero-one, i.e. language renders a mind "cognitively complete" in the sense that the mind can represent anything about the external world, and make any inferences using those representations. But intelligence is not thereby zero-one, because intelligence depends on continuous variables like computational speed, memory, etc.

More concretely, I am sceptical that "we end up with AI geniuses, but not AI gods", because running a genius at 10,000x speed, parallelised over 10,000x cores, with instantaneous access to the internet does (I think) make an AI god. A difference in quantity is a difference in kind.

That said, there might exist plausible threat models which require an AI which doesn't spatiotemporally decompose into less smart AIs. Could you sketch one out?

So, the claim is (of course) not that intelligence is zero-one. We know that this is not the case, from the fact that some people are smarter than other people. 

As for the other two points, see this comment and this comment.

Viliam:

humans have an innate ability to use complex, combinatorial language

The important part is that we use this language not just when talking to each other, but also internally. We can remember spoken sentences (for example other people's commands). We can talk to ourselves (i.e. think verbally). We can talk to our future selves (e.g. by writing a to-do list). We can communicate asynchronously (i.e. write). We can follow complex written instructions.

I am writing it this way to express that merely saying "language" feels like an understatement of all the abilities that are unlocked by non-standard uses of language.

Yes. There are vigorous debates in cognitive psychology about how dependent people are on internal speech. I think many people do good thinking without using literal internal self-talk. But the conceptual representations they use for thinking are still shaped by the words they've learned.

Ann:

What if: Human ability to use language with certain interesting and useful properties isn't particularly innate, though we may have evolved advantages with it over time. Rather, it is a cultural invention with useful applications to technology. If this idea is true, we should see gains in capability in any animal society with culturally transmitted traditions that learns or develops a language with similar properties. I think from the AI perspective we can certainly argue it's been quite handy for a nonhuman entity that does have an innate connection to language.

mishka:

At the moment, the most common answers to these questions are “much more intelligent”, and “very quickly”.

For "very quickly" ("foom", intelligence explosion), the aspect which is particularly important is how smart humans are specifically with respect to the task of making better and better AI systems.

We strongly suspect that AIs which are "roughly human equivalent with respect to the task of making better and better AI systems" will rapidly become much smarter than humans specifically with respect to this task, and this is the commonly assumed mechanism of "hard take-off".

Other kinds of smartness are expected to be mostly downstream of this central mechanism...

(So, in some sense, when we are talking about take-off speed, we are talking not so much about "general AIs", but about systems which are strong "artificial AI researchers and software engineers").

So, this model of a takeoff scenario makes certain assumptions about how intelligence works, and these assumptions may or may not be correct. In particular, it assumes that the initial AI systems are very far from being algorithmically optimal. We don't know whether or not this will be the case; that is what I am trying to highlight.

The task of extracting knowledge from data is a computational task, which has a certain complexity-theoretic hardness. We don't know what that hardness is, but there is a lower bound on how efficiently this task can be done. Similarly for all the other tasks of intelligence (such as planning, etc).

Strong recursive self-improvement (given a fixed amount of resources) is only possible if the first AI systems are very far from being algorithmically optimal at all the relevant computational tasks. This is not a given; it could be true or false. For example, while you can optimise a SAT-solver in many ways, it will at the end of the day necessarily have a worst-case exponential runtime complexity (unless P = NP).
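To make the SAT point concrete, here is a minimal brute-force satisfiability check in Python (a toy sketch of my own, not anything a real solver would use): clever solvers prune this search enormously in practice, but unless P = NP, no amount of algorithmic self-improvement removes the exponential worst case over the 2^n assignments.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by trying all 2^n assignments.

    Each clause is a list of non-zero integers: k means "variable k is true",
    -k means "variable k is false" (the usual DIMACS-style convention).
    """
    for assignment in product([False, True], repeat=n_vars):
        # An assignment works if every clause contains at least one satisfied literal.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # satisfying assignment found
    return None  # unsatisfiable: all 2^n assignments were ruled out

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], n_vars=3))
```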

Therefore, the question of how much more intelligent AI systems will end up being compared to humans, depends on how close the human brain algorithm is to being (roughly) Pareto-optimal for all the relevant computational tasks. We don't know the answer to this question. Strong, sustained recursive self-improvement is only possible if our initial AGI algorithm, and the human brain, both are very far from being Pareto-optimal.

Is this the case? You could point to the discrepancy between humans and animals, and argue that this demonstrates that there are cognitive algorithms that yield vastly different results given similar amounts of resources (in terms of data and parameters). However, the argument I've written casts doubt on whether or not this evidence is reliable. Given that, I think the case is no longer so clear; perhaps the human neural architecture is (within some small-ish polynomial factor of being) Pareto optimal for most relevant cognitive tasks.

Now, even if the human brain algorithm is roughly optimal, AI systems will almost certainly still end up with vastly more cognitive force (because they can be run faster, and given more memory and more data). However, I think that this scenario is different in relevant ways. In particular, without (strong) recursive self-improvement, you probably don't get an uncontrollable, exponential growth in intelligence, but rather a growth that is bottle-necked by resources which you could control (such as memory, data, CPU cycles, etc).
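As a toy illustration of the difference (my own sketch with made-up numbers, not a model from the post): in the first regime capability compounds on itself each step, while in the second it only grows as fast as an externally controlled resource budget allows.

```python
def self_improving_growth(steps, gain_per_step=1.5, capability=1.0):
    """Each step the system improves itself, so gains compound (exponential growth)."""
    for _ in range(steps):
        capability *= gain_per_step
    return capability

def resource_bounded_growth(steps, resources_per_step=10.0, capability=1.0):
    """Each step adds capability roughly in proportion to the resources
    (memory, data, compute) that an outside party chooses to supply."""
    for _ in range(steps):
        capability += resources_per_step
    return capability

for steps in (10, 20, 40):
    print(steps,
          round(self_improving_growth(steps), 1),
          resource_bounded_growth(steps))
```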


mishka:

In particular, it assumes that the initial AI systems are very far from being algorithmically optimal. We don't know whether or not this will be the case; that is what I am trying to highlight.

Strong recursive self-improvement (given a fixed amount of resources) is only possible if the first AI systems are very far from being algorithmically optimal at all the relevant computational tasks.

Right. But here we are talking about a specific class of tasks (which animals cannot do at all, they can't write computer code or a mathematical paper; so, evolutionarily speaking, humans are new at that, and probably not anywhere close to any equilibrium, because this class of tasks is very novel on the evolutionary timescale).

Moreover, we know a lot about human performance at those tasks, and it's abysmal, even for top humans, and for AI research as a field. For example, there was a paper in Nature in 2000 explaining that ReLUs induce semantically meaningful sparseness. The field ignored this for over a decade, then rediscovered in 2009-2011 that ReLUs were great, and by 2015 ReLUs had become dominant. That's typical; there are plenty of examples like that (obvious discoveries not made for decades (e.g. the ReLU discovery in question was really due in the early 1970s), then further ignored for a decade or longer after the initial publication before being rediscovered and picked up). AI could simply try various things lost in the ignored old papers for a huge boost (there is a lot of actionable, pretty strong stuff in those; not everything gets picked up the way ReLU eventually was, and a lot remains published but ignored by the field).

Anyone who has attended an AutoML conference recently knows that the field of AutoML is simply too big to deal with: too many promising methods for neural architecture search, for hyperparameter optimization, for metalearning of optimization algorithms. So we have all these things in metalearning and in "AI-generating algorithms" which could provide a really large boost over the status quo if done correctly, but which are too difficult to fully take advantage of because of human cognitive limitations, as the whole field is just "too large a mess to think effectively about".

So it seems that, at least, there is quite a bit of room for a large initial boost over the current human-equivalent capacity. If one starts at the level of a top AI researcher at a good company today, there are plenty of fixable things simply on the level of being able to consider everything that has been written in this field of study (there is also plenty of room for improvement in terms of being able to rapidly write correct computer code and rapidly try various small-scale ideas).

It's still quite possible that recursive self-improvement saturates after a while (it's difficult to predict how far this would go, or how soon it would start hitting difficult-to-overcome bottlenecks). But even the strongest human AI researchers (or strongest human software engineers) we have today are very, very improvable (perhaps they can be improved by giving them more powerful tools, so that they might still remain in the loop, at least initially; the only thing people seem to be hoping for in this sense is that a Copilot-like style of interaction with AIs would allow humans to keep participating in how it goes, so that the systems remain human-AI hybrid systems, rather than AI-only).

 which animals cannot do at all, they can't write computer code or a mathematical paper


This is not obvious to me (at least not for some senses of the word "could"). Animals cannot be motivated into attempting to solve these tasks, and they cannot study maths or programming. If they could do those things, then it is not at all clear to me that they wouldn't be able to write code or maths papers. To make this more specific: insofar as humans rely on a capacity for general problem-solving in order to do maths and programming, it would not surprise me if many animals also have this capacity to a sufficient extent, but that this capacity simply cannot be directed in the right way. Note that animals even outperform humans at some general cognitive tasks. For example, chimps have a much better short-term memory than humans.

Moreover, we know a lot about human performance at those tasks, and it's abysmal, even for top humans, and for AI research as a field.

Abysmal, compared to what? Yes, we can see that it is abysmal compared to what would in principle be information-theoretically possible. However, this doesn't tell us very much about whether or not it is abysmal compared to what is computationally possible.

The problem of finding the minimal complexity hypothesis for a given set of data is not computationally tractable. For Kolmogorov complexity, it is uncomputable, but even for Boolean complexity, it is at least exponentially difficult (depending a bit on how exactly the problem is formalised). This means that in order to reason effectively about large amounts of data, it is (presumably) necessary to model most of it using low-fidelity methods, and then (potentially) use various heuristics in order to determine what pieces of information deserve more attention. I would therefore expect a "saturated" AI system to also frequently miss things that look obvious in hindsight.
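As a rough back-of-the-envelope illustration (my framing, with the caveat that realistic hypothesis classes are structured subsets of this space): there are 2^(2^n) Boolean functions on n input bits, so exhaustively searching for the simplest hypothesis consistent with some data blows up almost immediately, and any tractable reasoner has to rely on lossy, low-fidelity shortcuts instead.

```python
# The number of distinct Boolean functions on n input bits is 2^(2^n); any exact,
# exhaustive search for a minimal consistent hypothesis lives inside a space
# that grows at least this fast.
for n in range(1, 7):
    print(f"n = {n}: 2^(2^{n}) = {2 ** (2 ** n):,} candidate functions")
```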

So it seems that, at least, there is quite a bit of room for a large initial boost over the current human-equivalent capacity.

I agree that AI systems have many clear and obvious advantages, and that e.g. simply running them at a higher clock speed will give you a clear boost regardless of what assumptions we make about the "quality" of their cognition compared to that of humans. The question I'm concerned with is whether or not a takeoff scenario is better modeled as "AI quickly bootstraps to incomprehensible, Godlike intelligence through recursive self-improvement", or whether it is better modeled as "economic growth suddenly goes up by a lot". All the obvious advantages of AI systems are compatible with the latter.