I struggle quite a bit with publishing, so today I am just going to publish a bunch of hopefully relatively refined drafts. Any comments and questions are welcome, and I would be happy to elaborate on any area where I appear unclear. If you like, you can also let me know your thoughts on how to format this so it is easier to read. This applies to all my future posts.

This post captures most of my current thoughts on deconfusing intelligence.
Note: the two parts overlap somewhat, but I haven't fully synthesized them yet. Part 1 had been included in our AISC publications before, but I think it's cleaner to post it separately.

 

Part 1 (comments on conceptual confusion):

The “ability to achieve goals in a wide range of environments” as per Shane Legg and Marcus Hutter (https://arxiv.org/abs/0706.3639) is often used as a definition of general intelligence, but in my opinion it better captures the notion of general competence.
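
For reference, the linked paper formalizes this as a weighted sum of the agent's expected performance over all computable environments:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $\pi$ is the agent, $E$ the class of computable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ the expected total reward $\pi$ achieves in $\mu$.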

How might the two concepts differ?

Most agents can achieve significantly fewer goals in many environments if their embodiment is changed (say, if their legs are removed), even when they retain the same cognitive capabilities. Is it therefore right to say that an agent has become less intelligent as a consequence of losing its legs?

I believe that including the embodiment/interface of an agent within our measure/concept of its intelligence inflates the concept and leads to confusion.

An example of such confusion is the notion that companies are already superintelligent agents/entities and thus serve as useful analogies/frameworks for thinking about how to align superintelligent agents in general. While I agree that companies can be more competent than individuals in many domains, they interface with the world very differently, and this difference needs to be accounted for before characterising the gap in competence as purely or mostly a matter of intelligence.

 

Competence is contextual and compositional: there are many factors that can contribute to it. The sensors that an agent/system possesses, its actuators, processing speed, data storage capacity, and so on, are things that I believe should be understood separately from intelligence. If we study the factors that make up the construct of competence, both individually and in their interaction/relation with each other, we become capable of generating more precise explanations and predictions, like “What happens when we increase the intelligence of agent x by y% in environment z, while keeping all the other factors the same? What about if we just double the processing speed, but change nothing else?”
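
To make the compositional framing concrete, here is a minimal illustrative sketch (the factor names and the toy competence function are made up, only meant to show the kind of intervention-style questions the decomposition enables):

```python
from dataclasses import dataclass, replace

@dataclass
class AgentFactors:
    """Hypothetical decomposition of factors contributing to competence."""
    sensor_fidelity: float     # quality of perception
    actuator_reach: float      # what the embodiment can physically affect
    processing_speed: float    # cognitive operations per unit time
    memory_capacity: float     # how much state can be stored
    intelligence: float        # here: the ability to create/update models

def toy_competence(f: AgentFactors, env_difficulty: float) -> float:
    """A made-up aggregation, only meant to show that competence is a function
    of several separable dials rather than a single scalar."""
    cognition = f.processing_speed * f.memory_capacity * (1 + f.intelligence)
    interface = f.sensor_fidelity * f.actuator_reach
    return cognition * interface / env_difficulty

base = AgentFactors(sensor_fidelity=0.8, actuator_reach=0.5,
                    processing_speed=1.0, memory_capacity=1.0, intelligence=0.3)
# "What if we just double the processing speed, but change nothing else?"
faster = replace(base, processing_speed=2.0)
# "What if we increase intelligence by 50% instead?"
smarter = replace(base, intelligence=base.intelligence * 1.5)

for name, agent in [("base", base), ("faster", faster), ("smarter", smarter)]:
    print(name, round(toy_competence(agent, env_difficulty=2.0), 3))
```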

There is perhaps no ground truth to what intelligence is. If competence can be cleanly divided into, say, 8 different contributing factors, we are free to refer to one of them (or a group of them) as intelligence, or to say that we have explained the mystery away. The term is only important insofar as it guides intuition and association, and allows for clear communication. Exactly this is what gets muddied by the conflation: if people think of intelligence as a complex phenomenon that basically fully predicts competence and can hardly be split into smaller pieces, they are inhibited from considering more compositional, and therefore more understandable, notions of competence.

 

Here is the way I like thinking about cognition and, in that context, intelligence:

Simply speaking, cognition can be understood as the computational translation process from an agent’s observations to its actions, and sophisticated agents usually include observations about their cognitive architecture, and actions within their cognitive architecture, in that space.

There are three basic levels to how this translation may occur.

  • Level 1 is akin to a lookup table, where there is just a direct connection from observation to action, without any conditional processing in between. 
  • Level 2 is an algorithm, a model or composition thereof, that takes in observations and perhaps some extra data like memories or current preferences, and outputs the action. For embedded agents one can argue that, because of the good regulator theorem, the most useful models will often be partial and compositional simulations of the environment. One can create level 1 structures this way, by simply storing the pair of observation and computed action.
  • Level 3 is any algorithm that makes changes to level 2 or level 3 according to some metric of improvement; it is basically the process by which the agent creates and updates models. This constitutes an agent’s ability to learn novel things and update its models, primarily by reducing prediction errors. (A toy sketch of all three levels follows this list.)
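
As a very rough illustration, here is a hypothetical toy sketch of the three levels (not a claim about how any real agent implements them; all names are placeholders):

```python
import random

# Level 1: a direct observation -> action mapping, no conditional processing in between.
lookup_table = {"red_light": "stop", "green_light": "go"}

# A toy world model: a single parameter estimating how often the light is green.
world_model = {"p_green": 0.5}

def level2_policy(observation: str) -> str:
    """Level 2: compute an action from the observation plus the model's prediction,
    then cache the (observation, action) pair, which creates a level 1 structure."""
    if observation == "unknown_signal":
        action = "go" if world_model["p_green"] > 0.5 else "wait"
    else:
        action = "go" if observation == "green_light" else "stop"
    lookup_table[observation] = action
    return action

def level3_update(observation: str, lr: float = 0.2) -> None:
    """Level 3: change the level 2 machinery itself (here: the model parameter)
    by reducing prediction error against what was actually observed."""
    observed_green = 1.0 if observation == "green_light" else 0.0
    world_model["p_green"] += lr * (observed_green - world_model["p_green"])

# A tiny interaction loop tying the levels together.
for _ in range(10):
    obs = random.choice(["red_light", "green_light", "unknown_signal"])
    act = lookup_table.get(obs) or level2_policy(obs)   # try level 1 first, else level 2
    if obs != "unknown_signal":
        level3_update(obs)
```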

An agent’s competence in interacting with its environment, given a particular interface (/embodiment) to the environment, is largely determined by level 2. Only by putting agents into novel circumstances can their level 3 capabilities be inferred from their behavior/competence. Most contemporary AIs only have access to levels 1 and 2 during deployment, since they lack the ability to update their weights after training. This is mostly because an inadequate (relative to the environment) level 3 capability can mess up a well-developed level 2 capability.

It should be noted, however, that level 3 is usually just a particular version of level 2, with the main difference being that the output action refers to a structural change in the system’s cognition. Without going into too much detail here, there can be functional equivalence between competence growth through changing one’s environment vs changing one’s models, meaning that an LLM can functionally possess some limited level 3 capabilities during deployment by generating the context for its future outputs, even though its weights are frozen.

I like to think of level 2 as Understanding and level 3 as Intelligence, though terming them “crystallized intelligence” and “fluid intelligence” may be more intuitive to some readers. Intelligence is the way to acquire Understanding that an agent doesn’t already have, and is therefore only indirectly related to competence.
I am not sure how to discuss level 4, which is simply the algorithmic layer updating the level 3 algorithms. This sort of meta-learning is crucial for efficiency and cognitive domain adaptation, but it is also quite exfohazardous. 


 

Part 2 (closer to formalization):

I would measure/define Competence as the ability to achieve goals in an environment. General Competence denotes this for a wide range of environments, in order to make the notion of Competence less contextual to a specific environment. One could have comparative or absolute notions of (General) Competence. 

“Achieving goals in an environment” translates to achieving a subset of possible states of a given environment. We usually think of these states as not being the default ones, so that the agent has to take an active role in bringing about the subset.

Taking an active role in bringing about a subset of states of an environment(/ a system) means that the agent needs to interface with the environment and introduce changes away from the “default causal trajectory”. 
This “interfacing” is done through actuation, meaning that the agent makes changes to the environment through the boundary of its embodiment. One could imagine this actuation interface for every possible state of the environment (including the agent), offering different access to the sub-states of the environment depending on many factors, e.g. the agent’s location and height.

In principle, this actuation interface could be so complete that the agent has direct control over every sub-component(/parameter) of the environment, making the issue of achieving goals trivial: the agent could just copy-paste a desired state through the interface. This is a special case in which no perception is required to be maximally competent with respect to the environment.
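
A toy way to picture this degenerate case (everything in the sketch is hypothetical): if the actuation interface exposes write access to every sub-state, reaching a goal state requires neither perception nor planning.

```python
# The environment as a set of named sub-states (all names are made up).
environment = {"door": "closed", "light": "off", "temperature": 18}
goal_state = {"door": "open", "light": "on", "temperature": 21}

def complete_actuation_interface(env: dict, desired: dict) -> None:
    """Degenerate special case: the interface exposes write access to every
    sub-component, so achieving the goal is just copy-pasting the desired state.
    No perception is needed."""
    env.update(desired)

def partial_actuation_interface(env: dict, key: str, value) -> bool:
    """More typical case: only some sub-states are directly reachable from the
    agent's embodiment; the rest must be influenced indirectly, which is where
    perception and cognition start to matter."""
    reachable = {"door", "light"}        # hypothetical reachable subset
    if key in reachable:
        env[key] = value
        return True
    return False

complete_actuation_interface(environment, goal_state)
print(environment)   # the goal subset of states has been reached directly
```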

Usually however, perception is required and can be seen as part of the interface on the boundary between agent and environment. It provides evidence about the current state of the environment. It can potentially inform the agent about which part of the environment can be altered through actuation and in what way. 

This is relevant for all non-trivial implementations of cognition. I conceive of cognition as the translation between perception and actuation, but with the optional addition of perception about the agent’s internal state and actuation on it - as part of the environment, the agent may also be interfacing with itself. The performance of the cognition is dependent on the “translation algorithm” and the computational substrate that it runs on. 

I like to think of “efficient translation software” as Understanding, and I think that, in the embedded agency setting, it generally cashes out as a composition of computational models that capture predictions about relevant sub-states of the environment and map them onto actions. 

Now, Intelligence is the algorithm/mechanism that updates and creates Understanding. It potentially makes the “translation software” more efficient, more effective, or adjusts it to new contexts (which I felt was worth mentioning separately, though it can be mapped onto increased efficiency/effectiveness).
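
A minimal sketch of this decomposition (all names are hypothetical): Understanding as the perception-to-actuation translation, built from simple predictive sub-models, and Intelligence as the mechanism that rewrites that translation.

```python
from typing import Callable, Dict

Observation = Dict[str, float]
Action = str

# "Understanding": an efficient translation from perception to actuation,
# here represented as a composition of simple predictive sub-models (weights).
def make_understanding(weights: Dict[str, float]) -> Callable[[Observation], Action]:
    def translate(obs: Observation) -> Action:
        predicted_benefit = sum(weights.get(k, 0.0) * v for k, v in obs.items())
        return "approach" if predicted_benefit > 0 else "avoid"
    return translate

# "Intelligence": the mechanism that updates/creates Understanding,
# e.g. by nudging the sub-models to reduce prediction error.
def intelligence_update(weights: Dict[str, float], obs: Observation,
                        outcome: float, lr: float = 0.1) -> Dict[str, float]:
    predicted = sum(weights.get(k, 0.0) * v for k, v in obs.items())
    error = outcome - predicted
    return {k: weights.get(k, 0.0) + lr * error * v for k, v in obs.items()}

weights = {"food_smell": 0.0, "predator_sound": 0.0}
understanding = make_understanding(weights)
obs = {"food_smell": 1.0, "predator_sound": 0.0}
action = understanding(obs)                      # competence comes from Understanding
weights = intelligence_update(weights, obs, outcome=1.0)
understanding = make_understanding(weights)      # Intelligence has changed Understanding
```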

Taking a step back, it should be apparent that multiple factors interplay to determine an agent’s Competence. If we lower the actuation capability of the agent, it may need to find a “cleverer” way to achieve its goal, or the goal may have become unreachable in principle through the new interface. If we turn any of these dials of perception, actuation, cognitive hardware and cognitive software, or the way in which they all connect, we can alter the agent’s (General) Competence.

Intelligence is the mechanism by which a functional translation algorithm is developed over time. If the hardware capacity suddenly doubles, one will need Intelligence to make use of that. The same goes for any other turn of the dial. If one eye goes blind, Intelligence will need to adjust the Understanding to account for that.

And yet, Intelligence can be superfluous if all these components are already correctly calibrated and connected, and the environment does not receive any unusual disturbance. In this case, Intelligence would not even register as a factor in the agent’s Competence. It is an indirect contributor.

An agent could self-improve in the absence of Intelligence, if its Understanding already encodes behavior that would lead to self-improvements. This is also why the boundary between Intelligence and Understanding is somewhat ambiguous. 

Intelligence is also just a sort of translation algorithm from observation to action, but the environment that it chooses actions within is the cognition of the agent. And in this environment, it can be extremely competent - not least due to its potent interface.


Interesting, and good job publishing rather than polishing!

I really like the terminology of competence vs. intelligence.

I don't think you want to use the term intelligence for your level 3. I think I see why you want to; but intelligence is currently an umbrella term for any cognitive capacity, so you're invoking different intuitions when you use it for one particular cognitive capacity.

In either case, I think you should draw the analogy more closely with Level 3 and problem-solving. At least if you think it exists.

Suppose I'm a hunter-gatherer, and there is fruit high up in a tree. This tree has thorns, so my usual strategy of climbing it and shaking branches won't work. If I figure out, through whatever process of association, simulation, and trial and error, that I can get a long branch from another tree and then knock the fruit down, I can incorporate that into my level 2 cognition, and from there into level 1. This type of problem-solving is also probably the single cognitive ability most often referred to as intelligence, thus justifying your use of the term for that level. If I'm right that you'd agree with all of that, that could make the terminology more intuitive to the reader.

In any case, I'm glad to see you thinking about cognition in relation to alignment. It's obviously crucial; I'm unclear if most people just aren't thinking about it, or if it's all considered too infohazardous.

Thanks! 

In your example, I think it is possible that the hunter-gatherer solves the problem through pure level 2 capability, even if they never encountered this specific problem before. Using causal models compositionally to represent the current scene, and computing over that representation to output a novel solution, does not actually require that the human updates their causal models about the world. 
I am trying to distinguish agents with this sort of compositional world model from ones that just have a bunch of cached thoughts or habits (which would correspond to level 1), and I think this is perhaps a common case where people would attribute intelligence to a system that imo does not demonstrate level 3 capability.

Of course, this would require that the human in our example already has some sufficiently decontextualised notion of knocking loose objects down, or that their concepts are generally suited to this sort of compositional reasoning. It might be worth elaborating on level 2 to introduce some measure of modeling flexibility/compositionality.

I feel like this could be explained better, so I am curious if you think I am being clear.

You are probably right that I should avoid the term intelligence for the time being, but I haven't quite found an alternative term that resonates. Anyways, thanks for engaging!

 

Edit: I'll soon make some changes to the post to better account for this ability of level 2 algorithms to solve novel problems even when no new learning occurs. It's an important aspect of why I am saying that level 3 capabilities are only indirectly related to competence.

This is something I've been thinking about as well, and I think you do a good job explaining it. There's definitely more to break down and analyze within competence and intelligence, such as simulation being a distinct part of intelligence. A measure of how many moves a player can think ahead in a strategy game like chess or Go. How large a possibility-tree can they build in the available time? With what rate of errors? How quickly does the probability of error increase as the tree increases in size? How does their performance decrease as the complexity of the variables that need to be tracked for an accurate simulation increases?
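
One toy way to operationalise these questions (a hypothetical sketch; the game-interface functions are placeholders, not any real game engine):

```python
import random

def budgeted_lookahead(state, legal_moves, apply_move, evaluate,
                       depth: int, node_budget: int, error_rate: float = 0.0):
    """Toy depth-limited look-ahead with a node budget and a per-node error rate:
    roughly the dials above (how deep, how many nodes in the available time,
    how often the simulation goes wrong)."""
    nodes_used = 0

    def search(s, d):
        nonlocal nodes_used
        nodes_used += 1
        if d == 0 or nodes_used >= node_budget:
            value = evaluate(s)
            # imperfect simulation: with some probability the evaluation is noise
            return random.random() if random.random() < error_rate else value
        return max(search(apply_move(s, m), d - 1) for m in legal_moves(s))

    best = max(legal_moves(state), key=lambda m: search(apply_move(state, m), depth - 1))
    return best, nodes_used

# Toy usage: states are integers, moves add or subtract, bigger is better.
move, nodes = budgeted_lookahead(
    state=0,
    legal_moves=lambda s: [+1, -1],
    apply_move=lambda s, m: s + m,
    evaluate=lambda s: float(s),
    depth=4, node_budget=60, error_rate=0.1)
print(move, nodes)
```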

Yeah, I wish we had some cleaner terminology for that.
Finetuning the "simulation engine" towards a particular task at hand (i.e. to find the best trade-off between breadth and depth search in strategy games, or even know how much "thinking time" or "error allowance" to allocate to a move), given limited cognitive resources, is something that I would associate with level 3 capability.
It certainly seems like learning could go into the direction of making the model of the game more useful by either improving the extent to which this model predicts/ouputs good moves or by improving the allocation of cognitive resources to the sub-tasks involved. Presumably, an intelligent system should be capable of testing which improvement vectors seem most fruitful (and the frequency with which to update this analysis), but I find myself a bit confused about whether that should count as level 3 or as level 4, since the system is reasoning about allocating resources across relevant learning processes.

Thanks for this post. I had two similar thoughts before.


One thing I'd like to discuss is Bostrom's definition of intelligence as instrumental rationality:

By ‘‘intelligence’’ here we mean something like instrumental rationality—skill at prediction, planning, and means-ends reasoning in general. (source)

This seems to be roughly similar to your "competence". I agree that this is probably too wide a notion of intelligence, at least in intuitive terms. For example, someone could plausibly suffer from akrasia (weakness of will) and thus be instrumentally irrational, while still being considered highly intelligent. Intelligence seems to be necessary for good instrumental reasoning, but not sufficient.

I think a better candidate for intelligence, to stay with the concept of rationality, would be epistemic rationality. That is, the ability to obtain well-calibrated beliefs from experience, or a good world model. Instrumental rationality requires epistemic rationality (having accurate beliefs is necessary for achieving goals), but epistemic rationality doesn't require the ability to effectively achieve goals. Indeed, epistemic rationality doesn't seem to require being goal-directed at all, except insofar as we describe "having accurate beliefs" as a goal.

We can imagine a system that only observes the world and forms highly accurate beliefs about it, while not having the ability or a desire to change it. Intuitively such a system could be very intelligent, yet the term "instrumentally rational" wouldn't apply to it.

As the instrumental rationality / epistemic rationality (intelligence) distinction seems to roughly coincide with your competence / intelligence distinction, I wonder which you regard as the better picture. And if you view yours as more adequate, how do competence and intelligence relate to rationality?


An independent idea is that it is apparently possible to divide "competence" or "instrumental rationality" into two independent axes: Generality, and intelligence proper (or perhaps: competence proper). The generality axis describes how narrow or general a system is. For example, AlphaGo is a very narrow system, since it is only able to do one specific thing, namely playing Go. But within this narrow domain, AlphaGo clearly exhibits very high intelligence.

Similarly, we can imagine a very general system with quite low intelligence. Animals come to mind. Indeed, I have argued before that humans are much more intelligent than other animals, but apparently not significantly more general. Animals seem to be already highly general, insofar as they solve things like "robotics" (the real-world domain) and real-time online learning. The reason e.g. Yudkowsky sees apes as less general than humans seems to have mainly to do with their intelligence, not with their generality.

One way to think about this: You have an AI model and create a version that is identical except that you scale up the model size. A scaled-up AlphaGo would be more intelligent, but arguably not more general. Similarly, the additional abilities of a scaled-up LLM would be examples of increased intelligence, not of increased generality. And humans seem to be mostly scaled-up versions of smaller animals as well. The main reason we are so smart seems to be our large brain / neuron count. Generality seems to be a matter of "architecture" rather than model size, in the sense that AlphaGo and GPT-3 have different architectures, such that GPT-3 is more general; and GPT-2 and GPT-3 have the same architecture, such that GPT-3 is more intelligent, but not any more general.

Now your stratification of learning into several levels seems to be a case of such generality. The more levels a cognitive system implements, the more general it arguably is. I'm not sure whether your level 3 describes online learning or meta-learning. One could perhaps argue that humans exhibit meta-learning in contrast to other animals, and should therefore be considered more general. But again, maybe other animals also have this ability, just to a lesser degree, because they are less intelligent in the above sense (having smaller brains), not because they implement a less general cognitive architecture.

Anyway, I wonder whether you happen to have any comments on these related ideas.

I think that "epistemic rationality" matches very well with what I am thinking of as level 3, which is my notion of intelligence. It is indeed applicable to non-agentic systems.
I am still thinking about whether to include meta-learning (referring to updating level 3 algorithms based on experience) and meta-processes above that in my concept of intelligence.

Would this layer of meta-learning be part of epistemic rationality, do you think? It becomes particularly relevant if the system is resource-constrained and has to prioritize what to learn about, and/or cares about efficiency of learning. These constraints feel a bit less natural to introduce for a non-agentic system, except when said system is set up by an agentic system for some purpose.

In any case, instrumental rationality does not seem to cover all that I mean by Competence, but perhaps something narrower, like "cognitive competence". I find it a bit difficult to systematically distinguish between cognitive competence and non-cognitive competence, because the cognitive part of the system is also implemented by its embodiment - and there are various correlations between events in "morphospace" and "cognitive space". 
One way of resolving that might be to distinguish between on-surface properties of a Markov blanket (corresponding to the system interfacing with its environment) and within-surface properties of that Markov blanket (corresponding to integrated regulation systems and the cognition). There will still be feedback loops between those properties, so our mileage in obtaining clean distinctions between different competencies may vary.
If you are interested in some more thoughts on that, you can check out my post on extended embodiment.

 

An independent idea is that it is apparently possible to divide "competence" or "instrumental rationality" into two independent axes: Generality, and intelligence proper (or perhaps: competence proper).

I have already commented a bit on meta-learning above; by default, my level 3 refers just to online learning, but I am thinking of including different levels of meta-learning because of the algorithmic similarities.
Perhaps interestingly for you, I consider one of the primary purposes of meta-learning to refine a generally intelligent system into a more narrowly intelligent system, by improving its learning capabilities for a particular set of domains, in some sense biasing the cognition towards the kind of environment it seems to operate within (i.e. in terms of hypothesis generation, or which kinds of functions to use when approximating the behavior of an observed sequence).

Of course, unless the system loses its meta-learning capability, it will be able to respond to changes in its environment by re-aiming/updating its learning tendencies over time - so it is technically general if you give it some time, but ends up converging towards beneficial specialisation.

I think of generality of intelligence as relatively conceptually trivial. At the end of the day, a system is given a sequence of data via observation, and is now tasked with finding a function or set of functions that both corresponds to plausible transition rules of the given sequence, and has a reasonably high chance of correctly predicting the next element of the sequence (which is easy to train for by hiding later elements of the sequence from the modeling process and sequentially introducing them to test and potentially update the fit of the model). 
Computationally speaking, the set of total atomic functions that you would have to consider in order to be able to compositionally construct arbitrary transition rules for sequences of discrete data packages is very small. The only mathematical requirement is Turing universality - basically the entire difficulty arises due to resource constraints.
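
A toy version of that setup (the candidate rules are hypothetical, just to illustrate the "reveal later elements, test, and update the fit" loop):

```python
# Candidate transition rules: each maps the history of observed elements to a prediction.
candidate_rules = {
    "constant":  lambda xs: xs[-1],
    "increment": lambda xs: xs[-1] + 1,
    "double":    lambda xs: xs[-1] * 2,
    "alternate": lambda xs: xs[-2] if len(xs) >= 2 else xs[-1],
}

def fit_rule(sequence):
    """Sequentially reveal later elements of the sequence; score each candidate rule
    by how often it predicts the next element correctly; return the best fit."""
    scores = {name: 0 for name in candidate_rules}
    for t in range(1, len(sequence)):
        history, target = sequence[:t], sequence[t]
        for name, rule in candidate_rules.items():
            if rule(history) == target:
                scores[name] += 1
    return max(scores, key=scores.get)

print(fit_rule([1, 2, 4, 8, 16]))   # -> "double"
print(fit_rule([3, 3, 3, 3]))       # -> "constant"
```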

This seems to match your thoughts about the appearance of greater generality simply due to more processing power. A cognitive system that is provided with more processing power could use it either to search more deeply within those regions of causal models that it naturally considers, or to branch out and consider new regions within model-space. Many brains in the animal kingdom seem to implement a sort of limited generative simulation of their environment, so that could be considered a fairly general problem domain.

I could try to write more on this, but I am curious what you think about this so far and if I come across as reasonably clear.

So regarding things that involve active prioritizing of compute resources, I think that would fairly clearly no longer fall under epistemic rationality, because "spending compute resources on this rather than that" is an action, and actions are only part of instrumental rationality. So in that sense it wouldn't be part of intelligence. Which makes some sense, given that intuitively smart people often concentrate their mental efforts on things that are not necessarily very useful to them.

This relates also to what you write about levels 1 and 2 compared to level 3. In the first two cases you mention actions, but not in the third. Which makes sense if level 3 is about epistemic rationality. Assuming levels 1 and 2 are about instrumental rationality then, this would be an interesting difference from my previous conceptualization: On my picture, epistemic rationality was a necessary but not sufficient condition for instrumental rationality, but on your picture, instead levels 1 and 2 (~instrumental rationality) are a necessary but not sufficient condition for level 3 (~epistemic rationality). I'm not sure what we can conclude from these inverted pictures.

I think of generality of intelligence as relatively conceptually trivial. At the end of the day, a system is given a sequence of data via observation, and is now tasked with finding a function or set of functions that both corresponds to plausible transition rules of the given sequence, and has a reasonably high chance of correctly predicting the next element of the sequence

Okay, but terminology-wise I wouldn't describe this as generality, because the narrow/general axis seems to have more to do with instrumental rationality / competence than with epistemic rationality / intelligence. The latter can be described as a form of prediction, or building causal models / a world model. But generality seems to be more about what a system can do overall in terms of actions. GPT-4 may have a quite advanced world model, but at its heart it only imitates Internet text, and doesn't do so in real time, so it can hardly be used for robotics. So I would describe it as a less general system than most animals, though more general than a Go AI.

Regarding an overall model of cognition, a core part that describes epistemic rationality seems to be captured well by a theory called predictive coding or predictive processing. Scott Alexander has an interesting article about it. It's originally a theory from neuroscience, but Yann LeCun also sees it as a core part of his model of cognition. The model is described here on pages 6 to 9. Predictive coding is responsible for the part of cognition that he calls the world model.

Basically, predictive coding is the theory that an agent constantly does self-supervised learning (SSL) on sensory data (in real time / online) by continuously predicting its experiences and updating the world model depending on whether those predictions were correct. This creates a world model, which is the basis for the other abilities of the agent, like creating and executing action plans. LeCun calls the background knowledge created by this type of predictive coding the "dark matter" of intelligence, because it includes fundamental common-sense knowledge, like intuitive physics.
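
A stripped-down caricature of that loop (not LeCun's architecture or any neuroscience model, just the predict-compare-update cycle described above, with a made-up sensory stream):

```python
import random

# World model: a single parameter estimating the next sensory input.
model = {"estimate": 0.0}
learning_rate = 0.1

def sense_environment() -> float:
    """Stand-in for a continuous sensory stream: a regularity plus noise."""
    return 5.0 + random.gauss(0, 0.5)

# Online self-supervised loop: predict the next input, observe it, and update
# the model in proportion to the prediction error.
for _ in range(200):
    prediction = model["estimate"]
    observation = sense_environment()
    prediction_error = observation - prediction
    model["estimate"] += learning_rate * prediction_error

print(round(model["estimate"], 2))   # settles near 5.0, the regularity in the stream
```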

The problem is that self-supervised learning currently only really works for text (in LLMs), but not yet properly for things like video. Basically, the difference is that with text we have a relatively small number of discrete tokens with quite low redundancy, while for sensory inputs we have basically continuous data with a very large amount of redundancy. It makes no computational sense to predict probabilities of individual frames of video data the way an LLM "predicts" probabilities for the next text token. Currently LeCun tries to make SSL work for these types of sensory data with his "Joint Embedding Predictive Architecture" (JEPA), described in the paper above.

To the extent that creating a world model is handled by predictive coding, and if we call the ability to create accurate world models "epistemic rationality" or "intelligence", we seem to have a pretty good grasp of what we are talking about. (Even though we don't yet have a working implementation of predictive coding, like JEPA.)

But if we talk about a general theory of cognition/competence/instrumental rationality, the picture is much less clear. All we have is things like LeCun's very coarse model of cognition (pages 6ff in the paper above), or completely abstract models like AIXI. So there is a big gap in understanding what the cognition of a competent agent even looks like.