PNAS Paper from April 29th that makes strides to solve the Hard Problem of Consciousness by dissolving it:

A conceptual framework for consciousness

by Michael S. A. Graziano

Abstract:

This article argues that consciousness has a logically sound, explanatory framework, different from typical accounts that suffer from hidden mysticism. The article has three main parts. The first describes background principles concerning information processing in the brain, from which one can deduce a general, rational framework for explaining consciousness. The second part describes a specific theory that embodies those background principles, the Attention Schema Theory. In the past several years, a growing body of experimental evidence—behavioral evidence, brain imaging evidence, and computational modeling—has addressed aspects of the theory. The final part discusses the evolution of consciousness. By emphasizing the specific role of consciousness in cognition and behavior, the present approach leads to a proposed account of how consciousness may have evolved over millions of years, from fish to humans. The goal of this article is to present a comprehensive, overarching framework in which we can understand scientifically what consciousness is and what key adaptive roles it plays in brain function.

 

Key quotes:

Principle 1.

Information that comes out of a brain must have been in that brain. [...]

For example, if I believe, think, and claim that an apple is in front of me, then it is necessarily true that my brain contains information about that apple. Note, however, that an actual, physical apple is not necessary for me to think one is present. If no apple is present, I can still believe and insist that one is, although in that case I am evidently delusional or hallucinatory. In contrast, the information in my brain is necessary. Without that information, the belief, thought, and claim are impossible, no matter how many apples are actually present.

[...]

You believe you have consciousness because of information in your brain that depicts you as having it. [...] The existence of an actual feeling of consciousness inside you, associated with the color, is not necessary to explain your belief, certainty, and insistence that you have it. Instead, your belief and claim derive from information about conscious experience. If your brain did not have the information, then the belief and claim would be impossible, and you would not know what experience is, no matter how much conscious experience might or might not “really” be inside you.

[...]

Note that principle 1 does not deny the existence of conscious experience. It says that you believe, think, claim, insist, jump up and down, and swear that you have a conscious feeling inside you, because of specific information in your brain that builds up a picture of what the conscious feeling is. [...]

Principle 2.

The brain’s models are never accurate.

...and I think you can anticipate what follows in this section. 

The central proposal of AST [Attention Schema Theory] is that the brain constructs an attention schema. The proposal was not originally intended as an explanation of consciousness, but rather to account for the skillful endogenous control of attention that people and other primates routinely demonstrate. A fundamental principle of control engineering is that a controller benefits from a model of the item it controls. In parallel to a body schema, an attention schema could also be used to model the attention states of others, thus contributing to social cognition. Finally, an attention schema, if at least partly accessible by higher cognition and language, could contribute to common human intuitions, beliefs, and claims about the self. 

I recommend reading the full article.


Graziano writes:

A fundamental principle of control engineering is that a controller benefits from a model of the item it controls.

This claim always raises my hackles. I have seen no reason to think it is true and plenty of reason to think it is not. Graziano cites three papers in support. One is a paper by Conant and Ashby from 1970 that I have remarked on before: I cannot make any sense of their concepts and notation when I try to work through their exposition with an example of a control system. The second is a similarly old paper (1976), which at least has the virtue of not being Conant and Ashby. I am currently going down the rabbit hole of the arguments there and in the papers it references. The third is a more recent (2004) textbook on model-based control, which, as far as I can make out, contains no claim that a model is an essential part of a control system, only that it is a possible part of a control system, with which I have no quarrel. Googling "internal model principle" (for such it is called) turns up only those old references and mentions in more recent undergraduate control theory lecture notes, but nothing more substantial.

If there is anyone here with a professional knowledge of control engineering (something I cannot claim myself), I would love to talk with them about this internal model principle that was in vogue decades ago but which seems to have left only fossilised traces today.

(Stumbled across this old thread, let me know if you’ve learned anything since then.)

(I do not have professional knowledge of control engineering.)

You cited the claim “a controller benefits from a model of the item it controls”, and then you wrote “The third reference…contains no claim that a model is an essential part of a control system”. Those are different, right?

For my part, I don’t think it’s essential. I do think it’s helpful. Incidentally, I note that you can find other places where Graziano has made the stronger claim that it’s essential, and when he does make that claim, I think Graziano is wrong.

Why do I think it’s helpful? Well, if you have a generative model, you can do MPC.

Then maybe you’ll respond: Fine, if I don’t have a generative model, I’ll just do something else instead, it’s not like MPC is the only game in town.

But a nice thing about MPC is that you can update a generative model from self-supervised (predictive) learning (unlike a policy). I.e., when you make a wrong prediction, you get high-dimensional data about what you should have predicted instead, and thus how to improve the model. (You get a full error gradient “for free” with each query.) And you can use off-policy observations to update the model. And you can have a ridiculously complicated open-ended space for what the generative model might look like, and still converge on a good model fast because of the rich data from self-supervised learning.

Those kinds of considerations make me think that putting a generative model inside the controller is at least plausibly helpful.
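To make the "generative model + MPC" picture concrete, here is a minimal sketch (everything here is hypothetical and toy-scale, not anyone's actual proposal): a one-step predictive model is learned from observed transitions by least squares, exactly the self-supervised setup described above where every transition is its own labelled training example, and the controller then rolls the learned model forward over candidate action sequences and applies the first action of the best rollout.

```python
import itertools, random

random.seed(0)

def plant(x, u):
    """True (hidden) dynamics: the controller never sees this formula."""
    return 0.9 * x + 0.5 * u

# --- 1. Self-supervised model learning: each observed transition (x, u) -> x'
#        is a labelled training example, an error signal "for free".
data = [(x, u, plant(x, u)) for x, u in
        ((random.uniform(-5, 5), random.uniform(-1, 1)) for _ in range(200))]

# Fit x' ~ a*x + b*u by least squares (normal equations, two parameters).
Sxx = sum(x * x for x, u, y in data); Sxu = sum(x * u for x, u, y in data)
Suu = sum(u * u for x, u, y in data)
Sxy = sum(x * y for x, u, y in data); Suy = sum(u * y for x, u, y in data)
det = Sxx * Suu - Sxu ** 2
a = (Sxy * Suu - Suy * Sxu) / det
b = (Suy * Sxx - Sxy * Sxu) / det

def model(x, u):
    """The learned generative model: predicts the next state."""
    return a * x + b * u

# --- 2. MPC: roll the learned model forward over candidate action sequences
#        and pick the rollout with the smallest squared deviation from target.
def mpc_action(x, target, horizon=3, choices=(-1.0, 0.0, 1.0)):
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(choices, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            xi = model(xi, u)
            cost += (xi - target) ** 2
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]  # receding horizon: apply only the first action

x = 4.0
for _ in range(20):
    x = plant(x, mpc_action(x, target=1.0))
print(round(x, 2))  # settles into a band around the target of 1.0
```

With only three discrete heater-like settings the state hovers in a band around the target rather than sitting exactly on it, but the point is structural: the controller's competence lives in `model`, which was learned purely from prediction errors.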

You cited the claim “a controller benefits from a model of the item it controls”, and then you wrote “The third reference…contains no claim that a model is an essential part of a control system”. Those are different, right?

The cited claim was additionally that benefitting from a model is "a fundamental principle of control engineering", and Graziano places no limitation on what sort of controller he was referring to. I don't think there's a substantial gap between that and my "essential".

Model the physics of how a room warms and cools, and that may tell you where the best place to site the thermostat is, and how powerful a heat source you need, but I do not know any way in which the thermostat might control better by itself containing any model.

I remarked on a problem with learning a model of the plant being controlled in another comment: because the controller is controlling the plant, the full dynamics of the plant alone cannot be observed. I don't know the current state of theory and practice on this issue.

So, yes, you can design a controller to contain a model, but my claim is that it is not so fundamental an idea.

Graziano (as far as appears from the extract, and I do not feel motivated to consult the full paper) uses it to justify the idea that we model our own minds and other people's, something which seems clear from our own experience without depending on control theory.

I’m definitely not defending Graziano, as mentioned.

I do not know any way in which the thermostat might control better by itself containing any model

Let’s say our thermostat had a giant supercomputer cluster and a 1-frame-per-minute camera inside it.

We use self-supervised learning to learn a mapping:

(temperature history (including now), heater setting history (including now), video history (including now)) ↦ (next temperature, next video frame).

This mapping is our generative model, right? And it could grow very sophisticated. Like it could learn that when cocktail glasses appear in the camera frame, then a party is going to start soon, and the people are going to heat up the room in the near future, so we should keep the room on the cooler side right now to compensate.

Then the thermostat can do MPC, i.e., run through lots of future probabilistic rollouts of the next hour with different possible heater settings, and find the rollout where the temperature is most steady (plus some randomness, as discussed next):

because the controller is controlling the plant, the full dynamics of the plant alone cannot be observed

That’s just explore-versus-exploit, right? You definitely don’t want to always exactly follow the trajectory that is predicted to be optimal. You want to occasionally do other things (a.k.a. explore) to make sure your model is actually correct. I guess some kind of multi-armed bandit algorithm thing?
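The simplest version of that explore/exploit fix is epsilon-greedy (a sketch; the function and names are hypothetical): mostly follow the model-optimal action, but occasionally pick a random one so the model keeps seeing off-policy data.

```python
import random

def choose_action(model_optimal_action, action_space, epsilon=0.05):
    """Mostly exploit the MPC recommendation; explore with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(action_space)  # explore: an off-policy data point
    return model_optimal_action             # exploit: follow the best rollout

random.seed(1)
actions = [choose_action(0.0, [-1.0, 0.0, 1.0]) for _ in range(1000)]
explored = sum(1 for chosen in actions if chosen != 0.0)
print(explored)  # expected count is roughly 0.05 * (2/3) * 1000, i.e. ~33
```

Multi-armed bandit algorithms (UCB and the like) are more principled ways to schedule that exploration, but the epsilon-greedy rule already captures the idea: never follow the predicted-optimal trajectory with probability one.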

Let’s say our thermostat had a giant supercomputer cluster and a 1-frame-per-minute camera inside it.

This sounds like a product of Sirius Cybernetics Corporation. "It is very easy to be blinded to the essential uselessness of them by the sense of achievement you get from getting them to work at all."

All you need is a bimetallic strip and a pair of contacts to sense "too high" and "too low".

Like it could learn that when cocktail glasses appear in the camera frame, then a party is going to start soon, and the people are going to heat up the room in the near future, so we should keep the room on the cooler side right now to compensate.

In other words, control worse now, in order to...what?

In other words, control worse now, in order to...what?

Suppose the loss is mean-square deviation from the set point. Suppose there’s going to be a giant uncontrollable exogenous heat source soon (crowded party), and suppose there is no cooling system (the thermostat is hooked up to a heater but there is no AC).

Then we’re expecting a huge contribution to the loss function from an upcoming positive temperature deviation. And there’s nothing much the system can do about it once the party is going, other than (obviously) not turning on the heat and making it even worse.

But supposing the system knows this is going to happen, it can keep the room a bit too cool before the party starts. That also incurs a loss, of course. But the way mean-square-loss works is that we come out ahead on average.

Like, if the deviation is 0° now and then +10° midway through the party, that’s higher-loss than -2° now and +8° midway through the party, again assuming loss = mean-square-deviation. 0²+10² > 2²+8², right?
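That arithmetic, checked in a few lines (the two trajectories from the example above, averaged into a mean-square loss):

```python
# Mean-square loss of two temperature-deviation trajectories (degrees):
passive = [0, 10]   # hold the set point now, overshoot +10 deg at the party
hedged  = [-2, 8]   # pre-cool by 2 deg now, overshoot only +8 deg at the party

loss = lambda devs: sum(d * d for d in devs) / len(devs)
print(loss(passive), loss(hedged))  # 50.0 34.0
```

Squaring penalizes one large deviation more than two moderate ones, so trading a small guaranteed error now for a smaller error later comes out ahead.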

This sounds like a product of Sirius Cybernetics Corporation. "It is very easy to be blinded to the essential uselessness of them by the sense of achievement you get from getting them to work at all."

All you need is a bimetallic strip and a pair of contacts to sense "too high" and "too low".

Well jeez, I’m not proposing that we actually do this! I thought the “giant supercomputer cluster” was a dead giveaway.

If you want a realistic example, I do think the brain uses generative modeling / MPC as part of its homeostatic / allostatic control systems (and motor control and so on). I think there are good reasons that the brain does it that way, and that alternative model-free designs would not work as well (although they would work more than zero).

I have studied control engineering, and maybe "model" is too strong, or not exactly what you have in mind. Control systems need a mathematical description of the controlled process: its behavior in reaction to changing parameters. This description is traditionally called the transfer function, but it is effectively a model of the process. In simple cases, it is the time-independent response to a change in control parameters: for example, how much and how quickly the temperature changes when you turn up the heating. In more advanced Model predictive control it is, well, more complicated.
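A minimal sketch of that simple case (all numbers hypothetical): a first-order plant whose "transfer function" boils down to two numbers, a steady-state gain (how much the temperature changes) and a time constant (how quickly).

```python
import math

# First-order plant: dT/dt = (K*u - T) / tau
K = 20.0     # steady-state gain: degrees C per unit of heater setting
tau = 300.0  # time constant in seconds: "how quickly" temperature responds

def step_response(t, u=1.0):
    """Analytic response to turning the heating up to u at t=0, with T(0)=0."""
    return K * u * (1.0 - math.exp(-t / tau))

print(round(step_response(tau), 2))      # 12.64 -> ~63% of K after one tau
print(round(step_response(5 * tau), 2))  # 19.87 -> essentially settled at K
```

Those two numbers are the whole "model" in the simple case; model predictive control replaces them with something far richer, but the role is the same.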

I know about transfer functions and so forth. The transfer function is the designer's model of the process to be controlled. But the control system that they design need not have any such model. The room thermostat knows nothing but the current temperature at its sensor and the reference temperature set by the user. The designer puts the sensor in a suitable place, and chooses a heat source powerful enough to satisfy the demands made of it. After that, the system does what it was designed to do, with no knowledge of that design. Exactly the same system could be installed in many different rooms, and give satisfactory performance in each of them.

With the diagrams in the papers that Graziano references by Conant and Ashby, and Wonham, I see no place in them for all of the unmodelled disturbances, e.g. external temperature and number of people in the room for the room thermostat. These are assumed to be known perfectly, and the control system is assumed to control perfectly. It is not surprising that with everything so exactly matched and moving in lockstep, you can find one part whose behaviour maps exactly to another part. But not only are these assumptions never true, they are never even approximately true. The point of a control system is to insulate the controlled variables from disturbances about which nothing is known but a few general characteristics, such as typical magnitude and rate of change.

Most of the confusion I have read about the Good Regulator theorem seems to be about what it means to be a good model, which is related to the map/territory distinction. What does it mean to have a map/model?

For the interested here are some missing links:

The original paper by Conant and Ashby: Every good regulator of a system must be a model of that system 

The theorem is also discussed on LW: Fixing The Good Regulator Theorem

I also found these Notes on The Good Regulator Theorem insightful.

Exactly the same system could be installed in many different rooms, and give satisfactory performance in each of them.

This is the case only in the simplest situations. Room heating has straightforward dynamics (heat content is slow-moving and smooth), so one thermostat design will work well in many rooms. But with less well-behaved plants, depending on the type of controller, you will get a permanent offset (steady-state error), oscillations, or worse.

If you were a thermostat and had a model of how people in the room affect temperature (and had a way to figure out how many there are), you could control the room temperature much more effectively (e.g., bringing it up to the expected temperature very quickly).  

Again, the point at issue is not whether you can use models in the design of a controller, but whether a controller necessarily benefits from itself containing a model, which is what Graziano claims, relying on those musty old references.

I believe the usefulness of models (in the controller, not the designer) is greatly overestimated. The room thermostat will not benefit from any such model. When it turns the heating on, it's already raising the temperature as fast as it can. In its steady state, the temperature remains close to the reference temperature no matter what, oscillating up and down in the hysteresis zone. Half a dozen people coming in will not warm the room. The weather turning cold will not chill the room. And this without the thermostat having any model of what is happening, only a rule of what to do given the sensed and reference temperatures. As long as the heat source (or cold source, in warmer climates) has the necessary capacity, the performance is effectively perfect.
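The rule being described is bang-bang control with hysteresis, and it fits in a few lines; a sketch (the room dynamics and all numbers are hypothetical):

```python
def thermostat_step(temp, ref, heating, band=0.5):
    """Bang-bang rule: no model, only sensed and reference temperature."""
    if temp < ref - band:
        return True    # too cold: heat on
    if temp > ref + band:
        return False   # too warm: heat off
    return heating     # inside the hysteresis band: keep doing what we were

# Crude room: leaks heat toward 5 deg outside; heater adds 1.5 deg per step.
temp, heating, history = 12.0, False, []
for _ in range(100):
    heating = thermostat_step(temp, ref=20.0, heating=heating)
    temp += (1.5 if heating else 0.0) + 0.05 * (5.0 - temp)
    history.append(temp)

# After the initial warm-up, temperature oscillates near the reference.
print(round(min(history[50:]), 2), round(max(history[50:]), 2))
```

Note that the rule itself contains nothing about the room: change the leak rate or the heater power in the simulated room and the same four-line rule still settles into an oscillation around the reference, which is the robustness being claimed.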

I think you refer to Robust Control, where a controller is designed to handle a wide range of parameters. But it still needs to have (correspond to) a model of regulating temperature. You can't use the same controller to control a balancing pole (or a plane's flaps). Even Adaptive Control can't do that.

But even for your thermostat: the performance may be perfect in the limit, given infinite time, but maybe you want to get the heat up quickly, and with your assumption of sufficient capacity, that should be possible, right? It turns out this will not work unless the thermostat has a model good enough not to break down at shorter timescales.

You can find some failure modes here

The room thermostat is plenty robust, but owes nothing to Robust Control. Or to put that differently, Robust Control means "control that works".

But it still needs to have (correspond to) a model of regulating temperature.

The designer needs that, but the controller does not.

You can't use the same controller to control a balancing pole (or a plane's flaps).

The designer considers the dynamics of the pole and designs a controller for it. The controller need not have any model. Here's a simple example. The inverted pendulum controller there has a certain architecture with 4 parameters chosen to suitably place the poles of the transfer function. It stretches the concept of "model" to call those parameters a model of the inverted pendulum. For the walking robot example, I didn't even do that calculation, just picked parameters from physical intuition. It did not take much trial and error to get it to work.
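To show the flavour of such a controller, here is a reduced version for a pendulum without a cart, so two gains instead of four (all numbers hypothetical, chosen by the same pole-placement reasoning): the control law is just u = -k1·θ - k2·θ̇, and nothing in it looks like a model of the pendulum.

```python
import math

g_over_l = 9.8        # pendulum dynamics: theta'' = (g/l)*sin(theta) + u
k1, k2 = 20.0, 5.0    # two gains making the linearised poles stable
                      # (s^2 + k2*s + (k1 - g/l) = 0, roots in the left half-plane)

def control(theta, theta_dot):
    """State feedback: a fixed rule, not a model of the pendulum."""
    return -k1 * theta - k2 * theta_dot

theta, theta_dot, dt = 0.2, 0.0, 0.001   # start 0.2 rad off vertical
for _ in range(5000):                    # simulate 5 seconds
    u = control(theta, theta_dot)
    theta_dot += (g_over_l * math.sin(theta) + u) * dt
    theta += theta_dot * dt

print(abs(theta) < 1e-3)  # True: the pendulum has been balanced upright
```

The designer's knowledge of the dynamics went into choosing k1 and k2; the running controller is only a weighted sum of its sensor readings, which is the point at issue.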

You can find some failure modes here

Very interesting paper! Thank you for that!

I found one of the flaws he describes particularly striking, the empirically observed "bursting" phenomenon, whereby under adaptive control the plant may occasionally go into unstable oscillations for a short while. This happens because you cannot observe the full space of a plant's behaviour while it is under control, only some subspace of it. That is what a control loop does to a plant: it removes some of its degrees of freedom. (For example, the room thermostat removes all degrees of freedom from the room's temperature.) The adaptive part is trying to learn a model of the plant's behaviour while the plant is being controlled. But if the number of degrees of freedom while under control is less than the number of parameters the adaptive part is estimating, then some degrees of freedom of the parameter space are unobservable. Those degrees of freedom cannot be learned, and are free to drift arbitrarily. Eventually they drift so far that control fails, the plant exhibits more degrees of freedom, and the adaptive part manages to learn its parameters better. Control is restored, until the next time.

That phenomenon happens for such fundamental reasons that I would expect there to even be theorems about it, but I'm not familiar enough with the field to know. The behaviours of the plant under control and not under control are completely different. It is difficult to learn a model of the latter while only having access to the former. One might even say "you can't tell what the plant is doing by watching what it's doing".
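That unobservability has a simple numerical face: with the plant held tightly under control, the input barely varies, and a least-squares fit of even a one-parameter plant gain becomes wildly uncertain. A sketch (toy static plant, all numbers hypothetical):

```python
import random

random.seed(0)

def plant(u):
    """Toy static plant: y = 2*u + 0.5, plus a little sensor noise."""
    return 2.0 * u + 0.5 + random.gauss(0.0, 0.01)

def fit_gain_and_offset(us, ys):
    """Least-squares fit of y ~ a*u + b; returns (a_hat, std. error of a_hat)."""
    n = len(us)
    mu, my = sum(us) / n, sum(ys) / n
    sxx = sum((u - mu) ** 2 for u in us)
    a = sum((u - mu) * (y - my) for u, y in zip(us, ys)) / sxx
    b = my - a * mu
    resid_var = sum((y - (a * u + b)) ** 2 for u, y in zip(us, ys)) / (n - 2)
    return a, (resid_var / sxx) ** 0.5

# Under tight control the loop holds u almost constant: almost no excitation.
us_controlled = [1.0 + random.gauss(0.0, 1e-6) for _ in range(200)]
# Excited: u varied deliberately (a "burst", or deliberate exploration).
us_excited = [random.uniform(0.0, 2.0) for _ in range(200)]

_, se_controlled = fit_gain_and_offset(
    us_controlled, [plant(u) for u in us_controlled])
a_exc, se_excited = fit_gain_and_offset(
    us_excited, [plant(u) for u in us_excited])

print(se_controlled / se_excited > 1000)  # True: gain unidentifiable in-loop
print(abs(a_exc - 2.0) < 0.01)            # True: excitation recovers it
```

The standard error of the gain estimate scales inversely with the spread of the input, so the better the loop regulates, the less the data say about the plant; exactly the trade-off behind bursting.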

I guess we talk past each other when we use the word "model" here somehow but at least we seem to agree on what happens and why in these examples. 

What do you think will happen as the number of degrees of freedom goes up significantly?

What do you think will happen as the number of degrees of freedom goes up significantly?

I don't think any of the points at issue here will be affected. The plant and the controller may both be more complicated, but the issue of whether a given controller has a model or not is unchanged.

What, then, is a model? This is where people broaden the scope of the word far beyond its normal use, apparently in order to maintain the claim that a good controller must have a "model". But changing the meaning of a word changes the meaning of every sentence that uses it. It does not change the things that the sentences are talking about, nor the claims made when using the original sense of the word.

That original sense, the ordinary sense of the word whenever anyone uses it, in the absence of any urge to find a model whether it exists or not, is illustrated by the following examples.

In a typical textbook on Model-Based Control, the block diagrams of the controllers it discusses include a block explicitly labelled as a model of the plant. This block calculates a function from inputs to outputs, in a way that is close to the input-output behaviour of the plant being modelled. In non-model-based control there is no such component.

That is a special case of a mathematical model: a set of variables and equations relating them that describe the behaviour of a physical system.

A physical model is the same thing done with physics instead of mathematics, such as a scaled-down model of an aircraft wing in a wind tunnel.

In biology, a model organism is an organism that is representative of some larger class, to which class experimental results may be expected to extrapolate. A practical model must also be easy to work with, hence the ubiquity of Drosophila, Arabidopsis, and mice.

Even a model on the catwalk is the same sort of thing. The model displays clothes as they are intended to appear when worn by potential customers. Protests some years back about a tendency for female models to look more like waifish teenage boys make the point: those models were not very good models, in the sense I'm talking about.

These are all examples of a single concept. Here are some anti-examples. A keycard is not a model of the set of doors that it opens. A password is not a model of the account it gives access to. A table is not a model of whatever might be placed on it. An eye is not a model of the photons it responds to.

Perhaps some concise formula can be given that draws the line in the right place, but I do not have one to hand. I suppose that categories and adjoint functors might be a part of it.

I have found your old post Without models. You work from a very clear understanding of what a model is, and with that definition I agree: a thermostat doesn't have one. 

I think people may mean two things when they talk about a "model":

  1. An abstract representation of something that some entity can reason about. You mention both physical and software structures that are embedded in the larger system and are operated on (interpreted, evaluated, measured) and influence the larger system (to control it).
  2. A part of the system that represents future states. Mathematically speaking, a factorized part of the system's state space correlates more with future states of the system than with its current states (over some time intervals of interest).  

These overlap. Think of an explicit model component that is fed input from the environment (the controlled process) and outputs predicted future states. Its outputs will correlate highly with future states of the environment.

More examples:

  • A model of the Earth, i.e., a globe, is a model in sense 1 but not in sense 2, because it is an abstraction of the real Earth and you can reason about it, but no part of it corresponds to the future state of the Earth.
  • A mathematical model in the head of an engineer is a model in sense 1 but not in sense 2 unless it includes the application of the model to imaginary inputs from the real world. In that latter case, to the degree the outputs correspond to actual future states, it is also a model in sense 2.
  • A feedforward circuit in a controller that calculates the effect of a disturbance on the output of a process is a model in sense 2 but not in sense 1 because it is not a separate entity that you can reason about, but its output still correlates with the future state of the process.  
  • A catwalk model might be a model in sense 2 if its look correlates with how other people will look in the future.
  • A thermostat is not a model in sense 1, but you will find that its state space has a component that corresponds to some future states (at least if it is not a simple P-controller).

I scanned through the article in search of a new insight, and came up empty. What did you get from it?

Few people have spelled out a solution to the hard problem of consciousness so clearly. Aren't you happy to see this becoming common knowledge?

I believe zero people have solved it, and the number remains zero after Graziano's paper. The hard problem is why there is such a thing as experience, and how there could possibly be any such thing, when everything else we know seems to leave no room for it; yet we have it. Graziano substitutes the question of why we think we have it; but that thinking is itself an experience left unexplained. It does not so much dissolve the problem as ignore it.

I failed to find anything resembling a solution, or a dissolution, that is any better or clearer than Scott Aaronson's post on the "pretty hard problem of consciousness." I may have missed something; that's why I asked.

There are significant differences between IIT and AST (and Aaronson doesn't go much beyond refuting IIT).

  • IIT and AST both start (or argue) from first principles about information processing, but differently:
    • IIT is self-contained and refers to the amount of information processing.
    • AST relates the information processing directly to processes of attention and to the information in reports about consciousness.
  • AST has neuropsychological plausibility, and its processing can be located in specific brain regions. IIT offers almost nothing comparable. 
  • AST makes testable predictions about consciousness. IIT can only be tested on the plausibility of its account of information processing (though even that is better than most other theories of consciousness can offer).