I found this theory pretty interesting, and it reminded me of Gary Drescher's explanation of consciousness in Good and Real:

How the light gets out

Consciousness is the ‘hard problem’, the mystery that confounds scientists and philosophers. Has a new theory cracked it?

[...]

Attention requires control. In the modern study of robotics there is something called control theory, and it teaches us that, if a machine such as a brain is to control something, it helps to have an internal model of that thing. Think of a military general with his model armies arrayed on a map: they provide a simple but useful representation — not always perfectly accurate, but close enough to help formulate strategy. Likewise, to control its own state of attention, the brain needs a constantly updated simulation or model of that state. Like the general’s toy armies, the model will be schematic and short on detail. The brain will attribute a property to itself and that property will be a simplified proxy for attention. It won’t be precisely accurate, but it will convey useful information. What exactly is that property? When it is paying attention to thing X, we know that the brain usually attributes an experience of X to itself — the property of being conscious, or aware, of something. Why? Because that attribution helps to keep track of the ever-changing focus of attention.

I call this the ‘attention schema theory’. It has a very simple idea at its heart: that consciousness is a schematic model of one’s state of attention. Early in evolution, perhaps hundreds of millions of years ago, brains evolved a specific set of computations to construct that model. At that point, ‘I am aware of X’ entered their repertoire of possible computations.

- Princeton neuroscientist Michael Graziano, writing in Aeon Magazine.
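To make the control-theory analogy concrete, here is a toy sketch, entirely my own invention rather than anything from the article: a schematic "attention model" that stores only a simplified summary of what attention is doing, and is consulted to steer attention back to an intended target. The class names, the drift probability, and the targets are all made up for illustration.

```python
import random

class Attention:
    """A crude stand-in for an attention process: it tracks one target
    among several and occasionally drifts to a distractor."""

    def __init__(self, targets):
        self.targets = targets
        self.focus = targets[0]

    def drift(self):
        # Attention sometimes wanders to a random target.
        if random.random() < 0.3:
            self.focus = random.choice(self.targets)


class AttentionSchema:
    """A schematic model of the attention process above. Like the
    general's toy armies, it stores only a simplified proxy
    ("I am aware of X"), not the underlying mechanics, and it may
    lag slightly behind the real state."""

    def __init__(self):
        self.aware_of = None

    def update(self, attention):
        self.aware_of = attention.focus  # simplified, possibly stale summary

    def report(self):
        return f"I am aware of {self.aware_of}"


def control_step(attention, schema, intended_target):
    """Consult the schema (not the raw process) and, if attention has
    wandered, redirect it back to the intended target."""
    schema.update(attention)
    if schema.aware_of != intended_target:
        attention.focus = intended_target  # corrective control action


if __name__ == "__main__":
    attn = Attention(["lion", "rustling grass", "waterhole"])
    schema = AttentionSchema()
    for _ in range(5):
        attn.drift()
        control_step(attn, schema, intended_target="lion")
        # The schema's report can lag one step behind the corrected state,
        # which is the "not always perfectly accurate" part of the analogy.
        print(f"{schema.report()} (actual focus: {attn.focus})")
```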

Comments

When it is paying attention to thing X, we know that the brain usually attributes an experience of X to itself

This is either selection fallacy or tautology. How do we know what the brain is paying attention to outside of consciousness? Or is non-conscious attention ruled out by definition?

In fact, my brain pays attention to a great many things that I do not experience. I know this because there are specific examples. One is motor control, which mostly happens inside the brain but outside of consciousness. Touch your finger to your nose. You can do that, but how did you do it?

In the modern study of robotics there is something called control theory, and it teaches us that, if a machine such as a brain is to control something, it helps to have an internal model of that thing.

...

It has a very simple idea at its heart: that consciousness is a schematic model of one’s state of attention.

According to this theory, every model-based controller is conscious. So we've been building artificial consciousnesses for forty years. They even talk to us through their control panels. The Swiss have officially put into law the concept of the dignity of plants; should we add the dignity of machines?
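For concreteness, here is the sort of thing I mean by a model-based controller: a minimal sketch (every name and constant invented for illustration) of a heater that chooses its output by consulting a simplified internal model of the room it is heating. Nothing in it looks any more conscious than a thermostat.

```python
# A minimal sketch of an ordinary model-based controller: a heater that
# keeps a room near a setpoint by consulting an internal (and deliberately
# slightly wrong) model of the room's thermal response. All names and
# constants here are invented for illustration.

def plant(temp, power, dt=1.0):
    """The real room: loses heat to a 15 C exterior, gains heat from the heater."""
    return temp + dt * (-0.10 * (temp - 15.0) + 0.5 * power)

def internal_model(temp, power, dt=1.0):
    """The controller's simplified model of the room (note the wrong loss rate)."""
    return temp + dt * (-0.08 * (temp - 15.0) + 0.5 * power)

def choose_power(temp, setpoint, candidates=(0.0, 0.5, 1.0, 1.5, 2.0)):
    """Pick the heater power whose predicted outcome lands closest to the setpoint."""
    return min(candidates, key=lambda p: abs(internal_model(temp, p) - setpoint))

if __name__ == "__main__":
    temp = 18.0
    for step in range(10):
        power = choose_power(temp, setpoint=21.0)
        temp = plant(temp, power)
        print(f"step {step}: power={power:.1f}, temp={temp:.2f}")
```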

If theories that the cerebellum uses model-based control methods are correct, then it follows from Graziano's view that cerebellums are also conscious. This, however, is not our experience, and experience is what he is supposedly trying to explain.

Now, Graziano is, as usual with explanations of consciousness, not actually trying to explain conscious experience. He starts out claiming to address that, but then makes the standard bait and switch:

Consciousness isn’t a non-physical feeling that emerges. Instead, dedicated systems in the brain compute information. Cognitive machinery can access that information, formulate it as speech, and then report it.

He is explaining why we talk about having conscious experience, while ignoring conscious experience itself.


I think his theory is that the kinds of activity the brain can carry on without "attention modeling", and without its consequent "conscious experience"/awareness, are what most other animals haven't managed to progress beyond. The "attention model" (of the parietal junction etc.) is what has enabled the massively sustained attention span of humans compared to other animals, and other advanced kinds of cognitive function, a step up in complexity, rather like the M button on a calculator. It has enabled the human brain to optimise attention processes, to plan and organise attention, to avoid distraction, and, even more importantly compared to other animals, to pay attention to things which are not in front of us, not present in the here and now: to conceive of and focus on imaginary things, which are elsewhere or don't exist/haven't been built yet.

When it is paying attention to thing X, we know that the brain usually attributes an experience of X to itself

This is either selection fallacy or tautology. How do we know what the brain is paying attention to outside of consciousness? Or is non-conscious attention ruled out by definition?

I think "attributes an experience of X to itself" is being used to mean "is conscious of experiencing." Stated this way, the role of attention doesn't seem to be either tautological or necessarily a product of selection fallacy. As you pointed out, brains do pay attention to things that are not consciously experienced, so I think this is why the original said 'usually' rather than 'always'.

He is explaining why we talk about having conscious experience, while ignoring conscious experience itself.

Do you not agree that any explanation that is sufficient to explain why we talk about consciousness necessarily entails an explanation of consciousness itself? Otherwise, it seems you'd have to believe the cause of us talking about conscious experience is something entirely unrelated to our actual conscious experience.

Do you not agree that any explanation that is sufficient to explain why we talk about consciousness necessarily entails an explanation of consciousness itself?

Sort of -- only on the rather trivial grounds that if talk of conscious experience is caused by conscious experience, then an explanation of the talk must explain how it is caused by conscious experience, and for the explanation to go beyond the assertion that it is so caused, it must contain some sort of explanation of consciousness.

But Graziano's explanation is not of this nature. He explains talk of conscious experience by the existence of models within the brain. One cannot argue that because this is an explanation of the talk, it must be an explanation of consciousness; it may just be a wrong explanation of the talk.

Fair enough.

This is the sort of theory that sounds really interesting, but most likely amounts to nothing more than a metaphor. If anyone disagrees with this characterization, I would only ask: what exactly have we learned here, and how could it be applied to anything in the real world?

So we now know that consciousness is a matter of having control over one's internal mechanisms, and that this requires knowing the essentials of how those mechanisms behave in a given situation. But what is this "control"? The question has merely been pushed back, with the metaphor as no more than byplay, until we realize that nothing has been answered and "control" is the new word to be analyzed. We know nothing we didn't know before.

The trouble is that, though this captures most of what we mean, there's still plenty of uncaptured baggage that people drag in with the word "consciousness": intelligence, planning, human-comprehensible goals, a human-like pain/want/pleasure system, and probably plenty more I'm missing.