

Followup to: Neural Correlates of Conscious Access; Related to: How an Algorithm Feels From the Inside, Dissolving the Question

Global Workspace Theory and its associated Theater Metaphor are empirically plausible, but why should they result in consciousness? Why should globally available information being processed by separate cognitive modules make us talk about being conscious?

Sure, that's how brains see stuff, but why would that make us think that it's happening to anyone? What in the world corresponds to a self?

So far, I've only encountered two threads of thought that try to approach this problem: the Social Cognitive Interface of Kurzban, and the Self-Model theories like those of Metzinger and Damasio.

I’ll be talking about the latter, starting off with what self-models are, and a bit about how they’re constructed. Then I’ll say what a self-model theory is.

Humans as Information Processing Systems

Questions: What exactly is there for things to happen to? What can perceive things?

Well, bodies exist, and stuff can happen to them. So let's start there.

Humans have bodies which include information processing systems called brains. Our brains are causally entangled with the outside world, and are capable of mapping it. Sensory inputs are transformed into neural representations which can then be used in performing adaptive responses to the environment.

In addition to receiving sensory input from our eyes, ears, nose, tongue, skin, etc., we get sensory input about the pH level of our blood, various hormone concentrations, etc. We map not only things about the outside world, but also things about our own bodies. Our brain's models of our bodies also include things like limb position.

From the third person, brains are capable of representing the bodies that they're attached to. Humans are information processing systems which, in the process of interacting with the environment, maintain a representation of themselves used by the system for the purposes of the system.

Answers: We exist. We can perceive things. What we see as being our "self" is our brain's representation of ourselves. More generally, a "self" is a product of a system representing itself.

Note: I don't mean to assert that human self-modeling is accomplished by a single neurological system or module, but I do mean to say that there is a nonzero set of systems which, when taken together, can be elegantly described as being part of a self-model which presents information about a person to that person's brain.
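To make that a bit more concrete, here is a minimal sketch (my own illustration, not something from Metzinger) of an information processing system that maintains a representation of itself for its own use. The blood pH and limb position variables echo the examples above; everything else is made up for illustration.

```python
# Cartoon of an information processing system that maintains a
# representation of itself, for its own use. The blood pH and limb
# position variables echo the examples in the text; the rest is
# illustrative.

class Organism:
    def __init__(self):
        self.world_model = {}  # representations of the environment
        self.self_model = {}   # representations of the organism's own body

    def sense(self, external_inputs, internal_inputs):
        # External senses map the world; interoception maps the body.
        self.world_model.update(external_inputs)
        self.self_model.update(internal_inputs)

    def respond(self):
        # The self-model is used by the system, for the system:
        # e.g. low blood pH drives a corrective response.
        if self.self_model.get("blood_pH", 7.4) < 7.35:
            return "breathe faster"
        return "carry on"

organism = Organism()
organism.sense({"light_level": "bright"},
               {"blood_pH": 7.2, "left_arm_position": "raised"})
print(organism.respond())  # "breathe faster"
```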

Bodily Self Models

Human self-models seem to normally be based on sensory input, but can be separated from it. Your bodily self-model looks a lot like this:

[Image: sensory homunculus, courtesy of Wikipedia]
Phantom limb syndrome is a phenomenon where, after a limb is amputated, a person continues to feel it. Their self-model continues to include the limb, even though they no longer receive sensory input from it. Phantom limbs have also been reported by people who, due to congenital circumstances, never had those limbs. This suggests that, to some extent, self-models are based on neurological structures that humans are born with.

Freaky stuff happens when a body model and sensory inputs don't coincide. Apotemnophilia is a disorder where people want to amputate one of their otherwise healthy limbs, complaining that their body is "overcomplete", or that the limb is "intrusive". They also have very specific and consistent specifications for the amputation that they want, suggesting that the desire comes from a stable trait rather than, say, attention seeking. They don't just want to get an amputation, they want a particular amputation. Which sounds pretty strange.

This is distinct from somatoparaphrenia, where a patient denies that a limb is theirs but is fairly apathetic towards it. Somatoparaphrenia is caused by damage to both S1 and the superior parietal lobule, leading to a limb which isn't represented in the self-model and which they don't get sensory input from. Hence, it's not theirs and it's just sorta hanging out there, but it's not particularly distressing or creepy. Apotemnophilia can be described as lacking a limb in the self-model, but continuing to get input from it. Imagine if you felt a bunch of armness coming into your back.
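One toy way to keep these conditions straight is as different combinations of two independent facts about a limb: whether it's in the body model, and whether sensory input arrives from it. This is just a sketch of the distinctions drawn above, with made-up labels and descriptions:

```python
# Toy illustration of body-model / sensory-input mismatches.
# Each condition is a combination of two independent facts about a limb:
# is it represented in the self-model, and is sensory input arriving from it?

CONDITIONS = {
    "normal limb":       {"in_self_model": True,  "sensory_input": True},
    "phantom limb":      {"in_self_model": True,  "sensory_input": False},  # amputated, but still modeled
    "apotemnophilia":    {"in_self_model": False, "sensory_input": True},   # input from a limb the model excludes
    "somatoparaphrenia": {"in_self_model": False, "sensory_input": False},  # neither modeled nor felt
}

def describe(name, limb):
    if limb["in_self_model"] and limb["sensory_input"]:
        return f"{name}: feels like an ordinary part of the body"
    if limb["in_self_model"]:
        return f"{name}: still felt as present, even though no input arrives"
    if limb["sensory_input"]:
        return f"{name}: input arrives from something the model says isn't there -- experienced as intrusive"
    return f"{name}: neither modeled nor felt as mine -- disowned, but not particularly distressing"

for name, limb in CONDITIONS.items():
    print(describe(name, limb))
```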

In some sense, your brain also marks this model of the body as being you. I'll talk more about that in another article, but for now just notice that it's important: it's useful to know that our body is in fact ours, for planning purposes and so on.

Self Models and Global Availability

Anosognosia is a disorder where someone has a disability but is unable or unwilling to believe that they have it. They insist that they don't move their paralyzed arm because they don't want to, or that they can actually see while they're stumbling around bumping into things.

This is also naturally explained in terms of self-model theory. A blind person with anosognosia isn't able to see, and doesn't receive visual information, but they still represent themselves as seeing. So when you ask them about it, or when they try to plan, they assert that they can still see. When the brain damage leading to blindness and anosognosia occurs, they stop being able to see, but their self-model isn't updated to reflect that.

Blindsight is the reverse case, where someone is able to see but doesn't represent themselves as seeing.
In both cases, the person's inability to accurately represent their own properties interferes with those properties being referred to by other cognitive modules, such as those for speech or planning.
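As a cartoon of that point (my own sketch, with illustrative module and field names): other modules read the self-model rather than the underlying capability, so a stale or missing entry produces anosognosia-like or blindsight-like behavior.

```python
# Toy sketch: speech consults the self-model, not the underlying
# capability, so a self-model that disagrees with the capability
# produces anosognosia-like or blindsight-like behavior.
# (All names here are illustrative, not from the post.)

class Agent:
    def __init__(self, can_see, models_self_as_seeing):
        self.can_see = can_see                              # actual capability
        self.models_self_as_seeing = models_self_as_seeing  # self-model entry

    def speech_module(self):
        # Speech reads the self-model, not the retina.
        return "I can see." if self.models_self_as_seeing else "I can't see."

    def visual_module(self):
        # Low-level vision depends on the actual capability.
        return "processes the scene" if self.can_see else "gets nothing"

anosognosia = Agent(can_see=False, models_self_as_seeing=True)
blindsight = Agent(can_see=True, models_self_as_seeing=False)

print(anosognosia.speech_module(), "--", anosognosia.visual_module())  # claims sight, sees nothing
print(blindsight.speech_module(), "--", blindsight.visual_module())    # denies sight, still processes visually
```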

Self-Model Theories of Consciousness

Self-Model Theories hold that we're self aware because we're able to pay attention to ourselves in the same way that we pay attention to other things. You map yourself based on sensory inputs the same way that you map other things, and identify your model as being you.

We think that things are happening to someone because we're able to notice that things are happening to something.

That's true of lots of animals though. What makes humans more conscious?

Humans are better at incorporating further processing based on the self-model into the self-model. Animals form representations of and act in the environment, but humans can talk about their representations. Animals represent things, but they don't represent their representation. The lights are on, and somebody's home, but they don't know they're home.
Animals and humans (and some robots) all represent themselves, but humans are really good at representing other things -- like intentions -- and incorporating that into their self-model.
So umm... How does that work?
To be continued...
Notes:
The vast majority of this article is from Being No One.
Thanks again to John Salvatier for reading early versions of this post, as well as getting a few papers for me.
References:
Metzinger, T. (2003). Being No One: The Self-Model Theory of Subjectivity. MIT Press. Chapter 7.

Kurzban, R., & Aktipis, C. A. (2007). Modularity and the social mind: Are psychologists too self-ish? Personality and Social Psychology Review, 11(2), 131-149. doi:10.1177/1088868306294906

 

Ramachandran, V. S., Brang, D., McGeoch, P. D., & Rosar, W. (2009). Sexual and food preference in apotemnophilia and anorexia: interactions between “beliefs” and “needs” regulated by two-way connections between body image and limbic structures. Perception, 38(5), 775-777. doi:10.1068/p6350

Comments

That's true of lots of animals though. What makes humans more conscious?

I liked the article in general, but I experienced this line as sort of "coming out of left field".

It is not clear to me that humans as a class are categorically "more conscious" than animals if we drop the definitional assumption that consciousness is largely "what it feels like to be a human from the inside" and try to give it a more cybernetic basis in arrangements of computational modules or data flows.

It seems to me like an empirical question whether there are some animals that are "more conscious" than some humans, and it seems to me that very very little of the necessary science that would need to be done to answer the question has actually been published. I can imagine science being done after we have better cognitive neuroscience on the subject of consciousness and discovering the existence of a kind of animal (octopus? crow? bee hive? orca? dog?) that has "more consciousness" than many reasonably normal humans do.

In the meantime, when I talk with people about the cognitive neuroscience of consciousness, I notice over and over how people conflate the capacity for consciousness with something like "higher levels of brain-mediated efficacy" and also conflate the capacity for consciousness with something like "moral status as an agent with interests". The "animals don't count" content comes up over and over, and has come up over and over in philosophic musings for thousands of years, and I rarely see evidence-based justification for the claim.

Irene Pepperberg has a theory (which seems plausible to me) that humans use animals to such an extent that we have deeply engrained tendencies to see them as edible/trainable/killable rather than friend-able. It would actually be surprising if it were otherwise, considering the positive glee dogs and cats take from the torture and dismemberment of smaller animals. If Pepperberg is right, then there is an enormous amount of confabulated hooey in our culture basically "justifying the predation of animals by people who do not want to think of themselves as evil just because they love bacon". I'm not trying to take a position here, so much as pointing out an area of known confusion and dispute where the real answer is not obvious to many otherwise clear-thinking people.

If I saw a vegan arguing against animal consciousness or an omnivore arguing in favor of it, I would tend to suspect that they were arguing based on more detailed local knowledge of the actual neuroscience... rather than due to cached moral justifications for their personal dietary choices. Most people are omnivores and most people think animals are not conscious or morally important, so most people trigger my confabulation detecting heuristic in this respect... as you just did.

It is probably worth pointing out that I'm an omnivore and I'm not trying to start a big ole debate on vegetarianism. What I'm hoping to do is simply to encourage the separation of conclusions and data about animal minds that I've marked as "40% likely to be garbage", so that it doesn't contaminate very pragmatically worthwhile theoretical thinking about the nature of consciousness, morality, cognitive efficacy, and self-reflective or inter-subjective assessments of agency. I think you can say a lot that is very worthwhile in this area without ever mentioning animals. If you need to talk about animals then it is better to do so with care and citations, rather than casually in the concluding remarks.

I'm starting to think that not splitting up the word "consciousness" was a big mistake. I think that there are lots of very qualia-laden animals that it's very mean to hurt, but that said animals probably wouldn't talk about being conscious, for reasons other than the fact that they don't talk.

Conscious, but not self-aware. Humans do more things related to consciousness, which I'm going to talk about later.

I really like the term "qualia-laden" for picking out a specific property of an agent. Did you make that up?

My current guess is that with "human self-awareness" you're trying to talk about whatever it is that the mirror test aims to measure. My guess would be that in the biological world all mirror-test-passers are qualia-laden, and that human intelligence (and possible future machine intelligences) do (or will) involve both of these properties but also many other things as well (like symbolic communication, etc). Is mirror-test-passing a helpful term, or does the fact that magpies pass the mirror test mean that it's not what you're trying to talk about?

I think I read in Daniel Gilbert's book "Stumbling on Happiness" a very good approximation of what I always thought an animal "thinks" like:

Imagine you're reading a long text and your mind starts wandering and gets lost in shallow feelings like the warm weather or the sounds of the birds, while your eyes keep reading the letters. Suddenly you're at the end of the paragraph and you realize that you can't remember anything you just read... or did you actually read it at all? You look over it again and the words sure look familiar, but you simply can't say for sure if you actually read the text. You were in fact reading it, but you weren't consciously aware of the fact that you were reading it.

Being a dog is probably like letting your mind wander and never snapping out of that state: everything you do is as if it's on autopilot, without having to first pass the "semi-control" of a conscious observer/inhibitor/decision-maker who has a concept of a future that goes on beyond a few moments.

Not sure if that is really a fair caricature of an animal mind, but it's my current model of how an animal mind functions.

I'm an omnivore who considers animals probably conscious, but I also don't understand the problem (other than medical ones, like prions and such) with eating humans, so I might not count.

Some currently existing robots also have some representation of themselves, but they aren't conscious at all... I think it is true that the concept of a self-model has something to do with consciousness, but it is not the source of it. (By the way, there is not much that is recursive about the brain modeling the body.)

Animals represent things, but they don't represent their representation.

For me, this seems to be the key point... that conscious entities have representations of their thoughts. That we can perceive them just like tables and apples in front of us, and reason about them, allowing thoughts like "I know that the thing that I see is a table" (because "I see a thought in my brain saying <this is a table>").

Using this view, "conscious" just stands for "able to perceive its own thoughts as sensory input". The statement "we experience qualia" is a reasonable output for a process that has to organize inputs like ... This would also explain the fact that we tend to talk about qualia as something that physically exists but that we can never be sure others also have: they arrive through sensory pathways just like when we see an apple (so they look like parts of reality), but we get to see only our own...

Does this sound reasonable, by the way? (can't wait for the second part, especially if it is dealing with similar topics)

Some currently existing robots also have some representation of themselves, but they aren't conscious at all.

Not that I honestly think you're wrong here, but it's worth asking how you supposedly know this. If we know every line of a robot's source code, and that robot happens to be conscious, then it won't do anything unexpected as a result of being conscious: No "start talking about consciousness" behavior will acausally insert itself into the robot's program.

It's also not entirely clear how "representing their representation" is any different from what many modern computer tools do. My computer can tell me about how much memory it's using and what it's using it for, about how many instructions it's processing per second, print a stack trace of what it was doing when something went wrong, etc. It's even recursive: the diagnostic can inspect itself while it runs.
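For what it's worth, here is a minimal example of that kind of self-inspection using Python's standard library (my sketch, not something from this thread): the program reports on its own memory use and its own call stack, and the reporting function's frame shows up in the report.

```python
# A program reporting on its own state: a representation of its
# representation, in a very limited sense.
import traceback
import tracemalloc

tracemalloc.start()

def diagnostic_report():
    # The formatted stack includes the frame of diagnostic_report itself,
    # so the inspection "sees" its own activity.
    stack = traceback.format_stack()
    current, peak = tracemalloc.get_traced_memory()
    print(f"Traced memory: {current} bytes now, {peak} bytes at peak")
    print("Call stack at the moment of inspection:")
    print("".join(stack[-3:]))  # just the innermost frames

def do_some_work():
    data = [i * i for i in range(10_000)]
    diagnostic_report()
    return sum(data)

do_some_work()
```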

I don't think that my computer is conscious, but if I was going by your explanation alone I might be tempted to conclude that it is.

If we know every line of a robot's source code, and that robot happens to be conscious, then it won't do anything unexpected as a result of being conscious

What about splitting the concept "conscious" into more different ones? (As atucker also suggested.) I think what you mean is something like "qualia-laden", while my version could be called "strong version of consciousness" ( = "starts talking about being self-aware" etc). So I think both of us are right in a "tree falling in forest" sense.

You're also mostly right in the question about computers being conscious: of course, they mostly aren't... but this is what the theory also predicts: although there definitely are some recursive elements, as you described, most of the meta-information is not processed at the same level as the one it is collected about, so no "thoughts" representing other, similar "thoughts" appear. (That does not exclude a concept of a self-model: I think antivirus software definitely passes the mirror test by not looking for suspicious patterns in itself...)

I guess the reason for this is that computers (and software systems) are so complicated that they usually don't understand what they are doing. There might be higher-level systems that partially do (JIT compilers, for example), but this is still not a loop but some frozen, unrolled version of the consciousness loop the programmers had when writing the code... (Maybe that's why computers sometimes act quite intelligently, but this suddenly breaks down as they encounter situations the programmers haven't thought of.)

By the way, have we ever encountered any beings other than humans so close to being reflectively conscious as computers? (I usually have more intelligent communications with them than with dogs, for example...)

This is very similar to my current beliefs on the subject.

I was considering adding to that "Animals are conscious, but not self-aware", but that would mostly be using the word consciousness in a not-agreed-upon way, namely as the ability to feel or perceive, but not full-blown human-style consciousness.

That's called sentience, isn't it?

I think that's the most commonly accepted correct word for it, but I think it means enough things to enough people that at that point it's better to just talk about things directly.

I don't recognize myself at all in the implicit assumptions that everyone, even/especially those with related damage making them normal, seems to make. This does not invalidate your theory though, since I also don't have, and am confused by, the experience of consciousness you are trying to explain.

I do not feel any special connection to my body, or even brain parts, that is qualitatively different from any other kind of tool, and I can alter my self-model more or less at will. Even the usage of the word "I" here is pragmatic and not consistent under too much scrutiny. Currently I'm modelling lesswrong, including you the reader, as part of my mind.

[anonymous]

I wonder if I would have a claim against you for copyright infringement.

Does writing "X belongs to [name]" on a piece of paper miles from the referent, and never taking any action related to it, constitute theft?

My only real exposure to the recursive self-symbol idea has been through Hofstadter's I am a strange loop and perhaps Dennett's Consciousness explained. (Maybe one should refuse to put these in the same category as the research mentioned in this post though.)

The material on blindsight and anosognosia forced out a confusion in my understanding of the idea. You write that blindsight is "where someone is able to see but doesn't represent themselves as seeing." It is natural to ask, why don't they just update their self-model? So I guess that the self-model must be partially accessible to conscious reading/writing, and partially accessible only to subconscious reading/writing. Maybe only the former is characteristic of human-style consciousness. So the recursive arrow pointing from the self-symbol to itself needs to be unpacked into multiple arrows representing these different types of access.

They normally have a lesion in V1 preventing visual information from reaching either the ventral or the dorsal stream. Basically, this means that non-cortical visual systems still work, but nothing in the cortex gets to use the visual information.

If the consciously accessible sections of the self-model are based in the cortex (which I think they are), then damage to V1 should prevent self-model updates.

Though, Metzinger mentions a case where someone had anosognosia for a year, then suddenly realized that they were blind.

My main (possibly only) beef with Hofstadter is the idea that the self symbol refers to itself. I currently think that it kind of does that, but in a very roundabout non-infinitely looping way.

Basically, you have all these systems that make you do things, and inputs from the body. When the doing-stuff systems make you do something, your body changes, and your self-model updates. The updated self-model is used by other parts of your brain to do more stuff, but your self-model for the most part updates after the fact.
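A toy sketch of that "updates after the fact" loop (my own simplification; the arm position variable is just for illustration): the doing-stuff systems change the body first, and the self-model catches up from sensory input afterwards, so other systems briefly read a stale picture.

```python
# Sketch of an after-the-fact self-model: actions change the body first,
# and the self-model is refreshed from the body afterwards, so planning
# reads a slightly stale picture until the update happens.

body = {"arm_position": "down"}
self_model = {"arm_position": "down"}

def act(new_position):
    # "Doing-stuff" systems change the body directly...
    body["arm_position"] = new_position

def update_self_model():
    # ...and the self-model catches up from (simulated) sensory input.
    self_model["arm_position"] = body["arm_position"]

def plan():
    # Planning consults the self-model, not the body itself.
    return f"planning as if my arm is {self_model['arm_position']}"

act("raised")
print(plan())          # still "down": the model hasn't updated yet
update_self_model()
print(plan())          # now "raised": the model updated after the fact
```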

Though, Metzinger mentions a case where someone had anosognosia for a year, then suddenly realized that they were blind.

I feel like this should be the summary of a neural-koan. (Or maybe a LW koan - if there can be hacker koans why not LW koans?)

I think "Bayes koan" would be a more proper term. And yes, there should probably be some large post for making and collecting as many of these as possible.

Well, a lot depends on how precise I want my description to get. The conscious/subconscious distinction is a very high-level approximation; when I get down into the details it's not especially useful. There are a lot of more-or-less independent processes, some of which have self-monitoring functions (and can therefore be considered "part of the self-model") to varying degrees in varying contexts, with varying degrees of cognitive permeability and conscious control.

More generally, I find it often helps to think of my mind as a collection of individual agents with goals that are not quite aligned (and sometimes radically opposed). So, sure, the "self-symbol" "modifies itself" in lots of different ways, just like the U.S. Senate does, but there's nothing mysterious about that... it's what you expect from a system with lots of moving parts that interact with one another.

There seems to be something wrong with some of the text, or is it just my browser?

Referencesďťż:

[...]

ďťżRamachandran

Eeep!

Thanks. I don't like citations in the html editor....

Edit: I haven't found the offending portion of the html that's causing this. Anyone want to help?

I just tried to fix it for you, but it disappears for me as well when I open it in the LW editor or the LW HTML editor!!

Wow. What an impressive heisen-bug! I'll tell Matt + Wes about it.

[matt]

Non-printing unicode character removed. I don't know how you got that in there in the first place :)

Thanks!

It correlates well with citations pasted in from Mendeley, if that helps.

Voting up and waiting for your next installment. (ďťż weird text still there)

Shouldn't the arrow from "self-model" to "explicit planning" be dashed and labelled "inaccurate" in the case of blindsight, like it is for anosognosia? From my understanding of your article, both are opposite cases of an inaccurate self-model.

Otherwise, interesting, but I'm waiting for the next article, since the main question (to me), "That's true of lots of animals though. What makes humans more conscious?", was not really answered (but take your time!).

The dashed/inaccurate is more valuable than solid, but it's not really the connection that's the locus of inaccuracy.

If instead the self-model contained a diagram inside of it, then we could see that the connection from self-model to planning is working fine; it's the diagram inside the self-model that is wrong.

Yeah, pretty much this.

The Self-Model is accurately saying that it doesn't see anything because it's not getting visual input, but it's failing to be an accurate self-model because other parts can still see, and vision can be acted on in a limited way.