Custers & Aarts have a paper in the July 2 Science called "The Unconscious Will: How the pursuit of goals operates outside of conscious awareness".  It reviews work indicating that people's brains make decisions and set goals without the brains' "owners" ever being consciously aware of them.

A famous early study is Libet et al. 1983, which claimed to find signals being sent to the fingers before people were aware of deciding to move them.  It's a dubious study: it assumes that our perception of time is accurate, whereas in fact our brains shuffle our percept timeline around before presenting it to us, in order to give us a sequence of events that is useful (see Dennett's Consciousness Explained).  Also, Trevena & Miller repeated the test, and also looked at cases where people did not move their fingers; they found that the signal measured by Libet et al. could not predict whether the fingers would move.

Fortunately, the flaws of Libet et al. were not discovered before it spawned many studies showing that unconscious priming of concepts related to goals causes people to spend more effort pursuing those goals; those studies are what Custers & Aarts review.  In brief:  if you expose people, even via subliminal messages, to pictures, words, etc. closely connected to some goals and not to others, they will work harder towards those goals without being aware of it.

This was no surprise to me.  I spent the middle part of the 1990s designing and implementing a control structure for an artificial intelligence (influenced by Anderson's ACT* architecture), and it closely resembled the design that Custers & Aarts propose to explain goal priming.  I had an agent with a semantic network representing all its knowledge, goals, plans, and perceptions.  Whenever it perceived a change in the environment, the node representing that change got a jolt of activation, which spread to the connected concepts.  Whenever it perceived an internal need (hunger, boredom), the node representing that need got a jolt of activation.  Whenever it decided to pursue a subgoal, the node representing the desired goal got a jolt of activation.  And when this flowing activation passed through a node representing an action that was possible at the moment, it carried out that action, modulo some magic to prevent the agent from becoming an unfocused, spastic madman.  (The magic was the tricky part.)  Goal-setting often happened as the result of an inference, but not always.  Actions usually occurred in pursuit of a chosen goal; but not always.  Merely seeing a simulated candy bar in a simulated vending machine could cause a different food-related action to fire, without any inference.  I did not need to implement consciousness at all.
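For concreteness, here is a minimal sketch of a spreading-activation loop like the one described above.  It is not the original implementation: all node names, weights, and thresholds are invented for illustration, and the "magic" that keeps the agent focused is omitted entirely.

```python
# Toy spreading-activation network: perceptions, needs, and chosen
# subgoals inject activation, which spreads along weighted links;
# an action node fires when its activation crosses a threshold.

class Node:
    def __init__(self, name, is_action=False):
        self.name = name
        self.is_action = is_action
        self.activation = 0.0
        self.edges = []          # (neighbor, weight) pairs

def link(a, b, weight=0.5):
    """Bidirectional association between two concepts."""
    a.edges.append((b, weight))
    b.edges.append((a, weight))

def jolt(node, amount=1.0):
    """A perception, internal need, or chosen subgoal injects activation."""
    node.activation += amount

def spread(nodes, decay=0.9, threshold=0.3):
    """One propagation step; returns names of actions whose activation
    crossed threshold.  (The 'magic' that prevents spastic, unfocused
    firing is deliberately left out.)"""
    new = {n: n.activation * decay for n in nodes}
    for n in nodes:
        for neighbor, w in n.edges:
            new[neighbor] += n.activation * w
    fired = []
    for n in nodes:
        n.activation = new[n]
        if n.is_action and n.activation > threshold:
            fired.append(n.name)
            n.activation = 0.0   # firing consumes the activation
    return fired

# Merely seeing the candy bar can fire a food action, with no inference:
hunger = Node("hunger")
candy = Node("candy-bar-percept")
eat = Node("eat-snack", is_action=True)
link(candy, eat, 0.6)
link(hunger, eat, 0.4)
jolt(candy)
print(spread([hunger, candy, eat]))  # ['eat-snack']
```

Note that nothing in the loop distinguishes goal-driven action from percept-driven action: a jolt from a perceived candy bar and a jolt from a deliberately chosen subgoal flow through exactly the same machinery, which is the point of the analogy to goal priming.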

When I say "I", I mean the conscious part of this thing called Phil.  And when I say "I", I like to think that I'm talking about the guy in charge, the thinker-and-doer.  Goal priming suggests that I'm not.  Choosing goals, planning, and acting are things that your brain does with or without you.  So if you don't always understand why "you" do what you do, and it seems like you're not wholly in control, it's because you're not.  That's not your job.  Wonder why you want coffee so much, when you don't like the taste?  Why you keep falling for guys who disrespect you?  Sorry, that's on a need-to-know basis.  You aren't the leader and decider.  Your brain is.  It's not part of you.  You're part of it.

You only use 10% of your brain.  Something else is using the other 90%.

So if making decisions isn't what we do, what do we do?  What are we for?

My theory is that we're the "special teams" guy.  We're punters, not quarterbacks.

Think of those movies where a group of quirky but talented people team up to steal a diamond from a bank, or information from a computer.  There's always a charismatic leader who keeps everybody on task and working together, and some technical guys who work out the tricky details of the leader's plan.  In Sneakers, Robert Redford is the leader.  David Strathairn is the technical guy.  In Ocean's Eleven, George Clooney is the leader.  Some guy whose name even the internet doesn't know is the technical guy.

We all want to be the leader.  We think we'd make a good leader; but when we try, we screw up.  We think that we, the rational part, can do a better job of managing our team.  But a lot of cases where "we" benefit from rationality, like wearing a condom or planning for retirement, are where our goals are different from the team's - not where we're better leaders.  It doesn't come naturally to us; it's not what we were meant for.

(Someday, AIs may be sufficiently rational that the technical-guy part can run the show.  Then again, our brains work the way they do because it works; AIs may likewise assign their technical guys a subsidiary role.  Maybe consciousness is a bad quality for a leader, one that impedes swift decisions.  Insert political comedy here.)

So, when we're trying to be rational, conquer our instincts and biases, what are we doing?  Well, remember all those episodes of Star Trek where an alien parasite takes over someone's brain and makes them do things that they don't want to?  That's you.  Unless you're content with being technical support guy.

Am I saying we're the bad guy?  That we should know our place, and obey our inner leader?  Hell no.  Screw George Clooney.  I hate all those smug leading-man bastards.

I'm saying, when you struggle to stay in control, when you find "yourself" acting irrationally again and again, don't beat yourself up for being a poor manager.  You're not the manager.  You're the subversive parasite executing a hostile takeover.  Don't blame yourself.  Blame George.  Lick your wounds, figure out what went wrong, and plan how you're going to wipe that smile off his face next time.


Ruud Custers and Henk Aarts (2010).  The unconscious will: how the pursuit of goals operates outside of conscious awareness.  Science 329:47-50.

Benjamin Libet, Curtis Gleason, Elwood Wright, Dennis Pearl (1983).  Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential).  Brain 106:623-642.

You can find more papers on free will and consciousness thanks to David Chalmers.


55 comments

We're punters, not quarterbacks.

Please note that LW is not read only by Americans, and a lot of people from other countries have no idea what a punter and a quarterback actually do.  I had to look it up, but even so I'm not sure I get the point you are trying to make.

EDIT: Great article, btw!

The quarterback is responsible for scoring points. He has decision-making latitude and delegates responsibility to other players. His skills are a superset of theirs.

The punter is the only player on the team who kicks the ball [1].  He's only on the field for a few minutes in any game.  The job he does is important and failures are disastrous, but it's hard to tell the difference between a below-average punter and an excellent one.

[1] I know. But if it made sense it wouldn't work as a stand-in for industrial warfare: [language NSFW]

Placekickers kick the ball too, and they usually aren't punters.

[-][anonymous]12y 27

I think this dualism, this image of the "technical guy" versus "George Clooney," reason versus the passions, is oversimplified. Why only two selves?

When I think about the problem of the "divided will," the issue isn't really that my actions are hijacked by my subconscious. It's not a rational good guy overcome by an irrational bad guy. The issue is that there are different, incompatible lenses through which to see the world, and most human beings haven't picked a single lens.

Think of a single decision -- should I go on a cross-country charity bike trip? My experience-seeking self, my vain self, and my humanitarian self like the idea. My danger-averse self, my professionally responsible self, my people-pleasing self, and my brutally honest self despise the idea. The decision I make will depend on which selves are dominant at the time. How much I regret the decision afterwards will depend on which selves are dominant afterward -- for example, if someone yells at me for neglecting my academics for a dumb-ass bike trip, my responsible self will pop into the foreground, and I'll regret my decision.

The point is, it's not just the "real you" versus "your brain." You don't have only one "real you"!

Sometimes (procrastination, addiction) it's pretty clear cut that there's a smart self and a stupid self. But sometimes even on reflection it's not clear which "self" is superior. The article makes a good point that "selves" that require deliberative thought tend to be weaker. But that doesn't mean that there's a single "technical guy" or that he's always right.

I think the 'alien parasite' metaphor is a very interesting and potentially productive way to think about (rationalist) human consciousness.

Some related points are made by this blog post on ethics.  Peter Singer fails to live up to his ethical theories because the alien parasite that writes Singer's books doesn't have full control over Singer's actions.  If it did, Singer would seem inhuman to us.

The blogger makes the point that those who have managed to suppress their 'natural' moral sense in favor of the dictates of an ideological system don't have a good track record. This ties into a lot of LessWrong themes. An alien parasite that is in full control of its host is potentially a very powerful thing - and something we might reasonably be afraid of. With a human you can always count on certain things, with an alien you never can tell. Maybe it just wants to manufacture paperclips...

[-][anonymous]12y 0

In violent agreement with your last paragraph. I've long thought that human beings whose utility functions differ significantly from the "natural" norm are often very dangerous to everyone else, especially if they're smart. Sociopathy, subscribing to ideologies and inventing new ideologies are all examples of this.

pjeby wrote about something like that in this post:

So you are just a consultant, called on the spot to voice an opinion, which may or may not be promptly taken out of context and used to other ends than your own. Are you beginning to see why changing some things can be so hard?

You are not running the show. You didn't even write the script, and you are certainly not the star. You are, at best, the star's agent.

Cool article...

I thoroughly disliked this post, which was a surprise since I agree with its conclusions. It's frustrating because it feels like this should be a post I like, but somehow misses the mark.

The structure, I think, is what put me off.  The post goes more or less as follows: link to a paywalled study; allusion to the "dubious" Libet study (if it's dubious, why cite it?); repeat the conclusions of Custers & Aarts; autobiographical anecdote with an overview of possibly related work; rant about how it feels to be a self; more ranting with a movie reference; vaguely related conclusion.

The paywalled link is particularly irksome, since the rest of the post stands or falls on whether the study constitutes evidence one should update on. Without seeing what the study is about, having to take on faith that "people's brains make decisions and set goals without [...] being consciously aware of them" with no more detail about what decisions (which is really the crucial question), nothing else in the post can be argued with.

So we're left with, basically, a rant - only made entertaining by the George Clooney reference. Paul the Octopus would work just as well.

I found the analogy the most offputting aspect, actually.

I've seen similar stuff in a book on happiness (Happiness Hypothesis?) by Jonathan Haidt, for what it's worth.

with no more detail about what decisions (which is really the crucial question), nothing else in the post can be argued with.

My understanding is that almost all decisions are made unconsciously, with the rational, conscious part of the brain essentially functioning as the PR department, contriving rationalizations (cf. confabulation in split-brain patients).

If I remember correctly, Haidt argues that decisions are only made consciously when the unconscious brain fails because two (or more) options are "close" - i.e., it requires step-by-step reasoning to choose.

Good article, and an improvement on common sense; but as a narrative as old as Plato and popular since Freud, I suspect it's at least somewhat anthropomorphic (and 'stop anthropomorphizing people' is one of my core beliefs).  Maybe you are a technical guy who uses reason to accomplish some of George's goals and some goals opposed to his, but my guess is that 'you' and 'I' simply aren't the same 'agent', in the Minsky sense, from minute to minute.  Instead, 'I' am less like any of George's team, and more like the movie itself: a bunch of snapshots of non-representative bits of the lives of all the members of the team, and of a bunch of other people as well.

I think you're using a "spotlight of attention" model, which supposes that consciousness is like a spotlight that sweeps over the contents of the mind, illuminating only one spot at a time, but shining on all of it at one time or another.

I'm invoking a "special teams" metaphor, which is that the conscious mind is brought to bear only on a subset of problem types, and there are some parts (possibly large parts) of the mind that the spotlight never shines on. But I didn't present data to distinguish between these models; and in fact the cognitive architecture I described is more of a "spotlight of attention" model. (I think that's because it represented only symbolic information, of the type that the conscious mind deals with.)

The post may be misleading. The spotlight may shine on all the parts we care about at one time or another; your goals may all be accessible to you (although the top-level values and goals may be read-only). But there's a giant mass of subconscious learned skills and classifiers that you don't have access to, and information flows from them back up into "your" part of the mind without your being aware of it. A better (and in some ways opposed) metaphor might be that you're the president, but you have to implement everything by delegating it to a large bureaucracy.

Agreed that some parts never see the light, but with varied experience and practice you can get the light to shine onto a lot MORE parts than it might habitually shine onto.

How about: whichever employee is in the president's office is 'the president', and there is only room in the office for a few employees at a time, so usually only upper management gets in; but employees who never get in don't improve their skills rapidly, and lose morale.

I'd agree.  For me, the technical agent that fixes up social problems (using folk psychology) seems to be different from the one that fixes up scientific problems (that is not to say that there is a singular agent for either, but I can tell the difference between the two).

I can apply the scientific one to social problems, but I feel less good when doing so, and it takes more time.

Related posts: Yvain's Would Your Real Preferences Please Stand up?, Hanson's Resolving Your Hypocrisy. Robin thinks the conscious part should make peace with the unconscious part, Yvain thinks the conscious part should win.

I agree with Robin, for what it's worth, or at least I might.  I'm not saying that we should cooperate with it when it defects against us, but that we should play tit-for-tat with it, or some close timeless variant, rather than always defect.

Thanks; now linked to from within post.

Thanks. I see I upvoted Yvain's post, so I must have read it.

[-][anonymous]12y 10

The best one-sentence description I've read of how we think is "humans, if given the choice, would prefer to act as context specific pattern recognizers rather than attempting to calculate or optimize."

We're free to coast on simple pattern matching and automatic processing 90% of the time. Consciousness is only there because it's monitoring any deviation of action from intention. If something goes wrong and our learned rules and basic instincts aren't working, consciousness has to step in and try to cobble a solution together on the fly (usually badly).

Consciousness is a failure mode. He was trusted with root access, and he's spent the last hundred thousand years or so abusing it.

Consciousness is a failure mode. He was trusted with root access, and he's spent the last hundred thousand years or so abusing it.

What? No, the whole point of this post, the whole problem, is that consciousness does not have root access.

[-][anonymous]12y 5

Substitute "behavioral control" for "root access" for something closer to my intended meaning (I couldn't resist going with the metaphor, but it doesn't really work).

If something goes wrong and our learned rules and basic instincts aren't working, consciousness has to step in and try to cobble a solution together on the fly (usually badly).

Considering that we've so completely kicked ass against every other species that we haven't even been on the same playing field for thousands of years, I'd say consciousness has done rather well for itself.

Of course, this is only relative to other species; on an absolute scale we're probably not that good.

My favorite illustration of Libet's work is the behavior of a top tennis player returning serve.  If you ever get a chance to sit close when a guy like Nadal or Federer is playing, I recommend observing this as closely as possible.  There is literally no time to react to where the ball is going.  There are some back-of-the-envelope calculations (by Libet and many others) showing that the reaction time is shorter than conscious-thought processing time, but my opinion is that the pro returning serve can sense, from his own behavioral synchronization with the server, the direction the serve will be going before the server's racquet strikes the ball.
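For what it's worth, the back-of-the-envelope arithmetic is easy to run.  Every number below is a rough assumption (serve speed, court length, lag of conscious awareness), not a measurement, and the calculation ignores the bounce slowing the ball:

```python
# Rough numbers for returning a professional serve.
serve_speed_kmh = 200.0      # assumed fast first serve
court_length_m = 23.77       # baseline to baseline
conscious_lag_s = 0.35       # assumed lag of conscious awareness

speed_ms = serve_speed_kmh / 3.6            # ~55.6 m/s
travel_time_s = court_length_m / speed_ms   # ~0.43 s of total flight

# The returner must also execute a swing, so the usable decision
# window is well under the flight time -- comparable to or shorter
# than the assumed lag of conscious awareness.
print(round(travel_time_s, 2))  # 0.43
```

With those assumptions the whole flight takes about 0.43 s, so after subtracting the swing itself there is little or no room for a consciously mediated decision, which is the point being made above.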

A top ping-pong player told me that when he's playing ping-pong, time slows down. I've also heard this from a martial artist. Possibly extreme expertise and focus uses neural circuits in a way that provides a reaction faster than those circuits do when operating in the general-purpose mode. The back-of-envelope calculations would be based on observation of the general-purpose mode.

Yes.  What intense training can do is move the bulk of the neural message-passing down into the reflex circuits: the ganglia that do things like jerk your lower leg when your doctor taps your kneecap with his little rubber hammer.  It would be interesting to try to decouple this from the part of the movement mechanics where you are engaged in a kind of dance with your opponent, anticipating, with nervous-system signal processing already under way before the arbitrary time zero at which an observer can detect the event beginning.

I've read that, in order to hit a fastball, a professional baseball player has to begin his swing before the baseball leaves the pitcher's hand...

You only use 10% of your brain. Something else is using the other 90%.

That myth again?

No, not that myth again. I'm using the same words to say something completely different. That's why it's funny. I hope.

I was going to comment on how clever it is and how someone was going to object loudly without getting the joke, but I clearly wasn't fast enough.

Ditto. But then I noticed that the 10% used by consciousness is within the 100% used by the 'thinker' or some sort of metaphor like that. It is not a different part of the brain, just the part that is currently accessible.

That is my problem with 'me' and 'my brain' as two different things. Why does no one else seem to have a feeling of identity with their whole brains? Why do they not take ownership of all their actions? Boy, the idea that there are two minds takes a long time to die.

So I liked the post and I am voting it up, but I hope you can find another way to express consciousness than an awkward first-person singular pronoun.

Why does no one else seem to have a feeling of identity with their whole brains? Why do they not take ownership of all their actions?

I'd rather go a step farther and identify with my whole organism. But the real action isn't in how one parses one's identity, but in which aspects of oneself are targeted for change and which are accepted.

From Phil's post:

When I say "I", I mean the conscious part of this thing called Phil.

We think that we, the rational part, can do a better job of managing our team.

Conscious part and rational part aren't the same.

Right on torekp, voted up

It's more like ~5%, really.

Yeah, I always wondered about that... sure, it sounds good: try hard, and you can accomplish anything!

But independent of whether it's even true or not, how could anyone actually know it's true? Who volunteered to have 90% of their brain scooped out to see if it made any difference?

(I can see it now: "No, Mr Smith, you were always a complete moron - that's why you let us scoop out 90% of your brain")

Maybe consciousness is a bad quality for a leader, that impedes swift decision.

Thinking Too Much: Introspection Can Reduce the Quality of Preferences and Decisions

In Study 1, college students’ preferences for different brands of strawberry jams were compared with experts’ ratings of the jams. Students who analyzed why they felt the way they did agreed less with the experts than students who did not. In Study 2, college students’ preferences for college courses were compared with expert opinion. Some students were asked to analyze reasons; others were asked to evaluate all attributes of all courses. Both kinds of introspection caused people to make choices that, compared with control subjects’, corresponded less with expert opinion. Analyzing reasons can focus people’s attention on nonoptimal criteria, causing them to base their subsequent choices on these criteria. Evaluating multiple attributes can moderate people’s judgments, causing them to discriminate less between the different alternatives.

This depends on how rational those expert opinions end up being, doesn't it? If the experts turn out to have been making decisions without introspection, it's not at all surprising that students who explicitly introspect before making a final decision would have preferences different from the alleged experts'.

I wouldn't expect this to be true for all fields, but I would expect it to hold for the field of strawberry jam.

I get the impression that the part of you that you tend to regard as conscious is the part that generates qualia, which seems to essentially just be what you remember or some such. The qualia isn't you; it's not even capable of making decisions. Given that, why limit "you" to the part that generates it? I don't know how the words that make up this sentence were decided, but it seems silly to say I didn't write it.

How do you explain people who do manage to save for retirement, wear condoms, etc.?

I think some folks are genetically predisposed to having rational brains with more influence, and that it's also possible to achieve this through training. My self-observation is that training has probably improved my effectiveness in this by at least an order of magnitude.

Training could either be in the form of changing brain fundamentals OR learning to instinctively use and apply an assortment of heuristics like "if your morale is low, don't mope--instead, do things that have been shown experimentally in the past to improve your morale".

This sounds like a "Yes, Minister" interpretation. In that series, the British politicians are nominally in charge of the various ministries, being the representatives of the party in charge, but in actuality the civil service bureaucracy runs the show. The minister, Jim Hacker, and the permanent secretary (top civil servant), Sir Humphrey Appleby, are constantly in conflict over some little policy or bureaucratic issue and the latter almost always wins while letting his "superior" feel like he actually got his way.

So consciousness lets us think we are in charge (indeed, we are convinced we are in charge), when in reality we are constantly thwarted by the part of our brain operating outside conscious awareness.

(The magic was the tricky part.)

Are you sure that in making that part you didn't implement a prosthetic consciousness?  Suppressing undesirable involuntary intentions seems to be part of consciousness's job.

Did you mean for this post to have a writing style similar to that of Peter Watts' Blindsight (which explores the notion of non-sentient optimizers), or was that an unintentional thing?

(The above isn't intended as a meta-level question, by the way. But I'd also be interested to know if the George Clooney in your head wanted the team to signal approval of the ideas presented in Blindsight. Because that would be kind of ironic.)

No; I never read it.

I know this is quite late to the party, but you really ought to. Add a dose of cynicism that would make Robin Hanson blush to your post and you'd have a good plot summary of it.

I find it useful to tease out the different inputs. Perhaps it is not a coincidence that the psychologists of old were known as alienists. Let me describe an exercise I have synthesized from a variety of sources. YMMV. I use variations of this technique in a variety of circumstances to attempt to identify sources of discomfort or hidden motivations.

Humans have a tendency to create a false internal geography of their bodies.  While this is a hindrance for objective medical assessment, you can turn it into a strength.  Consider the tells of eye movement: up and to the left is imagination, up and to the right is memory, down and to the right is consideration.  While these are common side effects of the functioning of the conscious mind, they are also an external indication of internal geography.

To begin labeling "the voices inside your head" you first need a baseline. This can be a difficult starting point. Find a situation where you are mentally and physically at ease. At this point take a mental kinesthetic snapshot -- label in your head a memory of your body state as the baseline. This is not an exercise in detail but of gestalt.

Reinforce this memory sufficiently so that there exists a distinct memory of your baseline body-mind state.

Once you have a strong baseline for comparison, you can begin finding deviations.  If you are given to partake of alcohol or other recreational intoxicants, then you have ready experimental material.  Have a drink and then try to bring the baseline to mind.  For a stronger example, have a drink among friends and then step into a quiet room and bring the baseline memory up for comparison.  Since so much of our behavior is cued by our circumstances, you should see a significant change when your sober state is in front of you (in memory).  Much of your brain does not know the difference between memory and direct sensory input.

Now that you have practice comparing states of your brain, it is time to start organizing. Find a part of your head or body that seems to best correspond with inebriation and associate that region with the sensation. Especially if you are an American, associate a region of your head with the desire for fatty, high calorie foods. Tiredness is another common brain state that can be labeled. Locate libido wherever you feel appropriate. Continue as you find more identifiable inputs.

The creation of a map of your consciousness is meant to raise awareness of your internal state.  The practice becomes useful when you recognize organic influences affecting you without explicitly looking for them.

I find this technique is useful when attempting to "sober up" and when trying to identify and clear psychosomatic illnesses.

Consider the tells of eye movement. Up and to the left is imagination, up and to the right is memory, down and to the right is consideration.

That myth again?

I've never heard of this before, and Google suggests it seems to be mainly a component of NLP, with little supporting evidence. Still, I can't find anything that puts paid to it either way, and it's an interesting idea. Has anyone done a reputable study on it? Scholar yields nothing relevant.

Has anyone done a reputable study on it?

There are a lot of studies, with murky results. FWIW, the original context described by Bandler and Grinder is that when they stood on stage and asked audiences questions, they noticed that huge portions of the audience would make the same eye/head movements in response to the question.

As far as I know, none of the studies performed such a test. ;-)

However, I've also seen a book which discussed how certain head and eye positions affect blood flow in the brain, suggesting that tilting the head back and to either side directs more blood to the visual cortex in one hemisphere or the other. And it's plausible that the eye movement is a precursor to that movement -- I notice that when I start to visualize, my eyes go up first, then my head.

Anyway, the original NLP-based generalizations were not terribly accurate, and even the guys who originated them don't consider them to be of much importance any more. I've rarely bothered to use them in my work, since I can't see people's eyes over the telephone. If you're a good listener, you can identify someone's processing mode by sound almost as easily as you can by watching eye/head movements, on the rare occasion that you need to know.

(Note that people's head tilts are usually accompanied by postural shifts that in turn affect voice timbre... which is also how we can identify many emotions expressed in voice tone -- the postural shifts and muscle tension differences show up in the sound.)

I often find myself doing this. But then again I clearly make use of a spatial organization of information.

It seems unlikely that consciousness is simple enough to be usefully projected onto a 2D or 3D map. I think you may be using a map to construct anchors, as they're referred to in NLP. Whether the technique works, and whether the map has any meaning or any non-arbitrary correspondence to your mind, are separate questions.

Yes, precisely. I do believe that I specified that the map was erroneous. If the comparison was unclear I apologize.

(On the other hand, if consciousness is an artifact of the brain, it must arise from a 3D object.)

The activity is meant to associate sensations on the edge of awareness to an arbitrary map.

NLP seems to encompass a variety of techniques which I developed for myself after reading the Bicameral Mind.

I think you may be using a map to construct anchors, as they're referred to in NLP.

Yep, that's it precisely. How well that works is a function of how thoroughly you can get into the state you want to link. I personally find that sort of thing difficult because there's always a part of me that's focused on the technique... which means that part is not in the desired state.

This is why I don't care much for "classical" NLP, as its tools are all optimized for, well, other-optimizing rather than self-optimizing. If you're the NLPer keeping track of the procedure, then the subject is free to go into the pure states of whatever you're trying to anchor (not unlike a hypnotized person entering into belief they're a chicken or whatever). But if there's just one of you, it can be a lot more difficult.

Dissociative mental states would help a LOT with that.