Related on OB: Priming and Contamination
Related on LW: When Truth Isn't Enough

When I was a kid, I wanted to be like Mr. Spock on Star Trek.  He was smart, he could kick ass, and he usually saved the day while Kirk was too busy pontificating or womanizing.

And since Spock loved logic, I tried to learn something about it myself.  But by the time I was 13 or 14 and had grasped the basics of Boolean algebra (from borrowed computer science textbooks) and propositional logic (through a game of "Wff'n'Proof" I picked up at a garage sale), I began to get a little dissatisfied with it.

Spock had made it seem like logic was some sort of "formidable" thing, with which you could do all kinds of awesomeness.  But real logic didn't seem to work the same way.

I mean, sure, it was neat that you could apply all these algebraic transforms and dissect things in interesting ways, but none of it seemed to go anywhere.

Logic didn't say, "thou shalt perform this sequence of transformations and thereby produce an Answer".  Instead, it said something more like, "do whatever you want, as long as it's well-formed"...  and left the very real question of what it was you wanted as an exercise for the logician.

And it was at that point that I realized something that Spock hadn't mentioned (yet): that logic was only the beginning of wisdom, not the end.

Of course, I didn't phrase it exactly that way myself...  but I did see that logic could only be used to check things...  not to generate them.  The ideas to be checked still had to come from somewhere.

But where?

When I was 17, in college philosophy class, I learned another limitation of logic -- or more precisely, of the brains with which we do logic.

Because, although I'd already learned to work with formalisms -- i.e., meaningless symbols -- working with actual syllogisms about Socrates and mortals and whatnot was a good bit harder.

We were supposed to determine the validity of the syllogisms, but sometimes an invalid syllogism had a true conclusion, while a valid syllogism might have a false one.  And, until I learned to mentally substitute symbols like A and B for the included facts, I found my brain automatically jumping to the wrong conclusions about validity.

So "logic", then -- or rationality -- seemed to require three things to actually work:

But it wasn't until my late thirties and early forties -- just in the last couple of years -- that I realized a fourth piece, implicit in the first.

And Spock, ironically enough, is the reason I found it so difficult to grasp that last, vital piece:

That to generate possibly-useful ideas in the first place, you must have some notion of what "useful" is!

And that for humans at least, "useful" can only be defined emotionally.

Sure, Spock was supposed to be immune to emotion -- even though in retrospect, everything he does is clearly motivated by emotion, whether it's his obvious love for Kirk, or his desire to be accepted as a "real" rationalis... er, Vulcan.  (In other words, he disdains emotion merely because that's what he's supposed to do, not because he doesn't actually have any.)

And although this is all still fictional evidence, one might compare Spock's version of "unemotional" with the character of the undead assassin Kai, from a different science-fiction series (Lexx).

Kai, played by Michael McManus, shows us a slightly more accurate version of what true emotionlessness might be like: complete and utter apathy.

Kai has no goals or cares of his own, frequently making such comments as "the dead do not want anything", and "the dead do not have opinions".  He mostly does as he's asked, but for the most part, he just doesn't care about anything one way or another.

(He'll sleep in his freezer or go on a killing spree, it's all the same to him, though he'll probably tell you the likely consequences of whatever action you see fit to request of him.)

And scientifically speaking, that's a lot closer to what you actually get, if you don't have any emotions.

Not a "formidable rationalist" and idealist, like Spock or Eliezer...

But an apathetic zombie, like Kai.

As Temple Grandin puts it (in her book, Animals in Translation):

Everyone uses emotion to make decisions. People with brain damage to their emotional systems have a hard time making any decision at all, and when they do make a decision it's usually bad.

She is, of course, summarizing Antonio Damasio's work in relation to the somatic marker hypothesis and decision coherence.  From the linked article:

Somatic markers explain how goals can be efficiently prioritized by a cognitive system, without having to evaluate the propositional content of existing goals. After somatic markers are incorporated, what is compared by the deliberator is not the goal as such, but its emotional tag. [Emphasis added]

The biasing function of somatic markers explains how irrelevant information can be excluded from coherence considerations. With Damasio's thesis, choice activation can be seen as involving emotion at the most basic computational level. [Emphasis added]
...
This sketch shows how emotions help to prevent our decision calculations from becoming so complex and cumbersome that decisions would be impossible. Emotions function to reduce and limit our reasoning, and thereby make reasoning possible. [Emphasis added]
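
Purely as an illustration -- a toy sketch of my own, not Damasio's model or anything from the linked article -- here is roughly what "comparing the emotional tag instead of the goal" could look like, computationally:

```python
# Hypothetical sketch: goals are ranked by a cached emotional tag,
# without ever re-evaluating each goal's propositional content.
from dataclasses import dataclass

@dataclass
class Goal:
    description: str    # propositional content -- expensive to evaluate
    somatic_tag: float  # cached valence: positive = toward, negative = away

def prioritize(goals):
    """Sort by the cached tag alone; `description` is never inspected."""
    return sorted(goals, key=lambda g: g.somatic_tag, reverse=True)

pending = [
    Goal("finish the tax paperwork", somatic_tag=-0.4),   # mild dread
    Goal("reply to a friend's email", somatic_tag=+0.6),
    Goal("read that interesting paper", somatic_tag=+0.3),
]

for goal in prioritize(pending):
    print(f"{goal.somatic_tag:+.1f}  {goal.description}")
```

The asymmetry is the point: sorting on a cached scalar stays cheap no matter how many goals you have, while weighing every goal's content against every other's is exactly the "complex and cumbersome" deliberation the quote says emotions let us avoid.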

Now, we can get into all sorts of argument about what constitutes "emotion", exactly.  I personally like the term "somatic marker", though, because it ties in nicely with concepts such as facial micro-expressions and gestural accessing cues.  It also emphasizes the fact that an emotion doesn't actually need to be conscious or persistent, in order to act as a decision influencer and a source of bias.

But I didn't find out about somatic markers or emotional decisions because I was trying to find out more about logic or rationalism.  I was studying akrasia[1], and writing about it on my blog.

That is, I was trying to find out why I didn't always do what I "decided to do"... and what I could do to fix that.

And in the process, I discovered what somatic markers have to do with akrasia, and with motivated reasoning...  long before I read any of the theories about the underlying machinery.  (After all, until I knew what they did, I didn't know which papers would've been relevant.  And in any case, I was looking for practice, not theory.)

Now, in future posts in this series, I'll tie somatic markers, affective synchrony, and Robin Hanson's "near/far" hypothesis together into something I call the "Akrasian Orchestra"...  a fairly ambitious explanation of why/how we "don't do what we decide to", and for that matter, don't even think the way we decide to.

But for this post, I just want to start by introducing the idea of somatic markers in decision-making, and give a little preview of what that means for rationality.

Somatic markers are effectively a kind of cached thought.  They are, in essence, the "tiny XML tags of the mind" that label things "good" or "bad", or even "rational" and "irrational". (Which of course are just disguised versions of "good" and "bad", if you're a rationalist.)

And it's important to understand that you cannot escape this labeling, even if you wanted to.  (After all, the only reason you're able to want to, is because this labeling system exists!)

See, it's not that only strong emotions do this: weak or momentary emotional responses will do just fine for tagging purposes.  Even a momentary pairing of positive or negative words with nonsense syllables can carry over into the perceived taste of otherwise-identical sodas branded with made-up names built from those same syllables!

As you can see, this idea ties in rather nicely with things like priming and the IAT: your brain is always, always, always tagging things for later retrieval.

Not only that, but it's also frequently replaying these tags -- in somatic, body-movement form -- as you think about things.

For example, let's say that you're working on an equation or a computer program...  and you get that feeling that something's not quite right.

As I wrote the preceding sentence, my face twisted into a slight frown, my brow wrinkling slightly as well -- my somatic marker for that feeling of "not quite right-ness".  And, if you actually recall a situation like that for yourself, you may feel it too.

Now, some people would claim that this marker isn't "really" an emotion: that they just "logically" or "rationally" decided that something wasn't right with the equation or program or spaceship or whatever.

But if we were to put those same people on a brain scanner and a polygraph, and observe what happens to their brain and body as they "logically" think through various possibilities, we would see somatic markers flying everywhere, as hypotheses are being considered and discarded.

It's simply that, while your conscious attention is focused on your logic, you have little interest in attending directly to the emotions that are guiding you.  When you get the "information scent" of a good or a bad hypothesis, you simply direct your attention to either following the hypothesis, or discarding it and finding a replacement.

Then, when you stop reasoning, and experience the frustration or elation of your results (or lack thereof), you finally have attention to spare for the emotion itself...  leading to the common illusion that emotion and reasoning don't mix.  (When what actually doesn't mix, at least without practice, is reasoning and paying conscious attention to your emotions/somatic markers at the same time.)

Now, some somatic markers are shared by all humans, such as the universal facial expressions, or the salivation and mouth-pursing that happens when you recall (or imagine) eating something sour.  Others may be more individual.

Some markers persist longer than others -- that "not quite right" feeling might just flicker for a moment while you're recalling a situation, but persist until you find an answer when it's a response to an actual, present situation.

But it's not even necessary for a somatic marker to be expressed in order for it to influence your thinking, since emotional associations and speed of recall are tightly linked.  In effect, recall is prioritized by emotional affect...  meaning that your memories are sorted by what makes you feel better.

(Or what makes you feel less bad... which is not the same thing, as we'll see later in this series!)

What this means is that all reasoning is in some sense "motivated", but it's not always consciously motivated, because your memories are pre-sorted for retrieval in an emotionally biased fashion.

In other words, the search engine of your mind...

Returns paid results first.
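
To make the metaphor concrete, here is a toy sketch -- my own illustration, deliberately oversimplified, not a model from the somatic-marker literature -- of retrieval that matches on content but ranks on the cached affect tag:

```python
# Hypothetical "search engine of the mind": the query decides *what*
# matches, but the cached affect tag decides *what comes back first*.
memories = [
    {"content": "gave a talk that went well",       "affect": +0.8},
    {"content": "gave a talk and froze up halfway", "affect": -0.6},
    {"content": "gave a talk nobody attended",      "affect": -0.3},
]

def recall(query, store, mood=0.0):
    """Return matching memories, ranked by closeness to the current mood.

    A neutral-to-positive mood surfaces pleasant memories first; a negative
    mood surfaces mood-congruent (negative) ones -- a crude stand-in for
    priming and state-dependent recall.
    """
    hits = [m for m in store if query in m["content"]]
    return sorted(hits, key=lambda m: abs(m["affect"] - mood))

for m in recall("talk", memories, mood=+0.5):
    print(f"{m['affect']:+.1f}  {m['content']}")
```

Run the same query with mood=-0.5 and the unflattering memories come back first: the matching never changes, only the ranking -- which is the sense in which the mind's search engine returns "paid results" first.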

This means that, strictly speaking, you don't know your own motivations for thinking or acting as you do, unless you explicitly perform the necessary steps to examine them in the moment.  Even if you previously believed yourself to have worked out those motivations, you cannot strictly know that your analysis still stands, since priming and other forms of conditioning can change those motivations on the fly.

This is the real reason it's important to make beliefs pay rent, and to ground your thinking as much as possible in "near" hypotheses: keeping your reasoning tied closely to physical reality represents the only possible "independent fact check" on your biased "search engine".

Okay, that's enough of the "emotional decisions are bad and scary" frame.  Let's take the opposite side now:

Without emotions, we couldn't reason at all.

Spock's dirty little secret is that logic doesn't go anywhere without emotion.  Without emotion, you have no way to narrow down the field of "all possible hypotheses" to "potentially useful" or "likely to be true" hypotheses...

Nor would you have any reason to do so in the first place!

Because the hidden meaning of the word "reason" is that it doesn't just mean logical, sensible, or rational...

It also means "purpose".

And you can't have a purpose, without an emotion.

If Spock didn't make me feel something good, I might never have studied logic.  If stupid people hadn't made me feel something bad, I might never have looked up to Spock for being smart.  If procrastination hadn't made me feel bad, I never would've studied it.  If writing and finding answers to provocative questions didn't make me feel good, I never would've written as much as I have.

The truth is, we can't do anything -- be it good or bad -- without some emotion playing a key part.

And that fact itself is neither good nor bad: it's just a fact.

And as Spock himself might say, it's "highly illogical" to worry about it.

No matter what your somatic markers might be telling you.

 

Footnotes:

1. I actually didn't know I was studying "akrasia"...  in fact, I'd never even heard the term until I saw it in a thread on LessWrong discussing my work.  As far as I was concerned, I was working on "procrastination", or "willpower", or maybe even "self-help" or "productivity".  But akrasia is a nice catch-all term, so I'll use it here.

Comments:
Cyan:

The ideas are interesting, but I'm finding the use of italics and especially bold font somewhat distracting -- I feel like I'm being harangued. (Not as bad as all caps, but still.)

gjm:

Strongly concur. I have the same reaction to pjeby's blog. I don't think it's only because of the bold; it's the writing style too, which consistently seems to me to be saying "I understand all this stuff, and you are stupid. So stupid I have to write in short sentences. And sentence fragments. Because otherwise ... you won't get it." And I find it offputting. Very offputting.

Which is a pity, because ...

... pjeby has some interesting things to say.

olimay:
I agree that bolded text is a bit too much, particularly given the typography used here on LW. I think emphasis is fine, though.
[anonymous]:
I do find it somewhat distracting to 'hear' words with the emphasis forced onto us through formatting. I usually appreciate the use of bold and italics as a way of highlighting key concepts. It helps me navigate to the most interesting parts and gives a framework to the document layout. Bolding words at random distracts from this.
Anatoly_Vorobey:
I agree that there's too much bolding going on; but let me just add (having just returned from a bout of wiki-reading prompted by your links) that this is a superb post; I'll be thinking and reading about much of this and looking forward to the promised future posts.

Very interesting post. If you can do even a fraction of what you say you will, it'll be a spectacular contribution. I already have your blog on my list of things I need to get around to reading, and it just moved up a few places on that list.

You're moving pretty quickly, though, and I have trouble following you in some areas. Maybe in the future break large essays like this into a few blog posts, one for each sub-point.

pjeby:
Heh. This is a way scaled-back version of my original planned first post, which was to jump straight from motivated reasoning into either the Speculator/Savant divide, or the Towards/Away distinctions. I went, "crap, this is getting too long" and pulled the plug before I got to anything really "interesting", figuring that this one at least laid a little bit of groundwork and some references to build a foundation for the rest.

I'm accustomed to being able to get further in one sitting, but that's because my usual writing isn't peppered with references to experimental results and tediously building my case point by point; usually I just rely on metaphor and people's personal experiences as evidence. Here, though, I've noticed that people prefer authorities in the form of citations, to looking at their own personal experiences... so it seems to take a hell of a lot longer to build up statements of any substance.

Which is not to say it's not worth it... discussions on LW, and the preparation for this post, have helped me immensely clarify and simplify certain aspects of my knowledge and work, in ways that will help me teach my self-improvement audience better, not just communicate better on LW.
Scott Alexander:
I'm glad you're paying attention to experimental results. I wouldn't believe you if you didn't :)

Now that I've read it over a few more times, I'm still not sure if I understand correctly. Tell me if this is the right track: the brain tags thoughts or concepts as good or bad by associating them with certain micro-expressions which are the physiological correlates of emotions. When you're reasoning, you are unconsciously trying to generate pleasant emotions by using only those lines of thinking that lead to the micro-expressions associated with pleasant feelings. Generating these pleasant feelings is, on a preconscious level, the desire motivating reasoning.

Also, are you taking as a premise something like the James-Lange theory of emotions? What about something like Reich's theory of muscular armor? (see about a quarter of the way down this page)
pjeby:
I haven't even talked about actual motivated reasoning in this post... barely touched on it. What I'm talking about here is something you might think of as "pre-biased reasoning" -- that is, before you even consciously perform any reasoning, your brain has to generate hypotheses... and these are based on manipulation of existing memories... which are retrieved in emotion-biased sequences. This is a hell of a lot more low-level description than an idea like "unconsciously trying to generate pleasant emotions".

Also, that description attributes motivation and thinking-process to the unconscious... which is pure projection. The unconscious is not a "mind", in the sense of having intentions of the sort we attribute to ourselves and to other humans. When I get to the Savant/Speculator distinction, that part will hopefully be a lot clearer.

Not as a premise, no, although there may be similarities in our conclusions. However, I'm not in full agreement with the idea that you can generate emotions through muscular action, partly because I see physical action as being caused by emotion (rather than being emotion as such) and partly because an existing emotion can easily dominate the relatively weak influence of working from the outside in. I also know that, Reich to the contrary, muscular armor can be dropped through mental work alone -- body awareness is required, at least to be able to tell if you're doing things right -- but you don't necessarily need to do anything particularly physical.

The "efficiency" objection to somatic markers and James-Lange is nonsense, however. If the purpose of an emotion is to prepare the body for action, then it's not "inefficient" to send the information out to the periphery -- it's the purpose! It's the part where we infer emotions from that information coming back that's the kludge, because we only needed that information once we became social creatures... and even then, we already had the communication taking place via the external
orthonormal:
This point is incisive, has important consequences for rationality, and deserves a post (by somebody).
[anonymous]:
Do you plan to replace the flawed model with whatever model the 'Savant/Speculator distinction' comes from? If so, perhaps consider another post explaining and validating said system first? Google seems unwilling to tell me what on earth you are talking about. Book? Paper? Link of some kind?
Vladimir_Nesov:
Now this sounds like some kind of ritual, empty of substance. Authority? Give me a break.
pjeby:
You seem to be ignoring the part where I implied your previous challenge led to me learning some new things about affective synchrony, which explained some of my results more clearly and gave me some new ideas to experiment with. As I said, it was worth it, at least for me.
Roko:

"good" or "bad", or even "rational" and "irrational". (Which of course are just disguised versions of "good" and "bad", if you're a rationalist.)

I saw this, and felt a strong urge to walk to work where my laptop is and correct it.

Rational agents/things are not synonymous with good things. A paperclip maximizer is the canonical example of an agent acting rationally. As far as most people are concerned, including me, the paperclip maximizer is not acting in a good way. When I see "rational"... (read more)

pjeby:
Let's review the statement in question: By "narrow down", I actually meant "narrow down prior to conscious evaluation" -- not consciously evaluate for truth or falsehood. You can consciously evaluate whatever you like, and you can certainly check a statement for factual accuracy without the use of emotion. But that's not what the sentence is talking about... it's referring to the sorting or scoring function of emotion in selecting what memories to retrieve, or hypotheses to consider, before you actually evaluate them.
Roko:
I disagree again: I don't think that any reasonable definition of emotion makes the statement true. I think that emotions often do the opposite: they narrow down the field of "all possible hypotheses" to "likely to make me feel good about myself if I believe it" hypotheses and "likely to support my preexisting biases about the world" hypotheses, which is precisely the problem that this site is tackling... if emotions subconsciously selected "likely to be true" hypotheses, we would not be in the somewhat problematic situation we are in.
pjeby:
Those are subsets of what you believe to be likely true.
Roko:
Great! Hurrah for emotions, they make you believe things that you believe are likely to be true... epistemic rationality is about believing things that are actually true, rather than believing things that you believe to be true.
pjeby:
And that's why it's a good thing to know what you're up against, with respect to the hardware upon which you're trying to do that.
Roko:
Right, we agree. But I think that we have overused the word emotion... That which proposes hypotheses is not exactly the same piece of brainware as that which makes you laugh and cry and love. We need different names for them. I call the latter emotion, and the former a "hypothesis generating part of your cognitive algorithm". I think and hope that one can separate the two.
pjeby:
No... the former merely sorts those hypotheses based on information from the latter. Or more precisely, the raw data from which those hypotheses are generated has been stored in such a manner that retrieval is prioritized on emotion, and such that any such emotions are played back as an integral part of retrieval. One's physio-emotional state at the time of retrieval also has an effect on retrieval priorities... if you're angry, for example, memories tagged "angry" are prioritized.
[anonymous]:
Still either false or meaningless, depending on how you interpret 'emotion'. Our brains narrow things down prior to conscious evaluation. It's their speciality. If you hacked out the limbic system you would still be left with a whole bunch of cortex that is good at narrowing things down without conscious evaluation. In fact, if you hacked out the frontal lobes you would end up with tissue that retained the ability to narrow things down without being able to consciously evaluate anything.
pjeby:
The point of emotions -- which I see I failed to make sufficiently explicit in this post, from the frequent questions about it -- is that their original purpose was to prepare the body to take some physical, real-world action... and thus they were built in to our memory/prediction systems long before we reused those systems to "think" or "reason" with. Brains weren't originally built for thinking -- they were built for emoting: motivating co-ordinated physical action.
rhollerith:
Although these days Roko is probably uninterested in whether I agree with him, I agree with that passage. According to my definition, "epistemically rational" means "effective at achieving one's goals". If the goals are incompatible with my goals, I'm going to hope that the agent remains epistemically irrational. (Garcia used "intelligent" and "ethical" for my "epistemically rational" and "has goals compatible with my goals".) Since 1971, Garcia's been stressing that increasing a person's epistemic rationality increases that person's capacity for good and capacity for evil, so you should try to determine whether the person will do good or do evil before you increase the epistemic rationality of the person. (Of course your definition of "good" might differ from mine.) The smartest person (Ph. D. in math from a top program, successful entrepreneur) I ever met before I met Eliezer was probably unethical or evil. I say "probably" only to highlight that one cannot be highly confident of one's judgement about someone's ethics or evilness even if one has observed them closely. But most people here would probably agree with me that this person was unethical or evil.
Roko:
no! not at all.
jimrandomh:
Rationality can be bad when it's given to an agent with undesirable goals, but your own goals are always good to you, so where your own thoughts are concerned, being 'rational' means they're good and being 'irrational' means they're bad. I think the article's statement was meant to apply only to thoughts evaluated from the inside.
pre:

Even Spock's most famous gesture, that single raised eyebrow of his, is an expression of puzzlement or condescension.  'Course, he always claimed to have emotions under check rather than wiped out.

I like the idea of Kai as a properly emotionless rationalist.  A robot.  My friend just called his newborn "Kai", but he's never seen Lexx.

I often figure that if you take emotion away from people you get abulia rather than rationality anyway.

Annoyance:
"Course, he always claimed to have emotions under check rather than wiped out." Precisely. The few times in which he openly displayed his emotions were those in which they were so strong to be overwhelming. For example: his exuberance at discovering that Kirk was alive, instead of having been killed by Spock during ritual battle, in "Amok Time". Spock was generally played as being profoundly controlled and reserved. It's not that he didn't possess emotions, but that they kept private and prevented from interfering. The original series is somewhat inconsistent in this, though, as different writers saw things in different ways.
Paul Crowley:
That is surely only his second-most famous gesture, after the Vulcan salute.
[anonymous]:

David Hume summed it up well: "Reason is, and ought only to be, the slave of the passions."

Eliezer tells us that "Rationalists should WIN". But you can easily substitute 'achieve whatever makes them happy' for 'win', once again reinforcing the importance of emotions. Our passions are ultimately what drive us; rationality is just taking the best available information into account when trying to achieve them.

quiescent:
Yes, except the belief that the polygraph is accurate. It's almost useless. http://www.nap.edu/openbook.php?record_id=10420&page=212
pjeby:
I didn't say it was "accurate". I was merely indicating that thoughts have physical influence on the body, not that you can use them to tell what someone is thinking or whether they're telling the truth.
quiescent:
Sorry for nitpicking, never mind.

If you are interested in akrasia, you must read George Ainslie's "Breakdown of Will", which gives an economic account of akrasia based on the strong empirical evidence for hyperbolic discounting and the idea of intertemporal bargaining. See picoeconomics.com

pjeby:
Those ideas are certainly meaningful, but I don't talk about them much any more. For practical purposes, you don't actually need to understand the discounting curve -- it suffices to know that you need to use present-tense representations of experience when making decisions... as long as you consistently act in accord with that knowledge. And knowing the hyperbolic curve equation doesn't provide any additional motivation for you to take the necessary action.

By the way, here's an example of how to use present-tense representations, combined with positive somatic markers, to create immediate positive motivation: http://thinkingthingsdone.com/2008/07/thoughts-into-action.html

(There's actually a lot more to "present tense representation" than merely counteracting the discount curve, though, and I'll probably talk more about that in the later posts of this series. For now, I need to get back to the prep work for the workshop I'm doing on Saturday, though I'll still be reading and commenting.)
Paul Crowley:
I'm confused now, because the way you discuss it here it sounds like you read Ainslie's book some time ago, but in the post you say you only very recently learned the word akrasia. If you haven't read the book, I again urge you to -- there's a lot more to it than just presenting the discount curve; there's a whole theory that sets out how our wacky discounting curve leads to all sorts of behaviours, like making rules for ourselves. Certainly if you're actively trying to make a theory of akrasia, doing so without making sure you're thoroughly familiar with this work would seem like a great mistake to me.
pjeby:
No, I haven't read the book -- I've just encountered discussions of the discount curve before. And I read the precis and a couple articles available on the site you linked, and find his rules and bargaining model to be massively overcomplicated, compared to what you need to know to achieve actual results. From my POV, it's like he's trying to explain a word processor by discussing pixels, instead of fonts and character buffers.

IOW, his model actively distracts one from knowing anything useful about how the human platform generates the results it gets, or how to make the platform DO anything. It's like trying to build a model of health by discussing how to work around symptoms, instead of actually curing any diseases. And it perpetuates the notion that you need to (and can) work around your "interests" at the conscious level, instead of simply adjusting the interests directly -- i.e., it's a perpetuation of "far" (extrapolative) thinking in a place where "near" (directly-associative) thinking is desperately needed.

That having been said, there are some things he gets right: we do have conflicting interests, and they do more or less interact in the manner described. It's just that knowing that as an isolated fact doesn't tell you anything: it's like knowing a thing's emergent properties, but not the rules that generate those properties. (Also, his ideas about appetite moderation and satiation are interesting, so I do intend to study that further, to see if it leads to anything useful. Likewise some of his thoughts on dissociation.)

I'm not making a theory of akrasia; I've been reverse-engineering fixes for it. That means I've been developing a practical model that supports predictions I can test in myself and my clients, to produce quick results. That's not quite the same thing as developing an accurate theoretical model. You might say I'm making a street map rather than a terrain map of the same territory: it might not be "accurate" in a literal sense, but it

Based on this, you may want to call it "Trope and Liberman's near/far theory," rather than attributing it to Robin Hanson.

Paul Crowley:
Trope and Liberman's Construal Level Theory. http://www.psych-it.com.au/Psychlopedia/article.asp?id=79

There are some good points in this post. However, you have constructed an unwieldy overloading of the word emotion, forging it into the phlogiston of your theory. Taboo "emotion". When you describe the quite real operations performed by the human mind, consisting in assigning properties to things and priming to see some properties more easily or at all, you bless this description with the action of emotion-substance for some reason.

Somatic markers are effectively a kind of cached thought. They are, in essence, the "tiny XML tags of the mind", t

... (read more)
pjeby:
By emotion, I mean, "that which controls the macro-physiological state of the body across multiple control systems, whose effects may be observed through kinesthetic awareness, and which is not a product of direct conscious effort to influence that state." Or, in simpler words, "feelings". ;-)

Evolutionarily, I propose that the function of emotion is to prepare the body for co-ordinated action of some kind - for example, fear prepares for fight/flight and triggers heightened sensory focus. Other emotions are more cerebral (e.g. the "aha" sensation), but still can be perceived in physical form, often still having externally visible effects, even to the naked eye.

The reason is that brains were not created for us to perform reasoning, they were created to classify things by emotion -- that is, to prepare the body for responses appropriate to recognized external events. It's important to remember that thinking arose after simple memory-prediction-action chains, and that it's built on top of that legacy system. That's why emotion (using the definition I gave above) is critical: tagging things with emotions and replaying those emotions upon recall is the primary substance and function of brains. Yes, there are goal subsystems and all that... but that's another system (like "thinking") that's layered on top of the memory-prediction-action chain.

Certainly -- that's later in the sequence. It was going to be next, but yours and Yvain's comments make me think that maybe I need to get a bit more explicit about the evolutionary chain here, including the memory-prediction-action core, although maybe I can work that in at the beginning.
[anonymous]:
Really, taboo emotion, taboo feelings, taboo anything that allows "all things that are not conscious reasoning" to be compressed to a single word.
pjeby:
I don't understand your request. Do you want me to list every possible somatic marker? (Note, btw, that different animals have different somatic marker hierarchies, so that would be a pretty extensive list, if I were even able to compile it.)

In my work, I rarely need to distinguish the nature of an emotion in any finer degree than "toward" or "away", "good" or "bad". The difference between (say) somebody feeling "terrible" about their work or "awful" is not important to me, nor do I care what specific somatic markers are involved in marking those concepts either across human beings or even within any given single human being.

However, it is important for the person experiencing that marker to be able to identify the physical components of it, in order to be able to test whether or not an intervention I suggest has actually removed the link between a concept and the marker that gets automatically played back when the concept is thought of. (Since the marker is a preparation for action -- including actions such as "hesitation" -- changing the marker also changes the behavior associated with the concept... but the markers can be tested much more quickly than full-blown behaviors, allowing for faster feedback in cases where more than one technique might be relevant.)

Thus, "emotion" to me is a testable and predictable concept that governs human motivation in a meaningful way. If somebody wants to give me a better word to use to describe the thing upon which my interventions operate and manifest as physical (muscular, visceral, etc.) sensations in the body, then by all means, suggest away.

I am not a psychology researcher -- I help people to fix motivation problems and make personality changes. My work is not to "prove" that a particular hypothesis or physical mechanism is in effect in human beings; it's to identify practical techniques, and to devise useful models for understanding how those techniques operate and by which new techniques can be developed. Of cours
Vladimir_Nesov:
You have already said this in the article, and I basically agree with this model. But it doesn't follow that the categories/responses/tags are in any sense simple. They have the structure of their own, the structure as powerful as any piece of imagination. The structure of these "tags" has complexity still beyond the reach of any scientific investigation hitherto entered upon. ;-) And for this reason it's an error to write them off as phlogiston, even if you proceed with describing their properties.
pjeby:
I'm treating emotion -- or better, somatic markers -- as a category of thing that is useful to know about. But I have not really needed to have finer distinctions than "good" or "bad", for practical purposes in teaching people how to modify their markers and change their beliefs, motivations, etc. So, if you're saying I have too broad a category, I'm saying that in practice, I haven't needed to have a smaller one. Frankly, it seems to me that perhaps some people are quibbling about the word "emotion" because they have it labeled "bad", but I'm also using it to describe things they have labeled "good". Ergo, I must be using the word incorrectly. (I'd be happy to be wrong about that supposition, of course.) From my perspective, though, it's a false dichotomy to split emotion in such a way -- it overcomplicates the necessary model of mind, rather than simplifying it.
Vladimir_Nesov:
I don't believe anyone is thinking that.
jimrandomh:
I agree that the word "emotion" as it's conventionally used is different from how it's used here, and overloading the term serves to confuse things, but there's a relation between the two meanings that's worth exploring. To summarize pjeby's essay as best I can: we generate propositions, which when we think about them activate concepts like "useful" or "truthy", which are special cases of "good", or else activate concepts like "convoluted" or "absurd", which are special cases of "bad", and whether good or bad markers are active determines whether we continue along the same line or purge it from our working memory.

pjeby treats the goodness or badness of a concept in memory as being emotion, but conventional use of the word emotion refers instead to an aspect of mental state that adjusts the perceived goodness and badness of things as they are retrieved from or recorded to memory. This may be the mechanism by which concepts get tagged in the first place, but the relation between these two meanings is complex enough that assigning them the same word can lead to false conclusions.

Also, there are almost certainly more concepts basic to cognition than just the good/bad spectrum. Other possible tag spectra would be calm/excited, near/far, and certain/uncertain.
pjeby:
Yes, and I'm further arguing that these markers are somatic -- they exist to effect physical changes in the body. I don't even begin to understand this sentence, since in my view, the goodness or badness is represented by emotion - i.e. a somatic marker. And the markers are somatic because in an evolutionary context, goodness or badness had to do with moving towards or away from things: I can eat that, that will eat me, that's a potential mate, etc. In a sense, that's more or less the "root" system from which all other markers derive, although it's a mistake to treat it like some sort of logical hierarchy, when in fact it's just a collection of kludges upon kludges (like most everything else that evolution does).
jimrandomh:
First I should clarify my rather ambiguous remark about mental states affecting perceived goodness and badness as things are retrieved or recorded from memory. What I mean is, the markers which we assign things depend partially on our state of mind. For example, we think of some things as dangerous (tigers, guns, ninjas), and some things as not-dangerous (puppies, phones, secretaries), but some things could go either way (spiders, bottles, policemen), depending on how they're interpreted. If you're feeling safe, then you'll tend to label the border cases as safe; if you're feeling frightened for unrelated reasons, the border cases will come up as dangerous. In other words, priming applies to somatic markers too, not just semantic ones.

Or, as I put it in my previous post, emotional state adjusts the perceived goodness and badness of things as they are retrieved from memory. If every time you think of something you feel frightened, then you will come to think of that thing as scary, even if the only reason you were frightened at the time was because of some irrelevant other thing. This is what I meant by saying that emotional state affects perceived goodness and badness as it's recorded to memory.

I'm not so sure about this. They certainly effect behaviors, and those behaviors may have physiological ramifications, but many markers have no effect or only indirect effects. Or you could say that each mechanism in the mind exists to support the body, since they co-evolved, but that would be like saying that my liver exists to support my left thumb; all parts of the body are interdependent, the brain included.
Roko:
Agreed. I second the request to either taboo "emotion" or define it more precisely. Agreed. The human brain is too much of a mess to imagine subtracting "all traces of emotion" and still have a human brain. Also, the fuzziness of the word emotion makes it hard to decide the truth of such statements.

Are there sequels to this post?

I agree with the other posts. I had a distinctly negative somatic marker when I read the word 'emotion', and this discomfort made it impossible for me to carefully read the rest of the post. If I was required to (say, for work), then I would have to wait until the negative response attenuated -- usually it takes half a day or so to willfully erase a somatic marker.

[anonymous]:
I find it takes effort to distinguish between the dogmatic enforcement of particular uses of brain related concepts like 'emotion' and the actual insights being shared.
Jack:

I'm not familiar with the psychological literature on emotions, but it's a little counter-intuitive (I think my brain is tagging it as annoying) to use the word emotions to describe all of these different tags. Maybe the process of tagging something "morally obligatory" is indistinguishable from tagging something "happy" on an fMRI, but in common parlance and, I think, phenomenologically, the two are different. Different enough to justify using a word other than emotion (which traditionally refers to a much smaller set of experiences). It i... (read more)

pjeby:
You bet... but both are going to be tagged with somatic markers that are to some extent universal... and the same term may have both negative and positive markers attached.

I think, though, that you are thinking "morally obligatory" somehow allows you to cheat and pretend that you arrived at an idea through pure reasoning, when in fact, "morally obligatory" is just a word pasted on top of a different set of somatic markers. For example, it may have the same somatic markers that another person would call "righteous indignation", or perhaps "disgust"... or maybe even something representing "elevation" (See Haidt's "Happiness Hypothesis"). The fact that we put different words on the same somatic markers doesn't magically make them pure.

OTOH, if all you meant is that "happy" is likely to be more long-lived than "morally obligatory", I'm inclined to agree, subject to the caution that verbal labels are not somatic markers... and there exist people with negative somatic markers associated with good and happy things -- for example, if they believe those things cannot be attained by them. I'll talk more about the relationship between somatic markers and toward/away motivation in future posts.

I thought Eliezer had already more-or-less established this in his OB posts. In other words, yes. Human values are human values because they're based on human feelings. And our moral reasoning is motivated reasoning... not just because it's influenced by emotion, but also because verbal reasoning itself appears to have been evolved for the specific purpose of manipulating other people's emotions, while defending against others' attempts at manipulation. But now I'm getting ahead of the series again.

Damasio's view of the brain is very interesting stuff. His book Descartes' Error is a fairly easy introduction to it.

This is my view of why the brain and reasoning work off usefulness and emotion.

Consider the gene's-eye view of the brain. You want to control what a very complex and changeable system does so that it will propagate more of you, so you find a way to hook what and how that system behaves into signals from the body, such as hunger, discomfort and desire. Because you can directly control those signals, you can get it to do what you want. The genes do... (read more)

So: Deep Blue has emotions?!?

It seems like a definitional debate over what the term "emotion" means - without actually offering any definitions.

pre:
Does Deep Blue have emotions? Well, as I understand the way it works, it does attach some kinda value to how 'good' any given board position is, then works through the tree of positions and finds the route to the 'best' of those positions.

Is that value an emotion? Well, no. In a very single-dimensional way it might be a model of one, though. I assume it's probably a single real number, maybe even an integer, rather than a complex set of semantic associations.

Deep Blue is seeking just "WIN!" and labelling potential board positions accordingly, whereas humans are seeking "Fun" and "Happy" and "Enough sex" and "Intellectually Interesting" and "Not scary" and god knows how many other dimensions too.
pjeby:
Most of those dimensions can actually be classified as "towards" or "away", though, which will be part of the subject of the next post in the series. The important distinction for humans, though, is that emotions are "somatic markers" -- meaning that they are distinctions in the body, for purposes of organizing action responses. They aren't arbitrary scores, but more like "action stances" of varying degrees. So yes, they're multi-dimensional and all of the categories you mention (e.g. "intellectually interesting" and "enough sex") qualify... but they also largely group into (and layer on top of the machinery for) the somewhat-more-fundamental operators of "toward" and "away".
pjeby:
Sort of. It has hardware support for scoring the value of specific positions... which is actually an awful lot like the brain's sorting and tagging, albeit considerably more crude. Which is one reason I like the "somatic marker" term, when we're talking about this -- it highlights their nature as action postures in the physical body, being used as a scoring and sorting system. The fact that we call some of these markers "emotions" isn't really all that relevant. (Also, "somatic marker" helps to avoid some rationalists' existing negative emotional tags on the idea of "emotions" being involved in reasoning.)
Kaj_Sotala:
I think what Fellous says about "emotions" in machines is pretty good. As summarized by Browne:

While I agree with the gist, I'm looking forward to a more detailed vision of emotions. This current post gives the false impression that emotions are neatly symmetrical and one-dimensional (good-bad). In reality there are multiple dimensions to emotions (desirable-undesirable, pleasurable-displeasing), and they're not clearly symmetrical. If fear is the symmetrical opposite of desire, then what is disgust?

Emotions are action triggers and regulators that existed way before cognition did. We might mistakenly believe that they help our cognition by sorting stimuli in go... (read more)

For the embodiment of pure rationality, why not simply a computer? Everyone knows one, and we can all see that you put whatever you want in one end, depending on your goals and values, and it very rationally obeys those commands to the letter, without taking initiative. Well, it used to be that way, at least.

I enjoyed reading this, pjeby. It answered and tied together a lot of the things I'd wondered since I started reading about artificial intelligence. I won't spell out the relationship between your post and the issues (this will be long anyway...), but I'll list the connections I saw and what it brought to mind:

-How evolution's "shards of desire" translate into actions.

-What it would mean to, as evolutionary psychology attempts to do, "explain emotions by the behaviors they induce". And similarly, what feelings an animal would have t... (read more)

[anonymous]:

You mention that this article is part of a series. Are you still planning to write those other articles on LW?

Read Diane Duane's "Spock's World". It goes to great lengths to correct the error you're making.

Among other things, it suggests that the word usually translated as "suppression of emotion" actually means something closer to "passion's mastery", and that the Vulcan ethos is to recognize and compensate for emotions instead of, as many seem to believe, denying them.

Also, as awesome as Kai is, Data is clearly a better example of a functioning rational being without emotions. Data isn't lacking in preferences, goals, and motivati... (read more)

thomblake:
I completely agree with you about Data. pjeby is begging a question w.r.t. metaethics - he assumes that judgments of 'good' and 'bad' have only emotional content, apparently based solely on the fact that they are correlated with emotions (we call that emotivism - it's not very popular amongst ethicists).
pjeby:
I didn't say anything about meta-ethics; I said that human brains require emotion in order to prioritize their thinking... no matter how much you might like the case to be otherwise. The brain with which you seek to devise some sort of extra-emotional calculation requires emotion in order to perform those calculations. That doesn't say anything about the content of the calculations themselves, however. Your brain needs emotion to learn chess or play it... but that doesn't mean that chess itself is emotional. So there's your escape hatch.
[anonymous]:

Kai has no goals or cares of his own, frequently making such comments as "the dead do not want anything", and "the dead do not have opinions". He mostly does as he's asked, but for the most part, he just doesn't care about anything one way or another.

The way Kai is described certainly matches what an unemotional and goalless yet powerfully rational creature would be. Yet somehow, the authors manage to slip in a remarkable amount of goal direction and 'caring'. We just can't help but assume that amoral, inhuman creatures would take on human characteristics if we socialised them enough.

Vladimir_Nesov:
Emotionless (in the normal sense of the word), maybe. Goalless, no. What defines the decisions to follow requests? What defines the specific manner in which they are followed? How is your request to be understood? These all depend on how the agent in question sees the world, and on its preference to act this way and not another. The goal-less agent is not an apathetic zombie servant, but a rock.
pjeby:
Right -- that's why I said Kai was merely an improvement on Spock, not that he was an accurate model.