To develop methods of teaching rationality skills, you need to learn to focus on mental events that occur in 5 seconds or less.  Most of what you want to teach is directly on this level; the rest consists of chaining together skills on this level.

As our first example, let's take the vital rationalist skill, "Be specific."

Even with people who've had moderate amounts of exposure to Less Wrong, a fair amount of my helping them think effectively often consists of my saying, "Can you give me a specific example of that?" or "Can you be more concrete?"

A couple of formative childhood readings that taught me to be specific:

"What is meant by the word red?"
"It's a color."
"What's a color?"
"Why, it's a quality things have."
"What's a quality?"
"Say, what are you trying to do, anyway?"

You have pushed him into the clouds.  If, on the other hand, we habitually go down the abstraction ladder to lower levels of abstraction when we are asked the meaning of a word, we are less likely to get lost in verbal mazes; we will tend to "have our feet on the ground" and know what we are talking about.  This habit displays itself in an answer such as this:

"What is meant by the word red?"
"Well, the next time you see some cars stopped at an intersection, look at the traffic light facing them.  Also, you might go to the fire department and see how their trucks are painted."

-- S. I. Hayakawa, Language in Thought and Action

and:

"Beware, demon!" he intoned hollowly.  "I am not without defenses."
"Oh yeah?  Name three."

-- Robert Asprin, Another Fine Myth

And now, no sooner does someone tell me that they want to "facilitate communications between managers and employees" than I say, "Can you give me a concrete example of how you would do that?"  Hayakawa taught me to distinguish the concrete and the abstract; and from that small passage in Asprin, I picked up the dreadful personal habit of calling people's bluffs, often using the specific phrase, "Name three."

But the real subject of today's lesson is how to see skills like this on the 5-second level.  And now that we have a specific example in hand, we can proceed to try to zoom in on the level of cognitive events that happen in 5 seconds or less.

Over-abstraction happens because it's easy to be abstract.  It's easier to say "red is a color" than to pause your thoughts for long enough to come up with the example of a stop sign.  Abstraction is a path of least resistance, a form of mental laziness.

So the first thing that needs to happen on a timescale of 5 seconds is perceptual recognition of highly abstract statements unaccompanied by concrete examples, accompanied by an automatic aversion, an ick reaction - this is the trigger which invokes the skill.

Then, you have actionable stored procedures that associate to the trigger.  And "come up with a concrete example" is not a 5-second-level skill, not an actionable procedure, it doesn't transform the problem into a task.  An actionable mental procedure that could be learned, stored, and associated with the trigger would be "Search for a memory that instantiates the abstract statement", or "Try to come up with hypothetical examples, and then discard the lousy examples your imagination keeps suggesting, until you finally have a good example that really shows what you were originally trying to say", or "Ask why you were making the abstract statement in the first place, and recall the original mental causes of your making that statement to see if they suggest something more concrete."

Or to be more specific on the last mental procedure:  Why were you trying to describe redness to someone?  Did they just run a red traffic light?

(And then what kind of exercise can you run someone through, which will get them to distinguish red traffic lights from green traffic lights?  What could teach someone to distinguish red from green?)

When you ask how to teach a rationality skill, don't ask "How can I teach people to be more specific?"  Ask, "What sort of exercise will lead people through the part of the skill where they perceptually recognize a statement as overly abstract?"  Ask, "What exercise teaches people to think about why they made the abstract statement in the first place?"  Ask, "What exercise could cause people to form, store, and associate with a trigger, a procedure for going through hypothetical examples until a good one or at least adequate one is invented?"

Coming up with good ways to teach mental skills requires thinking on the 5-second level, because until you've reached that level of introspective concreteness, that fineness of granularity, you can't recognize the elements you're trying to teach; you can't recognize the patterns of thought you're trying to build inside a mind.

To come up with a 5-second description of a rationality skill, I would suggest zooming in on a concrete case of a real or hypothetical person who (a) fails in a typical fashion and (b) successfully applies the skill.  Break down their internal experience into the smallest granules you can manage:  perceptual classifications, contexts that evoke emotions, fleeting choices made too quick for verbal consideration.  And then generalize what they're doing while staying on the 5-second level.

Start with the concrete example of the person who starts to say "Red is a color" and cuts themselves off and says "Red is what that stop sign and that fire engine have in common."  What did they do on the 5-second level?

  1. Perceptually recognize a statement they made as overly abstract.
  2. Feel the need for an accompanying concrete example.
  3. Be sufficiently averse to the lack of such an example to avoid the path of least resistance where they just let themselves be lazy and abstract.
  4. Associate to and activate a stored, actionable, procedural skill, e.g:
    4a.  Try to remember a memory which matches that abstract thing you just said.
    4b.  Try to invent a specific hypothetical scenario which matches that abstract thing you just said.
    4c.  Ask why you said the abstract thing in the first place and see if that suggests anything.

and

  • Before even 1:  They recognize that the notion of "concrete" means things like folding chairs, events like a young woman buying a vanilla ice cream, and the number 17, i.e. specific enough to be visualized; and they know "red is a color" is not specific enough to be satisfying.  They perceptually recognize (this is what Hayakawa was trying to teach) the cardinal directions "more abstract" and "less abstract" as they apply within the landscape of the mind.

If you are thinking on this level of granularity, then you're much more likely to come up with a good method for teaching the skill "be specific", because you'll know that whatever exercise you come up with, it ought to cause people's minds to go through events 1-4, and provide examples or feedback to train perception 0.

Next example of thinking on the 5-second scale:  I previously asked some people (especially from the New York LW community) the question "What makes rationalists fun to be around?", i.e., why is it that once you try out being in a rationalist community you can't bear the thought of going back?  One of the primary qualities cited was "Being non-judgmental."  Two different people came up with that exact phrase, but it struck me as being not precisely the right description - rationalists go around judging and estimating and weighing things all the time.  (Noticing small discordances in an important description, and reacting by trying to find an exact description, is another one of those 5-second skills.)  So I pondered, trying to come up with a more specific image of exactly what it was we weren't doing, i.e. Being Specific, and after further visualization it occurred to me that a better description might be something like this:  If you are a fellow member of my rationalist community and you come up with a proposal that I disagree with - like "We should all practice lying, so that we feel less pressure to believe things that sound good to endorse out loud" - then I may argue with the proposal on consequentialist grounds.  I may judge.  But I won't start saying in immense indignation what a terrible person you must be for suggesting it.

Now I could try to verbally define exactly what it is we don't do, but this would fail to approach the 5-second level, and probably also fail to get at the real quality that's important to rationalist communities.  That would merely be another attempt to legislate what people are or aren't allowed to say, and that would make things less fun.  There'd be a new accusation to worry about if you said the wrong thing - "Hey!  Good rationalists don't do that!" followed by a debate that wouldn't be experienced as pleasant for anyone involved.

In this case I think it's actually easier to define the thing-we-avoid on the 5-second level.  Person A says something that Person B disagrees with, and now in Person B's mind there's an option to go in the direction of a certain poisonous pleasure, an opportunity to experience an emotional burst of righteous indignation and a feeling of superiority, a chance to castigate the other person.  On the 5-second level, Person B rejects this temptation, and instead invokes the procedure of (a) pausing to reflect and then (b) talking about the consequences of A's proposed policy in a tone that might perhaps be worried (for the way of rationality is not to refuse all emotion) but nonetheless is not filled with righteous outrage and indignation which demands that all others share that indignation or be likewise castigated.

(Which in practice, makes a really huge difference in how much rationalists can relax when they are around fellow rationalists.  It's the difference between having to carefully tiptoe through a minefield and being free to run and dance, knowing that even if you make a mistake, it won't socially kill you.  You're even allowed to say "Oops" and change your mind, if you want to backtrack (but that's a whole 'nother topic of 5-second skills)...)

The point of 5-second-level analysis is that to teach the procedural habit, you don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers (by which the indignant demand that others agree with their indignation), which is unfortunately how I tended to write back when I was writing the original Less Wrong sequences.  Rather, you try to come up with exercises which, if people go through them, cause them to experience the 5-second events - to feel the temptation to indignation, and to make the choice otherwise, and to associate alternative procedural patterns such as pausing, reflecting, and asking "What is the evidence?" or "What are the consequences?"

What would be an exercise which develops that habit?  I don't know, although it's worth noting that a lot of traditional rationalists not associated with LW also have this skill, and that it seems fairly learnable by osmosis from watching other people in the community not be indignant.  One method that seems worth testing would be to expose people to assertions that seem like obvious temptations to indignation, and get them to talk about evidence or consequences instead.  Say, you propose that eating one-month-old human babies ought to be legal, because one-month-old human babies aren't as intelligent as pigs, and we eat pigs.  Or you could start talking about feminism, in which case you can say pretty much anything and it's bound to offend someone.  (Did that last sentence offend you?  Pause and reflect!)  The point being, not to persuade anyone of anything, but to get them to introspectively recognize the moment of that choice between indignation and not-indignation, and walk them through an alternative response, so they store and associate that procedural skill.  The exercise might fail if the context of a school-exercise meant that the indignation never got started - if the temptation/choice were never experienced.  But we could try that teaching method, at any rate.

(There's this 5-second skill where you respond to mental uncertainty about whether or not something will work, by imagining testing it; and if it looks like you can just go test something, then the thought occurs to you to just go test it.  To teach this skill, we might try showing people a list of hypotheses and asking them to quickly say on a scale of 1-10 how easy they look to test, because we're trying to teach people a procedural habit of perceptually considering the testableness of ideas.  You wouldn't give people lots of time to think, because then that teaches a procedure of going through complex arguments about testability, which you wouldn't use routinely in real life and would end up associating primarily to a school-context where a defensible verbal argument is expected.)

I should mention, at this point, that learning to see the 5-second level draws heavily on the introspective skill of visualizing mental events in specific detail, and maintaining that introspective image in your mind's eye for long enough to reflect on it and analyze it.  This may take practice, so if you find that you can't do it right away, instinctively react by feeling that you need more practice to get to the lovely reward, instead of instinctively giving up.

Has everyone learned from these examples a perceptual recognition of what the "5-second level" looks like?  Of course you have!  You've even installed a mental habit that when you or somebody else comes up with a supposedly 5-second-level description, you automatically inspect each part of the description to see if it contains any block units like "Be specific" which are actually high-level chunks.

Now, as your exercise for learning the skill of "Resolving cognitive events to the 5-second level", take a rationalist skill you think is important (or pick a random LW post from How To Actually Change Your Mind); come up with a concrete example of that skill being used successfully; decompose that usage to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures etcetera; check your description to make sure that each part of it can be visualized as a concrete mental process and that there are no non-actionable abstract chunks; come up with a teaching exercise which seems like it ought to cause those sub-5-second events to occur in people's minds; and then post your analysis and proposed exercise in the comments.  Hope to hear from you soon!

The 5-Second Level
328 comments

IAWYC, and introspective access to what my mind was doing on this timescale was one of the bigger benefits I got out of meditation. (Note: Probably not one of the types of meditation you've read about). However, I don't think you've correctly identified what went wrong in the example with red. Consider this analogous conversation:

What's a Slider? It's a Widget.
What's a Widget? It's a Drawable.
What's a Drawable? It's an Object.

In this example, as with the red/color example, the first question and answer was useful and relevant (albeit incomplete), while the next two were useless. The lesson you seem to have drawn from this is that looking down (subclassward) is good, and looking up (superclassward) is bad. The lesson I draw from this is that relevance falls off rapidly with distance, and that each successive explanation should be of a different type. It is better to look a short distance in each direction rather than to look far in any one direction. Compare:

X is a color. This object is X. (One step up, one step down)
X is a color. A color is a quality that things have. (Two steps up)
This object is X. That object is also X. (Two steps down)

I would expect the first of these three explanations to succeed, and the other two to fail miserably.

"One step up and one step down" sounds like a valuable heuristic; it's what I actually did in the post, in fact. Upvoted.

A few months later, I've been teaching Anna and Luke and Will Ryan and others this rule as the "concrete-abstract pattern". Give a specific example with enough detail that the listener can visualize it as an image rather than as a proposition, and then describe it on the level of abstraction that explains what made it relevant. I.e., start with an application of Bayes's Theorem, then show the abstract equation that circumscribes what is or isn't an example of Bayes's Theorem.
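To make that ordering concrete (the numbers below are invented purely for illustration): suppose 1% of patients have a disease, a test catches 80% of real cases, and it false-alarms on 10% of healthy patients. Out of 1000 patients, 10 are sick and 8 of those test positive; 990 are healthy and 99 of those also test positive. The concrete application is

P(sick | positive test) = 8 / (8 + 99) ≈ 0.075

and only then the abstract equation that circumscribes every example of this kind:

P(A | B) = P(B | A) P(A) / [P(B | A) P(A) + P(B | ~A) P(~A)]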

[-]TrE180

Also, it is very important to give counter-examples: 'This crow over there belongs to the bird category. But the plane in the sky and the butterfly over there do not.' Or, fitting the 'red' example more closely: 'That stop sign and that traffic light are red. But this other traffic sign (can't think of an example) isn't.'

And as well, this could be done with categories. 'Red is a color. Red is not a sound.'

I guess this one has something to do with confirmation bias, as cwillu suggested.

[-]jimmy411

I'm a big fan of breaking things down to the finest grain thoughts possible, but it still surprises me how quickly this gets complicated when trying to actually write it down.

http://lesswrong.com/lw/2l6/taking_ideas_seriously/

Example: Bob is overweight and an acquaintance mentions some "shangri-la" diet that helps people lose weight through some "flavor/calorie association". Instead of dismissing it immediately, he looks into it, adopts the diet, and comfortably achieves his desired weight.

1) Notice the feeling of surprise when encountering a claim that runs counter to your expectations.

2) Check in far mode the importance of the claim if it were true by running through a short list of concrete implications (eg "I can use this diet and as a result, I can enjoy exercise more, I can feel better about my body, etc")

  • If any thoughts along the lines of "but it's not true!" come up, remind yourself that you need to be able to clearly understand the implications of the statement and its importance separately from deciding its truth value, and that this is good practice even if this example is obviously false.

3) Imagine reaping the benefits in ... (read more)

Upvoted for being the only one to try the exercise.

"Be specific" is a nice flinch, I've always had it and it helps a lot. "Don't moralize" is a flinch I learned from experience and it also helps. Here's some other nice flinches I have:

  1. "Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.

  2. "Don't hesitate." With some effort I got a working version of this flinch for tasks like programming, drawing or physical exercise. If something looks like it would make a good code fix or a good sketch, do it immediately. Would be nice to have this behavior for all other tasks too, but the change would take a lot of effort and I'm hesitating about it (ahem).

  3. "Don't take on debt." Anything that looks even vaguely similar to debt, I instinctively run away from it. Had this flinch since as far as I can remember. In fact I don't remember ever owing >100$ to anyone. So far it's served me well.

8MartinB
A nice hack from GTD is to keep a 'wait-for' list. I use that for orders, reactions to inquiries, everything where someone has to get back to me. Put it on a list and forget about it. Extra points if you do not check the arrival time of your internet purchases at all during the first week of waiting.
7gjm
I have the same debt-flinch, and the same feeling about how well it works, but with one qualification: I was persuaded to treat mortgage debt differently (though I've always been very conservative about how much I'd take on) and that seems to have served me very well too. This isn't meant as advice about mortgages: housing markets vary both spatially and temporally. More as a general point: it's probably difficult to make very sophisticated flinch-triggers, which means that even good flinching habits are likely to have exceptions from time to time, and sometimes they might be big ones.
9taryneast
Agreed. Kiyosaki's "Rich Dad Poor Dad" has lots of good advice about the difference between "good debt" and "bad debt". AFAI recall it boiled down to "only borrow money for assets, not liabilities" ie - good debt is borrowing for things that will continue to make you more money (including your appreciating house or your business) and bad debt is for things like holidays or house redecorating projects - things that simply take cash out of your hand. This has worked pretty well for me so far too.
8gjm
Kiyosaki's "Rich Dad, Poor Dad" has also received some extremely harsh criticism, some of it at least from people who seem to have a clue what they're talking about. I haven't looked at it myself and am not a financial expert, but would advise anyone considering reading it and/or taking Kiyosaki's advice to exercise caution.

The classic takedown of Kiyosaki is from John T. Reed.

4taryneast
Thanks for the link. ok, that made me reconsider entirely. Lots of good points here. I guess I liked the motivational tone of the book - but yep, it looks like his facts are not so hot (and in a lot of cases entirely fictional).
2JohnH
The same can and should be said about any book that purports to advise people on how to become rich. I wish people were required to include in the appendix of such a book their net worth as independently assessed by an external audit, along with tax returns and other filings presented to show that they are wealthy and have actually gained that wealth in the manner described by the book. Even then caution would still be needed: if markets are efficient (or even slightly efficient), then something that provided market-beating returns 3-5 years ago (or however long it has been since they gained their wealth) should be expected to provide only market rates of return currently.
-1BillyOblivion
Is there any financial advisor or financial book that you can recommend without reservation and that people can take without exercising caution?
2gjm
I doubt it. But there are some for which no more caution is needed than could be taken largely for granted with an intelligent bunch of people like the readership of Less Wrong, and some that aren't very approachable by anyone who isn't quite expert already. There's no need to say "exercise caution" about those. It appears that Kiyosaki's book is very approachable and may be very unreliable. That's an especially dangerous combination, if true.
-1Blueberry
The classic is Andrew Tobias, "The Only Investment Guide You'll Ever Need." You can trust it because he's not selling anything and teaches common-sense, conservative advice: no risky speculation or anything.
1BillyOblivion
Sorry, I was attempting to be clever, cynical and hip. This apparently impeded effective communication. Let me rephrase it so that it is more difficult to misunderstand: All financial advice should be received with reservation and taken with caution. Better?
-1MartinB
Ramit Sethi: iwillteachyoutoberich.com. Kiyosaki is nice for some mindset and basic approach, but horrible on the concrete advice. Do not go into buying houses due to his books. My small favorite is George Clason: The Richest Man in Babylon. Then there are more modern books galore. Check out Ramit's recommended readings.
4RHollerith
Only borrow money for assets, not expenses.
1taryneast
The book defines a liability as "something that takes money from your pocket" - so the two can be considered roughly equivalent.
5RHollerith
OK, but that's not the standard definition of a liability used by accountants and such.
2taryneast
Yes, that is discussed in the book. He makes a big deal about the difference. In fact he discusses the seeming inconsistency of accountants putting large items into the "assets" column that do nothing but depreciate in value... I'd argue that the main point of Rich Dad, Poor Dad can be summarised as: 1) assets put money into your pocket, liabilities take money out of it, and 2) you gain wealth by adding to your assets instead of your liabilities. It's roughly equivalent to the dietary advice of "you lose weight by making sure there are more calories being spent than eaten"

Well, it makes me sad to see a very standardized and crisp term like "liability" used in such a confusing and nonstandard way. Especially when there is another equally crisp and very standardized term ("expense") that could be used instead. And I do not want to talk about it anymore.

3Swimmer963 (Miranda Dixon-Luinenburg)
This is what my mother said to me: all types of debt are bad, but mortgage debt is unavoidable. My chosen career field is nursing, which is a pretty reliable income source, so I'm not worried about taking on a mortgage when the time comes.
5[anonymous]
Could you elaborate a bit on that? I noticed that I often wait for small tasks that end up taking a lot of time. For example, I need to compile a library or finish a download and estimate that it won't take long, maybe a few minutes at most. But I find it really hard to just do something else instead of waiting. I can't just go read a book or do some Anki reps. Whenever I tried that, I either have the urge to constantly check up on the blocking task or I get caught up in the replacement (or on reddit). So I end up staring at a screen, doing nothing, just so I don't lose my mental context. At worst, I can sit for half an hour and get really frustrated with myself.

I find that I worry a lot less about checking up on background tasks (compiles, laundry, baking pies, brewing tea, etc.) if I know I'll get a clear notification when the process is complete. If it's something that takes a fixed amount of time I'll usually just set a timer on my phone — this is a new habit that works well for tea in particular. Incidentally, owning an iPhone has done a surprising amount for my effectiveness just by reducing trivial inconveniences for this sort of thing.

For compiles, do something like

$ make; growlnotify -m "compile done!"

or run a script that sends you an SMS or something. This is something that I'm not in the habit of doing, but I just wrote myself a note to figure something out when I get into work on Monday.[1] (For most of my builds it's already taken care of, since it brings up a window when it's done. This would be for things like building the server, which runs in a terminal, and for svn updates, which are often glacial.)

[1] This is another thing that helps me a lot. Write things down in a place that you look at regularly. Could be a calendar app, could be a text file in Dropbox, whatever.
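Generalizing the compile one-liner above into something reusable (a sketch only: the function name is made up, and it assumes growlnotify or an equivalent notifier is on your PATH):

notify_when_done () {
  # run the given command, then announce whether it succeeded or failed
  "$@" && growlnotify -m "done: $*" || growlnotify -m "FAILED: $*"
}

$ notify_when_done make
$ notify_when_done svn update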

3MBlume
I assume someone's already told you you'll be better off with Git?
3taryneast
Not necessarily true. git and svn are suited to slightly different applications. For one thing - sometimes you want One Source of Truth... which svn gives you, and git does not.
4sketerpot
If you have a central git repository to which all contributors have write privileges, you can treat it a lot like a svn-style centralized VCS that just happens to be git. Is there a significant advantage of svn over this kind of git setup?
2matt
Consider… … and Quicksilver.app does this very nicely without your fingers ever leaving the keyboard (if you're making tea… your fingers probably already left the keyboard). Consider also (These suggestions live in mac land. If you live in Windows land, consider moving. If you live in Linux land you'll probably figure out how to do this yourself pretty quickly :)
0RHollerith
I couldn't get growlnotify to work reliably on my Snow Leopard. And some of Growl's preference panes are absurd. And Growl insists on growling at you every time it auto-updates itself, with no way to turn that off. My friend Darius dislikes it, too.
2Antisuji
Is there a better alternative?
1RHollerith
I'll tell you what I do even though it is far from ideal. I have the program play a sound file to notify me. Sound is not the best way for a program to notify me because I have a habit of taking off my headphones, but leaving them plugged in. After you install the free app "Adium" you can find some nice chimes in /Applications/Adium.app/Contents/Resources/Sounds/ I use the following command line to play a chime: open -a VLC /Applications/Adium.app/Contents/Resources/Sounds//TokyoTrainStation.AdiumSoundset/Contact_On.m4a Of course this presupposes you have VLC installed. And the first time I play a chime, there's a delay of a few seconds while VLC loads the chime. ADDED. I also use a visual signal as follows. In the "Hearing" tab on the Universal Access system pref pane, I check the box "Flash the screen when an alert sound occurs". I use the Emacs function DING to generate the aforementioned alert sound. Sorry, I do not know how to generate an alert sound from the shell.
0sullyj3
why not use mplayer for the sound?
0RHollerith
These days I use /usr/bin/afplay. The advantages are (1) lightweight program that loads quickly, (2) installed by default on all Macs.
0Antisuji
Just to follow up: there is indeed a Growl for Windows, and it comes bundled with a growlnotify.exe that I can run from a cygwin bash shell. Rejoice!
2cousin_it
I usually continue coding during long recompiles (over a minute or so), just don't save my edits until it's finished.
1John_Maxwell
You could also make a version control commit before compiling and then use "git stash" or equivalent to save your while-compiling edits.
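A minimal sketch of that workflow (the commit message is just a placeholder):

$ git commit -am "checkpoint before long build"   # snapshot the tracked files the build will see
$ make                                            # long compile; keep editing in another window meanwhile
$ git stash                                       # need the checkpointed tree back? set the new edits aside
$ git stash pop                                   # restore the while-compiling edits when you're ready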
3Swimmer963 (Miranda Dixon-Luinenburg)
Same. And it has also served me well, although maybe not solely because of that preference–I was in a better financial situation to start with than many university students, and I'm a workaholic with a part-time job that I enjoy, and I also enjoy living frugally and don't consider it to diminish my quality of life the way some people do.
2gjm
The trouble with not waiting is that it increases your number of mental context switches, and they can be really expensive. Whether "don't wait" is good advice probably depends on details like the distribution of waiting times, what sort of tasks one's working on, and one's mental context-switch speed.
0Sniffnoy
For purposes of avoiding ambiguity this might be better phrased as "don't block" or "don't busy-wait". Although combined with #2 it might indeed become "don't wait" in the more general sense to some extent!

My attempt at the exercise for the skill "Hold Off On Proposing Solutions"

Example: At a LessWrong meetup someone talks about some problem they have and asks for advice, and someone points out that everyone should explore the problem before proposing solutions. Successful use of the skill involves:

1) Noticing that a solution is being asked for. This is the most important sub-skill. It involves listening to everything you ever hear and sorting it into appropriate categories.

2) Come up with a witty and brilliant solution. This happens automatically.

3) Suppress the urge to explain the solution to everyone, even though it is so brilliant, and will make you look so cool, and (gasp) maybe someone else has thought of it, and you better say it before they do, otherwise it will look like it was their idea!

4) Warn other people to hold off on proposing solutions.

Exercise: Best done in a group, where the pressure to show intelligence is greatest. Read the group a list of questions. Use many different types of questions, some about matters of fact, some about opinion, and some asking for a solution. The first two types are to be answered immediately. The last type is to be met with absolute silence. Anyone found talking after a solution has been requested loses points.

Encourage people to write down any solutions they do come up with. After the exercise is finished, destroy all the written solutions, and forbid discussion of them.

5Alex Flint
Wouldn't it be better to realise right after step (1) that one needs to avoid coming up with solutions and deliberately focus one's mind on understanding the problem? Avoiding verbalization of solutions is good, but they can still pollute your own thinking, even if not others'.

I think that the big skill here is not being offended. If someone can say something and control your emotions, literally make you feel something you had no intention to feel beforehand, then perhaps it's time to start figuring out why you're allowing people to do this to you.

At a basic level anything someone can say to you is either true or false. If it's true then it's something you should probably consider and accept. If it's false then it's false and you can safely ignore/gently correct/mock the person saying it to you. In any case there really isn't any reason to be offended and especially there is no reason to allow the other person to provoke you to anger or acting without thought.

This isn't the same as never being angry! This is simply about keeping control for yourself over when and why you get angry or offended, rather than allowing the world to determine that for you.

Edit - please disregard this post

8wilkox
It seems really, really difficult to convey to people who don't understand it already that becoming offended is a choice, and it's possible to not allow someone to control you in that way. Maybe "offendibility" is linked to a fundamental personality trait.
8loqi
What constitutes a "choice" in this context is pretty subjective. It may be less confusing to tell someone they could have a choice instead of asserting that they do have a choice. The latter connotes a conscious decision gone awry, and in doing so contradicts the subject's experience that no decision-making was involved.
2wilkox
Good point. Reading my comment again, it seems obvious that I committed the typical mind fallacy in assuming that it really is a choice for most people.
0erikerikson
I'd take this differently. I would at least hope that you are claiming there is, in fact, a choice, whether or not the subjective experience of the moment provides any indication of that choice. Stated differently, you could be claiming that there is the possibility of choice for all people, whether or not a person is aware of it or capable of taking advantage of it; that a person can alter himself or herself in order to have the opportunity to choose in such situations. Loqi's feedback seems to me to suggest that individuals who do not believe they have such a "possibility of choice" could have a more positive phenomenological experience of your assertion, and as a result be more likely to integrate the belief into their own belief set and [presumably] gain advantage from encountering it. That is me asserting that Loqi does not appear to be rejecting your assertion, but only suggesting a manner by which it can be improved.
0erikerikson
Of course, Loqi's suggestion could turn out to be less optimal than the harder-to-accept presentation. While the approach you suggest could provide a more subjectively negative experience, the cognitive dissonance could cause the utterance to gain more of the brain's attention, as a more aberrant occurrence in its stimuli, and as a result be worthy of further analysis and consideration. I am generally in favor of delivering notions I believe to be helpful in a manner which can and will be accepted. In some cases, however, others are able and more likely to accept a less than pleasant delivery mechanism. This is contingent upon the audience, of course, as well as on the level of knowledge you have about your audience. In the absence of such knowledge, the gentler approach seems advisable.
4Cayenne
It could be. It seems not just difficult but actually against most cultures on the planet. Consider that crimes of passion, like killing someone when you find them sleeping around on you, often get a lower sentence than a murder 'in cold blood'. If someone says 'he made me angry' we know exactly what that person means. Responding to a word with a bullet is a very common tactic, even in a joking situation; I've had things thrown at me for puns! It does seem like a learn-able skill even so. I did not have this skill when I was a child, but I do have it now. The point in my life at which I learned it seems to roughly correspond to when I was first trained and working in technical support. I don't know if there's a correlation there. In any case, merely being aware that this is a skill may help a few people on this forum to learn it, and I can see only benefit in trying. It is possible to not control anger but instead never even feel it in the first place, without effort or willpower. Edit - please disregard this post
3Sabiola
I imagine you wouldn't have lasted long in tech support if you hadn't learned that skill. :-)
0mendel
And yet, not to feel an emotion in the first place may obscure you to yourself - it's a two-sided coin. To opt to not know what you're feeling when I struggle to find out seems strange to me.
2Cayenne
I think you're misunderstanding what I said. I'm not obscuring my feelings from myself. I'm just aware of the moment when I choose what to feel, and I actively choose. I'm not advocating never getting angry, just not doing it when it's likely to impair your ability to communicate or function. If you choose to be offended, that's a valid choice... but it should also be an active choice, not just the default. I find it fairly easy to be frustrated without being angry at someone. It is, after all, my fault for assuming that someone is able to understand what I'm trying to argue, so there's no point in being angry at them for my assumption. They might have a particularly virulent meme that won't let them understand... should I get mad at them for a parasite? It seems pointless. Edit - please disregard this post
0mendel
Well, it seems I misunderstand your statement, "It is possible to not control anger but instead never even feel it in the first place, without effort or willpower." I know it is possible to experience anger, but control it and not act angry - there is a difference between having the feeling and acting on it. I know it is also possible to not feel anger, or to only feel anger later, when distanced from the situation. I'm ok with being aware of the feeling and not acting on it, but to get to the point where you don't feel it is where I'm starting to doubt whether it's really a net benefit. And yes, I do understand that with understanding / assumptions about other people, stuff that would have otherwise bothered me (or someone else) is no longer a source of anger. You changed your outlook and understanding of that type of situation so that your emotion is frustration and not anger. If that's what you meant originally, I understand now.
0Cayenne
Mostly I don't even feel frustration, but instead sadness. I'd like to be able to help, but sometimes the best I can do is just be patient and try to explain clearly, and always immediately abandon my arguments if I find that I'm the one with the error. Edit - please disregard this post
2wedrifid
I (really) like what you're saying here and it is something I often recommend (where appropriate) to people that have no interest in rationality whatsoever. Well, except for drawing a line at 'true/false' with respect to when it can be wise to take actions to counter the statements. Truth is only one of the relevant factors. This doesn't detract at all from your core point. I extend this philosophy to evaluating socially relevant interactions of others. When things become a public scene that for some reason I care about I do not automatically attribute the offense, indignation or anger of the recipient to be the responsibility of the person who provided the stimulus.
0Cayenne
The true/false isn't the only line, but I feel that it's the most important. If something someone says to or about you is true, then no matter what you should own it in some way. Acknowledge that they're right, try to internalize it, try to change it, but never never just ignore it! (edit: If you're getting mad when someone says something truthful about you, then this should raise other warning flags as well! Examine the issue carefully to figure out what's really happening here.) If the thing they say is false, then don't get mad first! Think it through carefully, and then do the minimum you can to deal with it. The most important thing is to not obsess over it afterward, because if you're doing that you're handing a piece of your life away for a very low or even negative return. Laugh about it, ignore it, get over it, but don't let it sit and fester in your mind. Edit - please disregard this post
4wedrifid
When it comes to making the most beneficial responses feeling anger is almost never useful when you have a sufficient foundation in the mechanisms of social competition, regardless of truth. It tends to show weakness - the vulnerability to provocation that you are speaking of gives an opportunity for one upmanship that social rivals will instinctively hone in on. In terms of the benefits and necessity of making a response it is the connotations that are important. Technical truth is secondary.
2Cayenne
Very true. I didn't mean to suggest that the truth/falsehood line was as useful socially as I believe it is internally. The social reaction you may decide on is mostly independent from truth. Internally, it's important to recognize that truth, since it is vital feedback that can tell you when you may need to change. Edit - please disregard this post
2wedrifid
And, when false, when you may need to change what you do such that others don't get that impression (or don't think they can get away with making the public claim even though they know it is false).

rationalists don't moralize

I like the theory but 'does not moralize' is definitely not a feature I would ascribe to Eliezer. We even have people quoting Eliezer's moralizing for the purpose of spreading the moralizing around!

"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."

In terms of general moralizing tendencies of people who identify as rationalists they seem to moralize slightly less than average but the most notable difference is what they choose to moralize about. When people happen to have similar morals to yourself it doesn't feel like they are moralizing as much.

9Eliezer Yudkowsky
Not everything that is not purely consequentialist reasoning is moralizing. You can have consequentialist justifications of virtue ethics or even consequentialist justifications of deontological injunctions, and you are allowed to feel strongly about them, without moralizing. It's a 5-second-level emotional direction, not a philosophical style. Sigh. This is why I said, "But trying to define exactly what constitutes 'moralizing' isn't going to get us any closer to having nice rationalist communities."

Sigh.

A 5-second method (that I employ to varying levels of success) is whenever I feel the frustration of a failed interaction, I question how it might have been made more successful by me, regardless of whose "fault" it was. Your "sigh" reaction comes across as expressing the sentiment "It's your fault for not getting me. Didn't you read what I wrote? It's so obvious". But could you have expressed your ideas almost as easily without generating confusion in the first place? If so, maybe your reaction would be instead along the lines of "Oh that's interesting. I thought it was obvious but I guess I can see how that might have generated confusion. Perhaps I could...".

FWIW I actually really like the central idea in this post, and arguably too many of the comments have been side-tracked by digressions on moralizing. However, my hunch is that you probably could have easily gotten the message across AND avoided this confusion. My own specific suggestion here is that stipulative definitions are semantic booby traps, so if possible avoid them. Why introduce a stipulative definition for "moralize" when a less loaded phrase like "susp... (read more)

Eliezer, did you mean something different by the "does not get bullet" line than I thought you did? I took it as meaning: "If your thinking leads you to the conclusion that the right response to criticism of your beliefs is to kill the critic, then it is much more likely that you are suffering from an affective death spiral about your beliefs, or some other error, than that you have reasoned to a correct conclusion. Remember this, it's important."

This seems to be a pretty straightforward generalization from the history of human discourse, if nothing else. Whether it fits someone's definition of "moralizing" doesn't seem to be a very interesting question.

4Eliezer Yudkowsky
Agreed.
2wedrifid
I agree with the parent but maintain everything in the grandparent. There just isn't any kind of contradiction of the kind that from the sigh I assume is intended.
3matt
I find myself frequently confused by Eliezer's "sigh"s.
0katydee
Noticing your confusion is the first step to understanding.
2wedrifid
Poster child for ADBOC.
0katydee
Good point, link added.
0[anonymous]
You say rationalists don't moralize. Could you give me three concrete examples of moralizing that also promote a moral imperative that rationalists agree with, such as "One should respond to bad arguments with counterarguments rather than gunfire"?
9BenAlbahari
Really? The LW website attracts aspergers types and apparently morality is stuff aspergers people like.
7wedrifid
That's true, and usually I say 'a lot more' rather than 'slightly less'. However in this instance Eliezer seemed to be referring to a rather limited subset of 'moralizing'. He more or less excluded being obnoxiously judgemental but phrasing your objections with consequentialist language. So the worst of nerd-moralizing was cut out.
2BillyOblivion
I suspect that what aspergers types like--if that post is correct and they do like it--is more the rules part of morality than the being judgmental[1] part of it. Strict rules for interacting with other folks make social interactions less error prone when you literally don't--can't--get those social cues others do. I've been judged to be at best borderline aspery (absent any real testing, who knows) and manifest many of the more subtle symptoms, and my take(s) on morality are (1) that it is much like driving regulations. No one gives a flying f' which side of the road you drive on as long as everybody does. (no need to get judgemental about it unless someone is deliberately doing it wrong) and (2) that the human animal (at least neurotypical human animals) has behavior patterns that are a result of both evolution and society. Following these behavior patterns will keep you from some fun and lots of pain, and will generally get you into the fat part of the bell curve. Break the wrong ones and you will wind up in the ugly part of the curve. Figure out how to break the right ones the right way and you get into the cool part of the bell curve where interesting shit happens. Oh, and sometimes when you break these rules you hurt other people. When you hurt them by accident that's bad, when you hurt them on purpose and they don't deserve it, that's even worse. If they do deserve it then it's probably because they broke one of the rules. People do shit for all sorts of reasons, and in contemporary society there are all sorts of people in power advocating all sorts of mildly to wildly stupid shit. Can't really blame someone all that much if they spent 12 years in schools that pushed the sort of "education" that you get from compromising between fundamentalist Christians, New Age Fruit Cakes, Universal Church Members, and your typical politicians. Oh, and people with masters degrees in Education, much less Doctorates. Seriously, you're better off with the f'ing

When people say they appreciate rationalists for their non-judgmentalism, I think they mean more than just that rationalists tend not to moralize. What they also mean is that rationalists are responsive to people's actual statements and opinions. This is separate from moralizing and in my opinion is more important, both because it precedes it in conversation and because I think people care about it more.

Being responsive to people means not (being interpreted as [inappropriately or] incorrectly) assuming what a person you are listening to thinks.

If someone says "I think torture, such as sleep deprivation, is effective in getting information," and they support, say, both the government doing such and legalizing it, judging them to be a bad person for that and saying so won't build communal ties, but it's unlikely to be frustrating for the first person.

If, on the other hand, they don't support the legalization or morality of it despite their claim it is effective, indignation will irritate them because it will be based on false assumptions about their beliefs.

If someone says "I'm thinking of killing myself", responding with "That violates my arbitraty an... (read more)

9RobinZ
My usual method when confronted with a situation where a speaker appears to be stupid, crazy, or evil is to assume I misunderstood what they said. Usually by the time I understand what the opposite party is saying, I no longer have any problematic affective judgment.
4wedrifid
I usually find that I do understand what they are saying and it belongs in one of the neglected categories of 'bullshit' or "things that people say that aren't really actionable beliefs".
3wilkox
This sounds interesting, but I can't parse it.
1wedrifid
That's because you are using an English parser while my words were not valid English.
2RobinZ
Those don't usually give me much trouble - I find that the nonsense people propose is usually self-consistent in an interesting way, much like speculative fiction. On reflection, what really gives me trouble is viewpoints I understand and disagree with all within five seconds, like [insert politics here].
0[anonymous]
My experience is opposite. On one hand you'll have people who do job that require a sort of met
0RobinZ
"things that people say that" what? The grammar gets a little odd toward the latter half of that.
1endoself
Presumably "things that people say that aren't really actionable beliefs"; though this reply feels awkward in a discussion about misunderstanding, I'm pretty sure that was the intended phrase.
0wedrifid
Fixed.
0RobinZ
Thanks!

On the topic of the "poisonous pleasure" of moralistic critique:

I am struck by the will to emotional neutrality which appears to exist among many "aspies". It's like a passive, private form of nonviolent resistance directed against neurotypical human nature; like someone who goes limp when being beaten up. They refuse to take part in the "emotional games", and they refuse to resist in the usual way when those games are directed against them - the usual form of defense being a counterattack - because that would make them just as bad as the aggressor normals.

For someone like that, it may be important to get in touch with their inner moralizer! Not just for the usual reason - that being able to fight back is empowering - but because it's actually a healthy part of human nature. The capacity to denounce, and to feel the sting of being denounced without exploding or imploding, is not just some irrational violent overlay on our minds, without which there would be nothing but mutual satisficing and world peace. It has a function and we neutralize it at our peril.

If the message you intend to send is "I am secure in my status. The attacker's pathetic attempts at reducing my status are beneath my notice.", what should you do? You don't seem to think that ignoring the "attacks" is the correct course of action.

This is a genuine question. I do not know the answer and I would like to know what others think.

3TimFreeman
I think the real message is "The attacker's attempt to reduce my status is too ineffective to need a response". On a good day I'd say "okay" so he knows I heard him, and then start a conversation with someone else, unless there's some instrumental value in confronting him or continuing the conversation given that I now know he's playing status games. I don't know a good way to carry on a useful conversation with someone who is playing status games, so I'm stuck in that situation too.
0novalis
Sarcasm.
0wedrifid
Ignoring the attempts is a good default. It gives a decent payoff while being easy to implement. More advanced alternatives are the witty, incisive comeback or the smooth, delicately calibrated communication of contempt for the attacker to the witnesses. In the latter case especially body language is the critical component.
0mendel
My opinion? I'd not lie. You've noticed the attempt, why claim you didn't? Display your true reaction.
6wedrifid
Noticing the attempt and doing nothing is not a lie. It is a true reaction.
0mendel
I'm referring to that. Sending that message is an implicit lie -- well, you could call it a "social fiction", if you like a less loaded word. It is also a message that is very likely to be misunderstood (I don't yet know my way around lesswrong well enough to find it again, but I think there's an essay here someplace that deals with the likelihood of recipients understanding something completely different than what you intended to mean, but you not being able to detect this because the interpretation you know shapes your perception of what you said). So if your true reaction is "you are just trying to reduce my status, and I don't think it's worth it for me to discuss this further", my choice, given the option to not display it or to display it, would usually be to display it, if a reaction was expected of me. I hope I was able to clarify my distinction between having a true reaction, and displaying it. In a nutshell, if you notice something, you have a reaction, and by not displaying it (when it is expected of you), you create an ambiguous situation that is not likely to communicate to the other person what you want it to communicate.
5Barry_Cotter
"implicit lie" vs. "social fiction" - I don't think these are normally useful ways of thinking about status posturing. Verbalising this stuff is a faux pas in the overwhelming majority of human social groups. I'm not sure if I disagree with you on whether the message is "very likely" to be understood. In my limited experience, and with my below average people reading skills, I'd say that most status jockeying in non-intimate contexts is obvious enough for me to notice if I'm paying attention to the interaction. The post you meant is probably Illusion of Transparency. I contend that it applies less strongly to in person status jockeying than to lingual information transfer. I suggest you watch a clip of a foreign language movie if you disagree.
0mendel
Yes, that's the post I was referring to. Thank you!
1wedrifid
This can work sometimes, but in most contexts it is difficult to pull off without sounding awkward or crude. At best it conveys that you are aware that social dynamics exist but aren't quite able to navigate them smoothly yet. Mind you unless there is a pre-existing differential in status or social skills in their favour they will tend to come off slightly worse than you in the exchange. A costly punishment.

It's like a passive, private form of nonviolent resistance directed against neurotypical human nature; like someone who goes limp when being beaten up.

Mitchell, yes, that was me back in high school. But IIRC I thought I was doing this.

4Barry_Cotter
You don't need to be angry to hit someone, or to spread gossip, or to otherwise retaliate against them. If you recognise that someone is a threat or an obstacle you can deal with them as such without the cloud of rage that makes you stupider. You do not need to be angry to decide that someone is in your way and that it will be necessary to fuck them up.
7Vladimir_M
Then why didn't humans evolve to perform rational calculations of whether retaliation is cost-effective instead of uncontrollable rage? The answer, of course, is largely in Schelling. The propensity to lose control when enraged is a strategic precommitment to lash out if certain boundaries are overstepped. Now of course, in the modern world there are many more situations where this tendency is maladaptive than in the human environment of evolutionary adaptedness. Nevertheless, I'd say that in most situations in which it enters the strategic calculations it's still greatly beneficial.
3Barry_Cotter
I agree, or at least agree for situations where people are in their native culture or one they're intimately familiar with, so that they're relatively well-calibrated. What I wrote was poorly phrased to the point of being wrong without lawyerly cavilling. To rephrase more carefully; you can act in a manner that gets the same results as anger without being angry. You can have a better, more strategic response. I'm not claiming it's easy to rewire yourself like this, but it's possible. If your natural anger response is anomalously low, as is the case for myself and many others on the autism spectrum, and you're attempting some relatively hardcore rewiring anyway, why not go for the strategic analysis instead of trying to decrease your threshold for blowing up?
8Vladimir_M
I'm not sure if you understand the real point of precommitment. The idea is that your strategic position may be stronger if you are conditionally committed to act in ways that are irrational if these conditions are actually realized. Such precommitment is rational on the whole because it eliminates the opponent's incentives to create these conditions, so if the strategy works, you don't actually have to perform the irrational act, which remains just a counterfactual threat. In particular, if you enter confrontations only when it is cost-effective to do so, this may leave you vulnerable to a strategy that maneuvers you into a situation where surrender is less costly than fighting. However, if you're precommitted to fight even irrationally (i.e. if the cost of fighting is higher than the prize defended), this makes such strategies ineffective, so the opponent won't even try them. So for example, suppose you're negotiating the price you'll charge for some work, and given the straightforward cost-benefit calculations, it would be profitable for you to get anything over $10K, while it would be profitable for the other party to pay anything under $20K, so the possible deals are in that range. Now, if your potential client resolutely refuses to pay more than $11K, and if it's really impossible for you to get more, it is still rational for you to take that price rather than give up on the deal. However, if you are actually ready to accept this price given no other options, this gives the other party the incentive to insist with utter stubbornness that no higher price is possible. On the other hand, if you signal credibly that you'd respond to such a low offer by getting indignant that your work is valued so little and leaving angrily, then this strategy won't work, and you have improved your strategic position -- even though getting angry and leaving is irrational assuming that $11K really is the final offer. (Clearly, the strategy goes both ways, and the buyer is also b
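A minimal sketch of that bargaining logic, for concreteness. This is my own illustration rather than anything from the comment above; it reuses the same $10K/$20K numbers and assumes a buyer who simply offers the lowest whole-thousand price the seller will accept, with a precommitment floor of $15K picked arbitrarily from inside the bargaining range.

```python
# Sketch: how a credible precommitment to walk away changes the buyer's offer.
# The numbers and the "buyer offers the lowest acceptable price" model are
# illustrative assumptions, not anything from the original comment.

SELLER_COST = 10_000   # the seller profits from any price above this
BUYER_VALUE = 20_000   # the buyer profits from any price below this

def buyer_best_offer(seller_accepts):
    """The buyer proposes the lowest whole-thousand price the seller will
    accept, as long as the deal is still profitable for the buyer."""
    for price in range(SELLER_COST + 1_000, BUYER_VALUE + 1, 1_000):
        if seller_accepts(price):
            return price
    return None  # no deal

# Case 1: a seller who is "rational in the moment" accepts anything profitable.
flexible_seller = lambda price: price > SELLER_COST

# Case 2: a seller credibly precommitted (e.g. via a known tendency to get
# indignant and leave) to refuse anything below $15K, even though accepting
# would still beat walking away.
PRECOMMIT_FLOOR = 15_000
precommitted_seller = lambda price: price >= PRECOMMIT_FLOOR

print(buyer_best_offer(flexible_seller))      # 11000 -- the seller gets squeezed
print(buyer_best_offer(precommitted_seller))  # 15000 -- the precommitment pays off
```

The only point of the sketch is that the credible refusal moves the outcome from $11K to $15K without the refusal ever being exercised; making that refusal credible is exactly the job the anger display does in the comment above.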
2wedrifid
I agree with what you are saying and would perhaps have described it as "ways that would otherwise have been irrational".
0Barry_Cotter
I obviously need to work on phrasing things more clearly. Anger functions as a strategic precommitment which improves your bargaining position. Two examples of a precommitment would be as follows: (1) A car buyer going to a dealership with a contract stating that for every dollar they pay over a predetermined price (manufacturer's price plus average industry margin, presumably) they must pay ten dollars to some other party (who can credibly hold them to it). (2) Destroying your means of retreat when you plan aggression against another party, so that you have no motive to hold anything back, as Cortes did when he burned his ships upon landing in Mexico. Now (1) is more like anger than (2) is because it's a public signal, but both of them reduce your options to strengthen your position -- (1) in a negotiation, (2) as a committed, cohesive group. (1) is very much like throwing the steering wheel out the window in the game of chicken. Pretending your hands are tied and you can't go above/below the stated price without going further up the chain of command is actually one of those negotiating tricks that are in all the books, like the car salesman who goes "Oh, I'm not sure; I'll have to consult my boss" and smokes a cigarette in the office before coming back and agreeing to a lower price. Swimmer963 asked me: and I replied which I think shows at least a weak grasp of how these precommitments can work; one builds a reputation, and given that we're meatbags with malleable conceptions of self, a reason to make such precommitments even when they cannot affect our reputation. If "normally impossible" means very, very hard, I agree completely; robust self-behavioural modification is hard even for small things, never mind for something as difficult to bring into conscious awareness or control as anger. Would you consider expanding upon quality of calibration?
6Vladimir_M
Yes, I think we understand each other now. Funny, I had the "must consult my boss" trick pulled on me just a few days ago by a guy whom I called up to haul off some trash. I still managed to make him lower the supposedly boss-mandated price by about 20%. (And when I later thought about the whole negotiation more carefully, I realized I could have probably lowered it much more.) Regarding the quality of calibration, it's straightforward. Emotional reactions can serve as strategic precommitments the way we just discussed, and often they also serve as decision heuristics in problems where one lacks the necessary information and processing power for a conscious rational calculation. In both cases, they can be useful if they are well-calibrated to produce strategically sound actions, but if they're poorly calibrated, they can lead to outright irrational and self-destructive behavior. So for example, if you fail to feel angry indignation when appropriate, you're in danger of others maneuvering you into a position where they'll treat you as a doormat, both in business and in private life. On the other hand, if such emotions are triggered too easily, you'll be perceived as short-tempered, unreasonable, and impossible to deal with, again with bad consequences, both professional and private. It seems to me that the key characteristic that distinguishes high achievers is the excellent calibration of their emotional reactions -- especially compared to people who are highly intelligent and conscientious and nevertheless have much less to show for it.
7fiddlemath
No; but it certainly makes it likelier that you will bring yourself to action.
3Swimmer963 (Miranda Dixon-Luinenburg)
If you're not angry, what would motivate you to do any of those things? If someone injures me in some way or takes something that I wanted, usually neither hitting them nor spreading gossip about them will in any way help me repair my injury or get back what they took from me. So I don't. Unless I'm angry, in which case it kind of just happens, and then I regret it because it usually makes the situation worse.
8AdeleneDawner
Put simply, sometimes displaying a strong emotional response (genuine or otherwise) is the only way to convince someone that you're serious about something. This seems to be particularly true when dealing with people who aren't inclined to use more 'intellectual' communication methods.
0wedrifid
I think you're right. Mind you, as someone who is interested in communication that doesn't involve control via strong emotional responses, I most definitely don't reward bad behaviour by giving the other what they want. This applies especially if they use aggressive tactics of the kind mentioned here. I treat those as attacks and respond in such a way as to discourage any further aggression by them or other witnesses. This is not to say I don't care about the other's experience or desires, nor does it mean that a strong emotional response will rule out me giving them what they want. If the other is someone that I care about, I will encourage them towards expressions that actually might work for getting me to give them what they want. I'll guide them towards asking me for something and perhaps telling me why it matters to them. This is more effective than making demands or attempting to control me emotionally. I'm far more generous than I am vulnerable to dominance attempts, and I'm actually willing to consciously make myself vulnerable to personal requests, up to just short of the line of outright weakness, because I have a strong preference for that mode of communication. Mind you, even this tends to be strongly conditional on a certain degree of reciprocation. Point being that I agree with the "sometimes" qualifier; the benefit of such displays (genuine or otherwise) is highly variable. We also have the ability to influence whether people make such displays to us, partly by the incentive they have and partly by simple screening.
0Swimmer963 (Miranda Dixon-Luinenburg)
Seems true. Nevertheless I've never used it in this way. This may have more to do with my personality than anything: from what I've read here, I'm more of a conformist than the average Less Wrong reader, and I put a higher value on social harmony. I hate arguments that turn personal and emotional.
3TheOtherDave
I might hit someone because they're pointing a gun at me and I believe hitting them is the most efficient way to disarm them. I might hit someone because they did something dangerous and I believe hitting them is the most efficient way to condition them out of that behavior. I might spread gossip about them because they are using their social status in dangerous ways and I believe gossiping about them is the best available way of reducing their status. None of those cases require anger, and they might even make the situation better. (Or they might not.) Or, less nobly, I might hit someone because they have $100 I want, and I think that's the most efficient way to rob them. I might spread gossip about them because we're both up for the same promotion and I want to reduce their chance of getting it. None of those cases require anger, either. (And, hey, they might make the situation better, too. Or they might not.)
3Swimmer963 (Miranda Dixon-Luinenburg)
I suppose the context of my comment was limited to a) me personally (I don't have any desire to steal money or reduce other people's chances of promotion) and b) to the situations I have encountered in the past (no guns or danger involved). Your points are very valid though.
0Barry_Cotter
If you are dealing with someone in your social circle, or can be seen by someone in your social circle, and you want to build or maintain a reputation as someone it is not wise to cross. Even if it's more or less a one-shot game, if you make a point of not being a doormat it is likely to impact your self-image, which will impact your behaviour, which will impact how others treat you. Even if in the short run retaliating helps nobody and slightly harms you, it can be worth it for reputational and self-concept reasons.
5Swimmer963 (Miranda Dixon-Luinenburg)
Point taken. I am a doormat. People have told me this over and over again, so I probably have a reputation as a doormat, but that has a certain value in itself; I have a reputation as someone who is dependable, loyal, and does whatever is asked of me, which is useful in a work context.
2Wei Dai
Can you be more specific? What exactly are the dangers of neutralizing our "inner moralizers"? Also, see my previous comments, which may be applicable here. I speculate that "aspies" free up a large chunk of the brain for other purposes when they ignore "emotional games", and it's not clear to me that they should devote more of their cognitive resources toward such games.
1Mitchell_Porter
Having brought up this topic, I find that I'm reluctant to now do the hard work of organizing my thoughts on the matter. It's obvious that the ability to moralize has a tactical value, so doing without it is a form of personal or social disarmament. However, I don't want to leave the answer at that Nietzschean or Machiavellian level, which easily leads to the view that morality is a fraud but a useful fraud, especially for deceptive amoralists. I also don't want to just say that the human utility function has a term which attaches significance to the actions, motives and character of other agents, in such a way that "moralizing" is sometimes the right thing to do; or that labeling someone as Bad is an efficient heuristic. I have glimpsed two rather exotic reasons for retaining one's capacity for "judging people". The first is ontological. Moral judgments are judgments about persons and appeal to an ontology of persons. It's important and useful to be able to think at that level, especially for people whose natural inclination is to think in terms of computational modules and subpersonal entities. The second is that one might want to retain the capacity to moralize about oneself. This is an intriguing angle because the debate about morality tends to revolve around interactions between persons, whether morality is just a tool of the private will to power, etc. If the moral mode can be applied to one's relationship to reality in general (how you live given the facts and uncertainties of existence, let's say), and not just to one's relationship to other people, that gives it an extra significance. The best answer to your question would think through all that, present it in an ordered and integrated fashion, and would also take account of all the valid reasons for not liking the moralizing function. It would also have to ground the meaning of various expressions that were introduced somewhat casually. But - not today.
4mendel
In another comment on this post, Eugine Nier linked to Schelling. I read that post, and the Slate page that mentions Schelling vs. Vietnam, and it became clear to me that acting morally serves as an "antidote" to these underhanded strategies that count on your opponent being rational. (It also serves as a Gödelian meta-layer to decide problems that can't be decided rationally.) If, in Schelling's example, the guy who is left with the working radio set is moral, he might reason that "the other guy doesn't deserve the money if he doesn't work for it", and from that moral strongpoint refuse to cooperate. Now if the rationalist knows he's working with a moralist, he'll also know that his immoral strategy won't work, so he won't attempt it in the first place - a victory for the moralist in a conflict that hasn't even occurred (in fact, the moralist need never know that the rationalist intended to cheat him). This is different from simply acting irrationally in that the moralist's reaction remains predictable. So it is possible that moral indignation helps me to prevent other people from maneuvering me into a position where I don't want to be.
0Viliam_Bur
Seems like morality is (inter alia) a heuristic for improving one's bargaining position by limiting one's options.
3Wei Dai
It occurs to me that I'm not less judgmental than the typical human, just judgmental in a different way and less vocal about it (except in the "actions speak louder than words" sense). My main judgement of a person is just whether it is worth my time to talk to / work with / play with / care about that person, and if my "inner moralizer" says no, I simply ignore or get away from them. I'm not sure if I can be considered an "aspie" but I suspect many of them are similar in this way. Compared to what's more typical, this method of "moralizing" seems to have all of the benefits you listed (except the last one, "If the moral mode can be applied to one's relationship to reality in general", which I don't understand) but fewer costs. It is less costly in mental resources, and less likely to get you involved in negative-sum situations. I note that it wouldn't have worked well in an ancestral environment where you lived in a small tribe and couldn't ignore or get away from others freely, which perhaps explains why it doesn't come naturally to most people despite its advantages.
1Mitchell_Porter
See the comments here on the psychological meaning of "kingship". That's one aspect of the "relationship to reality" I had in mind. If you subtract from consideration all notions of responsibility towards other people, are all remaining motivations fundamentally hedonistic in nature, or is there a sense in which you could morally criticize what you were doing (or not doing), even if you were the only being that existed? There is a tendency, in discussions here and elsewhere about ethics, choice, and motivation, either to reduce everything to pleasure and pain, or to a functionalist notion of preference which makes no reference to subjective states at all. Eliezer advocates a form of moral realism (since he says the word "should" has an objective meaning), but apparently the argument depends on behavior (in the real world, you'd pull the child on the train tracks out of harm's way) and on the hypothesized species-universality of the relevant cognitive algorithms. But that doesn't say what is involved in making the judgment, or in making the meta-judgment about how you would act. Subjectively, are we to think of such judgments as arising from emotional reactions (e.g. basic emotions like disgust or fear)? It leaves open the question of whether there is a distinctive moral modality - a mode of perception or intuition - and my further question would be whether it only applies to other people (or to relations between you the individual and other people), or whether it can ever apply to yourself in isolation. In culture, I see a tendency to regard choices about how to live (that don't impact on other people) as aesthetic choices rather than ethical choices. Mostly I have questions rather than answers here.
2mutterc
With Aspies it's probably less that they won't take part in emotional games than that they can't.
1Desrtopa
I'm not sure I'm correctly interpreting what you're referring to here. Could you give a concrete example?
2Mitchell_Porter
The Zen thing to do would be to flame you with absurd viciousness for being excessively vague in your own request for clarification, in the hope that your response would be combative (rather than purely analytical), but still appropriate - because then you would have provided the example yourself. But that's a high-risk conversational strategy. :-)
0[anonymous]
Can you be more specific?
0TimFreeman
Agreed, although I don't know that I have any Asperger's. Here's a sample dialogue I actually had that would have gone better if I had been in touch with my inner moralizer. I didn't record it, so it's paraphrased from memory:

X: It's really important to me what happens to the species a billion years from now. (X actually made a much longer statement, with examples.)

Me: Well, you're human, so I don't think you can really have concerns about what happens a billion years from now, because you can't imagine that period of time. It seems much more likely that you perceive talking about things a billion years off to be high status, and what you really want is the short-term status gain from saying you have impressive plans. People aren't really that altruistic.

X: I hate it when people point out that there are two of me. The status-gaming part is separate from the long-term planning part.

Me: There is only one of you, and only one of me.

X: You're selfish! (This actually made more sense in the real conversation than it does here. This was some time ago and my memory has faded.)

Me: (I exited the conversation at this point. I don't remember how.)

I exited because I judged that X was making something he perceived to be an ad-hominem argument, and I knew that X knew that ad-hominem arguments were fallacious, and I couldn't deal with the apparent dishonesty. It is actually true that I am selfish, in the sense that I acknowledge no authority over my behavior higher than my own preferences. This isn't so bad given that some of my preferences are that other people get things they probably want. Today I'm not sure X was intending to make an ad-hominem argument. This alternative for my last step would have been better:

Me, if I were in touch with my inner moralizer: Do I correctly understand that you are trying to make an ad-hominem argument?

If I had taken that path, I would either have clear evidence that X is dishonest, or a more interesting conversation if he wasn't;
5Peter_de_Blanc
In what sense are you using the word imagine, and how hard have you tried to imagine a billion years?
1TimFreeman
I have a really poor intuition for time, so I'm the wrong person to ask. I can imagine a thousand things as a 10x10x10 cube. I can imagine a million things as a 10x10x10 arrangement of 1K cubes. My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can't imagine a billion years. In order to have desires about something, you have to have a compelling internal representation of that something so you can have a desire about it. X didn't say "I can too imagine a billion years!", so none of this pertains to my point.

My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can't imagine a billion years.

Would it help to be more specific? Imagine a little cube of metal, 1mm wide. Imagine rolling it between your thumb and fingertip, bigger than a grain of sand, smaller than a peppercorn. Yes?

A one-litre bottle holds 1 million of those. (If your first thought was the packing ratio, your second thought should be to cut the corners off to make cuboctahedra.)

Now imagine a cubic metre. A typical desk has a height of around 0.75m, so if its top is a metre deep and 1.33 metres wide (quite a large desk), then there is 1 cubic metre of space between the desktop and the floor.

It takes 1 billion of those millimetre cubes to fill that volume.

Now find an Olympic-sized swimming pool and swim a few lengths in it. It takes 2.5 trillion of those cubes to fill it.

Fill it with fine sand of 0.1mm diameter, and you will have a few quadrillion grains.
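For anyone who wants to check the arithmetic above, a quick sketch; treating the pool as exactly 50 m x 25 m x 2 m and the sand grains as 0.1 mm cubes are my own simplifying assumptions.

```python
# Arithmetic check for the cube visualization: 1 mm cubes in a litre,
# in a cubic metre, and in an Olympic pool (assumed 50 m x 25 m x 2 m).
MM_PER_M = 1000

litre_mm3   = 0.001 * MM_PER_M**3          # 1 L = 0.001 m^3
cubic_m_mm3 = 1.0   * MM_PER_M**3          # 1 m^3
pool_mm3    = (50 * 25 * 2) * MM_PER_M**3  # 2500 m^3

print(f"{litre_mm3:.0e}")    # 1e+06   -- a million 1 mm cubes per litre
print(f"{cubic_m_mm3:.0e}")  # 1e+09   -- a billion per cubic metre
print(f"{pool_mm3:.1e}")     # 2.5e+12 -- 2.5 trillion per pool

# Fine sand of 0.1 mm diameter: roughly a thousand grains fit where one
# 1 mm cube did, giving a few quadrillion grains in the pool.
print(f"{pool_mm3 / 0.1**3:.1e}")  # 2.5e+15
```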

A bigger problem I have with the original is where X says "It's really important to me what happens to the species a billion years from now." The species, a billion years from now? Tha...

2TimFreeman
Excellent. I can visualize a billion now. Thank you.
1Peter_de_Blanc
First, I imagine a billion bits. That's maybe 15 minutes of high quality video, so it's pretty easy to imagine a billion bits. Then I imagine that each of those bits represents some proposition about a year - for example, whether or not humanity still exists. If you want to model a second proposition about each year, just add another billion bits.
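A quick sanity check of that figure (the bitrate interpretation is mine, not the commenter's): a billion bits over 15 minutes works out to roughly a megabit per second, which is in the range of standard-definition streaming video.

```python
# What does "a billion bits is about 15 minutes of video" imply about bitrate?
bits = 10**9
seconds = 15 * 60

print(bits / seconds / 1e6)  # ~1.11 Mbit/s
print(bits / 8 / 1e6)        # 125.0 MB of data in total
```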
5ArisKatsaris
Perhaps I don't understand your usage of the word 'imagine', because this example doesn't really help me 'imagine' them at all. I can imagine their result (the high quality video), sure, but not the bits themselves.
3shokwave
I can't imagine the difference between sixteen million dollars and ten million dollars - in my imagination, the stuff I do with the money is exactly the same. I definitely prefer 16 to 10 though. In much the same way, my imagination of a million dollars and a billion dollars doesn't differ too much; I would also prefer the billion. I don't know if I need to imagine a billion years accurately in order to prefer it, or have concerns about it becoming less likely.
2wedrifid
One of the great benefits that being in touch with the inner moralizer can have is that it can warn you about how what you say will be interpreted by another. It would probably recommend against speaking your first paragraph, for example. I suspect the inner moralizer would also probably not treat the "You're selfish" as an ad hominem argument. It technically does apply, but from within a moral model what is going on isn't of the form of the ad hominem fallacy. It is more of the form:

* Not expressing and expecting others to express a certain moral position is bad.
* You are bad.
* You should fear the social consequences of being considered bad.
* You should change your moral position.

I'm not saying the above is desirable reasoning - it's annoying and has its own logical problems. But it is also a different underlying mistake than the typical ad hominem.
0TimFreeman
If it works that way, I don't want it. My relationship with X has no value to me if the relevant truths cannot be told, and so far as I can tell that first paragraph was both true and relevant at the time. Now if that had been a coworker with whom I needed ongoing practical cooperation, I would have made some minimal polite response just like I make minimal polite responses to statements about who is winning American Idol. Okay, there might be some detailed definition of ad hominem that doesn't exactly match the mistake you described. I presently fail to see how the difference is important. The purpose of both ad hominem and your offered interpretation is to use emotional manipulation to get the target (me in this example) to shut up. Would I benefit in some way from making a distinction between the fallacy you are describing and ad hominem?
0lukstafi
Could you be more specific? Is the "inner moralizer", as opposed to, say, an "inner consequentialist", a virtue given the human condition (given how the brain is wired), or is it an "objectively good solution given limited cognitive resources"? Is your statement about humans in particular, or about moralization in general?
2Mitchell_Porter
I am still thinking this through. It's a very subtle topic. But having begun to think about it, the sheer number of arguments that I have found (which are in favor of preserving and employing the moral perspective) encourages me to believe that I was right - I'm just not sure where to place the emphasis! Of course there is such a thing as moral excess, addiction to moralizing, and so forth. But eschewing moral categories is psychologically and socially utopian (in a bad sense), the intersubjective character of the moral perspective has a lot going for it (it's cognitively holistic since it is about whole agents criticizing whole agents; you can't forgive someone unless you admit that they have wronged you; something about how you can't transcend the moral perspective, in the attractive emotional sense, unless you understand it by passing through it)... I wouldn't say it's just about computational utility.
0lukstafi
I must clarify that I've been concerned with contrasting the function of moralization with the mechanism of moralization, which is ingrained very deeply, to the effect that without enough praise children develop dysfunctionally, etc.

take a rationalist skill you think is important

Facing Reality, applied to self-knowledge

come up with a concrete example of that skill being used successfully;

"It sure seems I can't get up. Yet this looks a lot like laziness or attention-whoring. No-no-I'm-not-this-can't-be-STOP. Yes, there is a real possibility I could get up but am telling myself I can't, and I should take that into account. But upon introspection, and trying to move the damn things, it does feel like I can't, which is strong evidence.

So I'm going to figure out some tests. Maybe see a doctor; try to invoke reflexes that would make me move (careful, voluntary movement can truly fail even if reflexes don't); ask some trusted people, telling them the whole truth. Importantly, I'm going to refuse to use it as an excuse to slack off. I can crawl!"

crawls to nearest pile of homework, and works lying prone, occasionally trying to get up

decompose that use to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures;

  • try to move legs, fail
  • compare with expectation (possibly verbalizing it "Those are legs. They're used
...