IAWYC, and introspective access to what my mind was doing on this timescale was one of the bigger benefits I got out of meditation. (Note: Probably not one of the types of meditation you've read about). However, I don't think you've correctly identified what went wrong in the example with red. Consider this analogous conversation:
What's a Slider? It's a Widget.
What's a Widget? It's a Drawable.
What's a Drawable? It's an Object.
In this example, as with the red/color example, the first question and answer was useful and relevant (albeit incomplete), while the next two were useless. The lesson you seem to have drawn from this is that looking down (subclassward) is good, and looking up (superclassward) is bad. The lesson I draw from this is that relevance falls off rapidly with distance, and that each successive explanation should be of a different type. It is better to look a short distance in each direction rather than to look far in any one direction. Compare:
X is a color. This object is X. (One step up, one step down)
X is a color. A color is a quality that things have. (Two steps up)
This object is X. That object is also X. (Two steps down)
I would expect the first of these three explanations to succeed, and the other two to fail miserably.
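The Slider/Widget/Drawable chain above can be sketched as an actual class hierarchy, just to make "one step up" and "one step down" concrete (the class names are the ones from the example; Python is used purely for illustration):

```python
# The hierarchy from the example, as Python classes.
class Object: pass
class Drawable(Object): pass
class Widget(Drawable): pass
class Slider(Widget): pass

# From Widget, "one step up" is its immediate superclass,
# and "one step down" is an immediate subclass.
up = Widget.__bases__[0].__name__        # 'Drawable'
down = Widget.__subclasses__()[0].__name__  # 'Slider'
print(up, down)
```

The point of the comparison is that a good explanation of Widget would mention both Drawable and Slider, rather than climbing two levels to Object.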
"One step up and one step down" sounds like a valuable heuristic; it's what I actually did in the post, in fact. Upvoted.
A few months later, I've been teaching Anna and Luke and Will Ryan and others this rule as the "concrete-abstract pattern". Give a specific example with enough detail that the listener can visualize it as an image rather than as a proposition, and then describe it on the level of abstraction that explains what made it relevant. I.e., start with an application of Bayes's Theorem, then show the abstract equation that circumscribes what is or isn't an example of Bayes's Theorem.
Also, it is very important to give counter-examples: 'This crow over there belongs to the bird category. But the plane in the sky and the butterfly over there do not.' Or, more fitting to the 'red' example: 'That stop sign and that traffic light are red. But this other traffic sign (can't think of an example) isn't.'
This could also be done with categories: 'Red is a color. Red is not a sound.'
I guess this one has something to do with confirmation bias, as cwillu suggested.
I'm a big fan of breaking things down to the finest grain thoughts possible, but it still surprises me how quickly this gets complicated when trying to actually write it down.
http://lesswrong.com/lw/2l6/taking_ideas_seriously/
Example: Bob is overweight and an acquaintance mentions some "shangri-la" diet that helps people lose weight through some "flavor/calorie association". Instead of dismissing it immediately, he looks into it, adopts the diet, and comfortably achieves his desired weight.
1) Notice the feeling of surprise when encountering a claim that runs counter to your expectations.
2) Check in far mode the importance of the claim if it were true by running through a short list of concrete implications (eg "I can use this diet and as a result, I can enjoy exercise more, I can feel better about my body, etc")
3) Imagine reaping the benefits in ...
"Be specific" is a nice flinch, I've always had it and it helps a lot. "Don't moralize" is a flinch I learned from experience and it also helps. Here's some other nice flinches I have:
"Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.
"Don't hesitate." With some effort I got a working version of this flinch for tasks like programming, drawing or physical exercise. If something looks like it would make a good code fix or a good sketch, do it immediately. Would be nice to have this behavior for all other tasks too, but the change would take a lot of effort and I'm hesitating about it (ahem).
"Don't take on debt." Anything that looks even vaguely similar to debt, I instinctively run away from it. Had this flinch since as far as I can remember. In fact I don't remember ever owing >100$ to anyone. So far it's served me well.
Well, it makes me sad to see a very standardized and crisp term like "liability" used in such a confusing and nonstandard way. Especially when there is another equally crisp and very standardized term ("expense") that could be used instead. And I do not want to talk about it anymore.
I find that I worry a lot less about checking up on background tasks (compiles, laundry, baking pies, brewing tea, etc.) if I know I'll get a clear notification when the process is complete. If it's something that takes a fixed amount of time I'll usually just set a timer on my phone — this is a new habit that works well for tea in particular. Incidentally, owning an iPhone has done a surprising amount for my effectiveness just by reducing trivial inconveniences for this sort of thing.
For compiles, do something like
$ make; growlnotify -m "compile done!"
or run a script that sends you an SMS or something. This is something that I'm not in the habit of doing, but I just wrote myself a note to figure something out when I get into work on Monday.[1] (For most of my builds it's already taken care of, since it brings up a window when it's done. This would be for things like building the server, which runs in a terminal, and for svn updates, which are often glacial.)
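The build-then-notify habit can also be wrapped in a small script. A minimal sketch, with the notification step stubbed out as a plain print (swap in growlnotify, an SMS hook, or whatever you actually use; `run_and_notify` and the `"true"` demo command are just placeholders for this illustration):

```python
import subprocess

def run_and_notify(cmd, notify=print):
    """Run a shell command and fire a notification when it finishes.

    `notify` is a stand-in: replace it with a call to growlnotify,
    an SMS gateway, etc. Notifies on failure too, so a broken build
    doesn't leave you waiting forever.
    """
    result = subprocess.run(cmd, shell=True)
    status = "done" if result.returncode == 0 else "FAILED"
    notify(f"{cmd}: {status}")
    return result.returncode

run_and_notify("true")  # stand-in for your real build command
```

Notifying on failure as well as success is the main advantage over the one-liner: `make; growlnotify ...` tells you the build finished, but not whether it worked.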
[1] This is another thing that helps me a lot. Write things down in a place that you look at regularly. Could be a calendar app, could be a text file in Dropbox, whatever.
My attempt at the exercise for the skill "Hold Off On Proposing Solutions"
Example: At a LessWrong meet up someone talks about some problem they have and asks for advice, someone points out that everyone should explore the problem before proposing solutions. Successful use of the skill involves:
1) Noticing that a solution is being asked for. This is the most important sub-skill. It involves listening to everything you ever hear and sorting it into appropriate categories.
2) Come up with a witty and brilliant solution. This happens automatically.
3) Suppress the urge to explain the solution to everyone, even though it is so brilliant, and will make you look so cool, and (gasp) maybe someone else has thought of it, and you better say it before they do, otherwise it will look like it was their idea!
4) Warn other people to hold off on proposing solutions.
Exercise: Best done in a group, where the pressure to show intelligence is greatest. Read the group a list of questions. Use many different types of questions, some about matters of fact, some about opinion, and some asking for a solution. The first two types are to be answered immediately. The last type are to be met with absolute silence. Anyone found talking after a solution has been requested loses points.
Encourage people to write down any solutions they do come up with. After the exercise is finished, destroy all the written solutions, and forbid discussion of them.
I think that the big skill here is not being offended. If someone can say something and control your emotions, literally make you feel something you had no intention to feel beforehand, then perhaps it's time to start figuring out why you're allowing people to do this to you.
At a basic level anything someone can say to you is either true or false. If it's true then it's something you should probably consider and accept. If it's false then it's false and you can safely ignore/gently correct/mock the person saying it to you. In any case there really isn't any reason to be offended and especially there is no reason to allow the other person to provoke you to anger or acting without thought.
This isn't the same as never being angry! This is simply about keeping control for yourself over when and why you get angry or offended, rather than allowing the world to determine that for you.
Edit - please disregard this post
rationalists don't moralize
I like the theory but 'does not moralize' is definitely not a feature I would ascribe to Eliezer. We even have people quoting Eliezer's moralizing for the purpose of spreading the moralizing around!
"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."
In terms of general moralizing tendencies of people who identify as rationalists they seem to moralize slightly less than average but the most notable difference is what they choose to moralize about. When people happen to have similar morals to yourself it doesn't feel like they are moralizing as much.
Sigh.
A 5-second method (that I employ to varying levels of success) is whenever I feel the frustration of a failed interaction, I question how it might have been made more successful by me, regardless of whose "fault" it was. Your "sigh" reaction comes across as expressing the sentiment "It's your fault for not getting me. Didn't you read what I wrote? It's so obvious". But could you have expressed your ideas almost as easily without generating confusion in the first place? If so, maybe your reaction would be instead along the lines of "Oh that's interesting. I thought it was obvious but I guess I can see how that might have generated confusion. Perhaps I could...".
FWIW I actually really like the central idea in this post, and arguably too many of the comments have been side-tracked by digressions on moralizing. However, my hunch is that you probably could have easily gotten the message across AND avoided this confusion. My own specific suggestion here is that stipulative definitions are semantic booby traps, so if possible avoid them. Why introduce a stipulative definition for "moralize" when a less loaded phrase like "susp...
Eliezer, did you mean something different by the "does not get bullet" line than I thought you did? I took it as meaning: "If your thinking leads you to the conclusion that the right response to criticism of your beliefs is to kill the critic, then it is much more likely that you are suffering from an affective death spiral about your beliefs, or some other error, than that you have reasoned to a correct conclusion. Remember this, it's important."
This seems to be a pretty straightforward generalization from the history of human discourse, if nothing else. Whether it fits someone's definition of "moralizing" doesn't seem to be a very interesting question.
When people say they appreciate rationalists for their non-judgmentalism, I think they mean more than just that rationalists tend not to moralize. What they also mean is that rationalists are responsive to people's actual statements and opinions. This is separate from moralizing and in my opinion is more important, both because it precedes it in conversation and because I think people care about it more.
Being responsive to people means not assuming (or not being interpreted as inappropriately or incorrectly assuming) what a person you are listening to thinks.
If someone says "I think torture, such as sleep deprivation, is effective in getting information," and they support, say, both the government doing it and legalizing it, judging them to be a bad person for that and saying so won't build communal ties, but it's unlikely to be frustrating for the first person.
If, on the other hand, they don't support the legalization or morality of it despite their claim it is effective, indignation will irritate them because it will be based on false assumptions about their beliefs.
If someone says "I'm thinking of killing myself", responding with "That violates my arbitraty an...
On the topic of the "poisonous pleasure" of moralistic critique:
I am struck by the will to emotional neutrality which appears to exist among many "aspies". It's like a passive, private form of nonviolent resistance directed against neurotypical human nature; like someone who goes limp when being beaten up. They refuse to take part in the "emotional games", and they refuse to resist in the usual way when those games are directed against them - the usual form of defense being a counterattack - because that would make them just as bad as the aggressor normals.
For someone like that, it may be important to get in touch with their inner moralizer! Not just for the usual reason - that being able to fight back is empowering - but because it's actually a healthy part of human nature. The capacity to denounce, and to feel the sting of being denounced without exploding or imploding, is not just some irrational violent overlay on our minds, without which there would be nothing but mutual satisficing and world peace. It has a function and we neutralize it at our peril.
If the message you intend to send is "I am secure in my status. The attacker's pathetic attempts at reducing my status are beneath my notice.", what should you do? You don't seem to think that ignoring the "attacks" is the correct course of action.
This is a genuine question. I do not know the answer and I would like to know what others think.
My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can't imagine a billion years.
Would it help to be more specific? Imagine a little cube of metal, 1mm wide. Imagine rolling it between your thumb and fingertip, bigger than a grain of sand, smaller than a peppercorn. Yes?
A one-litre bottle holds 1 million of those. (If your first thought was the packing ratio, your second thought should be to cut the corners off to make cuboctahedra.)
Now imagine a cubic metre. A typical desk has a height of around 0.75m, so if its top is a metre deep and 1.33 metres wide (quite a large desk), then there is 1 cubic metre of space between the desktop and the floor.
It takes 1 billion of those millimetre cubes to fill that volume.
Now find an Olympic-sized swimming pool and swim a few lengths in it. It takes 2.5 trillion of those cubes to fill it.
Fill it with fine sand of 0.1mm diameter, and you will have a few quadrillion grains.
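The arithmetic above is easy to check directly (all lengths in millimetres; the pool dimensions assume a standard 50 m x 25 m x 2 m Olympic pool):

```python
litre = 100 ** 3                 # a litre is a 10 cm cube: 1,000,000 mm^3
cubic_metre = 1000 ** 3          # 1,000,000,000 mm^3
pool = 50_000 * 25_000 * 2_000   # Olympic pool: 50 m x 25 m x 2 m deep
grain = 0.1 ** 3                 # one 0.1 mm sand grain, about 0.001 mm^3

print(litre)                 # 1,000,000 cubes per litre: a million
print(cubic_metre)           # 1,000,000,000: a billion under the desk
print(pool)                  # 2,500,000,000,000: 2.5 trillion in the pool
print(round(pool / grain))   # roughly 2.5e15: a few quadrillion grains
```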
A bigger problem I have with the original is where X says "It's really important to me what happens to the species a billion years from now." The species, a billion years from now? Tha...
take a rationalist skill you think is important
Facing Reality, applied to self-knowledge
come up with a concrete example of that skill being used successfully;
"It sure seems I can't get up. Yet this looks a lot like laziness or attention-whoring. No-no-I'm-not-this-can't-be-STOP. Yes, there is a real possibility I could get up but am telling myself I can't, and I should take that into account. But upon introspection, and trying to move the damn things, it does feel like I can't, which is strong evidence.
So I'm going to figure out some tests. Maybe see a doctor; try to invoke reflexes that would make me move (careful, voluntary movement can truly fail even if reflexes don't); ask some trusted people, telling them the whole truth. Importantly, I'm going to refuse to use it as an excuse to slack off. I can crawl!"
crawls to nearest pile of homework, and works lying prone, occasionally trying to get up
decompose that use to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures;
To develop methods of teaching rationality skills, you need to learn to focus on mental events that occur in 5 seconds or less. Most of what you want to teach is directly on this level; the rest consists of chaining together skills on this level.
As our first example, let's take the vital rationalist skill, "Be specific."
Even with people who've had moderate amounts of exposure to Less Wrong, a fair amount of my helping them think effectively often consists of my saying, "Can you give me a specific example of that?" or "Can you be more concrete?"
A couple of formative childhood readings that taught me to be specific:
and:
And now, no sooner does someone tell me that they want to "facilitate communications between managers and employees" than I say, "Can you give me a concrete example of how you would do that?" Hayakawa taught me to distinguish the concrete and the abstract; and from that small passage in Asprin, I picked up the dreadful personal habit of calling people's bluffs, often using the specific phrase, "Name three."
But the real subject of today's lesson is how to see skills like this on the 5-second level. And now that we have a specific example in hand, we can proceed to try to zoom in on the level of cognitive events that happen in 5 seconds or less.
Over-abstraction happens because it's easy to be abstract. It's easier to say "red is a color" than to pause your thoughts for long enough to come up with the example of a stop sign. Abstraction is a path of least resistance, a form of mental laziness.
So the first thing that needs to happen on a timescale of 5 seconds is perceptual recognition of highly abstract statements unaccompanied by concrete examples, accompanied by an automatic aversion, an ick reaction - this is the trigger which invokes the skill.
Then, you have actionable stored procedures that associate to the trigger. And "come up with a concrete example" is not a 5-second-level skill, not an actionable procedure, it doesn't transform the problem into a task. An actionable mental procedure that could be learned, stored, and associated with the trigger would be "Search for a memory that instantiates the abstract statement", or "Try to come up with hypothetical examples, and then discard the lousy examples your imagination keeps suggesting, until you finally have a good example that really shows what you were originally trying to say", or "Ask why you were making the abstract statement in the first place, and recall the original mental causes of your making that statement to see if they suggest something more concrete."
Or to be more specific on the last mental procedure: Why were you trying to describe redness to someone? Did they just run a red traffic light?
(And then what kind of exercise can you run someone through, which will get them to distinguish red traffic lights from green traffic lights? What could teach someone to distinguish red from green?)
When you ask how to teach a rationality skill, don't ask "How can I teach people to be more specific?" Ask, "What sort of exercise will lead people through the part of the skill where they perceptually recognize a statement as overly abstract?" Ask, "What exercise teaches people to think about why they made the abstract statement in the first place?" Ask, "What exercise could cause people to form, store, and associate with a trigger, a procedure for going through hypothetical examples until a good one or at least adequate one is invented?"
Coming up with good ways to teach mental skills requires thinking on the 5-second level, because until you've reached that level of introspective concreteness, that fineness of granularity, you can't recognize the elements you're trying to teach; you can't recognize the patterns of thought you're trying to build inside a mind.
To come up with a 5-second description of a rationality skill, I would suggest zooming in on a concrete case of a real or hypothetical person who (a) fails in a typical fashion and (b) successfully applies the skill. Break down their internal experience into the smallest granules you can manage: perceptual classifications, contexts that evoke emotions, fleeting choices made too quick for verbal consideration. And then generalize what they're doing while staying on the 5-second level.
Start with the concrete example of the person who starts to say "Red is a color" and cuts themselves off and says "Red is what that stop sign and that fire engine have in common." What did they do on the 5-second level?
4a. Try to remember a memory which matches that abstract thing you just said.
4b. Try to invent a specific hypothetical scenario which matches that abstract thing you just said.
4c. Ask why you said the abstract thing in the first place and see if that suggests anything.
If you are thinking on this level of granularity, then you're much more likely to come up with a good method for teaching the skill "be specific", because you'll know that whatever exercise you come up with, it ought to cause people's minds to go through events 1-4, and provide examples or feedback to train perception 0.
Next example of thinking on the 5-second scale: I previously asked some people (especially from the New York LW community) the question "What makes rationalists fun to be around?", i.e., why is it that once you try out being in a rationalist community you can't bear the thought of going back? One of the primary qualities cited was "Being non-judgmental." Two different people came up with that exact phrase, but it struck me as being not precisely the right description - rationalists go around judging and estimating and weighing things all the time. (Noticing small discordances in an important description, and reacting by trying to find an exact description, is another one of those 5-second skills.) So I pondered, trying to come up with a more specific image of exactly what it was we weren't doing, i.e. Being Specific, and after further visualization it occurred to me that a better description might be something like this: If you are a fellow member of my rationalist community and you come up with a proposal that I disagree with - like "We should all practice lying, so that we feel less pressure to believe things that sound good to endorse out loud" - then I may argue with the proposal on consequentialist grounds. I may judge. But I won't start saying in immense indignation what a terrible person you must be for suggesting it.
Now I could try to verbally define exactly what it is we don't do, but this would fail to approach the 5-second level, and probably also fail to get at the real quality that's important to rationalist communities. That would merely be another attempt to legislate what people are or aren't allowed to say, and that would make things less fun. There'd be a new accusation to worry about if you said the wrong thing - "Hey! Good rationalists don't do that!" followed by a debate that wouldn't be experienced as pleasant for anyone involved.
In this case I think it's actually easier to define the thing-we-avoid on the 5-second level. Person A says something that Person B disagrees with, and now in Person B's mind there's an option to go in the direction of a certain poisonous pleasure, an opportunity to experience an emotional burst of righteous indignation and a feeling of superiority, a chance to castigate the other person. On the 5-second level, Person B rejects this temptation, and instead invokes the procedure of (a) pausing to reflect and then (b) talking about the consequences of A's proposed policy in a tone that might perhaps be worried (for the way of rationality is not to refuse all emotion) but nonetheless is not filled with righteous outrage and indignation which demands that all others share that indignation or be likewise castigated.
(Which in practice, makes a really huge difference in how much rationalists can relax when they are around fellow rationalists. It's the difference between having to carefully tiptoe through a minefield and being free to run and dance, knowing that even if you make a mistake, it won't socially kill you. You're even allowed to say "Oops" and change your mind, if you want to backtrack (but that's a whole 'nother topic of 5-second skills)...)
The point of 5-second-level analysis is that to teach the procedural habit, you don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers (by which the indignant demand that others agree with their indignation), which is unfortunately how I tended to write back when I was writing the original Less Wrong sequences. Rather you try to come up with exercises which, if people go through them, causes them to experience the 5-second events - to feel the temptation to indignation, and to make the choice otherwise, and to associate alternative procedural patterns such as pausing, reflecting, and asking "What is the evidence?" or "What are the consequences?"
What would be an exercise which develops that habit? I don't know, although it's worth noting that a lot of traditional rationalists not associated with LW also have this skill, and that it seems fairly learnable by osmosis from watching other people in the community not be indignant. One method that seems worth testing would be to expose people to assertions that seem like obvious temptations to indignation, and get them to talk about evidence or consequences instead. Say, you propose that eating one-month-old human babies ought to be legal, because one-month-old human babies aren't as intelligent as pigs, and we eat pigs. Or you could start talking about feminism, in which case you can say pretty much anything and it's bound to offend someone. (Did that last sentence offend you? Pause and reflect!) The point being, not to persuade anyone of anything, but to get them to introspectively recognize the moment of that choice between indignation and not-indignation, and walk them through an alternative response, so they store and associate that procedural skill. The exercise might fail if the context of a school-exercise meant that the indignation never got started - if the temptation/choice were never experienced. But we could try that teaching method, at any rate.
(There's this 5-second skill where you respond to mental uncertainty about whether or not something will work, by imagining testing it; and if it looks like you can just go test something, then the thought occurs to you to just go test it. To teach this skill, we might try showing people a list of hypotheses and asking them to quickly say on a scale of 1-10 how easy they look to test, because we're trying to teach people a procedural habit of perceptually considering the testableness of ideas. You wouldn't give people lots of time to think, because then that teaches a procedure of going through complex arguments about testability, which you wouldn't use routinely in real life and would end up associating primarily to a school-context where a defensible verbal argument is expected.)
I should mention, at this point, that learning to see the 5-second level draws heavily on the introspective skill of visualizing mental events in specific detail, and maintaining that introspective image in your mind's eye for long enough to reflect on it and analyze it. This may take practice, so if you find that you can't do it right away, instinctively react by feeling that you need more practice to get to the lovely reward, instead of instinctively giving up.
Has everyone learned from these examples a perceptual recognition of what the "5-second level" looks like? Of course you have! You've even installed a mental habit that when you or somebody else comes up with a supposedly 5-second-level description, you automatically inspect each part of the description to see if it contains any block units like "Be specific" which are actually high-level chunks.
Now, as your exercise for learning the skill of "Resolving cognitive events to the 5-second level", take a rationalist skill you think is important (or pick a random LW post from How To Actually Change Your Mind); come up with a concrete example of that skill being used successfully; decompose that usage to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures etcetera; check your description to make sure that each part of it can be visualized as a concrete mental process and that there are no non-actionable abstract chunks; come up with a teaching exercise which seems like it ought to cause those sub-5-second events to occur in people's minds; and then post your analysis and proposed exercise in the comments. Hope to hear from you soon!