As you may know, the Center for Applied Rationality has run several workshops, each teaching content similar to that in the core sequences, but made more practical and broken down into fine-grained habits.
Below is the checklist of rationality habits we have been using in the minicamps' opening session. It was co-written by Eliezer, myself, and a number of others at CFAR. As mentioned below, the goal is not to assess how "rational" you are, but, rather, to compile a personal shopping list of habits to consider developing. We generated it by asking ourselves, not what rationality content it's useful to understand, but what rationality-related actions (or thinking habits) it's useful to actually do.
I hope you find it useful; I certainly have. Comments and suggestions are most welcome; it remains a work in progress. (It's also available as a pdf.)
---
This checklist is meant for your personal use so you can have a wish-list of rationality habits, and so that you can see if you're acquiring good habits over the next year—it's not meant to be a way to get a 'how rational are you?' score, but, rather, a way to notice specific habits you might want to develop. For each item, you might ask yourself: when did you last use this habit?
- Never
- Today/yesterday
- Last week
- Last month
- Last year
- Before the last year
- Reacting to evidence / surprises / arguments you haven't heard before; flagging beliefs for examination.
- When I see something odd - something that doesn't fit with what I'd ordinarily expect, given my other beliefs - I successfully notice, promote it to conscious attention and think "I notice that I am confused" or some equivalent thereof. (Example: You think that your flight is scheduled to depart on Thursday. On Tuesday, you get an email from Travelocity advising you to prepare for your flight “tomorrow”, which seems wrong. Do you successfully raise this anomaly to the level of conscious attention? Based on the experience of an actual LWer who failed to notice confusion at this point and missed their plane flight.)
- When somebody says something that isn't quite clear enough for me to visualize, I notice this and ask for examples. (Recent example from Eliezer: A mathematics student said they were studying "stacks". I asked for an example of a stack. They said that the integers could form a stack. I asked for an example of something that was not a stack.) (Recent example from Anna: Cat said that her boyfriend was very competitive. I asked her for an example of "very competitive." She said that when he’s driving and the person next to him revs their engine, he must be the one to leave the intersection first—and when he’s the passenger he gets mad at the driver when they don’t react similarly.)
- I notice when my mind is arguing for a side (instead of evaluating which side to choose), and flag this as an error mode. (Recent example from Anna: Noticed myself explaining to myself why outsourcing my clothes shopping does make sense, rather than evaluating whether to do it.)
- I notice my mind flinching away from a thought; and when I notice, I flag that area as requiring more deliberate exploration. (Recent example from Anna: I have a failure mode where, when I feel socially uncomfortable, I try to make others feel mistaken so that I will feel less vulnerable. Pulling this thought into words required repeated conscious effort, as my mind kept wanting to just drop the subject.)
- I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, "This point doesn't change the fixed amount of money we raised in past years, so it is good news because it implies that we can fix the strategy and do better next year.")
- Questioning and analyzing beliefs (after they come to your attention).
- I notice when I'm not being curious. (Recent example from Anna: Whenever someone criticizes me, I usually find myself thinking defensively at first, and have to visualize the world in which the criticism is true, and the world in which it's false, to convince myself that I actually want to know. For example, someone criticized us for providing inadequate prior info on what statistics we'd gather for the Rationality Minicamp; and I had to visualize the consequences of [explaining to myself, internally, why I couldn’t have done any better given everything else I had to do], vs. the possible consequences of [visualizing how it might've been done better, so as to update my action-patterns for next time], to snap my brain out of defensive-mode and into should-we-do-that-differently mode.)
- I look for the actual, historical causes of my beliefs, emotions, and habits; and when doing so, I can suppress my mind's search for justifications, or set aside justifications that weren't the actual, historical causes of my thoughts. (Recent example from Anna: When it turned out that we couldn't rent the Minicamp location I thought I was going to get, I found lots and lots of reasons to blame the person who was supposed to get it; but realized that most of my emotion came from the fear of being blamed myself for a cost overrun.)
- I try to think of a concrete example that I can use to follow abstract arguments or proof steps. (Classic example: Richard Feynman being disturbed that Brazilian physics students didn't know that a "material with an index" meant a material such as water. If someone talks about a proof over all integers, do you try it with the number 17? If your thoughts are circling around your roommate being messy, do you try checking your reasoning against the specifics of a particular occasion when they were messy?)
- When I'm trying to distinguish between two (or more) hypotheses using a piece of evidence, I visualize the world where hypothesis #1 holds, and try to consider the prior probability I'd have assigned to the evidence in that world; then I visualize the world where hypothesis #2 holds, and see if the evidence seems more likely or more specifically predicted in one world than the other. (A short numerical sketch of this appears at the end of the checklist.) (Historical example: During the Amanda Knox murder case, after many hours of police interrogation, Amanda Knox turned some cartwheels in her cell. The prosecutor argued that she was celebrating the murder. Would you, confronted with this argument, try to come up with a way to make the same evidence fit her innocence? Or would you first try visualizing an innocent detainee, then a guilty detainee, to ask with what frequency you think such people turn cartwheels during detention, to see if the likelihoods were skewed in one direction or the other?)
- I try to consciously assess prior probabilities and compare them to the apparent strength of evidence. (Recent example from Eliezer: Used it in a conversation about apparent evidence for parapsychology, saying that for this I wanted p < 0.0001, like they use in physics, rather than p < 0.05, before I started paying attention at all.)
- When I encounter evidence that's insufficient to make me "change my mind" (substantially change beliefs/policies), but is still more likely to occur in world X than world Y, I try to update my probabilities at least a little. (Recent example from Anna: Realized I should somewhat update my beliefs about being a good driver after someone else knocked off my side mirror, even though it was legally and probably actually their fault—even so, the accident is still more likely to occur in worlds where my bad-driver parameter is higher.)
- Handling inner conflicts; when different parts of you are pulling in different directions, you want different things that seem incompatible; responses to stress.
- I notice when I and my brain seem to believe different things (a belief-vs-anticipation divergence), and when this happens I pause and ask which of us is right. (Recent example from Anna: Jumping off the Stratosphere Hotel in Las Vegas in a wire-guided fall. I knew it was safe based on 40,000 data points of people doing it without significant injury, but to persuade my brain I had to visualize 2 times the population of my college jumping off and surviving. Also, my brain sometimes seems much more pessimistic, especially about social things, than I am, and is almost always wrong.)
- When facing a difficult decision, I try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing it. (Recent example from Anna's brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))
- When facing a difficult decision, I check which considerations are consequentialist - which considerations are actually about future consequences. (Recent example from Eliezer: I bought a $1400 mattress in my quest for sleep, over the Internet hence much cheaper than the mattress I tried in the store, but non-returnable. When the new mattress didn't seem to work too well once I actually tried sleeping nights on it, this was making me reluctant to spend even more money trying another mattress. I reminded myself that the $1400 was a sunk cost rather than a future consequence, and didn't change the importance and scope of future better sleep at stake (occurring once per day and a large effect size each day).)
- What you do when you find your thoughts, or an argument, going in circles or not getting anywhere.
- I try to find a concrete prediction that the different beliefs, or different people, definitely disagree about, just to make sure the disagreement is real/empirical. (Recent example from Michael Smith: Someone was worried that rationality training might be "fake", and I asked if they could think of a particular prediction they'd make about the results of running the rationality units, that was different from mine, given that it was "fake".)
- I try to come up with an experimental test, whose possible results would either satisfy me (if it's an internal argument) or that my friends can agree on (if it's a group discussion). (This is how we settled the running argument over what to call the Center for Applied Rationality—Julia went out and tested alternate names on around 120 people.)
- If I find my thoughts circling around a particular word, I try to taboo the word, i.e., think without using that word or any of its synonyms or equivalent concepts. (E.g. wondering whether you're "smart enough", whether your partner is "inconsiderate", or if you're "trying to do the right thing".) (Recent example from Anna: Advised someone to stop spending so much time wondering if they or other people were justified; was told that they were trying to do the right thing; and asked them to taboo the word 'trying' and talk about how their thought-patterns were actually behaving.)
- Noticing and flagging behaviors (habits, strategies) for review and revision.
- I consciously think about information-value when deciding whether to try something new, or investigate something that I'm doubtful about. (Recent example from Eliezer: Ordering a $20 exercise ball to see if sitting on it would improve my alertness and/or back muscle strain.) (Non-recent example from Eliezer: After several months of procrastination, and due to Anna nagging me about the value of information, finally trying out what happens when I write with a paired partner; and finding that my writing productivity went up by a factor of four, literally, measured in words per day.)
- I quantify consequences—how often, how long, how intense. (Recent example from Anna: When we had Julia take on the task of figuring out the Center's name, I worried that a certain person would be offended by not being in control of the loop, and had to consciously evaluate how improbable this was, how little he'd probably be offended, and how short the offense would probably last, to get my brain to stop worrying.) (Plus 3 real cases we've observed in the last year: Someone switching careers is afraid of what a parent will think, and has to consciously evaluate how much emotional pain the parent will experience, for how long before they acclimate, to realize that this shouldn't be a dominant consideration.)
- Revising strategies, forming new habits, implementing new behavior patterns.
- I notice when something is negatively reinforcing a behavior I want to repeat. (Recent example from Anna: I noticed that every time I hit 'Send' on an email, I was visualizing all the ways the recipient might respond poorly or something else might go wrong, negatively reinforcing the behavior of sending emails. I've (a) stopped doing that and (b) installed a habit of smiling each time I hit 'Send' (which provides my brain a jolt of positive reinforcement). This has resulted in strongly reduced procrastination about emails.)
- I talk to my friends or deliberately use other social commitment mechanisms on myself. (Recent example from Anna: Using grapefruit juice to keep up brain glucose, I had some juice left over when work was done. I looked at Michael Smith and jokingly said, "But if I don't drink this now, it will have been wasted!" to prevent the sunk cost fallacy.) (Example from Eliezer: When I was having trouble getting to sleep, I (a) talked to Anna about the dumb reasoning my brain was using for staying up later, and (b) set up a system with Luke where I put a '+' in my daily work log every night I showered by my target time for getting to sleep on schedule, and a '−' every time I didn't.)
- To establish a new habit, I reward my inner pigeon for executing the habit. (Example from Eliezer: Multiple observers reported a long-term increase in my warmth / niceness several months after... 3 repeats of 4-hour writing sessions during which, in passing, I was rewarded with an M&M (and smiles) each time I complimented someone, i.e., remembered to say out loud a nice thing I thought.) (Recent example from Anna: Yesterday I rewarded myself using a smile and happy gesture for noticing that I was doing a string of low-priority tasks without doing the metacognition for putting the top priorities on top. Noticing a mistake is a good habit, which I’ve been training myself to reward, instead of just feeling bad.)
- I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)
- I use the outside view on myself. (Recent example from Anna: I like to call my parents once per week, but hadn't done it in a couple of weeks. My brain said, "I shouldn't call now because I'm busy today." My other brain replied, "Outside view, is this really an unusually busy day and will we actually be less busy tomorrow?")
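As a concrete illustration of the "distinguishing hypotheses" item above (the one referencing the Amanda Knox case), here is a minimal Python sketch with made-up numbers; the point is only that the update is driven by the ratio of the two likelihoods, which you have to estimate for both worlds, not just the one your mind is arguing for.

```python
# A minimal sketch (made-up numbers) of the habit above: ask how probable the
# evidence is under each hypothesis, instead of asking whether a story can be
# told that fits it.

def posterior_odds(prior_odds, p_evidence_given_h1, p_evidence_given_h2):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_evidence_given_h1 / p_evidence_given_h2)

# Hypothetical guesses for the cartwheel example: suppose you think 10% of
# guilty detainees and 5% of innocent detainees would do something like
# turning cartwheels under stress, and your prior odds of guilt are 1:4.
odds = posterior_odds(prior_odds=1 / 4,
                      p_evidence_given_h1=0.10,   # P(cartwheels | guilty), assumed
                      p_evidence_given_h2=0.05)   # P(cartwheels | innocent), assumed
print(odds)  # 0.5, i.e. 1:2 odds of guilt -- a modest update, far from a conviction
```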
This may be the single most useful thing I've ever read on LessWrong. Thank you very, very much for posting it.
Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.
Often, when I am procrastinating, I find that the source of my procrastination is a feeling of being overwhelmed. In particular, I don't know where to begin on a task, or I do but the task feels like a huge obstacle towering over me. So when I think about the task, I feel a crushing sense of being overwhelmed; the way I escape this feeling is by procrastination (i.e. avoiding the source of the feeling altogether).
When I notice myself doing this, I try to break the problem down into a sequence of high-level subtasks, usually in the form of a to-do list. Emotionally/metaphorically, instead of having to cross the obstacle in one giant leap, I can climb a ladder over it, one step at a time. (If the subtasks continue to be intimidating, I just apply this solution recursively, making lists of subsubtasks.)
I picked this strategy up after realizing that the way I approached large programming projects (write the main function, then write each of the subroutines that it calls, etc.) could be applied to life in general. Now I'm about to apply it to the task of writing an NSF fellowship application. =)
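In case it helps, here is a toy Python sketch of that top-down decomposition (the task names are made up): treat the big task as a "main function", stub out its subtasks, and recurse only on the subtasks that still feel overwhelming.

```python
# Toy illustration of the decomposition described above. Task names are
# invented for the example; the structure is the point.

def write_fellowship_application():
    draft_personal_statement()
    draft_research_proposal()
    request_recommendation_letters()

def draft_research_proposal():
    # Still intimidating, so apply the same trick one level down.
    list_candidate_topics()
    outline_proposal()
    write_rough_draft()

# The remaining steps feel small enough to just do, so they stay as single items.
def draft_personal_statement(): print("draft personal statement")
def request_recommendation_letters(): print("request recommendation letters")
def list_candidate_topics(): print("list candidate topics")
def outline_proposal(): print("outline proposal")
def write_rough_draft(): print("write rough draft")

write_fellowship_application()  # prints the flattened to-do list, one step at a time
```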
It's a classic self-help technique (especially in 'Getting Things Done') for a reason: it works.
Very nice list! I feel like this one in particular is one of the most important ones:
To give my own example: I try to be vegetarian, but occasionally the temptation of meat gets the better of me. At some point I realized that whenever I walked past a certain hamburger place - which was something that I typically did on each working day - there was a high risk of me succumbing. Obvious solution: modify my daily routine to take a slightly longer route which avoided any hamburger places. Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.
My personal example: arranging to go exercise on the way to or from somewhere else will drastically increase the probability that I'll actually go. There's a pool a 5 minute bike ride from my house, which is also on the way home from most of the places I would be biking from. Even though the extra 10 minutes round trip is pretty negligible (and counts as exercise itself), I'm probably 2x as likely to go if I have my swim stuff with me and stop off on the way home. The effect is even more drastic for my taekwondo class: it's a 45 minute bike ride from home and about a 15 minute bike ride from the campus where I have most of my classes. Even if I finish class at 3:30 pm and taekwondo is at 7 pm, it still makes more sense for me to stay on campus for the interim; if I do, there's nearly 100% likelihood that I'll make it to taekwondo, but if I go home and get comfy, that drops to less than 50%.
For me this was the biggest insight that dramatically improved my ability to form habits. I don't actually decide things most of the time. Agency is something that only occurs intermittently. Therefore I use my agency on changing what sorts of things I am surrounded by rather than on the tasks themselves. This works because the default state is to simply be the average of what I am surrounded by.
Cliche example: not having junk food in the house improves my diet by making it take additional work to go out and get it.
Awesome list. I'm interested in the way there are 24 questions that are grouped into 6 overarching categories. Do they empirically cluster like this in actual humans? It would be fascinating to get a few hundred responses to each question and do dimensional analysis to see if there is a small number of common core issues that can be communicated and/or adjusted more efficiently :-)
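A minimal sketch of that analysis, on simulated data only (it assumes per-person 0-5 ratings on the 24 items; real survey responses would replace the random matrix), would be to check how much variance the first few principal components explain:

```python
# Sketch of the clustering idea above, using randomly simulated responses
# purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 6, size=(300, 24)).astype(float)  # 300 simulated respondents

centered = responses - responses.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)
print(np.cumsum(explained)[:6])  # cumulative variance explained by the first six components
```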
I'd like to add "noticing when you don't know something." When someone asks you a question, it's surprisingly tempting to try to be helpful and offer them an answer even when you don't have the necessary knowledge to provide an accurate answer. It can be easy to infer what the truth might be and offer that as an answer, without explaining that you're just guessing and don't actually know. (Example: I recently purchased a new television and my co-worker asked me what sort of Parental Controls it offered. I immediately started providing him an answer I had inferred from limited knowledge, and it took me a moment to realize I didn't actually know what I was talking about and instead tell him, "I don't know.")
This is essentially the problem of confabulation mentioned here; in this case it's a confabulation of knowledge about the world, as opposed to confabulating knowledge about the self. In terms of the map/territory analogy, this would be a situation where someone asks you a question about a specific area of your map, and you choose to answer as if that section of your map is perfectly clear to you, even when you know that it's blurry. Don't treat a blurry map as if it were clear!
The example about stacks in 1.2 has a certain irony in context. This requires a small mathematical parenthesis:
A stack is a certain sophisticated type of geometric structure which is increasingly used in algebraic geometry, algebraic topology (and spreading to some corners of differential geometry) to make sense of geometric intuitions and notions on "spaces" which occur "naturally" but are squarely out of the traditional geometric categories (like manifolds, schemes, etc.).
See www.ams.org/notices/200304/what-is.pdf for a very short introduction focusing on the basic example of the moduli of elliptic curves.
The upshot of this vague outlook is that in the relevant fields, everything of interest is a stack (or a more exotic beast like a derived stack), precisely because the notion has been designed to be as general and flexible as possible! So asking someone working on stacks for a good example of something which is not a stack is bound to create a short moment of confusion.
Even if you do not care for stacks (and I wouldn't hold it against you), if you are interested in open source/Internet-based scientific projects, it is worth having a look at the web page of the Stacks project (http://stacks.math.columbia.edu/), a collaborative fully hyperlinked textbook on the topic, which is steadily growing towards the 3500-page mark.
[Edit] But his utility function would predictably change under those circumstances.
I know that I have a status quo bias, hedonic treadmill, and strongly decreasing marginal utility of money (particularly when progressive taxation is factored in).
If I made 2/3 of what I do now, I'd be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now (roughly the factor described in the OP), I'd also be pretty much as happy as I am now, and want more money.
The logical conclusion is that we should lower the weight of salary increases in decisions, the opposite of the conclusion proposed here.
You're burying your argument in the constants 'pretty much' there. You can repeat your argument sorites-style after you have taken the 2/3 salary cut: "Well, if I made 2/3 what I do now, I'd still be 'pretty much as happy' as I am now" and so on and so forth until you have hit sub-poverty wages.
To keep the limits of the log argument in mind, log 50k is 10.8, log (50k+70k) is 11.69, and log 1 billion is 20.7 (natural logs); do you really think if someone handed you a billion dollars and you filled your world-famous days competing with Musk to reach Mars or something insanely awesome like that, you would only be twice as happy as when you were a low-status scrub-monkey making 50k?
Here again more work is necessary. One of the chief suggestions of positive psychology is donating more and buying more fuzzies... and guess what is favored by progressive taxation? Donating.
Of course there are people who are surely making the mistake of over-valuing salaries; but you're going to need to do more work to show you're one of them.
Comparing these numbers tells you pretty much nothing. First of all, taking log($50k) is not a valid operation; you should only ever take logs of a dimensionless quantity. The standard solution is to pick an arbitrary dollar value $X, and compare log($50k/$X), log($120k/$X), and log($10^9/$X). This is equivalent to comparing 10.8 + C, 11.69 + C, and 20.7 + C, where C is an arbitrary constant.
This shouldn't be a surprise, because under the standard definition, utility functions are translation-invariant. They are only compared in cases such as "is U1 better than U2?" or "is U1 better than a 50/50 chance of U2 and U3?" The answer to this question doesn't change if we add a constant to U1, U2, and U3.
In particular, it's invalid to say "U1 is twice as good as U2". For that matter, even if you don't like utility functions, this is suspicious in general: what does it mean to say "I would be twice as happy if I had a million dollars"?
It would make sense to say, if your utility for money is logarithmic and you currently have $50k, that you're indifferent between a 100% chance of an extra $70k and an 8.8% chance of an extra $10^9 -- that being the probability for which the expected utilities are the same. If you think logarithmic utilities are bad, this is the claim you should be refuting.
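For reference, a short sketch reproducing that arithmetic, assuming log utility of total wealth and a starting point of $50k (the same natural-log figures quoted in the parent comments):

```python
# Indifference probability under assumed log utility of total wealth.
import math

wealth = 50_000
gain_certain = math.log(wealth + 70_000) - math.log(wealth)          # ~0.88
gain_jackpot = math.log(wealth + 1_000_000_000) - math.log(wealth)   # ~9.90

# Indifference: p * gain_jackpot == gain_certain
p = gain_certain / gain_jackpot
print(round(p, 3))  # ~0.088, i.e. the 8.8% figure quoted above
```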
Taking logs of a dimensionful quantity is possible, if you know what you're doing. (In math, we make up our own rules: no one is allowed to tell us what we can and cannot do. Whether or not it's useful is another question.) Here's the real scoop:
In physics, we only really and truly care about dimensionless quantities. These are the quantities which do not change when we change the system of units, i.e. they are "invariant". Anything which is not invariant is a purely arbitrary human convention, which doesn't really tell me anything about the world. For example, if I want to know if I fit through a door, I'm only interested in the ratio between my height and the height of the door. I don't really care about how the door compares to some standard meter somewhere, except as an intermediate step in some calculation.
Nevertheless, for practical purposes it is convenient to also consider quantities which transform in a particularly simple way under a change of units systems. Borrowing some terminology from general relativity, we can say that a quantity X is "covariant" if it transforms like X --> (unit1 / unit2 )^p X when we change from unit1 to unit2. Here...
Right, but then log (2 apple) = log 2 + log apple and so forth. This is a perfectly sensible way to think about things as long as you (not you specifically, but the general you) remember that "log apple" transforms additively instead of multiplicatively under a change of coordinates.
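A tiny numerical sketch of that point: the log of a dimensionful quantity shifts by an additive constant under a change of units, rather than rescaling multiplicatively.

```python
# Expressing a length in centimeters instead of meters shifts its log by
# log(100), an additive constant that does not depend on the length itself.
import math

height_m = 1.8
height_cm = height_m * 100
print(math.log(height_cm) - math.log(height_m), math.log(100))  # both ~4.605
```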
You're describing a situation in which politically powerful people become rich, not one in which rich people become politically powerful.
The idea that willpower or thinking depletes brain glucose has been debunked:
http://www.psychologytoday.com/blog/ulterior-motives/201211/is-willpower-energy-or-motivation http://lesswrong.com/r/discussion/lw/ej7/link_motivational_versus_metabolic_effects_of/
Nevertheless, the suggestion of sweets will still work, per your own links. A nice example of how revised theories remain consistent with old observations...
I put the checklist into an Anki deck a week or two ago that I've been reviewing (as cloze deletions). Subjectively it seems to have helped the relevant concepts come more readily to mind, although that could just be the CFAR workshop (though we didn't talk about the checklist then and some of the ideas in the checklist, like social commitment mechanisms, weren't otherwise explicitly mentioned).
This is awesome. I might remove the examples, print out the rest of the list, and read it every morning when I get up and every night before going to sleep. OTOH I have a few quibbles with some examples:
Your math is right but your mother has the right interpretation of the situation. If your friend is dead, calling him does neither of you any good! This is a 29,999 out of 30,000 chance to earn brownie points.
The thing is, it seems quite clear that the problem wasn't about how likely they are to respond badly, but that Anna (?) would visualize and anticipate the negative response beforehand based on no evidence that they would respond poorly, simply as a programmed mental habit. This would end up creating a vicious circle in which the negative feelings from past sends make it even more likely that sending feels bad this time, regardless of the actual reactions.
The tactic of smiling reinforces the action of sending emails instead of terrorizing yourself into never sending emails anymore (which I infer from context would be a bad thing), and once you're rid of the looming vicious circle you can then base your predictions of the reaction on the content of the email, rather than have it be predetermined by your own feelings.
(Obligatory nitpicker's note: I agree with pretty much everything you said, I just didn't think the real event in that example involved a bad decision, as you seemed to imply.)
It's much less pretty than the PDF, but if anyone else wants a spreadsheet with write-in-able blanks, I have made a Google doc.
I have read this post and have not been persuaded that people who follow these steps will lead longer or happier lives (or will cause others to live longer or happier lives). I therefore will make no conscious effort to pay much of any regard to this post, though it is plausible it will have at least a small unconscious effect. I am posting this to fight groupthink and sampling biases, though this post actually does very little against them.
Thanks for posting this. I always enjoy these "in-practice" oriented posts, as I feel they help me check if I truly understand the concepts I learn here, in a similar way that example problems in textbooks check if I know how to correctly apply the material I just read.
I would be interested in an updated checklist. This seems potentially quite useful for a single post.
There are some good ideas here that I can pick up on. Among the things that I already successfully implement, it may sound stupid, but I think of my different brain modules as different people, and have different names for them. That way I can compliment or admonish them without thinking, "Oh..kay, I'm talking to myself?" That makes it easier to remember that I'm not the only one reacting and making the sole decisions, but avoids turning everything into similar-sounding entities (me, myself, I, my brain, my mind, etc.) Example: This morning, I ke...
I'm currently trying to evaluate how to adjust some of these for problems related to mental illness. For example, 4.3:
Whenever I taboo words, I start developing pressured speech, and begin mumbling the tabooed words subconsciously...
What about "when faced with a hard problem, close your eyes, clear your mind and focus your attention on the issue at hand for a few minutes"?
It sounds so very simple, yet I routinely fail to do it: when, e.g., I try to solve some Project Euler problem and don't see a solution in the first few seconds, I do something else for a while, until I finally get a handle on my slippery mind, sit down, and solve the bloody thing.
At some point I started feeling like my bf is more interested in telling me things than having a conversation with me. So I started trying to flag the instances where he did it and the instances where he didn't, and it kinda felt like it matched my feeling since I had several more examples of one than the other. But I didn't document them carefully or anything, so how do I know I'm not falling into the confirmation bias trap? Or is this just the wrong way to handle something that started out as a ... feeling?
Has the checklist been revisited or optimized in any way since its original formulation? (By CFAR or otherwise?)
Why are these rationality habits? Based on what? All the examples are personal. Isn't it possible to also give a scientific example for each habit: study ..... shows that ...., hence (1) the habit is useful for dealing with this bias and (2) it doesn't create or reinforce other biases?
Looks like a very useful list. One comment: I found the example in 2(a) a bit complicated and very difficult to parse.
Something to add: allocating attention in the correct order:
Otherwise you have the failure mode of avoiding painful emotions (even if they're being triggered erroneously) and then all sorts of bad things happen. So check in with (1) before (2) and (3). And check in with (2) before applying (3), because otherwise you're using cached thoughts.
The PDF version is very nice looking and very readable, thanks for making it. I think people on here often underestimate the benefits of low hanging aesthetic fruit.
I just joined the community, how can I save or mark this article so it is available for me to read at anytime?
I really appreciate having the examples in parentheses and italicised. It lets me easily skip them when I know what you mean. I wish others would do this.
Great list. My guide post for rationality and related issues has been the works of Carl Sagan, as he had many books and good advice for thinking critically. His works are an absolute must read (or watch) for anybody wanting to wade through the mass of misdirection that exists in the world.
This all sounds quite groovy, but are there any suggestions on how I could go about incorporating them into my daily pattern of thought? I wonder if perhaps an Anki deck would have any merit whatsoever in accomplishing this...
Another one: You see a way to do things that in theory might work better than what everyone else is doing, but in practice no one seems to use. Do you investigate it and consider exploiting it?
Example: You're trying to get karma on reddit. You notice that http://www.reddit.com/r/randomization/ has almost a million subscribers but no new submissions in the past two months. Do you think "hm, that's weird" and keep looking for a subreddit to submit your link in, or do you think "oh wow, karma feast!"?
Maybe it's worth a poll, if someone feels like creating one. I'm not sure how to make a multi-level poll and it probably would be too presumptuous of me to create 24 replies with one poll in each.
The Checklist Manifesto is very interesting about what goes into an excellent checklist rather than a casually constructed checklist. It's about institutional checklists rather than personal checklists, though.