This may be the single most useful thing I've ever read on LessWrong. Thank you very, very much for posting it.
Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.
Often, when I am procrastinating, I find that the source of my procrastination is a feeling of being overwhelmed. In particular, I don't know where to begin on a task, or I do but the task feels like a huge obstacle towering over me. So when I think about the task, I feel a crushing sense of being overwhelmed; the way I escape this feeling is by procrastination (i.e. avoiding the source of the feeling altogether).
When I notice myself doing this, I try to break the problem down into a sequence of high-level subtasks, usually in the form of a to-do list. Emotionally/metaphorically, instead of having to cross the obstacle in one giant leap, I can climb a ladder over it, one step at a time. (If the subtasks continue to be intimidating, I just apply this solution recursively, making lists of subsubtasks.)
I picked this strategy up after realizing that the way I approached large programming projects (write the main function, then write each of the subroutines that it calls, etc.) could be applied to life in general. Now I'm about to apply it to the task of writing an NSF fellowship application. =)
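For fellow programmers, here's a toy sketch of that top-down decomposition; the task names and the `subtasks_of` tree are made up for illustration, not anyone's actual workflow:

```python
# Toy sketch: handle a task by either doing it directly (if it has no
# subtasks) or splitting it up and recursing, exactly like writing main()
# first and then each subroutine it calls.
def complete(task, subtasks_of):
    """Return a flat to-do list of smallest actionable steps, in order."""
    children = subtasks_of.get(task, [])
    if not children:                  # small enough to just do
        return [task]
    steps = []
    for sub in children:              # climb the ladder one rung at a time
        steps.extend(complete(sub, subtasks_of))
    return steps

plan = {
    "write NSF application": ["draft personal statement", "draft research plan"],
    "draft research plan": ["outline aims", "write methods section"],
}
print(complete("write NSF application", plan))
# ['draft personal statement', 'outline aims', 'write methods section']
```

Each intimidating node dissolves into leaves you can actually start on, which is the whole emotional point of the ladder metaphor.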
Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.
It's a classic self-help technique (especially in 'Getting Things Done') for a reason: it works.
Very nice list! I feel like this one in particular is one of the most important ones:
I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)
To give my own example: I try to be vegetarian, but occasionally the temptation of meat gets the better of me. At some point I realized that whenever I walked past a certain hamburger place - which was something that I typically did on each working day - there was a high risk of me succumbing. Obvious solution: modify my daily routine to take a slightly longer route which avoided any hamburger places. Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.
Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.
My personal example: arranging to go exercise on the way to or from somewhere else will drastically increase the probability that I'll actually go. There's a pool a 5 minute bike ride from my house, which is also on the way home from most of the places I would be biking from. Even though the extra 10 minutes round trip is pretty negligible (and counts as exercise itself), I'm probably 2x as likely to go if I have my swim stuff with me and stop off on the way home. The effect is even more drastic for my taekwondo class: it's a 45 minute bike ride from home and about a 15 minute bike ride from the campus where I have most of my classes. Even if I finish class at 3:30 pm and taekwondo is at 7 pm, it still makes more sense for me to stay on campus for the interim; if I do, there's nearly 100% likelihood that I'll make it to taekwondo, but if I go home and get comfy, that drops to less than 50%.
For me this was the biggest insight that dramatically improved my ability to form habits. I don't actually decide things most of the time. Agency is something that only occurs intermittently. Therefore I use my agency on changing what sorts of things I am surrounded by rather than on the tasks themselves. This works because the default state is to simply be the average of what I am surrounded by.
Cliche example: not having junk food in the house improves my diet by making it take additional work to go out and get it.
Awesome list. I'm interested in the way there are 24 questions that are grouped into 6 overarching categories. Do they empirically cluster like this in actual humans? It would be fascinating to get a few hundred responses to each question and do dimensional analysis to see if there is a small number of common core issues that can be communicated and/or adjusted more efficiently :-)
I'd like to add "noticing when you don't know something." When someone asks you a question, it's surprisingly tempting to try to be helpful and offer them an answer even when you don't have the necessary knowledge to provide an accurate one. It can be easy to infer what the truth might be and offer that as an answer, without explaining that you're just guessing and don't actually know. (Example: I recently purchased a new television and my co-worker asked me what sort of Parental Controls it offered. I immediately started providing him an answer I had inferred from limited knowledge, and it took me a moment to realize I didn't actually know what I was talking about and instead tell him, "I don't know.")
This is essentially the problem of confabulation mentioned here; in this case it's a confabulation of knowledge about the world, as opposed to confabulating knowledge about the self. In terms of the map/territory analogy, this would be a situation where someone asks you a question about a specific area of your map, and you choose to answer as if that section of your map is perfectly clear to you, even when you know that it's blurry. Don't treat a blurry map as if it were clear!
The example about stacks in 1.2 has a certain irony in context. This requires a small mathematical parenthesis:
A stack is a certain sophisticated type of geometric structure which is increasingly used in algebraic geometry, algebraic topology (and spreading to some corners of differential geometry) to make sense of geometric intuitions and notions on "spaces" which occur "naturally" but are squarely out of the traditional geometric categories (like manifolds, schemes, etc.).
See www.ams.org/notices/200304/what-is.pdf for a very short introduction focusing on the basic example of the moduli of elliptic curves.
The upshot of this vague outlook is that in the relevant fields, everything of interest is a stack (or a more exotic beast like a derived stack), precisely because the notion has been designed to be as general and flexible as possible! So asking someone working on stacks for a good example of something which is not a stack is bound to create a short moment of confusion.
Even if you do not care for stacks (and I wouldn't hold it against you), if you are interested in open source/Internet-based scientific projects, it is worth having a look at the web page of the Stacks project (http://stacks.math.columbia.edu/), a collaborative fully hyperlinked textbook on the topic, which is steadily growing towards the 3500 pages mark.
he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.)
[Edit] But his utility function would predictably change under those circumstances.
I know that I have a status quo bias, hedonic treadmill, and strongly decreasing marginal utility of money (particularly when progressive taxation is factored in).
If I made 2/3 of what I do now, I'd be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now (roughly the factor described in the OP), I'd also be pretty much as happy as I am now, and want more money.
The logical conclusion is that we should lower the weight of salary increases in decisions, the opposite of the conclusion proposed here.
If I made 2/3 of what I do now, I'd be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now, I'd also be pretty much as happy as I am now, and want more money.
You're burying your argument in the constants 'pretty much' there. You can repeat your argument sorites-style after you have taken the 2/3 salary cut: "Well, if I made 2/3 what I do now, I'd still be 'pretty much as happy' as I am now" and so on and so forth until you have hit sub-poverty wages.
To keep the limits of the log argument in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7; do you really think if someone handed you a billion dollars and you filled your world-famous days competing with Musk to reach Mars or something insanely awesome like that, you would only be twice as happy as when you were a low-status scrub-monkey making 50k?
(particularly when progressive taxation is factored in).
Here again more work is necessary. One of the chief suggestions of positive psychology is donating more and buying more fuzzies... and guess what is favored by progressive taxation? Donating.
The logical conclusion is that I should lower the weight of salary increases in decisions, the opposite of the conclusion proposed here.
Of course there are people who are surely making the mistake of over-valuing salaries; but you're going to need to do more work to show you're one of them.
To keep the limits of the log argument in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7
Comparing these numbers tells you pretty much nothing. First of all, taking log($50k) is not a valid operation; you should only ever take logs of a dimensionless quantity. The standard solution is to pick an arbitrary dollar value $X, and compare log($50k/$X), log($120k/$X), and log($10^9/$X). This is equivalent to comparing 10.8 + C, 11.69 + C, and 20.7 + C, where C is an arbitrary constant.
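A quick numerical illustration of the arbitrary-constant point, using a hypothetical dollars-to-cents unit change: every log shifts by the same amount, so no comparison between options is affected.

```python
import math

# The same three amounts measured in dollars and in cents. Changing units
# multiplies each amount by 100, which shifts every log by the identical
# constant log(100) -- so orderings and differences are unchanged.
incomes = [50_000, 120_000, 10**9]                 # in dollars
logs_dollars = [math.log(x) for x in incomes]
logs_cents = [math.log(100 * x) for x in incomes]  # same amounts in cents

diffs = [c - d for c, d in zip(logs_cents, logs_dollars)]
print(all(abs(d - math.log(100)) < 1e-9 for d in diffs))  # True: a uniform shift
```

This is exactly why statements like "twice as happy" don't survive the unit change, while statements like "the utility gap from $50k to $120k is X" do.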
This shouldn't be a surprise, because under the standard definition, utility functions are translation-invariant. They are only compared in cases such as "is U1 better than U2?" or "is U1 better than a 50/50 chance of U2 and U3?" The answer to this question doesn't change if we add a constant to U1, U2, and U3.
In particular, it's invalid to say "U1 is twice as good as U2". For that matter, even if you don't like utility functions, this is suspicious in general: what does it mean to say "I would be twice as happy if I had a million dollars"?
It would make sense to say, if your utility for money is logarithmic and you currently have $50k, that you're indifferent between a 100% chance of an extra $70k and a 8.8% chance of an extra $10^9 -- that being the probability for which the expected utilities are the same. If you think logarithmic utilities are bad, this is the claim you should be refuting.
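For anyone who wants to check that arithmetic, here's a quick sketch of the indifference calculation, assuming natural logs and $50k starting wealth (both assumptions mine, matching the figures quoted upthread):

```python
import math

# Indifference between a sure extra $70k and a probability p of an extra
# $10^9, under log utility: set the expected utilities equal, solve for p.
wealth, sure_gain, jackpot = 50_000, 70_000, 10**9

u_sure = math.log(wealth + sure_gain)
p = (u_sure - math.log(wealth)) / (math.log(wealth + jackpot) - math.log(wealth))
print(round(p, 3))  # 0.088, i.e. the 8.8% figure above
```

Note that the arbitrary constant C cancels out of both sides of the equation, which is why this comparison is well-defined even though "log $50k" on its own is not.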
Taking logs of a dimensionful quantity is possible, if you know what you're doing. (In math, we make up our own rules: no one is allowed to tell us what we can and cannot do. Whether or not it's useful is another question.) Here's the real scoop:
In physics, we only really and truly care about dimensionless quantities. These are the quantities which do not change when we change the system of units, i.e. they are "invariant". Anything which is not invariant is a purely arbitrary human convention, which doesn't really tell me anything about the world. For example, if I want to know if I fit through a door, I'm only interested in the ratio between my height and the height of the door. I don't really care about how the door compares to some standard meter somewhere, except as an intermediate step in some calculation.
Nevertheless, for practical purposes it is convenient to also consider quantities which transform in a particularly simple way under a change of units systems. Borrowing some terminology from general relativity, we can say that a quantity X is "covariant" if it transforms like X --> (unit1 / unit2 )^p X when we change from unit1 to unit2. Here...
Right, but then log (2 apple) = log 2 + log apple and so forth. This is a perfectly sensible way to think about things as long as you (not you specifically, but the general you) remember that "log apple" transforms additively instead of multiplicatively under a change of coordinates.
You're describing a situation in which politically powerful people become rich, not one in which rich people become politically powerful.
Recent example from Anna: Using grapefruit juice to keep up brain glucose, I had
The idea that will power or thinking depletes brain glucose has been debunked:
http://www.psychologytoday.com/blog/ulterior-motives/201211/is-willpower-energy-or-motivation http://lesswrong.com/r/discussion/lw/ej7/link_motivational_versus_metabolic_effects_of/
But nevertheless, the suggestion of sweets will still work per your own links. A nice example of how revised theories remain consistent with old observations...
I put the checklist into an Anki deck a week or two ago that I've been reviewing (as cloze deletions). Subjectively it seems to have helped the relevant concepts come more readily to mind, although that could just be the CFAR workshop (though we didn't talk about the checklist then and some of the ideas in the checklist, like social commitment mechanisms, weren't otherwise explicitly mentioned).
This is awesome. I might remove the examples, print out the rest of the list, and read it every morning when I get up and every night before going to sleep. OTOH I have a few quibbles with some examples:
...Recent example from Anna: Jumping off the Stratosphere Hotel in Las Vegas in a wire-guided fall. I knew it was safe based on 40,000 data points of people doing it without significant injury, but to persuade my brain I had to visualize 2 times the population of my college jumping off and surviving. Also, my brain sometimes seems much more pessimistic, esp
my mother told me “you should call [your friend who's there] and ask him if he's all right”, and I answered “there are 10 million people in London, so the probability that he was involved is about 1 in 30,000, which is less than the probability that he would die naturally in...”; my mother called me heartless before I even finished the sentence.
Your math is right but your mother has the right interpretation of the situation. If your friend is dead, calling him does neither of you any good! This is a 29,999 out of 30,000 chance to earn brownie points.
Huh, no. If they are likely to respond badly, I want to believe they are likely to respond badly. If they aren't likely to respond badly, I want to believe they aren't likely to respond badly. What is true is already so; owning up to it doesn't make it worse. The solution to that problem is to think twice, re-read the email, and think about ways to make it less likely to be interpreted in an unintended way before hitting Send.
The thing is, it seems quite clear that the problem wasn't about how likely they are to respond badly, but that Anna (?) would visualize and anticipate the negative response beforehand based on no evidence that they would respond poorly, simply as a programmed mental habit. This would end up creating a vicious circle where each time the negatives from past times make it even more likely that this time it feels bad, regardless of the actual reactions.
The tactic of smiling reinforces the action of sending emails instead of terrorizing yourself into never sending emails anymore (which I infer from context would be a bad thing), and once you're rid of the looming vicious circle you can then base your predictions of the reaction on the content of the email, rather than have it be predetermined by your own feelings.
(Obligatory nitpicker's note: I agree with pretty much everything you said, I just didn't think that the real event in that example had a bad decision as you seemed to imply.)
It's much less pretty than the PDF, but if anyone else wants a spreadsheet with write-in-able blanks, I have made a Google doc.
I have read this post and have not been persuaded that people who follow these steps will lead longer or happier lives (or will cause others to live longer or happier lives). I therefore will make no conscious effort to pay much of any regard to this post, though it is plausible it will have at least a small unconscious effect. I am posting this to fight groupthink and sampling biases, though this post actually does very little against them.
Thanks for posting this. I always enjoy these "in-practice" oriented posts, as I feel they help me check if I truly understand the concepts I learn here, in a similar way that example problems in textbooks check if I know how to correctly apply the material I just read.
I would be interested in an updated checklist. This seems potentially quite useful for a single post.
There are some good ideas here that I can pick up on. Among the things that I already successfully implement, it may sound stupid, but I think of my different brain modules as different people, and have different names for them. That way I can compliment or admonish them without thinking, "Oh..kay, I'm talking to myself?" That makes it easier to remember that I'm not the only one reacting and making the sole decisions, but avoids turning everything into similar-sounding entities (me, myself, I, my brain, my mind, etc.) Example: This morning, I ke...
I'm currently trying to evaluate how to adjust some of these for problems related to mental illness. For example, 4.3:
If I find my thoughts circling around a particular word, I try to taboo the word, i.e., think without using that word or any of its synonyms or equivalent concepts. (E.g. wondering whether you're "smart enough", whether your partner is "inconsiderate", or if you're "trying to do the right thing".)
Whenever I taboo words, I start developing pressured speech, and begin mumbling the tabooed words subconsciously...
What about "when faced with a hard problem, close your eyes, clear your mind and focus your attention for a few minutes to the issue at hand"?
It sounds so very simple, yet I routinely fail to do it. When I try to solve some Project Euler problem or another and don't see a solution in the first few seconds, I do something else for a while, until I finally get a handle on my slippery mind, sit down, and solve the bloody thing.
At some point I started feeling like my bf is more interested in telling me things than having a conversation with me. So I started trying to flag the instances where he did it and the instances where he didn't, and it kinda felt like it matched my feeling, since I had several more examples of one than the other. But I didn't document them carefully or anything, so how do I know I'm not falling into the confirmation bias trap? Or is this just the wrong way to handle something that started out as a ... feeling?
Has the checklist been revisited or optimized in any way since its original formulation? (By CFAR or otherwise?)
Why are these rationality habits? Based on what? All the examples are personal. Isn't it possible to also give scientific examples for each habit: study ..... shows that ....., hence (1) the habit is useful for dealing with this bias, and (2) it doesn't create or reinforce other biases.
Looks like a very useful list. One comment: I found the example in 2(a) a bit complicated and very difficult to parse.
Something to add: allocating attention in the correct order:
Otherwise you have the failure mode of avoiding painful emotions (even if they're being triggered erroneously) and then all sorts of bad things happen. So check in with (1) before (2) and (3). And check in with (2) before applying (3), because otherwise you're using cached thoughts.
The PDF version is very nice looking and very readable, thanks for making it. I think people on here often underestimate the benefits of low hanging aesthetic fruit.
I just joined the community, how can I save or mark this article so it is available for me to read at anytime?
I really appreciate having the examples in parentheses and italicised. It lets me easily skip them when I know what you mean. I wish others would do this.
Great list. My guidepost for rationality and related issues has been the works of Carl Sagan, as he had many books and good advice for thinking critically. His works are an absolute must read (or watch) for anybody wanting to wade through the mass of misdirection that exists in the world.
This all sounds quite groovy, but are there any suggestions on how I could go about implementing them into my daily pattern of thought? I wonder if perhaps an Anki deck would have any merit whatsoever in accomplishing this...
Another one: You see a way to do things that in theory might work better than what everyone else is doing, but in practice no one seems to use. Do you investigate it and consider exploiting it?
Example: You're trying to get karma on reddit. You notice that http://www.reddit.com/r/randomization/ has almost a million subscribers but no new submissions in the past two months. Do you think "hm, that's weird" and keep looking for a subreddit to submit your link in, or do you think "oh wow, karma feast!"
For each item, you might ask yourself: did you last use this habit...
Maybe it's worth a poll, if someone feels like creating one. I'm not sure how to make a multi-level poll and it probably would be too presumptuous of me to create 24 replies with one poll in each.
The Checklist Manifesto is very interesting about what goes into an excellent checklist rather than a casually constructed checklist. It's about institutional checklists rather than personal checklists, though.