I'm disappointed at how few of these comments, particularly the highly-voted ones, are about proposed solutions, or at least proposed areas for research. My general concern about the LW community is that it seems much more interested in the fun of debating and analyzing biases, rather than the boring repetitive trial-and-error of correcting them.
Anna's post lays out a particular piece of poor performance which is of core strategic value to pretty much everyone - how to identify and achieve your goals - and which, in my experience and that of many authors, can be greatly improved through study and practice. So I'm very frustrated by all the comments about the fact that we're just barely intelligent, and by the debates about the intelligence of the average person. It's as if Eliezer posted about the potential for AI to kill us all and people debated how it would choose to kill us instead of how to stop it from happening.
Sorry, folks, but compared to the self-help/self-development community, Less Wrong is currently UTTERLY LOSING at self-improvement and life optimization. Go spend an hour reading Merlin Mann's site and you'll learn way more instrumental rationality than you do here....
I've been disappointed in LessWrong too, and it's caused me to come here less and less frequently. I'm even talking about the lurking. I used to come here every other day, then every week, then it dropped to once a month. This...
I get the impression many people either didn't give a shit, or so despaired about their own ability to function better through any reasonable effort that they dismissed everything that came along. It used to make me really mad, or sad. I probably took it a little too personally, too, because I read a lot of EY's classic posts as inspiration not to fucking despair about what seemed like a permanently ruined future. "tsuyoku naritai" and "isshou kenmei" and "do the impossible" all said: look, people out there are working on much harder problems--there's probably a way up and out for you too. The sadness: I wanted other people to get at least that. The anger: a lot of LessWrongers not seeming to get the point.
On the other hand, I'm pleased with our OvercomingBias/LessWrong meetup group in NYC. I think we do a good job of helping each other, in person, with practical solutions to problems--with how we can all become really successful. ...
If there are (relative to LW) many good self-help sites and no good sites about rationality as such, that suggests to me LW should focus on rationality as such and leave self-help to the self-help sites. This is compatible with LW's members spending a lot of time on self-help sites and recommending them to each other in open threads.
My impression is that there are two good reasons to incorporate productivity techniques into LW, instead of aiming for a separate community specialized in epistemic rationality that complements self-help communities.
Our future depends on producing people who can both see what needs doing (wrt existential risk, and any other high-stakes issues), and actually do things. This claim seems far more probable than "our future depends on creating an FAI team", and than "our future depends on plan X" for any other specific plan X. A single community that teaches both, and that also discusses high-impact philanthropy, may help.
There seems to be a synergy between epistemic and instrumental rationality, in the sense that techniques for each give boosts to the other. Many self-help books, for example, spend much time discussing how to think through painful subjects instead of walling them off (instead of allowing ugh fields to clutter up your to-do list, or allowing rationalized "it's all your fault" reactions to clutter up your interpersonal relations). It would be nice to have a community that could see the whole picture here.
But part of my point is that LW isn't "focusing on rationality", or rather, it is focusing on fun theoretical discussions of rationality rather than practical exercises that are hard work to implement but actually make you more rational. The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
Hmm. The self-help / life hacking / personal development community may well be better than LW at focusing on practice, on concrete life-improvements, and on eliciting deep-seated motivation. But AFAICT these communities are not aiming at epistemic rationality in our sense, and are consequently not hitting it even as well as we are. LW, for all its faults, has had fair success at teaching folks how to think usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). It has done so by teaching such subskills as:
Instead, the sentiment is more, "Shit, none of us can do much about it directly. How 'bout we all get freaking rich and successful first!"
Well, I think that's the rational thing to do for the vast majority of people. Not only due to public good problems, but because if there's something bad about the world which affects many people negatively, it's probably hard to fix, or one of the many sufferers would already have fixed it. Whereas your life might not have been fixed just because you haven't tried yet. It's almost always a better use of your resources. Plus "money is the unit of caring", so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.
I suspect the empathy formed from face to face contact can be a really great motivator.
Agreed. Not just a motivator to help other people - but f2f contact is more inherently about doing, while web forums are more inherently about talking. In person it is much more natural to ask about someone's life and how it is going - which is where interventions happen.
Yet if we're intentional about it, I think we can keep it real here too.
Perhaps. I thi...
Interestingly, the people who seem most interested in the topic of instrumental rationality never seem to write a lot of posts here, compared to the people interested in epistemic rationality. Maybe that's because you're too busy "doing" to teach (or to ask good open questions), but I'm confident that's not true of all the I-Rationality crowd.
Of course, as an academic, I'm perfectly happy staying on the E-Rationality side.
Instrumental rationality is one of my primary interests here, but I don't post much -- the standard here is too high. All I have to offer is personal anecdotal evidence about various self-help / anti-akrasia techniques I tried on myself, and I always feel a bit guilty when posting them because unsubstantiated other-optimizing is officially frowned upon here. Attempting to extract any deep wisdom from these anecdotes would be generalizing from one example.
An acceptable way to post self-help on LW would be in the form of properly designed, properly conducted long-term studies of self-help techniques. However, designing and conducting such studies is a full-time job which ideally requires a degree in experimental psychology.
If that's true, we absolutely need to lower the bar for such posts. Three good sorts of posts that are not terribly difficult are: (1) a review of a good self-help book and what you personally took from it; (2) a few-sentence summary of an academic study on an income-boosting technique, a method for improving your driving safety, or other useful content, with a link to the same; or (3) a description of a self-intervention you tried and whose impacts you tracked, quantified-self style.
When someone says they have anecdotes but want data, I hear an opportunity for crowdsourcing.
Perhaps a community blog is the wrong tool for this? What if we had a tool that supported tracking rationalist intervention efficacy? People could post specific interventions and others could report their personal results. Then the tool would allow for sorting interventions by reported aggregate efficacy. Maybe even just a simple voting system?
That seems like it could be a killer app: it would lower the bar, encouraging newcomers and data-poor interventions to get posted and evaluated.
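The proposal above could start very small. Here is a minimal sketch of such a tool, assuming an intervention name and a self-reported efficacy score (say, -2 to +2) per user; the class name, score scale, and `min_reports` threshold are all my own illustrative choices, not anything proposed in the thread:

```python
from collections import defaultdict


class InterventionTracker:
    """Sketch: collect per-user efficacy reports for named
    interventions and rank interventions by mean reported score."""

    def __init__(self):
        # intervention name -> list of reported scores (e.g. -2..+2)
        self.reports = defaultdict(list)

    def report(self, intervention, score):
        """Record one user's self-reported result for an intervention."""
        self.reports[intervention].append(score)

    def ranked(self, min_reports=3):
        """Interventions with enough reports, best mean score first."""
        scored = [
            (name, sum(scores) / len(scores), len(scores))
            for name, scores in self.reports.items()
            if len(scores) >= min_reports
        ]
        return sorted(scored, key=lambda t: t[1], reverse=True)


tracker = InterventionTracker()
for score in (2, 1, 2):
    tracker.report("pomodoro", score)
for score in (0, -1, 1):
    tracker.report("cold showers", score)
print(tracker.ranked())
```

A real version would need to worry about selection effects (who bothers to report) and gaming of the votes, but even this much would turn scattered anecdotes into sortable aggregate data.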
I can think of at least 3 ways that people fail to make strategic, effective decisions.
(as the above post pointed out) it's difficult to analyze options (or even to come up with some of them), for any number of reasons: too many of them (and too little time), lack of information, unforeseeable secondary consequences, etc. One can do one's best in the most rational fashion and still come out with a wrong choice. That's unfortunate, but if this is the only kind of mistake I am making, I am not too worried. It's a matter of learning better heuristics, building better models, gathering more data... or, in the limit, admitting that there's a limit to how far human intelligence and limited time/resources can go, even when correctly applied to problems.
A second, more worrisome, mistake is not to even realize that one can step out of one's immediate reactions, stop whatever one's doing, and think about the rationality of it, and alternatives. This mistake differs from (1). As a hypothetical example, suppose the wannabe comedian generated a list of things he could do, and decided to watch the Garfield cartoon. His choice might be wrong, but it's a conscious, deliberate choice that h
A few years ago, Paul Graham wrote an essay[1] about type (3) failures which he referred to as type-B procrastination. I've found that just having a label helps me avoid or reduce the effect, e.g. "I could be productive and creative right now instead of wasting my time on type-B procrastination" or "I will give myself exactly this much type-B procrastination as a reward for good behavior, and then I will stop."
(Embarrassing aside: I hadn't looked at the essay for several years and only now realized that I've been mentally calling it type-A procrastination this whole time.)
EDIT: The essay goes on to link type-C procrastination with doing the impossible, yielding a nice example of how I-rationality and self-help are linked.
[1] Paul Graham, Good and Bad Procrastination
There's an important piece missing from the article's analysis.
As humans we are inherently social in nature.
We delegate a lot of our reasoning to the wider social group around us. This is more energy efficient.
The article asks why many people go through long training programs "to make money" without spending a few hours doing salary comparisons ahead of time. We do long training programs (e.g., college degrees) mostly because they are socially esteemed. This social esteem serves as a proxy for their worth, and it's typically information that has a lower personal cost to obtain than going and looking at salary surveys.
The reason we do so little systematic testing for ourselves is that we have trusted our wider social grouping to do it for us. I don't find a rational argument about the bungee-jump mechanism nearly as compelling evidence of safety as I do talking with an enthusiastic friend who has done it 20 times. If I were to learn about my car's braking mechanism in sufficient detail to convince myself of why it worked, I would never go anywhere. Instead, I see others whom I trust driving the car, and 'delegate' to them.
This is simply a heuristic. It doesn't always wo...
We delegate a lot of our reasoning to the wider social group around us.... the vast majority of the time it's a much more cost- and information-efficient way of doing things.
This strikes me as half right. Specifically: Yes, we often use social indicators to take the place of personal reasoning. And, yes, these indicators are better than nothing. But given the rapid (relative to the EEA) pace of change in e.g. what jobs pay well, what we know about how to avoid accidents, and what skills can boost your productivity (e.g., typing on computers is now important, and thus it's important to learn more than two-fingered typing), and given that social recommendations update fairly slowly, it seems that most on this site can do far better by adding some internet research and conscious thought to standard socially recommended productivity heuristics.
Most basically, because humans are only just on the cusp of general intelligence.
This is a point I've been thinking about a lot recently: that the time between the evolution of a species whose smartest members crossed the finish line into general intelligence, and today, is a blink of an eye in evolutionary terms, and that therefore we should expect to find we are roughly as stupid as it's possible to be while still having some of us smart enough to transform the world. You refer to it here in a way that suggests this is a well-understood point -- is it discussed more explicitly elsewhere?
It occurs to me that this is one reason we suffer from the "parochial intelligence scale" Eliezer complains about - that the difference in effect between being just barely at the point of having general intelligence and being slightly better than that is a lot, even if the difference in absolute capacity is slight.
I wonder how easy it would be to incorporate this point into my spiel for newcomers about why you should worry about AGI - what inferential distances am I missing?
Another example of attribution error: Why would Gimli think that Galadriel is beautiful?
To be fair, the races of Middle-Earth weren't created by evolution, so the criticism isn't fully valid. Ilúvatar gave the dwarves spirits but set them to sleep so that they wouldn't awaken before the elves. It's not unreasonable to assume that as he did so, he also made them admire elven beauty.
Another example of attribution error: Why would Gimli think that Galadriel is beautiful?
Why do humans think dolphins are beautiful?
Is a human likely to think that one specific dolphin is so beautiful as to be almost worth fighting a duel about it being the most beautiful?
This seems a bad example to use given the context. If you are trying to convince people that greater than human intelligence will give AIs an insurmountable advantage over even the smartest humans then drawing attention to a supposed idiot who became the most powerful man in the world for 8 years raises the question of whether you either don't know what intelligence is or vastly overestimate its ability to grant real world power.
I woke up this morning with a set of goals. After reading this post, my goals abruptly pivoted: I had a strong desire to compose a reply. I like this post and think it is an excellent and appropriate reply to Lionhearted's (also a nice post), and would have liked to proffer some different perspectives. Realizing that this was an exciting but transient passion, I didn't allow my goals to be updated and persisted in my previous plans. An hour or two into my morning's work, I finally recalled the motivation behind my original goals and was grateful. It took some time, though, before I felt emotionally that I had chosen the right set of goals for my morning. Working through those transient periods of no-emotional-reward is tough. You need to have faith in the goal decisions of previous selves, but not too much.
The fact that we so blatantly fail to optimize for using reason to solve our problems, and so effortlessly use it to rationalize our actions, is another strong piece of evidence for the thesis that reasoning evolved primarily for arguing.
Do you agree with (a)-(g) above?
There's a reason why we don't think strategically, and it's actually a very good reason -- and, unfortunately, the reason we will never have an innately strategic mentality: cost. Specifically, the cost of time: it's always cheaper, in time, to make a correct lucky guess on the first try than to work out a solution properly over a significant length of time.
Imagine there were such a thing as a lucky charm, and that by holding it you were, say, 70% likely to get the right answer on your calculus test without even needing to completely understand the problem. In this situation, taking the calculus test would take you just a few minutes, and you'd still score well enough to pass the class. In fact, you could take the entire year's worth of tests, perhaps, in the same amount of time it takes the rest of the students to work their way through the first one, and still most likely pass. Your lucky charm didn't give you the best grade, but it let you quickly solve all the problems you needed to solve, and now you can spend the rest of the year taking other classes.
Well, the thing is, the human mind has evolved just such a "lucky charm", specifically our...
Part of it is that achieving success through means other than the standard things you're supposed to succeed by doing well at can feel like cheating, possibly for some sort of signaling reason. Part of it is that there are serious psychological and social costs not only to doing things other people don't do, but to doing things for different kinds of reasons. Part of it is that you're suggesting the benefits of what you call being strategic are larger than they really are, by focusing on available cases where it changed someone's life and ignoring a great many forgettable, hard-to-pinpoint cases where it was just a time/energy sink; or where merely considering it was a time/energy sink; or where there was good reason to believe the relevant strategy had already been taken into account by whatever caused you to be doing the default thing; or where there seemed to be such good reason, absent an appreciation of the world's madness.
Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out.
I think you're underestimating the average person.
I might well be. Given the value of empiricism-type virtues, does anyone want to go test it (by creating an operationalized notion of what it is to understand the heuristics, then randomly choosing several people, e.g. from your local grocery store, and testing it on them), and let us know the results?
Jasen Murray and Marcello and I tried this the other day concerning what portion of native English speaking American adults know what a "sphere" is ("a ball" or "orange-shaped" count; "a circle" doesn't), and found that of the five we sampled, three knew and two didn't.
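As an aside on how much a five-person sample can actually tell us: a quick confidence-interval sketch (my own addition, using the standard Wilson score interval, not anything computed in the thread) shows that observing 3 of 5 is consistent with a very wide range of population proportions:

```python
import math


def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial
    proportion -- a standard way to quantify how loosely a
    small sample pins down the true rate."""
    phat = successes / n
    denom = 1 + z**2 / n
    center = (phat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half


lo, hi = wilson_interval(3, 5)
print(f"3 of 5 knew: plausible population range {lo:.0%} to {hi:.0%}")
```

With n=5 the interval spans roughly a quarter to nearly nine-tenths of the population, so the grocery-store experiment would need considerably more than five people before it could confirm or refute the "perhaps 5%" estimate with any force.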
I once taught middle- and high-school teachers who wanted to get certified to teach math. I was a TA for a class in geometry (basically 8th or 9th grade Euclidean geometry.) I had an incredibly hard time explaining to them that "draw a circle with center point A" means that A goes in the middle of the circle, instead of on the boundary. As I recall, it took more than a week of daily problem sessions before they got that.
Of course, I may have been a bad teacher. But I was trying.
Reply to: A "Failure to Evaluate Return-on-Time" Fallacy
Lionhearted writes:
Why will a randomly chosen eight-year-old fail a calculus test? Because most possible answers are wrong, and there is no force to guide him to the correct answers. (There is no need to postulate a "fear of success"; most ways of writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.)
Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1] My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective.
To be more specific: there are clearly at least some limited senses in which we have goals. We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.
But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically:
.... or carry out any number of other useful techniques. Instead, we mostly just do things. We act from habit; we act from impulse or convenience when primed by the activities in front of us; we remember our goal and choose an action that feels associated with our goal. We do any number of things. But we do not systematically choose the narrow sets of actions that would effectively optimize for our claimed goals, or for any other goals.
Why? Most basically, because humans are only just on the cusp of general intelligence. Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out. That is not at all the same as the ability to automatically implement these heuristics. Our verbal, conversational systems are much better at abstract reasoning than are the motivational systems that pull our behavior. I have enough abstract reasoning ability to understand that I’m safe on the glass floor of a tall building, or that ice cream is not healthy, or that exercise furthers my goals... but this doesn’t lead to an automatic updating of the reward gradients that, absent rare and costly conscious overrides, pull my behavior. I can train my automatic systems, for example by visualizing ice cream as disgusting and artery-clogging and yucky, or by walking across the glass floor often enough to persuade my brain that I can’t fall through the floor... but systematically training one’s motivational systems in this way is also not automatic for us. And so it seems far from surprising that most of us have not trained ourselves in this way, and that most of our “goal-seeking” actions are far less effective than they could be.
Still, I’m keen to train. I know people who are far more strategic than I am, and there seem to be clear avenues for becoming far more strategic than they are. It also seems that having goals, in a much more pervasive sense than (1)-(3), is part of what “rational” should mean, will help us achieve what we care about, and hasn't been taught in much detail on LW.
So, to second Lionhearted's questions: does this analysis seem right? Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out? How did you do it? Do you agree with (a)-(h) above? Do you have some good heuristics to add? Do you have some good ideas for how to train yourself in such heuristics?
[1] For example, why do many people go through long training programs “to make money” without spending a few hours doing salary comparisons ahead of time? Why do many who type for hours a day remain two-finger typists, without bothering with a typing tutor program? Why do people spend their Saturdays “enjoying themselves” without bothering to track which of their habitual leisure activities are *actually* enjoyable? Why do even unusually numerate people fear illness, car accidents, and bogeymen, and take safety measures, but not bother to look up statistics on the relative risks? Why do most of us settle into a single, stereotyped mode of studying, writing, social interaction, or the like, without trying alternatives to see if they work better -- even when such experiments as we have tried have sometimes given great boosts?