Humans are not automatically strategic

Reply to: A "Failure to Evaluate Return-on-Time" Fallacy

Lionhearted writes:

[A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.

A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995....

I’m curious as to why.

Why will a randomly chosen eight-year-old fail a calculus test?  Because most possible answers are wrong, and there is no force to guide him to the correct answers.  (There is no need to postulate a “fear of success”; most ways of writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.)

Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1]  My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective. 

To be more specific: there are clearly at least some limited senses in which we have goals.  We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.

But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out.  We do not automatically:

  • (a) Ask ourselves what we’re trying to achieve; 
  • (b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress; 
  • (c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal; 
  • (d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven’t worked for us in the past); 
  • (e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work; 
  • (f) Focus most of the energy that *isn’t* going into systematic exploration on the methods that work best (a toy sketch of (d)-(f) follows this list);
  • (g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;
  • (h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;

.... or carry out any number of other useful techniques.  Instead, we mostly just do things.  We act from habit; we act from impulse or convenience when primed by the activities in front of us; we remember our goal and choose an action that feels associated with our goal.  We do any number of things.  But we do not systematically choose the narrow sets of actions that would effectively optimize for our claimed goals, or for any other goals.
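
To make (d)-(f) concrete, here is a minimal sketch, assuming nothing beyond the post itself: keep a crude tally of how well each candidate strategy has worked, reserve a small fraction of trials for exploration, and focus the remaining effort on the current best performer. The strategy names and the 10% exploration rate are illustrative assumptions, not prescriptions.

```python
import random

# Hypothetical strategies our aspiring comedian might test (names invented
# for illustration -- they are not from the post).
results = {"open_mics": [], "daily_writing": [], "studying_good_comics": []}

def choose_strategy(epsilon=0.1):
    """Pick a strategy: mostly exploit the best performer so far (f),
    but keep systematically exploring alternatives some of the time (e)."""
    untried = [s for s, scores in results.items() if not scores]
    if untried:
        return random.choice(untried)        # try every method at least once
    if random.random() < epsilon:
        return random.choice(list(results))  # (e) systematic exploration
    # (f) focus the rest of the energy on what has worked best on average
    return max(results, key=lambda s: sum(results[s]) / len(results[s]))

def record_outcome(strategy, score):
    """(d)/(e): actually tally which strategies have and haven't worked."""
    results[strategy].append(score)
```

The algorithm is beside the point; the contrast is. Acting from habit corresponds to never keeping even this crude a tally, and so never noticing which methods are outperforming the others.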

Why?  Most basically, because humans are only just on the cusp of general intelligence.  Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out.  That is not at all the same as the ability to automatically implement these heuristics.  Our verbal, conversational systems are much better at abstract reasoning than are the motivational systems that pull our behavior.  I have enough abstract reasoning ability to understand that I’m safe on the glass floor of a tall building, or that ice cream is not healthy, or that exercise furthers my goals... but this doesn’t lead to an automatic updating of the reward gradients that, absent rare and costly conscious overrides, pull my behavior.  I can train my automatic systems, for example by visualizing ice cream as disgusting and artery-clogging and yucky, or by walking across the glass floor often enough to persuade my brain that I can’t fall through the floor... but systematically training one’s motivational systems in this way is also not automatic for us.  And so it seems far from surprising that most of us have not trained ourselves in this way, and that most of our “goal-seeking” actions are far less effective than they could be.

Still, I’m keen to train.  I know people who are far more strategic than I am, and there seem to be clear avenues for becoming far more strategic than they are.  It also seems that having goals, in a much more pervasive sense than (1)-(3), is part of what “rational” should mean, will help us achieve what we care about, and hasn't been taught in much detail on LW.

So, to second Lionhearted's questions: does this analysis seem right?  Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out?  How did you do it?  Do you agree with (a)-(h) above?  Do you have some good heuristics to add?  Do you have some good ideas for how to train yourself in such heuristics?

 

[1] For example, why do many people go through long training programs “to make money” without spending a few hours doing salary comparisons ahead of time?  Why do many who type for hours a day remain two-finger typists, without bothering with a typing tutor program?  Why do people spend their Saturdays “enjoying themselves” without bothering to track which of their habitual leisure activities are *actually* enjoyable?  Why do even unusually numerate people fear illness, car accidents, and bogeymen, and take safety measures, but not bother to look up statistics on the relative risks? Why do most of us settle into a single, stereotyped mode of studying, writing, social interaction, or the like, without trying alternatives to see if they work better -- even when such experiments as we have tried have sometimes given great boosts?

Comments (266)

I'm disappointed at how few of these comments, particularly the highly-voted ones, are about proposed solutions, or at least proposed areas for research. My general concern about the LW community is that it seems much more interested in the fun of debating and analyzing biases, rather than the boring repetitive trial-and-error of correcting them.

Anna's post lays out a particular piece of poor performance which is of core strategic value to pretty much everyone - how to identify and achieve your goals - and which, according to me and many people and authors, can be greatly improved through study and practice. So I'm very frustrated by all the comments about the fact that we're just barely intelligent and debates about the intelligence of the general person. It's like if Eliezer posted about the potential for AI to kill us all and people debated how they would choose to kill us instead of how to stop it from happening.

Sorry, folks, but compared to the self-help/self-development community, Less Wrong is currently UTTERLY LOSING at self-improvement and life optimization. Go spend an hour reading Merlin Mann's site and you'll learn way more instrumental rationality than you do here. Or take a GTD class, or read a top-rated time-management book on Amazon.

Talking about biases is fun, working on them is hard. Do Less Wrongers want to have fun, or become super-powerful and take over (or at least save) the world? So far, as far as I can tell, LW is much worse than the Quantified Self & time/attention-management communities (Merlin Mann, Zen Habits, GTD) at practical self-improvement. Which is why I don't read it very often. When it becomes a rationality dojo instead of a club for people who like to geek out about biases, I'm in.

I've been disappointed in LessWrong too, and it's caused me to come here more and more infrequently--and I'm even talking about the lurking. I used to come here every other day, then every week, then it dropped to once a month.

I get the impression many people either didn't give a shit, or so despaired about their own ability to function better through any reasonable effort that they dismissed everything that came along. It used to make me really mad, or sad. Probably I took it a little too personally too, because I read a lot of EY's classic posts as inspiration not to fucking despair about what seemed like a permanently ruined future. "tsuyoku naritai" and "isshou kenmei" and "do the impossible" and all that said: look, people out there are working on much harder problems--there's probably a way up and out for you too. The sadness was that I wanted other people to get at least that; the anger was at a lot of LessWrongers not seeming to get the point.

On the other hand, I'm pleased with our OvercomingBias/LessWrong meetup group in NYC. I think we do a good job in-person helping other members with practical solutions to problems--how we can all become really successful. Maybe it's because a lot of our members have integrated ideas from QS, Paleo, CrossFit, Seth Roberts, and PJ Eby. We've counseled members on employment opportunities, how to deal with crushing student and consumer debts, how to make money, and nutrition. By now we all tend to look down on the kind of despairing analysis that's frequently upvoted here on LW. We talk about FAI sparingly these days, unless someone has a particular insight we think would be valuable. Instead, the sentiment is more, "Shit, none of us can do much about it directly. How 'bout we all get freaking rich and successful first!"

I suspect the empathy formed from face-to-face contact can be a really great motivator. You hear someone's story from their own mouth and think, "Shit man, you're cool, but you're in bad shape right now. Can we all figure out how to help you out?" Little by little people relate, even the successful ones--we've all been there in small ways. This eventually moves towards, "Can we think about how to help all of us out?" It's not about delivering a nice tight set of paragraphs with appropriate references and terminology. When we see each other again, we care that our proposed solutions and ideas are going somewhere, because we care about the people. All the EvPsych speculation and calibration admonitions can go to hell if it doesn't fucking help. But if it does, use it, use it to help people, use it to help yourself, use it to help the future light cone of the human world.

Yet if we're intentional about it I think we can keep it real here too. We can give a shit. Okay, maybe I don't know that. Maybe it takes looking for and rewarding the useful insights, and then coming back later and talking about how the insights were useful. Maybe it takes getting a little more personal. Maybe I and my suggestions are full of shit but, hell, I want to figure this out. I used to talk about LessWrong with pride and urge people to come check it out because the posts were great, the commenter/comment scheme was great--it was a shining example of what the rest of the intellectually discursive interwebs could be like. And, man, I'd like it to be that way again.

So damn, what do y'all think?

If there are (relative to LW) many good self-help sites and no good sites about rationality as such, that suggests to me LW should focus on rationality as such and leave self-help to the self-help sites. This is compatible with LW's members spending a lot of time on self-help sites that they recommend each other in open threads.

My impression is that there are two good reasons to incorporate productivity techniques into LW, instead of aiming for a separate community specialized in epistemic rationality that complements self-help communities.

  1. Our future depends on producing people who can both see what needs doing (wrt existential risk, and any other high-stakes issues), and can actually do things. This seems far higher probability than “our future depends on creating an FAI team” and than “our future depends on plan X” for any other specific plan X. A single community that teaches both, and that also discusses high-impact philanthropy, may help.

  2. There seems to be a synergy between epistemic and instrumental rationality, in the sense that techniques for each give boosts to the other. Many self-help books, for example, spend much time discussing how to think through painful subjects rather than walling them off (instead of allowing ugh fields to clutter up your to-do list, or allowing rationalized “it’s all your fault” reactions to clutter up your interpersonal relations). It would be nice to have a community that could see the whole picture here.

Instrumental rationality and productivity techniques and self-help are three different though overlapping things, though the exact difference is hard to pinpoint. In many cases it can be rational to learn to be more productive or more charismatic, but productivity and charisma don't thereby become kinds of rationality. Your original post probably counts as instrumental rationality in that it's about how to implement better general decision algorithms. In general, LW will probably have much more of an advantage relative to other sites in self-help that's inspired by the basic logic/math of optimal behavior than in other kinds of self-help.

Re: 1, obviously one needs both of those things, but the question is which is more useful at the margin. The average LWer will go through life with some degree of productivity/success/etc even if such topics never get discussed again, and it seems a lot easier to get someone to allocate 2% rather than 1% of their effort to "what needs doing" than to double their general productivity.

I feel like noting that none of the ten most recent posts are about epistemic rationality; there's nothing that I could use to get better at determining, just to name some random examples, whether nanotech will happen in the next 50 years, or whether egoism makes more philosophical sense than altruism.

On the other hand, I think a strong argument for having self-help content is that it draws people here.

But part of my point is that LW isn't "focusing on rationality", or rather, it is focusing on fun theoretical discussions of rationality rather than practical exercises that are hard work to implement but actually make you more rational. The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.

The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.

Hmm. The self-help / life hacking / personal development community may well be better than LW at focusing on practice, on concrete life-improvements, and on eliciting deep-seated motivation. But AFAICT these communities are not aiming at epistemic rationality in our sense, and are consequently not hitting it even as well as we are. LW, for all its faults, has had fair success at teaching folks how to think usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). It has done so by teaching such subskills as:

  • Never attempting to prove empirical facts from definitions;
  • Never saying or implying “but decent people shouldn’t believe X, so X is false”;
  • Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
  • Asking what potential evidence would move you, or would move the other person;
  • Not expecting all sides of a policy discussion to line up;
  • Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.

By all means, let's copy the more effective, doing-oriented aspects of life hacking communities. But let’s do so while continuing to distinguish epistemic rationality as one of our key goals, since, as Steven notes, this goal seems almost unique to LW, is achieved here more than elsewhere, and is necessary for tackling e.g. existential risk reduction.

I'm surprised that you seem to be saying that LW shouldn't get more into instrumental rationality! That would seem to imply that you think the good self-help sites are doing enough. I really don't agree with that. I think LWers are uniquely suited to add to the discussion. More bright minds taking a serious, critical look at all this--and, importantly, urgently looking for solutions--has a strong chance of making a significant dent in things.

The major point of the GGP, though, is not about what's being discussed, but how. He's bemoaning that when topics related to self-improvement come up, we completely blow it! A lot of ineffectual discussion gets upvoted. I'm guilty of this too, but this little tirade's convinced me that we can do better, and that it's worth thinking about how to do better.

Instead, the sentiment is more, "Shit, none of us can do much about it directly. How 'bout we all get freaking rich and successful first!"

Well, I think that's the rational thing to do for the vast majority of people. Not only due to public good problems, but because if there's something bad about the world which affects many people negatively, it's probably hard to fix, or one of the many sufferers would have fixed it already. Whereas your own life might still be unfixed simply because you haven't tried yet, so working on it is almost always a better use of your resources. Plus "money is the unit of caring", so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.
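
The arithmetic behind that claim is simple to sketch. The numbers below are made-up assumptions for illustration (a $50/hr earner, a charity that can hire labor at $15/hr), not data about any real charity:

```python
# Hypothetical comparison: donate an hour's wages vs. volunteer the hour.
# Both figures are illustrative assumptions, not real data.
my_hourly_wage = 50.0      # assumed marginal wage
charity_labor_cost = 15.0  # assumed cost for the charity to hire an hour of labor

hours_bought = my_hourly_wage / charity_labor_cost
print(f"Donating one hour's wages funds {hours_bought:.1f} hours of charity labor")
# -> about 3.3 hours, versus the 1 hour you could volunteer directly
```

On these assumptions, donating dominates volunteering whenever your wage exceeds the charity's cost of labor; the comparison flips for anyone whose direct skills are worth more to the cause than their salary.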

I suspect the empathy formed from face to face contact can be a really great motivator.

Agreed. Not just a motivator to help other people - but f2f contact is more inherently about doing, while web forums are more inherently about talking. In person it is much more natural to ask about someone's life and how it is going - which is where interventions happen.

Yet if we're intentional about it I think we can keep it real here too.

Perhaps. I think it will need a lot of intentionality, and a combination of in-person meetups and online discussions. I've thought about this as a "practicing life" support group; Eliezer's term is "rationality dojo". Either way, the key is to look at rationality and success just like any other skill: you learn by breaking it down into practicable components and then practicing with intention and feedback, ideally in a social group. The net can be used to track the skill exercises, comment on alternative solutions to various problems, rank the leverage of the interventions, and so forth.

But the key from my perspective is the website would be more of a database rather than an interaction forum. "This is where you go to find your local chapter, and a list of starting exercises / books to work through together / metrics / etc"
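
A minimal sketch of what such database records might look like; the field names here are my guesses at what "chapters / exercises / metrics" could mean, not a spec:

```python
from dataclasses import dataclass, field

@dataclass
class Exercise:
    name: str         # e.g. a calibration drill or a goal-factoring exercise
    description: str  # how to run it as a group
    metric: str       # how progress on the skill is measured

@dataclass
class Chapter:
    city: str
    contact: str
    # starting exercises / books to work through together
    curriculum: list[Exercise] = field(default_factory=list)
```

The design choice worth noting is exactly the one above: records and metrics you can query, rather than threads you can reply to.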

I'm new here at LW -- are there any chapters outside of the New York meetup?

If not, is there a LW mechanism to gather location info from interested participants to start new ones? Top-level post and a Wiki page?

I created a Wiki to kick things off, but as a newb I think I can't create an article yet, and quite frankly I'm not confident enough that that's the right way to go about it to do it even if I could. So if you've been here longer and think that's the right way, please do it and direct LWers to the Wiki page.

http://wiki.lesswrong.com/wiki/LocalChapters

"money is the unit of caring", so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.

This is false. Giving food directly to starving people (however it is obtained) is much better than throwing financial aid at a nation or institution and hoping that it manages to "trickle down" past all the middle-men and career politicians/activists and is eventually used to purchase food that actually gets to the people who need it.

The only reason sayings like the above are so common and accepted is that people assume there are no methods of Direct Action that will directly and immediately alleviate suffering, and are comparing "throwing money at it" to just petitioning, marching, and lengthy talks/debates. Yes, in those instances, years of political lobbying may do a lot less than using that lobbying money to directly buy necessities for the needy, or donating it to an organization that does (after taking a cut for the cost of functioning, and to pay themselves). But compared to actually getting the necessary goods and services directly to the needy (and teaching them methods for doing so themselves), it doesn't hold up.

Another way of comparing is to ask: "What if everyone (or even most people) did what people said was best?" Compare the rule "donate money to institutions you trust (after working up to the point where you feel wealthy enough to do so)" with "directly apply your time and energy in volunteer work and direct action". One would lead to immediate relief and learning for those in need; the other is a long-term hope that the money would work its way through bureaucracies, survive the continual shaving of funds for institutional overhead and employee payment, and eventually get used to buy the necessities people need (hoping that everything they need can be bought, and that they haven't starved or been exposed to the elements enough to kill them).

Giving food directly to starving people (however it is obtained) is much better than throwing financial aid at a nation or institution

What's your estimate of how much money and how much time I would have to spend to deliver $100 of food directly to a starving person?
Does that estimate change if 50% of my neighbors are also doing this?

Actually, my point is that questions like that already guide discussion away from alternative solutions which may be capable of making a real impact (outside of needing to "become rich" first, or risking the cause getting lost in bureaucracy and profiteering). Take a group like Food Not Bombs, for instance: they diminish the "money spent" part of the equation by dumpstering and getting food donations. The time involved would of course depend on where you live, on how easily you can find corporate food waste (sometimes physically guarded by locks, wire, and even men with guns to enforce artificial scarcity), and on transporting it to the people who need it. More people joining in would of course mean more food must be procured and more area covered in search of food waste to be reclaimed. A fortunate thing is that the more people pitch in, the less time it takes to do large amounts of labor that benefits everyone; thus the term mutual aid.

I'm not even taking the cost of the food into consideration. I'm assuming there's this food sitting here... perhaps as donations, perhaps by dumpstering, perhaps by theft, whatever. What I was trying to get a feel for is your estimate of the costs of individuals delivering that food to where it needs to go. But it sounds like you endorse people getting together in groups in order to do this more efficiently, as long as they don't become bureaucratic institutions in the process, so that addresses my question. Thanks.