Reply to: A "Failure to Evaluate Return-on-Time" Fallacy

Lionhearted writes:

[A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.

A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995....

I’m curious as to why.

Why will a randomly chosen eight-year-old fail a calculus test?  Because most possible answers are wrong, and there is no force to guide him to the correct answers.  (There is no need to postulate a “fear of success”; most ways of writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.)

Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1]  My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective. 

To be more specific: there are clearly at least some limited senses in which we have goals.  We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.

But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out.  We do not automatically:

  • (a) Ask ourselves what we’re trying to achieve; 
  • (b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress; 
  • (c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal; 
  • (d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven’t worked for us in the past); 
  • (e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work; 
  • (f) Focus most of the energy that *isn’t* going into systematic exploration, on the methods that work best;
  • (g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;
  • (h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;

.... or carry out any number of other useful techniques.  Instead, we mostly just do things.  We act from habit; we act from impulse or convenience when primed by the activities in front of us; we remember our goal and choose an action that feels associated with our goal.  We do any number of things.  But we do not systematically choose the narrow sets of actions that would effectively optimize for our claimed goals, or for any other goals.

Why?  Most basically, because humans are only just on the cusp of general intelligence.  Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out.  That is not at all the same as the ability to automatically implement these heuristics.  Our verbal, conversational systems are much better at abstract reasoning than are the motivational systems that pull our behavior.  I have enough abstract reasoning ability to understand that I’m safe on the glass floor of a tall building, or that ice cream is not healthy, or that exercise furthers my goals... but this doesn’t lead to an automatic updating of the reward gradients that, absent rare and costly conscious overrides, pull my behavior.  I can train my automatic systems, for example by visualizing ice cream as disgusting and artery-clogging and yucky, or by walking across the glass floor often enough to persuade my brain that I can’t fall through the floor... but systematically training one’s motivational systems in this way is also not automatic for us.  And so it seems far from surprising that most of us have not trained ourselves in this way, and that most of our “goal-seeking” actions are far less effective than they could be.

Still, I’m keen to train.  I know people who are far more strategic than I am, and there seem to be clear avenues for becoming far more strategic than they are.  It also seems that having goals, in a much more pervasive sense than (1)-(3), is part of what “rational” should mean, will help us achieve what we care about, and hasn't been taught in much detail on LW.

So, to second Lionhearted's questions: does this analysis seem right?  Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out?  How did you do it?  Do you agree with (a)-(h) above?  Do you have some good heuristics to add?  Do you have some good ideas for how to train yourself in such heuristics?


[1] For example, why do many people go through long training programs “to make money” without spending a few hours doing salary comparisons ahead of time?  Why do many who type for hours a day remain two-finger typists, without bothering with a typing tutor program?  Why do people spend their Saturdays “enjoying themselves” without bothering to track which of their habitual leisure activities are *actually* enjoyable?  Why do even unusually numerate people fear illness, car accidents, and bogeymen, and take safety measures, but not bother to look up statistics on the relative risks? Why do most of us settle into a single, stereotyped mode of studying, writing, social interaction, or the like, without trying alternatives to see if they work better -- even when such experiments as we have tried have sometimes given great boosts?

277 comments

I'm disappointed at how few of these comments, particularly the highly-voted ones, are about proposed solutions, or at least proposed areas for research. My general concern about the LW community is that it seems much more interested in the fun of debating and analyzing biases, rather than the boring repetitive trial-and-error of correcting them.

Anna's post lays out a particular piece of poor performance which is of core strategic value to pretty much everyone - how to identify and achieve your goals - and which, according to me and many people and authors, can be greatly improved through study and practice. So I'm very frustrated by all the comments about the fact that we're just barely intelligent and debates about the intelligence of the general person. It's like if Eliezer posted about the potential for AI to kill us all and people debated how they would choose to kill us instead of how to stop it from happening.

Sorry, folks, but compared to the self-help/self-development community, Less Wrong is currently UTTERLY LOSING at self-improvement and life optimization. Go spend an hour reading Merlin Mann's site and you'll learn way more instrumental rationality than you do here....

I've been disappointed in LessWrong too, and it's caused me to come here more and more infrequently. I'm even talking about the lurking. I used to come here every other day, then every week, then it dropped to once a month. This

I get the impression many people either didn't give a shit, or despaired so much about their own ability to function better through any reasonable effort that they dismissed everything that came along. It used to make me really mad, or sad. Probably I took it a little too personally too, because I read a lot of EY's classic posts as inspiration not to fucking despair about what seemed like a permanently ruined future. "tsuyoku naritai" and "isshou kenmei" and "do the impossible" and all that said, look, people out there are working on much harder problems--there's probably a way up and out for you too. The sadness: I wanted other people to get at least that, and the anger--a lot of LessWrongers not seeming to get the point.

On the other hand, I'm pleased with our OvercomingBias/LessWrong meetup group in NYC. I think we do a good job in-person helping other members with practical solutions to problems--how we can all become really successful. ...

If there are (relative to LW) many good self-help sites and no good sites about rationality as such, that suggests to me LW should focus on rationality as such and leave self-help to the self-help sites. This is compatible with LW's members spending a lot of time on self-help sites that they recommend each other in open threads.

My impression is that there are two good reasons to incorporate productivity techniques into LW, instead of aiming for a separate community specialized in epistemic rationality that complements self-help communities.

  1. Our future depends on producing people who can both see what needs doing (wrt existential risk, and any other high-stakes issues), and can actually do things. This seems far higher probability than “our future depends on creating an FAI team” and than “our future depends on plan X” for any other specific plan X. A single community that teaches both, and that also discusses high-impact philanthropy, may help.

  2. There seems to be a synergy between epistemic and instrumental rationality, in the sense that techniques for each give boosts to the other. Many self-help books, for example, spend much time discussing how to think through painful subjects instead of walling them off (instead of allowing ugh fields to clutter up your to do list, or allowing rationalized “it’s all your fault” reactions to clutter up your interpersonal relations). It would be nice to have a community that could see the whole picture here.

Instrumental rationality and productivity techniques and self-help are three different though overlapping things, though the exact difference is hard to pinpoint. In many cases it can be rational to learn to be more productive or more charismatic, but productivity and charisma don't thereby become kinds of rationality. Your original post probably counts as instrumental rationality in that it's about how to implement better general decision algorithms. In general, LW will probably have much more of an advantage relative to other sites in self-help that's inspired by the basic logic/math of optimal behavior than in other kinds of self-help.

Re: 1, obviously one needs both of those things, but the question is which is more useful at the margin. The average LWer will go through life with some degree of productivity/success/etc. even if such topics never get discussed again, and it seems a lot easier to get someone to allocate 2% rather than 1% of their effort to "what needs doing" than to double their general productivity.

I feel like noting that none of the ten most recent posts are about epistemic rationality; there's nothing that I could use to get better at determining, just to name some random examples, whether nanotech will happen in the next 50 years, or whether egoism makes more philosophical sense than altruism. On the other hand, I think a strong argument for having self-help content is that it draws people here.

But part of my point is that LW isn't "focusing on rationality", or rather, it is focusing on fun theoretical discussions of rationality rather than practical exercises that are hard to implement but actually make you more rational. The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.

The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.

Hmm. The self-help / life hacking / personal development community may well be better than LW at focussing on practice, on concrete life-improvements, and on eliciting deep-seated motivation. But AFAICT these communities are not aiming at epistemic rationality in our sense, and are consequently not hitting it even as well as we are. LW, for all its faults, has had fair success at teaching folks how to think usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). It has done so by teaching such subskills as:

  • Never attempting to prove empirical facts from definitions;
  • Never saying or implying “but decent people shouldn’t believe X, so X is false”;
  • Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
  • Asking what potential evidence would move you, or would move the other person;
  • Not expecting all sides of a policy di
...
Could you elaborate on what you mean by that claim, or why you believe it? I love most of your recent comments, but on this point my impression differs. Yes, folks often learn more from practice, exercises, and deep-seated motivation than from having fun discussions. Yes, some self-help communities are better than LW at focussing on practice and life-improvement. But, AFAICT: no, that doesn’t mean these communities do more to boost their participants’ epistemic rationality.

LW tries to teach folks skills for thinking usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). And LW, for all its flaws, seems to have had a fair amount of success in teaching its longer-term members (judging from my discussions with many such, in person and online) such skills as:

  • Never attempting to prove empirical facts from definitions;
  • Never saying or implying “but decent people shouldn’t believe X, so X is false”;
  • Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
  • Asking what potential evidence would move you, or would move the other person;
  • Not expecting all sides of a policy discussion to line up;
  • Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.

Do you mean: (1) self-help sites are more successful than LW at teaching the above, and similar, subskills; (2) the above subskills do not in fact boost folks’ ability to think non-nonsensically about abstract and tricky issues; or (3) LW may better boost folks’ ability to think through abstract issues, but that ability should not be called “rationality”?
I'm surprised that you seem to be saying that LW shouldn't get more into instrumental rationality! That would seem to imply that you think the good self-help sites are doing enough. I really don't agree with that. I think LWers are uniquely suited to add to the discussion. More bright minds taking a serious, critical look at all things, and, importantly, urgently looking for solutions contains a strong possibility of making a significant dent in things. The major point, though, of GGP is not about what's being discussed, but how. He's bemoaning that when topics related to self-improvement come up, we completely blow it! A lot of ineffectual discussion gets upvoted. I'm guilty of this too, but this little tirade's convinced me that we can do better, and that it's worth thinking about how to do better.

Instead, the sentiment is more, "Shit, none of us can do much about it directly. How 'bout we all get freaking rich and successful first!"

Well, I think that's the rational thing to do for the vast majority of people. Not only due to public good problems, but because if there's something bad about the world which affects many people negatively, it's probably hard to fix, or one of the many sufferers would have fixed it already. Whereas your life might not have been fixed just because you haven't tried yet. It's almost always a better use of your resources. Plus "money is the unit of caring", so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.

I suspect the empathy formed from face to face contact can be a really great motivator.

Agreed. Not just a motivator to help other people - but f2f contact is more inherently about doing, while web forums are more inherently about talking. In person it is much more natural to ask about someone's life and how it is going - which is where interventions happen.

Yet if we're intentional about it I think we can keep it real here too.

Perhaps. I thi...

I'm new here at LW -- are there any chapters outside of the New York meetup? If not, is there a LW mechanism to gather location info from interested participants to start new ones? Top-level post and a Wiki page? I created a Wiki to kick things off, but as a newb I think I can't create an article yet, and quite frankly I'm not confident enough that that's the right way to go about it to do it even if I could. So if you've been here longer and think that's the right way, please do it and direct LWers to the Wiki page.
Only hoping I'm parsing this ramble correctly, but I agree if you mean to say: we have plenty of people asking "Why?" but we need to put a lot more effort into asking "What are we going to do about it?"
To people who go to meetups in other parts of the world: are they all like this? How do they vary in terms of satisfaction and progress in achieving goals?

Interestingly, the people who seem most interested in the topic of instrumental rationality never seem to write a lot of posts here, compared to the people interested in epistemic rationality. Maybe that's because you're too busy "doing" to teach (or to ask good open questions), but I'm confident that's not true of all the I-Rationality crowd.

Of course, as an academic, I'm perfectly happy staying on the E-Rationality side.

Instrumental rationality is one of my primary interests here, but I don't post much -- the standard here is too high. All I have to offer is personal anecdotal evidence about various self-help / anti-akrasia techniques I tried on myself, and I always feel a bit guilty when posting them because unsubstantiated other-optimizing is officially frowned upon here. Attempting to extract any deep wisdom from these anecdotes would be generalizing from one example.

An acceptable way to post self-help on LW would be in the form of properly designed, properly conducted long-term studies of self-help techniques. However, designing and conducting such studies is a full-time job which ideally requires a degree in experimental psychology.

If that's true, we absolutely need to lower the bar for such posts. Three good sorts of posts that are not terribly difficult are: (1) a review of a good self-help book and what you personally took from it; (2) a few-sentence summary of an academic study on an income-boosting technique, a method for improving your driving safety, or other useful content, with a link to the same; or (3) a description of a self-intervention you tried and tracked impacts from, quantified-self style.

When someone says they have anecdotes but want data, I hear an opportunity for crowdsourcing.

Perhaps a community blog is the wrong tool for this? What if we had a tool that supported tracking rationalist intervention efficacy? People could post specific interventions and others could report their personal results. Then the tool would allow for sorting interventions by reported aggregate efficacy. Maybe even just a simple voting system?

That seems like it could be a killer app for lowering the bar toward encouraging newcomers and data-poor interventions from getting posted and evaluated.

I have been thinking that LW really needs a categorization system for top-level posts; this would create a way to post on 'lighter' topics without feeling like you're not matching people's expectations.
Tags do not affect how the site is read by most people; some predefined categories can be used to drive navigation.
I've had this very failure to communicate with Tom McCabe (so the evidence is mounting that the problem is with me, rather than all of you) - tags are categories, only with more awesome and fewer constraints. If "predefined categories can be used to drive navigation", then surely tags can be used to drive navigation, without having to be predefined. Is the problem just that the commonly used tags need to be positioned differently in the site layout?
Tags are categories. I think xamdam meant that there should be a category of "lighter" posts that people could opt out of (i.e., not see in their feed of new posts) so that they wouldn't have the right to complain that posts didn't live up to their expectations. Promotion means that there are two tiers, but I'm not sure whether people read the front page or the new posts. Incidentally, I think people are using the tags too much for subject matter and not enough for indicating this kind of weight or type of post. For example, I don't see a tag for self-experimentation. If the tags were visible in the article editing mode, that would encourage people to reuse the same tags, which is important for making them function (though maybe retagging is the only way to go). If predefined tags were visible in the article editing mode, that would encourage posts on those topics; in particular, it could be used to indicate that some things are acceptable, as in Anna's list above.
Excellent (it was me). Ideas in comments below:
Easy change #1 would be to list the most popular tags in the edit interface, just below the tags inputbox.
Idea #3 (less easy) is to support saveable searches that include or exclude tags (and rss feeds of those searches) so that users can view the site through that customized lens.
Easy change #2 would be to add categories (or tags) to Tags, and to group the tag list by category, like: Mood: flippant, serious, light, humbly_curious Subject: standard_biases, etc.
I think there is definitely some of that, and I've heard that from other LW "fringers" like myself - people who love the concept of rationality and support the philosophy of LW, but have no time to write posts because their lives are full to the brim with awesome projects. One problem, I think, is that teaching and writing things up well/usefully is work. I spend time reading and writing blogs, and I do that in my "fun time" because it is fun. Careful writing about practical rationality would be work and come out of my work time, and my work time is very very full. Which suggests that to advance, we need people whose job it is to do this work. Which is part of what we see in the self-improvement world - people get paid to write books and run workshops, and while there is lots of crap out there, generally the result is higher quality and more useful material.
I agree 100%. This reminds me about a recent interview with Robin Hanson in which he commented something along the lines of: "If you want to really be rational or scientific you need a process with more teeth, just having a bunch of people who read the same web pages is not enough."
What does a "rationality dojo" as you envision it look like? One thing you could do to help LW become more the kind of forum you'd like it to be is write a top-level post. Another, if you don't want to do that, is to comment somewhere with the kind of top-level topics you would like to see addressed.
rationality dojo - group of people practicing together to become more rational, not as an intellectual exercise ("I can rattle off dozens of cognitive biases!") but by actually becoming more rational themselves. It would spend a lot more time on boring practical things, and less on shiny ideas. The effort would be directed towards irrationalities weighted by their negative impact on the participant's lives, rather than how interesting they are. Sure, I will see if I can find the time to write a top-level post on this, thanks for asking.
Bump. Do it.
Really? Could you point out some posts you think are particularly helpful? Recent posts? I used to read his site and remember finding it gradually more disappointing and dropping it off my list. I don't really remember why, though.
I thought his recent "time and attention" talk was excellent, and of course his writing on email is classic.
Ah, his email theory - I used to think that looked like a message from an alien world. Re-reading it briefly now it still looks completely alien, describing a situation I have never found myself in. I just haven't ever had the feeling of being overwhelmed by email or having any sort of management problem with email. Still, I'm sure there are people who do have that problem and find Mann's writings helpful. I remember a guy back in college who swore by this inbox zero stuff. (I also remember having exchanges with him like: "That info you need is in the email I sent you a few days ago." "Uh, could you resend that? I delete all my email.") I'll see if I can find the time and attention to check out the time and attention video. I would have strongly preferred text, though. Watching 80 minute lectures is not something I can always easily arrange.
Mann (after David Allen) recommends processing your email, then moving it out of your inbox to the place it belongs. He does not recommend deleting emails you have not finished with yet.
Mann has post titles like Inbox Zero: Delete, delete, delete - my friend took that to heart. I'm personally never 'finished with' an email in the sense that I'm confident that I'll never ever want to look at it again. I search through my email archives all the time. Admittedly, Mann, in that article, says that he archives his mail and doesn't delete it - but he presents that as a "big chicken" option and a couple of paragraphs up he's lambasting "holding" folders. Anyway, I've got nothing in particular against Mann - I just don't find what he's saying useful or fun (I tried the recommended video but 10 minutes in I turned it off, he didn't seem to be saying anything interesting I hadn't heard before) while I do find LessWrong frequently useful or fun.
J. Benjamin
"frustrated by all the comments about the fact that we're just barely intelligent" From "frustrated" to hinting at your own take just six words later
So now you have a highly-voted comment which contains no solutions to the problem but only a criticism of how many highly-voted comments here contain no solutions but only criticisms? I'm not saying that pointing out that something is wrong without proposing an alternate solution is necessarily a bad idea. In fact, I think it can often be helpful, and I think the specific complaint your comment makes is a good one. But, I also think that your statement isn't self-consistent. If you only value comments that propose solutions, then propose a solution!
I implied solutions. Like, people who want to get more rational should go read self-help / life hacking books instead of LW. And, if LW wants to be more useful, it should become more like self-help & life hacking community - focused on practical changes one can make in one's own life, explicit exercises for increasing rationality, groups that work together in-person to provide feedback, monitor performance, provide social motivation, etc.

I can think of at least 3 ways that people fail to make strategic, effective decisions.

  1. (as the above post pointed out) it's difficult to analyze options (or even to come up with some of them), for any number of reasons: too many of them (and too little time), lack of information, unforeseeable secondary consequences, etc. One can do one's best in the most rational fashion and still come out with a wrong choice. That's unfortunate, but if this is the only kind of mistake I am making, I am not too worried; it's a matter of learning better heuristics, building better models, gathering more data... or, in the limit, admitting that there's a limit to how far human intelligence and limited time/resources can go, even if correctly applied to problems.

  2. A second, more worrisome, mistake is not to even realize that one can step out of one's immediate reactions, stop whatever one's doing, and think about the rationality of it, and alternatives. This mistake differs from (1). As a hypothetical example, suppose the wannabe comedian generated a list of things he could do, and decided to watch the Garfield cartoon. His choice might be wrong, but it's a conscious, deliberate choice that h

...

A few years ago, Paul Graham wrote an essay[1] about type (3) failures which he referred to as type-B procrastination. I've found that just having a label helps me avoid or reduce the effect, e.g. "I could be productive and creative right now instead of wasting my time on type-B procrastination" or "I will give myself exactly this much type-B procrastination as a reward for good behavior, and then I will stop."

(Embarrassing aside: I hadn't looked at the essay for several years and only now realized that I've been mentally calling it type-A procrastination this whole time.)

EDIT: The essay goes on to link type-C procrastination with doing the impossible, yielding a nice example of how I-rationality and self-help are linked.

[1] Paul Graham, Good and Bad Procrastination

Once I held a passing interest in Mensa, thinking that an org of super-smart people would surely self-organize to impact the world (positively perhaps, but taking it over as a gameboard for the new uberkind would work too). I was disappointed to learn that mostly Mensa does little, and when they get together in meatspace it is for social mixers and such. I also looked at Technocracy, which seemed like a reasonable idea, and that was different but no better.

Now I'm a few decades on in my tech career, and I have learned that most technical problems are really people problems in disguise, and solving the organizational and motivational aspects is critical to every endeavor, and is essentially my full-time job. What smoker or obese person or spendthrift isn't a Type 3, above? Who doesn't absorb into their lives with some tunnel vision and make type 2 mistakes? Who, as a manager, hasn't had to knowingly make a decision without sufficient information? I know I have audibly said, "We can't afford to be indecisive, but we can afford to be wrong," after I make such decisions, and I mean it.

Reading some of these key posts, though, points out part of the problem faced in this thread: we're trying to operate at higher levels of action without clear connections and action at lower levels. We have a forum for level 3+ thinking, without clear connections to level 1-3 action. The most natural, if not easy, step would be to align as a group in a fashion to impact other policy-making organizations. To me, we are perfecting a box of tools that few are using; we should endeavor to have ways to try them out and hone the cutting edges, and work then to go perform. A dojo approach helps with this by making it personal, but I'm not sure it is sufficient nor necessary, and it is small-scale and from my newbie perspective lacking shared direction. Take dieting, for a counter-example: I can apply rationality and Bayesian thinking to my dietary
Good stuff. Would you consider turning it into a top-level post?
Thanks. How do I turn it into a top-level post? I walked around the site and don't see a button that lets me do that. I am new to this forum (in fact I registered to reply to the original post, which I saw on some other site).
Once you reach 20 points of karma, there will be a "Create new article" button in the upper right - same general area as your name and current karma score. To "turn your comment into a top level post" you mainly need to copy and paste, but you should also include some introductory context information, including a link to the top-level-article that inspired yours.
Actually, the "Create new article" button is always there. Posting to Less Wrong proper is disabled until you have 20 karma points, but you can always save draft articles.
You need more karma before you can make a top-level post. (I think you need 20, unless it's been changed since the site started.)
It was changed to 50 for a short while, then changed back to 20.
There's a grand tradition of women withholding sex for political reasons (usually to end a war), starting with Lysistrata. People resurrect this idea from time to time, and often achieve quite remarkable results.
As an aside: The interesting thing to remember about Lysistrata is that it was originally intended as humorous, as the idea that women could withhold sex, especially withhold it better than men, was hilarious at the time. Not because they weren't allowed, but because they were the horny sex back then.

There's an important piece missing from the article's analysis.

As humans we are inherently social in nature.

We delegate a lot of our reasoning to the wider social group around us. This is more energy efficient.

The article asks 'why do many people go through long training programs "to make money" without spending a few hours doing salary comparisons ahead of time'. We do long training programs (e.g., college degrees) mostly because they are socially esteemed. This social esteem serves as a proxy for their worth, and it's typically information that has a lower personal cost to obtain than going and looking at salary surveys.

The reason we do so little systematic testing for ourselves is that we have trusted our wider social grouping to do it for us. I don't find a rational argument about the bungee jump mechanism nearly as compelling evidence of safety as I do talking with an enthusiastic friend who has done it 20 times. If I were to learn about my car's braking mechanism in sufficient detail to convince myself of why it worked, I would never go anywhere. Instead, I see others who I trust driving the car, and 'delegate' to them.

This is simply a heuristic. It doesn't always work…

We delegate a lot of our reasoning to the wider social group around us.... the vast majority of the time it's a much more cost/information efficient way of doing things.

This strikes me as half right. Specifically: Yes, we often use social indicators to take the place of personal reasoning. And, yes, these indicators are better than nothing. But given the rapid pace (relative to the EEA) of change in e.g. what jobs pay well, what we know about how to avoid accidents, what skills can boost your productivity (e.g., typing on computers is now important, and, thus, it's important to learn more than two-fingered typing), etc., and the fact that social recommendations update fairly slowly, it seems that most on this site can do far better by adding some internet research and conscious thought to standard socially recommended productivity heuristics.

Most basically, because humans are only just on the cusp of general intelligence.

This is a point I've been thinking about a lot recently - that the time between the evolution of a species whose smartest members crossed the finish line into general intelligence, and today, is a blink of an eye in evolutionary terms, and therefore we should expect to find that we are roughly as stupid as it's possible to be and still have some of us smart enough to transform the world. You refer to it here in a way that suggests this is a well-understood point - is this point discussed more explicitly elsewhere?

It occurs to me that this is one reason we suffer from the "parochial intelligence scale" Eliezer complains about - that the difference in effect between being just barely at the point of having general intelligence and being slightly better than that is a lot, even if the difference in absolute capacity is slight.

I wonder how easy it would be to incorporate this point into my spiel for newcomers about why you should worry about AGI - what inferential distances am I missing?

We who are the first intelligences ever to exist ... our tiny little brains at the uttermost dawn of mind ... as awkward as the first replicator (2:01 in).
I watched the end of this video and liked it quite a lot. Pretty good job, Eliezer. And thanks for the link. And wow, the Q&A at the end of the talk has some tragically confused Q. And I'm sure these are people who consider themselves intelligent. Very amusing, and maddening.
Selection pressure might be even weaker a lot of the time than a 3% fitness advantage having a 6% chance of becoming universal in the gene pool, or at least it's more complicated: a lot of changes don't offer a stable advantage over long periods.

I think natural selection and human intelligence at this point can't really be compared for strength. Each is doing things that the other can't; afaik, we don't know how to deliberately create organisms which can outcompete their wild conspecifics. (Or is it just that there's no reason to try and/or we have too much sense to do the experiments?) And we certainly don't know how to deliberately design a creature which could thrive in the wild, though some animals which have been selectively bred for human purposes do well as ferals. This point may be a nitpick, since it doesn't address how far human intelligence can go.

Another example of attribution error: Why would Gimli think that Galadriel is beautiful?

Eliezer made a very interesting claim: that current hardware is sufficient for AI. Details?
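The "3% advantage, 6% fixation chance" figure quoted above is the classic population-genetics rule of thumb that a new beneficial mutant fixes with probability roughly 2s. A minimal sketch comparing that approximation against Kimura's standard diffusion formula (the population size N is an arbitrary assumption for illustration):

```python
import math

def fixation_probability(s: float, N: int) -> float:
    """Kimura's diffusion approximation for the fixation probability of a
    single new mutant (initial frequency 1/(2N)) with selective advantage s
    in a diploid population of effective size N."""
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

s = 0.03                 # 3% fitness advantage
approx = 2 * s           # the ~2s rule of thumb
exact = fixation_probability(s, N=100_000)
print(f"2s approximation:  {approx:.3f}")   # 0.060
print(f"diffusion formula: {exact:.3f}")    # ~0.058
```

For large populations the two agree closely, which is why the 3%-to-6% shorthand is standard; the commenter's point stands that even clearly beneficial mutations usually go extinct by chance.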

Another example of attribution error: Why would Gimli think that Galadriel is beautiful?

To be fair, the races of Middle-Earth weren't created by evolution, so the criticism isn't fully valid. Ilúvatar gave the dwarves spirits but set them to sleep so that they wouldn't awaken before the elves. It's not unreasonable to assume that as he did so, he also made them admire elven beauty.

Another example of attribution error: Why would Gimli think that Galadriel is beautiful?

Why do humans think dolphins are beautiful?

Is a human likely to think that one specific dolphin is so beautiful as to be almost worth fighting a duel about it being the most beautiful?

Well, it's always possible that Gimli was a zoophile.
Yeah, I mean have you seen Dwarven women?
I'm a human and can easily imagine being attracted to Galadriel :) I can't speak for dwarves.

Well, elves were intelligently designed to specifically be attractive to humans...

Most who think Moravec and Kurzweil got this about right think that supercomputer hardware could run something similar to a human brain today - if you had the dollars, were prepared for it to run a bit slow - and had the right software.
"Another example of attribution error: Why would Gimli think that Galadriel is beautiful?" A waist:hip:thigh ratio between 0.6 & 0.8 & a highly symmetric fce.

A waist:hip:thigh ratio between 0.6 & 0.8 & a highly symmetric face.

But she doesn't even have a beard!

but he did have a preoccupation with her hair...

If I'm not mistaken, all those races were created, so they could reasonably have very similar standards of beauty, and the elves might have been created to match that.
On the other hand, I suppose it's possible that if humans find Elves that much more beautiful than humans, maybe Dwarves would be affected the same way, though it seems less likely for them.
Also, perhaps dwarves don't have their beauty-sense linked to their mating selection. They appreciate elves as beautiful but something else as sexy.
Yeah, as JamesAndrix alludes to (warning: extreme geekery), the Dwarves were created by Aulë (one of the Valar (Gods)) because he was impatient for the Firstborn Children of Iluvatar (i.e., the Elves) to awaken. So you might call the Dwarves Aulë's attempt at creating the Elves; at least, he knew what the Elves would look like (from the Great Song), so it's pretty plausible that he impressed in the Dwarves an aesthetic sense which would rank Elves very highly.
Yes, this is definitively correct. Also, it's a world with magic rings and dragons, people.
There are different kinds of plausibility. There's plausibility for fiction, and there's plausibility for culture. Both pull in the same direction for LOTR to have Absolute Beauty, which by some odd coincidence is a good match for what most of its readers think is beautiful. What might break your suspension of disbelief? The usual BEM behavior would probably mean that the Watcher at the Gate preferentially grabbing Galadriel if she were available would seem entirely reasonable, but what about Treebeard? Shelob?
Particularly when referring to the movie versions, you could consider this simply a storytelling device, similar to all the characters speaking English even in movies set in non-English speaking countries (or planets). It's not that the Absolute Beauty of Middle-Earth is necessarily a good match for our beauty standards, it's that it makes it easier for us to relate to the characters and experience what they're feeling.
You write "Eliezer made a very interesting claim-- that current hardware is sufficient for AI. Details?"

I don't know what argument Eliezer would've been using to reach that conclusion, but it's the kind of conclusion people typically reach if they do a Fermi estimate. E.g., take some bit of nervous tissue whose function seems to be pretty well understood, like the early visual preprocessing (edge detection, motion detection...) in the retina. Now estimate how much it would cost to build conventional silicon computer hardware performing the same operations; then scale the estimated cost of the brain in proportion to the ratio of volume of nervous tissue. See [link] for the conclusion of one popular version of this kind of analysis. I'm pretty sure that the analysis behind that slide is in at least one of Moravec's books (where the slide, or something similar to it, appears as an illustration), but I don't know offhand which book.

The analysis could be grossly wrong if the foundations are wrong, perhaps because key neurons are doing much more than we think. E.g., if some kind of neuron is storing a huge number of memory bits per neuron (which I doubt: admittedly there is no fundamental reason I know of that this couldn't be true, but there's also no evidence for it that I know of) or if neurons are doing quantum calculation (which seems exceedingly unlikely to me; and it is also unclear that quantum calculation can even help much with general intelligence, as opposed to helping with a few special classes of problems related to number theory). I don't know of any particularly likely way for the foundations to be grossly wrong, though, so the conclusions seem pretty reasonable to me.

Note also that suitably specialized computer hardware tends to have something like an order of magnitude better price/performance than the general-purpose computer systems which appear on the graph. (E.g., it is much more cost-effective t…
Thanks. I'm not sure how much complexity is added by the dendrites making new connections.
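The Fermi estimate described two comments up can be sketched in a few lines. The constants here are illustrative assumptions in the spirit of Moravec-style analyses (a silicon-equivalent throughput for retinal preprocessing and a brain-to-retina tissue ratio), not figures from the thread:

```python
# Moravec-style Fermi estimate of brain compute (illustrative numbers only).
retina_ops_per_sec = 1e9        # assumed silicon-equivalent of retinal preprocessing
brain_to_retina_ratio = 75_000  # assumed ratio of brain tissue to retinal tissue
brain_ops_per_sec = retina_ops_per_sec * brain_to_retina_ratio

# ~10^14 ops/s, i.e. roughly the throughput of a circa-2010 supercomputer,
# which is how one reaches "current hardware is sufficient" conclusions.
print(f"estimated brain throughput: {brain_ops_per_sec:.1e} ops/s")
```

The point of the exercise is not the exact constants but that the scaling step is a single multiplication, so the conclusion is only as good as the two assumed inputs.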
The dwarves were intelligently designed by some god or other. That a dwarf can find an elf more beautiful than dwarves could be an unfortunate design flaw. (Elves were also intelligently designed, but their creator was perhaps more intelligent.) Edit: The creator-god of dwarves probably imbued them with some of his own sense of beauty.
With all respect to Eliezer, I think the gravely anachronistic term "village idiot" shouldn't be used anymore. I wanted to say that almost every time I see the intelligence scale graphic in his talks.
Why do you think the term "village idiot" is "gravely anachronistic"? It's part of an idiom. "Idiot" was briefly used as a quasi-scientific label for certain range of IQs, and that usage is certainly anachronistic, but "idiot" had meaning before that, and continues to. The same is true for "village idiot".
You're right, wnoise, "village idiot" is part of an idiom but one I don't like at all and I don't think I'm particular in this regard. I should have put my objection as "'Village idiot' is gravely anachronistic unless you want to be insensitive by subsuming a plethora of medical conditions and social determinants under a dated, derogatory term for mentally disabled people." This may sound like nit-picking but obviously said intelligence graph is an important item in SIAI's symbolic tool kit and therefore every detail should be right. When I see the graph, I'm always thinking: Please, "for the love of cute kittens", change the "village idiot"!
For what it's worth, I don't find anything wrong with the term "village idiot". However, from previous discussions here, I think I might be on the low side of the community for my preference for "lengths to which Eliezer and the SIAI should go to accommodate the sensibilities of idiots" - there are more important things to do, and a never-ending supply of idiots. Still, maybe it should be changed. It's not because it doesn't offend me that it won't offend anybody reasonable.
Paul Crowley:
In conversation with friends I tend to use George W Bush as the other endpoint - a dig at those hated Greens but it's uncontentious here in the UK, and if it helps keep people listening (which it seems to) it's worth it.

This seems a bad example to use given the context. If you are trying to convince people that greater than human intelligence will give AIs an insurmountable advantage over even the smartest humans then drawing attention to a supposed idiot who became the most powerful man in the world for 8 years raises the question of whether you either don't know what intelligence is or vastly overestimate its ability to grant real world power.

Paul Crowley:
For the avoidance of doubt, it seems very unlikely in practice that Bush doesn't have above-average intelligence.
Wikipedia gives him an estimated IQ of 125, which may be a wee bit off for the low end of the IQ distribution. Still, if that's the example that requires the least explanation in practice, why not. Maybe Forrest Gump would work as well?
Paul Crowley:
My most recent use of this example got the response "George W Bush Was Not Stupid".
OK, but if you buy the idea that environment has a substantial impact on intelligence, which I do, then it seems that the average modern human would have passed the finish line by a somewhat substantial amount. Really there is no finish line for general intelligence--intelligence is a continuous parameter. Chimpanzees and other apes do experience cultural evolution, even though they're substantially stupider than us. "I'm just about as stupid as a mind can get while still being able to grasp x. Therefore it's likely that I don't fully understand its ramifications."
You are equivocating "cultural evolution". If you fix the genetic composition of other currently existing apes, they will never build an open-ended technological civilization.
Technological progress makes the average person smarter through environmental improvements, and technological progress is dependent on a very small number of people in society. Let's say the human race had gotten lucky very early on in its history and had a streak of accidental geniuses who were totally unrepresentative of the population as a whole. If those geniuses improved the race's technology substantially, that would improve the environment, cause everyone to become smarter due to environmental factors, and bootstrap the race out of their genetic deficits.
I don't see how this note is relevant to either your original argument, or my comment on it.
It's basically a new argument. Would you prefer it if I explicitly demarcated that in the future? I briefly started writing out some sort of concession or disclaimer but it seemed like noise.
The problem here is that it's not clear what that comment is argument for, and so the first thing to assume is that it's supposed to be an argument about the discussion it was made in reply to. It's still unclear to me what you argued in that last comment (and why).
Trying to argue against a magical level of average societal genetic intelligence necessary for technological takeoff.
You can't get geniuses who are "totally unrepresentative" in the relevant sense, since we are still the same species, with the same mind design.
So: you are arguing that the point where intelligent design "takes off" is a bit fuzzy - due to contingent factors - chance? That sounds reasonable. There is also a case to be made that the supposed "point" is tricky to pin down. It was obviously around or before the 10,000-year-old agricultural revolution - but a case can be made for tracing it back further - to the origin of spoken language, gestural language, or perhaps to other memetic landmarks.
It seems to me that once our ancestors' tools got good enough that their reproductive fitness was qualitatively affected by their toolmaking/toolusing capabilities (defining "tools" broadly enough to include things like weapons, fire, and clothing), they were on a steep slippery slope to the present day, so that it would take a dinosaur-killer level of contingent event to get them off it. (Language and such helps a lot too, but as they say, language and a gun will get you more than language alone. :-) Starting to slide down that slope is one kind of turning point, but it might be hard to define that "point" with a standard deviation smaller than one hundred thousand years.

The takeoff to modern science and the industrial revolution is another turning point. Among other things related to this thread, it seems to me that this takeoff is when the heuristic of not thinking about grand strategy at all seriously and instead just doing what everyone has "always" done loses some of its value, because things start changing fast enough that most people's strategies can be expected to be seriously out of date. That turning point seems to me to have been driven by arrival at some combination of sufficient individual human capabilities, sufficient population density, and sufficient communications techniques (esp. paper and printing) which serve as force multipliers for population density. Again it's hard to define precisely, both in terms of exact date of reaching sufficiency and in terms of quite how much is sufficient; the Chinese ca. 1200 AD and the societies around the Mediterranean ca. 1 AD seem like they had enough that you wouldn't've needed enormous differences in contingent factors to've given the takeoff to them instead of to the Atlantic trading community ca. 1700.
Only if the "improved environment" meant stronger selection pressure for intelligence. That's not clear at all.
This point of view drastically oversimplifies intelligence. We are not 'just on the cusp' of general intelligence; if there was such a cusp, it was hundreds of thousands of years ago. We are far, far into an exponential expansion of general intelligence, but it has little to do with genetics. Elephants and whales have larger brains than even our brainiest Einsteins, with more neurons and interconnects, yet the typical human is vastly more intelligent than any animal. And likewise, if Einstein had been a feral child raised by wolves, he would have been mentally retarded in terms of human intelligence. Neanderthals had larger brains than us, so evolution actually tried that direction, but it ultimately was largely a dead end.

We are probably near some asymptotic limit of brain size. In three very separate lineages - elephant, whale and hominid - brains reached a limit around 200 billion neurons or so and then petered out. In the hominid case it actually receded from the Neanderthal peak, with Homo sapiens having around 100 billion neurons. Genetics can surely limit maximum obtainable intelligence, but it's principally a memetic phenomenon.
Yes, because brain size does not equal neuron count; there are scaling laws at play, and not in the whales'/elephants' favor. On neurons, whales and elephants are much inferior to humans. Since it's neurons which compute, and not brain volume, the biological aspect is just fine; we would not expect a smaller number of neurons spread over a larger area (so, slower) to be smarter. Cite for the 200b and 100b neuron claims? My understanding too was that H. sapiens is now thought to have more like 86b neurons and the 100b figure was a myth, which indicates the imprecision even for creatures which are still around and easy to study...
Yes. When I said 'large', I was talking about size in neurons, not physical size. Physical size, within bounds, is mostly irrelevant (although it does affect latency, of course). No: they really do have more neurons, ~257 billion in the elephant's case (2014).

According to Google, an elephant brain is about 5 kg vs a human's 1.4 kg. So we have 51 billion neurons per kg for the elephant vs 60 to 75 per kg for the human. This is, by the way, a smaller difference than I would have expected. The elephant's brain has a larger cerebellum than ours but a smaller cortex: about 5 billion neurons vs our 15 billion-ish. Interestingly, the elephant cortex is also sparser while its cerebellum is denser, perhaps suggesting that we should look at more parameters, such as synapse density as well (because of course there are many tradeoffs in neural micro-circuits).

Anyway, the human cortex's 3x neuron count is a theory for our greater intelligence. But this by itself is insufficient:

* the elephant interacts with the world mainly through its trunk, which is cerebellum-controlled
* humans/primates use up a large chunk of their cortex for vision, the elephant much less so
* humans rely far more on their cortex for motor control, such that humans completely lacking a cerebellum are largely functional

Now, is having a larger cortex better for general intelligence than a larger cerebellum? Most likely. It appears to be a better hardware platform for unsupervised learning. But again, the key to intelligence is software: we are smart because of our ability to accumulate mental programs, exchange them, and pass them on to later generations. Our brain is unique mainly in that it was the first general platform for language, not because our brains are larger or have some special secret circuit sauce. (Which wouldn't make sense anyway; humans are recent and breed slowly; the key low-level circuit developments were already made many millions of years back in faster-breeding ancestor l…
[emphasis added] Wait, what?
I think jacob_cannell is correct in that whales and elephants have larger brains, but that he's extrapolating incorrectly when he implies through the conjunction that larger brain size == more neurons and more interconnects; so I'm agreeing with the first part, but pointing out why the second does not logically follow and providing cites that density decreases with brain size and known neuron counts are lower than in humans.
I don't always take the time to cite refs, but I should have been clearer that I was talking about elephant and whale brains as being larger in neuron counts: "We are probably near some asymptotic limit of brain size. In three very separate lineages - elephant, whale and hominid - brains reached a limit around 200 billion neurons or so and then petered out."

Ever since early tool use and proto-language, scaling up the brain was advantageous for our hominid ancestors, and in some sense it even overscaled, such that we have birthing issues. For big animals like elephants and whales especially, the costs for larger brains are very low. So the key question is then: why aren't their brains bigger? Trillions of neurons would have almost no extra cost for a 100-ton monster like a blue whale, which is already the size of a hippo at birth. But instead a blue whale just has order 10^11 neurons, just like us or elephants, even though its brain amounts to a minuscule 0.007% of its mass. The reasonable explanation: there is no advantage to further scaling. Perhaps latency? Or more likely, there are limits to what you can do with one set of largely serial IO interfaces. These are quick theories; I'm not claiming to know why, just that it's interesting.
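The neurons-per-kilogram arithmetic in this subthread can be checked directly. A quick sketch using the figures quoted in the comments (the ~257 billion elephant count from the cited 2014 study, the ~86 billion human count, and the rough brain masses mentioned above); this is back-of-envelope verification, not authoritative neuroanatomy:

```python
# Verifying the density comparison quoted in the thread.
brains = {
    "elephant": {"neurons": 257e9, "mass_kg": 5.0},
    "human":    {"neurons": 86e9,  "mass_kg": 1.4},
}

densities = {}
for name, b in brains.items():
    densities[name] = b["neurons"] / b["mass_kg"]
    print(f"{name}: {densities[name] / 1e9:.0f} billion neurons per kg")
# elephant comes out near 51 billion/kg and human near 61 billion/kg,
# matching the "51 vs 60-75" range discussed above.
```

As the subthread notes, the raw density gap is smaller than one might expect; the more telling difference is where the neurons sit (cortex vs cerebellum), which a per-kilogram figure hides.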

I woke up this morning with a set of goals. After reading this post, my goals abruptly pivoted: I had a strong desire to compose a reply. I like this post and think it is an excellent and appropriate reply to Lionhearted's (also a nice post), and would have liked to proffer some different perspectives. Realizing that this was an exciting but transient passion, I didn't allow my goals to be updated and persisted in my previous plans. An hour or two into my morning's work, I finally recalled the motivation behind my original goals and was grateful. It took some time, though, before I felt emotionally that I had chosen the right set of goals for my morning. Working through those transient periods of no-emotional-reward is tough. You need to have faith in the goal decisions of previous selves, but not too much.

I believe this comment is along the lines of what I would have written yesterday. If you measure intelligence against the goals we haven't met, we certainly come up short. However, zooming out to look at humanity as a whole, I am impressed by how productive we are. Huge cities, dozens of them, with gorgeous and functional buildings and everyone milling about being productive, all over the world. The infrastructure of our civilization is enormous. And all the art we output – books, movies, gardens. I think we're amazingly successful at achieving some types of goals, when seen as a single complex system.

When you zoom in to the individual, I think it becomes more difficult to judge from among the small-scale effects if humans are meeting their goals. The problem of individual success is so complex not only because we have trouble achieving our goals, but because it is a much more difficult task to decide on appropriate goals, and distribute resources among them.

* Whatever our goals are, x, y, z; our goal is rarely to "have x, no matter what". There's always a trade-off and a limit to the resources we're willing to expend towards x. Several comments have already mentioned the cost considerations in decision-making about goals. In particular, it can be argued that considering resource costs, one might better pursue nothing than pursue sub-optimal goals; pursuing goals of unknown value sub-optimally may be a reasonable middle ground.
* Choosing goals appropriately so as to not waste effort depends upon an environment we have limited information about. Unknown variables and chance play a very large role in whether you will be successful or not. Instead of choosing a goal and directly pursuing it, it can be wise to do nothing and wait for opportunities. In life philosophies, this is described as 'not fighting the universe' or 'yang instead of yin'.
* There is a mind-body 'wholistic' aspect to meeting our goals, which unfortunately gives the impression that success…

The fact that we so blatantly fail to optimize for using reason to solve our problems, and so effortlessly use it to rationalize our actions, is another strong piece of evidence for the thesis that reasoning evolved primarily for arguing.

Do you agree with (a)-(g) above?

  • (a) Yes. I have to do that consciously, verbally.
  • (b) Same – I have to mentally talk with myself about this;
  • (c) Thankfully, this one comes easy to me – I usually become genuinely interested in whatever I happen to be doing because I'm a damn perfectionist. This held true for all jobs I had during my career and all my past and current hobbies.
  • (d) Same as a) and b) – I have to consciously gather such information. Thankfully, I usually become interested in the subject, provided that it aligns with my abilities and interests to at least some degree;
  • (e) Speaking of "methods that aren’t habitual for us", I'm fascinated with the idea of Nakatomi space (not math), and I'd very much like to level up my own Nakatomi navigation abilities;
  • (f) No opinion yet;
  • (g) I sort of failed this one last time. I had a conjunction in my goal definition: "Build the best Widget on the planet AND have at least one million dollars per year in profit". The overlap between the two subgoals turned out to be small. Plus, the goal had an internal conflict: I wasn't really ready to sacrifice the perfection of the Widget in exchange for the million. As a…
Cf. Umeshisms.
Yep, that's where I took it from, couldn't remember the source.

There's a reason why we don't think strategically, and it's actually a very good reason, and it is unfortunately why we will never have an innately strategic mentality: cost. Specifically, the cost of time. I.e., it's always cheaper in terms of time to make a correct lucky guess on the first try than to work out a solution properly over a significant length of time.

Imagine there was such a thing as a lucky charm, and by holding it, you were, say, 70% more likely to always get the right answer on your calculus test without even needing to completely understand the problem. In this situation, taking the calculus test would take you just a few minutes, and you'd still score well enough to pass the class. In fact, you could take the entire year's worth of tests, perhaps, in the same amount of time that it takes the rest of the students to work their way through the first one, yet still most likely pass. Your lucky charm didn't give you the best grade, but it allowed you to quickly solve all the problems you needed to solve and now you can spend the rest of the year taking other classes.

Well, the thing is, the human mind has evolved just such a "lucky charm", specifically our…

These unconscious strategies optimized or satisficed in the ancestral environment, when people weren't conscious of enough relevant factors to make long chains of reasoning (or quantitative thinking) obviously superior to their unconscious heuristics and biases. They're clearly far from optimal (and sometimes far from satisfactory) in the modern developed world. Some things have changed way too fast for evolution to keep up.
I completely agree; we think with our 'gut' as much as with our 'brain'. Only, I wouldn't denigrate "pattern matching". It's much more than a lucky charm; it's a powerful and high-level component of intelligence. It's something that we haven't systematized yet, and so we don't understand it or always trust it very well. All my comments today will be defending human intelligence. I wonder about the motive behind this goal, since I agree people could easily be more intelligent, and that would be great. Also – in comparison to what? It's not like my saying 'humans are so intelligent they're at least 8.3!' means anything different than 'humans are so dumb they're no more than 8.6!'.
I think the statement "Humans act a lot stupider than they think they do" has a pretty non-arbitrary meaning.
Stephen Colbert recommends that we think with our gut.

Part of it is that achieving success through means other than the standard things you're supposed to achieve success by doing well at can feel like cheating, possibly for some sort of signaling reason. Part of it is that there are serious psychological and social costs not only to doing things that other people don't do, but to doing things for different kinds of reasons. Part of it is that you're suggesting the benefits of what you call being strategic are larger than they really are, by focusing on available cases where it changed someone's life and ignoring a great many forgettable and hard-to-pinpoint cases where it was just a time/energy sink, or where merely considering it was a time/energy sink, or where there was good reason to believe the relevant strategy had already been taken into account by whatever caused you to be doing the default thing, or where there seemed to be such good reason absent an appreciation of the world's madness.

Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out.

I think you're underestimating the average person.

I might well be. Given the value of empiricism-type virtues, does anyone want to go test it (by creating an operationalized notion of what it is to understand the heuristics, then randomly choosing several people -- independently, from e.g. your local grocery store -- and testing it on them), and let us know the results?

Jasen Murray and Marcello and I tried this the other day concerning what portion of native English speaking American adults know what a "sphere" is ("a ball" or "orange-shaped" count; "a circle" doesn't), and found that of the five we sampled, three knew and two didn't.
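For what it's worth, three of five is only weak evidence about the population rate. A quick sketch (a standard 95% Wilson score interval; the function name and the choice of interval are mine, not part of the original experiment) shows how wide the uncertainty from n=5 really is:

```python
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 3 of the 5 people sampled knew what a "sphere" is
lo, hi = wilson_interval(3, 5)
print(f"95% CI for the true proportion: ({lo:.2f}, {hi:.2f})")
```

The interval spans roughly 23% to 88%, so this sample is consistent with anything from "most people don't know" to "nearly everyone knows" -- hence the suggestion above to repeat the experiment with more people.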


I once taught middle- and high-school teachers who wanted to get certified to teach math. I was a TA for a class in geometry (basically 8th- or 9th-grade Euclidean geometry). I had an incredibly hard time explaining to them that "draw a circle with center point A" means that A goes in the middle of the circle, instead of on the boundary. As I recall, it took more than a week of daily problem sessions before they got that.

Of course, I may have been a bad teacher. But I was trying.

I find that very surprising; I thought of using "circle" to refer to just the boundary and not the interior as being primarily a mathematical usage... though I suppose not to the same extent as it is with "sphere".
Did you do this test by asking them to define the word "sphere" verbally? Because I can easily imagine a less-articulate person saying "circle" when they really do understand the difference between a plane figure and a sphere. It might be better to ask them to select which of a given set of objects is a sphere, or even to name something that is shaped like a sphere, although in the latter case they might use the rote knowledge that the earth is a sphere, which could create bias in the opposite direction.
My estimate would be far on the other side: I think at least 95% of the population could understand and agree with those heuristics. I pay less attention to what people say they understand, and look at what they do, and am usually impressed by how intelligent people are -- in ways academic tests would not typically fully measure. I think only 5% could compose these heuristics if asked to, and only half of 1% could know to compose them without being told to... Regarding your study, I'm not sure what you could deduce other than that 'sphere' is not in common usage, at least not as the geometric object. (For example, any 4-year-old child can distinguish a sphere from other shapes, and then 'sphere' is just a label.) Perhaps 'sphere of influence' is heard slightly more frequently than sphere as a geometric object. I would expect that the former connotation, if superseding the geometric one, would result in a little confusion and waving of hands, since it is so abstract.
What about "a 3D circle"?
We counted that as correct.

Did the ones who failed to give correct answers say something like "a species of worm found in south America," or did they refrain altogether from answering--possibly from fear of a trick question, or that they might be asked to explain the Banach-Tarski theorem about sphere doubling via the axiom of choice if they worded their answer in a way vulnerable to that?

Did you hold clipboards or wear lab coats while doing the questioning?

We tried to be friendly and unintimidating and, if asked, we explained with a bit of embarrassment that it had to do with a bet. Many just assumed we needed to know what a "sphere" was, though. We might have said we weren't looking for a fancy answer, I'm not sure. (Ideal, if you want to repeat this experiment, would be to get a child to do the asking and to say it's for their homework or something.) I don't clearly remember what wrong answers we got; it's possible that someone said "Does it mean circle-shaped?" but couldn't give follow-up detail and someone else, who looked rather blank, said something like "Um. 'Sphere?' Do you know what that is, Frank?" and then asked the man she was with, who answered correctly.

Like SarahC, I used to tutor folks who were en route to becoming high school math teachers, and who had to pass a math exam to be allowed to teach. Many of them genuinely didn't know what a sphere was, in the sense that often their eyes would light up if I told them that "sphere" meant "ball-shaped" (and, if I didn't, they would memorize the formula for the volume of a sphere but would often not know they could app... (read more)

It is important to note that real Bayesians wear robes, not lab coats. And they carry with them archival-quality notebooks and archival-quality pens. Lab coats are just silly. In the weeks and months that followed, San Franciscans became accustomed to being accosted and asked a brief series of questions by a friendly young person carrying an archival-quality notebook and wearing a clown suit.

... I think San Franciscans are already accustomed to that. It's just that kind of place.

My memory suggests either Jasen or Louie.
Thanks, Kaj.
Yeah, I thought the post was largely well-reasoned, but that that statement was reckless (largely because it seems ungrounded and plays to a positive self-image for this group).

Is it really fair to say there has been "no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective"?

Clearly we've evolved the ability (trainable hardware) to do the kind of planning, abstract reasoning, and analysis that would help us find these optimal courses of action. Furthermore, we've evolved the tendency to do a fair amount of this (compared to other life forms) automatically.

This isn't just a hardcoded ability to execute plans that bring food, shelter, and sex. If you decide you want a new pair of shoes, it's trivial for you to mentally construct and carry out the relatively complex (again, comparing to other species) plan required for you to get them. You'll even carry out some optimizations without too much effort ("wait, there's a closer shoe store east of here").

While it's trivially true that we haven't evolved to automatically seek the optimum path in all things (which there might be a good reason for, e.g. time-constraints on assessing and choosing paths), I think it's fair to say evolution has given us a running start.

And the selective pressures are pretty clear: somethi... (read more)

This is true and a valuable correction; however, I would argue that our planning ability evolved for very different goals in a very different environment. While it works pretty well at "figuring out if your friend is backstabbing you" or "figuring out how to get calories", when it comes to long-term goals in the modern environment ("how do I manipulate this laptop so as to make me millions of dollars over the next 5 years?") it performs miserably, and all of Anna's points then apply. Paul Graham recently made a related point: the world is getting more and more addictive, and in order to be productive we must develop more effective screening and anti-time-suck methods. On the plus side, there are a few great people working on this problem -- Merlin Mann comes to mind.

Here's a strategic thing I figured out:

When I wake up really early, I get a lot more work done because the morning hours have no distractions and I feel like I'm ahead of the day, like I'm using 100% of the possible day.

Therefore I wake up really early now - 3-5am.

I wonder how much this differs from person to person. I tried correlating 2.5 years of data (when I got up from bed with my self-ratings of productivity for that day), and looking at the LOESS & cubic fits, it seems that getting up a bit after 8AM correlates with productivity, but later is worse and earlier is much worse (albeit with limited sampling). And it's not hard to tell a non-causal or reverse-causation story: I can't be very eager to wake up and get started on work if I'm willing to sleep in to 10AM, now can I...? So I dunno. Maybe it's literally just simple sleep deprivation. That said, I'll have to remember to recheck this later; I'm trying out caffeine pills to cause earlier rising, so if earlier rising itself causes more productivity, there should be an attenuated effect from the caffeine.
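For anyone who wants to try the same correlation on their own logs, here's a minimal sketch of the cubic-fit half of the analysis. The numbers below are made-up stand-ins, not gwern's actual data, and all variable names are mine:

```python
import numpy as np

# Hypothetical stand-in data: wake-up hour vs. 1-10 productivity self-rating
wake_hour    = np.array([5, 6, 7, 8, 8.5, 9, 10, 11, 12])
productivity = np.array([3, 4, 6, 7, 7.5, 7, 6, 5, 4])

# One of the two fits mentioned above: an ordinary cubic polynomial
fit = np.poly1d(np.polyfit(wake_hour, productivity, deg=3))

# Where does the fitted curve peak over the observed range?
grid = np.linspace(wake_hour.min(), wake_hour.max(), 200)
best = grid[np.argmax(fit(grid))]
print(f"fitted productivity peaks near a {best:.1f} AM wake-up")
```

A LOESS smoother (e.g. statsmodels' `lowess`) could be substituted for the polynomial fit; with a couple of years of real, noisy data, expect the peak location to move around quite a bit.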
Haha nice graph, good luck.
In general, getting an isolated environment is really important for certain types of work, and early or late are the simplest methods of isolation given how social humans and our environments are.
Depending on your environment, the late night hours could also serve the same purpose.

The deciding factor there is likely to be biochemistry, not environment. Many people simply can't be very productive late at night. They run into issues like caffeine crashes, as well as other biochemical fatigue causes that're harder to identify.

Yup. I'm one of four new hires; two of us keep a relatively normal workday, one wakes up at 5 and does all his work in the morning, and one stays up and does all his work between 10 PM and 4 AM. (Thank goodness for academia.)
Me, I suck at mornings. (I got out of bed at 2:00 PM today.)
Yeah, any arbitrary hack that leads to better results is a rationality win.
I find it easy to keep working when it's late. Eventually I realize that I've become slow and tired, and that I would have been better off had I gone to sleep hours ago and resumed work after the rest. I realize that by "late night hours" you didn't necessarily mean staying awake when tired. I also think the immediate post-waking hour is potentially valuable, in that I feel different during that time, so I might work differently (in a good way? I don't know). Maybe I just feel different because of what I'm typically doing, and if I sat down and worked, my state would quickly normalize.

I've wrestled with this disparity myself: the distance between my goals and my actions. I'm quite emotional, and when my goals and my emotions are aligned I'm capable of rapid and tireless productivity. At the same time my passions are fickle and frequently fail to match what I might reason out. Over the years I've tried to exert my will over them, developing emotionally powerful personal stories and habits to try to control them. But every time I have done so it tends to cause more problems than it fixes. I experience a lot of stress fighting with myself in this way and quickly lose the ability to maintain perspective or, more importantly, to prioritise. My reason becomes a tunnel-visioned rationalisation and, rather than being a tool for appropriate action, becomes a tool to reinforce an unwise initial judgement of my priorities.

More recently, I've come to accept that my conscious reasoning self is, to an extent, a passenger in an emotional mind. What's more, that that emotional mind often has a much more sophisticated understanding of what will lead to a satisfying future than my own reasoning can provide. If I have the patience to listen (and occasionally offer it suggestions) I... (read more)

I've had the same experiences re: passion and productivity.

On your last comment: "I think there is a real risk of having ones culture and community define goals for ourselves that are not actually what we want." It's not clear to me what your concern is. You draw a distinction between cultural goals and values, and personal goals and values, but how would you be able to draw the line between the two? (What does it mean to feel something "deep down"?) And even if you could draw that distinction, why is it automatically bad to acquire cultural goals? What would be the consequences of pursuing these "incorrect" goals or values?

The most eye-opening article I've read recently, of possible relation to the subject, is a series on hunter-gatherer tribes by Peter Gray. While I'm skeptical of Gray's seemingly oversimplified depiction of hunter-gatherer tribes, the salient point of his argument is that there is a strong anti-authority norm in typical hunter-gatherer tribes. This leads me to think that the "natural" human psyche is resistant to authority, and conformity has to be "beaten in." Some of my own emotional conflicts have been due to a conflictedness about obeying authority; it seems to me that the "emotional mind" is more in line with these primal psychologies, which are exhibited more strongly in hunter-gatherer tribes than in modern society.

Certainly I would argue that following the emotional mind is not something everyone should do; it seems like there are a few niches in our society for the totally "free", who have the luxury of being able to make a living while largely ignoring the demand for individuals to find and conform to a specific externally-rewarded role in society. The positive and negative feedback individuals receive for following or ignoring their emotional minds, I would hypothesize, plays a large part in determining how much they ultimately listen to their emotional minds.
Thanks for the link. You make a good point about the lack of a clear distinction, and at a fundamental level I believe that our genes and external environment determine our behaviour (I am a determinist, i.e. I don't believe in free will). However, I think it is also possible to be highly motivated about different things at once, which can cause a lot of mental stress and conflict. I think this occurs because we have a number of distinct evolved motivations which can drive us in opposing ways (e.g. the desire to eat, the status desire of being thin, the moral desire to eat healthily, etc.). What I mean by "deep down" is the result of balancing these motivations to provide a satisfying compromise.

The reason I emphasise culture is because I feel that society has developed powerful means of manipulating our motivations. This is good to the extent that it can make our sense of motivation (and enjoyment) more intense, but it can also lead to these strong internal conflicts, which, at least for myself, are not enjoyable. I am fascinated by how these manipulations of our motivation occur and, like yourself, experience a strong resistance towards authority. I think the strength of these feelings is a reflection of my personality: on a Myers-Briggs assessment I am an ENTP, and descriptions of this type indicate a common resistance to authority. In part I suspect this is because I don't find arguments not based on reason to be that legitimate. I'm not sure whether this personality is 'more natural' or is merely one form of survival strategy reflected by the interaction of my genes with the environment.

I do feel a strong disparity between the world as it is and how I think it could (should?) be. In particular I think there is a great difference between people's internal stories of why they act as they do and the true dynamics of how they have been influenced. For example, I find the ideas of Adam Curtis, John Taylor Gatto and Alain de Botton very interesting. I recognise that the soci

I agree with all of this.

Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out?

At my organization, the leaders regularly (every 3-12 months) get together and ask: "What have we been doing? Is it the most useful thing? If not (as has always been the case when we've done this), why not? How can we do better?" We always find ourselves having made substantial errors, and over our 2+ years have found that our activities are slowly getting more focused on what matters -- although still much less than we'd like.

Personally, the standard goal-setting / time-management techniques don't work great for me, but they are better than nothing. At least yearly, I explicitly review my life goals and annual sub-goals, which has some effectiveness. I keep them printed out on my laptop, which has had no effect. I have been experimenting lately with tracking time spent on each project (the Pomodoro Technique), which has been going quite well - it is harder to deny that you aren't working on the right thing when the timer is staring you in the face saying "I am off because you are not working on one of your projects, you mus... (read more)

Thanks for mentioning "Eat That Frog". I'm skimming through a PDF version and so far it seems to be an excellent book. I'm ordering a paperback from Amazon.
I was going to ask what your biggest complaints with Getting Things Done were, but then I saw that you have a "gtd" tag on your blog.

It has little to contribute about what to work on when, and how to make that happen. I'm somewhat ADHD, so my problem is filtering my mass of ideas and focusing on the ones that are most important, not most shiny. Tracking all my to-dos just results in my having lots of long lists of things I will never do. GTD has a teeny bit of this with its 50,000-foot through 10,000-foot reviews, but it mostly ignores the question of "how do I decide what to do, what to defer, and what to dump?", and to me that's the crux.

Contrast with something like "Eat That Frog!" which is about repeating again and again the simple message that if you focus your time working on the most useful task for your most important project, you will be much more productive. (Plus various heuristics for identifying such projects, such tasks, and building up the habit). It's a very simple message, yet following it, for me, yields much greater productivity returns than GTD.


Thanks for the list, and to you and Lionhearted for the posts. I haven't yet figured it all out. But I'm trying to get started on this approach:

Time "working toward your goals" as you usually do is habitual. There's no harm in writing out a calendar for your pre-existing habits, and it's probably very useful for most people to do so to form new habits. My system mostly revolves around calendars.

In my calendar, the habit I've written in is a bit of planning or "meta" time. Twice a week, I plan out a full week. By re-evaluating the course of action half-way through, I'm hoping it will be easier to notice where I go off-track.

Once a month, this planning time must include meta-planning. During this time, the idea is to review whether my planning method is the most effective one. This is the time for reviewing the past month's calendar, and also for reading any books on planning.

As for evaluating sub-goals, I've decided that the best step after some initial self-reflection is consultation. Therapy/coaching can be valuable for anyone working to solve an internal problem that defends itself, and it seems prudent to gain what I can from professional guidance. I've stated that... (read more)

Having regular time which is explicitly for planning, not working, is vital. Daily, weekly, monthly, and yearly seems to work pretty well. Daily - what are my most important tasks? Weekly - how did the last week go? What are my critical projects/tasks for next week? And so forth. That's one of the simple-but-massively-effective insights of things like GTD, even though I disagree with their tactics - regularly spend time explicitly planning rather than working.

Luke Stebbing:
Yes, my approach is similar. I schedule planning time where the level of abstraction is proportional to the logarithm of the recurrence period, and it seems effective at pruning cached goals and sanity-checking my meta-goals. (However, it's difficult to test because of the time scales involved and the fact that I can't fork myself.) Recently, I noticed that my general skills aren't improving as fast as I'd like, so I decided to take advantage of compound interest[1] and created a parallel structure for working, learning, and meta-learning. 1. Richard Hamming, "You and Your Research" EDIT: Fixed link misparse.
This brings to my mind the idea of a complete n-ary tree (with n being the base of your logarithm), with the highest abstraction level at the root - if you spend equal time on each node, then you'll portion time across levels as you described. I found this amusing - I'm not sure I know of any generally meaningful meta-thinking levels beyond say, 2.
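The equal-time-per-node observation is easy to check: in a complete n-ary tree, level d has n^d nodes, so spending equal time per node gives level d a share of total time proportional to n^d. A small sketch (the base and depth are arbitrary examples, not anything from the comments above):

```python
def time_share_per_level(n, depth):
    """In a complete n-ary tree, equal time per node gives level d
    (root = level 0, most abstract) a share proportional to n**d."""
    counts = [n**d for d in range(depth + 1)]  # nodes per level
    total = sum(counts)
    return [c / total for c in counts]

# e.g. a base-4 tree with three levels below the root:
for d, share in enumerate(time_share_per_level(4, 3)):
    print(f"level {d}: {share:.1%} of total time")
```

With base 4 and three levels below the root, the root (most abstract level) gets about 1% of total time and the leaves about 75% -- meta-thinking stays a thin slice, matching the intuition that there aren't many meaningful meta-levels to spend time on.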

Perhaps the only way to train yourself to achieve long-term goals is to use short-term motivation to improve your automatic behaviours, instead of trying to train yourself to have motivational systems that work on long-term multi-step plans.

What if we broke down the action steps of your algorithm into:

  • ask yourself what kind of person achieves goals like this by habit
  • ask yourself how you could change yourself into that kind of person, perhaps by establishing new habits
  • evaluate whether your new habits are effectively causing you to do things that work t
... (read more)

I think you rather overstate your case here. When you say:

But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically:

I'm not sure who you are referring to by 'we'. Most of these tactics are fairly commonly advised by everything from management and business books to self help and sports training. Some of them are things that come naturally to me and seem to come naturally to quite a few other people I know ... (read more)

I agree that many of these heuristics are discussed in the business and self-help literatures reasonably often. My point was simply that we for the most part do not automatically implement them -- humans seem not to come with goal-achievement software in that sense -- and so it should not be surprising that most human "goal-achievement" efforts are tremendously inefficient. These heuristics are relatively obvious to our verbal/analytic reasoning faculties when we bother to think about them, but, absent training, are mostly not part of our automatic reward-gradients and motives.

If you find that e.g. (a) and (c) come fairly naturally to you, ask yourself why, and see if you can spell out the mechanics in ways that may work for more of us. The question here isn't "are (a)-(h) novel ideas that demonstrate amazing original insight?" but rather "how can we get our brains to automatically, habitually, reliably carry out heuristics such as (a)-(h), which seem to offer straightforward gains in goal-achievement but seem not to be what we automatically find ourselves doing?"

I think d), for example (gather information), is pretty 'automatic' for many (if not most) people; it is the natural first step. It is often difficult to find accurate information and to detect and ignore misinformation, however, so simply taking this step is not sufficient on its own; I think it is in the execution that most people fail.

Both a) and c) have come naturally to me for as long as I can remember. I'm afraid I can't offer any more detail through introspection. It still strikes me as odd when people don't do these automatically, even though I've learned over time that many people do not.

For some of the other heuristics, e) for example, I've had to consciously work to develop them as habits of thought (still imperfectly in this case). My general approach has been to periodically think through what other heuristics I could apply (usually prompted by getting stuck / not making progress on some goal) and then apply any heuristics I realize I have neglected. Over time some things can move from this 'meta' level of analysis to become more automatic habits.
I disagree for everything people have enough information about to have formed a prior opinion. Gathering information is predicated on the idea that you do not have enough information. Most people believe they already know what they need to know, and all that is left are the details.

The perfect example is the one in the article: I want to become a comedian, so I will watch Garfield. Where is the intermediate step of finding out whether or not watching a funny show is a good way to learn how to be funny? You need more information to even begin to answer that, yet he skips this step. Why? It is almost certainly because he has already decided that the way to learn to be funny is to study funny things, and he thinks Garfield is funny, so he is going to study it. Now, it is entirely possible he could learn to be funny just by watching Garfield and asking the right questions, but given his track record I seriously doubt it. It's also re-inventing the wheel, because other people have figured out the secret of funny before him (else there would be no one funny to study), and the information is available for those who seek it.

If a person is aware he lacks information, then yes, I would agree that gathering information is automatic. However, most people in most situations where this comes up are not aware that they lack information. They believe they know exactly how to do what it is they want to do, even though they are almost certainly wrong, and even though they are wrong on these matters all the time (witness the many failures to achieve their goals). Therefore, there is no perceived need to seek new information, so seeking information is not automatic. Another way of putting it: you can't seek the right information if you aren't looking for it. I would agree that, when people are aware that they lack information, they generally try to inform themselves.
You can edit your comment to fix the quote formatting. We use Reddit Markdown syntax - you can see the most-used options by clicking "Help" below the comment box while you are writing/editing a comment (to the right of the "comment" and "cancel" buttons). To quote something, just start the paragraph with > .
Thanks :)
Ah... silly me, I meant to look up exactly how to do quotes before I posted, and I forgot. My mistake.
Likewise. I somewhat envy those who can form or decide "doing (or achieving) X will make me happy", and have it really turn out to be true (whether it's an accurate or merely self-fulfilling prophecy doesn't matter too much). I've considered whether this sort of confusion (about what goals will give lasting happiness in their pursuit or accomplishment) might have a solution in caring less about some things (to lessen constraints until there's a reachable solution). For example, I like to do things that give me evidence that I'm unusually talented. Perhaps if I gave up that reward, I would find myself doing things that are more pleasurable or valuable. I definitely don't think a scorched-earth Buddhist "don't care about anything" is a good move for me. I'm trying to give up just what seems optional and harmful (while expecting sometimes to find that I can't, and so shouldn't try to, even though a hyper-rational person would be able to).

I somewhat envy those who can form or decide "doing (or achieving) X will make me happy", and it really turns out to be true (whether it's an accurate or merely self-fulfilling prophecy doesn't matter too much).

Don't ask what will make you happy, ask what future conditions you would prefer to experience, and what self-descriptions you would prefer to judge yourself as having.

Why? Because our brains aren't evolved to optimize happiness, they're evolved to steer the world to more-preferred states, and to optimize our expectations of others' perception of us. So if you start from those points, your inquiry (and subsequent optimizations) will benefit from hardware assistance.

(Whereas, if you try to optimize "what will make me happy", your brain will get confused, and/or try to optimize what things, socially speaking are "supposed to" make you happy, i.e. what your brain expects would cause your peers/tribe members to judge you as being happy.)

Have you written elsewhere in more detail about this? I'm particularly interested in any tips you have on using our social expectation machinery successfully.
Well, I did a multi-part video series/audio CD on this topic a couple of months ago (called "The Secrets of 'Meaning' and 'Purpose'"); my comment above was more or less an attempt to summarize one of its key ideas in a couple of sentences. I've also written about it in my newsletter before, but none of these materials are publicly available at the moment, even for sale. (I keep meaning to put them up for sale, but I'm usually too busy getting my current month's CD, newsletter, and workshop put together to spend much time on trying to get more business. Probably I should think more strategically and move "posting on LW" a bit lower on my priorities... ;-) )

As for tips: think character/identity-priming. What "kind of person" do you want to be, in the sense of "the kind of person who would X"... where X is whatever you would like to motivate yourself to be/do. What kind of person do you want to see yourself as? Be sure to see it from the outside, as if it were someone else. Experiments show that "kind-of-personness" priming has a big effect on people's decisions; when our identity is primed as belonging to a particular group, we automatically behave more like a stereotype of that group. So, pick what group(s) you want to prime yourself as a member of, and go for it. ;-)
This seems right. The things people have described to me as goals they reached that, as they predicted, made them happy were definitely of the two broad types you described. If you construe hedonic experiences as falling under "future conditions you would prefer", then perhaps your dichotomy is exhaustive. For sure nobody needs to be told to do what feels best locally -- and most of us have reached a limit in that respect (there are only so many cheesecakes you can benefit from).

Some complaints, however. Deciding which future conditions I would prefer seems just as hard as predicting what I can accomplish that will make me happy. Also, I have been hesitant to indulge in such self-description satisfactions, because it seems to me that they're most often achieved by, or result in, hypocrisy. However, I should probably just do it if it feels good.

You seem to contradict yourself: other than preferred future conditions and preferred self-descriptions, are there any other things that can make me happy? If not, then you seem to be arguing that evolved human brain-nature does in fact help me become happy. Also, why do you argue only from evopsych/biology? I'm mostly limited by the options permitted by the society I live in, and may still be crippled by some religious upbringing or other social programming that lacks force of law or threatened violence.

The second is a subcategory of the first. And "preferred" -- I assume you mean preferred for various genes' survival. I think there is a lot about us that is accidental and serving no particular gene (it's just some artifact of the reachable or actually reached evolutionary "design").

I do think it's fine to ask of my present state "am I happy (in other words, how do I feel)?", and to wonder "what will make me happier if I get it?" For the latter, I do like your two suggested (vague) subgoals. I think the former is still essential, although I suppose you could ask how you feel in relation to your two general happiness subgoals.
What I'm saying is that the machinery is better at answering concrete questions relating to these matters than abstract ones. To our abstract thinking machinery, it seems like there should be no logical difference between "what will make me happy?" and A) "what kind of world do I want to live in?" or B) "what kind of person do I want to be?" However, as the saying goes, the difference between theory and practice is that in theory there's no difference, but in practice there is. ;-)

No, I meant "preferred" as in "what would you prefer?" Not your genes. (Your genes already have another level of control over what sort of preferences you're able to learn, but that's not relevant to the issue at hand.)

This is another one of those seemingly nitpicky things that actually makes a difference: try asking what you want, not what will make you happier. (Also, what you feel, not whether you're happy.) The problem with asking "am I happy?" is that it discards information that would be useful to you about what you do feel, in favor of a one-bit, yes-or-no answer. (At minimum, knowing the difference between the broad non-happy categories of sad, afraid, and mad would be good!)

Next, the problem with "what will make me happier?" is that it presupposes ("have you stopped beating your wife?"-style) that there is something that will "make" you happy, as though it's something you don't have any control over. Essentially, the question itself is continually re-priming the idea that you are not in control of your happiness! Keep that up, and pretty soon you'll be thinking things like: Oops. Too late. ;-)

Truth be told, the question is more a symptom than a cause; I'm not saying you feel limited or stuck because you asked the question, so much as that the question is both an expression and a reinforcement of the stuckness you already feel. To change your answers, change your questions! (And be aware of what those questions are priming, because the questions you habitually ask you
It feels like you're obsessed with the specific words I've used to express a line of introspection/deciding/planning, as if I'm going to verbally ask myself a question, and parts of me will react very superficially to the phrasing. I don't think I need to worry about it, because when I think about something in depth, I really think about it. If I'm really thinking, then it doesn't matter what words I use to describe the topic. However, I am in general willing to experiment with priming tricks, because it's true that I can't afford to think deeply all the time. I haven't found any such trick yet that I can definitely say works.

You quoted the phrase "I'm mostly limited ..." from my claim that social constraints and programming matter as much as brain architecture, but didn't respond to the substance. I'll assume this means that you agree. Do you have any advice on exploiting those factors? Or is what you've given here based only on evopsych brain-architecture guesses (a "hardware advantage" reachable by well-phrased self-questioning)?
Not quite - I'm also saying that people's choice of words is rarely random or superficial, and tends to reflect the deeper processes by which they are reasoning... and vice versa. (i.e., the choice of words tends to have non-random, non-superficial effects on the thinking process). Note that how a question is phrased makes a big difference to survey results, so if you think this somehow doesn't apply to you, then you are mistaken. It only feels like such things don't apply to ourselves, like the people in the "Mindless Eating" popcorn experiments who insist that the size of the popcorn container had nothing to do with how much they ate. They (and you) only think this because of the limited point of view from which the observation is made.

Of course - for the same reason that people don't think the size of the container makes any difference to how much they eat. It's easy to write off unconscious influences.

That being said, choice of questions makes a big difference to answers, but it's not solely a matter of priming. After all, if you use the words "What do I want?" and go on internally translating that in the same way as you asked, "What will make me happy?", then of course nothing will change! So, it's not merely the surface linguistics that matter, but the deep structure of how you ask yourself, and the kind of thinking you intend to apply. Based on the challenge you described, my guess is that the surface structure of your questions is in fact a reflection of how you're doing the questioning... because for most people, most of the time, it is.

The reason I quoted "I'm mostly limited" is because I wanted to highlight that the thought process you appeared to be using was one in which you already assume you're limited, before you even know what it is that you want! (It sounded to me as though you were implying that it doesn't matter if you know what you want, because you're not really going to get it anyway -- and that wasn't just from that one phrase; that
While it's true that I can't rule out things that I can't detect, I can't really believe in them, either. I understand where you're coming from. You've tried much harder than most people do to understand your own emotions and motivations, and you're pretty sure you've actually done so. I agree that there are many people who think they have, but haven't. Similarly, sometimes people think they're really trying, but aren't.

I'm impressed with how much you know about my thoughts :) I won't suggest that we're fundamentally different in any way, but I do sometimes wonder if there are significant architectural emotion/motivation differences in "normal" people, other than the obvious (male/female). The popcorn container example doesn't surprise me or change my views in any way - but cool.

I feel like I'm pretty flexible in what I want - that is, I can ask what it is I currently want, but I also ask what I maybe should want, because I've had some success simply provisionally choosing to care more or less about particular things. I sometimes find out that I couldn't actually maintain that level of (dis)interest, and I take this as evidence (not certainty; just some evidence) that such a (lack of) desire is a fixed part of my personality.

More examples please in the likes of [1]. I am bright enough to understand them, but not to come up with too many on my own.

Some examples off the top of my head:

1. A designer who has spent 12 years working in Photoshop but hasn't learned even basic hotkeys because doing everything with the mouse is "more convenient". The same person also always clicks the Open button in file-opening dialogs with the mouse cursor instead of just double-clicking the filenames.

2. A guy who often goes out of the building to get some junk food instead of checking out a new affordable cafeteria on the ground floor of the same building, which has been open for 3 months. (That was me, prior to today.)

3. A manager who has been working in software development companies for 15 years and still uses IE7 as his main web browser.

Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out. That is not at all the same as the ability to automatically implement these heuristics.


I was dubious of this also. 5% seems too low.

I disagree - I think that people usually do know how they could be more productive. This argument is really about people who TALK versus people who DO - the talkers know that optimally they should be "do"ing. But, being a sheep (talker) is BORING, and being a fox (do-er) is LONELY.

In the author's example, the comedian knows that watching re-runs is the easy way out. He'll be bored, but he'll learn a little bit and he can tell his friends he's working.

He also knows that, ideally, he'd be working comedy all the time instead. But he's already workin... (read more)

Hi psyklic, and welcome to Less Wrong! Be sure and introduce yourself in the welcome thread.

What do you do when the answer to (a) is "Nothing in particular"?

Keep introspecting. If you find yourself preferring to e.g. play a video game, rather than to lie in bed, there's a reason you prefer it. Micro-goals count too.

Introspection? I try to avoid that, and I think I have a pretty good reason. I don't like to do introspection because I don't like what I find. When I query my brain for what I ultimately want out of life, the answer that comes back is "I want to die." And it's not that I'm particularly unhappy at the moment; "death" seems to feel like a kind of freedom, freedom from all the annoying things that other people insist that I do (and I can't justify saying "no" to) and all the annoying things that I have to do to maintain this body, such as eat and go to the bathroom, freedom from, as Shakespeare put it, "the heart-ache and the thousand natural shocks / That flesh is heir to". The emotion I feel most strongly when I contemplate the state of being dead is not fear, not sadness, but relief - and that scares me. I don't think I ought to want to die. And if I did die, that would make many people who know me very sad, and I definitely don't want that. So I haven't killed myself yet; I'm waiting for my parents to die first. And until then, I just waste time doing nothing in particular. Sorry to be so morbid. :(
Umm... I want to point out the contradiction between your saying that introspection says "die" and the fact that you, having reflected on this, decided not to do "introspection" because doing so leads to the thought that dying would be good and you don't want to die. If you could change yourself such that doing "introspection" didn't lead to the thought of death, would you?

The fact that you haven't killed yourself suggests that you're not actually introspecting on your true values, just some unhappy subset thereof (or perhaps introspecting with your true values on an incomplete subset of the data you have about the quality of your life/the universe).

Also, if I promise to spend 5 minutes crying upon notice of your death, will you not kill yourself, in order to spare me the unpleasantness?
Probably. I currently have compelling reasons to refrain from killing myself, regardless of my general lack of personal interest in continued existence. Alas, like so many other things, the peace of the grave is denied to me. I don't expect you to get such notice. You're just some guy on the internet; if I simply stop posting, you'll probably never know why. But no, that wouldn't be enough to dissuade me from implementing Really Extreme Altruism if I ever decided to actually go through with it.
Often the main reason that I do anything seems to boil down to "sheer force of habit."
May I recommend an experiment, then? Try ignoring force of habit for a few days and see how you feel about all those activities. It may help you to come up with internal reasons to want to do things rather than relying on the external pressures of habit and expectations.

If, after a few days, it turns out that lying in bed doing nothing is actually preferable to escapism through computer games and surfing the Internet, I submit that it means your medication isn't doing everything that it should and that getting that fixed should be your first priority. In all other cases I would expect that it will turn out that you do have reasons to get out of bed that aren't dependent on habit.

For me, no matter how depressed I am, I always get out of bed at the very least, even if it's just so I can stare at the wall while I try to focus and motivate myself to do something enjoyable or productive. If I inspect my reasons for doing so, "habit" is definitely a large part of it. But a larger part is "boredom", as in, I can only contemplate my utter worthlessness for so long before my thoughts start feeling repetitive and boring, and I feel the need to distract myself by getting up and doing something that I find at least marginally engaging.
I've tried the whole "lying in bed doing nothing" thing. When I wake up, I'm usually groggy and can end up spending an hour or two in bed half-asleep. I'm usually not thinking about much of anything at all during this time, or at least I'm not thinking in words, so I'm not "contemplating my utter worthlessness". When trying to go to sleep, though, I tend to get frustrated if I don't fall asleep quickly, so I'll often turn on a portable game system (leaving the lights in the room off) and play until I basically can't stay awake any more. I strongly suspect that this is a bad idea, though, as it tends to shift my sleep schedule later and later. I also have a tendency to take naps during the "day" and then get back up. (I do this once or twice a week, I guess.)

Sometimes, I really do play video games because the playing of the game itself is fun. (Persona 3 Portable is the most recent game to have taken over my life.) Some games have both boring parts and more interesting parts, and I play through the boring parts so I can get to the more interesting parts. Once in a while I'm playing one so I can say I've finished it before I go on to another one; I'm a bit of a completionist and often get annoyed if I don't get Hundred Percent Completion. Or sometimes it's because I'm simply curious about what happens next even though the game itself isn't really all that good. (I'll occasionally see a movie I don't expect to be very good simply to satisfy my curiosity about it.) And I've found that carrying around a portable video game system (or a novel) is a great way to avert boredom when doing things like waiting in line. So "habit" and "convenience" aren't the only reasons I play lots of video games.

There is one specific thing that I've noticed about games, though: even a bad game gets a lot more interesting when I have some work to avoid. It's often exciting for me to have something that I should be doing but don't want to, and then not do it. (I noticed this phenomenon whe
Right, so it sounds like you do value engagement over doing nothing. That's certainly a good start. Basically, I think it should be possible for you to find some better (as in: likely to help you change your terminal value) goals that you actually want to do, without necessarily having to introspect about your desire to kill yourself. Of course, I could well be generalising from one example. Oh boy do I know that feeling. The corollary being that after I finally got the work done or sat the exam or whatever I suddenly realised that I'd wasted 20+ hours on some piece of dreck :)
Sounds more like a biochemical issue to me; that sort of laziness is likely to mean something's wrong that's not just psychological. Are you taking a multivitamin regularly?
No, but I do take antidepressants.

I predict with p=0.95 that you have at least one micronutrient deficiency which is greatly contributing to your depression, and that starting to take a multivitamin regularly would be enormously to your benefit. I predict with p=0.6 that you are specifically deficient in thiamine, and that a single dose of sulbutiamine (a molecule that crosses the blood-brain barrier and then breaks into two thiamine molecules) would cause a large and sudden reduction in your depression. I am basing this on my own experience with thiamine deficiency (caused by T1 diabetes), which produced in me a specific type of apathy which I recognize in your comments.

Unless you either lied about taking a multivitamin to your current doctor, or ignored their advice to take one, fire him or her and find a new one. Also, thoroughly research every drug you're currently taking. At a minimum, search for the name of each one on PubMed, skim the first few pages of titles and read some of the abstracts. Don't adjust anything without consulting a qualified doctor, but do make sure to have that consultation.

Following up on this may be the most important thing you ever do.

EDIT: One other thing - if you're on antidepressants, you should be getting blood work, of the "large checklist of tests" variety, done on a regular basis. Make sure your TSH has been tested at least once in the past two years (result will be interesting with p=0.1, but very interesting if it is).

I have been getting blood work; everything always comes out just fine. (Yes, thyroid hormone is one of the things that's been checked.) And none of the many doctors I've been dragged to have told me to take vitamins, although my psychiatrist has occasionally asked about my diet. There are multivitamins in my house, but I stopped taking them a long time ago because they're these really annoying, very large chewable tablets the size of quarters.

In terms of vitamin deficiency, I'm actually most suspicious of vitamin B12. Both my maternal grandmother and my mother have low levels and get B12 injections regularly. (My mom is currently 60.) I once asked my psychiatrist to have my B12 checked, but I don't think it actually has been.

Also, the basic effect of my antidepressants has been "Well, I am more cheerful now, but my life still sucks every bit as much as it did when I wasn't taking them." I'll quote a doctor's anecdote:

“I remember one patient who came in and said she needed to reduce her dosage,” he says. “I asked her if the antidepressants were working, and she said something I’ll never forget. ‘Yes, they’re working great,’ she told me. ‘I feel so much better. But I’m st

... (read more)
Hello CronoDAS,

Your story sounds somewhat similar to mine (but I'm considerably older than you). My mother had Multiple Sclerosis, too; I was her main caretaker until her death. It's strange that it didn't dawn on me how much my upbringing and my mother's illness have shaped my father's and my life - and furthermore I didn't really understand until recently how unusually withdrawn my life has been so far. Social isolation is a well-known danger when you're severely ill, but I was (at least on a physical level) healthy, and still I wasn't able to break out of the habits that I (to a certain degree) adopted because of my former circumstances and a general inclination towards shyness.

I have a very unoriginal proposition for you: act as soon as possible and change your situation! Believe me, things don't get easier once you're ten years older than you are now. What about a "trial move"? The way you describe your parents, I think you could always return if for one reason or another you can't cope with being "on your own".

I'm "in the process" (as vague as that may sound) of finally getting my act together and making some serious, so-long-overdue-you-won't-believe-it life changes. I know some of the depressive symptoms you're describing: a general world-weariness, an enmity towards my own body, avoidance of "boring" errands up to a point where it got seriously damaging, seeing no sense in dragging this carcass of mine through a pointless world, etc. But somehow things are beginning to click for me a bit more. If it's "meant to be" that I'm going down, then at least I'm putting up a fight (i.e. trying to beat some amount of rationality into my skull, which is thick with irrational beliefs and blocks)!

Take care!
The most immediate change I probably need to make is "get an income". It's a prerequisite for most other changes I'd want to make. (My mom's MS is unusual, because she started showing symptoms late in life, only a few years ago.)
Then your mom is lucky in more than one regard! Because of medical progress, it is very different to be diagnosed with MS today than it was in 1973, when my mother had her first MS episode at the age of 27.

You wrote earlier that a lot of what you don't like about your life is simply due to habits. Personally, I find the key to change is to persistently chip away at my mountain of bad habits (my main nemesis is procrastination) and to think more from day to day - to try to implement some (any!) positive difference in my life on a daily basis, even if it's only to show a friendly face when I'm not really feeling like it, to do that one more household chore I try to avoid, or to confront another uncomfortable truth about myself and verbalize it to (well-chosen!) friends and acquaintances. I know these strategies are so basic they almost don't qualify for Self-Help 101, but once you "really want to change" I've found they work quite well.
Actually, what I said was that a lot of the activities I do (video games, blog commenting) are generally done because they're what I've gotten used to spending time doing, not that the habits themselves are necessarily causing the problem.
I apologize in advance for the long-shot other-optimizing but, well, here goes. Something that has repeatedly worked for me to move from a lethargic, somewhat depressed state to an active and happy (if restless) state is to deliberately refrain from sexual release while not refraining from exposure to sexual stimuli. I came upon this independently but I've since found the same basic idea in Taoist literature and in femdom literature. It could also easily be pitched as an evo-psych idea.
Uh, what's the mechanism there?
I'm not aware of any research on this exact question so what literature there is is mostly religious or pseudo-scientific. What I do think is fairly well-established is that lack of sexual release makes men restless. Why 'restless' in my case translates to "active and happy" rather than, say, "aggressive and abusive" I don't exactly know. Some factors that may be relevant (but I had not thought of before now): a) My baseline personality is quite docile and submissive, b) Like many people here, I enjoy toying with self-hacking, c) I have lots of projects to pour extra energy into, projects that are satisfying intellectually and status-wise.
Presumably sublimation. At least, Freud's sublimation reminds me a heck of a lot of the Tantric Buddhism and Taoist ideas of collecting ch'i from sexual activities (or lack thereof) and using it for other purposes.
Do you have a skill that you are willing to offer potential roommates? I'm currently exchanging my culinary expertise for room and board. It's a good deal. I can get away with having a really minuscule income to cover discretionary expenses and mostly I do whatever I want all day until it's time to mix up a batch of muffin batter.
I suggest trying Emergen-C or your local generic version. It's mostly marketed for the vitamin C megadose, but 416% of the recommended minimum of B12 isn't insignificant. The generic I use has an odd taste when prepared according to the directions, but is good when mixed with a sweet drink like Kool-Aid.
B-12 deficiency is usually caused by problems with absorption, not by a lack of B12 in the diet.
Yes, but sometimes (often?) it can be cured by increasing dietary sources. Acute doses might not be ideal, though.
Have you ever tried cognitive therapy? If antidepressants made you more cheerful but haven't otherwise changed your outlook then maybe some systematic effort at altering your thought patterns would? Maybe combined with antidepressants if they make you more likely to complete homework assignments (I think cognitive therapy involves those).
But you seem to be quite smart. Sigh. I guess you know that you will be happier with a decent-paying and/or intellectually engaging job (even one you "can't stand"), because you'll then have a realistic chance for some of the things you want, so if taking antidepressants lets you tolerate finding and performing a job, then it makes sense to keep on using them. Without knowing you well enough, I'll still guess that it's unlikely that you "don't think you can" based on your actual ability and opportunity, but more because of the helplessness of depression (naturally I could be completely wrong).
Well... I've had some pretty bad experiences with employment. The last time I was employed, I sat in a cubicle and surfed the Internet all day while feeling guilty about not getting anything done. It was really awful. I once signed up with a temp agency. My first assignment lasted a week. After it was done, the customer complained about me (please don't ask why) and I was fired from the temp agency. Another time, I worked as a cashier at a supermarket, and I lasted all of three days before being fired for insubordination.

Money's never been a very big motivator for me. I've got over twenty thousand dollars sitting in the bank, so if I want to spend $50 on a video game, or $300 on a video game system, I can. And I have enough unplayed video games sitting on my shelf to last me a long, long time. What would I do with more money? Well, I did decide within the last 24 hours that I definitely can't cope with being my mom's caretaker any more, so I'd want to move out of my parents' house, and I'd want to get a cat, and I once calculated that it would cost me a few thousand dollars a year to play Magic: the Gathering competitively, but that's about it.

The usual "carrot-and-stick" approach to motivation doesn't work too well on me; I just give up on getting the carrots and resign myself to enduring the sticks. Is that what they call "learned helplessness"? I've had people trying to drum the lesson "you're going to have to do what you're told, regardless of what you want to do, and fighting will only make things worse" into me my whole life, and it seems like they were mostly right: as a child, you're pretty powerless to get what you want, if what you want is "not to go to school".
On the plus side, I think I could probably teach or tutor math without going crazy.
Most people find teaching (well) to be difficult. If you're good at it, then that's quite valuable.
What sense of valuable are you using here? I've seen very little evidence in my interactions with the education system that being good at teaching is highly valued either in terms of direct financial rewards or career prospects.
Effective tutoring would be very valuable to rich parents. Perhaps passively building your reputation wouldn't work; self-promotion would be necessary. Public school teachers are well compensated overall over an entire career (including pension), although I doubt the job is very fun, and you're right that the rewards are in no way contingent on actually teaching well.
Are rich parents able to distinguish effective tutors? In my experience they largely hire based on elite education. Plus, most of their "tutoring" time is really guarding the child to make sure the child actually does homework. But there are also non-rich parents. I don't think that DAS should have any trouble getting hired and keeping tutoring positions for $20 or maybe $50 hourly, if he can find parents who want a tutor. This is a very different skill and I think the main determinant of people actually tutoring. (ETA: I seem to have missed JG's second sentence. Sorry.)
I poked around a little earlier today, and found a few sites that do paid online tutoring. This one was the most open about hiring new tutors of the ones I looked at. Their FAQ says that their most active Chemistry tutors earn $800-$1600/month. Even given that that's an upper bound, it may be worth looking into. (I lived pretty comfortably on $1200/month last year, with about Crono's expectation of lifestyle, and without having someone to share bills with.)
If you are really capable of playing Magic competitively if only you had the cards, etc., I would be glad to start you up, and you can pay me back whenever. But I would need to know that e.g. you are up-to-date on what decks/strategies work, tournament formalities (so you don't lose because of using the wrong "done with turn" indicator or tapping rotation angle), etc. (I made this offer over a year ago, but was strongly criticized for having the proviso that Crono put his karma at stake to indicate seriousness and as a motivator.)
I'm not yet capable of playing professionally. I might be able to reach that level, but I'm not there yet. And by "playing professionally" I don't mean "play well enough to make a living at it." There are very few people in the world who have ever made enough money from Magic tournaments to live on, although the number of people who at least manage to make back their expenses is much larger. (The "several thousand dollars a year" figure is an upper bound and doesn't take into account potential winnings.)

I actually do have a plan to get better, though; if I can put up a good showing in a few tournaments, Zvi Mowshowitz will let me join his Magic-playing social circle. (I think.) The current plan is to wait for the next Pro Tour Qualifier season to start - it's Sealed Deck with the soon-to-be-released Scars of Mirrodin set - and just attend as many as I can get to, while also getting in plenty of practice by playing on Magic Online.
I once knew a gamer, indeed an MtG player, who made a decent (though certainly not extravagant) living out of playing online poker. Smart guy. I never observed his poker skills first hand but he certainly kicked the shit out of me in MtG. I don't know how difficult it is to use poker as an income source but you probably have the basic skill set (math/rationality/gaming) required for good poker playing.
Right now, I'm pretty bad at poker, and I never found it to be all that fun when I played it with my brother and his friends.
Just curious, do you name Zvi Mowshowitz because he comments here? :)
Sort of. I see him at the NYC LessWrong meetups.
Erm... Unless Crono's disregarding his potential winnings, your question about whether he thinks he'd be able to earn money that way seems to have been answered.
Yes, but from earlier discussions he had suggested he'd be able to play professionally, so that's what I interpreted him to mean here, and the cost is gross rather than net, so he'd only need the first year's expenses to be self-sustaining. So I was indeed sneaking in assumptions from earlier exchanges.
If that is gross, sure. I did mention that he might be disregarding potential winnings. It seems odd to me that he'd word it that way in that case, though.
Learned helplessness applies more to specific stimuli and specific rewards; what you're describing sounds more like general lack of energy. My advice is to tweak your biochemistry until you feel more energetic, and try the cubicle environment again.
Interesting. For sure you will need to save more money than that in the long run (when you are older and really not able to do much work). It sounds good that you've decided that you need to move out, provided you actually do so.
I'm with wnoise, but I have a question to clarify my position. How many diagnoses do you expect a competent physician to get wrong? I would say that more than 1 in 20 is at least reasonable. However, without meeting CronoDAS, or performing tests of any kind, based purely on the scant evidence in his posts, you have diagnosed him with a micronutrient deficiency, and have a confidence of 95% in your diagnosis. Seriously? What's your prior? Even for thiamine, a 60% confidence that this near-stranger is deficient in it seems dramatically too high.

How many diagnoses do you expect a competent physician to get wrong?

I expect physicians to be bewildered rather a lot. I spent years severely anemic. My father is an MD, my uncle is an MD, I saw a variety of doctors during this time, I was eating cups and cups and cups of ice every single day and was unremittingly tired and ghostly pale, partway through I became a vegetarian - and it took the Red Cross's little machine that goes beep to figure out that maybe I wasn't getting enough iron. I have a vast host of symptoms less serious than that which no doctor, med student, or random interlocutor has been able to offer plausible guesses about.

I expect bewildered people to make things up.

Agreed. Even if they don't make things up, the responsible thing to do is to iterate through harmless or nearly-harmless treatments for conditions that the physician thinks are unlikely, but more likely than any other ideas he or she has. This is exactly the opposite problem; not being at all bewildered or in doubt, despite a paucity of evidence. Doctors do that too. Both making things up and jumping to conclusions happen because doctors are humans and are wired to see patterns, whether or not they exist. While we're busy refining the art of human rationality, we ought to try to curb that behavior.
These numbers are uncalibrated estimates (I spent 60s looking for population statistics to use as priors, and didn't find any), but I don't think they're at all unreasonable. Keep in mind that deficiencies come in degrees, and only the most severe ones ever get diagnosed. Anyways, here's a breakdown (again, just estimates) of that 0.95:

P(micronutrient deficiency) = 0.2
P(micronutrient deficiency | no multivitamin) = 0.8
P(micronutrient deficiency | no multivitamin & depressed) = 0.95

I certainly wouldn't say it's the only problem, but it's very likely a contributing factor. Anyways, we can find this out directly. CronoDAS, could you take a look at the wikipedia page on thiamine, go through the lists of thiamine-containing and thiaminase-containing foods, and estimate your intake? Or better yet, order sulbutiamine and report its effects here?
P(micronutrient deficiency) = 0.2

I would go as high as 0.3 if you extend to third-world countries, but suspect it's lower among people like CronoDAS who can afford a variety of food. Either way, it's good enough.

P(micronutrient deficiency|no multivitamin) = 0.8

The law of conditional probability indicates that you think that a minimum of 75% of the population takes a multivitamin. I think this is way too high, especially for a population that has a 20% micronutrient deficiency rate.

P(micronutrient deficiency|no multivitamin & depressed) = 0.95

So the rate of depression among those with micronutrient deficiencies (and who don't take their vitamins) is about 119% that of the general population? I can buy that, but if it's that low, then why are you so sure that a micronutrient deficiency is "greatly contributing" to his depression?
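The two checks above (the "at least 75% take a multivitamin" implication and the "about 119%" relative rate) can be reproduced mechanically. This is a minimal sketch using the parent comment's three stated estimates; the variable names are mine, not from the thread:

```python
# Sanity-check the conditional-probability arithmetic, using the parent
# comment's three estimates.

p_def = 0.20             # P(micronutrient deficiency)
p_def_no_mv = 0.80       # P(deficiency | no multivitamin)
p_def_no_mv_dep = 0.95   # P(deficiency | no multivitamin & depressed)

# Law of total probability:
#   p_def = p_def_no_mv * P(no mv) + P(def | mv) * P(mv)
# Since P(def | mv) >= 0, it follows that P(no mv) <= p_def / p_def_no_mv.
p_no_mv_max = p_def / p_def_no_mv
print(f"P(no multivitamin) <= {p_no_mv_max:.2f}, "
      f"so P(multivitamin) >= {1 - p_no_mv_max:.2f}")

# Bayes' theorem, conditioning everything on "no multivitamin":
#   P(depressed | deficient) / P(depressed) = P(deficient | depressed) / P(deficient)
risk_ratio = p_def_no_mv_dep / p_def_no_mv
print(f"Relative depression rate among the deficient: {risk_ratio:.0%}")
```

Both objections check out numerically: the first estimate pair forces P(multivitamin) ≥ 0.75, and the second pair implies a relative depression rate of 0.95/0.80 ≈ 119%.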

I agree that there's no harm in having CronoDAS gather data or experiment a little, since sulbutiamine seems to have very few negative side effects with recommended doses.

My main reason for bringing it up is that I see some very high probabilities tossed about on Less Wrong, and it bothers me when I feel like they're assigning numbers that ... (read more)

From Wikipedia: Hmmm... as it turns out, I've been eating quite a lot of thiamine-fortified pasta lately, and it's also in cold cereal, orange juice, and bread. I don't think I have an unusually low amount of thiamine in my diet when compared to the average American.
What Robin said. Good for making easily testable predictions. But it really sounds like you're generalizing from one example here.
Upvoted for suggesting an easily-tested claim of material relevance. That alone makes the advice worth trying as an aid to the calibration and education of others.

How has your strategy (a-h) changed since you wrote this? Are there resources you can share for learning to be more strategic? A method for finding quality resources? Methods for practicing and assessing strategic skill?

Thanks for writing this. It has enabled me to articulate the rationales behind a lot of the "crazy" thoughts I have. For example:

  • People are horrible at choosing careers. They hardly explore their options at all, and thus limit themselves greatly.

  • People are bad at choosing who their girl/boyfriends are. They make decisions impulsively based on romantic love when they should really be considering the expected value of true attachment. A lot of times it seems that certain relationships "work", but are clearly suboptimal. Also,

... (read more)
Both choosing a career and choosing a mate seem to suffer from this weird expectation of finding the one that fits me perfectly. This sort of thinking has always been very alien to me, and to this day I don't understand what causes it. I suppose media has something to do with it.
Adam Zerner (10y):
I think that career/mate are both huge decisions. Both will be a huge part of your life for ~50 years!!! If you could improve your career/mate even a little bit, the impact is multiplied by this large duration of time... thus making career/mate decisions important, and worthy of a lot of thought. Still, I don't think it's worthy of so much thought that you should be looking for a perfect fit. The chances of a perfect fit happening are small enough to outweigh even the huge reward. Also, I would say that people look for a mate/career that they think fits them perfectly, not one that actually does. And what they think is just this romantic and general idea that is based on generalized maps that are many levels above the territory. As for how they develop these maps, I don't have much of an idea.
It seems to me a special case of a broader habit of inferring individual agents where the reality is more distributed statistical patterns, which I expect pre-dates media in the modern sense (though I suppose we could say modern media has something to do with it in the sense of reinforcing a pre-existing tendency, if we wanted).

Very interesting and revealing post. (I'm new)
I recently picked up a habit that makes me more strategic and goal-achieving that might be useful to share. 
I have instated a rule to start the day by making a list of things that I could do, ranking them by importance and how much effort they cost; the rule is then to do the most important / greatest-effort or most unpleasant one first. Then, when I have done it, I have moved toward my goal and feel better about myself. Before doing this, I would choose what to do based on what I WANT to do fi... (read more)

What a thought-provoking article. Thank you so much for writing this. I am especially interested in the question "why do people spend their Saturdays 'enjoying themselves' without bothering to track which of their habitual leisure activities are actually enjoyable?" When I was younger I spent a large amount of my summer vacation and weekends playing Call of Duty Modern Warfare online. The bizarre thing was that I would always stop infuriated. It did not make me happy. In fact, there are few things more infuriating than what you hear while playing an ... (read more)

This was a magnificent post, Anna. I'd like to write a longer reply and more analysis later, but for the moment I wanted to say this was really fantastic and amazing, and there's wisdom and insight packed very densely here. Thank you for writing this up, it's inspiring and insightful.


Why do even unusually numerate people fear illness, car accidents, and bogeymen, and take safety measures, but not bother to look up statistics on the relative risks?

Disease, motor vehicles, and humans are very dangerous. Currently, everyone dies eventually(1), and almost everyone who dies is killed by one of these three things. The CDC has charts about this. See 10 Leading Causes of Death by Age Group, United States – 2007. Boxes that aren't one of the Big Three Of Doom are extremely rare. This chart breaks down the unintentional injuries. As you can s... (read more)

I agree plane crash concerns are generally more irrational. But I mean... take me, for example. I know plane crashes and sharks are mostly negligible while car accidents and humans present larger risks; that much information reached me by accident. But, even though I regularly go out of my way to "reduce my risk from car accidents", I haven't ever bothered to look up info on e.g. which lane is safest to drive in, or how accident rates scale with sleep deprivation, or which freeways near my home present the largest risk. I'm motivated to do activities I associate with driving safety, but not to systematically estimate and reduce the risks. If a book was published on how to actually reduce my risk, I might read it, but more because it fits my identity as an aspiring rationalist and an aspiring goal-oriented person than to, you know, actually reduce my risk of death. Which is the point.

wrt sleep deprivation, according to a DOT driver's manual, driving without having slept in 18 hours is equivalent in risk to driving with a .08 blood alcohol level. Driving without having slept in 24 hours corresponds to a .10 blood alcohol level.
I choose a lifestyle that lets me limit my driving severely. That might be easier in Europe than in the US. If you must drive, then taking a safe-driving course can help a bit; it trains some reflexes for emergency situations. Also, avoid driving at the specific times when most accidents happen, which here is Friday and Saturday night, when the drunk drive home after the disco, and the first few days of icy weather each year. One should also have an up-to-date car with airbags. Safety is for the most part a game of statistics, but it is really easy to reduce your risk below the average. And then you will never find out what kind of troubles you managed to avoid.
This is a good approach. It's not the primary reason I choose a lifestyle that minimizes car usage, but it is definitely an additional benefit of arranging for a largely car-free existence, and one I am conscious of.
It also helps financially, and I picked my current room so I could walk to work in 10-20 minutes. I would hate to have to commute each day. But those preferences might change with different living situations. The general idea is just: if it is dangerous, do it less and learn how to do it well.
Yes, a primary reason for aiming for a lifestyle where I have a reasonably short (30 minute) walk into work is my dislike of commuting by car. I figured out early on that it made me miserable (and wasted a lot of time), but I've subsequently seen a fair bit of evidence that the common trade-off of a longer commute for a larger house is a poor one for most people. I don't always follow this rule. Some activities I enjoy are relatively dangerous (snowboarding for example), so I just aim to do them as safely as possible, but I don't necessarily try to do less of dangerous activities if I enjoy them. It's a win-win to do less of dangerous activities I don't particularly enjoy, however.
Why? Car accident death rate is 1/10000 per year for your age/gender and probably substantially less for you personally under ordinary circumstances; do the present-value-of-time math.
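The suggested present-value-of-time math can be sketched roughly as follows. The risk-reduction fraction and remaining lifespan here are illustrative assumptions, not sourced statistics; only the 1/10,000 annual death rate comes from the comment above.

```python
# Rough expected-value check on time spent on driving-safety habits.
# Inputs below the first line are illustrative assumptions.
annual_death_risk = 1e-4   # ~1/10,000 per year (figure from the comment)
risk_reduction = 0.5       # assume, optimistically, the habit halves that risk
remaining_life_years = 50  # assumed remaining lifespan

expected_years_saved = annual_death_risk * risk_reduction * remaining_life_years
expected_hours_saved = expected_years_saved * 365 * 24
print(f"expected life saved per year of the habit: {expected_hours_saved:.0f} hours")  # 22
```

On these assumptions the habit buys roughly 22 expected hours of life per year, so if it costs more than that in time and attention, it fails the present-value test on life expectancy alone.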
I always assumed that "Gravity" was replacing the "Taxes" part of "nothing is inevitable except Death and Taxes", because the Culture had clearly dispensed with taxes.
Possible, but the Culture is also no longer subject to mandatory gravity (drones float), or mandatory biological death.

In my case, I don't run into "not being able to make myself pursue my goals effectively" a whole lot. What I do run into a lot is, "not being able to figure out what goals I actually want to pursue."

I think part of what's going on is this. When I find resistance within myself to pursuing some goal (which I read into the comedian watching reruns), I take that as evidence that this goal isn't what I'm really after. I don't spend a lot of time in a state of trying to make myself do something, because of my assumption that whatever I r... (read more)

Link text: there "Humans are not automatically strategic ("

Even Pythagoras believed in the laity - laypeople.

Laypeople do not have goals and generally only engage their reason after action has been taken, in order to justify why they did what they weren't thinking about.

I don't see this as a problem. I think it's just the way it is and getting all people to be actors (instead of being acted upon) is a fool's errand (probably).

I really enjoyed your writing :)

Thanks so much for writing this great article! I'm new, so for all of you this is old hat. I want to add my 2ct anyways.

Do you agree with (a)-(h) above?  Do you have some good heuristics to add?  Do you have some good ideas for how to train yourself in such heuristics?

The above-mentioned steps are the best system for progressing in life in general that I've been able to find so far. I've read and applied lots of self-help in recent years and I can definitely agree that applying the theory is incredibly hard (and I fail at that like >... (read more)

Why do many who type for hours a day remain two-finger typists, without bothering with a typing tutor program?

Because science shows that a two-finger typist can reach speeds comparable to a ten-finger typist. I'm guilty of being a two-finger typist. But I'm also guilty of having learned the ten-finger way, practicing it for days on end and then just dropping it when I realized that "this learning curve is way too steep for a realistic 5% speed improvement".

Besides I figured "why the heck do I need to write fast anyways? 9... (read more)


I automatically do points (a) through (h).

I have always automatically done points (a) through (h).

I always attributed this to the fact that I had no identity with which to value particular opinions. As impossible as I already know it is for anyone to accept, you have to let go of the idea that your opinions are even remotely correct. Not because your opinions are incorrect, but because you will not be able to effectively correct them until you accept that they could all be just outright blatantly wrong. But if I say it that way, you'll try to retai... (read more)

I've found that the most helpful thing for me in achieving my goals seems to be picking the right goals to begin with. I try to find goals that I really care about with a large portion of my being, rather than goals that only a small portion of my being cares about. This requires a fair amount of introspection. What do I want? It's not an easy question; counterintuitively, we don't know what we want. But, if I know what I want, then I can get it.

I'll give a couple examples. I used to have the conscious goal, "write music." My real goals, though I... (read more)

[This comment is no longer endorsed by its author]

In common with all animal species, our sensory perceptual interpretation and behavioural action is also recognisable in basic physiological structure of (a) the peripheral nervous system, in our case the eyes, ears etc., and (b) parts of the central nervous systems, frontal lobes, the visual cortex, hypothalamus, amygdala, etc. that are within the brain. These are significant and extensive hardwired components. Using these structures, we can detect, recognise and evaluate a huge number of sensory patterns. For each of us these patterns are given emotion... (read more)

I think the term "abstract reasoning" is being conflated with acting on good or bad information (among other things). E.g., in most cases, one basically has to take it on faith that ice cream is good or bad. And since most people aren't in a position to rationally make a confident choice re: the examples the author provides or comparable ones that could be imagined, agnosticism would seem the only rational alternative.*

More generally, I think a lot of these problems stem from radically defective education (if people aren't merely mostly morons as ... (read more)

We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.

Well put. I've realized that really planning (and acting) in order to reach hard goals is something I almost never do. Most of the time I'm just working on what feels most rewarding locally.

humans are on

... (read more)

The calculus example is a good one for examining goal-achievement.

I am currently taking Calculus 2; integration by trigonometric substitution is one of the methods.

The textbook I am using is very Implicit in its examples explaining this method, and I have thought many times about how much easier it would be if it used more Explicit examples.

Implicit examples by nature take more time and effort than explicit examples, making the implicit less likely to be chosen than the explicit.

It would have to be one very highly motivated 8-year-old to pass the calcul... (read more)

The examples on might help.
Thanks, nice link. I must say though that my example is mainly to illustrate the point that Implicit learning (breaking the code) is harder than explicit learning (being given a key). I prefer breaking the code most times. I guess the double entendre about Carlin was a bit too implicit... maybe just not funny... :)

Doing things the wrong way is a good way of discovering new ways and ideas. If we were programmed to always go in the right direction, we couldn't explore the landscape and would be trapped in a local minimum. Random behaviour is part of an intelligent design to evolve and mature. Humour is a way of jumping across islands of rationality.

Firstly, welcome to Less Wrong! Be sure and introduce yourself on the welcome thread. You raise a valid point, but the benefit you mention doesn't explain doing the wrong thing again and again, after enough evidence has accumulated; and it also doesn't explain that we do lots of things wrong in the exact same ways.
You seem to be confusing the right thing with the obvious thing?
The Futility of Chaos is the sequence that responds to this sort of claim. (That sequence depends on Mysterious Answers to Mysterious Questions, if you haven't read it yet.)