Goals for which Less Wrong does (and doesn't) help

by AnnaSalamon · 3 min read · 18th Nov 2010 · 104 comments


Pitfalls of Rationality · Community · Rationality

Related to: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality

We’ve had a lot of good criticism of Less Wrong lately (including Patri’s post above, which contains a number of useful points). But to prevent those posts from confusing newcomers, this may be a good time to review what Less Wrong is useful for.

In particular: I had a conversation last Sunday with a fellow, I’ll call him Jim, who was trying to choose a career that would let him “help shape the singularity (or simply the future of humanity) in a positive way”.  He was trying to sort out what was efficient, and he aimed to be careful to have goals and not roles.  

So far, excellent news, right?  A thoughtful, capable person is trying to sort out how, exactly, to have the best impact on humanity’s future.  Whatever your views on the existential risks landscape, it’s clear humanity could use more people like that.

The part that concerned me was that Jim had put a site-blocker on LW (as well as all of his blogs) after reading Patri’s post, which, he said, had “hit him like a load of bricks”.  Jim wanted to get his act together and really help the world, not diddle around reading shiny-fun blog comments.  But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be.  And they were the sort of errors LW could have helped with.  And there was no obvious force in his off-line, focused, productive life of a sort that could similarly help.

So, in case it’s useful to others, a review of what LW is useful for.

When you do (and don’t) need epistemic rationality

For some tasks, the world provides rich, inexpensive empirical feedback.  In these tasks you hardly need reasoning.  Just try the task many ways, steal from the best role-models you can find, and take care to notice what is and isn’t giving you results.

Thus, if you want to learn to sculpt, reading Less Wrong is a bad way to go about it.  Better to find some clay and a hands-on sculpting course.  The situation is similar for small talk, cooking, selling, programming, and many other useful skills.

Unfortunately, most of us also have goals for which we can obtain no such ready success/failure data. For example, if you want to know whether cryonics is a good buy, you can’t just try buying it and not-buying it and see which works better.  If you miss your first bet, you’re out for good.

There is similarly no easy way to use the “try it and see” method to sort out what ethics and meta-ethics to endorse, what long-term human outcomes are likely, how you can have a positive impact on the distant poor, or which retirement investments *really will* be safe bets for the next forty years.  For these goals we are forced to use reasoning, as failure-prone as human reasoning is.  If the issue is tricky enough, we’re forced to additionally develop our skill at reasoning -- to develop “epistemic rationality”.

The traditional alternative is to deem subjects on which one cannot gather empirical data "unscientific" -- subjects on which respectable people should not speak -- or else to focus one's discussion on the most similar-seeming subject for which it *is* easy to gather empirical data (and so to, for example, rate charities as "good" when they have a low percentage of overhead, instead of a high impact). Insofar as we are stuck caring about such goals and betting our actions on various routes for their achievement, this is not much help.[2]

How to develop epistemic rationality

If you want to develop epistemic rationality, it helps to spend time with the best epistemic rationalists you can find.  For many, although not all, this will mean Less Wrong.  Read the sequences.  Read the top current conversations.  Put your own thinking out there (in the discussion section, for starters) so that others can help you find mistakes in your thinking, and so that you can get used to holding your own thinking to high standards.  Find or build an in-person community of aspiring rationalists if you can.

Is it useful to try to read every single comment?  Probably not, on the margin; better to read textbooks or to do rationality exercises yourself.  But reading the Sequences helped many of us quite a bit; and epistemic rationality is the sort of thing for which sitting around reading (even reading things that are shiny-fun) can actually help.


[1]  To be specific: Jim was considering personally "raising awareness" about the virtues of the free market, in the hopes that this would (indirectly) boost economic growth in the third world, which would enable more people to be educated, which would enable more people to help aim for a positive human future and an eventual positive singularity.

There are several difficulties with this plan.  For one thing, it's complicated; in order to work, his awareness raising would need to indeed boost free market enthusiasm AND US citizens' free market enthusiasm would need to indeed increase the use of free markets in the third world AND this result would need to indeed boost welfare and education in those countries AND a world in which more people could think about humanity's future would need to indeed result in a better future. Conjunctions are unlikely, and this route didn't sound like the most direct path to Jim's stated goal.

For another thing, there are good general arguments suggesting that it is often better to donate than to work directly in a given field, and that, given the many orders of magnitude differences in efficacy between different sorts of philanthropy, it's worth doing considerable research into how best to give.  (Although to be fair, Jim's emailing me was such research, and he may well have appreciated that point.) 

The biggest reason it seemed Jim would benefit from LW was just manner; Jim seemed smart and well-meaning, but more verbally jumbled, and less good at factoring complex questions into distinct, analyzable pieces, than I would expect if he spent longer around LW.

[2] The traditional rationalist reply would be that if human reasoning is completely and permanently hopeless when divorced from the simple empirical tests of Popperian science, then avoiding such "unscientific" subjects is all we can do.