One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flaws in reasoning.  This doesn't strictly follow.  You could end up, say, rejecting your religion, just because you spotted more or deeper flaws in the reasoning, not because you were, by your nature, more annoyed at a flaw of fixed size.  But realistically speaking, a lot of us probably have our level of "annoyance at all these flaws we're spotting" set a bit higher than average.

    That's why it's so important for us to tolerate others' tolerance if we want to get anything done together.

    For me, the poster case of tolerance I need to tolerate is Ben Goertzel, who among other things runs an annual AI conference, and who has something nice to say about everyone.  Ben even complimented the ideas of M*nt*f*x, the most legendary of all AI crackpots.  (M*nt*f*x apparently started adding a link to Ben's compliment in his email signatures, presumably because it was the only compliment he'd ever gotten from a bona fide AI academic.)  (Please do not pronounce his True Name correctly or he will be summoned here.)

    But I've come to understand that this is one of Ben's strengths—that he's nice to lots of people that others might ignore, including, say, me—and every now and then this pays off for him.

    And if I subtract points off Ben's reputation for finding something nice to say about people and projects that I think are hopeless—even M*nt*f*x—then what I'm doing is insisting that Ben dislike everyone I dislike before I can work with him.

    Is that a realistic standard?  Especially if different people are annoyed in different amounts by different things?

    But it's hard to remember that when Ben is being nice to so many idiots.

    Cooperation is unstable, in both game theory and evolutionary biology, without some kind of punishment for defection.  So it's one thing to subtract points off someone's reputation for mistakes they make themselves, directly.  But if you also look askance at someone for refusing to castigate a person or idea, then that is punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.

    The danger of punishing nonpunishers is something I remind myself of, say, every time Robin Hanson points out a flaw in some academic trope and yet modestly confesses he could be wrong (and he's not wrong).  Or every time I see Michael Vassar still considering the potential of someone who I wrote off as hopeless within 30 seconds of being introduced to them.  I have to remind myself, "Tolerate tolerance!  Don't demand that your allies be equally extreme in their negative judgments of everything you dislike!"

    By my nature, I do get annoyed when someone else seems to be giving too much credit.  I don't know if everyone's like that, but I suspect that at least some of my fellow aspiring rationalists are.  I wouldn't be surprised to find it a human universal; it does have an obvious evolutionary rationale—one which would make it a very unpleasant and dangerous adaptation.

    I am not generally a fan of "tolerance".  I certainly don't believe in being "intolerant of intolerance", as some inconsistently hold.  But I shall go on trying to tolerate people who are more tolerant than I am, and judge them only for their own un-borrowed mistakes.

    Oh, and it goes without saying that if the people of Group X are staring at you demandingly, waiting for you to hate the right enemies with the right intensity, and ready to castigate you if you fail to castigate loudly enough, you may be hanging around the wrong group.

    Just don't demand that everyone you work with be equally intolerant of behavior like that.  Forgive your friends if some of them suggest that maybe Group X wasn't so awful after all...

    87 comments

    I'm going to make a controversial suggestion: one useful target of tolerance might be religion.

    I think we pretty much all understand that the supernatural is an open and shut case. Because of this, religion is a useful example of people getting things screamingly, disastrously wrong. And so we tend to use that as a pointer to more subtle ways of being wrong, which we can learn to avoid. This is good.

    However, when we speak too frequently, and with too much naked disdain, of religion, these habits begin to have unintended negative effects.

    It would be useful to have resources on general rationality to which to point our theist friends, in order to raise their overall level of sanity to the point where religion can fall away on its own. This is not going to work if these resources are blasting religion right from the get-go. Our friends are going to feel attacked, quickly close their browsers, and probably not be too well-disposed towards us the next time we speak (this may not be an entirely hypothetical example).

    I'm not talking about respect. That would be far too much to ask. If we were to speak of religion as though it could genuinely be true, we would be spectacular liars. Still, not bringing up the topic when it's not necessary, using another example if there happens to be one available, would, I think, significantly increase the potential audience for our writing.

    The problem with tolerating religion is that, as Dawkins pointed out, it has received too much tolerance already. One reason religion is so widespread and obnoxious is that it has been so off limits to criticism for so long.

    A good solution to this is to have some diversity of rhetoric. Some people can be blunt, others openly contemptuous, and others more friendly and overtly tolerant. There's room enough for all of these.

    The less tolerant people destroy the special immunity to criticism that religion has long enjoyed, and get to be seen as the "extremists". Meanwhile they make the sweetness-and-light folks look more moderate by comparison, which is a useful thing. A lot of people reflexively reject extremism, which they define as simply the most extreme views that they're hearing expressed on a contentious issue. Make the extremists more extreme, and more moderate versions of their viewpoint become more socially acceptable.

    Someone has to play the villains in this story.

    I'm very much in favor of what you wrote there. I've been thinking to start a separate thread about this some time. Though feel free to beat me to it, I won't be ready to do so very soon anyway. But here's a stab at what I'm thinking.

    This is from the welcome thread:

    A note for theists: you will find LW overtly atheist. We are happy to have you participating, but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false.

    This is fair. I could, in principle, sit down and discuss rationality with a group having such a disclaimer, except with it running in favor of religion instead, assuming they got promoted to my attention for some unrelated good reason (like I've been linked to an article and read that one and two more and I found them all impressive). Not going to happen in practice, probably, but you get my drift.

    Except that's not the vibe of what Less Wrong is actually like, IMO, that we're "happy to have" these people. Atheism strikes me as a belief that's necessary for acceptance to the tribe. This is not a Good Thing, for many reasons, the ...

    I think this is a good analysis. However, in some areas, it is particularly difficult to keep things separate. The two cultures are simply very different; discussions have a way of finding the largest differences. To be more specific: a recent conversation about rationalism came to the point of whether we could depend on the universe not to kill us. (To put it as it was in the conversation: there must be justice in the universe.)
    Well, I think you're absolutely right except, perhaps, regarding the claim that "Atheism strikes me as a belief that's necessary for acceptance to the tribe." I'm not an atheist, and while when I mention this fact I get mobbed by people asking me to refute arguments I've heard a thousand times before, I've never found myself or seen others be rejected as members of the tribe for admitting to religious beliefs.

    I'm going to make a controversial suggestion: one useful target of tolerance might be religion.

    I'll try to tolerate your tolerance.

    (I blog using any examples that come to hand, but when I canonicalize I try to remove explicit mentions of religion where possible. Bear in mind that intelligent religious people with Escher-minds will see the implications early on, though.)

    You canonicalize? Where can we find your canon, and is it marked as canonical?
    This might (partly) answer your question:
    So he means a future canon? I can't go somewhere today and find it? (I disapprove of anyone calling some of their own non-fiction works 'canonical', but without conviction, never having thought about it before.)
    The term "canonical" has a somewhat different definition in the fields of math and computer science. Eliezer is probably using it influenced by this definition, in the sense of "converting his writing into canonical form", as opposed to an ad-hoc or temporary form. In my experience, the construction "canonicalize" refers almost exclusively to this sense of the word. See the Jargon File entry for clarification.
    Sadly true.

    I think you point up the problem with your own suggestion - we have to have examples of rationality failure to discuss, and if we choose an example on which we agree less (e.g. something to do with AGW) then we will end up discussing the example instead of what it is intended to illustrate. We keep coming back to religion not just because practically every failure of rationality there is has a religious example, but because it's something we agree on.

    It should be noted that if all goes according to plan, we won't have religion as a relevant example for too much longer. One day (I hope) we will need to teach rationality without being able to gesture out the window at a group of intelligent adults who think crackers turn into human flesh on the way down their gullets. Why not plan ahead? ETA: Now I think of it, crackers do, of course, turn into human flesh, it just happens a bit later.

    It's not so much that I'm trying to hide my atheism, or that I worry about offending theists - then I wouldn't speak frankly online. The smart ones are going to notice, if you talk about fake explanations, that this applies to God; and they're going to know that you know it, and that you're an atheist. Admittedly, they may be much less personally offended if you never spell out the application - not sure why, but that probably is how it works.

    And I don't plan far enough ahead for a day when religion is dead, because most of my utility-leverage comes before then.

    But rationality is itself, not atheism or a-anything; and therefore, for aesthetic reasons, when I canonicalize (compile books or similar long works), I plan to try much harder to present what rationality is, and not let it be a reaction to or a refutation of anything.

    Writing that way takes more effort, though.

    they may be much less personally offended if you never spell out the application - not sure why, but that probably is how it works.

    Once you connect the dots and make the application explicit, they feel honor-bound to take offense and to defend their theism, regardless of whether they personally want to take offense or not. In their mind, making the application explicit shifts the discussion from being about ideas to being about their core beliefs and thus about their person.

    For me, this appears to be correct.
    Paul Crowley (15y):
    If all goes according to plan, by then we will be able to bring up more controversial examples without debate descending into nonsense. Let's cross that bridge when we come to it.
    I think there are other examples with just as much agreement on their wrongness, many of which have a much lower degree of investment even for their believers. Astrology for instance has many believers, but they tend to be fairly weak beliefs, and don't produce such a defensive reaction when criticized. Lots of other superstitions also exist, so sadly I don't think we'll run out of examples any time soon.
    Paul Crowley (15y):
    But because people aren't so invested in it, they mostly won't work so hard to rationalise it; mostly people who are really trying to be rational will simply drop it, and you're left with a fairly flabby opposition. Whereas lots of smart people who really wanted to be clear-thinking have fought to hang onto religion, and built huge castles of error to defend it.

    "of someone who I wrote off as hopeless within 30 seconds of being introduced to them."

    Few college professors would do this because many students are unimpressive when you first talk with them but then do brilliantly on exams and papers.

    I've known people to be hopeless for months, then suddenly for no observable reason begin acting brilliantly, another reminder that small data sets aren't sufficient to predict a system as complex as human behaviour.

    I usually have something nice to say about most things, even the ideas of some pretty crazy people. Perhaps less so online, but more in person. In my case the reason is not tolerance, but rather a habit that I have when I analyse things: when I see something I really like I ask myself, "Ok, but what's wrong with this?" I mentally try to take an opposing position. Many self-described "rationalists" do this, habitually. The more difficult one is the reverse: when I see something I really don't like, but where the person (or better, a whole group) is clearly serious about it and has spent some time on it, I force myself to again flip sides and try to argue for their ideas. Over the years I suspect I've learnt more from the latter than the former. Externally, I might just sound like I'm being very tolerant.

    Note that tolerance is part of a general conversion strategy. Nitpicking everyone who disagrees with you in the slightest isn't likely to make friends; it is likely to make your opponents think you are an arrogant jerk. Sometimes you just have to keep it to yourself.

    Punishing for non-punishment is an essential dynamic for preserving some social hierarchies, at least in schoolyards and in Nazi Germany.

    Abby was just telling me this afternoon that psychologists today believe that when kids are picked on in school, it's their own fault - either they are too shy, or they are bullies. (There is a belief that bullies are picked on in school, something I never saw evidence of in my school days except when it was me doing the picking-on.)

    My theory is that the purpose of picking on kids in school is not to have effects on the ...

    The theory is that bullies are often in the middle of a bullying hierarchy. For example, when I was in high school, one of my friends was harassed by seniors when he was a freshman. When he became a senior himself, he, in turn, harassed freshmen, saying that he was going to give as good as he got. From what I've read, in high school at least, bullies tend to be those in the middle of the social hierarchy; those at the top (the most popular) are secure in their position and can afford to be nice, while those who are at risk for backsliding work hard at making sure there is at least one person who is a more tempting victim than they are.
    This seems to assume there is a higher purpose for bullying, which is a mistake along the same lines as the parable of group selection. Possibly bullies bully because they enjoy it and aren't stopped from doing so. What additional explanation is needed?
    Well, as a kid I got bullied at school, quite a bit, and I DO remember bullying other a handful of times. I remember being conscious about it and feeling like shit for it, but at the same time being so relieved because as long as someone else was being bullied, I wasn't. I certainly did not enjoy it, mainly because it contradicted my vision of myself as a courageous victim.

    We can and should reach whatever conclusions about people we wish. But we should be very slow to fail to observe and accept new evidence about them.

    Excluding people from discussion may screen out their nonsense (or at least the things you thought were nonsense), but it also prevents you from discovering that you made a hasty decision. Once you've started ignoring someone, you can no longer observe what they say - and possibly find that they're smarter than you thought they were.

    It's worth acquiring new data even from those you've discarded, at least once in a while.


    M*nt*f*x! K*b*! Y*g-S*th*th!

    Obviously the other two need to be bowdlerized, but what's wrong with attracting Kibo? I think he'd fit in well here.
    H*st*r! H*st*r! H*st*r!

    I think there is an important distinction between cheap and expensive tolerance. If I am sitting on a plane and don't have a good book and am talking to my seatmate, and they seem stupid and irrational, being tolerant is likely to lead to an enjoyable conversation. I may even learn something.

    But if I am deciding what authors to read, whose arguments to think about more seriously, etc., then it seems irrational to not judge and prioritize with my limited time.

    And this relates to indirect tolerance - someone who doesn't judge and prioritize good arguments ...

    The advice isn't about your attitude towards your seatmate's stupidity and irrationality. It's directed at your rationalist buddy sitting on your other side -- she's being advised not to be annoyed at you if you choose to be tolerant.

    Eliezer is correct, but this post should be followed up by one about the many places where failing to punish non-punishers, in other words, tolerating free-riders, has negative consequences.

    If you transgress, I might have a problem with you. If you actively shield a transgressor, I might have a problem with you. If you just don't punish a transgressor, the circumstances where I might have a problem are pretty rare I think!

    The application of this principle to [outrage over the comments and commenters which a blogger declines to delete/ban] is left as an exercise for the reader.


    My attitude toward Ben's tolerance depends on the context. When he does it as a person, I appreciate it. When he does it as chair of the AGI conference, I don't. There were some very good presentations this year, but there were also some very bad time-wasters.

    But probably I should blame the reviewers instead.

    Damn M-nt-f-x! Damn every one that won't damn M-nt-f-x!! Damn every one that won't put lights in his windows and sit up all night damning M-nt-f-x!!!

    Since I saw this comment before the post it goes with, I thought it was some sort of rant about people not using Emacs for their comments. ;-)

    Great post. I think I'd already sort of started trying to do this, although I couldn't have put it as well. Now what I want to know is how much to tolerate people who are less tolerant than me. I'm not quite sure what to do when I meet someone who is infuriated by patterns of thinking that I consider only trivially erroneous or understandable under certain circumstances.

    Eliezer Yudkowsky (15y):
    I'd say, tolerate them! Though I speak as one with a certain conflict of interest, being on the other side of that judgment. But it seems like the logical mirror image and hence still the thing to do. Judge people only on non-borrowed trouble?
    What about tolerating people who don't tolerate you? I think this calls for a tit-for-tat strategy.
    Well, tolerating them has a good chance of signalling to neutral observers that you are not a pompous jerk, making them more likely to listen to your ideas favorably.

    I am going to disagree with the idea that 'being "intolerant of intolerance"' is inherently inconsistent. The problem is with the word tolerance, which contains multiple meanings. I think that it is morally wrong to discriminate against people for things that they can't change. Believing that someone of a different race can't possibly be intelligent is a moral wrong. Furthermore, it is so indicative of stupidity that I do not wish to associate with such a person, if they are in a culture where theirs is the minority view. To put it another way, to...

    The second statement here doesn't follow from the first. If intelligence is something that a person can't change, then it follows that it's morally wrong to discriminate against someone for being unintelligent. It doesn't follow that it is morally wrong to believe that one factor a person cannot change (their race) can determine other factors that they cannot change, such as their intelligence. Whether there are actually average inherent genetic differences in intelligence between races is still a matter of some debate (although the issue is so politically charged that it's hard to get any effective unbiased research done, and attempting to do so can be dangerous for one's reputation.) It's certainly unlikely that any race exists that has negligible odds of any particular individual reaching an arbitrarily defined cutoff point for "intelligent" compared to other races, but this is an empirical matter which is to be determined on the basis of evidence, and moral considerations have no bearing on whether or not it's true.

    There's a question of whether there's an important difference in kind between sorts of tolerance. Here's an analogy which might or might not work: assume that, in general, a driver of a vehicle drives as fast as they think it is safe for cars to be driven in general. Only impatience would cause them to not tolerate people who drive slower than they; a safety concern could cause them to be upset by people who drive faster, since they consider that speed unsafe. Say you have two people who each drive at 50 mph. One of them tolerates only slower drivers b...

    I don't get it. You want us to work with those who refuse to 'punish' foolishness but who aren't fools themselves to, presumably, fight against foolishness. All right, I can see the sense in that.

    Why does it follow that we should censor ourselves when dealing with these non-foolish foolishness enablers? Why can't we work with them and show our disapproval of their enabling?

    Because in human society, voicing your disapproval is a form of punishment, which people will react badly to, and will make it hard to work with them.

    a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.

    Could I get a reference for this? I wanted to refer someone else to it, and my Google searches failed me.

    Eliezer Yudkowsky (15y):
    Punishment allows the evolution of cooperation (or anything else) in sizable groups, by Boyd & Richerson
    Paul Crowley (15y):
    Thanks! They credit the initial discovery to D Hirshleifer, E Rasmusen: Cooperation in a repeated prisoners’ dilemma with ostracism, Journal of Economic Behavior and Organization, 1989 and happily the PDF is freely available

    In a situation where someone who seems to be very like-minded is more tolerant to another person X than I would be, I would be very interested in why, if I don't already know. Perhaps my friend has reasons that I would agree with, if I only knew them. (Some pragmatic reasons come to mind.)

    If I still disagree with my friend, even after knowing his reasons, I would then express the disagreement and see if I couldn't convert my friend on the basis of our common views. If I fail to convert him, it is because our views differ in some way. Is the view we disagr...

    Not necessarily. It could just be a personality difference, and you don't actually disagree on any beliefs.
    The way "views" is usually used, it includes "values".

    ... punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.

    Have you done the math? This would have important implications for the development of intolerant societies - it was clearly crucial to Nazism - but I've never heard of any studies on the subject. People are still working on first-order punishment.

    A good reference on that: Simon Gächter, Elke Renner, and Martin Sefton, "The Long-Run Benefits of Punishment", Science, 5 December 2008, 322: 1510 [DOI: 10.1126/...

    Paul Crowley (15y):
    See my comment above giving the references: the math shows that punishing non-punishers is an evolutionarily stable strategy that can enforce cooperation where simpler strategies fail.

    Whether someone agrees with us isn't as important as why.

    If someone has sufficiently low standards of quality that they fail to disapprove of even the worst garbage, then they're of little use in distinguishing value from nonsense.

    As a great deal of nonsense is not only passively but actively harmful (not just failing to be correct, but inclining people towards error), it is vitally important to tell the two apart. People who can't or won't do this are not only not-helpful, but make our tasks harder.

    Strive to have good standards and apply them. Don't worry about being tolerant or intolerant -- the right mix of behaviors will naturally arise from the application of correct standards.


    The communities that I've been a part of which I liked the best, which seemed to have the most interesting people, were also the nastiest and least tolerant.

    If you can't call a retard a retard, you end up with a bunch of retards, and then the other people leave. When eventually someone nice came to power, this is invariably what happened.

    Eliezer isn't suggesting that you refrain from calling fools "fools". He's suggesting you tolerate people who are otherwise non-foolish except that they don't call fools "fools".

    Tolerating fools might not be a good idea. Tolerating non-fools who themselves tolerate fools is, AFAICT, a glaringly good idea. If you create an atmosphere where everyone has to hate the same people... we run into some of the failure modes of Objectivism.

    Eliezer Yudkowsky (15y):
    ...I think this post might be over the meta threshold where some people lack the reflective gear and simply can't process the intended meaning! Really, I went to some lengths to spell it out here!
    "If you create an atmosphere where everyone has to hate the same people... " Again: it's why those people have to be hated that's important. If standards reflect real properties of reality, people who're seeking the truth will tend to generate similar standards. If people have similar standards, they'll tend to reach the same sorts of judgments. What matters is that our judgments arise from accurate standards, not from merely imitating others. Error leads to condition X, but it doesn't follow that ~X is therefore correct.
    Paul Crowley (15y):
    If you never feel the need to say "Damn X for not damning Y" then good for you, but I think that urge is at least sometimes felt, and it leads to judgements that are not independent in the way you describe.
    Only if the judgers care what others think of them. There are some very real advantages to being a sociopath if you want to be a rationalist... and some very real advantages to societies that have a sufficiently great concentration of sociopaths.
    I steer clear of such communities, unless I need to extract some specific bit of information out of them (and I leave immediately when I'm done). Perhaps that's because in my upbringing calling someone a fool (let alone a retard) was considered extremely rude. Do you know the person you're calling a retard well enough, or are you judging by a couple of their posts? Would you say "you are a retard" to their face in real life? When you call someone a retard, what do you imply, "your mental abilities in general are very poor" or "you are incompetent at activity X which we discuss here"?
    Paul Crowley (15y):
    In my experience, actually ejecting disruptive people from an online community can have a powerful positive effect, but replying to them with insults only encourages them and achieves nothing.

    In Hanson and Simler's 'The Elephant in the Brain', they mention Axelrod's (1986) "meta-norm" modelling which shows that cooperation is stable only when non-punishers are punished.
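    That meta-norm result can be illustrated with a toy expected-payoff comparison. This is only a sketch with made-up parameter values, not Axelrod's actual model: the point is that once bystanders punish non-punishers often enough, paying the enforcement cost becomes the cheaper option, so punishing stays the best response regardless of the norm's content.

```python
# Toy payoff comparison for the "punish non-punishers" dynamic.
# Parameter values are hypothetical; the structure loosely follows
# Axelrod's (1986) meta-norms game, heavily simplified.

ENFORCEMENT_COST = 2.0  # cost you pay to punish a defector yourself
META_PENALTY = 9.0      # penalty you suffer if caught failing to punish

def punisher_payoff():
    # A punisher always pays the enforcement cost when a defection occurs.
    return -ENFORCEMENT_COST

def non_punisher_payoff(p_meta):
    # A non-punisher avoids the enforcement cost but risks being
    # meta-punished with probability p_meta.
    return -p_meta * META_PENALTY

# Without meta-punishment, shirking strictly dominates, so norms erode.
assert non_punisher_payoff(0.0) > punisher_payoff()

# With enough meta-punishment, punishing becomes the best response,
# locking the equilibrium in place -- whatever the norm happens to be.
assert non_punisher_payoff(0.5) < punisher_payoff()
```

    Note that nothing in the comparison depends on whether the underlying norm is beneficial, which is exactly why the idiom can lock in an equilibrium that is harmful to everyone involved.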

    Just a small point-- tolerating tolerance seems to me to be a less powerful tool than the principle of charity, of which plenty has been said on this site. For me, the image:

    One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flaws in reasoning.

    doesn't even start to feel right for me from a 'should' perspective (though it is quite familiar from an 'is' perspective). My image of a rationalist is someone exceptionally concerned with making sense of what others are saying, because arguments are not battles.

    I have a massively huge problem with this. Every time a non-fiction author or scientist I respect gives credit to a non-rationalist, I cringe inside. I have to will myself to remember that their having a lower rationality threshold does not automatically discredit their work.

    IAWY. However, regarding the practice of reminding yourself every time in order to prevent the behavior, why expend two units of mental force, opposing each other, when you could just remove both forces? It'd be more efficient just to get rid of whatever underlying belief or judgment makes you feel the need to be intolerant of the tolerant... and you'd suffer less.


    I'm programmed to get angry when there's misbehavior and I don't know that I can just shut this off when the misbehavior consists of underpunishing. Maybe I should try channeling the anger toward the nonpunishee rather than the nonpunisher?


    This post has motivated me to put my foot down around one friend who is so bitchy about others.