From when I was still forced to attend, I remember our synagogue's annual fundraising appeal.  It was a simple enough format, if I recall correctly.  The rabbi and the treasurer talked about the shul's expenses and how vital this annual fundraiser was, and then the synagogue's members called out their pledges from their seats.

    Straightforward, yes?

    Let me tell you about a different annual fundraising appeal.  One that I ran, in fact; during the early years of a nonprofit organization that may not be named.  One difference was that the appeal was conducted over the Internet.  And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd.  (To point in the rough direction of an empirical cluster in personspace.  If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.)

    I crafted the fundraising appeal with care.  By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years.  The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal.  I sent it out to several mailing lists that covered most of our potential support base.

    And almost immediately, people started posting to the mailing lists about why they weren't going to donate.  Some of them raised basic questions about the nonprofit's philosophy and mission.  Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them.  (They didn't volunteer to contact any of those sources themselves, they just had ideas for how we could do it.)

    Now you might say, "Well, maybe your mission and philosophy did have basic problems—you wouldn't want to censor that discussion, would you?"

    Hold on to that thought.

    Because people were donating.  We started getting donations right away, via Paypal.  We even got congratulatory notes saying how the appeal had finally gotten them to start moving.  A donation of $111.11 was accompanied by a message saying, "I decided to give **** a little bit more.  One more hundred, one more ten, one more single, one more dime, and one more penny.  All may not be for one, but this one is trying to be for all."

    But none of those donors posted their agreement to the mailing list.  Not one.

    So far as any of those donors knew, they were alone.  And when they tuned in the next day, they discovered not thanks, but arguments for why they shouldn't have donated.  The criticisms, the justifications for not donating—only those were displayed proudly in the open.

    As though the treasurer had finished his annual appeal, and everyone not making a pledge had proudly stood up to call out justifications for refusing; while those making pledges whispered them quietly, so that no one could hear.

    I know someone with a rationalist cause who goes around plaintively asking, "How come the Raelian flying-saucer cult can get tens of thousands of members [probably around 40,000] interested in complete nonsense, but we can't even get a thousand people working on this?"

    The obvious wrong way to finish this thought is to say, "Let's do what the Raelians do!  Let's add some nonsense to this meme!"  For the benefit of those not immediately stopped by their ethical inhibitions, I will observe that there may be a hundred failed flying-saucer cults for every one that becomes famous.  And the Dark Side may require non-obvious skills, which you, yes you, do not have:  Not everyone can be a Sith Lord.  In particular, if you talk about your planned lies on the public Internet, you fail.  I'm no master criminal, but even I can tell certain people are not cut out to be crooks.

    So it's probably not a good idea to cultivate a sense of violated entitlement at the thought that some other group, who you think ought to be inferior to you, has more money and followers.  That path leads to—pardon the expression—the Dark Side.

    But it probably does make sense to start asking ourselves some pointed questions, if supposed "rationalists" can't manage to coordinate as well as a flying-saucer cult.

    How do things work on the Dark Side?

    The respected leader speaks, and there comes a chorus of pure agreement: if there are any who harbor inward doubts, they keep them to themselves.  So all the individual members of the audience see this atmosphere of pure agreement, and they feel more confident in the ideas presented—even if they, personally, harbored inward doubts, why, everyone else seems to agree with it.

    ("Pluralistic ignorance" is the standard label for this.)

    If anyone is still unpersuaded after that, they leave the group (or in some places, are executed)—and the remainder are more in agreement, and reinforce each other with less interference.

    (I call that "evaporative cooling of groups".)

    The ideas themselves, not just the leader, generate unbounded enthusiasm and praise.  The halo effect is that perceptions of all positive qualities correlate—e.g. telling subjects about the benefits of a food preservative made them judge it as lower-risk, even though the quantities were logically uncorrelated.  This can create a positive feedback effect that makes an idea seem better and better and better, especially if criticism is perceived as traitorous or sinful.

    (Which I term the "affective death spiral".)

    So these are all examples of strong Dark Side forces that can bind groups together.

    And presumably we would not go so far as to dirty our hands with such...

    Therefore, as a group, the Light Side will always be divided and weak.  Atheists, libertarians, technophiles, nerds, science-fiction fans, scientists, or even non-fundamentalist religions, will never be capable of acting with the fanatic unity that animates radical Islam.  Technological advantage can only go so far; your tools can be copied or stolen, and used against you.  In the end the Light Side will always lose in any group conflict, and the future inevitably belongs to the Dark.

    I think that one's reaction to this prospect says a lot about their attitude towards "rationality".

    Some "Clash of Civilizations" writers seem to accept that the Enlightenment is destined to lose out in the long run to radical Islam, and sigh, and shake their heads sadly.  I suppose they're trying to signal their cynical sophistication or something.

    For myself, I always thought—call me loony—that a true rationalist ought to be effective in the real world.

    So I have a problem with the idea that the Dark Side, thanks to their pluralistic ignorance and affective death spirals, will always win because they are better coordinated than us.

    You would think, perhaps, that real rationalists ought to be more coordinated?  Surely all that unreason must have its disadvantages?  That mode can't be optimal, can it?

    And if current "rationalist" groups cannot coordinate—if they can't support group projects so well as a single synagogue draws donations from its members—well, I leave it to you to finish that syllogism.

    There's a saying I sometimes use:  "It is dangerous to be half a rationalist."

    For example, I can think of ways to sabotage someone's intelligence by selectively teaching them certain methods of rationality.  Suppose you taught someone a long list of logical fallacies and cognitive biases, and trained them to spot those fallacies and biases in other people's arguments.  But you are careful to pick those fallacies and biases that are easiest to accuse others of, the most general ones that can easily be misapplied.  And you do not warn them to scrutinize arguments they agree with just as hard as they scrutinize incongruent arguments for flaws.  So they have acquired a great repertoire of flaws of which to accuse only arguments and arguers they don't like.  This, I suspect, is one of the primary ways that smart people end up stupid.  (And note, by the way, that I have just given you another Fully General Counterargument against smart people whose arguments you don't like.)

    Similarly, if you wanted to ensure that a group of "rationalists" never accomplished any task requiring more than one person, you could teach them only techniques of individual rationality, without mentioning anything about techniques of coordinated group rationality.

    I'll write more later (tomorrow?) on how I think rationalists might be able to coordinate better.  But today I want to focus on what you might call the culture of disagreement, or even, the culture of objections, which is one of the two major forces preventing the atheist/libertarian/technophile crowd from coordinating.

    Imagine that you're at a conference, and the speaker gives a 30-minute talk.  Afterward, people line up at the microphones for questions.  The first questioner objects to the use of a logarithmic scale in the graph on slide 14; he quotes Tufte's The Visual Display of Quantitative Information.  The second questioner disputes a claim made in slide 3.  The third questioner suggests an alternative hypothesis that seems to explain the same data...

    Perfectly normal, right?  Now imagine that you're at a conference, and the speaker gives a 30-minute talk.  People line up at the microphone.

    The first person says, "I agree with everything you said in your talk, and I think you're brilliant."  Then steps aside.

    The second person says, "Slide 14 was beautiful, I learned a lot from it.  You're awesome."  Steps aside.

    The third person—

    Well, you'll never know what the third person at the microphone had to say, because by this time, you've fled screaming out of the room, propelled by a bone-deep terror as if Cthulhu had erupted from the podium, the fear of the impossibly unnatural phenomenon that has invaded your conference.

    Yes, a group which can't tolerate disagreement is not rational.  But if you tolerate only disagreement—if you tolerate disagreement but not agreement—then you also are not rational.  You're only willing to hear some honest thoughts, but not others.  You are a dangerous half-a-rationalist.

    We are as uncomfortable together as flying-saucer cult members are uncomfortable apart.  That can't be right either.  Reversed stupidity is not intelligence.

    Let's say we have two groups of soldiers.  In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy.  In group 2, everyone at all levels knows all about tactics and strategy.

    Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

    In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

    Doing worse with more knowledge means you are doing something very wrong.  You should always be able to at least implement the same strategy you would use if you were ignorant, and preferably do better.  You definitely should not do worse.  If you find yourself regretting your "rationality" then you should reconsider what is rational.

    On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge.  I recall a lovely experiment which showed that politically opinionated students with more knowledge of the issues reacted less to incongruent evidence, because they had more ammunition with which to counter-argue only incongruent evidence.

    We would seem to be stuck in an awful valley of partial rationality where we end up more poorly coordinated than religious fundamentalists, able to put forth less effort than flying-saucer cultists.  True, what little effort we do manage to put forth may be better-targeted at helping people rather than the reverse—but that is not an acceptable excuse.

    If I were setting forth to systematically train rationalists, there would be lessons on how to disagree and lessons on how to agree, lessons intended to make the trainee more comfortable with dissent, and lessons intended to make them more comfortable with conformity.  One day everyone shows up dressed differently, another day they all show up in uniform.  You've got to cover both sides, or you're only half a rationalist.

    Can you imagine training prospective rationalists to wear a uniform and march in lockstep, and practice sessions where they agree with each other and applaud everything a speaker on a podium says?  It sounds like unspeakable horror, doesn't it, like the whole thing has admitted outright to being an evil cult?  But why is it not okay to practice that, while it is okay to practice disagreeing with everyone else in the crowd?  Are you never going to have to agree with the majority?

    Our culture puts all the emphasis on heroic disagreement and heroic defiance, and none on heroic agreement or heroic group consensus.  We signal our superior intelligence and our membership in the nonconformist community by inventing clever objections to others' arguments.  Perhaps that is why the atheist/libertarian/technophile/sf-fan/Silicon-Valley/programmer/early-adopter crowd stays marginalized, losing battles with less nonconformist factions in larger society.  No, we're not losing because we're so superior, we're losing because our exclusively individualist traditions sabotage our ability to cooperate.

    The other major component that I think sabotages group efforts in the atheist/libertarian/technophile/etcetera community, is being ashamed of strong feelings.  We still have the Spock archetype of rationality stuck in our heads, rationality as dispassion.  Or perhaps a related mistake, rationality as cynicism—trying to signal your superior world-weary sophistication by showing that you care less than others.  Being careful to ostentatiously, publicly look down on those so naive as to show they care strongly about anything.

    Wouldn't it make you feel uncomfortable if the speaker at the podium said that he cared so strongly about, say, fighting aging, that he would willingly die for the cause?

    But it is nowhere written in either probability theory or decision theory that a rationalist should not care.  I've looked over those equations and, really, it's not in there.

    The best informal definition I've ever heard of rationality is "That which can be destroyed by the truth should be."  We should aspire to feel the emotions that fit the facts, not aspire to feel no emotion.  If an emotion can be destroyed by truth, we should relinquish it.  But if a cause is worth striving for, then let us by all means feel fully its importance.

    Some things are worth dying for.  Yes, really!  And if we can't get comfortable with admitting it and hearing others say it, then we're going to have trouble caring enough—as well as coordinating enough—to put some effort into group projects.  You've got to teach both sides of it, "That which can be destroyed by the truth should be," and "That which the truth nourishes should thrive."

    I've heard it argued that the taboo against emotional language in, say, science papers, is an important part of letting the facts fight it out without distraction.  That doesn't mean the taboo should apply everywhere.  I think that there are parts of life where we should learn to applaud strong emotional language, eloquence, and poetry.  When there's something that needs doing, poetic appeals help get it done, and, therefore, are themselves to be applauded.

    We need to keep our efforts to expose counterproductive causes and unjustified appeals, from stomping on tasks that genuinely need doing.  You need both sides of it—the willingness to turn away from counterproductive causes, and the willingness to praise productive ones; the strength to be unswayed by ungrounded appeals, and the strength to be swayed by grounded ones.

    I think the synagogue at their annual appeal had it right, really.  They weren't going down row by row and putting individuals on the spot, staring at them and saying, "How much will you donate, Mr. Schwartz?"  People simply announced their pledges—not with grand drama and pride, just simple announcements—and that encouraged others to do the same.  Those who had nothing to give, stayed silent; those who had objections, chose some later or earlier time to voice them.  That's probably about the way things should be in a sane human community—taking into account that people often have trouble getting as motivated as they wish they were, and can be helped by social encouragement to overcome this weakness of will.

    But even if you disagree with that part, then let us say that both supporting and countersupporting opinions should have been publicly voiced.  Supporters being faced by an apparently solid wall of objections and disagreements—even if it resulted from their own uncomfortable self-censorship—is not group rationality.  It is the mere mirror image of what Dark Side groups do to keep their followers.  Reversed stupidity is not intelligence.

    211 comments

    In this community, agreeing with a poster such as yourself signals me as sycophantic and weak-minded; disagreement signals my independence and courage. There's also a sense that "there are leaders and followers in this world, and obviously just getting behind the program is no task for so great a mind as mine".

    However, that's not the only reason I might hesitate to post my agreement; I might prefer only to post when I have something to add, which would more usually be disagreement. Since I don't only vote up things I agree with, perhaps I should start hacking on the feature that allows you to say "6 members marked their broad agreement with this point (click for list of members)".


    However, that's not the only reason I might hesitate to post my agreement; I might prefer only to post when I have something to add, which would more usually be disagreement. Since I don't only vote up things I agree with, perhaps I should start hacking on the feature that allows you to say "6 members marked their broad agreement with this point (click for list of members)".

    That would be great.

    That would be a great feature, I think. Ditto on broad disagreements.

    This is a good point, but I think there's a ready solution to that. Agreement and disagreement, by themselves, are rather superficial. Arguments, on the other hand, are what rationalists have more respect for. When you agree with someone, it seems that you don't have the burden of formulating an argument because, implicitly, you're referring to the first person's argument. But when you disagree with someone, you do have the burden of formulating a counterargument. So I think this is why rationalists tend to have more respect for disagreement than agreement: disagreement requires an argument, whereas agreement doesn't.

    But on reflection, this arrangement is fallacious. Why shouldn't agreement also require an argument? I think it may seem to add to the strength of an argument if multiple people agree that it is sound, but I don't think it does in reality. If multiple people develop the same argument independently, then the argument might be somewhat stronger; but clearly this isn't the kind of agreement we're talking about here. If I make an argument, you read my argument, and then you agree that my argument is sound, you haven't developed the same argument independently...

    In that case you're "writing the last line first", and I suspect it might not reduce bias. Personally, I often try to come up with arguments against positions I hold or am considering, which sometimes work and sometimes do not. Of course, this isn't foolproof either, but might be less problematic.
    In real life this is common, and the results are not always bad. It's incredibly common in mathematics. For example, Fermat's Last Theorem was a "last line" for a long time, until someone finally filled in the argument. It may also be worth mentioning that the experimental method is also "last line first". That is, at the start you state the hypothesis that you're about to test, and then you test the hypothesis - which test, depending on the result, may amount to an argument from evidence for the hypothesis. Another case in point, this time from history: Darwin and natural selection. At some point in his research, natural selection occurred to him. It wasn't, at that point, something that he had very strong evidence for, which is why he spent a lot of time gathering evidence and building argument for it. So there's another "last line first" which turned out pretty well in the end.
    No. When you state the hypothesis, it means that, depending on the evidence you are about to gather, your bottom line will be that the hypothesis is true or that the hypothesis is false (or that you can't tell if the hypothesis is true or false). Writing the Bottom Line First would be deciding in advance to conclude that the hypothesis is true. Depending on where the hypothesis came from, the experimental method may be Privileging the Hypothesis, which the social process of science compensates for by requiring lots of evidence.
    Deciding in advance to conclude that the hypothesis is true is not a danger if the way you decide to do that is by some means that in reality won't let you do that if the hypothesis is false. Keep in mind: you can decide to do something and still be unable to do it. Suppose I believe that a hypothesis is true. I believe it so strongly that I believe a well-designed experiment will prove that it is true. So I decide in advance to conclude that the hypothesis is true by doing what I am positive in advance will prove the hypothesis, which is to run a well-designed experiment which will convince the doubters.

    So I do that, and (suppose) the experiment supports my hypothesis. The fact that my intentions were to prove the hypothesis doesn't invalidate the result of the experiment. The experiment is by its own good design protected from my intentions. A well-designed experiment will yield truth whatever the intentions of the experimenter. What makes an experiment good isn't good intentions on the part of the experimenter. That's the whole point of the experiment: we can't trust the experimenter, and so the experiment by design renders the experimenter powerless. (Of course, we can increase our confidence even further by replicating the experiment.)

    Now let's change both the intention and the method. Suppose you don't know whether a hypothesis is true and decide to discover whether it is true by examining the evidence. The method you choose is "preponderance of evidence". It is quite possible for you, completely erroneously and unintentionally, to in effect cherry-pick evidence for the hypothesis you were trying to test. People make procedural mistakes like this all the time without intending to do so. For example, you see one bit of evidence, and make note of the fact that this particular bit of evidence makes the hypothesis appear to be true. But now, uh oh! You're subject to confirmation bias! That means that you will automatically, without meaning to, start to
    I think the thing which is jumping out as strange to me is doing this after you've been convinced, seemingly to enhance your credence. Still, this is a good point.
    The danger that Eliezer warns against is absolutely real. So what's special about math? In the case of math, I think that there is something special, and that is that it's really, really hard to make a bogus argument in math and pass it by somebody who's paying attention. In the case of experimental science, the experiment is deliberately constructed to take the result out of the hands of the experimenter. At least it should be. The experimenter only controls certain variables. So why is there ever a danger? The problem seems to arise with the mode of argument that involves "the preponderance of evidence". That kind of argument is totally exposed to cherry-picking, allowing the cherry-picker to create whatever preponderance he wants. It is, unfortunately, maybe the most common argument that you'll find in the world.
    Yoav Ravid:
    The two methods can be combined: when you read something you agree with, try to come up with a counterargument; if you can't refute the counterargument, post it; if you can, post both the counterargument and its refutation.
    Sorry, I'm not exactly sure what "writing the last line first" means. I'm guessing you're referring to the syllogism, and you take my proposal to mean arguing backwards from the conclusion to produce another argument for the same conclusion. Is this correct?
    I'm referring to this notion of knowing what you want to conclude, and then fitting the argument to that specification. My intuition, at least, is that it would be more useful to focus on weaknesses of your newly adopted position - and if it's right, you're bound to end up with new arguments in favor of it anyway. I agree, though, that agreement should not be taken as license to avoid engaging with a position. I suppose I should note, given the origin of these comments, that I recommend these things only in a context of collaboration - and if we're talking about a concrete suggestion for action or the like rather than an airy matter of logic, the rules are somewhat different.
    Idan Arye:
    Should arguers be encouraged, then, not to write all the arguments in favor of their claim, in order to leave more room for those who agree with them to add their own supporting arguments? This requires either refraining from fully exploring the subject (so that you don't think of all the arguments you can) or outright omitting arguments you thought of. Not exactly Dark Side, but not fully Light Side either...
    Y'know, you may be right. I also suspect this is something that depends to a significant extent on the type of proposition under consideration.
    Does it really signal that to other readers, or is that just in your mind? If you see someone posting an agreement, do you really judge him as a weak-minded sycophant?

    If they post just an "Amazing post, as usual Eliezer" without further informative contribution, then I too get this mild sense of "sucking up" going on.

    Actually, this whole blog (as well as Overcoming Bias) does have this subtle aura of "Eliezer is the rationality God that we should all worship". I don't blame EY for this; more probably, people are just naturally (evolutionarily?) inclined to religious behaviour, and if you hang around LW and OB, then you might project towards the person who acts like the alpha-male of the pack. In fact, it might not even need to have any religious undertones to it. It could just be "alpha-male mammalian evolution society" stuff.

    Eliezer is a very smart person. Certainly much smarter than me. But so is Robin Hanson. (I won't get into which one is "smarter", as they are both at least two levels above me.) And I feel he is often -- "under-appreciated" perhaps is the closest word? -- perhaps because he doesn't post as often, but perhaps also because people tend to "me too" Eliezer a lot more often than they "me too" Robin (but again this might be because EY posts much more frequently than RH).


    It's simpler than that: 1) Eliezer expresses certainty more often than Robin, and 2) he self-discloses to a greater degree. The combination of the two induces tendency to identification and aspiration. (The evolutionary reasons for this are left as an exercise for the reader.)

    Please note that this isn't a denigration -- I do exactly the same things in my own writing, and I also identify with and admire Eliezer. Just knowing what causes it doesn't make the effect go away.

    (To a certain extent, it's just audience-selection -- expressing your opinions and personality clearly will make people who agree/like what they hear become followers, those who disagree/dislike become trolls, and those who don't care one way or the other just go away altogether. NOT expressing these things clearly, on the other hand, produces less emotion either way. I love the information I get from Robin's posts, but they don't cause me to feel the same degree of personal connection to their author.)

    I do believe I under-appreciate Robin. However, what it feels like to me is that my personality, at (I suspect) a genetic level, is more similar to Eliezer's than to Robin's. In particular, my impression of Robin is that he is more talented than Eliezer at social kinds of cognition. That does not mean I think Robin is less rational. It means that when I read Eliezer's work I think "yeah, that's bloody obvious!" whereas for some of Robin's significant contributions I actually have to actively account for my own biases and work to consider his expertise and that of those he refers to. My suspicion is that people with minds similar to Robin's would be less inclined to be involved in rationalist discourse than the more instinctively individualist. This accounts somewhat for the differences in "me too"s, but if anything makes Robin more remarkable.
    "If you see someone posting an agreement, do you really judge him as a weak-minded sycophant?" It depends greatly on what they're agreeing with, and what they've said and done before.

    The nice thing about karma/voting sites like this one is that they provide an efficient and socially acceptable mechanism for signaling agreement: just hit the upmod button. Nobody wants to read or listen to page after page of "me too"; forcing people to tolerate this would be bad enough to negate the advantage of making agreement visible. Voting accomplishes the same visibility without the irritating side-effects.


    There's a bit of noise, as I sometimes vote up someone I disagree with if they raise an interesting point, and I very, very rarely vote someone down just because I disagree with them.

    This "bit of noise" becomes significant on sites with a small number of subscribers, as a +/-2 vote is a "big deal".

    I think that's a feature, not a bug. What an upvote expresses is nearer to "you should listen to this guy" than to "I agree with this guy", but I think the former is more useful information.
    There should be an emotional display of how many upvotes a post got. Numbers are, well, too numbery for that. Either a smiley with an ever-growing smile, or a balloon that grows bigger and bigger (for posts that really get way too upvoted, the balloon could explode into colorful bright carnival paper, or candy, or Brad Pitt, or Russian Redheads...). OK, balloon or smile, who is with me?
    I like the idea, but they seem kind of gimmicky. (thinking of LW's comments section, it would be hard to give another icon the kind of prominence we want, without making it too big). How about a green/red bar, like the one on YouTube?

    I must admit, I think I do find myself going into Vulcan mode when posting on LW. I find myself censoring very simple social cues -- expressions of gratitude, agreement, emotion -- because I imagine them being taken for noise. I think I'm going to make an effort to snap myself out of this.

    Same here. It's very natural for me to thank people when they say or do something awesome, to encourage promising newbies, and to express my agreement when I do agree, but I got the impression that such things are generally frowned upon here, so I found myself suppressing them.

    Actually, I didn't mind that much -- the power of the ideas discussed here far outweighs these social inconveniences, and I can easily live with that. But personally, I would prefer to be able to express my agreement and gratitude without spending too many calories worrying about my tribal status.

    (Of course we'll need to keep the signal/noise ratio in check, but I'll post my ideas on that in a separate comment).

    Two thoughts.

    1. In any relationship where I have influence, I expect to get more of what I model.

    For example, in a community where I have influence, I expect demonstrating explicit support to push community norms towards explicit support, and demonstrating criticism to push norms towards criticism.

    This creates the admittedly frustrating situation where, if a community is too critical and insufficiently supportive, it is counterproductive for me to criticize that. That just models criticism, which gets me more criticism; the more compelling and powerful my criticism, the more criticism I'll get in return.

    If a community is too critical and insufficiently supportive, I do better to model agreement as visibly and as consistently as I can, and to avoid modeling criticism. For example, to criticize people privately and support them publicly.

    2. In any relationship where I have influence, I expect to get more of what I reward.

    If a community is too critical and insufficiently supportive, I do well to be actively on the lookout for others' supportive contributions and to reward them (for example: by praising them, by calling other people's attention to them, and/or by paying attention to them myself). I similarly do well to withhold those rewards from critical contributions.

    Voted up. (Explicit support and rewards, ahoy!)

    Heh, it seems like this post has primed me for agreement, and I upvoted a lot more comments than I usually do. And it looks like many others did this as well -- look at the upvote counts! I was reading and voting with Kibitzer on, and was surprised to see the numbers.

    (Have I just lowered my status by signaling that I'm susceptible to priming?)


    Nah, you've raised it, by signaling that you're honest. At least, that's how it would work among true rationalists (as opposed to anti-irrationalists). ;-)

    They surprised me too. (I actually felt the urge to use an unnecessary exclamation point there, the priming's made me so enthusiastic...) And I think that the status gained from the fact that you noticed being primed probably outweighed any lost due to us being told it happened. Though now that we're noticing it, we need to decide which frequency of upvoting we should be using so we can avoid the effect.

    This article seems to model rational discourse as a cybernetic system made of two opposite actions that need to be balanced:

    • Agreement / support of shared actions
    • Disagreement / criticism

    Agreement and disagreement are not basic elements of a statement about base reality, they're contextual facts about the relation of your belief to others' beliefs. Is "the sky is blue" agreement or dissent? Depends on what other people are saying. If they're saying it's blue, it's agreement. If they're saying it's green, it's dissent. Someone might disagree with someone by supporting an action, or agree with a criticism of what was previously a shared story. When you have a specific belief about the world, that belief is not made of disagreement or agreement with others, it's made of constrained conditional anticipations about your observations.

    This error seems likely related to using a synagogue fundraiser as the central case of a shared commitment of resources, rather than something like an assurance contract! There's a very obvious antirational motive for synagogue fundraisers not to welcome criticism - God is made up, and a community organized around the things its members would genuinely like to...

    In hindsight, a norm against criticizing during a fundraiser, when there is always a fundraiser, leads to a community getting scammed by people telling an incoherent story about an all-powerful imaginary guy, just like they did in the synagogue example.

    Many points that are both new and good. Like prase, and like a selection of other fine LW-ers with whom I hope to be agreeing soon, I think your post is awesome :)

    One root of the agreement/disagreement asymmetry is perhaps that many of us aspiring rationalists are intellectual show-offs, and we want our points to show everyone how smart we are. Status feels zero-sum, as though one gains smart-points from poking holes in others' claims and loses smart-points from affirming others' good ideas. Maybe we should brainstorm some schemas for expressing agreement while adding intellectual content and showing our own smarts, like "I think your point on slide 14 is awesome. And I bet it can be extended to new context __", or "I love the analogy you made on page 5; now that I read it, I see how to take my own research farther..."

    Related: maybe we feel self-conscious about speaking if we don't have anything "new" to add to the conversation, and we don't notice "I, too, agree" as something new. One approach here would be to voice, not just agreement, but the analysis that's going into each individual's agreement, e.g. "I agree; that sounds just ...

    I've often wrestled with this myself, and hesitated to comment for just this reason.
    Me too.
    Me too!
    Me too.
    Me too
    I would encourage you to make this a front page post if you have the time. I think these thoughts and strategies are positive, rational and necessary group-building skills for any long-term group that fulfills rationalist goals. Or maybe it should be in the community guidelines (do these exist? I imagine the sequences as extended community guidelines) so most new members read them over.

    “If I agree, why should I bother saying it? Doesn’t my silence signal agreement enough?”

    That’s been my non-verbal reasoning for years now! Not just here: everywhere. People have been telling me, with various degrees of success, that I never even speak except to argue. To those who have been successful in getting through to me, I would respond with, “Maybe it sounds like I’m arguing, but you’re WRONG. I’m not arguing!”

    Until I read this post, I wasn’t even aware that I was doing it. Yikes!

    The fact is that there is a strong motive to disagree: either I change my opinion, or you do. On the other hand, the motives for agreeing are much more subtle: there is an ego boost, and I can influence other people to conform. Unless I am a very influential person, these two reasons are important in the aggregate, but not much individually. Which leads us to think: there is a similar problem with elections, and why economists don't vote. Anyway, there is a nice analogy with physics: the electromagnetic force is much stronger than the gravitational one, but at large scales gravity is much more influential. (Which is kind of obvious, and made me wonder why no one pointed this out on this post before.)

    BRAVO, Eliezer! Huzzah! It's about time!

    I don't know if you have succeeded in becoming a full rationalist, but I know I haven't! I keep being surprised / appalled / amused at my own behavior. Intelligence is way overrated! Rationalism is my goal, but I'm built on evolved wet ware that is often in control. Sometimes my conscious, chooses-to-be-rationalist mind is found to be in the kiddy seat with the toy steering wheel.

    I haven't been publicly talking about my contributions to the Singularity Institute and others fighting to save us from ourselves. Part of that originates in my father's attitude that it is improper to brag.

    I now publicly announce that I have donated at least $11,000 to the Singularity Institute and its projects over the last year. I spend ~25 hours per week on saving humanity from Homo Sapiens.

    I say that to invite others to JOIN IN. Give humanity a BIG term in your utility function. Extinction is Forever. Extinction is for ... us?

    Thank you, Eliezer! Once again, you've shown me a blind spot, a bias, an area where I can now be less wrong than I was.

    With respect and high regard,
    Rick Schwall, Ph.D.
    Saving Humanity from Homo Sapiens™ :-|

    Cool! Just curious... What do you do for 25 hours a week to save humanity from itself?
    Mostly, I study. I also go to a few conferences (I'll be at the Singularity Summit) and listen. I even occasionally speak on key issues (IMO), such as (please try thinking WITH these before attacking them. Try agreeing for at least a while.):

    • "There is no safety in assuring we have a power switch on a super-intelligence. That would be power at a whole new level. That's pretty much Absolute Power and would bring out the innate corruption / corruptibility / self-interest in just about anybody."
    • "We need Somebody to take the dangerous toys (arsenals) away."
    • "Just what is Humanity up to that requires 6 Billion individuals?"

    All of that is IN MY OPINION. <-- OK, the comments to this post showed me the error of my ways. I'm leaving this here because comments refer to it. Edited 07/14/2010 because I've learned since 2009-09 that I said a lot of nonsense.
    I'm not sure what this was supposed to add, especially with emphasis. Whose opinion would we think it is?
    I've been told that my writing sounds preachy or even religious-fanatical. I do write a lot of propositions without saying "In my opinion" in front of each one. I do have a standard boilerplate that I am to put at the beginning of each missive:

    First, please read this caveat: Please do not accept anything I say as True. Ever. I do write a lot of propositions, without saying, "In My Opinion" before each one. It can sound preachy, like I think I've got the Absolute Truth, Without Error. I don't completely trust anything I have to say, and I suggest you don't, either.

    Second, I invite you to listen (read) in an unusual way. "Consider it": think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold". If you have a reaction (e.g. "That's WRONG!"), please gently save it aside for later. For just a while, please try on the concept, test drive it, use the idea in your life. Perhaps you'll see something even beyond what I offered. There will be plenty of time to criticize, attack, and destroy it AFTER you've "panned for the gold". You won't be missing an opportunity.

    Third, I want you to "get" what I offered. When you "get it", you have it. You can pick it up and use it, and you can put it down. You don't need to believe it or understand it to do that. Anything you BELIEVE is "glued to your hand"; you can't put it down.

    -=-= END Boilerplate

    In that post, I got lazy and just threw in the tag line at the end. My mistake. I apologize. I won't do that again. With respect and high regard, Rick Schwall, Saving Humanity from Homo Sapiens (playing the game to win, but not claiming I am the star of the team)
    This only makes it worse, because you can't excuse a signal. (See rationalization, signals are shallow). Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.
    Vladimir_Nesov wrote on 11 September 2009 08:34:32AM: This only makes what worse? Does it make me sound more fanatical? Please say more about "you can't excuse a signal". Did you mean I can't reverse the first impression the signal inspired in somebody's mind? Or something else?

    OK, I'll start with a prior = 10% that I am fanatical and/or caught in an affective death spiral. What do you recommend I do about my preachy style?

    I appreciate your writings on LessWrong. I'm learning a lot. Thank you for your time and attention. With respect and high regard, Rick Schwall, Ph.D., Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

    What do you recommend I do about my preachy style?

    I suggest trying to determine your true confidence on each statement you write, and use the appropriate language to convey the amount of uncertainty you have about its truth.

    If you receive feedback that indicates that your confidence (or apparent confidence) is calibrated too high or too low, then adjust your calibration. Don't just issue a blanket disclaimer like "All of that is IN MY OPINION."

    OK. Actually, I'm going to restrain myself to just clarifying questions while I try to learn the assumed, shared, no-need-to-mention-it body of knowledge you fellows share. Thanks.
    I can't help but think that those activities aren't going to do much to save humanity. I don't want to send you into an existential crisis or anything but maybe you should tune down your job description. "Saving Humanity from Homo Sapiens™" is maybe acceptable for Superman. It might be affably egotistical for someone who does preventive counter-terrorism re: experimental bioweapons. "Saving Humanity from Homo Sapiens one academic conference at a time" doesn't really do it for me. Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.
    Jack wrote on 09 September 2009 05:54:25PM: I don't wish for it. That part was inside parentheses with a question mark. I merely suspect it MAY be needed. Please explain to me how the destruction follows from the rule of a god-like totalitarian. Thank you for your time and attention. With respect and high regard, Rick Schwall, Ph.D., Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)
    Maybe some Homo Sapiens would survive, humanity wouldn't. Are the human animals in 1984 "people"? After Winston Smith dies is there any humanity left? I can envision a time when less freedom and more authority is necessary for our survival. But a god-like totalitarian pretty much comes out where extinction does in my utility function.
    IIRC, Winston Smith doesn't die; by the end, his spirit is completely broken and he's practically a living ghost, but alive.
    Oh. My mistake. When you wrote, "Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.", I read:

    • [Totalitarian rule...] ... [is] ... the best way to destroy humanity (as in cause and effect)
    • OR maybe you meant: wishing ... [is] ... the best way to destroy humanity

    It just never occurred to me you meant, "a god-like totalitarian pretty much comes out where extinction does in my utility function". Are you willing to consider that totalitarian rule by a machine might be a whole new thing, and quite unlike totalitarian rule by people?
    Jack wrote on 09 September 2009 05:54:25PM: I hear that. I wasn't clear. I apologise. I DON'T KNOW what I can do to turn humanity's course. And, I decline to be one more person who uses that as an excuse to go back to the television set. Those activities are part of my search for a place where I can make a difference. ... but not acceptable from a mere man who cares, eh? (Oh, all right, I admit, the ™ was tongue-in-cheek!)

    I think we may have different connotations. I'm going to reluctantly use an analogy, but it's just a temporary crutch. Please drop it as soon as you get how I'm using the word 'saving'. If I said, "I'm playing football," I wouldn't be implying that

    Two observations:

    • In American culture, when you give money to a charity, you aren't supposed to tell people. Christian doctrine frowns heavily on that, and we are all partly indoctrinated with that doctrine. That's why no one sent their "yes" response to the list.

    • You just wrote a post with 22 web links, and 19 of them were to your own writings. I think that says more about why we can't cooperate than anything else in the post.

    Far from being a negative aspect of the post, the self-linking is a key element of Eliezer's effort to build a common vocabulary for rationalists. I've personally found them extremely helpful for reminding myself of the context of the words, when I've forgotten. They're basically footnotes.

    How can we cooperate if we don't even speak the same language?

    It's better to have those links than not to have them. It's a bit as if Eliezer were writing a large, hypertext book that we are writing footnotes in. But the lack of links to the writings of other people shows a lack of engagement and a self-preoccupation that smart people tend to have. Too often, when we ask others for co-operation, we really mean "get behind my ideas and my agenda". Cooperation involves compromise. It involves participating in the critique of those ideas. It requires, as a prerequisite, believing that others are smart enough to look at the same evidence and see things that you missed. In a forum like this, actual interest in cooperation is evidenced by writing relatively short posts and then responding at length to many of the comments, rather than by writing extremely long posts and then making a few short responses to comments.

    I link to myself because I know what I have written.

    Maybe you should read something written by somebody else sometime.
    This is an unhelpful comment that did not contribute to the conversation, and I interpret it as an attack. Instead of attacking, why not engage EY on why he thinks it is so important to link to what he has written rather than what other people have written? Any time I get the urge to use a "witty" one-liner, I instead ask for the person's reasoning, perspective and logic that led them to their conclusion.

    First let me say that I do not think that attacks are by their very nature impermissible, and if you do, how dare you put "witty" in scare quotes? That's just flat-out unkind.

    Anyway, it's a little hard for me to defend my comments of two years ago against attack, because I no longer remember what prompted me to make them. I will do my best to reconstruct my mental state leading up to the comment I made.

    I don't think I was necessarily on PhilGoetz's side when I read his comment. I think I agreed, and still agree, with Technologos. But when I read the Wise Master's response to it, it didn't sit right with me. It read like an attempt to fight back against attack with anything that came to hand, rather than an attempt to seek truth. Surely, I must have felt, if the Wise Master were thinking clearly, he would see that unfamiliarity with the works of others is not an excuse, but in fact the entire problem. I feel that I wanted to communicate this insight. I chose the form that I did probably because it was the first one that came to mind. I hang out on some pretty rough and tumble internet forums, described by one disgruntled former poster as "geek bevis[sic] and ...

    "Witty" was describing my remark, as in the remarks I hold back on may not actually be witty; I was not trying to reference your remark, though in retrospect it does seem easy to infer that, so I apologize for communicating sloppily. Attacks that do not forward the conversation are not useful. If the attacker does not expose the logic and data behind their attack, then the person being attacked has no logic or data to pick apart and respond to, and has no reason to believe that the attacker is earnest in seeking the truth.
    Your attack against Nominull was, in fact, stronger and less ambiguous than Nominull's. The logic behind the point was actually quite obvious, which is not to say I would have presented it in this context. As Perplexed points out, sometimes there are benefits to making the effort to show that you do know what other people have written. (Incidentally, I upvoted both Eliezer and Phil and left Nominull alone.) Nominull's comment, discourteous or not, furthered the actual conversation while yours did not (and nor did mine). So that isn't the deciding factor here of why your kind of attack is different from Nominull's kind. I think the difference in perception is that you are responding to provocation, which many people perceive as a whole different category - but that can depend which side you empathise with.
    I disagree. It is a very appropriate response to Eliezer's flip dismissal of Goetz's quite sincere (and to my mind, good) suggestions. Eliezer is, of course, very well-read for a man of his age, but he is actually a bit parochial given the breadth of his ambitions and the authoritative, didactic writing style. His credibility, his communication ability, his fundraising, and even his ideas could probably benefit if he made a conscious effort to make his writing a bit more scholarly. I understand that Eliezer is both very busy and very prolific, but I thought that his excuse (that he cited himself so much only for reasons of convenience (or laziness)) was much too dismissive of Phil's arguments - in large part because I think his excuse is quite likely the truth.
    With only a sentence and without back-and-forth conversation, do you have the ability to pull out flippant intent from "I link to myself because I know what I have written"? I do not know EY, so I can not assign myself a high probability of doing so. In truth I subconsciously assigned a high probability that Nominull was in the same boat as me; in other words, I jumped to conclusions. Do you assign yourself a high probability of determining EY's intent from the above? If so, please share if you can. I can imagine EY's statement made with helpful intent (I could have made that statement with helpful intent); responding to it as if it was made with unhelpful intent, without evidence, does not seem rational/helpful to me.
    I think you are attaching too much importance to inferring the intent (flippant vs helpful) of Eliezer's one-line response to several dozen lines of discussion, and too little importance to assessing the tone. In any case, the dictionary definition of flippant seems to be about tone, rather than intent. Eliezer's comment qualifies as flippant. Nominull's response was also flippant by this definition. This matching tone strikes me as appropriate - which is exactly what I said. At the point where Eliezer made his comment, he was being mildly criticized. His flippant comment, which I think was exactly truthful, carried the subtext that he was not particularly interested in discussing those criticisms at that time. He is totally within his rights sending that message. The criticism was mild, and formulating a serious and thoughtful response to it is not something he was required to do. He could have just ignored it. He chose not to. Sometimes clever, conversation-stopping responses don't stop conversations, particularly when they are a little bit rude. Eliezer got a clever and rude response back. And for almost two years, everyone was satisfied with that ending.
    I think there is a high probability that the lack of further comments is just due to the propensity not to post in old conversations. I figured if the sequences and in-post links are to be taken seriously, then the comments should be too. Old comments should not be treated as if they were preserved in carbonite, but as living arguments.
    You can replace intent with tone and I would stand by that point. I could make the same remark without being disrespectful, shallow, or lacking seriousness, and without levity. By your description, Eliezer makes a true but rude remark and receives a rude response back, and this is "appropriate." I do not see how a rude response to what is believed to be a rude comment is productive; it does not bring any logic or new data to the table.
    This example did.
    Are you replying to this?
    It is long past time for chastisement, if it was ever required.
    I respond to a similar comment here. It is not about chastisement, it is about the people, like me, who come and read it later.
    You seem to be remarkably willing to assert how your comments should be interpreted with respect to intent, meaning and social implications. Yet you do not seem to have paid Nominull that same courtesy.
    Well, I know what my intent is, and I know what I want my social implications to be. It makes sense that I try to communicate them. I accept that Nominull hangs "out on some pretty rough and tumble internet forums" and did not have unproductive intentions. I have not claimed that Nominull had unproductive intentions. An example of impoliteness is needed if you want to continue this conversation.
    The observation about American culture (which applies to Australian culture too) is a good one. I don't agree that the 19 links paint such a negative picture. In fact, three external links in a single post is remarkable.

    In hindsight, the problem with your fundraiser was obvious. There were two communications channels: one private channel for people who contributed, and one channel for everyone else. Very few people will post a second message after they've already posted one, so the existence of the private channel prevented contributors from posting on the mailing list. Removing all the contributors from the public channel left only nay-sayers and an environment that favored further nay-saying. The fix would be to merge the two channels: publish the messages received from contributors, unless they request otherwise.
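    The selection effect described above can be sketched in a few lines of Python. Everything here is hypothetical (the function name, the 2:1 ratio of supporters to critics are invented for illustration); the point is only how splitting the channels distorts visible sentiment:

```python
def visible_negativity(supporters, critics, merged_channels):
    """Fraction of publicly visible messages that are negative.

    With split channels, supporters reply privately (e.g. via PayPal
    notes), so only critics appear on the mailing list.  Merging the
    channels publishes supporters' notes alongside the criticism.
    """
    if merged_channels:
        public = supporters + critics
    else:
        public = critics  # supporters' messages never reach the list
    return critics / public if public else 0.0

# Illustrative population: supporters actually outnumber critics 2:1.
split = visible_negativity(supporters=100, critics=50, merged_channels=False)
merged = visible_negativity(supporters=100, critics=50, merged_channels=True)
print(split)   # 1.0  -> the list looks uniformly negative
print(merged)  # ~0.33 -> the list reflects actual sentiment
```

    Even with a heavily supportive population, the split-channel setup makes the public conversation read as 100% negative, which is exactly the dynamic the comment describes.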


    I agree with everything you said in your talk, and I think you're brilliant.

    I've noticed that I am often hesitant to publicly agree with comments and posts here on LessWrong because often agreement will be seen as spam. While upvotes do count as something, it is far easier to post a disagreement than to invent an excuse to post something that mostly agrees. This can be habit forming.

    Comparing, say, Less Wrong with a Mensa online discussion group, I've noticed that my probability of disagreement is far lower with the self-identified rationalists than with the self- and test-identified generic smart people. The levels of Dark Side Argument are almost incomparable. I have begun disengaging from Dark debates wherever convenient, purely to form better habits of agreement.


    In fact, agreement is a sort of spam - it consumes space and usually doesn't bring new thoughts. When I imagine a typical conference where the participants are constantly running out of time, visualising the 5-minute question interval consumed by praise for the speaker helps me a lot in rationalising why the disagreement culture is necessary. Not that it would be the real reason why I would flee screaming out of the room; I would probably do so even if time wasn't a problem.

    When I read the debates at e.g. I am often disgusted by how much agreement there is (and it is definitely not a Dark Side blog). So I think I am strongly immersed in the disagreement culture. But, all cultural prejudices aside, I will probably always find a discussion consisting of "you are brilliant" type statements extraordinarily boring.


    It doesn't have to bring new thoughts to serve a purpose. A chorus of agreement is an emotional amplifier.

    Not only that, it becomes a glue that binds people together, the more agreement the stronger the binding (and the more that get bound). At least that is the analogy that I use when I look at this; we (rationalists) have no glue, they (religions) have too much.
    Agreement does not need to be contentless and therefore spam. It can fill in holes in the argument, take a different perspective (helping a different segment of the reading population), add specific details to the argument that were glossed over, and much more. It sounds like you have a problem with lack of content more than you do with agreement. I am sure you would find contentless disagreement just as boring.
    Agreements are a lot more often contentless, as a rule. When disagreeing, people feel motivated to include some reasons, and even if they don't, the one who was disagreed with feels motivated to ask for the reasons. But in principle you are right that my objections don't primarily aim at agreement.
    I think you are focusing too much on discussions. There are other activities where success can depend heavily on not acting alone, and it is in those types of activities (such as fundraising, seizing political power, reforming institutions, etc) that rationalist-types are disadvantaged by their lack of coordination.
    I agree!
    You didn't read Eliezer's post very carefully, did you? You need more practice in agreement and conformity. There are a limited number of "right" answers out there. It's alright to agree on them, when they are found.

    I'm going to agree with the people saying that agreement often has little to no useful information content (the irony is acknowledged). Note, for instance, that content-free "Me too!" posts have been socially contraindicated on the internet since time immemorial, and content-free disagreement is also generally verboten. This also explains the conference example, I expect. Significantly, if this is actually the root of the issue, we don't want to fight it. Informational content is a good thing. However, we may need to find ways to counteract the negative effects.

    Personally, having been somewhat aware of this phenomenon, when I've agreed with what someone said I sometimes try to contribute something positive; a possible elaboration on one of their points, a clarification of an off-hand example if it's something I know well, an attempt to extend their argument to other territory, &c.

    In cases like the fundraising one, where the problem is more individual misperception of group trends, we probably want something like an anonymous poll--i.e., "Eliezer needs your help to fund his new organization to encourage artistic expression from rationalists. Would you donate money to this cause?", with a poll and a link to a donation page. I would expect you'd actually get a slightly higher percentage voting "yes" than actually donating, though I don't know if that would be a problem. You'd still get the same 90% negative responses, but people would also see that maybe 60% said they would donate.

    "A slightly higher percentage"? More like: no correlation.

    I recall that McDonalds were badly burned by "would you X". Would people buy salads? oh god yes, they'd love an opportunity to eat out and stick to their diets. Did they buy salads, once McDonalds had added them? Nope.

    Similarly, I recall that last US election the Ron Paul Blimp campaign was able to get a lot more charitable pledges than real-world money, and pretty quickly died from underfunding.

    Someone[1] must be buying those salads, as McDonalds is keeping them on the market, and given that food spoils, it doesn't make financial sense for them to keep offering a product which doesn't sell.

    [1]: I've actually tried the McDonalds salad 3 times. The first time, it was very (and surprisingly) good. The other two times it was mediocre.
    You can keep small stocks of an item, and it can have positive effects beyond direct revenues, e.g. if families with one dieting or vegetarian member don't avoid McDonald's because that person can eat a salad.
    Paul Crowley:
    I think the positive effect is that they can say that they sell salads, people can convince themselves they intend to buy the salad, and so on.
    I saw a study recently that said that the mere presence of a salad on the menu increases people's consumption. I deeply doubt that fast food chains were surprised by that result. From the nature of the study, it's not even about convincing themselves they intend to buy a salad. Merely by having seriously considered the option, they give themselves virtue points which offset the vice of more consumption.
    Or rather, another positive effect. These explanations aren't mutually exclusive. That being said, nice insight.
    Yes, excellent point that should be underlined for the readers here. People's metaknowledge is very poor. Their knowledge about themselves, especially so.
    You make an excellent point, I was not really thinking clearly there. However, I will note that my intent was not that it should produce an accurate prediction of donations, but to better gauge public opinion on the idea to counteract the tendency to agree silently but disagree loudly.

    I've worked for a number of nonprofits, and in analysis of our direct mailings we would get a better response from a mailing that included one of two things:

    1. A single testimonial mentioning the amount that some person gave
    2. Some sort of comment about the group average (listeners are making pledges of $150 this season)

    This is one of the reasons that some types of nonprofits choose to create levels of giving; my guess is that this games these common levels of giving by creating artificial norms of participation. Note: you can base your levels on actual evidence and not just round numbers (plus inflation, right?).

    We also generally found that people respond well to the idea of a matching donation (which is rational since your gift is now worth more).
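    The "your gift is now worth more" point is simple arithmetic; a minimal sketch (the match ratios and cap are invented for illustration, not drawn from any particular campaign):

```python
def effective_gift(donation: float, match_ratio: float = 1.0,
                   match_cap: float = float("inf")) -> float:
    """Total received by the charity: the gift plus the sponsor's match,
    where the match is match_ratio * donation, capped at match_cap."""
    match = min(donation * match_ratio, match_cap)
    return donation + match

print(effective_gift(100))                   # 1:1 match doubles the gift -> 200.0
print(effective_gift(100, match_ratio=0.5))  # 50-cent match -> 150.0
print(effective_gift(100, match_cap=25))     # capped match -> 125.0
```

    The appeal to donors is that the marginal value of each dollar given is (1 + match_ratio) until the cap is hit, which is why matches are typically announced with a deadline and a cap.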

    I do believe that anonymous fundraising removes information about community participation that is very valuable to potential donors. Part of making a donation is responding to the signal that you are not the only one sending a check to a hopeless office somewhere.

    Anonymous polls might be a good idea, but especially among rational types, you might want the individual testimony: you get to see some of the reasoning!

    I think the synagogue in the story picked up on these ideas and used them effectively. But the nice thing about raising money through direct mailing and the internet is that you can run experiments!

    The reason I specified anonymity was to reduce the likelihood of a social stigma attached to not donating. The idea of pressuring people into an otherwise voluntary gesture of support makes me very uncomfortable. However, I may be overcautious on that aspect, and I defer to your greater experience with fundraising. Do you have any other empirical observations about response to fundraising efforts? You could consider submitting an article on the subject, either as it relates to instrumental rationality, or for the benefit of anyone else who might want to organize a rationality-related non-profit.
    I think your caution is warranted; the fact that you can see the other people in the synagogue who don't stand up could be very hurtful to the nonparticipants. Highlighting individual donors or small groups is a good way to show public support without giving away too much information about your membership's participation as a whole. If you are interested in more rigorous studies (we did ours in Excel), you might want to try Dean Karlan's "Does Price Matter in Charitable Giving? Evidence from a Large-Scale Natural Field Experiment." I will try to dig up some other papers online.
    Amongst a group of people who know and interact with each other regularly, such as a synagogue, who has the means to donate money and who does not would be an extremely obvious piece of information to the members of that group. There are actually two actions taken here by members: they either do not donate, or they donate a certain amount. To the members of the group, the amount donated is as much of an information channel as the choice to donate or not to donate. Those who donate a lot and are rich may cause offence by donating less than expected; those who donate a little when there is no expectation may gain esteem.

    You are proposing a situation in which an individual donates less than expected by such a magnitude that it seriously affects people's esteem for them. This is possible, although given social pressures unlikely. It can occur at all because the magnitude of the donation, combined with the wealth of the individual and the support for the cause, are all easily calculable: the magnitude of the donation is known, wealth is implied by clothes, status symbols, or frank discussions about income, and support for the place of worship is expected to be high.

    In a group of rational people donating to support a cause, they have the option of donating, not donating, and voicing support or criticism. You have established reasonable grounds for why people do not arbitrarily voice support, and for why people voice criticism. But let's look at the amount donated and imagine it were being done publicly: is there a state where people can be hurt by donation or non-donation? Even if the amount donated and a reasonable guess at the wealth of the individual are available, the amount donated can still vary by the level of support the person feels for the cause. There is no level of donation that is 'incorrect', just as there is no arbitrary 'correct' level of support. Therefore the situation is most unlikely to cause social harm to the individual donating, or to those who do not.

    To be honest, I suspect a lot of those folks, and I include myself here, were anti-collectivists first.

    In my own mind, the emotive rule "I might follow, but I must never obey" is built over a long childhood war and an eventual hard-fought and somewhat Pyrrhic victory. I know it's reversed stupidity, but it's hard to let go.

    What good rationalist techniques are there for changing such things?


    Ask "what's bad about obeying?" Imagine a specific concrete instance of obeying, and then carefully observe your automatic, unconscious response. What bad thing do you expect is going to happen?

    Most likely, you will get a response that says something about who you are as a person: your social image, like, "then I'll be weak". You can then ask how you learned that obeying makes someone weak... which may be an experience like your peers teasing you (or someone else) for obeying. You can then rationally examine that experience and determine whether you still think you have valid evidence for reaching that conclusion about obedience.

    Please note, however, that you cannot kill an emotional decision like this without actually examining your own evidence for the proposition, as well as against it. The mere knowledge that your rule is irrational is not sufficient to modify it. You need to access (and re-assess) the actual memor(ies) the rule is based on.


    Recognizing that "I might follow, but I must never obey" is an emotional rule is already a good first step, much better than trying to rationalize it.

    I've recognized that same pattern in myself - a bad feeling in response to the idea of following / obeying even when it's an objectively good idea to do so. I imagined an "asshole with a time machine" who would follow me around, observe what I did (buy a ham sandwich for lunch, enter a book store...), go back in time a few seconds before my decision and order me to do it.

    Once I realized I was much angrier at this hypothetical asshole than it was reasonable to be, I tried getting rid of that anger. I guess I succeeded (the idea doesn't bug me as much), but I don't know if it means I won't have any more psychological resistance to obeying. I am probably still pretty biased towards individualism / giving more value to my opinion just because it's my own, but I'd like to find ways to get rid of that.

    "What good rationalist techniques are there for changing such things?" Carefully examining the potential reasons for going along with someone else. Emile's point below is a very good one. 'Obedience' implies that we must go along with what someone says we should do. It's much better to think (hopefully accurately) that we're choosing to do something which coincidentally is also what someone has suggested. We don't need to choose to obey to go along. Carefully examining the justifications for actions is also important. If there are compelling reasons to do X, the fact that we've been "ordered" to do X is irrelevant, just as being ordered NOT to do X is.
    Bruno Mailly:
    Unfortunately, "doing what they say" tends to make people believe they are the top dog. And a bit too many people are quick to get this idea, reluctant to abandon it, and abuse it to no end. So, pragmatically, sometimes it's better to find another way to get the desired result, or at least delay action to diminish that bad association.
    Really? I've always thought my similar rule was embedded in my DNA.
    Stating that you are not obeying, and that you are taking a particular course of action because it is a good idea, seems to work/help some people. Realize that the anti-collectivist pull is an exploitable weakness: it leaves you vulnerable to people who are perceptive and want to harm you. Some would say that you should just avoid getting people to want to harm you; however, a consequence is that you would have to avoid standing up to people who harm the world, people you care for, and sometimes yourself.

    Wait a second, now we're using Jews trying to run a synagogue as an example of a group who cooperate and don't always disagree with each other for the sake of disagreeing? Your synagogue must have been very different from mine. You never heard the old "Ten Jews, ten opinions - or twenty if they're Reform" joke? Or the desert island joke?

    I also agree with everyone. In particular, I agree with Cameron and Prase that it's tough to just say "I agree". I agree with ciphergoth that I worry that I'm sucking up to you too much. I agree with Anna Salamon that we tend to be intellectual show-offs. I agree with Julian that many of us probably started off with a contrarian streak and then became rationalists. I agree with Jacob Lyles that there's a strong game theory element here - I lose big if rationalists don't cooperate, I win a little if we all cooperate under Eliezer's benevolent leadership, but to a certain way of thinking I win even more if we all cooperate under my benevolent leadership and there's no universally convincing proof that cooperating under someone else is always the highest utility option. And I agree with practically everything in the main post.

    One thing I don't agree with: being ashamed of strong feelings isn't a specifically rationalist problem. It's a broader problem with upper/middle class society. Possibly more on this later.

    Eliezer Yudkowsky:
    I've never been dragged to any other religious institution, so I wouldn't have any other example to use. I expect these forces are much stronger at Jesus Camp or the Raelians. But yes, even Jewish institutions still coordinate better than atheist ones.
    Granting that the jokes you refer to are generally accurate, wouldn't that make the synagogue a better example for a rationalist Cat Herd than some other religious organization where people "think" in lockstep with the Dear Leader? The synagogue would represent an example of a group of people who manage to cooperate effectively even with a high level of dissensus (neologism for the opposite of consensus). Which, as I understand it, is the goal Eliezer is aiming for in this post.
    And you win the most when the group is so rational that almost anyone could serve as the benevolent leader.
    The group trait required is not rationality - it is other traits that also share positive affect.
    I was not asserting that rationality is all that you need to make the most efficient group, if that was what you were getting at. I think we agree that, starting with groups A and B both with x skills, if group A is more rational it will also be the more effective group. My argument was that as the ability of the group to act rationally increases, the utility difference between being a member and being the leader will decrease, as the group becomes better at judging the leader's value.

    I personally see public disagreements as a way to refine the intent of the person under the spotlight rather than a social display of individualism. When I disagree with someone it is not for the sake of disagreeing but rather to refine what I may think is a good idea that has a few weak points. I do this to those I respect and agree with because I hope that others will do this to me.

    I think the broader question here is not whether we should encourage widespread agreement in order to create cohesion - but rather if we can ensure that the tenets we collecti…

    On 'What Do We Mean By "Rationality"?' when you said "If that seems like a perfectly good definition, you can stop reading here; otherwise continue." - I took your word for it and stopped reading. But apparently comments aren't enabled there.

    You have significantly altered my views on morality (Views which I put a GREAT deal of mental and emotional effort into.) I suspect I am not alone in this.

    I think there's a fine line between tolerating the appearance of a fanboy culture, and becoming a fanboy culture. The next rationalist pop star…

    Agreement and disagreement look more like skills that we can develop (and can improve at both of) than ends of a continuum (where moving toward one means moving away from the other). I mean, we can reduce the apparent and actual extent to which we're an Eliezer fan-club or echo chamber, and improve our armor against the emotional and social pressures that "we all think the Great Leader is perfect" tends to form. And we can also, simultaneously, improve our ability to endorse good ideas even when someone else already said that idea, and to actually coordinate to get stuff done in groups.
    I think Eli has succeeded in attracting enough very clever people to the community that this is not a massive danger. If Robin, you, Carl S, Yvain, Nick T, Nick Hay, Vladimir N, etc all disagreed with him for the same reason, and he didn't retract, he would look silly.

    "[A] survey of 186 societies found, belief in a moralising God is indeed correlated with measures of group cohesion and size." - God as Cosmic CCTV, Dan Jones

    I'm not sure if this was at work in your fundraiser, but I know I tend to see exhortations from others that I give to charitable causes/nonprofits as attempts at guilt tripping. (I react the same way when I'm instructed to vote, or brush my teeth twice a day, or anything else that sounds less like new information and more like a self-righteous command.) For this reason, I try to keep quiet when I'm tempted to encourage others to give to my pet charity/donate blood/whatever, for fear that I'll inspire the opposite reaction and hurt my goal. I don't always succeed, but that's an explanation other than a culture of disagreement for why some people might not have contributed to the discussion from a pro-giving position.

    Good points.

    This may be why very smart folks often find themselves unable to commit to an actual view on disputed topics, despite being better informed than most of those who do take sides. When attending to informed debates, we hear a chorus of disagreement, but very little overt agreement. And we are wired to conduct a head count of proponents and opponents before deciding whether an idea is credible. Someone who can see the flaws in the popular arguments, and who sees lots of unpopular expert ideas but few ideas that informed people agree on, may giv…

    There is no guarantee of a benevolent world, Eliezer. There is no guarantee that what is true is also beneficial. There is no guarantee that what is beneficial for an individual is also beneficial for a group.

    You conflate many things here. You conflate what is true with what is right and what is beneficial. You assume that these sets are identical, or at least largely overlapping. However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don't see any compelling rational reason to believe this to be the case.

    Irration…

    I one-box on Newcomb's Problem, cooperate in the Prisoner's Dilemma against a similar decision system, and even if neither of these were the case: life is iterated and it is not hard to think of enforcement mechanisms, and human utility functions have terms in them for other humans. You conflate rationality with selfishness, assume rationalists cannot build group coordination mechanisms, and toss in a bit of group selection to boot. These and the referenced links complete my disagreement.

    Thanks for the links, your corpus of writing can be hard to keep up with. I don't mean this as a criticism, I just mean to say that you are prolific, which makes it hard on a reader, because you must strike a balance between reiterating old points and exploring new ideas. I appreciate the attention. Also, did you ever reply to the Robin post I linked to above? Robin is a more capable defender of an idea than I am, so I would be intrigued to follow the dialog.
    If you are rational enough and perceptive enough, and EY's writing is consistent enough, at some point you will not have to read everything EY writes to have a pretty good idea of what his views on a matter will be. I would bet a good sum of money that EY would prefer to have his readers gain this ability than read all of his writings.
    "However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don't see any compelling rational reason to believe this to be the case." Except that we are free to adopt any version of rationality that wins. Rationality should be responsive to a given universe design, not the other way around. "Irrational belief systems often thrive because they overcome the prisoner dilemmas that individual rational action creates on a group level. Rational people cannot mimic this." Really? Most of the "individual rationality -> suboptimal outcomes" results assume that actors have no influence over the structure of the games they are playing. This doesn't reflect reality particularly well. We may not have infinite flexibility here, but changing the structure of the game is often quite feasible, and quite effective.
    For example, we could establish a social norm that compulsive public disagreement is a shameful personal habit, and that you can't be even remotely considered "formidable" if you haven't gotten rid of the urge to seek status by pulling down others.
    I disagree.
    I don't think your argument applies to jacoblytes' argument. Jacoblytes claims that there is no reason for "rational" to equal "(morally/ethically) right", unless an intelligent designer designed the universe in line with our values. So it's not about winning versus losing. It's that unless the rules of the game are set up just in a certain way, then winning may entail causing suffering to others (e.g. to our rivals).
    My writing in these comments has not been perfectly clear, but Nebu you have nailed one point that I was trying to make: "there is no guarantee that morally good actions are beneficial".

    The Christian morality is interesting, here. Christians admit up front that following their religion may lead to persecution and suffering. Their God was tortured and killed, after all. They don't claim that what is good will be pleasant, as the rationalists do. To that degree, the Christians seem more honest and open-minded. Perhaps this is just a function of Christianity being an old religion and having the time to work out the philosophical kinks. Of course, they make up for it by offering infinite bliss in the next life, which is cheating. But Christians do have a more honest view of this world in some ways.

    Maybe we conflate true, good, and prudent because our "religion" is a hard sell otherwise. If we admitted that true and morally right things may be harmful, our pitch would become "Believe the truth, do what is good, and you may become miserable. There is no guarantee that our philosophy will help you in this life, and there is no next life". That's a hard sell. So we rationalists cheat by not examining this possibility. There is some truth to the Christian criticism that Atheists are closed-minded and biased, too.
    In that case, believing in truth is often non-rational. Many people on this site have bemoaned the confusing dual meanings of "rational" (the economic utility maximizing definition and the epistemological believing in truth definition). Allow me to add my name to that list. I believe I consistently used the "believing in truth" definition of rational in the parent post.
    I agree that the multiple definitions are confusing, but I'm not sure that you consistently employ the "believing in truth" version in your post above.* It's not "believing in truth" that gets people into prisoners' dilemmas; it's trying to win. *And if you did, I suspect you'd be responding to a point that Eliezer wasn't making, given that he's been pretty clear on his favored definition being the "winning" one. But I could easily be the one confused on that. ;) "In that case, believing in truth is often non-rational." Fair enough. Though I wonder whether, in most of the instances where that seems to be true, it's true for second-best reasons. (That is, if we were "better" in other (potentially modifiable) ways, the truth wouldn't be so harmful.)
    "Except that we are free to adopt any version of rationality that wins." There's only one kind of rationality.
    I agree, but that one kind is able to determine an optimal response in any universe, except one where no observable event can ever be reliably statistically linked to any other, which seems like it could be a small subset, and not one we're likely to encounter. Certainly, there are any number of world-states or day-to-day situations where a full rigorous/sceptical/rational and therefore lengthy investigation would be a sub-optimal response. Instinct works quickly, and if it works well enough, then it's the best response. But obviously, instinct cannot self-analyze and determine whether and in what cases it works "well enough," and therefore what factors contribute to it so working, etc.

    Passing the problem of a gun jamming to the Rationality-Function might return the response: "If the gun doesn't fire, 90% of the time pulling the lever action will solve the problem. The other 10% of the time, the gun will blow up in your hand, leading to death. However, determining to reasonable certainty which type of problem you're experiencing, in the middle of a firefight, will lead to death 90% of the time. Therefore, train your Instinct-Function to pull the lever action 100% of the time, and rely on it rather than me when seconds count." Does this sound like what you mean by a "beneficial irrationality"?

    Also: I propose that what seems truly beneficial seems both true and beneficial, and what seems beneficial to the highest degree seems right. To me, these assertions appear uncontroversial, but you seem to disagree. What about them bothers you, and when will we get to see your article?
    No. That's not really what I meant at all. Take nationalism or religion, for example. I think both are based on some false beliefs. However, a belief in one or the other may make a person more willing to sacrifice his well-being for the good of his tribe. This may improve the average chances of survival and reproduction of an individual in the tribe. So members of irrational groups out-compete the rational ones. In the post above Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it's hurting the rational tribe. That's informative, and sort of my point.

    There is some evidence that we have brain structures specialized for religious experience. One would think that these structures could only have evolved if they offered some reproductive benefit to animals becoming self-aware in the land of tooth and claw. In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn't recommend it to a 13th century peasant.

    This is not true a priori. That is my point. My challenge to you, Eliezer, and the other denizens of this site is simply: "prove it". And I offer this challenge especially to Eliezer. Eliezer, I am calling you out. Justify your optimism in the prudence of truth. Disprove the parable of Eve and the fruit of the tree of knowledge.
    I don't know 'bout no Eve and fruits, but I do know something about the "god-shaped hole". It doesn't actually require religion to fill, although it is commonly associated with religion and religious irrationalities. Essentially, religion is just one way to activate something known as a "core state" in NLP. Core states are emotional states of peace, oneness, love (in the universal-compassion sense), "being", or just the sense that "everything is okay". You could think of them as pure "reward" or "satisfaction" states. The absence of these states is a compulsive motivator.

    If someone displays a compulsive social behavior (like needing to correct others' mistakes, always blurting out unpleasant truths, being a compulsive nonconformist, etc.) it is (in my experience) almost always a direct result of being deprived of one of the core states as a child, and forming a coping response that seems to get them more of the core state, or something related to it. Showing them how to access the core state directly, however, removes the compulsion altogether. Effectively, it's like wireheading directly to the core state internally drops the reward/compulsion link to the specific behavior, restoring choice in that area.

    Most likely, this is because it's the unconditional presence of core states that's the evolutionary advantage you refer to. My guess would be that non-human animals experience these core states as a natural way of being, and that both our increased ability to anticipate negative futures, and our more-complex social requirements and conditions for interpersonal acceptance, actually reduce the natural incidence of reaching core states. Or, to put it more briefly: core states are supposed to be wireheaded, but in humans, a variety of mechanisms conspire to break the wireheading... and religion is a crutch that reinstates it externally, by exploiting the compulsion mechanism. Appropriately trained rationalists, on the other hand, can simply reinstate the wirehead…
    Eliezer Yudkowsky:
    Reply here.
    "Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it's hurting the rational tribe. That's informative, and sort of my point." So if that's Eliezer's point, and it's also your point, what is it that you actually disagree about? I take Eliezer to be saying that sometimes rational individuals fail to co-operate, but that things needn't be so. In response, you seem to be asking him to prove that rational individuals must co-operate - when he already appears to have accepted that this isn't true. Isn't the relevant issue whether it is possible for rational individuals to co-operate? Provided we don't make silly mistakes like equating rationality with self-interest, I don't see why not - but maybe this whole thread is evidence to the contrary. ;)
    My point isn't exactly clear for a few reasons. First, I was using this post opportunistically to explore a topic that has been on my mind for a while. Secondly, Eliezer makes statements that sometimes seem to support the "truth = moral good = prudent" assumption, and sometimes not. He's provided me with links to some of his past writing. I've talked enough; it is time to read and reflect (after I finish a paper for finals).
    True, but that "one kind of rationality" might not be what you think it is. Conchis's point holds if you use "rationality" = "everything should always be taken into account, if possible" or something like that. A "rational" solution to a problem should always take into account those "but in the real world it doesn't work like that..." objections. Those are part of the problem, too. For example, a political leader acting "rationally" will take into account the opinion of the population (even if they are "wrong" and/or give too much importance to X) if it can affect his results in the next election. The importance of this depends on his "goal" (position of power? well-being of the population?) and on the alternative if not elected (will my opponent's decisions do more harm?).

    I completely agree with this post. It's heartwarmingly and mindnumbingly agreeable; I would like to praise it and applaud it forever and ever. On a more serious note, personally it feels like you're not contributing anything to the conversation if you're just agreeing. Like, for example, if I read 100 posts here, I don't feel compelled to add a comment which says just "I agree" to each of them, because it feels like it doesn't add to the substance of the issue. - So I'm totally doing what the post predicts.

    I have really read a hundred or so post…

    [This comment is no longer endorsed by its author]

    On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge. I recall a lovely experiment which showed that politically opinionated students with more knowledge of the issues reacted less to incongruent evidence, because they had more ammunition with which to counter-argue only incongruent evidence.

    What exactly is the problem with this? The more knowledge I have, the smaller a weighting I place on any new piece of data.
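    The distinction matters here: downweighting new data because a belief already rests on lots of evidence is proper Bayesian behavior, while the students in the cited experiment were counter-arguing only the incongruent evidence. The legitimate version can be sketched with a Beta-Binomial model (the counts are invented for illustration): one new observation moves a well-informed belief far less than a poorly-informed one.

```python
def beta_mean(successes: int, failures: int) -> float:
    """Posterior mean of a Beta(successes+1, failures+1), i.e. a
    uniform prior updated on the observed counts."""
    return (successes + 1) / (successes + failures + 2)

# One new congruent observation (a single extra "success")...
weak_shift = beta_mean(4, 3) - beta_mean(3, 3)        # little prior evidence
strong_shift = beta_mean(301, 300) - beta_mean(300, 300)  # lots of prior evidence

print(f"shift with 6 prior observations:   {weak_shift:.4f}")
print(f"shift with 600 prior observations: {strong_shift:.4f}")
```

    The key property is that the shrinking update applies symmetrically to congruent and incongruent evidence alike; the half-a-rationalist failure mode is shrinking only the updates that point the wrong way.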

    Seems so: I am probably so rational because I have ASD; people with ASD don't include emotions in their reasoning. And I am great at logic! I have aphantasia, meaning no imagination. Even if I understand some logic perfectly, I couldn't make you an exercise for it! And I can give almost no examples, as I have next to zero imagination. That's also maybe why I am so rational. But I am somehow very creative, because of ADHD, perhaps? And I have overexcitabilities and I am a very emotional and sensitive person! Emotions are tied to creativity! I get excited easily, but also bored easily! Also I have a bad memory, so I forget things and have to constantly reinvent them and revise them from zero. But I am very logical, critical, and rational! This is so interesting: I saw some guy who has it the same, and he's almost exactly the way I am, except the ASD and ADHD! So interesting! I also didn't know anything until I was 21 and just played games; then I read about a million articles in a year about "Free Will". I always try to take everything from zero, and I have ADHD so I see it from all perspectives. I also revise my views; even if I am wrong, I will learn so much from that! Being wrong is as important for learning as being right! You see something doesn't work and get information from that! Whoever tries to be only right doesn't really learn why something is bad and why something is good! We don't know anything for sure except "I think therefore I am". Even that, maybe: what if I am dead and alive at the same time, as in QM (and isn't this only an analogy? no idea if it can be extended to human death!)? Unless I saw all permutations of everything, how could I know? And even most brillia…

    You're awesome, Eli. I love the mix of rationality and emotion here. Emotion is a powerful tool for motivating people. We of the Light Side are rightfully uncomfortable with its power to manipulate, but that doesn't mean we have to abandon it completely.

    I recently suggested a rationality "cult" where the group affirmation and belonging exercise is to circle up and have each person in turn say something they disagree with about the tenets of the group. Then everyone cheers and applauds, giving positive feedback. But now I see that this is goi... (read more)

    I think there's an interesting moral of the anecdote, but I'm not sure it's the one you expressed.

    My conclusion is: rationalists who desire to discard the burdensome yoke of their cultural traditions, linked inextricably as they are to religion, will have to relearn an entirely new set of cultural traditions from scratch. For example, they will need to learn a new mechanism design that allows them to cooperate in donating money to a cause that is accepted as being worthwhile (I think the "ask for money and then wait for people to call out contributions" scheme is damned brilliant).

    Here's an even better one, under the right circumstances: "Would everyone please stand up for a moment? Thank you. Now, please remain standing if you believe that our organization is doing important things for the good of the world. Terrific, terrific. Okay, please continue to stand if you're going to make a pledge of at least $X. Fantastic! Now, please continue to stand if you're going to make a pledge of at least $X*2..." Of course, it won't work very well on a room full of non-conformists... you might have trouble getting them to stand in the first place, especially if they know what's coming.
    Eliezer Yudkowsky
    That only works once, if that much. People don't like feeling forced and manipulated.
    "Right circumstances" includes support for your cause and rapport with your audience, such that most of them don't feel manipulated. The one time I saw that method used, the speaker already had the audience in the palm of his hand, such that they felt they'd already gotten their money's worth just from having listened to him. The stand-up/opt-out trick was just to push an already-high expected conversion rate higher. (An example of how good a rapport he had: early in the presentation, he asked that people please promise to not even attempt to give him any money that day... and several people laughed and shouted "No!") Of course, I suppose if you're that good, the trick is moot. On the other hand, the public approach your synagogue used is equally manipulative... it just builds the conformity pressure more slowly, instead of all at once.

    As the old joke says: What do you mean 'we', white man?

    The real reason ostensibly smart people can't seem to cooperate is that most of them have no experience with reaching actual conclusions. We train people to make whatever position they espouse look good, not to choose positions well.

    What makes a position well-chosen or more likely to assist in reaching actual conclusions?
    The logical structure of the best argument supporting it, the quality of the evidence in that argument, and the extensiveness of that evidence. Instead of those things, most of us pay attention to rhetoric and status. Take a look at high school speech and debate organizations, and the things they stress. What development of skills and techniques do their debates encourage?
    A good point, and a serious problem. When I was in high school debate (Lincoln-Douglas), I hated the degree to which the competition was really about jargon and citation of overwhelming but irrelevant "evidence." I think the tipping point was when somebody claimed that teaching religion in public schools would lead to an environmental catastrophe (and even more, it was purely an argument from authority). At one point, I ran a case that relied on no empirical evidence whatsoever (however abhorrent that may sound here): it was a quasi-Aristotelian argument that if you accept the value in the first premise--I believe it was "knowledge"--then the remainder followed. The whole case was perhaps three minutes long, half the allowed time, and formatted to make the series of premises and conclusions very obvious. Best I could tell, there was only one weak link in the argument that was easily debatable. I correctly guessed that the people I was debating were more used to listing "evidence" than arguing logic, and most people had absolutely no idea how to handle even clearly stated premises and conclusions. I was arguing against the position I actually hold, which is why there was still a flaw in the argument, but it won the majority of the debates nonetheless. Sad, more than anything.
    This "best argument" idea disconsiders the danger of one argument against an army

    Perhaps a way to have comments of agreement that can also work as signalling your own smarts would be to say that you agree, and that the best part/most persuasive part/most useful part is X while providing reasons why. 

    Isn't the secret power of rationality that it can stand up to review? Religious cults are able to demand extreme loyalty because the people are not presented with alternatives and are not able to question the view they are handed. One of our strengths seems to be discernment and argumentation, which naturally leads to fractious in-fighting. What would we call "withholding criticism for the Greater Good"?

    The difference is simply in the critic's motivation: are they trying to improve the situation, or just trying to avoid the expected outcome of agreement? E.g., are you criticizing charities because you want them to do better, or because you don't want to shell out the money AND don't want to admit it? (I'm unashamedly in the "I don't want to send money to Africa and I don't care if I have a logical reason for it" camp, and so have no need to make up a bunch of reasons it's bad.) If the critic were really interested in improvement, they'd be suggesting improvements or better yet, DOING something about improvement.

    "But if you tolerate only disagreement - if you tolerate disagreement but not agreement - then you also are not rational. You're only willing to hear some honest thoughts, but not others. You are a dangerous half-a-rationalist."

    • Excellent point. I agree completely, and have had similar thoughts about the problem with the "skeptic" community myself. upvote

    To point in the rough direction of an empirical cluster in personspace. If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.

    If someone understands the phrase "empirical cluster in personspace," they probably are who you're talking about. =)

    Eliezer Yudkowsky
    That was what the first draft said, but I considered it for a few moments and realized that as eloquent statements go, it suffered the unfortunate flaw of not actually being true.

    This is very interesting; I have usually refrained from replying because I could not think of anything to say that wasn't trivial. I will take care to voice agreement in the future where applicable.

    But none of those donors posted their agreement to the mailing list. Not one.

    Couldn't you just ask contributors for the right to make their donations public?

    The Christian and other ethics often demand that the left hand not know what the right hand is doing. However, you can certainly indicate the sum of donations so far without violating anyone's privacy. The commitment of those who do donate may be more inspiring than the excuses of those who do not.
    An automated reply system could make a post with the donated amount and a unique anonymous user name. That way, people reading the counter-arguments would see people donating between posts.

    Then clearly your fund-raising drive would have benefited from a mechanism for publicizing and externalizing support.

    Charitable organizations commonly use a variety of such methods. The example you gave is just one. If correctly designed the mechanisms do not cause support to be swamped by criticism, and they can operate without suppressing any free thought or speech.

    E.g. publishing (with their agreement) the names of donors, the amounts, and endorsements; using that information to solicit from other donors; getting endorsements from respected peo... (read more)

    Way to go Eliezer, you have my full support! And another great posting, btw!

    To some extent, this was discussed in "The Starfish and the Spider", which is about "leaderless groups". The book praises the power of decentralized, individualistic cultures (that you describe as "Light Side"). However, it admits that they're slower and less-well coordinated than hierarchical organizations (like the military, or some corporations).

    You've outlined some of the benefits (recruitment, coordinated action) of encouraging public agreement and identifying with the group. You've also outlined some of the dangers (plur... (read more)

    I have been thinking about this subject for a while because I saw the same type of culture of disagreement prevent a group I was a member of from doing anything worthwhile. The problem is very interesting to me because I come from the opposite side of the spectrum, being heavily collectivist. I take pleasure in conforming to a group opinion and being a follower, but I have also nurtured a growing rationalist position for the last few years. So despite my love of being a follower I often find myself aspiring to a leadership position in order to wield my favore... (read more)

    "Those who had nothing to give, stayed silent; those who had objections, chose some later or earlier time to voice them. That's probably about the way things should be in a sane human community"

    Personally I think that you were speaking to the wrong crowd when trying to fund raise. Or perhaps I should say too wide a crowd. Like trying to fundraise for tokamak fusion in a mailing list where people are interested in fusion in the generality. People who don't believe that tokamaks will ever be stable/usable are duty bound to try and convince the ot... (read more)

    I have to agree completely.

    I don't have to agree completely. But I choose to. I also choose to link to the donations page for the SIAI here. Yes, this felt great... my emotions seem to be in tune with my high-level goals.
    Me too!

    There's an easy and obvious coordination mechanism for rationalists, which is just to say they're building X from science fiction book Y, and then people will back them to the hilt, as long as their reputation and track record for building things without hurting people is solid. Celebrated Book Y is trusted to explain the upsides and downsides of thing X, and people are trusted to have read the book and have the Right Opinions about all the tradeoffs and choices that come with thing X. 

    So really, it all comes down to the thing that actually powers the... (read more)

    I have a modest amount of pair programming/swarming experience, and there are some lessons I have learned from studying those techniques that seem relevant here:

    • General cooperation models typically opt for vagueness instead of specificity to broaden the audiences that can make use of them
    • Complicated/technical problems such as engineering, programming, and rationality tend to require a higher level of quality and efficiency in cooperation than more common problems
    • Complicated/technical problems also exaggerate the overhead costs of trying to harmonize though
    ... (read more)

    I agree. I don't often say I agree for efficiency. You've made the point more eloquently than I could and my few sentences in support of you would probably strengthen your point socially, but it wouldn't improve the argument in some logical sense.

    I love signaling agreement when I can do it and be just as eloquent as the writing I'm agreeing with. Famous authors put a lot of work into the blurbs they write recommending their friend's books. And that work shows. "X is a great summertime romp, full of adventure!" sure is a glowing recommendation, but it's not... (read more)

    Our culture puts all the emphasis on heroic disagreement and heroic defiance, and none on heroic agreement or heroic group consensus.

    There's a lot more of this in anime, I feel. A lot of characters end up trusting someone from the bottom of their hearts, agreeing to follow their vision to the end, and you see whole group of good guys that are wholeheartedly committed and united to the same idea. Even main characters often show this trait toward others.

    "Yes, a group which can't tolerate disagreement is not rational.  But if you tolerate only disagreement—if you tolerate disagreement but not agreement—then you also are not rational". Well, agreement may just be perceived default. If I sit at a talk and find nothing to say about (and, mind you, that happens R. A. R. E. L. Y) it means either that I totally agree or that it is so wrong I don't know where to begin.

    Also, your attitude of "we are not to win arguments, we are to win", your explicit rejection of rhetoric (up to the ... (read more)

    Wow. I don't identify as a cynic or spock, but of the many articles I have read on Less Wrong since I discovered it yesterday, this one is perhaps the most perspective changing.

    It makes me happy that those traits you list as what rationalists are usually thought to be (disagreeable, unemotional, cynical, loners) are unfamiliar. The rationalists I have grown up with over the past few years of reading this site are both optimistic and caring, along with many other qualities.

    Eliezer, I applaud your post. Bravo. I agree.

    I'm new to this site and I was compelled to sign up immediately.

    There's not much to add here, but that I hope people appreciate the significance of not shutting off all emotions, much like you argue in this post.

    Those who suspect me of advocating my unconventional moral position to signal my edgy innovativeness or my nonconformity should consider that I have held the position since 1992, but only since 2007 have I posted about it or discussed it with anyone but a handful of friends.

    I believe rhollerith. I met him the other week and talked in some detail; he strikes me as someone who's actually trying. Also, he shared the intellectual roots of his moral position, and the roots make sense as part of a life-story that involves being strongly influenced by John David Garcia's apparently similar moral system some time ago. Hollerith doesn't mean he was applying his moral position to AI design since '92, he means that since '92, he's been following out a possible theory of value that doesn't assign intrinsic value to human life, to human happiness, or to similar subjective states. I'm not sure why people are stating their disbelief.
    Good point, Anna: John David Garcia did not work in AI or apply his system of values to the AI problem, but his system of values yields fairly unambiguous recommendations when applied to the AI problem -- much more unambiguous than human-centered ways of valuing things.
    Eliezer Yudkowsky
    Off-topic until May, all.
    Unfortunately, they can't consider that you have held the position since 1992 -- all they can consider is that you claim to have done so. You could get your handful of friends to testify, I suppose...
    Cyan points out, correctly, that all the reader can consider is that I claim to have held a certain position since 1992. But that is useful information for evaluating my claim that I am not just signaling, because a person is less likely to have deceived himself about having held a position than about his motivations for a sequence of speech acts! And I can add a second piece of useful information in the form of the following archived email. Of course I could be lying when I say that I found the following message on my hard drive, but participants in this conversation willing to lie outright are (much) less frequent than participants who have somehow managed to deceive themselves about whether they really have held a certain position since 1992, who in turn are less frequent than participants who have somehow managed to deceive themselves about their real motivation for advocating a certain position.
    I don't disagree with the above post -- I just wanted to make a pedantic distinction between claims and facts in evidence. (Also, my choice of the pronoun "they" rather than "we" was deliberate.)
    I don't believe you.
    Don't believe my advocacy of the moral position is not really just signaling or don't believe I've held the moral position since 1992?
    I don't know how long you've held the position, or much care; I don't think it's relevant. But it is signaling, I think, for two reasons:

    • Your public concern with saying it's not signaling is just a way of signaling;
    • Claiming a certain timespan of belief is just an old locker-room way of saying "I got here first," which surely is signaling.

    This is the sort of thing that causes unnecessary splintering in groups. I have a very visceral reaction to this sort of signaling (which I would label preening, actually). Perhaps I should examine that.
    It is likely the case that rhollerith's moral position contains at least some element of signalling. His expression thereof probably does too. In fact, there are few aspects of social behavior that could be credibly claimed to be devoid of signalling. That said, these points do not impress me in the slightest. Yes, public concern surely involves signalling. That doesn't mean that which is concerned about isn't also true. Revealing truth is usually an effective form of signalling. It is completely unreasonable to dismiss claims because they are similar to something that was signalling in the locker room. Even the "I got here first" signalling in said locker room quite often accompanies the signaller, in fact, getting there first.
    I suspect that you have not become acquainted with my moral position! If you knew my moral position, you would be more likely to say I am ruining the party by crapping in the punchbowl than to say I am preening. (Preen. verb. Congratulate oneself for an accomplishment).

    People are also unwilling to express agreement because they know, and fear, group consensus and the pressure to fit in. Those usually lead to groupspeak and groupthink.

    One of the primary messages of the local Powers That Be is that other people's evaluations should be a factor in your own, that other people's conclusions should be considered as evidence when you try to conclude; and that's incompatible with effective rationality, as well as with the techniques needed to prevent self-reinforcing mob consensus.

    It's not only the culture of disagreement at work. When I see "+1", I wonder what mental process produced it: does the commenter need some attention but have nothing to say? So when I want to post "+1", I don't, lest someone think the same about me. Usually I try to add some complement to the original post, or a little correction to it with clear approval of the rest: something modest, but at the same time not just "+1".

    There is a way to solve this problem, but it is dangerous. A rationalist can watch the discussion clos... (read more)

    I wonder if one person can have a big effect on this sort of thing.

    For example, I've known charity organizers to publish the number of donors and the total money donated every few days. Even without identifying donors, that does a lot to make people feel less alone.

    An alternate explanation: I've noticed a trend where rationalists seem more likely to criticize ideas in general. Perhaps a key experience that needs to happen before some people choose to undergo the rigors of becoming a rationalist is a "waking up" after some trauma that makes them err on the side of being paranoid. I have observed that most people without a "wake up" trauma prefer to simply retain optimism bias and tend to conserve thinking resources for other uses. Someone who thinks as much as you do probably does not feel a ne... (read more)

    Organizing atheists has been compared to herding cats, because they tend to think independently and will not conform to authority. - The God Delusion

    Maybe - but they seem to work together well enough - if you pay them.

    Whereas theists will pay tithes to be ordered around.
    They war with other theists as well. Cooperation benefits from a shared mission.

    Rather than ourselves making the drastic cultural changes that Eli talks about, perhaps it would be more efficient to piggyback on to another movement which is further down that path of culture change, so long as that movement isn't irrational. See this URL:

    Check out the rest of the web site if you have time, or better yet, buy and read the book the web site is promoting. As you can see from the URL above, cooperation is an important value in the group.

    I have been observing the spiritual practices promoted by ... (read more)

    Eliezer Yudkowsky
    This isn't a comment, this is an attempted post in which you say in more detail what's going on over there and which "practices" you're talking about. It then gets voted up or voted down. In any case, don't try to do this sort of thing in one comment. ...though I see you don't have enough karma yet to post; but that's exactly what we've got the system for, eh?

    Hrm, overall makes sense. But now, HOW do you suggest, for something here, an online forum, actually doing that sort of thing in the general case without it translating to a whole bunch of people going, effectively, "me too"?

    I do remember when for a certain unnamed organization you started the "donate today and tomorrow" drive (or whatever you called it, something to that effect), I did post to a certain mailing list my thoughts that both led me to donate and what I was thinking in response to that sort of appeal, etc etc.


    In the pursuit of truth it is rational to argue and, at first glance, irrational to agree. The culling of truth proceeds by "leaving be" the material that is correct and modifying (arguing with) the part that is not. (While slightly tangential, it is good to recall that the scientific method can only argue with a hypothesis, never confirm it.)

    At a conference where there is a dialogue it is a waste of time to agree, as a lack of argument is already implicit agreement. After the conference, however, the culling of truth further progresses by assi... (read more)

    I'm a beginner who thinks meta-discussions are fun.

    Eliezer is asking about whether we should tolerate tolerance. Let's suppose -- for the sake of argument -- that we do not tolerate tolerance. If X is intolerable, then the tolerance of X is intolerable.

    So if Y tolerates X, then Y is intolerable. And so on.

    Thus, if we accept that we cannot tolerate toleration, then also we cannot tolerate toleration of tolerance, and also we cannot tolerate toleration of toleration of tolerance.

    I would think of tolerance as a relationship between X and Y in which Y acquires the intolerability of X.


    I think that there are parts of life where we should learn to applaud strong emotional language, eloquence, and poetry. When there's something that needs doing, poetic appeals help get it done, and, therefore, are themselves to be applauded.

    That may be, but I generally find YOUR poetic appeals to make me throw up in my mouth. I read my mother your bit about how amazing it was that love was born out of the cruelty of natural selection, and even she thought it was sappy.

    I, on the other hand, nearly started sobbing, so I guess it takes all kinds.

    I don't see how individualism can beat out collectivism as long as groups = more power. For individualism to work, each person would have to wield equal power to any group.

    One view doesn't need to "beat out" the other; for each societal state, there's a corresponding equilibrium between individualistic- and group-think (or rather, group-think for varying sizes of groups) as each person weigh the costs and benefits of adherence for them. In a world of individuals, an organized and specialized group of any size "= more power." Witness sedentary farmers displacing hunter-gatherers. On the other hand, in a world of groups, a rogue individualistic prisoner's-dilemma-defector is king. Witness sociopaths in corporate structures, or the plots of far too many Star Trek episodes. The balance of power can shift as Individualism becomes a better choice, due to its risks lessening and rewards increasing, whether due to culture, technology, or extensive debates on websites.