(At the suggestion and request of Tom McCabe, I'm posting the essay that I sent to the New York LW group after my first visit there, and before the second visit:)
Having some kind of global rationalist community come into existence seems like an extremely good idea. The NYLW group is the forerunner of that, the first group of LW-style rationalists to form a real community, and to confront the challenges involved in staying on track while growing as a community.
"Stay on track toward what?" you ask, and my best shot at describing the vision is as follows:
"Through rationality we shall become awesome, and invent and test systematic methods for making people awesome, and plot to optimize everything in sight, and the more fun we have the more people will want to join us."
(That last part is something I only realized was Really Important after visiting New York.)
Michael Vassar says he's worried that you might be losing track of the "rationality" and "world optimization" parts of this - that people might be wondering what sort of benefit "rationality" delivers as opposed to, say, paleo dieting. (Note - less worried about this now that I've met the group in person. -EY.)
I admit that the original Less Wrong sequences did not heavily emphasize the benefits for everyday life (as opposed to solving ridiculously hard scientific problems). This is something I plan to fix with my forthcoming book - along with the problem where the key info is scattered over six hundred blog posts that only truly dedicated people and/or serious procrastinators can find the time to read.
But I really don't think the whole rationality/fun association you've got going - my congratulations on pulling that off, by the way, it's damned impressive - is something that can (let alone should) be untangled. Most groups of people capable of becoming enthusiastic about strange new nonconformist ways of living their lives would have started trying to read each other's auras by now. Rationality is the master lifehack which distinguishes which other lifehacks to use.
The way an LW-rationality meetup usually gets started is that there is a joy of being around reasonable people - a joy that comes, in a very direct way, from those people caring about what's true and what's effective, and being able to reflect on more than their first impulse to see whether it makes sense. You wouldn't want to lose that either.
But the thing about effective rationality is that you can also use it to distinguish truth from falsehood, and realize that the best methods aren't always the ones everyone else is using; and you can start assembling a pool of lifehacks that doesn't include homeopathy. You become stronger, and that makes you start thinking that you can also help other people become stronger. Through the systematic accumulation of good ideas and the rejection of bad ideas, you can get so awesome that even other people notice, and this means that you can start attracting a new sort of person, one who starts out wanting to become awesome instead of being attracted specifically to the rationality thing. This is fine in theory, since indeed the Art must have a purpose higher than itself or it collapses into infinite recursion. But some of these new recruits may be a bit skeptical, at first, that all this "rationality" stuff is really contributing all that much to the awesome.
Real life is not a morality tale, and I don't know if I'd prophesy that the instant you get too much awesome and not enough rationality, the group will be punished for that sin by going off and trying to read auras. But I think I would prophesy that if you got too large and insufficiently reasonable, and if you lost sight of your higher purposes and your dreams of world optimization, the first major speedbump you hit would splinter the group. (There will be some speedbump, though I don't know what it will be.)
Rationality isn't just about knowing about things like Bayes's Theorem. It's also about:
- Saying oops and changing your mind occasionally.
- Knowing that clever arguing isn't the same as looking for truth.
- Actually paying attention to what succeeds and what fails, instead of just being driven by your internal theories.
- Reserving your self-congratulations for the occasions when you actually change a policy or belief, because while not every change is an improvement, every improvement is a change.
- Self-awareness - a core rational skill, but at the same time, a caterpillar that spent all day obsessing about being a caterpillar would never become a butterfly.
- Having enough grasp of evolutionary psychology to realize that this is no longer an eighty-person hunter-gatherer band and that getting into huge shouting matches about Republicans versus Democrats does not actually change very much.
- Asking whether the beliefs you most cherish shouting about actually control your anticipations - whether they mean anything at all, never mind whether their predictions are actually correct.
- Understanding that correspondence bias means that most of your enemies are not inherently evil mutants but rather people who live in a different perceived world than you do. (Albeit of course that some people are selfish bastards and a very few of them are psychopaths.)
- Being able to accept and consider advice from other people who think you're doing something stupid, without lashing out at them; and the more you show them this is true, and the more they can trust you not to be offended if they're frank with you, the better the advice you can get. (Yes, this has a failure mode where insulting other people becomes a status display. But you can also have too much politeness, and it is a traditional strength of rationalists that they sometimes tell each other the truth. Now and then I've told college students that they are emitting terrible body odors, and the reply I usually get is that they had no idea and I am the first person ever to suggest this to them.)
- Comprehending the nontechnical arguments for Aumann's Agreement Theorem well enough to realize that when two people have common knowledge of a persistent disagreement, something is wrong somewhere - not that you can necessarily do better by automatically agreeing with everyone who persistently disagrees with you; but still, knowing that ideal rational agents wouldn't just go around yelling at each other all the time.
- Knowing about scope insensitivity and diminishing marginal returns doesn't just mean that you donate charitable dollars to "existential risks that few other people are working on", instead of "The Society For Curing Rare Diseases In Cute Puppies". It means you know that eating half a chocolate brownie appears as essentially the same pleasurable memory in retrospect as eating a whole brownie, so long as the other half isn't in front of you and you don't have the unpleasant memory of exerting willpower not to eat it. (Seriously, I didn't emphasize all the practical applications of every cognitive bias in the Less Wrong sequences but there are a lot of things like that.)
- The ability to dissent from conformity; realizing the difficulty and importance of being the first to dissent.
- Knowing that to avoid pluralistic ignorance everyone should write down their opinion on a sheet of paper before hearing what everyone else thinks.
But then one of the chief surprising lessons I learned, after writing the original Less Wrong sequences, was that if you succeed in teaching people a bunch of amazing stuff about epistemic rationality, this reveals...
...that, having repaired some of people's flaws, you can now see more clearly all the other qualities required to be awesome. The most important and notable of these other qualities, needless to say, is Getting Crap Done.
(Those of you reading Methods of Rationality will note that it emphasizes a lot of things that aren't in the original Less Wrong, such as the virtues of hard work and practice. This is because I have Learned From Experience.)
Similarly, courage isn't something I emphasized enough in the original Less Wrong (as opposed to MoR), but the thought has since occurred to me that most people can't do things which require even small amounts of courage. (Leaving NYC, I had two Metrocards with small amounts of remaining value to give away. I felt reluctant to call out anything, or approach anyone and offer them a free Metrocard, and I thought to myself, well, of course I'm reluctant, this task requires a small amount of courage - and then I asked three times before I found someone who wanted them. Not, mind you, that this was an important task in the grand scheme of things - just a little bit of rejection therapy, a little bit of practice in doing things which require small amounts of courage.)
Or there's Munchkinism, the quality that lets people try out lifehacks that sound a bit weird. A Munchkin is the sort of person who, faced with a role-playing game, reads through the rulebooks over and over until he finds a way to combine three innocuous-seeming magical items into a cycle of infinite wish spells. Or who, in real life, composes a surprisingly effective diet out of drinking a quarter-cup of extra-light olive oil at least one hour before and after tasting anything else. Or combines liquid nitrogen and antifreeze and life-insurance policies into a ridiculously cheap method of defeating the invincible specter of unavoidable Death. Or figures out how to build the real-life version of the cycle of infinite wish spells. Magic the Gathering is a Munchkin game, and MoR is a Munchkin story.
It would be really awesome if the New York Less Wrong group figures out how to teach its members hard work and courage and Munchkinism and so on.
It would be even more awesome if you could muster up the energy to track the results in any sort of systematic way, so that you can do small-N Science (based on Bayesian likelihoods, thank you, not the usual statistical-significance bullhockey) and find out how effective different teaching methods are, or track the effectiveness of other lifehacks as well - the Quantified Self road. This, of course, would require Getting Crap Done; but I do think that in the long run, whether we end up with really effective rationalists is going to depend a lot on whether we can come up with evidence-based metrics for how well a teaching method works, or whether we're stuck in the failure mode of psychoanalysis, where we just go around trying things that sound like good ideas.
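As one minimal sketch of what that kind of small-N, likelihood-based tracking could look like - the beta-binomial model, the function names, and the numbers here are all illustrative assumptions on the editor's part, not anything prescribed in the post - you can compare two teaching methods by the Bayes factor for "the methods have different success rates" over "they share one rate", using uniform priors:

```python
from math import lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b), via log-gamma for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bayes_factor(s1, f1, s2, f2):
    """Bayes factor BF10 for two small samples of successes/failures:
    H1 = each group has its own success rate, H0 = one shared rate.
    Uniform Beta(1,1) priors on all rates; binomial coefficients cancel."""
    # Marginal likelihood under H1: integrate each group's rate separately.
    log_m1 = log_beta(s1 + 1, f1 + 1) + log_beta(s2 + 1, f2 + 1)
    # Marginal likelihood under H0: pool both groups under one rate.
    log_m0 = log_beta(s1 + s2 + 1, f1 + f2 + 1)
    return exp(log_m1 - log_m0)

# Hypothetical data: Method A helped 9 of 10 people, Method B helped 3 of 10.
print(bayes_factor(9, 1, 3, 7))  # about 18.2 - substantial evidence of a real difference
```

A Bayes factor near 1 means the data don't distinguish the hypotheses; with identical outcomes in both groups the factor drops below 1, mildly favoring "same rate" - which is exactly the behavior you want from a metric that small groups can compute by hand or in a spreadsheet.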
And of course it would be really truly amazingly awesome if some of you became energetic gung-ho intelligent people who can see the world full of low-hanging fruit in front of them, who would go on to form multiple startups which would make millions and billions of dollars. That would also be cool.
But not everyone has to start a startup, not everyone has to be there to Get Stuff Done, it is okay to have Fun. The more of you there are, the more likely it is that any given five of you will want to form a new band, or like the same sort of dancing, or fall in love, or decide to try learning meditation and reporting back to the group on how it went. Growth in general is good. Every added person who's above the absolute threshold of competence is one more person who can try out new lifehacks, recruit new people, or just be there putting the whole thing on a larger scale and making the group more Fun. On the other hand there is a world out there to optimize, and also the scaling of the group is limited by the number of people who can be organizers (more on this below). There's a narrow path to walk between "recruit everyone above the absolute threshold who seems like fun" and "recruit people with visibly unusually high potential to do interesting things". I would suggest making an extra effort to recruit people who seem to have high potential, but not making anything like a rule of it. But if someone not only seems to like explicit rationality and want to learn more, but also seems like a smart executive type who gets things done, perhaps their invitation to a meetup should be prioritized?
So that was the main thing I had to say, but now onward to some other points.
A sensitive issue is what happens when someone can't reach the absolute threshold of competence. I think the main relevant Less Wrong post on this subject is "Well-Kept Gardens Die By Pacifism." There are people who cannot be saved - or at least people who cannot be saved by any means currently known to you. And there is a whole world out there to be optimized; sometimes even if a person can be saved, it takes a ridiculous amount of effort that you could better use to save four other people instead. We've had similar problems on the West Coast - I would hear about someone who wasn't Getting Stuff Done, but who seemed to be making amazing strides on self-improvement, and then a month later I would hear the same thing again, and isn't it remarkable how we keep hearing about so much progress but never about amazing things the person gets done -
(I will parenthetically emphasize that every single useful mental technique I have ever developed over the course of my entire life has been developed in the course of trying to accomplish some particular real task and none of it is the result of me sitting around and thinking, "Hm, however shall I Improve Myself today?" I should advise a mindset in which making tremendous progress on fixing yourself doesn't merit much congratulation and only particular deeds actually accomplished are praised; and also that you always have some thing you're trying to do in the course of any particular project of self-improvement - a target real-world accomplishment to which your self-improvements are a means, not definable in terms of any personality quality unless it is weight loss or words output on a writing project or something else visible and measurable.)
- and the other thing is that trying to save people who cannot be saved can drag down a whole community, because it becomes less Fun, and that means new people don't want to join.
I would suggest having a known and fixed period of time, like four months, that you are allowed to spend on trying to fix anyone who seems fixable, and if after that their outputs do not exceed their inputs and they are dragging down the Fun level relative to the average group member, fire them. You could maybe have a Special Committee with three people who would decide this - one of the things I pushed for on the West Coast was to have the Board deciding whether to retain people, with nobody else authorized to make promises. There should be no one person who can be appealed to, who can be moved by pity and impulsively say "Yes, you can stay." Short of having Voldemort do it, the best you can do to reduce pity and mercy is to have the decision made by committee.
And if anyone is making the group less Fun or scaring off new members, and yes this includes being a creep who offends potential heroine recruits, give them an instant ultimatum or just fire them on the spot.
You have to be able to do this. This is not the ancestral environment where there's only eighty people in your tribe and exiling any one of them is a huge decision that can never be undone. It's a large world out there and there are literally hundreds of millions of people whom you do not want in your community, at least relative to your current ability to improve them. I'm sorry but it has to be done.
Finally, if you grow much further it may no longer be possible for everyone to meet all the time as a group. I'm not quite sure what to advise about this - splitting up into meetings on particular interests, maybe, but it seems more like the sort of thing where you ought to discuss the problem as thoroughly as possible before proposing any policy solutions. My main advice is that if there's any separatish group that forms, I am skeptical about its ability to stay on track if there isn't at least one high-level epistemic rationalist executive type to organize it, someone who not only knows Bayes's Theorem but who can also Get Things Done. Retired successful startup entrepreneurs would be great for this if you could get them, but smart driven young people might be more mentally flexible and a lot more recruitable if far less experienced. In any case, I suspect that your ability to grow is going to be ultimately limited by the percentage of members who have the ability to be organizers, and the time to spend organizing, and who've also leveled up into good enough rationalists to keep things on track. Implication: make an extra effort to recruit people who can become organizers.
And whenever someone does start doing something interesting with their life, or successfully recruits someone who seems unusually promising, or spends time organizing things, don't forget to give them a well-deserved cookie.
Finally, remember that the trouble with the exact phrasing of "become awesome" - though it does nicely for a gloss - is that Awesome isn't a static quality of a person. Awesome is as awesome does.
At the LW meetups I've been to so far, I've seen what I would call 'swarming' around each female present. It doesn't seem malicious, but they each end up being in the center of a group...
I guess this is something for other people to corroborate, I'm just a lonely data point waiting for my line.
Edit - please disregard this post
Should this be added to the "community" sequence?
It also has a more subtle and counterintuitive failure mode. People can derive status and get much satisfaction by handing out perfectly honest and well-intentioned advice, if this advice is taken seriously and followed. The trouble is, their advice, however honest, can be a product of pure bias, even if it's about something where they have an impressive track record of success.
Moreover, really good and useful advice about important issues often has to be based on a no-nonsense cynical analysis that sounds absolutely awful when spelled out explicitly. Thus, even the most well-intentioned people will usually be happier to concoct nice-sounding rationalizations and hand out advice based on them, thus boosting their status not just as esteemed advice-givers, but also as expounders of respectable opinion. At the e...
An example would help this comment.
"Munchkinism" already has a commonly-known name. It's called hacking.
Yes, let's please call it "hacking," or anything other than "Munchkinism."
Isn't 'munchkin' sort of taken too? The impression I got from a little googling is that the word as used by RPG players is a derogatory term. Calling someone that isn't a compliment on their cleverness in exploiting the mechanics but mockery for missing much of the point of the game and being an annoyance to other players.
If that's true then calling cryonics munchkinism would sound like agreement with people who say that death gives meaning to life or something like that.
The core of the insult is in the framing of the behavior as a negative (and an assertion of higher status of the speaker). The actual descriptive element of the behavior is a pretty close match to what we are talking about. This is perhaps enough of a reason to discard the word and create a synonym that doesn't have the negative association.
The problem with the MIN-MAXing munchkin - or rather, the thing that causes munchkin-callers to insult them - is that munchkins think Role Playing Games are about actually taking on the role and doing what the character should do. The whole @#%@# world is at stake, so you learn what you need to about the physics and the current challenges. You work out the best way to eliminate the threat and if possible ensure a literal 'happily ever after' scenario. Then you gain the power necessary to ensure that your chance of success is maximised.
But the role of the character is not what (the name-caller implies) the point of the game is about. It is about working out what the game master expects, working out your own status within ...
There is no problem with "Munchkinism." The problem is that in old RPGs the rules actually describe a poorly designed tactical battle simulation game with some elements of strategy (witness the lack of challenge once you fully understand the system), while the advertising implies a social interaction and story-telling game without giving the necessary rules to support it. Thus different people think they're playing different games together, and social interaction devolves into what people imagine they would do given a hypothetical situation without consequences (at least until the consequences are made explicit, violating their expectations, as you note in your example).
Put all my points into charisma and charm skills and go find me some wenches? Oh, you mean saving the world. Got it.
Actually that is another problem with RPG designs. There are social skills and stats provided but they are damn near pointless in practice. Even when you want to role play a lovable rogue who can charm, manipulate and deceive his way out of problems you may as well put your skills into battle axes. Because the only person that you need to use social skills on is the DM and that is an out of character action. Unless you somehow manage to find a DM who considers the interaction to be about the character trying to persuade an NPC and not the player trying to persuade him and just lets the player roll some dice already.
"What is the skill check for "seduce the maidservant and get her to show you the secret entrance to the castle"?"
... "No, I don't need to tell you what lines I'm going to use... since I would just have to lie so as to not offend the sensibilities of the company. Dice. I want to use dice and charm wenches!"
... "What? Oh, this is just too much hassle. Let's do what we know works. Guys, you take the guard on the left and I'll take the guard on the right. Rescue the princess and kill everything that tries to stop us."
I wonder if it's accurate to say that for hacks, it's the means that's considered "cheating", whereas for cryonics, it's the end itself that's considered "cheating".
That seems like a good distinction between Munchkinism and Hacking, as I've seen them used by their respective cultures. Munchkinism is about using the rules to accomplish an "unacceptable" goal, whereas Hacking is about accomplishing acceptable goals via "unacceptable" methods. Thank you for helping me cement why the two terms felt like very separate ones :)
As another example, the Jargon file has a general definition of 'hacker':
That seems to fit pretty well.
It certainly fits 'hacker' (and myself) well. It doesn't fit people who are indifferent to intellectual challenge but just want to live (and so do cryonics) or just want to win (and so min-max the @#%$ out of life).
Okay, but that's not what defines munchkins. Munchkinism, as I see it, is less about getting points in good areas by burning points in bad areas (min-maxing) than it is about getting points in good areas by burning the spirit of the game.
I think that willingness to burn the spirit of the game when it comes to things like signing up for cryonics instead of confronting the inevitability of your mortality, drinking extra-light olive oil instead of trying to diet by sheer willpower, or building a recursively self-improving AI instead of trying to solve the world's problems the normal way, is exactly what distinguishes Munchkinism from mere hacking.
Not really. It involves the ability to do things that would make other people look at you funny, and a relentlessly optimizing attitude toward all of real life and not just computer science problems or particular locks. There may be something more to it, too. In any case Timothy Ferriss != John McCarthy (albeit McCarthy himself may also have the Munchkin-nature) and people who build championship Magic decks don't think in quite the same way as great programming hackers, though you can also be both.
I've been having some sort of half-formed thoughts recently that this has brought back into my foreground that I'm curious to see other people's thoughts on.
It seems to me that the likelihood is quite high that there are people on here who have inherently competing utility functions. Thus, making someone whose utility function is dramatically different from yours more rational could be an extremely counterproductive move in terms of satisfying your own utility function. Imagine a libertarian rationalist accidentally training a socialist guerilla, who goes on to be very successful at fulfilling his own utility function, and thus dramatically harmful to his teacher's. Or perhaps more realistically, a socialist teacher trains a libertarian who goes on to found a company that does business in the Third World in a way that the teacher disapproves of. (These examples were chosen merely because they are fairly common, directly competing, not obviously insane sets of motivations; I intend no value judgment on either of them.)
How would we avoid this? Should we avoid this?
A few months ago I was roundly, and rightly, rebuked for suggesting that rationality will lea...
In other words, if my opponent begins to make choices that better optimize their goals, do I gain or lose?
It seems clear that the answer depends on how many of their goals I share, how many I oppose, and how much I value the shared goals relative to the opposed goals.
Suppose we are Swift's Big-Endians and Little-Endians, who agree on pretty much everything that matters (even by their own standards!) and are bitterly divided over a single relatively trivial issue. If one side is suddenly optimized, everybody wins. That is, the vast majority of everyone's current goals are more effectively and efficiently met, including those of the opposition.
Sure, the optimized party gets all of that plus the value of having everyone open their eggs on the side they endorse... which means their opponents suffer in turn the value-loss of everyone opening their eggs on the side they reject. But they will be suffering that value-loss in the context of an overall increase in their value. I'm not saying everyone wins equally, just that everybody wins. Whether they are happy about this or not depends on other factors, but they seem pretty clearly to be better off.
In that scenario, upgrading my opponents...
Survivors and cult historians alike agree that this post, combined with the founding of the "rationalist boot camps", set in motion the sequence of events which culminated in the tragic mass cryocide of 2024.
At every step, Yudkowsky's words seemed rational to his enthralled followers - and also to all outside observers. And yet, when it became clear that commercial pressures were causing strong AI to be deployed long before Coherent Awesomeness Extrap-volition Theory could be made mathematically rigorous, the cult turned against itself.
One by one, each member's failure to invent and deploy Friendly AI before IBM-Halliburton turned on its Appallingly Parallel Cheney Emulation Cluster was taken by the feared Bayes Tribunal as evidence that they were insufficiently awesome, and must be ejected from the subterranean bunker complex. With each Bayesian update, the evidence that the cult's ultimate goal could not be achieved was strengthened - and yet, as the number of followers fell, the more Yudkowsky came to fear a fate worse than death - exploring the possible endings to his life within the simulation spaces of Cheney's mind - in a game-theoretic reprisal for his work on Fr...
Erm, maybe my standards are too high, but this didn't seem overwhelmingly well-written as fiction and I really worry when material that attacks a target that's supposed to be attacked gets a free pass as art. Or maybe you all actually enjoyed that, and I'm being unreasonable in expecting blog comments to meet publishable quality standards.
This got a few chuckles from me, but I have found that fiction in which present-day issues escalate implausibly into warfare is a strong indicator and promoter of affective death spirals. You do realize that this story features prominent falsehoods that people actually believe, and is completely absurd in ways not inherited from the things it's satirizing, right?
I spent most of January 1990 (I think that was the month) reading the entire run of Astounding/Analog from 1953 to 1985. That was better than quite a lot of the extrapolations therein. Anthologies of the best modernist SF gloss over really quite a lot of the awfulness that was actually published, even in the best magazine ...
Or in this case, evaporative freezing.
I voted it down for decidedly non-clever thinking about quantum suicide and a complete misrepresentation (or misunderstanding) of rational thinking. It attributes to Eliezer the complete opposite of the 'Shut Up And Do The Impossible' attitude that Eliezer is notorious for.
This section is a little confusing to me, so I'm going to lay out my thoughts on the subject in order to help myself organise them and to see what other people think.
I do attempt to improve myself by thinking "what shall I do to Improve Myself today?" Or rather, I spent several days coming up with plans as to how to impr...
I'm extremely relieved to hear that you and Vassar are worrying about dilution of rationality, but if all you require is reaching the absolute threshold of competence, you may not be worrying about it enough. I think it's very possible that the best options available to a group in which the average level of rationality is 9 out of 10 are several times as effective on a per-person basis as the best options available to a group in which the average level of rationality is 8 out of 10.
I am not sure that worrying about the perils of growth to the degree you suggest is wise. Given how difficult it is to separate personal dislikes from competence, it seems to me that having a process to identify and remove specific problems (X is scaring off the ladies, let's train him or boot him) is much better than trying to optimize the group (I have more fun when Y isn't there, let's stop inviting them).
I also suspect this isn't intended to be an ivory tower coterie, but a growing movement - which means you want all people above minimum competence regardless of their current skill level. If you've got that sort of growth atmosphere, you'll eventually get enough people that you can sort, and your immediate group will have more members of the average quality you want.
...Your writing style in this reminds me of that of Paul of Tarsus. You need to write more of these. One for every LW meetup in a new city you go to.
Hrmm, this makes me think about the Rationalist equivalent of the Bible.
We'd have the Rationalist Old Testament, which chronicles the invention of the scientific method and some of its many successes, like relativity and computers and evolutionary biology. This is obviously the longer of the two testaments, owing to its larger subject matter. We learn about many of the facts and rules. This is the basis of the...
This post makes me literally sad.
Living in rural Missouri limits my opportunities to interact with similar awesome-seekers.
Wife, child, family, friends, business.
That is sad - I know a friend in rural Nebraska who is in a similar predicament (college), and he says that if it weren't for LW, he might have concluded that people were just un-awesome.
It is sad that demographics limits potential awesome-seekers. That is another reason why I admire Eliezer so much for making this online community.
Some of this post makes me wonder where Less Wrong sits within social networks. I suspect we have close ties to the BoingBoing-Make ecosystem and may even be part of it.
I live in New York and have been lurking on this site for a while (plus reading HPMoR, of course). This post has inspired me to try to get involved with the NY rationalist community. What is the deal with how the community actually functions? How often are there meetups? Other basic, boring but necessary questions?
"And the more fun we have the more people will want to join us. That last part is something I only realized was Really Important after visiting New York."
This suggests a strong "I don't do the people stuff" bias (HP:MOR24) which will be one of the many points I address in my upcoming epic "How to save the world" sequence.
Stay tuned on the LW discussion area for this. I think I'll lose a lot of friends here if I pollute the main LW board with my particular agenda ;-)
Downvote to -10 if I haven't written a discussion post along th...
Most trivial nitpick of the week contender:
If you are 'becoming awesome' then the trait must already be dynamic. I'd perhaps go with 'concrete', or just leave out 'static' without replacing it. At least I would if I expected the epistle to be declared divinely inspired and made gospel for the next 2,000 years. And this essay does remind me a lot of Paul's letter to the Galatians - in an entirely good way!
I'm fairly sure that was supposed to read "trust you not to be offended if they're frank with you."
It's also important not to stonewall.
I just realized why resistance training has been working amazingly well for almost 3 months now, but all my other projects have been failing left and right. My exercise has an actual, independent goal - I want to look good. I'm willing to do whatever it takes to get there. The other stuff I'm doing more for its own sake. Abandoning the "one true method" would spoil the fun.
Consider who might resent a friend's exclusion from the group, especially if it appears capricious. If there are clear norms and people are emotionally prepared to accept the group's priorities (who it wants to include/exclude), then the collateral damage of a person's friends leaving in protest would be taken with relative equanimity by those who remain.
Part of the great trouble of being a rationalist is the great, great trouble of finding like-minded people. I am thrilled at the news of such successful meetups taking place - the reason rationalists don't have the impact they should is poor organization.
On the other hand, I really like what Eliezer says about courage. It is one thing to preach and repeat meaningless words about being courageous and facing the Truth, but if we are too afraid to look like a fool in society - who says we won't be too afraid to speak the Truth in the scientific community?
Measuring improvement also lets you track it over time, to make sure you actually are improving. Vague, unmeasured improvement isn't a particularly convincing improvement.