Background Context, And How to Read This Post

This post is inspired by and a continuation of comments I made on the post 'What is the Rationalist Berkeley Community's Culture?' by Zvi on his blog Don't Worry About the Vase. As a community organizer both online and in-person in Vancouver, Canada, my goal was to fill in what appeared to be some gaps in the conversation among rationalists mostly focused on the Berkeley community. Zvi's post was part of a broader conversation pertaining to rationalist community dynamics within Berkeley. 

My commentary pertains to the dynamics between the Bay Area and other local rationality communities, informed by my own experience in Vancouver and those of rationalists elsewhere. The below should not be taken as comment on rationalist community dynamics within the Bay Area. This post should be considered an off-shoot from the original conversation Zvi was contributing to. For full context, please read Zvi's original post.


I. The Rationality Community: Berkeley vs. The World

While I didn't respond to them at the time, several community members commented on Zvi's post that they had similar experiences. Some local rationality communities and their members perceive themselves to be in a zero-sum game with Berkeley they didn't sign up for (and which, to be fair, the Berkeley community didn't consciously initiate as though it were a single agency), and some don't, but a sense of what Zvi was trying to point at appears ubiquitous. An example:

In my experience, the recruitment to Berkeley was very aggressive. Sometimes it felt like: “if you don’t want to move to Berkeley as soon as possible, you are not *really* rational, and then it is a waste of our time to even talk to you.” I totally understand why having more rationalists around you is awesome, but trying to move everyone into one city feels like an overkill.

Similar anecdata from local rationality communities around the world:

Melbourne. When I met several rationalists originally from Melbourne in Berkeley a few years ago, they gave a mixed assessment of the exodus of the core of the Melbourne rationality community to the Bay Area. Melbourne is an example of a very successful local rationality community outside the Bay Area, with the usual milestones: successful EA non-profits, for-profit start-ups and rationalist sharehouses. For many rationalists from Melbourne, leaving for the Bay Area passed a cost-benefit analysis: as high-impact individuals, it was obvious to them they should be reducing existential risks on the other side of the world.

In conversation, Helen Toner expressed some unease that a local rationality community which had successfully become a rationality hub second only to the Bay Area had had a whole generation of rationalists leave at once. This left open the possibility that a sustainable system for rationalist development, built up over years, had been gutted. My impression since then is that around this time the independent organization of the Melbourne EA community began to pick up, and between that and the remaining rationalists, the Melbourne community is doing well. If past or present members of the Melbourne rationality community would like to add their two cents, it would be greatly appreciated.

The rationality community growth strategy out of Berkeley by default became to recruit the best rationalists from local communities around the world at a rate faster than rationalist organizers could replenish the strength of those local communities. Given that the stories I've heard from outside Melbourne are more lopsided, with the organization of local rationality communities utterly collapsing and only recovering after multiple years, if ever, I'd consider the case of the Melbourne rationality community surviving the exit of its leadership for Berkeley to have been a lucky outlier.


Seattle. The Seattle rationality community has experienced a bad case of exodus to Berkeley over the last few years. My understanding of this story is as follows:

    • Like with rationalists around the world, effective altruism came along and said "hey, while our communities have significant differences, we care about existential risk reduction and other common goals; we've got several billion dollars; and a worldwide network of thousands rising through every kind of institution to coordinate the globe". At the time, the whole strategy for AI alignment wasn't much more than "read the Sequences and then donate to MIRI...?", so at the time EA's value proposition couldn't be beat. In Seattle the organizers of the rationality community took off their rationalist hats and switched them for effective altruist ones, albeit while prominently placing a rationalist button on them. This is what started happening in Vancouver as well circa 2013. The Seattle rationalists started a successful Rationality Reading Group in 2015 which got through the whole LessWrong Sequences.
    • Things went swimmingly in Seattle until AI safety 'went mainstream', and as the financial resources flowed into the institutions of the Berkeley rationality community, the demand and pressure to acquire the resources that distant rationalists and their skill-sets represented intensified. Over a period of somewhere between several months and two years, the Seattle rationality community lost at least a half-dozen members, including some local organizers and other veteran community members. The Rationality Reading Groups ceased as regular meetups for over a year, and local community organization was at best intermittent.
    • The excitement of EA brought many more Seattleites into the world of x-risk reduction, and the EA and rationality communities of Seattle effectively merged to survive. Since then, they're thriving again, but Seattle is still gradually losing community members to Berkeley. Because of its proximity to the Bay Area, and the excellence of the Seattle rationality community, I expect it may have experienced a greater absolute loss of members to Berkeley than any other community. Due to its size, the Seattle community has sustained itself, so the relative loss in local rationality communities which totally collapsed may be greater than has been the case in Seattle. As with Melbourne, if any community members who have lived or are living in Seattle wish to provide feedback, that is encouraged.

Vancouver. The experience in Vancouver has in the past certainly felt like "if you don’t want to move to Berkeley as soon as possible, you are not *really* rational". The biggest reason Vancouver may not have lost as many rationalists to the Bay Area as cities in the United States is the difficulty being Canadian poses to gaining permanent residence in the United States, and hence moving to the Bay Area. A couple friends of mine who were early attendees of a CFAR workshop lived in the Bay Area for several months in 2013, and returned home with stories of how wondrous the Bay Area was. They convinced several of us to attend CFAR workshops as well, and we too returned home with the sense of wonderment after our brief immersion in the Berkeley rationality community. But when my friends and I each returned, somehow our ambition transformed into depression. I tried rallying my friends to try carrying back or reigniting the spark that made the Berkeley rationalist community thrive, to really spread the rationalist project beyond the Bay Area.

But the apparent consensus was it just wasn't possible. Maybe the rationality community a few years ago lacked the language to talk about it, but rationalists who'd lived in Berkeley for a time only to return felt the rationality-shaped hole in their heart could only be filled in Berkeley. A malaise had fallen over the Vancouver rationality community. All of us were still around, but with a couple local EA organizations, many of us were drawn to that crowd. Those of us who weren't were alienated from any personal connection to the rationality community. I saw in my friends a bunch of individual heroes who together were strangely less, not greater, than the sum of their parts.

Things have been better lately, and a friend remarked they're certainly better than a few years ago, when everyone was depressed about the fact it was too difficult for us to all move to the Bay Area. In the last several months, the local rationality community has taken on as our mission our own development, and we've not rebounded so much as flourished like never before. But it took the sorts of conversations Zvi and others had last year about the Berkeley rationalist community to break the spell we had cast on ourselves: that Berkeley had gotten running a rationalist community down to an art and a science, like a well-oiled machine.


II. The Berkeley Community and the Mission of Rationality

Benquo commented on Zvi's post:

This is a good description of why I feel like I need to leave Berkeley whether or not there’s a community somewhere else to participate in. This thing is scary and I don’t want to be part of it.

I think this is some evidence that the Rationalist project was never or only very briefly real and almost immediately overrun by MOPs, and largely functions as a way for people to find mates. Maybe that’s OK in a lot of cases, but when your branding is centered around “no really, we are actually trying to do the thing, literally all we are about is not lying to ourselves and instead openly talking about the thing we’re trying to do, if you take things literally saving the world really literally is the most important thing and so of course you do it,” it’s pretty disappointing to find it’s just another flavor.

Since he wrote this comment, Benquo has actually continued to participate in the rationality community. This conversation was mired in so much tension in the rationality community that it must have been difficult to think about impersonally, so a charitable interpretation would be that while these problems exist, Benquo and others are generally not as fatalistic about the rationality community as they were at the time they wrote their comments. While I and others in the thread saw grains of truth in Benquo's statement, precision nonetheless remains a virtue of rationality, and I felt compelled to clarify. I commented:

I’d say the rationality community started whenever Eliezer forked LessWrong off of Overcoming Bias, which was around 2008 or 2009. That’s certainly not when it peaked. Even in a way MIRI never was, CFAR started out a project built by the rationality community. That was happening in 2012 or 2013. Above Sarah is also quoted as saying she thinks the Berkeley rationality community hit the right balance of focusing on being a welcoming community qua community, and aspiring to whatever the core mission(s) of the aspiring rationalist project are.

Unless you’re arguing there was a latency effect where the MOPs overran the community in 2009, but the consequences of such were buried for several years, the period between 2008/09 and 2012/13 doesn’t constitute being “immediately overrun”.

I get you’re pessimistic, but I think you’re overshooting. Matching the map to the territory of what went wrong in the Berkeley rationality community is key to undoing it, or making sure similar failures don’t occur in the future.
FWIW, I’m sorry you’ve had to experience so directly what feels to you like a decline in an aspect of your local rationality community. As someone who connects with rationalists primarily online, I can tell you they’re everywhere, and even if there isn’t a meatspace community as developed as the one in Berkeley, there are rationalists everywhere who won’t let the Craft disappear, and they want meatspace communities of their own built up outside of Berkeley as much as anyone.

Other comments in-thread from community members who had been around longer than Benquo or I confirmed my impression from their own personal experiences, so unless Benquo would further dispute these accounts, this thread seems put to rest. However, Zvi then replied to me:

I think we need to realize the extent to which Berkeley is actively preventing the formation of, and destroying, these other communities. The majority of high-level rationalists who started in the New York community are in the Berkeley community, which caused New York to outright collapse for years before recovering, and they just now once again caused a crisis by taking away a pair of vital community members and almost wiping out the only rationalist group space in the process. From meeting other community leaders in other cities, I hear similar stories A LOT.

I do agree that Plan A for most members can and should be Fix It, not walking away, and that pointing out it needs fixing is the requirement for perhaps fixing it.

To respond to Zvi here, indeed it appears to be an uncannily ubiquitous problem. I've collected a few stories and described them in some detail above. Between that and several comments from independent rationalists on Zvi's original post giving the impression members of their local communities were being sucked to Berkeley as though through a pneumatic tube, leaving a vacuum of community and organization in their wake, it appears these many local stories could be a single global one.

The original mission of the rationality community was to raise the sanity waterline to ensure human values get carried to the stars, but we're still godshatter, so doing so can and should take different forms than just ensuring superintelligence is aligned with human values. If ever the goal was to seed successful, stable rationalist communities outside Berkeley to coordinate projects beyond the Bay Area, it's been two steps forward, one step back, at best. Even if we assume for the sake of argument it's a good idea for rationalists worldwide to view Berkeley as a nucleus and their own rationalist communities as recruitment centres to drive promising individuals to Berkeley for the mission of AI alignment or whatever, the plan isn't working super well. That's because local rationalist communities appear to be sending their highest-level rationalists to Berkeley at a much faster rate than those communities can level up more rationalists to replenish their leadership and sustain themselves at all.

The state of affairs could be worse than it is now. But it creates the possibility that if enough local rationalist communities around the world outside the Bay Area simultaneously collapsed, the Berkeley rationalist community (BRC) could lose sufficient channels for recruitment to sustain itself. Communities, like all things, tend toward entropy and decay over time. Even if the BRC weren't rubbing any of its members the wrong way, we would probably still observe some naturally occurring attrition. In a scenario where the decay rate of the BRC was greater than its rate of replenishment, which has historically largely depended on rationalists from outside communities, the BRC would start decaying. If we were to assume the BRC acts as a single agency, it's in the BRC's self-interest as the nucleus of the worldwide rationality movement to sustain communities-as-recruitment-centres at least to the extent they can sustainably drive their highest-level rationalists to Berkeley over the long term.
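
To make the worry concrete, here is a toy simulation of the dynamic. Every parameter is a made-up illustrative number of my own, not an estimate of any actual community; the only point is the qualitative shape: if recruitment outpaces local replenishment, the local communities hollow out first, and once the recruitment channels dry up, the hub shrinks at its natural attrition rate.

```python
# Toy model: a hub recruits from local communities faster than they can
# level up new members. All parameters are illustrative assumptions.

def simulate(years, n_locals=10, local_size=30.0,
             growth_frac=0.10,    # members a local community levels up per
                                  # year, proportional to its current size
             recruit_rate=4.0,    # members each local sends to the hub per year
             hub_attrition=0.10): # fraction of the hub drifting away per year
    hub, locals_ = 100.0, [local_size] * n_locals
    history = []
    for year in range(1, years + 1):
        recruited = 0.0
        for i, size in enumerate(locals_):
            sent = min(recruit_rate, size)  # can't recruit people who aren't there
            locals_[i] = max(size - sent + growth_frac * size, 0.0)
            recruited += sent
        hub = hub * (1 - hub_attrition) + recruited
        history.append((year, hub, sum(locals_)))
    return history

for year, hub, locals_total in simulate(60)[9::10]:  # sample every 10th year
    print(f"year {year:2d}: hub ~ {hub:6.1f}, all locals ~ {locals_total:6.1f}")
```

With these numbers the local communities are gutted within roughly two decades, and the hub, which peaks while the recruiting is good, then decays at its 10% attrition rate.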

While this worst-case scenario could apply to any large-scale rationalist project, with regards to AI alignment, if the locus of control for the field falls out of the hands of the rationality community, someone else might notice and decide to pick up that slack. This could be a sufficiently bad outcome that rationalists everywhere should pay more attention to decreasing the chances of it happening.

So whether a rationalist sees local communities acting primarily as recruitment centres for the Berkeley rationalist community as an excellent plan or an awful failure mode, there's a significant chance it's unsustainable either way. It appears to be a high-risk strategy that's far from foolproof, and as far as I know virtually nobody is consciously monitoring the situation to prevent further failure.


III. Effective Altruism and the Rationalist Community

In another thread, I responded directly to Zvi. I commented:

While rationalists are internally trying to figure out how their community has changed, and they’re lamenting how it’s not as focused on world-saving, there’s a giant factor nobody has talked about yet. The only community which is more focused on the rationality community’s way of world-saving than the rationality community is effective altruism. To what extent is the rationalist community less world-save-y than it used to be because the rationalists whose primary rationalist role was “world saver” just switched to EA as their primary world-saving identity? I think as things have gotten less focused since LessWrong 1.0 died, and the rationalist diaspora made entryism much easier as standards fell, what you’re saying is all true. You might be overestimating the impact of entryism, though, and underestimating people who exited not because they had no voice, but for sensible reasons. If at any point a rationalist felt they could better save the world within EA rather than through the rationality community, it’d internally make sense to dedicate one’s time and energy to that community instead.

The EA community doesn’t seem able to build bonds as well as the rationality community. However, the EA community seems better at making progress on outward-facing goals. In that case, I for one wouldn’t blame anyone who feels more at home as a world-saver in EA than they did in the rationalist community.

Zvi replied:

Definitely an elephant in the room and a reasonable suspect! Certainly partially responsible. I haven’t mentioned it yet, but that doesn’t mean I’ve missed that it is in the picture. I wanted to get this much out there now, and avoid trying to cover as many bases as possible all at once.

There have been many (Sarah [Constantin] and Benquo among them) who have been trying to talk for a long time, with many many words, about the problems with EA. I will consider that question beyond scope here, but rest assured I Have Thoughts.

Since then Zvi and others have made good on their intentions to point out said problems with effective altruism. I intend to engage these thoughts at length in the future, but suffice it to say for now that local rationalist communities outside the Bay Area appear to have experienced being 'eaten' by EA worse than Berkeley has.


I never bothered to tie up the loose ends I saw in the comments on Zvi's post last year, but something recently spurred me to do so. From Benquo's recent post 'Humans need places':

I am not arguing that it would merely be a nice thing for Bay Arean EAs and Rationalists to support projects like this; I am arguing that if you have supported recruiting more people into your community, it is morally obligatory to offer a corresponding level of support for taking care of them once you are in community with them. If you can’t afford to help take care of people, you can’t afford to recruit them.

If you don’t have enough for yourself, take care of that first. But if you have more than enough to take care of your private needs, and you are thinking of allocating your surplus to some combination of (a) people far away in space or time, and (b) recruiting others to do the same, I implore you, please first assess - even approximately - the correct share of resources devoted to direct impact, recruiting more people into your community, and taking care of the community’s needs, and give accordingly.
[...]
The Berkeley EA / Rationalist community stands between two alternatives:
1. Pull people in, use them up, and burn them out.
2. Building the local infrastructure to support its global ambitions, enabling sustainable commitments that replenish and improve the capacity of the people making them.

It's important for rationalists in Berkeley to know that, from where rationalists around the world are standing, these statements could ring hollow. The perception of the Centre for Effective Altruism slighting the Berkeley REACH is mirrored many times over in rationalists feeling like Berkeley pulled in, used up and burned out whole rationalist communities. The capital of a nation receives resources from everyone across the land. If the capital city recruits more citizens to the nation, is it not morally obligatory for the capital city to offer a corresponding level of support for taking care of them once they've joined the nation? Is it not the case that if the rationality community cannot afford to take care of our people, then we can't afford to recruit them?

The worldwide rationalist project stands between two alternatives:

  1. Seed new local communities, use them up, and burn them out.
  2. Build the global infrastructure to support its global ambitions, enabling sustainable commitments that replenish and improve the capacity of the local communities making them.

This isn't about the Berkeley rationalist community, but rationalist communities everywhere. In reading about the experiences of rationalists in Berkeley and elsewhere, I've learned their internal coordination problems are paralleled in rationalist communities everywhere. The good news in the bad news is that if all rationalist communities face common problems, we can all benefit from working towards common solutions, so global coordination may not be as difficult as one might think. I wrote above that the Vancouver rationality community has recently taken on as our mission our own development, and we're not recovering from years of failures past so much as flourishing like never before. We haven't solved all the problems a rationalist community might face, but we've been solving a lot. As a local community organizer, I developed tactics for doing so on the premise that if they worked in Vancouver, they should work for any rationalist community. And they worked in Vancouver. I think they're some of the pieces of the puzzle of building global infrastructure to match the rationality community's global ambitions. To lay that out will be the subject of my next post.

Comments

A couple friends of mine who were early attendees of a CFAR workshop lived in the Bay Area for several months in 2013, and returned home with stories of how wondrous the Bay Area was. They convinced several of us to attend CFAR workshops as well, and we too returned home with the sense of wonderment after our brief immersion in the Berkeley rationality community. But when my friends and I each returned, somehow our ambition transformed into depression. I tried rallying my friends to try carrying back or reigniting the spark that made the Berkeley rationalist community thrive, to really spread the rationalist project beyond the Bay Area. 

You seem to be conflating "CFAR workshop atmosphere" with "Berkeley Rationalist Community" in this section, which makes me wonder if you are conflating those things more generally.

The depressive slump post-CFAR happens *in Berkeley* too. The thriving community you envision Berkeley as having *does not exist,* except at CFAR workshops. The problem you're identifying isn't a Bay-Area-vs-the-world issue, it's a general issue with the way CFAR operates, building up intense social connections over the course of a weekend, then dropping them suddenly.

it's a general issue with the way CFAR operates, building up intense social connections over the course of a weekend, then dropping them suddenly.

So, this is definitely a thing that happens, and I'm aware of and sad about it, but it's worth pointing out that this is a generic property of all sufficiently good workshops and things like workshops (e.g. summer camps) everywhere (the ones that aren't sufficiently good don't build the intense social connections in the first place), and to the extent that it's a problem CFAR runs into, 1) I think it's a little unfair to characterize it as the result of something CFAR is particularly doing that other similar organizations aren't doing, and 2) as far as I know nobody else knows what to do about this either.

Or are you suggesting that the workshops shouldn't be trying to build intense social connections?

clone of saturn:
I don't think he was criticizing CFAR workshops, but people who implicitly expect their own communities to automatically produce the same intense social connections.
Evan_Gaensbauer:
Yes, this is what I was getting at. Thanks.
Evan_Gaensbauer:
I agree with these statements, and clone of saturn is correct that I was talking about an implicit expectation that other rationalist communities will produce the same intense social connections found at CFAR workshops (and also attributed to the Berkeley community generally, but as stardust points out this isn't as amazing as myself and others had built it up to be).
Zvi:
Is this suggesting that top-tier Berkeley is even eating the seed corn of Berkeley and making everyone but its own top-tier depressed in its wake?
Raemon:
I think there is specifically a "work on x-risk" subgroup, which yes recruits from within Berkeley, and yes has some debilitating effects. I wouldn't quite characterize it the way Zvi does but will say it's not obviously wrong. [Edit: I have mixed feelings about whether or how bad the current dynamics are. I think it actually is the case that x-risk desperately needs agents, and yes this competes with non-x-risk community building which also needs agents. I think it's possible to make pareto-optimal improvements to the situation but there will probably be at least some tradeoffs that need to get made and I think reasonable people can disagree about where to draw those tradeoffs]
Zvi:
We can all agree that x-risk prevention is a Worthy Cause, or even the most worthy cause. And at some point, you need to divert increasing parts of your resources to that rather than to building resources to be spent, and that this time is, as one otherwise awful teacher of mine called it, immediately if not sooner.

The key question, in terms of implications/VOI, is: Is 'work on x-risk' the kind of all-consuming task (a la SSC's scholars who must use every waking moment to get to those last few minutes where they can make progress, or other all-consuming jobs like start-up founder in a cash crunch) where you must/should let everything else burn, because you have power law returns to investment and the timeline is short enough that you'll burn out now and fix it later? Or is it where you can and should do both, especially given there isn't really a cash crunch and the timeline distribution is highly uncertain and so is what would be helpful?

I want vastly more resources into x-risk, but some (very well meaning) actors have taken the attitude of 'if it's not directly about x-risk I have no interest' and otherwise making everything fit into one of the 'proven effective' boxes, which starves community for resources since it doesn't count as an end goal. It's a big problem.

Anyway, whole additional huge topic and all that. And I'm currently debating how to divide my own resources between these goals!
Evan_Gaensbauer:
I've got a lot of thoughts on this myself I haven't gotten down yet either, but it appears many effective altruists and rationalists share your perspective of a common problem disrupting other community projects. See this comment.
Evan_Gaensbauer:
This ties into an underrated factor I talked about in this comment:
stardust:
Perhaps? I am not sure if there is even a coherent top-tier. If there is I am not part of or aware of it.
Evan_Gaensbauer:
This was the experience in Vancouver after CFAR workshops, and the atmosphere persisted for a long time. It wasn't only me conflating "[big event] atmosphere" with "Berkeley Rationalist Community": a lot of other people in Vancouver did the same, and in how a lot of rationalists from elsewhere talk about the Berkeley Rationalist Community (I'm going to call it the Bayesian Area), it's often depicted as super awesome. The first thing that comes to mind is a lot of rationalists from outside of Berkeley only visit town for events like CFAR workshops, CFAR alumni reunions, EA Global, Burning Man, etc. So if one rationalist visits Berkeley a few times a year and always returns to their home base talking about their experiences right after these exciting events, it makes the Berkeley community itself seem constantly exciting. I'm guessing the reality is the Berkeley community isn't always buzzing with conferences and workshops, and organizing all those things is actually very stressful. There definitely is a halo around the Berkeley Rationalist Community for other reasons:
* It's often touted that 'leveling up' to the point one can get hired at an x-risk reduction organization, or work on another important project like a startup in Berkeley, is an important and desirable thing for rationalists to do.
* There's often a perception resources are only invested in projects based in the Bay Area, so trying to start projects with rationalists elsewhere and expecting to sustain them long-term is futile.
* Moving to Berkeley is still inaccessible or impractical for enough rationalists scattered everywhere that (especially if their friends leave) it breeds a sense of alienation and being left behind/stranded as one watches everyone else talk about how they *can* flock to Berkeley. Combined with the rest of the above, this can also unfortunately breed feelings of resentment.
* Rationalists from outside Berkeley often report feeling as though the benefi
Zvi:

Thank you for writing this. I think your statement of the fundamental puzzle is basically accurate. I don't know what to do about it. If I felt that by investing in NYC (or some other place) I could build up a community I'd want to be a part of in the long term, I'd devote effort to that, but I don't know how to prevent my work from being raided and destroyed by Berkeley, so I don't do the work. Hell, I don't even know how to get those people to stop recruiting me, or my wife, every chance they get. Mentioning 'the fire of a thousand suns' and writing many articles about this does not seem to prevent it causing direct stress and serious damage to my life, on an ongoing basis, even after the posts this references.

Hell, the latest such attempt was yesterday.

[Brainstorming]

One idea is to try to differentiate the NYC 'product' from the Berkeley 'product'. For example, the advantage of Vancouver over the Bay Area is that you can live in Vancouver if you're Canadian. The kernel project attempted to differentiate itself through e.g. a manifesto. In the same way, you could try to create an identity that contrasts with the Bay Area's somehow (for example, figure out the top complaints people have about the Bay Area, then figure out which ones you are best positioned to solve--what keeps you in NYC?) Academic departments at different universities are known for different things; I could imagine a world where rationalist communities in different cities are known for different things too.

Zvi:
It's a good idea if there's something we can come up with that's a sufficient draw and is actually raid-proof. The other issue is that trying and failing is a disaster - e.g. MetaMed was an attempt to do many things, this was one of them (even if that wasn't the intent), and its failure cost us several key community members like Sarah+Andrew.

I'm sorry that happens to you.

Zvi:

Thank you. I appreciate that.

Chris_Leong:
That's interesting. I would expect that New York would be a large enough city that it should be possible to build up a strong community there.
Raemon:
It was – but as Zvi says, it keeps getting gutted.

The perception of the Centre for Effective Altruism slighting the Berkeley REACH

I had hoped this was clear in my original post, but apparently it wasn't - I'm not saying CEA owes Berkeley REACH anything. I'm just saying we shouldn't conflate CEA with the sort of organization that would support the Berkeley REACH, and that Bay Area locals should fund the neglected cause of themselves having nice things locally.

CEA turned down my proposal because there were other, more established groups than REACH with clearer track records of success and better thought out metrics for success/failure applying for the same round of grants. I am working on building up a track record and metrics/data capture so that I can reapply later.

Zvi:

I read this as "CEA cares more about procedures that appear objective and fair and that can be defended, and not making mistakes, than doing the right/best thing." That may or may not be fair to them.

I do know that someone recently claiming to be brought in to work for CEA (and raided by SF from NYC, and who proceeded to raid additional people from NYC), claimed that CEA is explicitly looking to do exactly this sort of thing, and was enthusiastic about supporting an NYC-based version of this (this was before either of us knew about REACH, I believe), despite my obvious lack of track record on such matters, or any source of good metrics.

If they'll only support REACH after it has a proven track record that can point to observable metrics to demonstrate an impact lower bound, it's the proverbial bank that only gives you a loan when you don't need one.

I do think Benquo was clear he wasn't calling on CEA to do anything, just observing that they'd told us who they were. And we were free to not like who they were, but the onus remained on us. That sounds right.

stardust:
I think funding REACH before there was a track record would've been financially risky. I chose to take that risk personally because I didn't see how it would happen without someone doing something risky. It certainly would have been nice to have gotten support from CEA right away, but I don't think they were wrong to choose to focus resources on people who'd been working on community building for longer, and likely had fewer resources to spare.
Zvi:

I can appreciate that. If CEA is budget constrained, and used all its resources on proven community builders doing valuable projects, I can't really argue with that too hard. However...

If CEA did it because you had personal resources available to sacrifice in their place, knowing you would, that seems like a really bad principle to follow.

If CEA feels it can't take 'risk' on this scale, in the sense that they might fund something that isn't effective or doesn't work out, that implies curiously high risk aversion where there shouldn't be any - this would be a very small percent of their budget, so there isn't much effective risk even if CEA's effectiveness was something to be risk averse about, which given its role in the overall ecosystem is itself questionable. It's a much smaller risk for them to take than for you to take!

Evan_Gaensbauer:
Peter Hurford wrote last year on the Effective Altruism Forum about the 'hits-based giving' approach the Open Philanthropy Project takes toward funding projects, inspired by YCombinator. Peter also summarizes 80,000 Hours' application of start-up principles to evaluating projects. (For those unfamiliar, 80,000 Hours is the careers advising organization that is part of the Centre for Effective Altruism, and both 80,000 Hours and the CEA were incubated by YCombinator.) He makes a couple points relevant to your and Benquo's observation about how non-profit investing should be more risk-neutral than appears to be the case between CEA and the Berkeley REACH.
Benquo:
As another person who's worked in finance, I endorse this analysis completely.
Evan_Gaensbauer:
I had difficulty finding a word to get across what I meant, so I went with 'slighted', but I didn't think that's what you meant. My interpretation of your original post was not that you think CEA owes Berkeley REACH in particular, but that you think CEA ought to be the kind of organization more willing to consider community projects like the Berkeley REACH. Of course it's apparent now that isn't what you meant either. Thanks for clarifying. One point I was getting across is there is a perception in the rationality community that maybe not the Berkeley rationality community itself, but leaders or key organizations there, should play for rationalist communities around the world the role you're hoping Berkeley rationalists play as patrons of the Berkeley REACH, since Berkeley has received so much from the rest of the rationality community. Whether this sense of owed reciprocity is fair is another question entirely (I personally don't think it's the right question to be asking if we want to find solutions to problems the community faces; I'm still working on my thoughts on that though).

I can understand the frustrations of people like Zvi who don't want to invest in local rationality communities, but I don't think that reaction is inevitable.

I went to a CFAR mentor's workshop in March and it didn't make me sad that the average Tuesday NYC rationality meetup isn't as awesome. It gave me the agency-inspiration to make Tuesdays in NYC more awesome, at least by my own selfish metrics. Since March we've connected with several new people, established a secondary location for meetups in a beautiful penthouse (and have a possible tertiary location), hosted a famous writer, and even forced Zvi to sit through another circle. The personal payoff for investing in the local community isn't just in decades-long friendships, it's also in how cool next Tuesday will be. It pays off fast.

And besides, on a scale of decades people will move in and out of NYC/Berkeley/anywhere else several times anyway as jobs, schools, and residential zoning laws come and go. Several of my best friends, including my wife, came to NYC from the Bay Area. Should the Areans complain that NYC is draining them of wonderful people?

One of my favorite things about this community is that we're all geographically diverse rootless cosmopolitans. I could move to a shack in Montana next year and probably find a couple of people I met at NYC/CFAR/Solstice/Putanumonit to start a meetup with. Losing friends sucks, but it doesn't mean that investing in the local rationality community is pointless.

Evan_Gaensbauer:
Thank you for making this comment. Of all the reactions to this post, this one best captures how I want rationalists outside the Bay Area to relate to it going forward. Of course it doesn't go as far as I'd like, but I'm unsure of how I want to take it.

I've been reading some of Zvi's posts from last year, which are wrongly pessimistic not because they're a self-fulfilling prophecy preventing non-Berkeley rationality communities from achieving their values, but because they're a map of how rationality communities develop that doesn't match the territory. (I'm aware things were more tense between NYC and Berkeley a year ago, and while I don't know all the details, I imagine Zvi had sufficient reason for how he felt, and may not endorse as strongly now everything he said then.)

At the same time, not regarding inter-community dynamics but the whole rationality movement, I feel like the Community has failed to uphold the Craft. This isn't the same as not devoting enough resources, or doing so in the right way, toward AI alignment or another mission. It's about the sense I got from reading posts like this one from last year, and my sense other rationality communities are like Berkeley now: rationalists have an aversion to the changes trying to level up might bring to their communities because it would disturb the local state of affairs too much.

In Vancouver, we never blamed the Bay Area for our woes. I think it partially induced our woes, but I don't think anyone, from the scale of the individual to the whole Berkeley rationality community or any subset in between, should be blamed for what's happened. We depressed ourselves with how inadequate we seemed relative to Berkeley, and to the extent the Berkeley rationality community perpetuates that mindset, they're preventing the expansion of the Craft. That nobody in Berkeley talks about that, and barely anybody who complains about Berkeley mentions this, leads me to think it's a huge blind spot for all of us. In the past there have been

Reading this I was reminded of something. Now, not to say rationality or EA are exactly religions, but the two function in a lot of the same ways, especially with respect to providing shared meaning and building community. And if you look at new, not-state-sponsored religions, they typically go through an early period where they are small and geographically colocated, and only have a chance to grow after sufficient time with everyone together if they are to avoid fracturing, such that we would no longer consider the growth "growth" per se and would sooner call it dispersion. Consider for example Jews in the desert, English Puritans moving to North America, and Mormons settling in Utah. Counterexamples that perhaps prove the rule (because they produced different sorts of communities) include early Christians spread through the Roman empire and various missionaries in the Americas.

To me this suggests that much of the conflict people feel today about Berkeley is around this unhappiness at being rationalists who aren't living in Berkeley when the rationality movement is getting itself together in preparation for later growth, because importantly for what I think many peo... (read more)

Evan_Gaensbauer:
I agree with all of this, except that existential risk reduction and other potential goals of the rationality community don't fit with waiting for rationality to coalesce into a world religion, which you've already acknowledged. Also, I feel like just because it's the rationality community, we should find a way to create tighter feedback loops, coalescing into a worldwide community in a shorter period than religions typically take. Personally I'm more motivated by the Craft than the Community, but I figure to rally the whole community both are necessary (and interdependent?), so I'm still trying to hack together a way to balance both while accelerating sustainable development of local rationality communities.

On a sufficiently meta level, the cause of the problem may be that both rationality and EA thought leaders have roots in disciplines like game theory, microeconomics, and similar. These styles of analysis usually disregard topology (the structure of interactions).

For better or worse, rationalists and effective altruists actually orient themselves based on such models.

On a less meta level

Possibly I'm overconfident, but from a network science inspired perspective, the problem with the current global movement structure seems quite easily visible, and the solutions are also kind of obvious (but possibly hard to see if people are looking mainly through models like "comparative advantage"?).

So what is the solution? A healthy topology of the field should have an approximately power-law distribution of hub sizes. This should be true also for related research fields we are trying to advance, like AI alignment or x-risk. If the structure is very far from that (e.g. one or two very big hubs, then nothing, then a lot of groups two orders of magnitude smaller fighting for mere existence), the movement should try to re-balance, supporting growth of medium-tier hubs.

It seems this view is now gradually spreading, at least in the European effective altruism community, so the structure will get better.

(Possible caveat: if people have very short AGI timelines and high risk estimates, they may want to burn whatever is available, sacrificing future options.)
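
As a concrete illustration of the contrast Jan is drawing, here is a minimal sketch. The numbers are made up purely for illustration, not drawn from any community survey, and the Zipf draw is just one convenient way to generate a heavy-tailed sample:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# "Healthy" topology: 50 hub sizes drawn from a heavy-tailed (Zipf) law,
# giving a few big hubs, a real middle tier, and many small groups.
power_law = np.sort(rng.zipf(a=2.0, size=50))[::-1]

# "Unhealthy" topology: one or two huge hubs, no middle tier, and many
# groups two orders of magnitude smaller.
lopsided = np.sort(np.array([500, 300] + [5] * 48))[::-1]

print("power-law-ish hub sizes:", power_law[:12], "...")
print("lopsided hub sizes:     ", lopsided[:12], "...")
```

The first list falls off gradually; the second has a cliff exactly where the medium-tier hubs should be.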

A healthy topology of the field should have an approximately power-law distribution of hub sizes. This should be true also for related research fields we are trying to advance, like AI alignment or x-risk. If the structure is very far from that (e.g. one or two very big hubs, then nothing, then a lot of groups two orders of magnitude smaller fighting for mere existence), the movement should try to re-balance, supporting growth of medium-tier hubs.

Although my understanding of network science is abecedarian, I'm unsure both of whether this feature is diagnostic (i.e. whether divergence from power-law distributions should be a warning sign) and of whether we in fact observe overdispersion even relative to a power law. The latter first.

1) 'One or two big hubs, then lots of very small groups' is close to what a power law distribution should look like. If anything, it's plausible the current topology doesn't look power-lawy enough. The EA community overlaps with the rationalist community, and it has somewhat better data on topology: the hub sizes of the EA community are pretty even. This also agrees with my impression: although the bay area can be identified as the... (read more)

Jan_Kulveit:
1) Thanks for the pointer to the data. I have to agree that if the surveys are representative of the EA / rationalist community, then actually there are enough medium-sized hubs. When plotting it, the data seem to look reasonably power-lawy (an argument for greater centralization could have the form of arguing for a different exponent). I'm unsure about what the data actually show - at least my intuitive impression is much more activity is going on in the Bay Area than suggested by the surveys. A possible reason may be the surveys count equally everybody above some relatively low level of engagement (willingness to fill a survey), and if we had data weighted by engagement/work effort/... it would look very different. If the complaints that hubs are "sucking in" the most active people from smaller hubs are right, then big differences between "population size" and "results produced" can be a consequence (effectively wasting the potential of some medium-sized hubs, because some key core people left, damaging the local social structure of the hub).

2) Yes, there are many effects leading to power laws (and influencing their exponents). In my opinion, rather than trying to argue from first principles which of these effects are good and bad, it may be more useful to find comparable examples (e.g. of young research fields, or successful social movements), and compare their structures. My feel is the rationality/EA/AI safety communities are getting it somewhat wrong. Certain 'jobs' seem to have this property: a technical AI researcher in (say) Japan probably can have greater EV working in an existing group (most of which are in the Bay) rather than trying to seed a new AI safety group in Japan. This certainly seems to be the prevalent intuition in the field, based on EV guesstimates, etc., and IMO could be wrong. Or, speculation, possibly isn't wrong _per se_, but does not take into account that people want to be in the most prestigious places and groups anyway, and already include th
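
For what it's worth, a quick way to eyeball the "reasonably power-lawy" check Jan describes is a rank-size fit on log-log axes. A minimal sketch, with hypothetical hub sizes standing in for the actual survey data:

```python
import numpy as np

# Hypothetical hub sizes, illustrative only (not the actual survey numbers).
sizes = np.sort(np.array([800, 260, 140, 90, 60, 45, 30,
                          22, 15, 11, 8, 6, 5, 4, 3]))[::-1]
ranks = np.arange(1, len(sizes) + 1)

# Under a power law, log(size) vs. log(rank) is roughly linear, and the
# slope estimates the exponent. Least-squares on log-log axes is a crude
# check; for real data a maximum-likelihood fit (Clauset et al.) is better.
slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
print(f"rank-size slope ~ {slope:.2f} (classic Zipf corresponds to ~ -1)")
```
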
Chris_Leong:
I don't suppose I could persuade you to write up a post with what you consider to be some of the most important insights from network theory? I've started to think myself that some of our models that we tend to use within the rationality community are overly simplistic.

This isn't about the Berkeley rationalist community, but rationalist communities everywhere. In reading about the experiences of rationalists in Berkeley and elsewhere, I've learned their internal coordination problems are paralleled in rationalist communities everywhere.

I'm not sure to what extent that's true. It seems to me like Berkeley has problems of status competition that come with scale, which I don't see in my local LessWrong community the way I see them described when I talk with people about the Bay Area.

If there are more people interested in going to an event than there are spaces for the event, you need to restrict entry, and thus people have to compete over entry.

stardust:
I don't think I've ever seen an event with more people interested than able to attend in Berkeley. If anything, it's difficult to get people to come out for events.

I actually think this happens fairly frequently, although it may be happening sort of invisibly:

  • I think it most concretely happened at the last Winter and Summer Solstice – in this case it was explicitly due to event insurance concerns and explicit attendee caps.
  • More often and more generally: I think, especially for medium-sized parties (basically any time it's a private FB event, and the room ends up pretty full), it's often the case that, before you get to the point where people notice and feel excluded, there's a pre-emptive pass where only a smaller subset of people get invited in the first place. The competition is happening quietly in the social network.
stardust:
Ah, yeah, it did happen at last summer's solstice, I had forgotten. I was not involved with the winter solstice and didn't know about similar problems there. I do agree that house parties are often selective, but I have never seen an event with a topic (as opposed to a purely social party) have more interest than the space allowed, which was the category of thing that was in my head when I said "event" above. I consider house parties to be more about hanging out with friends than about "the community" or whatnot.
Raemon:
Yeah, agreed that events that are "expecting effort" on the part of participants don't usually have this problem. The place where it seems most relevant is events that are sort of on the border between "hanging out with friends" and "hanging out with community" – house parties that play a large role in determining the overall social scene for Berkeley, where, say, 50-100 people get invited, but there are 200 people in the area. (This is not me saying anyone is doing anything wrong, just, it's a thing to be aware of.)
stardust:
Yeah. For me, events at REACH are a good way to get to know new people and decide if I trust them enough to invite them to more private events. I think a lot of folks in the community are already at capacity for how many social connections they can keep up and so don't end up wanting to get to know new people. I think some of this stems from the fact that many people seem to prefer talking to folks one on one which makes it hard to parallelize social time. My personal preference is for groups of 5-10, sometimes within a larger social setting, and have been sorta trying to impose this preference on others through doing things at REACH :P
ChristianKl:
That's basically the dynamic I was referring to. You don't have that to the same extent with fewer people in a community.
Raemon:
Yuppers. I definitely think the addition of REACH has a good chance of reducing or eliminating this problem – having lots of public-facing events solves a lot of the issues.
gwillen:
The winter solstice last year used the same venue it had used the previous year, but the venue imposed a new, lower restriction on the maximum number of attendees, due to some new interpretation of the fire code or something. As a result, tickets did sell out. (I wasn't close enough to organization last year to know how last-minute the change was, but my impression was that there was some scrambling in response.) This year a new venue is being sought that can better accommodate the number of people who want to attend.

Two somewhat independent thoughts:

1) If you think tech money is important, you need to be in the bay area. Just accept that. There's money elsewhere, but not with the same concentration and openness.

2) Are you focused on saving the world, or on building community/ies who are satisfied with their identity as world-savers? "Bring them in, use them up" _may_ be the way to get the most value from volunteer sacrifices. It may not - I haven't seen a growth plan for any org that explicitly has many orders of magnitude of increase while... (read more)

Zvi:
The money in the Bay uses 'if you're not in the Bay you're not serious, and even if you are other Bay money won't take you seriously so I can't afford to' as a coercive strategy to draw people there. Parallel with the community issues. Giving in to such tactics makes the problem that much worse and it snowballs. Yes, Bay tech money is bigger and more our flavor there, but there's lots in many other places, and we'd get more out of what money exists if we were spread out than if we all chased the biggest pile, even with that pile playing hostile negative-sum games on us.
Dagon:
Right. That's my "just accept it" point. If you want that money, you (currently) have to play by those rules. If you don't want to play that way, you need to stand up and say that your plan isn't based on Bay Area money/support levels. It's hard for me to understand the use of "coercive" here. Other than choosing not to give you money/attention, what coercion is being applied? Even so, I think that strategy (to draw the serious people who have the capability to contribute) is a small part of it. It's mostly just a simple acknowledgement that distance matters. It's just a bit more hassle to coordinate with distant partners, and that's enough to make many want to invest time/effort/money more locally, all else equal. This is compounded by the (weak but real) signals about your seriousness if you won't find a way to be in the center of things.
Evan_Gaensbauer:
This dovetails with my experience and with what I've heard at other points in the community, as I described in this comment:
Evan_Gaensbauer:
This is true. There are reasons other than community-building to not be concentrated in one place. I don't think trying to reverse the relatively high concentration of rationalists in the Bay Area is at this time a solution to common community problems.

The idea that "bring them in, use them up" may get the most value from volunteer sacrifices strikes me as pretty unlikely. Often even more so among EA organizations than ones in the rationality community, world-saving operations which try this strategy appear to have a higher turnover rate, and they don't appear to have improved enough to compensate for that. The Centre for Effective Altruism and the Open Philanthropy Project are two organizations which have close ties and are the two biggest funders in effective altruism, which also covers x-risk/world-saving rationalist projects. They're taking more of a precision approach, building community/ties in a way they think will maximize the world-saving-ness of the community. Not everyone agrees with the strategy (see this thread), but it's definitely a more hands-on approach, moving away from the "bring them in, use them up" model that was closer to what EA organizations tended to do a few years ago.

Many of the other comments on this post point to a trade-off between a world-saving focus and rationality community-building as an issue of concern, but my sense is the tension exists because both are considered important, so the way forward is to find better ways to not lose community-building to world-saving.

I am sort of agnostic about whether the Berkeley community is a good idea or not. On one hand it certainly feels pointless to try to build up any non-Berkeley community. If someone is a committed rationalist they are pretty likely to move to Berkeley in the near future. In addition, it is very hard to constantly lose friends. This post probably best captures the emotional reality:

"I have lost motivation to put any effort into preserving the local community – my friends have moved away and left me behind – new members are about a decade younger than my... (read more)

One pattern I'm noticing is that because of the comparative advantage of citizenship in other countries, and the relative difficulty of attaining permanent residency in the United States, the communities of rationalists abroad are more stable over time: it's simply harder to convince people to move to the United States. For example, post-secondary education being more subsidized in countries aside from the United States, not just in undergrad but in graduate studies as well, keeps non-American rationalists in their home countries until their mid-to-late twenties. That's young enough that I know rationalists who muse about moving to Berkeley to work on AI alignment or another community project someday, but I also know a lot of rationalists who have set down roots where they are by then, and aren't inclined to move. Another thing is that if a rationalist doesn't have a university degree or skills highly in demand (e.g., STEM) by big corporations, getting health insurance and visas is difficult enough that for a lot of rationalists it doesn't make sense to try emigrating to the United States. This first post I wr... (read more)

So its hard to tell people to refrain from moving to Berkeley

I apologize for possibly/probably twisting your words a bit here, but I never have trouble telling people to refrain from moving to the Bay/Berkeley. I tell them I lived there for a few years and it’s a pretty unpleasant place, objectively, along any of ten different metrics relevant to comfort and peace of mind. I tell them I never actually developed any sense of belonging with the local Rationalist Community, so it’s not guaranteed that that will happen. I tell them I make a pretty good amount of money in many cities, but since I’m not a Comp Sci grad that doesn’t translate to a decent living in Berkeley. I tell them that, on top of that, Berkeley is one of the most expensive places to live in the world, and if there were some kind of objective ratio of cost of living divided by objective comfort/quality/value-of-a-dollar, Berkeley would be near the top worldwide.

I also don’t find the proposition that you have to literally move to an expensive unpleasant overcrowded dystopian city in order to be rational to be particularly, uh, rational.

8Zvi6y
If you could turn that warning into a post, I think it might be helpful, especially if you can be explicit about things. Having it come from someone with experience living there helps make the message credible, and helps you craft a better message. I worry my words ring hollow, and I can't make clear much of what I see.
4sapphire6y
I don't tell everyone to move to Berkeley. But if you are heavily invested socially in the rationalist community, you are passing up a lot of personal utility by not moving to Berkeley. Other considerations apply, of course. But I think the typical highly invested rationalist would be personally better off if they moved to Berkeley. Whether this dynamic is good for the community long-term or not is unclear.
2Elo6y
Or you could start a new branch.
2Evan_Gaensbauer6y
What do you mean by a new branch of the rationality community? John Maxwell suggested in another thread that local rationality communities aside from Berkeley could have comparative advantages in specializing in offering rationalists the sorts of things they might want but typically can't find in Berkeley. This has been the intention of other projects to build up local rationality communities, like Project Kernel (which is currently experiencing significant problems).
4Elo6y
I meant "a new local meetup".
2Evan_Gaensbauer6y
Alright, that makes sense. I was reading some of Zvi's other posts on his blog about the rationality community, and I think there are significant advantages to starting a new local meetup that he was missing. Some of them used to apply to me until the last few months, when we had success in starting a new local meetup after organization had fallen through for almost a year.
2Elo6y
That seems common. "Keep at it" is my only advice. Let us know if you make breakthroughs.
7Evan_Gaensbauer6y
Thanks for your response. So there is the 'Craft' and the 'Community', or at least that is sometimes how rationality is modeled, and the Community can be broken down into Berkeley and the communities elsewhere. But if rationality is also about a mission to ensure human values are carried to the stars, and right now that hinges on AI alignment, it makes sense to me that the rationality community is significantly concentrated in the Bay Area. This and other mindsets of singular focus in the name of the Craft appear to come at some expense to the Community, within Berkeley as well: the last year has seen some people in Berkeley ask whether the Berkeley community is good for the community as a whole. I think this might be part of a worldwide problem in rationality, which I only have half an idea of how to tackle, and I might need to get a lot of thoughts down before I figure out where I'm going with them. There are stories of mixed success throughout the community in building rationalist communities outside of Berkeley, which gives me some hope, but reading about these experiences also leaves me ambivalent. I'm afraid anything local rationality community organizers might recommend is something NYC or another once-flourishing rationalist community has already tried before without success. I'm also afraid that if a new community takes advice that worked somewhere else on how to build up while retaining membership over time, and then fails, the failure will greatly discourage whoever tried launching the new community. Ultimately I consider the struggles the community faces to be hard optimization problems. Ideally we would find ways to create similar outcomes for rationalists in lots of different places, but I'm holding off on proposing solutions until I've looked at the problem from more angles.

I'm confused by either your Seattle timeline or your use of the term "Rationality Reading Group."

As far as I know, I started the Rationality Reading Group in 2015, after my Jan CFAR Workshop. We read through a bunch of the Sequences.

I left Seattle in late 2016 and left RRG in some other capable hands. To this day, RRG (afaik) is still going and hasn't had any significant breaks, unless they did and I just didn't know about it.

In any case, I'd appreciate some kind of update to your post such that it is either more accurate or less confusing...

7Unreal6y
Also, the story is basically: for a while there was a LessWrong meetup, but then this got dropped and transformed into an EA Meetup. Then there were only EA meetups for a while. Then I started RRG and brought rationality back as its own hub, creating the Seattle Rationality FB group as well. The rationality community grew. Now there are multiple rationalist group houses including a new hub. People did leave for Berkeley, but weekly RRG is still going afaik, and there is still an active community, although its composition is perhaps quite different now.
2Evan_Gaensbauer6y
I've edited my post. Thanks for clarifying.
As a local community organizer, I developed tactics for doing so with the intention that if they worked in Vancouver, they should work for any rationalist community.

As a fellow community organizer (Berlin), I would be happy to read about them.

7stardust6y
I'm working on building up a similar reproducible set of operating guidelines for REACH and would be very interested in comparing notes.
4ChristianKl6y
I just ran my third open LessWrong meetup in Berlin, about gratitude. Before that, I ran one in Hamburg a while ago, and Christian Kamm was responsible for running the monthly LessWrong meetup in Berlin. After running the first meetup, I wrote up the idea for it under "How do we change our minds? A meetup blueprint." I organized Quantified Self meetups in Berlin from 2011 to 2013 and have a bit of other community-leading experience. I'm happy to talk more.
3Evan_Gaensbauer6y
I've got a bunch of different ideas, some of which are about creating a local rationalist culture, which depending on how they pan out might become a sequence of blog posts. I also have some tips for what worked well in Vancouver, and they might work even better at a community center.

* In Vancouver, having a Facebook group has helped. Not everyone is into FB for any number of reasons, so mailing lists or a Discord server also work. Keeping people in touch online as well as offline keeps local people in the loop who are invested in the community but can't make it out in person so often, and it helps promote bonds between people.
* The biggest thing might be posting housing opportunities, and even pinning a post in our Facebook group about requests for housemates/housing, which has helped several local community members find new roommates. It has helped increase the number of rationalist sharehouses in Vancouver from 1 to 3 in 8 months. Using an online group as a digital bulletin board for housing opportunities helps local rationalists get more involved in the community, creates more rationalist spaces, and helps the houses in question retain a community culture over time. We haven't tried it for things like rationalists sharing employment opportunities with each other, but plausibly a virtual bulletin board could create more opportunities for material mutual support/exchange between rationalists. Of course, at REACH or another rationality community center (hi Seattle!), you can also do this with a literal bulletin board.
* In the Vancouver rationality/LW FB group I made two polls: one for what times of day and days of the week people were most available for doing things, and another for what kinds of activities different people wanted to do with other people. This worked well with Facebook groups because FB group polls immediately show you the most popular choices, along with the names and profil
2Evan_Gaensbauer6y
It's taking me longer than expected to get everything written down, but I explained the tactics that have generated the most value for the Vancouver rationality community in the last 6 months in this reply to stardust's comment.

Thanks for writing this post, this is a worry that I have as well.

I also believe that more could be done to build the global rationality community. I mean, I'm certainly keen to see the progress with LW2.0 and the new community section, but if we really want rationality to grow as a movement, we at least need some kind of volunteer organisation responsible for bringing this about. I think the community would be much more likely to grow if there was a group doing things like advising newly started groups, producing materials that groups could use or cr... (read more)

2Evan_Gaensbauer6y
Creating some kind of volunteer organization like that is an end-goal I have in mind, and I've started talking to other people about this project. I've volunteered for and been friends for a long time with a local EA organization, Rethink Charity, which runs the Local Effective Altruism Network (LEAN). LEAN does exactly that for EA: advising newly started groups, producing materials the groups can use, and innovating ways to help groups get organized. So as part of building a volunteer organization, I could get advice from them on how to optimize it for the rationality community. Conceivably, a community other than the rationality community steering the trajectory of AI alignment as a field might increase existential risk directly if it were abysmal at it, or counterfactually increase x-risk relative to what would be achieved by the rationality community. By 'rationality community', I also mean organizations that were started from within the rationality community or have significantly benefited from it, such as CFAR, MIRI, BERI and FLI. So my statement is based on two assumptions: 1. AI alignment is a crucial component of x-risk reduction, which is in turn a worthwhile endeavour. 2. The rationality community, including the listed organizations, forms the coalition with the best track record of advancing AI alignment with epistemic hygiene, so on priors a loss of the rationality community's relative influence on AI alignment to other agencies would mean x-risk gets reduced less than it otherwise would. If someone doesn't share those assumptions, my statement doesn't apply.