I've received a few notifications in the last couple of weeks that a post I wrote in 2018, about the relationship between the rationality community in Berkeley and the broader rationality community elsewhere, has received some upvotes. I don't know why more people have been reading it recently, but I've changed my mind about much of what I wrote then, so I'm writing this post as a follow-up.

What Has Changed Between 2018 and Now

How the post characterized the flow of community members between the Berkeley rationality community and other local rationality groups is exemplified by this excerpt from another 2018 post, written by Zvi on his personal blog:

In my experience, the recruitment to Berkeley was very aggressive. Sometimes it felt like: “if you don’t want to move to Berkeley as soon as possible, you are not *really* rational, and then it is a waste of our time to even talk to you.” I totally understand why having more rationalists around you is awesome, but trying to move everyone into one city feels like an overkill.

I haven't been a rationality group organizer for a few years now, so I am not very aware of how the perspectives of participants in the rationality community outside the Bay Area may have changed since 2018. My impression is that any such trend exclusively between rationality groups has been superseded by a flow driven by participants in effective altruism (EA) worldwide migrating to the Bay Area. I emphasize that for EA it's a worldwide trend because it's less a matter of people moving from other cities in the United States to the Bay Area, as it is for rationalists. EA as a movement is much bigger than the rationality community and has a much faster growth rate. Far more people are driven through EA to migrate to the Bay Area from countries around the world.

How the Relationship Between the Rationality Community and Effective Altruism Has Changed 

This dynamic in EA may now be the predominant factor determining the same dynamic in the rationality community because of changes in the relationship between EA and the rationality community. The greatest change is the development and growth of longtermism as a philosophy in and from EA. This has brought EA much closer to the rationality community in its relative prioritization of reducing existential risks posed by advanced AI (i.e., 'AI x-risks'), giving the two communities an even closer relationship and even more overlap.

The second greatest change in the last few years may be the increased urgency to solve the AI control problem. My impression is that in the last few years, rationalists in the Bay Area have dovetailed with a conclusion that it's not so ideal for the rationality community to be so concentrated in one place. Please comment if you have a very different impression, but my further sense is that there is less resentment from rationalists elsewhere toward the Bay Area community.

My hunch is that while there are many reasons for that change, one of the greatest is that the rationality community, as well as EA, has become more united in the face of how much more pressing the imperative of AI alignment has become. Other things being equal, it would in theory be better for either community to have a more robust geographic distribution. Yet coordinating such a major, community-wide effort would conflict with the greater priority of optimizing community organization to improve AI alignment.

The increased urgency of AI alignment, and the hyper-concentration of the worldwide development of advanced AI in the Bay Area, create an inertia: fundamentally re-organizing the community would be too costly a trade-off. I at least perceive that to be an increasing consensus in both the rationality community and EA, both in the Bay Area and elsewhere. It's a conclusion I have been increasingly drawn to myself, though I am probably less confident in it than many others are.

Considerations for the Future

What remains a potentially major problem in practice is the risk of the Bay Area attracting organizers from local groups elsewhere faster than those groups can secure new organizers. Even assuming a rapid talent pipeline from everywhere else to the Bay Area is the best strategy in practice, brain drains across too many other local nodes in the global network could prove catastrophic. If core organizations depend on university, local, and national groups sustaining a talent pipeline, and too many of those groups lose the very talent needed to sustain them all at once, the entire pipeline may collapse.

Especially in EA, successful group organizers and staff at budding EA-affiliated organizations are becoming ever likelier to be hired by major organizations based in the Bay Area (or another centralized hub, such as Oxford in England). The Centre for Effective Altruism (CEA) explicitly promotes those opportunities to attract applications for grants to organize local groups or build other community infrastructure. From the Community Building Grants page on the CEA's website:

Working as a professional community builder is a great way to prepare for other impactful work. Community builders develop skills, networks, and experience that could serve them well in management, operations, research, fundraising, entrepreneurship, and more. Many of our alumni move on to working for other organizations in the EA movement, or pursue long-term careers in community-building.

The rationality community need not add much extra concern if the approach being taken seems to pose major risks; there is already a lot of concern about it in EA. An article from May of this year, critical of how the current strategy has resulted in aggressive and counterproductive growth and recruitment tactics, was very well-received on the EA Forum.

For over a year now, the CEA has also been trying to support the sustained organization of city and national EA groups that serve as significant nodes in the global network and talent pipeline. This may help ensure greater continuity across changes in local or national leadership, in a way that lessens the overall risk of damage to EA's capacity to coordinate itself at a global level. Expectations of what beneficial change these marginal patches to bugs in movement-building strategy will bring are only tentative, given how recent the full recognition of these problems and the implementation of intended solutions has been.

What This May Mean for Rationality Is Up to All of You

With so much of this being about effective altruism, it isn't so relevant to the rationality community anymore. I intend to address the problems presented here in more depth on the EA Forum at a later date. It's of course still relevant for the rationality community to have some greater awareness of these dynamics in EA as well. Yet I have intended this follow-up to be a conclusion to what remains of my insights directly for rationality community-building. It's your community, so you must decide what to do next together!

6 comments

Some of what follows is similar to something I wrote on EA Forum a month or so ago.

Returns on meatspace are counterfactually important to different people to different degrees. I think it's plausible that some people simply can't keep their eye on the ball if they're not getting consistent social rewards for trying to do the thing, or that the added bandwidth you get when you move from discord to meatspace actually provides game-changing information.

I have written that if you're not this type who super needs to be in meatspace with their tribe, who can cultivate and preserve agentiness online, that it may be imperative for you to defect in the "everyone move to the bay game" specifically to guard against brain drain, because people who happen to live in non-bay cities really do, I think, deserve access to agenty/ambitious people working on projects. An underrated movement building theory of change is that someone fails out of the university entrance exam in Minneapolis, and we're there to support them.

However, I'm decreasingly interested in my hypothesis about why brain drain is even bad. I'm not sure the few agenty people working on cool projects in Philly are really doing all that much for the not-very-agenty sections of the movement that happen to live in Philly, which is a conclusion I really didn't want to draw, but I've had way too much of going to an ACX or EA meetup and meeting some nihilist-adjacent guy who informs me that via free will being fake trying to fix problems is pointless. People have to want to cultivate ambition/agentiness and epistemics before I can really add any value, I'm concluding. I read this as a point against heeding the brain drain concern. There's a sense in which I can take PG's post about cities very seriously then conclude that the nihilist-adjacent guy is a property of Philly, and conclude that it's really important for me to try other cities since what I'm bringing to Philly is being wasted and Philly isn't bringing a lot to me. There's another sense in which I take PG's post seriously but I think Philly isn't unique among not-quite-top-5 US cities, and another sense in which I don't take PG's post seriously. The fourth sense, crucially, is that my personal exhaustion with nihilist-adjacent guy doesn't actually relate to the value I can add if I'm there for someone when they flunk out of the university entrance exam (I want a shapley points allocation for saving a billion lives, dammit!).

Another remark is that a friend who used to live in the bay once informed me that "yeah you meet people working on projects very much all the time, but so many of the projects are kinda dumb". So I may end up being just as frustrated with the Bay as I am with Philly if I tried living there. Uncertain.

Thank you for this detailed reply. It's valuable, so I appreciate the time and effort you've put into it. 

The thoughts I've got to respond with are EA-focused concerns that would be tangential to the rationality community, so I'll draft a top-level post for the EA Forum instead of replying here on LW. I'll also read your EA Forum post and the other links you've shared to incorporate into my later response. 

Please also send me a private message if you want to set up continuing the conversation over email, or over a call sometime. 

A couple of missing words I had to think about:

  • "My impression [is?] that any such trend"
  • "a conclusion that it's not so [bad an?] idea for the rationality community to"

Thanks for catching those. The word 'is' was missing. The word "idea" was meant to be "ideal." I've made the changes. 

Then should the last sentence in that paragraph perhaps say "less resettlement" instead of "less resentment"?

My impression is that in the last few years that rationalists in the Bay Area have dovetailed with a conclusion that it's not so ideal for the rationality community to be so concentrated in one place. Please comment if you have a very different impression but my further sense is that there is less resentment from rationalists elsewhere to the Bay Area community.

I've edited the post, changing "resentment from rationalists elsewhere to the Bay Area community" to "resentment from rationalists elsewhere toward the Bay Area community," because that seems to reduce the ambiguity some. My use of the word 'resentment' was intentional.