This is the second post in the Project Hufflepuff sequence. It’s also probably the most standalone and relevant to other interests. The introduction post is here.
I used to use the phrase "Rationality Community" to mean three different things. Now I only use it to mean two different things, which is... well, a mild improvement at least. In practice, I was lumping a lot of people together, many of whom neither wanted to get lumped together nor had much in common.
As Project Hufflepuff took shape, I thought a lot about who I was trying to help and why. And I decided the relevant part of the world looks something like this:
I. The Rationalsphere
The Rationalsphere is defined in the broadest possible sense - a loose cluster of overlapping interest groups, communities and individuals. It includes people who disagree wildly with each other - some who are radically opposed to one another. It includes people who don’t identify as “rationalist” or even as especially interested in “rationality” - but who interact with each other on a semi-regular basis. I think it's useful to be able to look at that ecosystem as a whole, and talk about it without bringing in implications of community.
Some noteworthy features include:
- Overlapping Patterns of Thought. There is no one central feature defining people in the rationalsphere, but there tend to be overlapping habits and patterns of thought (i.e. every two people in the cluster share at least one aspect). Some examples include:
- Being attentive to ways your mind is unreliable
- Desire to understand objective reality
- Willingness to change your mind about ideas that are important to you
- Having goals, which you care about achieving badly enough to decide "if my current habits of thought are an obstacle to achieving those goals, I want to prioritize changing those habits"
- Ambitious goals, which require a higher quality of decision-making than most humans have access to
People invested in the rationalsphere seem to have three major motivations:
- Truthseeking - How do we improve our thinking? How do we use that improved-thinking to better understand the world?
- Impact - A lot of things in the world could be a lot better. Some see this in moral terms - there are suffering people and unrealized potential, and we have an obligation to help. Others see this purely in terms of opportunity and excitement, and find the concept of "altruism" offputting or harmful. But they share this: an interest in having a big impact, while understanding that 'having a big, intentional impact' is very hard. And confusing. Lots of people have tried, and failed. If we're to succeed, we will need better understanding and resources than we have now.
- Human/Personal - Your individual life and the people you know could also be a lot better. How can you and the people you love have as much fulfillment as possible?
For some people in the rationalsphere, "Doing a good job at being human" is a thing they're already doing and don't feel a need to approach from an especially "rationality"-flavored perspective, though they still use principles (such as goal factoring) gleaned from the overall rationality project.
Others specifically do want to be part of a culture that lets them succeed at Project Human in a way that is uniquely "rationalist" - either because they want rationality principles percolating through their entire life, or because they like various cultural artifacts.
II. The Broader "Rationality Community"
Within the Rationalsphere, there is a subset of people that specifically want a community. They *also* disagree on a lot, but often want some combination of the following:
- Social structures that make it easy to make friends, colleagues, and perhaps romantic partners, who also care about one or more of the three focus areas.
- Social atmosphere that inspires and helps one to improve at one or more of the three focus areas.
- Institutions that actively pursue one of the three in a serious fashion, and that collaborate when appropriate.
- Sharing memes/culture/history. Feeling like “these are my people.”
The overlapping social structures for each focus benefit each other. Here are some examples. (I want to note that I don't think all of these are unambiguously good. Some might trigger alarm bells, for good reason.)
- CFAR is able to develop techniques that help people communicate better, think more clearly, be more effective, and choose to work on more high-impact projects. (I think even with their AI focus, they will continue to have this effect in areas non-adjacent to AI.)
- In addition to helping CFAR grads progress on their own truthseeking, impact, and human-ing, CFAR leaves in its wake a community more energetic about trying additional experiments of their own.
- Giving What We Can encourages people to fund various projects (“EA” and non-EA) more seriously than they otherwise would - in a cluster of people who might otherwise fail to do so at all.
- Startup culture helps encourage people to launch ambitious projects of various stripes, which build people’s individual skills in addition to hopefully having a direct impact on the world of some sort.
- There are spaces where the Human and Truthseeking foci overlap, that create an environment friendly for people who like to think and talk deeply about complex concepts, for whom this is an important part of what they need to thrive as individuals. It’s really hard to find environments like this elsewhere.
- GiveWell helps people concerned with Impact in obvious ways, but this in turn plays an important role for the Human focus - it gives people who don't intrinsically care that much about effective altruism a way to contribute to it *without* taking too much of their attention. This is good for the world *and* their own sense of meaning and purpose. (Although I want to note that getting too attached to a particular source of meaning can be harmful, if it makes it harder to change your mind.)
- Various other EA orgs that need volunteer work done provide an outlet for people who are not ready to jump head-first into a major project, also providing a sense of purpose.
- Parties, meetups, etc. (whether themed around Human-ing, Rationality, or EA) provide value to all three projects. They're fun and mostly satisfy human-ing needs in the moment, but they let people bump into each other, swap ideas, network, etc., which is valuable for Impact and Truthseeking.
- A community need not be fully unified. Literal villages include people who disagree on religion, morality or policy - but they still come together to build marketplaces and community-centers and devise laws and guidelines on how to interact.
In addition to the "broader rationality community", there are local groups that have more specific cultures and needs. (For example, NYC has a Less Wrong meetup group and an Effective Altruism meetup group, which have different cultures both from each other and from similar groups in Berkeley and Seattle.)
This can include both physical-meet-space communities and some of the tighter knit online groups.
III. Where does Project Hufflepuff fit into this?
I think each focus area has seen credible progress in the past 10 years - both by developing new insights and by getting better at combining useful existing tools. I think we've gotten more and more glimpses of the Something More That's Possible - at our best, I think there's a culture forming that is clever, innovative, compassionate, and most of all - takes ideas seriously.
But we're often not at our best. We make progress in fits and starts. And there's a particular cluster of skills, surrounding interpersonal dynamics, that we seem to be systematically bad at. I think this is crippling the potential of all three focus areas.
We've been making progress at this over the past 10 years - in the form of people writing individual blogposts, having facebook conversations, giving dramatic speeches, and just plain in-person effort. This has helped shift the community - I think it's laid the groundwork for something like Project Hufflepuff being possible. But the thing about interpersonal dynamics is that they require common knowledge and trust. We need to believe (accurately) that we can rely on each other to have each other's back.
This doesn't mean sacrificing yourself for the good of the community. But it means accurately understanding what costs you are imposing on other people - and therefore the costs they are imposing on you, and what those norms mean when you extrapolate them community-wide. And it means making a reflective decision about what kind of community we want to live in, so we can actually achieve our goals.
Since we don't all share the same goals and values, I don't expect this to mean the community overall shifts towards some new set of norms. I'm hoping for individual people to think about the tradeoffs they want to make, and how those will affect others. And I suspect that this will result in a few different clusters forming, with different needs, solving different problems.
In the next post, I will start diving into the grittier details of what I think this requires, and why this is an especially difficult challenge.