Status: I wrote the draft for this article a few months before the more recent discussions. While I'm not sure that I get everything right, I believe that simply publishing my post is better than keeping it in my drawer.


CFAR was founded in the spirit of being a place where we learn to make better decisions by thinking better about the issues involved. As founders left, it drifted into an organization that tries to solve issues with bureaucracy instead of solving them through better thinking.

I will focus this post on the underlying ideas rather than the people involved. I do value the involved people whom I know, and I think they are generally great people.

CFAR's handling of Brent

One episode that CFAR itself sees as a mistake was how it dealt with Brent. Instead of investigating the ideas and reasoning processes that led to the mistake, CFAR considered the solution to be more bureaucracy. The bureaucratic solutions are ones that any other organization might have come up with. While CFAR was founded to be a place for new ideas, it simply copied bureaucratic ideas from elsewhere.

Where do reasoning errors come from? They come from what people believe. The document states, in a shallow way, something I could have read anywhere else:

As event organizers, we have a responsibility to help keep our attendees safe.

One staff member left CFAR over the mistake regarding Brent. That staff member currently runs their own personal development program. While explaining how the program works, the staff member said about responsibility:

Everyone's learning and growth is their own responsibility. I'm responsible for myself and I'm here to learn and grow and you are responsible for yourself, your learning and your growth.

One person who is in the program told me about it and took it as a great message of individual empowerment. The core issue here is that believing it's the role of event organizers to keep everybody safe is at odds with the idea that everybody is responsible for themselves. Inviting a person to a workshop while considering it that person's responsibility to ensure their own safety is at odds with the idea that it's the teacher's job to keep people safe.

I learned a lot from that staff member, and I think he's a great person.

Where does the other idea come from?

If this value difference surprises you, you might ask yourself where it comes from. The idea of people being mainly responsible for themselves exists in many New Age spaces. One space where I found it particularly troubling is Circling Europe. They are explicit about what they believe. I know that CFAR did plenty of circling, likely much of it from the Circling Europe perspective, so that might have been a major route by which the idea spread.

Do I think the Circling Europe people are bad? I heard some reports about abuse of power, but in general I consider Circling Europe good people. I know one of them from before he was involved in Circling, and learned a lot from him before. I'm happy that I learned Circling from him. I also have circled with his spouse who's one of the major teachers of Circling Europe and consider her a great person as well. I do circling nearly every week with friends. I think circling is great.

If I can figure that out from Berlin, why couldn't CFAR figure it out themselves? I think it's because they failed to think deeply about responsibility, to have conversations about why they made the decisions they made, and to share what they believed with each other.

A lot of implicit knowledge gets lost through CFAR's relatively high turnover for a personal development company. The people I consider my teachers in personal development contexts all have over two decades of teaching experience. If you think the timelines don't allow for two decades, that doesn't change anything about the harm done by losing the implicit knowledge inside people's heads. I see no reason why rationality would be different: it takes time to build up really deep experience at teaching it. It's very easy to just take "As event organizers, we have a responsibility to help keep our attendees safe" as a cached thought instead of doing the intellectual labor of thinking deeply about responsibility with each other.

As far as intellectual respect goes, the Circling Europe people have thought deeply about their view and taken a position, so in some sense it's more respectable than just taking the next cached thought.

The problem is that bureaucracy doesn't provide safety or prevent abuse of power. People who push responsibility onto bureaucratic processes likely won't catch themselves when they fail at being responsible and slide into patterns of abusing power. Powerful people in an organization use the bureaucracy in their favor; other people bend the bureaucracy in theirs.

Rationalist discourse on responsibility

Eliezer introduced the concept of heroic responsibility. In HPMOR the concept is delivered in a way that speaks to people's System 1. It's written up in Hero Licensing. Earlier, Eliezer wrote Something to Protect, in which he writes:

I have touched before on the idea that a rationalist must have something they value more than "rationality":  The Art must have a purpose other than itself, or it collapses into infinite recursion.  But do not mistake me, and think I am advocating that rationalists should pick out a nice altruistic cause, by way of having something to do, because rationality isn't all that important by itself.  No.  I am asking:  Where do rationalists come from?  How do we acquire our powers? 

Unfortunately, if you look at responsibility from that perspective, it's not obvious that the "Something to Protect" is your workshop participants rather than protecting the world from unaligned AI. From a utilitarian perspective, the well-being of workshop participants, insofar as it doesn't impact their productivity at saving the world, is a rounding error on the cosmic scale of saving the world.

In 2017 Benquo wrote Against Responsibility. It's primarily about how the idea of heroic responsibility overextends what's reasonable and how we need a new moral theory. In the article Benquo speaks about Gleb, another case where he thinks misbehavior wasn't identified well:

I don't think this is because Gleb is especially clever, or because EAs are especially bad at noticing things. I think this is because EAs identify each other by easy-to-mimic shibboleths rather than meaningful standards of behavior.

I haven't interacted with Brent personally, but from what I heard, meaningful standards of behavior might have helped identify him earlier as a problem. In the best case, the social pressure of having to adhere to a standard of behavior might even have changed his actions. We likely need a conception of responsibility that includes holding the people we take on as volunteers to standards. Benquo's post, and how it lays out the case for needing a better moral theory, seems to have been completely ignored when CFAR tried to understand what went wrong.

To also admit an error of my own: when it comes to Gleb, I personally gave him too much attention.

As far as CFAR's current thinking on responsibility goes, the word appears once in the CFAR handbook:

Instead, we recommend a focus on freeing attention. The question to ask is, “How can I rearrange my environment or my way of interacting with it so that [insert responsibility] takes as little attention as possible?” Another good framing is “How can I make problems like this one take care of themselves to the greatest possible degree?”

This is a really bad way to think about responsibility. When it comes to taking responsibility for workshop participants, your concern should not be to organize your environment so that it takes as little attention as possible.

Good-Faith Principle

Besides talking about responsibility, the handbook advocates the Good-Faith Principle:

The Good Faith Principle states that, in any interaction (absent clear and specific evidence to the contrary), one should assume that all agents are acting in good faith—that all of them have positive motives and are seeking to make the world a better place. And yes, this will be provably wrong some fraction of the time, which means that you may be tempted to abandon it preemptively in cases where your opponent is clearly blind, stupid, or evil. But such judgements are uncertain, and vulnerable to all sorts of biases and flaws-of-reasoning, which raises the question of whether one should err on the side of caution, or charity.

Brent wasn't clearly blind, stupid, or evil, and that's why, if you follow the principles of the handbook, the obvious move is to extend him the assumption of good faith. The heuristic the handbook advocates failed in the case of Brent, and it seems like nobody who was in charge of the curriculum after the Brent episode thought about how to change the heuristic to produce a better result.

We like to believe in the good of people. Believing in the good of people feels good. At the same time, acting from those principles sets us up to be exploited by bad actors. With rising resources, the EA and rationality communities increasingly face exploitation, and we need to get better at making decisions about whom to trust.

Making good decisions about whom to trust is important to avoid being exploited. It's also important for collaboration: a lack of justified trust leads to an inability to cooperate. Few EAs donate to promising projects of other people in their local EA group because they don't trust anybody in their local group strongly enough, so money instead flows through centralized institutions. Having a good theory about whom to trust would allow resources to be spent more effectively locally than through centralized organizations.


Bureaucracy growing in organizations, and bureaucrats gaining power, is often bad. One person who trained with CFAR puts a lot of value on teachers dressing professionally. Being more like a normal corporation is not how you get the people into CFAR who will actually move the art of rationality forward. That takes the kind of people who build pillow forts because it comes out high on the list when they do a rational analysis of how to have fun: the people CFAR had in the beginning. Dressing professionally is not the meaningful standard to which we should hold each other. It has to be about deeper things.

Bureaucracy isn't always bad but it's important to understand what it is for. In How To Use Bureaucracies Samo Burja writes:

The purpose of a bureaucracy is to save the time of a competent person. Put another way: to save time, some competent people will create a system that is meant to do exactly what they want — nothing more and nothing less. In particular, it’s necessary to create a bureaucracy when you are both (a) trying to do something that you do not have the capacity to do on your own, and (b) unable to find a competent, aligned person to handle the project for you.

In Burja's frame, solving the issue of responsibility with bureaucracy is essentially to say that most people at CFAR aren't competent and aligned at being responsible. While in some sense the error that happened with Brent is evidence of a lack of competence, the reaction should not be to accept that CFAR staff aren't competent at it, but to think about how to make them competent at it.


You will lose respect with some people when you don't dress professionally and build pillow forts, but I feel quite proud when I'm able to answer someone who accuses us of being cold rationalists: "We do manage to build pillow forts together when we come together, while you just use a cold bureaucratic structure for your conferences." Being a rationalist isn't a role that you can play from 9 to 5 the way most corporate jobs work. It's a deep part of personal identity, and for CFAR to work on its mission there needs to be the freedom to express that personal identity without bureaucracy interfering.

The intellectual freedom to build a pillow fort instead of rejecting the idea out of hand as unserious is central to being able to pursue a wide range of new ideas. Pillow-fort-freedom is important.

That does mean you don't get the safety that bureaucracy provides. Instead, people actually need to come to believe, on a deep level and through rational argument, in a way that makes feeling responsible part of their personal identity. It's necessary to actually reason well about when to trust people and when the Good-Faith Principle risks too much harm. That's a lot harder than bureaucracy, and it's a lot harder to hire people who fit in, but it's what's needed. Responsibility needs to be felt on a deep level.

If your goal is to pay as little attention to your responsibility as the handbook implies, bureaucracy might be a good tool but that shouldn't be the goal.

I spoke to another former staff member, who expects that CFAR will never amount to anything more than what it had already accomplished by 2017, because it moved away from the way it worked in the beginning. It's possible to scale CFAR in a bureaucratic fashion: there's an existing curriculum, and you can standardize the process of teaching it. What you don't get through bureaucratic processes is deep improvement of the existing curriculum.


To repeat it again: I do believe that all the involved people are good people (not counting Brent and Gleb). Good people who made some bad decisions. While I didn't go into steelmanning the Circling Europe position on responsibility, I do understand why smart people might take that position; I just think it's wrong because it leads to bad outcomes. To those readers who think the position should be immediately rejected out of hand, I assure you that Policy Debates Should Not Appear One-Sided.

I hope for more deep discussion among rationalists about what responsibility is, when we have it, and what a teacher in a personal development context is. I hope for a CFAR that's less bureaucratic, in which rationalists feel at home so that they don't want to leave the organization, and which builds up implicit knowledge in teachers and curriculum designers over decades to push the art of rationality forward. And, of course, pillow-fort-freedom for more pillow forts.


I haven’t been to CFAR personally (though I almost went in 2020), so I may not be the best one to comment, but I’ll try anyway.

Perhaps the true question lies on a more fundamental level than whether or not to implement the bureaucratic methods you’ve discussed.

Perhaps the decision to be made is whether or not to expand at all: that is, quantity or quality. Because the processes governing human affairs do not allow for both, at least not to equal extents.

For example, if an order-of-magnitude expansion is envisioned, avoiding the increasing use of such methods would be exceedingly difficult. In fact, it would be so tremendous an advancement for human affairs if such an expansion were realized while maintaining ‘pillow fort freedom’ that the value of such designs may even outweigh the educational goal itself.