This post is the main part of a sequence of year-end efforts to invite real conversation about CFAR, published to coincide with our fundraiser.
Introduction / What’s up with this post
My main aim with this post is to have a real conversation about aCFAR[1] that helps us be situated within a community that (after this conversation) knows us. My idea for how to do this is to show you guys a bunch of pieces of how we’re approaching things, in enough detail to let you kibitz.[2]
My secondary aim, which I also care about, is to see if some of you wish to donate, once you understand who we are and what we’re doing. (Some of you may wish to skip to the donation section.)
Since this post is aimed partly at letting you kibitz on our process, it’s long.[3] Compared to most fundraiser posts, it’s also a bit unusually structured. Please feel free to skip around, and to participate in the comment thread after reading only whatever (maybe tiny) pieces interest you.
I’d like CFAR to live in a community
I’d like CFAR to live in a community where:
In the past, CFAR didn't know how to live in a community in this way (partly because I was often in charge, and I didn’t know how to do it). But I think CFAR and I now have the ability to do this.
As an example of the gap: I used to be somehow trying to claim that we were running our organization in the best, most EA-efficient, or most rational way. As a result, whenever someone argued in public that some revised action would be better, I thought I had to either:
But now, it’s different. We are visibly a particular organization, believed in by particular people, with details. The premises we believe in together (aka our operational premises for what we CFAR staff are building) are separated out from our epistemics, and from claims about what’s objectively best.
Anyhow: requesting community membership of this sort for CFAR, and setting you guys up to have a free and full conversation about CFAR, is the main business of this post, and is the main thing I’m trying to ask of you, Dear Reader, if you are interested and able.
Kibitzing requests
Some kinds of kibitzing I’d particularly appreciate:
Introductions: Me, aCFAR… and you, Reader?
I’ll start the introductions.
I’ll also try introducing aCFAR as though it’s a particular person with a history:
If you’re up for introducing yourself (which I’d appreciate!) there are two good ways:
On to the post proper!
Workshops
Workshops have always been the heart of our work at aCFAR. We spend most of our staff time tinkering toward making the workshop good, staring at folks at the workshop to see if it is good, iterating, etc. It’s where our take on rationality comes to life, changes us, is changed by its encounters with some of you guys, and so on.
So – if you want to kibitz on our current generators – it may be where you can best see them.
For those just meeting us: A CFAR workshop is a 4.5 day retreat with about 25 varied guests, 12-ish staff and volunteers, and a bunch of hard work, rationality, and conversation. The workshop typically involves a bunch of classes on rationality techniques and lots of time to apply those techniques and work on solving actual problems in real life. We currently have our next workshop scheduled for January 21-25 in Austin, TX.
Workshops: favorite bits
Among my favorite positive indicators from our workshops:
1. People made friends at the workshops and in the alumni network.
Many workshop guests across our history have told me a CFAR workshop was the first time they’d managed to make friends in the decade-or-more since they finished college.
This wasn’t an accidental side-effect of the workshops; we tuned the workshops toward: (a) creating contexts where people could update deeply (which helps with making real friends) and (b) arranging small and interactive classes with pair work, providing a “names and faces” Anki deck, hosting lightning talks, etc. to make it easy to make new friends at the workshop.
This wasn’t a side-goal for us, separate from the main aim of “rationality training”; IMO there’s a deep connection between [conversations and friendships, of the sort that can make a person bigger, and can change them] and the actual gold near “rationality,” such that each of (true friendships, rationality) can activate the other.
2. People had conversations at the workshops that updated the real generators of their actions.
Many conversations in the default world involve people explaining why a reasonable person might believe or do as they are doing, without sharing (or often knowing) the causes of their choices. But at CFAR, the real causes of actions often were (and are) properly in the conversation.
Relatedly, people at workshops would become at least briefly able to consider changing things they’d taken for granted, such as career paths, ways of relating to other people, etc., and they’d do it in a context full of curiosity, where there was room for many different thoughts.
3. The workshop was visibly “alive” in that it felt organic, filled with zany details, etc.
If this CFAR is going well, we should have spare energy and perceptiveness and caring with which to make many side-details awesome. We did this well in the past; we seem to be doing it even better now.
For example, during Questing at our November workshop, we had CFAR instructors run short "interludes" during which people could breathe and reflect for a moment in between 10-minute hero-and-sidekick problem-solving blocks. However, due to a minor scheduling mishap, CFAR instructor Preston ended up committed to being in two places at once. Preston solved his problem by setting up an "oracle" to run his section of inner-simulator-inspired Questing interludes.
For another example, chef Jirasek created waves of life emanating from the kitchen in the form of music, food art, and sort of ostentatious interactions with the locals (e.g. replacing his whole wardrobe with stuff from some local thrift stores).
4. Truth-seeking, curiosity-eliciting, rationality-friendly context
The context at our workshops is friendly both to hearing people's perspectives deeply and to being able to point out possibly-contrary evidence.
Workshops: iffy bits, and their current state
Although there’s much I love about our old workshops, I would not be able to run them now, although I could probably cheer for someone else doing it; there’s too much I was eventually unable to stomach for myself. In particular:
Power over / doing something “to” people (current status: looks solved)
I currently aim not to take pains to impact someone unless I can take equal pains to hear them (in the sense of letting them change me, in deep and unpredicted ways). This is part of a general precept that conscious processes (such as CFAR guests) should not be subservient to processes that can’t see them (such as a rock with “follow policy X” written on it, or a CFAR instructor who hasn’t much attention to spare for the guest’s observations).
My main complaint about our past workshops is that I, and much of ‘we’, did not always hit this standard (although we tried some, and some of our staff did hit it). It’s part of my current take on how to do epistemics in groups.
More details about this complaint of mine, for those interested:
1. Excessively narrow backchaining / insufficient interest in both the world, and our workshop guests
I was scared about AI risk, all the time. I was in an emergency. And while I did try at the workshops to drop all that for a bit and take an interest in the people in front of me, I was also at the workshops to “make progress” on the AI risk stuff.
So, my notion of which participants were the coolest (most worth paying attention to, inviting back, etc.) was mostly:
(As opposed to, say, who had neat make-stuff skills or research patterns we didn't have, that might broaden our horizons; I was too tired to really see or care about such things.)
2. Nudging the CFAR alumni culture toward #1, so our community also became narrower
I, and other CFAR staff, weren’t the only ones who evaluated coolness a bit too narrowly, by my present taste. I think I and others in positions of community leadership often helped set this up in various ways.
(As a contrast point, the 2007-2011 OvercomingBias commenter and meetup community had broad and deep engagement without being a “school of thought” in the way the CFAR and LW rationalists later were, IMO.)
3. Trying to do something “to” our guests; priming our guests to want something done to them.
Many of our guests signed up for the workshop so that we could help make them more rational so that they could be better EAs (for example). And we wanted them there for much the same reason (sometimes; some of us).
4. Casting ourselves as having more epistemic authority or charisma than I currently think warranted.
Deeply related to #1, 2, and 3 above.
I’m relieved that our Nov 2025 workshop (and our prior, tiny pilot at Arbor Summer Camp) did not have these problems, AFAICT. Things I saw in November that I believe I’d see differently if we still had these problems:
This is enormously relieving to me; uncertainty about whether we could change this thing was my main reason for hesitating to run CFAR workshops. We will of course still be keeping our eyes out.
More workshop iffy bits
While the “power over” thing was the iffy bit that bugged me the most, there are also other things we want or need to change about the workshop. You can see our whole workshop-related bugs-and-puzzles-and-todos list here.
More about the new workshop
If you’ve been to a CFAR workshop in the ~2015-2020 era, you should expect that current ones:
Further detail, if you want it, at More details on CFAR’s new workshops.
Larger contexts surrounding our workshops
In this section, I’d like to talk about the larger contexts (in people, or in time) that our workshops depend on and contribute to, as well as some solved and unsolved pieces about those larger contexts.
aCFAR’s instructors and curriculum developers
Our major change, here, is that all instructors and curriculum developers are now very part-time. (In 2012-2020, most workshop instruction and curriculum development work was done by full-time staff.)
There are two big reasons I made this change.
A pleasant bonus is that we get more mileage per donor dollar: a few hours/week of trying our units on volunteers and on each other is enough to keep CFAR in our shower thoughts as we go through the week (for me, and for many other instructors AFAICT), and the rest of our normal life seems then to give us useful insights too. (And we’re paid hourly, so a "lighter" schedule that still gets curriculum development flowing is a good deal for donors!)
aCFAR’s alumni community
Our workshop development process is stronger with a healthy alumni community in several ways:
Our alumni community was extremely fun and generative in CFAR’s early years, but gradually became less invested and lower trust over time, partly as a natural side-effect of passing years, and partly because we weren’t doing community all that well. We still have an alumni mailing list and it hosts some interesting discussions, but things there feel less active and exciting than they once were.
We like our alumni and think they’re cool! We’d like to figure out how to freshly kindle some of the energy that made the old CFAR alumni community as cool a place as it was.
My guess (not a promise) is that we should start a new alumni community with these features:
Lineage-crediting and gatekeeping
It is vital to accurately, publicly track where good things come from (lineage-crediting). At the same time, it is necessary not to let people into our events or alumni networks who we can’t deal with having there. This combination can be awkward.
As an example of this awkwardness: Michael Vassar taught me and many people a bunch about rationality when I joined the rationalist and AI safety scene in 2008, and he was also quite involved in me changing my mind about the stuff I mentioned changing my mind about in 2020. I can see traces of his ideas all over this post. My thoughts in this post, and the ideas in the newer parts of CFAR, were also greatly influenced by my good friends Anonymous and Anonymous.
And yet, for varied reasons, I wouldn’t feel good about having any of those three visit an intro CFAR workshop (although I might well invite Michael Vassar to an alumni reunion or similar event, where my tolerances are a bit broader; and I’d gladly have all three to a retreat run by a more bespoke CFAR spin-off called LARC/Bramble). I think this is not unusual bad luck; my best guess is many of those who “woke up” as kids in strange surroundings and who forged their own paths to being unusually conscious and agentic, dodged some of the “be rule-abiding” training that makes most middle class people easy for other middle class people to predict and be safe around. And the CFAR alumni network is a large, semi-institutional context designed to work okay for folks who are within the normal range on rule-abiding and who are used to getting to assume others are too, for good reason. (To be clear, I also learned a pile of rationality from many others, most notably Eliezer, who are reliably rule-abiding.)
This sort of “awkward” isn’t only costly because of wanting not to alienate my friends. It’s also costly because it’s confusing (to me, to them, and to workshop guests and onlookers). When rationality content is presented within a context that couldn’t have made that content and that doesn’t help tend the sources of that content, it’s harder to set up good feedback loops. (Cf. the Caring that Tends its own Sources).
But, here I am, anyhow, having decided that this is the best world I can manage, and trying to describe something of its workings in public.
My plan, roughly, is the obvious one:
Michael “Valentine” Smith
While we are on the topic of both gatekeeping and lineage-tracking: we are considering bringing CFAR co-founder Michael “Valentine” Smith back onto our workshop staff.
I’d like to note this publicly now, because:
If it matters, I and various others have worked closely with Valentine at LARC/Bramble (CFAR’s more bespoke spinoff organization) for the last two years, and I have found it comfortable, wholesome, and generative.[8]
The broader rationality community
The broader rationality community makes our work at aCFAR feasible (e.g. via donations, via sending us participants who are already rationality fans, via giving us good rationality stuff to draw on, and via good critiques). We are grateful to you guys. It’s important to me that we give back to you, somehow, in the long run. My main current theory as to how to give back is that we should write substantive blog posts as our theories-of-rationality congeal, and should make our process open so if we fail this time, it’ll be easier for interested parties to see what exactly went wrong (no opaque fizzles).
Flows of money, and what financial viability looks like within our new ethos
We do not yet have a demonstrated-to-work plan under which aCFAR (in our new incarnation) can be financially sustainable.
In 2012-2020, a large majority of our donations came from AI risk donors, who donated because CFAR recruited for MIRI (or to a lesser extent other AI safety efforts) or because they otherwise believed we would help with AI risk.
Also, in 2012-2020, a significant chunk of our workshop revenue came from EAs (both AI risk people and EAs more broadly) who had heard that CFAR workshops would somehow make them better EAs, and perhaps also that CFAR itself was an EA organization worth supporting. And so they balked less at the (then) $3.9k price tag because it was parsed as an EA expense.
Double also, in 2012-2020, we workshop instructors broadly tried to position ourselves as people who know things and can give that knowledge to you (and so are worth paying for those things).
My current attempt at CFAR branding lets go of all three of these angles on “you should give us money,” in favor of an ethos more like: “we (including you, dear workshop guest) are a community of people who love to geek out (in a hands-on way) about a common set of questions, such as:
Under this model, CFAR instructors differ from workshop guests in that we spent a bunch of time testing and refining particular classes (which we try to make into good springboards for doing hands-on geeking out of this sort in common, and so for jumpstarting guests’ ability to have rich conversations with each other, and to do rich, grounded noticing together, and to point out traction-creating things that are visibly true once pointed-to). But we try not to differ in perceived/requested epistemic status, or in “you should believe us”-flavored social cues.
Also, under the new model, our requests aren’t backed by a claimed long-run EA payoff: we are not saying “please consider sacrificing parts of your well-being to work at CFAR, or to attend CFAR or implement our taught habits, because it’ll help with AI risk somehow.” Instead we are saying “please come nearby if it interests you. And if you like what happens next, and what changes it seems to give you in the observable near- and medium-term, then maybe keep trying things with us for as long as this seems actually healthy / rewarding / to give good fruits to you and visible others in a simple, cards-on-the-table way.”
I expect our new model is more wholesome – I expect it’ll bring healthier feedback loops to our curriculum and culture, will form a healthier town square that is more fruitful and has fewer stuck beliefs and forcefully propagated illusions, and will be an easier context in which to keep us staff wanting to share most info in public, including evidence we’re wrong. But I don’t know if it’ll bring in enough revenue to keep us viable or not. (And we do still need money to be viable, because being a custodian of such a community requires staff time and money for food/lodging/staff flights/etc.)
If we can’t make a financial go of things under our new ethos, my plan is not to revert to our past ethos but to fold – though my guess is we’ll make it.[9]
How our ethos fits together
In this section, you’ll find pieces of what motivates us and principles we intend to follow.
Is aCFAR aimed at getting AI not to kill everyone? If not, why are you (Anna) working on it?
We are not backchained from “help get the world into state X which’ll be better for AI,” nor from “help recruit people to AI safety work,” “help persuade people to take better AI policy actions,” or anything like that.
My (Anna’s) motivations do and don’t relate to AI safety; it’s complicated; I’ll publish a separate post going into detail here in about a day.
Principles
This is an attempt to make visible the principles that I, and to some extent CFAR, are trying to act on in our CFAR work. I, and we, might change our mind about these – these aren’t a promise – but I plan to review these every three months and to note publicly if I change my mind about any (and to note publicly if CFAR changes leadership to someone who may run on different principles).
We’ll start with some short-to-explain ones, then head into some long explanations that really should be their own blog posts.
Truth is crucial
This principle is one of the “things that go without saying” around LessWrong most of the time (and is shared with past-CFAR), but it’s precious.
Honor who-ness
Remember each person is a miracle, is way larger than our map of them, and is sustained by knowledge and patterns of their own making. Honor this. Allow ourselves to be changed deeply by the knowledge, patterns, character, etc. of anyone who we deeply change.
Stay able to pivot or shut down, without leaving anybody in the lurch
It’s easier to stand by principles if there’s a known and not-too-painful-or-commitment-breaking path for quitting within a few months (should we prove unable to stick by these principles while remaining solvent, say).
Serious conversation, done in hearty faith
This section is written by my colleague John Salvatier.
Serious conversations deal with the real issues at play and go beyond literary genre patterns. And serious conversations in hearty faith apply enough real human trying to get to real discovery about the topic.
Serious discussions of problems we really care about, in which the participants are fully engaged, are kind of a miracle. For example, if you’re wondering whether to quit your job, a serious and hearty conversation about the question, and about what matters to you in life, can have a profound effect on your life.
At this CFAR, we are trying to have hearty faith with each other and with others to create the possibility of serious conversations. (And we are trying to do this without forcing, via repeatedly asking ourselves something like: “does it feel good to share my real cruxes right now, and to hear where [person] is coming from? If not, what sensible reasons might I have for not (bearing in mind that there’s lots of useful stuff in me that conscious-me didn’t build)?” We aren’t trying to impose hearty faith; we’re taking its presence as a thermometer of whether life is going well right here.)
Serious conversations are like science experiments. Their success is measured not by whether they reach a particular outcome, but by whether they reveal substantial things about the world that bring us into closer contact with it.
The classic Eliezer/Robin AI Foom Debate is a good example of something that might look like a serious conversation but somehow isn’t a “conversation” in quite the sense we mean. A conversation would spend a bunch of time doing asymmetric things where one person is mainly trying to understand the other (for example passing their ITT). Instead, Eliezer and Robin each use each other as a foil to better articulate their own view. This might be serious research, or good exposition to an audience, but it isn’t the thing we have in mind.
Hearty faith is necessary for successful serious conversations when our maps (or theirs) have messy relevance to the world and our goals. Which they will when the topic is a life frontier or a world frontier.
Hearty faith is different than just good faith.
Bad faith is lying, fraud. An abandoning of our integrity.
Lousy faith, however, is when our intentions are like a thin stew instead of a hearty, many-flavored, full-bodied one. In “lousy faith” we are putting in effort to keep integrity on some dimensions, but not very many.
Hearty faith, by contrast, is when we act with attention to many sorts of integrity all at once (the more, the heartier, like a hearty stew).
Hearty faith is necessary for serious conversations with messy world maps to succeed, because every such conversation has many relevant-but-illegible layers that would otherwise stay obscured; hearty faith legibilizes them, letting those layers into the conversation on good terms.
The caring that tends its own sources
This is a phrase I made up, inspired by Eliezer’s The Lens that Sees its Own Flaws (which is one of my very favorite Eliezer posts, and conveys an idea that’s on my shortlist for “most inspiring insights ever”) and also by conversations with my friends Evan McMullen and Anonymous.
I hope to eventually write a blog post about this principle that makes sense. But this is not that blog post, it is a placeholder.
So: we find ourselves alive, awake, caring. How did I, or you, reader, get to be like this? It’s a bit of a miracle. We can tell decent causal stories (mine involves my parents, their parents, the United States, a brief opening in Hungary’s border during a war, my mom’s careful crafting of endless ‘math games’ for me, my dad’s absorbing a useful secularism from the Soviet Union that he rightly hated… going further back we have the European Enlightenment, eons of biological evolution, and more). We can tell decent causal stories, and it’s worth bothering to tell them, and bothering to try to get it right; and at the end of the day “a miracle” is still a decent term for it – the processes that let us be here are something large, and worth marveling at, and contain deep generative “magic” that we don’t yet know how to build.
How to relate to this?
Concretely:
No large costs without a feedback loop grounded in earned knowledge and caring
This principle is an attempt to articulate the main thing I changed my mind about in 2020.
It now seems to me that when you’re running an organization, such as aCFAR or the neighborhood bakery, you’ll benefit if you:
1. Take an interest in what produces and sustains these resources. Be aware of the rough extent to which you do or don’t have reliable maps of what’s involved in producing and sustaining these sources, so you can maintain the needed amount of [respect / Chesterton’s fence / actively watching out for needed conditions you shouldn’t disrupt], without being unduly cautious about everything.
For example, I understand how to turn hot water and peppermint teabags into peppermint tea. (Thus, I can change up my water heating method, its temperature, etc without being surprised by the results.)
On the other hand, my friend sometimes likes to walk his dog with me. I’m pretty sure there’s detail to where he will/won’t take his dog, when he does/doesn’t feel like doing it, etc., and I’m pretty sure that detail helps maintain cool functionality, but I also know I don’t know how it all works. Thus, I know that if I try making many of these decisions for my friend, without consulting him, I might mess up some resource he’s used to counting on.
2. Take an interest in the specific “bridging structures” that let particular resources coexist.
For example, a coaster is a good “bridging structure” to keep my hot teacup from damaging my wooden table.
For a more complex structure, a bakery’s proprietor might be careful to keep their sidewalk shoveled, to greet neighboring business owners, etc. as part of a plan to allow the bakery and the downtown it’s in to avoid harming each other. This kind of bridging structure will need to regularly take in new info, since one probably can’t have an adequate static map of downtown as a whole.
3. Let each resource-flow and each bridging structure have a keeper who maintains both an inside view about what’s necessary for sustaining the resource flow and an inside view about how much “magic” isn’t yet in their map.
That keeper must be responsible for deploying these resources only in ways that make inside-view sense to them (e.g., if there’s a small experiment, the keeper should have felt hope in doing small experiments; if there’s a large deployment, the keeper should have felt conviction that large deployments of this sort bring fruit).
That keeper must also have enough eyes on the results of that deployment that they can update sensibly.
I’ll spell out what this means in the case of CFAR, and then explain why I care.
What this means in the case of aCFAR:
This CFAR makes use of three main resource flows:
We want all these resources used in ways where their keepers have grounded reason to think it’ll help with something they care about (and have feedback loops for checking).
Concretely, I’m aiming for:
Staff and volunteers have better lives (or not-worse lives) via our involvement with CFAR, including in the short- and medium-run
In CFAR of 2012-2020, many of us sacrificed for CFAR – we e.g. worked 60+ hrs/week, had distorted social patterns with folks in the rationality community, and otherwise paid (and sometimes caused) large costs. I’d like to arrange our culture so that people don’t do that this time around. I want us to each be simply, groundedly in favor of what we’re doing, without trusting in long-term unseen effects on the post-AGI future or anything else.
(Here and elsewhere, it’s fine if staff and volunteers sometimes try things that hurt us. The principle isn’t “no costs” or “no one made worse-off ever.” It’s rather “no key resource flows, ones that CFAR is reinforced by and grows around, that make people worse-off.” One-off “ouches” are part of how we locate what works, and are fine as long as we update away from them instead of learning to depend on them.)
Participants try aCFAR’s suggested habits based on their own inside views (not our charisma or claimed knowledge)
Some participants have historically shown up to the workshop expecting to be told what to do by people who know the answer. But I want us to resist this pressure, and to create a culture of “practice trusting your own judgment, and making many small experiments while seeing yourself as the author and experiment-iterator for your life and habits.”
Donors
I go into much more detail on this one in who I hope does and doesn’t consider donating.
Why this principle
I’m afraid that otherwise we’ll do a bunch of hard work, at large costs, that nets out to “harmful, on average, after considering opportunity costs.” I’m also afraid that all that work won’t even teach us much because, for most of it, there was no conscious human who individually thought it a good idea. (This is coming out of my 2012-2020 experiences.)
To spell out my thinking:
First: people often learn more by making their own mistakes than by “making other people’s mistakes.”
This is easiest to see if we consider a concrete context such as chess. If I play chess from my own inside view, I will repeatedly make moves that look like good ideas to me – and then my opponent will often show me how exactly my inside view was wrong by exploiting my errors. If I instead play chess by repeatedly trying moves my friend thinks are good, I’m likely to learn less, because my friend’s moves aren’t rooted in a detailed inside-view lodged in my head.
There are exceptions – maybe my friend has a Cool Chess Trick that I can understand once I try it, and that wouldn’t have occurred to me on my own – but these exceptions work when they’re somehow supporting an existing, intact flow of my own autonomous choice.
Second: I don’t want to build habits or culture (in our alumni) that’ll be easy for cult leaders or others to exploit.
If workshop guests practice deferring to us about what weird things to do with their minds – especially if they do so for extended periods, based on wispy claims about long-term payoffs, e.g. “this’ll help with AI risk somehow” – this risks setting some up to later try deferring to people running more obviously unhealthy cults. I speak from experience.
I also hope a culture of “remember the buck stops with you; check whether it is producing fruits you directly feel good about” may help with the rationalist community’s tendency to enable AI companies. But this is only a hope.
Third: I want good hygiene near CFAR and the rationalists / I don’t want to leave metaphorical rotting meat on our kitchen counter.
If you’ll pardon a metaphor: having living, healthy humans in a kitchen is mostly fine, hygiene-wise. Having a large slab of unrefrigerated meat sitting in the kitchen (no longer alive, and so no longer tied in with a living organism’s immune system) is a hygiene problem, especially after a while.
I suspect that if we have “living resource flows” across CFAR, the memes and habits and culture-bits that survive and spread here will mostly be good for us and others. I suspect by contrast that if we have ungrounded resource flows (i.e., if we ignore this principle), we’ll risk breeding “parasitic memes” (or people) that are optimized to use up all the free energy in the system and that don’t tend to the conditions required for healthy life.
I mean it
If we can’t hit this principle (or the truer spirit behind it), my plan is to either figure out how to hit it, or close CFAR.
(Although, here as elsewhere, I may revise my views; and I’ll update this post if I do; these principles are not permanent promises.)
Some principles you might assume we have that we don’t have:
Why we need your support / some cruxes for continuing this CFAR
There’s a sense in which we don’t need anybody. I could sit in my room, call myself an “applied rationality researcher,” and write things I called “rationality exercises” on paper or something.
But if we’re going to do something that’s not pretend, then we need people. And we need to find a way that there’s something in it for those people – a resource flow that gives back to them. (Otherwise, it’s still pretend.)
Why ask for donations?
We’re asking for donations because it takes money to run CFAR. If there are enthusiastic people out there who are willing and able to help fund us, that’ll both help a lot and seem wholesome. We aim to find a set of people who want the kind of practice we are building, and who want to build it, believe in its possibility, and try it together.
If nobody donates, we’ll likely continue; in extremity, we could e.g. sell our Bodega Bay venue, which would give us a few years’ operating expenses at our current, fairly minimalist budget. (That said, we love our venue and don’t want to sell it; more on that later.)
But if nobody donates and nobody cool wants to kibitz and all the people who try our workshop kinda want their time back and so on, of course we quit. Our main business in interacting with the community is to find a way to do cool stuff, via resources from some of you, in such a way that everyone’s glad. I suspect, but am not sure, that getting some donations from some of you is part of how to build the good, living center we are seeking.
Some disagree with us, and we’re doing this anyway
It is not the case that everyone who’s had much contact with past-CFAR believes resuming workshops is a good idea.
In particular:
There were also cheers: a sizable majority (at least of those I heard from) offered enthusiasm, well-wishes, “I’m glad there are again CFAR workshops where I can send my friends,” “I missed you guys,” etc. Former CFAR instructors Renshin (aka Lauren Lee) and Adam Scholl did this in the public comment thread. And I of course landed solidly at “yes, I want this enough that I’m willing to put in real effort.”
But I want to acknowledge that some disagree, for a few reasons:
Let me elaborate on (c): Back in 2014-2020, I would freak out whenever some serious thread of public conversation cast doubts on CFAR. I’d do this because I knew I needed CFAR staff’s morale, and I believed (accurately, I think) that many would lose their morale if even a small vocal minority said we were doing it wrong.
I believe our morale is somehow stabler now. (Perhaps partly because we factored aCFAR’s believing in’s out separately from our epistemics, and also because we’re a particular experiment we each want to do rather than a claim about the ‘objective best’).
I care about (c) for several reasons, but one is that I want good but imperfect institutions to exist in our present world, and to do this without suppressing news of their failures. Many of the previous decades’ institutions are gone from the world of 2025.[10] I think this is in significant part caused by the combination of:
Also, I put real effort into dismantling parts of my and CFAR’s positive reputation that I believed were false or ill-founded, and I did that partly because I didn’t think we could build something good near CFAR before that stuff was dismantled. Having completed that step (as I see it), I am eager to see what we can build on the new, partially razed ground.
Donations
Our finances
We currently have about $129k available for CFAR and its projects, which gives us about four months of runway.
To make it comfortably to the end of 2026, we think we need about $200k of additional donations (counting donations into this fundraiser, any SFF funding, and any other donations, but not counting workshop payments or venue rental revenue). We expect we will probably get some money from SFF (likely in the form of matching funds, in about a week), and so are setting a “basic target” of $125k and a “reach target” of $200k (as we can do more with more).
For more detail on that, see this breakdown:
General Costs
CFAR has ongoing general administrative costs – accounting, staff wages for administrative tasks, and so on. We think this will cost us about $72,000 for 2026. This is a very significant decrease from e.g. 2019, as CFAR is running with a smaller and leaner staff and no longer maintains office space.
Venue
We maintain an event venue in Bodega Bay, California, which we also rent out to other groups. This venue is both one of our primary expenses and also a source of revenue. Since 2020, the venue has been a significant net expense as we have run fewer programs there and not had many bookings. However, we now have venue caretakers who are sprucing the place up, figuring out what outside groups are looking for in a venue and how we can hit it, etc. We also expect to use our venue for more CFAR programs than we have been in the past few years.
For 2026, we estimate that we will likely have total venue costs of about $285,000. This is primarily mortgage payments, utilities, various maintenance/repair/”venue caretaking” work, and property taxes, although it also includes supplies for programs held at the venue. We also anticipate bringing in approximately $200,000 of revenue from outside bookings (after deducting cleaning fees), as well as using the venue for our own programs, hosting some staff meetings there, and so on. The savings from our own programs there are difficult to calculate but would likely be in the tens of thousands of dollars, perhaps $35,000 to $65,000 or so across 2026.
This means we anticipate the venue will on net cost us something like $20,000 to $50,000 for 2026, depending on how many programs we end up running there, how many outside workshops we hold, and what other costs we may incur. This is not ideal but we consider it a cost worth bearing for now, and in the long run we hope to run more programs there ourselves and bring in more outside bookings such that the venue ends up breaking even or being financially positive for CFAR.[11]
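If you want to kibitz on the arithmetic, here’s a minimal sketch of the venue math using only the figures above (the program-savings range is our rough estimate, not a measured number):

```python
# Rough 2026 venue arithmetic, using only the figures stated above.
venue_costs = 285_000                 # mortgage, utilities, maintenance/repairs, property taxes, supplies
booking_revenue = 200_000             # anticipated outside bookings, net of cleaning fees
program_savings = (35_000, 65_000)    # rough estimated savings from holding our own programs there

# More savings -> smaller net cost, so the bounds swap:
net_cost_low = venue_costs - booking_revenue - program_savings[1]   # 20_000
net_cost_high = venue_costs - booking_revenue - program_savings[0]  # 50_000
print(f"Estimated 2026 venue net cost: ${net_cost_low:,} to ${net_cost_high:,}")
```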
Workshops
Workshops are both a source of revenue and a significant cost for CFAR to run. Generally speaking, workshops gain or lose money based on how many staff members and participants are involved and how much financial aid we do or don’t offer to participants; a workshop with twenty-five participants paying full price would be profitable, while workshops with fewer participants and/or more financial aid may well lose money for CFAR on net. For instance, our November workshop ended up approximately -$28,400 on net.
In 2026, we currently anticipate running about four mainline workshops (one Jan 21-25 in Austin, TX and three yet to be announced). The workshop in Austin will incur venue costs that workshops held at our venue won’t. Insofar as the workshops otherwise have overall similar costs and revenues as the November workshop, we will probably be net minus ~$130,600 from workshops.[12]
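For those who like to check sums, here’s a back-of-the-envelope sketch of that ~$130,600 figure; note that the ~$17k attributed to the Austin venue below is simply backed out from the stated totals, not a separately estimated cost:

```python
# Back-of-the-envelope check on the ~-$130,600 projection, assuming each of the
# four 2026 workshops lands near the November workshop's net result.
november_net = -28_400
workshops_planned = 4
baseline = november_net * workshops_planned   # -113_600

# The remaining ~$17k is the Austin venue cost implied by the stated total
# (backed out from the arithmetic, not a figure from our budget):
austin_extra_venue = -17_000
total = baseline + austin_extra_venue         # -130_600
print(f"Projected 2026 workshop net: ${total:,}")
```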
We are excited to run these workshops even at a potential loss. In addition to being helpful to the participants, running workshops greatly aids our efforts to develop and refine an art of rationality. (In the long run, if our programs are any good, we should be able to fund the workshops more fully from those who attend, which will make for better feedback loops, though we may want ongoing exceptions for students / folks without much money and for folks who are coming mostly to aid rationality development work.)
We also think that workshops benefit people beyond those who attend directly – some workshop attendees have gone on to teach others concepts like double crux and other CFAR techniques, and we think running workshops provides significant value for these “grandstudents”[13] as well.
In the past, CFAR has even offered some workshops for free – for instance, the four workshops we ran in the Czech Republic during autumn 2022 were entirely free to participants. However, the overall state of the funding environment was different when those programs were being planned, and offering free mainline workshops currently seems imprudent.
Curriculum Development
In addition to the above costs, we also pay staff for general curriculum development outside of workshops – research into various aspects of rationality, work on new techniques, running test sessions where we try new material on volunteers, and so on. We project something like $25,000 in costs for this area in 2026, though this is somewhat speculative.
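Putting the categories together, here’s a rough reconciliation sketch against our fundraising targets (it treats the figures above as given and ignores timing, SFF matching mechanics, and the aspirational items below):

```python
# Rough reconciliation of the stated 2026 cost figures against the fundraising targets.
general_admin = 72_000
venue_net = (20_000, 50_000)       # estimated net venue cost range (see Venue above)
workshops_net = 130_600            # projected net workshop cost (see Workshops above)
curriculum_dev = 25_000            # "somewhat speculative"

total_low = general_admin + venue_net[0] + workshops_net + curriculum_dev    # 247_600
total_high = general_admin + venue_net[1] + workshops_net + curriculum_dev   # 277_600

available = 129_000     # current funds (~4 months of runway)
basic_target = 125_000
reach_target = 200_000

print(f"Projected 2026 net costs: ${total_low:,} to ${total_high:,}")
print(f"Available + basic target: ${available + basic_target:,}")   # 254_000
print(f"Available + reach target: ${available + reach_target:,}")   # 329_000
```

On these numbers, the basic target roughly covers the lower end of the projected costs, while the reach target leaves buffer for the aspirational items below.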
Aspirational
In addition to the core categories mentioned earlier, there are various other projects that CFAR would like to be able to spend money on but currently is not.
For instance, in the past CFAR has supported “telos projects” – a program where CFAR provided funding for rationality-related projects that felt relevantly alive to people. In 2025, we had a few legacy projects in this area but are not soliciting new applications for telos funding; in a world where we had better funding we would like to reopen the program and use it to help new alumni run cool projects, including infrastructure for the new alumni community.
We would like to be able to pay me (Anna) to write various LessWrong posts about concepts CFAR has recently been working with, but are currently holding off on that. We would also like to (slowly, slightly) grow out our staff of curriculum developers and to modestly increase staff wages if we can.
Who I hope does, and doesn’t, consider donating
As mentioned earlier in this post, I’d like to build toward a world in which aCFAR’s donations come from, and with, the right kind of feedback loops.
I’m particularly cheerful (happy, relieved, joyful, grateful) about donations stemming from any of:
There’s something in here that you personally are rooting for, and you’re moved to root for it harder, with your dollars, so it can really be tried. (Like a home team or a city or a project in which you have pride and have/want membership)
The more dollars you deploy here, the more I hope you have some heart to spare to come along with your dollars, as “I care about this, and I’ll be kibitzing from the sidelines, and updating my total view of life based on how it goes, with enough context that my kibitzes and updates will make sense.” (The more of your dollars you deploy here, the easier we’ll try to make this “kibitzing from the sidelines” for you, if you’re willing.)
I’m particularly wary of donations stemming from:
To be clear, EA is an excellent way to donate; I’m glad some people donate this way; there’d be something seriously wrong with the world if nobody did this. But it’s not what this CFAR is doing. (More on this above.)
And in my opinion (and Michael Nielsen’s in this podcast with Ajeya Cotra, if you want a case at more length), there’d be something even more wrong with the world if most resource expenditure flowed via EA-like analysis.[15]
Another reason people sometimes used to donate, which IMO doesn’t apply to us today and so would not be a good reason now:
And a couple other reasons to donate:
Ways to help CFAR or to connect to CFAR besides donating:
There are several good ways to help CFAR financially besides donating. You can:
There are also a pile of ways to help this CFAR and our mission non-financially. (Most of the resources we run on are non-financial, and are shared with us by hopeful rationality fans.) Basically: kibitz with us here, or in a test session, or at a workshop. Attending a workshop often helps even if you come on full scholarship, as having varied, cool participants makes our workshops more perspectives-rich and generative.
For bonus points, maybe come to a workshop and then write up something substantial about it on LessWrong. (Scholarships are available for this purpose sometimes.)
Perks for donating
If you donate before Jan 31, you’ll also get, if you want:
Also, if there’s something in particular you’d like CFAR to be able to do, such as run workshops in a particular city or run an alumni event focusing on a particular component of rationality, and you’re considering a more substantial donation, please reach out (you can book a meeting via calendly, or email donate@rationality.org).
To the conversation!
Thank you for your curiosity about CFAR, and for reading (at least some of) this post! I hope you introduce yourself in the comments and that – if you end up donating (or kibitzing, or attending a workshop, or getting involved with us in whatever way) – it ends up part of a thing that’s actually good for you and the contexts you care about. And that you and we learn something together.
Yours in aspiring rationality,
Anna and aCFAR
[1] 'aCFAR' stands for “a Center For Applied Rationality.” We adopted the 'a' part recently, because calling ourselves 'the' Center for Applied Rationality seems obviously wrong. But feel free not to bother with the 'a' if it’s too annoying. I personally say 'a' when I feel like it.
[2] One of the best ways to get to know someone is to team up on something concrete; kibitzing on a current CFAR stuck point is my suggestion for how to try a little of that between you and aCFAR.
[3] Thanks to Davis Kingsley, John Salvatier, Paola Baca, and Zvi Mowshowitz for writing help. (Particularly Davis Kingsley, who discussed practically every sentence, revised many, and made the whole thing far more readable.) Thanks to Jack Carroll for photos. Thanks to Zack Davis and Claude Code for creating the thermometer graphic up top. Remaining errors, wrong opinions, etc. are of course all mine.
[4] My mom wanted to teach her kids math, so we could be smart. And I wanted… to be like her… which meant I also wanted to teach myself/others math! :) (Rather than, say, wanting to learn math.) Rationality education gives me an even better chance to see the gears of thinking/updating.
[5] This overcounts a bit, since the number is based on totaling the attendee counts of many different programs and some people attended multiple programs; the number of unique individuals who attended CFAR programs is lower.
[6] EA spaces were receiving large influxes of new people at the time, and I hoped CFAR workshops could help the EA and rationality communities assimilate the large waves of new people with less dilution of what made these spaces awesome. (Lightcone has mostly taken over the “develop and spread useful vocabulary, and acculturate newcomers” role in recent years, and has done it spectacularly IMO.)
[7] Unlike some bands, we didn’t have substance abuse. But, like traveling bands, we traveled a lot to do high-intensity soul-bearing stuff in a context where we were often exhausted but “the show must go on.” I believe many of us, and many of our working relationships, got traveling-band-like scars. Also, we had ourselves a roster of potentially-kinda-invasive “CFAR techniques”; in hindsight some of our uses of these seem unwholesome to me. (I think these techniques are neat when used freely by an autonomous person, but are iffy at best when used to “help” a colleague stretch themselves harder for a project one is oneself invested in.)
[8] There would still be many details to sort through. E.g., CFAR is aiming to be an unusually low-staff-charisma organization in which staff suggest exercises or whatever to participants in ways that’re unusually non-dizzying; Valentine’s native conversational style has a bit more charismatic oomph than we’re aiming for. But I love the idea of collaborating with Valentine on stuff about memes, PCK-seeking, what sorts of systematicity might allow decent epistemics, etc. I also like the idea of having one more person who’s been around from the beginning, and has seen both CFAR’s early generativity and our failure modes, keeping an eye out.
[9] We would also try to find other ways to make money, and tinker/brainstorm broadly.
[10] For instance, mainstream media and academia both have much less credibility and notably less money, the ACLU lost most of its vitality, many of the big organizations in EA space from 2015ish have either ceased to do much public leadership there or ceased existing altogether, and I would guess the trends in Bowling Alone have continued, although I have not checked.
[11] It’s unlikely this would look like the venue generating more than its costs in direct booking revenue; rather, the combination of booking revenue and cost savings for our own programs would exceed the costs of operating and maintaining the venue. Additionally, we think the venue gives us a bunch of spirit and beauty, saves a bunch of staff time on logistics for each workshop we hold there, lets us support LARC and other groups we care about, and makes it easier for us to consider possible large expansions to our programs.
[12] There’s a lot of variability in what workshops end up looking like, and there’s some reason to believe later workshops may generate more revenue, but we’re using November here as the most obvious basis for comparison.
[13] A term coined by Duncan, meaning “students of our students,” which we continue to find useful in thinking about the impact of workshops and other programs.
[14] Lighthaven, the LW website, and other Lightcone-enabled social contexts are truly remarkable, IMO – one of the last bastions of general-purpose grounded truthseeking conversation on the internet. Many of you feel most at home there, and so should be sending such donations only to Lightcone. But some should perhaps put some or all of their ‘I want to support contexts that support people like me, or that support conversations I’ll feel at home near’ budget toward CFAR. Personally, I'm donating $10k to Lightcone and putting soul and work into aCFAR, and this leaves me personally feeling happier and more a part of things than if I were to skip either.
[15] Briefly: we humans are local creatures, and we probably create better things, that contribute more to the long run, if we let ourselves have deep local interests and loyalties (to particular lines of research, to particular friendships and communities, to particular businesses or projects we are invested in) without trying to always be doing the thing that would be highest-impact for a detailless agent who happens to be us, and without trying to always be ready to change our plans and investments on a dime. I admit I’m caricaturing EA a bit, but I believe the point holds sans caricature; I would very much love to discuss this point at arbitrary length in the comment thread if you’re interested.