CFAR update, and New CFAR workshops

by AnnaSalamon
25th Sep 2025
9 min read
53 comments, sorted by top scoring

[-]Davis_Kingsley · 20d

One other note is that CFAR has been holding some online test sessions recently while we develop new content for these workshops (and beyond!) -- if you're interested in checking out some of what we've been working on (potentially in a very unpolished form), you can sign up here to be on the test session mailing list!

[-]AnnaSalamon · 20d

TBC, these test sessions are a good place to volunteer an hour of your time if you want to help us with curriculum development (and we appreciate it!); but I think they're a lot worse than what the workshop will be like, and don't provide a very good idea of it; we are often eg interviewing people about their cognitive patterns, trying stuff we haven't tried before that mostly then does belly-flops, etc.

At workshops, IME, there's somehow a magic that comes from having left behind "work mode" and entering "retreat mode," where you and everyone you're seeing in person all have space at once for thinking through things freshly, and also where there's a critical mass of seeing new ways to make progress on bits of peoples' lives that they'd assumed "just have to be this way," such that people begin to look around with hope, and to notice and try things.

[-]Unreal · 17d

I benefited greatly from my CFAR experience, as a participant, a volunteer mentor, and as full-time staff. 

The main early benefit was a boost to my self-confidence and agency, the realization that I could endlessly work to resolve my own problems (physical, emotional, and mental) and that there is no end to self-improvement and overcoming one's own obstacles. I was deeply inspired by CFAR instructors as role models. 

As a mentor, I realized I could also help others with their problems, and often this meant getting out of the way and simply acting as a mirror. 

As staff, I gained some much-needed common sense and the ability to work with physical objects and places. I learned to merge with tools and the venue itself, in order to do operations. I became better at time (like being on time and learning how long it takes to do things). I learned to value my voice and became more openly disagreeable. And a lot more that I won't list. 

I believe I also gained deep insights and new ways of being that bolstered my path of truth-seeking with CFAR, which then led me to even clearer truth-seeking paths. 

If there's a piece of curriculum that I think would be a helpful addition, it's how to escape all forms of victim mentality. All the benefit of a place like CFAR seems like it could come crashing down with victim mentality in the water. The "drama triangle" is a helpful framework here. 

[-]Adam Scholl · 20d

I'm really excited to hear this, and wish you luck :)

My thinking benefited a lot from hanging around CFAR workshops, so for whatever it's worth I do recommend attending them; my guess is that most people who like reading LessWrong but haven't tried attending a workshop would come away glad they did.

[-]Duncan Sabien (Inactive) · 17d (edited)

Just noting for the audience that I would not recommend this, and would strongly discourage my friends and family members from going to it.  There's sort of nothing I can do about the fact that this is inherently rude, and clashes with the largely-positive tone of all the other discussion, but it feels rather important to represent this fact, especially as someone with slightly more context and grounded understanding than most.

(I have had zero interaction with CFAR since the end of my three years working there in 2019 and can make no confident predictions about the object-level experience, but I do not trust and indeed am substantially wary of the motives, methods, and competence of (some members of) the team creating the experience.  I would not want anyone I love to put themselves in a vulnerable state under the care of (some of) these particular people, as I have justified reason to believe that duty-of-care will not be reliably discharged.  I'll note that when plans for something like this first reached my ears circa 2022*, I directly told at least one of the people I'm concerned about that I thought they should absolutely not participate in anything like this, and that I expected they would cause harm by doing so (because they had repeatedly caused harm in the past) and should leave the project to others.)

Note also that the CFAR handbook exists and is free to all.

*edited; previously said 2021; I went back and checked the email that I sent; it was in June of 2022 and was not responded to.

[-]TsviBT · 9d

(I don't know to what extent I'm agreeing with Duncan's statements here. I separately wrote comments about something like this new CFAR, though I'm not sure how much overlap there is between this vs. what I was commenting about. One thing my comments brought up was that I felt earlier CFAR things were, besides being often very helpful to many people including me, also not infrequently damaging to epistemics for group-related reasons, which seems possibly related to the "status differential" thing Duncan alludes to downthread. My comments touched on how CFAR in general and maybe Anna in particular did not nearly sufficiently treat incoming people as authors of the group narrative, and that this creates lots of distortionary effects (see for example https://www.lesswrong.com/posts/ksBcnbfepc4HopHau/dangers-of-deference ). And, similar to what Duncan says, this felt to me like a hydra / whack-a-mole type problem, where multiple patches didn't address the underlying thing. Though, with that type of problem, it tends to be hard to accurately characterize (and judge and fix-or-endorse) the underlying thing.)

[-]Yulia · 17d

Can you say more about why you think duty-of-care will not be executed responsibly? There's probably a good reason why you didn't spell it out, so I don't expect an answer to the question. As someone without a lot of context, I am still curious!  

[-]Duncan Sabien (Inactive) · 17d (edited)

I think for reasons of not needlessly rending the social fabric, I don't want to be more specific.  I feel the need to register the warning, and I'm happy with people weighting or discounting it based on how much they trust my assessments generally, including the context that I worked at CFAR for three years and overlapped with much of the present staff.

[-]CronoDAS · 16d (edited)

Duncan was very unhappy with his experience and his treatment as a CFAR employee and his interactions with Anna Salamon in particular, including being pressured into doing things against his better judgment. If he prefers not to go into details, I'll respect his wishes, but after following him for a while I've got somewhat of a handle on what happened.

[-]JacekLach · 17d

What's the vulnerable state you're referring to here? Staying on site?

[-]Duncan Sabien (Inactive) · 17d (edited)

Thinking about thinking, tinkering with your mental and emotional algorithms, shaking up your worldview, adopting new perspectives and new strategies, spending a lot of time zeroing in and ruminating on your problems and goals and values and considering them in contact with other people and with suggestions about how to see them and think of them and change them.  Setting aside your normal ways of doing things.

Becoming more mud, in other words.

This is already inherently vulnerable, but it gets more so when you're doing it in an isolated retreat context surrounded by other people for multiple days in which there is a clear status differential between the instructors and the participants.

There are ways to do this that are more responsible and careful, and there are ways to do this that are less responsible and careful.  Separately, a person or group can have the intent to do such a thing responsibly and carefully, and this is not the same as being able to do this responsibly and carefully.  Some of these staff members are completely blind to some centrally important axes of care.

(If you've seen a person or group try for X and fail repeatedly in multiple novel ways despite multiple rounds of figuring out what went wrong and fixing it in each specific case, it's wise to be wary of their latest attempt at X. Sometimes people exhibit a curiously robust capacity to keep generating brand-new ways to get X wrong, and my desire to register a warning here is partially downstream of my belief that something like that is true, here.)

[-]Eli Tyre · 12d

Is your view that CFAR workshops in the past, specifically in the period when you were involved with them, were responsible and careful, but you expect this new generation to be worse? 

Or do you think that the past ones, in hindsight, actually weren't adequately responsible and careful? 

Or something else?

[-]Duncan Sabien (Inactive) · 11d (edited)

I came to discover, over time, that the orientation I had toward participants (and subordinates, for that matter), and the care that I felt I owed the people under my supervision—

(and which I believed were standard and universal à la something like the Hippocratic oath or the confidentiality standards of lawyers and therapists)

—were not, in fact, universal.  I directly observed that certain staff members did not reliably have the best interests of participants at heart (whatever their protestations to the contrary), but instead had various agendas (which were often not made explicit) which meant that the best interests of the participants might sometimes be second, or third, or even lower than third on the priority list.

i.e. I believe that the past ones, in hindsight, were not only not adequately responsible and careful but were in a crucial way not even trying to be, and I do not have reason to believe that this problem will be any less in an era where people like myself and Kenzi and Julia Galef and Dan Keys are not present.

(There is a rebuttal that might be made that goes something like "ah, well, those staff members have owned up to that very problem and are explicitly striving to do the other thing, now," but a) see the point above about not trusting people who have managed to fuck up X in multiple novel ways, and b) in the world where such a hypothetical rebuttal were in fact to be made, I wouldn't personally put much weight on the self-report of people who are saying that they used to be something like deceptive/manipulative (to the detriment of others) but don't worry, they aren't doing that anymore.)

> Why is it necessary? Do you think that you are the last potential wizard of Light in the world? Why must you be the one to try for greatness, when I have advised you that you are riskier than average? Let some other, safer candidate try!

These issues didn't seem to be a problem for 95+% of participants.  But I think many of my own friends and family members would feel differently about choosing to be vulnerable in the following two scenarios:

  • You are under the care of professionals who have your best interests at heart, and who have never abused their position of power to manipulate, deceive, or exploit people under their care
  • You are under the care of professionals who probably (95+% by raw numbers) have your best interests at heart, except in the rare (5-%) subset of cases where they think they can make some interesting use of you, which they may or may not be up front with you about, versus trying to twitch your strings to maneuver you into some position for the fulfillment of their own values, agnostic to yours
[-]Eli Tyre · 8d

FWIW, my guess is that there would be more benefit to the social fabric if you went into (some of) the details of what you observed, instead of making relatively high level statements and asking people to put weight on them to the extent that they respect your reputation.

(Hearing object-level details at least makes it easier for people to make up their minds. E.g. there might be various specific behaviors that you think were irresponsible that some readers would think were non-issues if they heard the details.

Further, sharing observations allows for specific points to be addressed and resolved. The alternative is an unanswerable miasma that hangs over the org forever. "Duncan, one of the people who used to work at CFAR, explicitly disendorsed the org, but wouldn't give details." is the kind of thing that people can gossip about for years, but it doesn't add gears to people's models, and there's nothing that anyone can say that can address the latent objections, because they're unstated.)

However, I also acknowledge that for things like this, there may be a bunch of private details that you are either reluctant to share or are not at liberty to share, and there might be other reasons besides, to say less.

But, insofar as you're willing to make this much of a callout post, I guess it would be better to be as specific as possible, especially as regards "malpractice" that you observed at CFAR workshops.

[-]Duncan Sabien (Inactive) · 8d (edited)

I claim to be as-aware-of and as-sensitive-to all of these considerations as you are.  I think I am being as specific as possible, given constraints (many of which I wish were not there; I have a preference for speaking more clearly than I can here).

[-]AnnaSalamon · 11d

If this is meant to be a characterization of my past actions (or those of any other CFAR team member, for that matter), I disagree with it. I did and do feel a duty of care. When I had particular agendas about eg AI safety recruiting that were relevant to my interactions with a participant in particular, I generally shared it with them. The thing I tried to describe as a mistake, and to change, was about an orientation to “narrative syncing” and general community set up; it was not about the deontology owed to CFAR participants as individuals.

[-]Eli Tyre · 7d (edited)

FWIW, this broadly matches my own experience of working with Anna and participating in CFAR workshops.

There were tensions in how to relate to participants at AIRCS workshops, in particular. 

These were explicitly recruitment programs for MIRI. This was extremely explicit—it was stated on the website, and I believe (though Buck could confirm) that all or most of the participants did a technical interview before they were invited to a workshop. 

The workshops were part of an extended interview process. It was a combination of 1) the staff assessing the participants, 2) the participants assessing the staff, and 3) (to some extent) enculturating the participants into MIRI/rationalist culture. 

However, the environment was dramatically less formal and more vulnerable than most job interviews: about a fourth of the content of the workshops was Circling, for instance.

This meant that workshop staff were both assessing the participants and assessing their fit-to-the-culture, while also aiming to be helpful to them and their personal development by their own lights, including helping them untangle philosophical confusions or internal conflicts.

These goals were not incompatible, but they were sometimes in tension. It could feel callous to spend a few days having deep personal conversations with someone, talking with them and trying to support them, but then later, in a staff meeting, relatively quickly coming to a judgement that they didn't make the cut.

This was a tension that we were aware of and discussed at the time. I think we overall did a good job of navigating it.

This was a very weird environment, by normal professional standards. But to my knowledge, there was no incident in which we failed to do right by an AIRCS participant, exploited them, or treated them badly.

The majority of people who came had a good experience, regardless of whether they eventually got hired by MIRI. Of those that did not have a good experience, I believe this was predominantly (possibly entirely?) people who felt that the workshop was a waste of time, rather than that they had actively been harmed.

I would welcome any specific information to the contrary. I could totally believe that there was stuff that I was unaware of, or subtle dynamics that I wasn't tracking at the time, but that I would conclude were fucked up on reflection, if it was pointed out to me. I can only speak from my personal perspective, not make a blanket claim about what happened. 

But as it is, I don't think we failed to discharge our deontological duty towards any participants.

[-]CronoDAS · 10d

People that disclose a conflict of interest usually aren't any less biased in practice than people who don't disclose the same conflict, even though they're generally perceived as more trustworthy. :/

[-]AnnaSalamon · 7d

That may be. I made my comment in reply to a previous version of Duncan's comment (he edited after) which IIRC said specifically that I didn't disclose conflicts of interest, and [some phrase I don't recall, that I interpreted as, that I had said I wasn't even trying to treat participants according to a certain standard of care]. That is false, and is the kind of thing I can know, so I commented with it. I don't mean to imply that disclosing a conflict makes a person neutral on the subject, or that first person judgments of intent are reliable.

I did spend a good deal of time since 2020 discussing errors from my past CFAR stuff, and ways I and we may have harmed things we care about, but it was more along the lines of [doing stuff that is kinda common, but that messes with the epistemics of groups, and of individuals who sync with groups, a la Tsvi's comment], not [skipping basic attempts to be honest, kind, respectful of expectations about confidentiality, mindful of a workshop participants' well-being and of likely impacts on it, etc]. I agree with Tsvi that the epistemics-while-in-groups stuff is tricky for many groups, and is not much amenable to patching, which is more or less why this CFAR reboot took me five years, why I tested my guesses about how to do better in a couple smaller/easier group workshop contexts first, and why I'm calling these new CFAR workshops "pilots" and reserving the option to quit if the puzzles seem too hard for us. Tsvi I think remains mostly-pessimistic about my new plans, and I don't, so there's that.

Regardless:

  1. Attending a workshop where groups of people think together about thinking, and practice new cognitive habits, while living together for five days and talking earnestly about a bunch of life stuff that one doesn't normally discuss with near-strangers.... is indeed a somewhat risky activity, compared to eg attending a tennis-playing weekend or something else less introspective. Many will come out at least a little changed, and in ways that aren't all deliberate. It's worth knowing this, and thinking on whether one wants this.

    A large portion of past CFAR participants said they were quite glad they came, including months and years afterward; and I suspect it was probably good for people on net (particularly people who passed through briefly, and retained independent cultural influences, I think?); but I also suspect there were a bunch of people who experienced subtle damage to parts of their skills and outlook and aesthetics in ways that were hard for them and us to track. And I expect some of that going forward, too, although I hope to make this less frequent via:
    1. respecting individuals more deeply
    2. having a plurality of frameworks that includes eg [falsification/feedback loops] and [pride in craftsmanship] and other stuff discussed in my OP rather than only [Bayesian updating + agentiness/goal-achievement]
    3. having a "hobbyist convention" vibe where our guests are fellow hobbyists and can bring their own articulate or inarticulate frameworks
    4. not myself being in "emergency mode" around AI risk (and being myself in something closer to a "geek out with people and try to be helpful and see where things go" mode, rather than in a super-goal-oriented mode), which I think should be better for not losing "peripheral vision" or "inarticulate-but-important bits of perception."
       
  2. One should not expect the CFAR alumni community to be only made of trustworthy carefully vetted people. We plan not to accept people who we suspect have unusually bad character; but I'm not that good at telling, and I don't know that we as a team are, either. Also, there's a question of false-negatives vs false-positives here, and I don't plan to be maximally risk averse, although I do plan to try some; guests and alumni should keep in mind, when interacting with any future alumni community, that strangers vary in their trustworthiness.
     
  3. I'm sometimes fairly skilled at seeing the gears inside peoples' minds, especially when people try to open up to me; and when things are going well this can be useful to all parties. But I've harmed people via trying to dialog with bits of their mind that weren't set up for navigating outside pressures, albeit mostly not mainline workshop participants, mostly in cases where I didn't understand ways they were different from me and so moves that would've been okay on me were worse for them, and mostly in contact with peoples' untempered urge to try to be relevant to AI safety which created a lot of fear/drive that might've been bumpy without me too (eg, folks who felt "I must continue to work at CFAR or MIRI or [wherever] or else my life won't matter," and so weren't okay with the prospect of maybe-losing a job that most people would've quit because of the pain or difficulty of it). I do think I'm better at not causing harm in this way now (via chilling out in general, via somewhat better models of how some non-me people work, and via the slow accumulation of common sense), but whether other people have grounds to believe me about this varies.

Is the above enough bad that the world would be better off without CFAR re-opening and offering workshops? IMO, no. CFAR ran sixty-ish multi-day events from 2012-2020 with close to two thousand participants; some things went badly wrong, many things were meh compared to our hopes (our rationality curriculum is fairly cool, but didn't feedback its way to superpowers); many things went gloriously right (new friendships; new businesses; more bay area rationality community in which many found families or other things they wanted; many alumni who tell me they learned, at the workshop, how to actively notice and change personal habits or parts of their lives that weren't working for them). 2025 is somehow a time when many organizations and community structures have shut down; and I think there's something sad about that and I don't want CFAR's to be one of them.

It seems good to me that people, including Duncan, want to tell their friends and family their views. (Also including people in top-level comments below this one who want to share positives; those are naturally a bit easier for me personally to enjoy.) A cacophony of people trying to share takes and info seems healthy to me, and healthier than a context where people-with-knowledge are pressured never to share negatives (or where people-with-knowledge who have positives are quiet about those in case CFAR is actually bad and they trick people).

I hope for relatively relaxed narrative structures both about CFAR and broadly, where peoples' friends can help them make sense of whatever they are seeing and experiencing, and can help them get info they might want, in public view where sensible, without much all-or-nothingness. (I don't mean from Duncan, whose honest take here is extremely negative, and who deserves to have that tracked; but from the mixture of everyone.)

[-]Duncan Sabien (Inactive) · 7d

Just noting for the audience that the edits which Anna references in her reply to CronoDAS, as if they had substantively changed the meaning of my original comment, were to add:

  • The phrase "directly observed"
  • The parenthetical about having good epistemic hygiene with regards to people's protestations to the contrary
  • The bit about agendas often not being made explicit

It did not originally specify undisclosed conflicts of interest in any way that the new version doesn't.  Both versions contained the same core (true) claim: that multiple of the staff members common to both CFAR!2017 and CFAR!2025 often had various (i.e. not only the AI stuff) agendas which would bump participant best interests to second, third, or even lower on the priority ladder.

I've also added, just now, a clarifying edit to a higher comment: "Some of these staff members are completely blind to some centrally important axes of care."  This seemed important to add, given that Anna is below making claims of having seen, modeled, and addressed the problems (a refrain I have heard from her, directly, in multiple epochs, and taken damage from naively trusting more than once).  More (abstract, philosophical) detail on my views about this sort of dynamic here.

[-]AnnaSalamon · 7d

> given that Anna is below making claims of having seen, modeled, and addressed the problems

I think I am mostly saying that I don’t agree that there were ever problems of the sort you are describing, w.r.t. standard of care etc. That is: I think I and other CFAR staff were following the basics of standard deontology w.r.t. participants the whole time, and I think the workshops were good enough that it was probably better to be running them the whole time.

I added detail to caveat that and to try to make the conversation less confusing for the few who’re trying to follow it in a high-detail way.

[-]Duncan Sabien (Inactive) · 10d (edited)

I'll note that both this and my top-level comment have a lot of agree-disagree votes, and that it would be wise for people looking in from the outside to ponder what it means for e.g. the top-level comment to have 18 people voting and to end up at -2.

(It might be tempting to sum it up as "ah, Duncan claimed that there's something to be wary of here, and the hive-mind ultimately ended up in disagreement" but I think it's more like "Duncan claimed there's something to be wary of here, and close to half of the people agreed (but were drowned out by the somewhat more than half who disagreed)."  Which is precisely what you would expect if there were some system or process that was consistently harmful to certain people, but not all or even most—a lot of people who passed through unscathed would be like "what do you mean? I was well-cared-for!" and might not pause to wonder about whether they were a black raven and what evidence their experience provides about claims of the existence of white ravens.)

[-]Adele Lopez · 19d (edited)

> People with a history of mania, hypomania, or psychosis. (There’s some evidence that everything from meditation retreats to philosophy books to CFAR workshops may trigger mania or psychosis in folks with tendencies in that direction. If you’re vulnerable in this direction, it’s probably best to not come, or at least to talk to your psychiatrist before deciding.)

There seems to be a profound lack of curiosity about why rationalist-y things tend to cause psychosis. It is NOT NORMAL for things to just sometimes cause psychosis, whoopsie! (Sorry to pick on you Anna, you are at least trying to mitigate this risk here which is more than I can say for the community at large.)

Psychosis isn't just some random thing (like mania kind of is, in this context), it is a state where one is no longer able to determine what is presently real, and what is not. Rationality is, in large part, about becoming better at determining what is real (even in hard cases). It should be a Halt. Melt. Catch Fire. moment when your rationality workshop is somehow regularly crashing people's easy-mode epistemics! To first-order, you should expect a successful rationality workshop to help people prone to psychosis.

It would be one thing if these rationality techniques were extremely effective such that it was plausibly a trade-off worth making. But as far as I can tell, this is not the case, and the people who have substantially leveled-up in "rationality" have done it just by spending an order-of-magnitude more time working specifically on this. The main benefit of the workshops seems to me to have been the networking aspect. It's pretty easy to run networking events without causing psychosis.

[-]AnnaSalamon · 19d

I've spent probably 200 hours trying to understand stuff near here, in various ways, across the last 15 years. I don't have a lack of curiosity about it.

(Partly this is because for a while, many people across the greater rationalist community who seemed a bit psychotic or manic, or more rarely full-blown psychotic/manic, were people who someone would reach out to me about. Partly it's because CFAR had an early experience with a CFAR participant who had a full-blown manic episode after coming to a workshop, and I and we as a staff responded to this by trying to up our game at noticing warning signs of such. Partly it's because I've somehow been curious about what's up with psychosis since I was a teen, after a friend's experience with psychosis and psychiatry.)

People who run all kinds of psychological workshops or meditation retreats tell me that their workshops can occasionally trigger manic or psychotic states in folks with a predisposition in that direction. (Eg Landmark, several different kinds of meditation, some person I talked to at a conference who did random self-help stuff). My high school friend was told by her psychiatrist not to read philosophy books, because allegedly philosophy books are a common psychosis trigger. Sudden major life changes, particularly negative ones but sometimes also positive ones, can trigger mania or psychosis. Psychedelics, including cannabis, can also trigger mania and psychosis. I suspect there's a common thread running through all of this, where mania/psychosis often happens when the mind tries to reorganize, plus or minus some other factors I don't understand well.

People also tell me autistic spectrum folks have psychotic episodes more often. Autistic spectrum seems common in the rationalist community.

In terms of how risky CFAR workshops in particular are (I'm sharing data here, not trying to argue that they are or aren't): about 1800 people have attended 4.5-day or longer events with us. From this set, I am aware of two full-blown manic or psychotic episodes happening at or shortly after a workshop: one from the early participant I mentioned above, and one from someone in ~2018-ish. The latter person tried cannabis during "comfort zone exploration," which they got from another participant without us knowing, which seemed to set off the episode. If I take as a "control group" people who had already been accepted to a CFAR workshop, and had committed to attending but had not yet actually attended: there was one manic or psychotic episode I know of in that group (a person who canceled their participation and told us this was because of mania/psychosis). The early participant had a previous milder psychosis-like episode after reading the Sequences, a couple years before he attended CFAR; the latter participant had a previous milder maybe-episode in response to life stresses. I do think we should try to exercise care here.

(In terms of why I'm talking about "mania or psychosis" together, instead of separating these out: it turns out I'm fairly accurate at predicting when a psychiatrist will say that a person has either mania or psychosis, but am no better than chance at predicting which of these things a psychiatrist will say a given person has. Also most of the first aid seems similar: if you have mild signs of mania or psychosis, maybe: avoid recreational drugs, including alcohol and coffee; get sleep if at all possible; do normal grounding things such as gardening or the dishes, rather than "heady" or agitating things such as math or philosophy or politics; remember that now is not the only time to ever solve the thing one is currently upset about; don't drive cars or make major decisions; maybe see a psychiatrist; maybe reach out to loved ones and dial back the complexities of life for a little bit. So I use the coarser-grained concept "mania/psychosis.")

[-]Adele Lopez · 19d

> I've spent probably 200 hours trying to understand stuff near here, in various ways, across the last 15 years. I don't have a lack of curiosity about it.

That's good to hear. Any insights?

> People who run all kinds of psychological workshops or meditation retreats tell me that their workshops can occasionally trigger manic or psychotic states in folks with a predisposition in that direction. (Eg Landmark, several different kinds of meditation, some person I talked to at a conference who did random self-help stuff). My high school friend was told by her psychiatrist not to read philosophy books, because allegedly philosophy books are a common psychosis trigger. Psychedelics, including cannabis, can also trigger mania and psychosis. I suspect there's a common thread running through all of this.

Yeah, there's something fucked up about meditation communities too. And let's not forget Vassar/Vassarites. 

I think the through-line has to do with drastic modification of self-image, which helps explain the AI cases too (or higher rate in trans). It seems to be a lot worse if this modification was pushed on them to any degree. 

(I'm not saying that modification of self-image is categorically bad. It's necessary as your actual self changes, and most people probably have false beliefs here (maybe even all conscious experience according to some). But be careful. Please!)

I'm not really swayed by arguments that our rough neurotype is just more prone to this (almost certainly true), since the inciting incident—when it's not just drugs—usually seems to be some sort of rationality content or technique. People are prone to dying, but we don't just shrug and say "damn that's crazy" when something causes someone to die. There should be a post-mortem analysis, and sign-post warnings. Maybe you've been diligent about this, but the community-at-large seems to have a missing mood here. More public boggling would have been nice.

> In terms of how risky CFAR workshops in particular are (I'm sharing data here, not trying to argue that they are or aren't): about 1800 people have attended 4.5-day or longer events with us. From this set, I am aware of two full-blown manic or psychotic episodes happening at or shortly after a workshop: one from the early participant I mentioned above, and one from someone in ~2018-ish. The latter person tried cannabis during "comfort zone exploration," which they got from another participant without us knowing, which seemed to set off the episode. If I take as a "control group" people who had already been accepted to a CFAR workshop, and had committed to attending but had not yet actually attended: there was one manic or psychotic episode I know of in that group (a person who canceled their participation and told us this was because of mania/psychosis). The early participant had a previous milder psychosis-like episode after reading the Sequences, a couple years before he attended CFAR; the latter participant had a previous milder maybe-episode in response to life stresses. I do think we should try to exercise care here.

Thanks for sharing the data. It's plausible to me that CFAR isn't particularly bad here, but the prevalence in the community seems extremely high compared to say, my childhood Mormon ward (one case that I know of, did psychedelics which is a no-no). This is something that's been bothering me about the community in general for years, and your post was the unlucky one that inspired me to say something[1] because the psychosis part had the feeling of the missing mood I'm trying to point at.


And fair point re. mania/psychosis. 
 

  1. ^

    Why not earlier? For better-or-worse (worse), having a model I'm happy with seems to be a prerequisite to taking action for me. That only happened about a month ago, while researching the AI psychosis stuff.

[-]AnnaSalamon · 19d (edited)

I agree more community interest would be good here; and I appreciate you writing about it; although I also feel grudging because I don't want this to take up all the attention under my post about new pilot CFAR workshops. Any chance you'd be up for heading with me to your shortform or to open thread or [making a top-level post yourself as a locus for discussion] or something? I'll follow and discuss there. And we can link to it here.

I would off the top of my head guess the Mormons are unusually good at avoiding psychotic episodes, in addition to the rationality community being unusually bad for this; and I agree each situation deserves a postmortem etc.

Edited to add: Adele and I are now talking on Adele's shortform if anyone would like to join there. 

If anyone later wants to post brief take-aways or points they're particularly interested in back here, I don't object to that, I just don't want there to be a large amount of long-winded discussion on it here.

[-]Eli Tyre · 12d (edited)

> It should be a Halt. Melt. Catch Fire. moment when your rationality workshop is somehow regularly crashing people's easy-mode epistemics! To first-order, you should expect a successful rationality workshop to help people prone to psychosis.

For what it's worth, I think this is directionally correct, and important, but I don't necessarily buy it as worded.

Sometimes advanced techniques / tools do allow power users to do more than they otherwise would be able to, but also break basic-level stuff for less advanced users. There are some people who are able to get a lot more out of their computers with a Linux install, but for most people, trying to use and work with Linux can totally interfere with pretty basic stuff that "just worked" when using Windows, or (if you do it wrong) just break your machine, without their having the tools to fix it.

It's correspondingly not that surprising to me if power tools for making big changes to people's epistemologies sometimes have the effect of making some people worse at the basics. (Though obviously, if this is the case, a huge priority needs to be attending to and mitigating this dynamic.)

That said, I think that the rationality project broadly construed has often fallen into a failure mode of trying to do radically ambitious stuff without first solidly mastering the boring and bog standard basics, and often undershooting, not just our ambitions, but the more boring baselines.

Like, we aimed to be faster than science but in practice I think we often didn't meet the epistemic standards of a reasonably healthy scientific subfield. 

If I invest substantial effort in rationality development in the future, I intend to first focus on doing the basics really well before trying for superhuman rationality.

[-]AnnaSalamon · 4d

Two CFAR applicants told me our emails (an acceptance email and a "here's how you schedule a financial aid interview or ask further questions" email) landed in their spam folders. So, if you or your friend applied and are wondering why you haven't heard from us, try checking your spam folder. Thanks!

[-]habryka · 18d

> We would like now to be called “A Center for Applied Rationality,” not “the Center for Applied Rationality.” Because we’d like to be visibly not trying to be the one canonical locus.

FWIW, I tried this for a bit and failed. Saying "a Center for Applied Rationality" just sounds nonsensical and every time I have considered using it in conversation I predicted that I would just get weird blank stares. 

I am planning to continue calling it "the Center for Applied Rationality" as a result (and also am kind of annoyed about what reads to me as basically non-grammatical language on the website and other places, plus a request to use non-standard language that I think would be reliably embarrassing in conversation).

My guess is if you want to change the usage here, you'll have to change the name properly.

[-]mattmacdermott · 18d

“A Center for Applied Rationality” works as a tagline but not as a name

[-]AnnaSalamon · 18d

You're right. Oops!

I added a footnote above modifying our request to "when it's easy/convenient." Eg as mattmacdermott notes below, we can at least use it as a tagline ("Signed, Anna from A Center for ...").
 

[-]Pablo · 18d

The problem is that organizations generally do not include the article used to refer to them in their names. For example, the name of the Council on Foreign Relations is not ‘The Council on Foreign Relations’, but ‘Council on Foreign Relations’. For this reason, one should always use the definite article ‘the’ to refer to CFAR, because one’s intention is to refer to the entity so named. Saying “a Center for Applied Rationality” would invite questions like, “Wait! Are there other orgs also called ‘Center for Applied Rationality’?”

Alternatively, you could change ‘Center for Applied Rationality’ to ‘A Center for Applied Rationality’, but this would also be very strange. As mentioned, entities do not generally include the article as part of their names, but when they do, it is, to my knowledge, always the definite article (e.g., The New York Times).

My humble advice is to drop this idea. You can communicate that you are not trying to be the one canonical org on this topic in other ways.

[-]Sherrinford · 16d (edited)

I think even that signature tagline version does not work so well, as people who do not know it would possibly not understand that you are referring to a specific organization. It would at least need to be 

"Anna from 

CFAR - a center for ..."

[-]Drake Morrison · 19d

For grounding data, I keep thinking of Shop Class as Soulcraft by Matthew Crawford. Doing some kind of wood-carving or pottery or painting or sketching animals on nature walks, or something like that seems well-advised. Also works as a toy problem to practice new skills on. 

[-]AnnaSalamon · 7d

Thanks! I'll take a look.

[-]the gears to ascension · 19d

I really enjoyed CFAR ten years ago; it was what finally got me to take action and start properly studying how to contribute to AI capabilities. I now think that was a first-order bad thing for it to have done, and it's unclear whether my response to realizing that was bad is remotely close to undoing the p(doom|action)-p(doom|counterfactual inaction) change. I think CFAR has promising ideas for how to think well, but I generally think most ways of making people better end up mostly impacting the world by amplifying the AI capabilities researchers who are least doom-avoidant in their approach, and it's not clear to me how to prevent that, given that CFAR was already at the time designed to get people to think about consequences.

[-]abstractapplic · 20d

> At CFAR workshops, people often become conscious of new ways their minds can work, and new things they can try. But we don’t have enough “and now I’ll try to repair my beautiful electronic sculpture, which I need to do right now because the windstorm just blew it all apart, and which will incidentally give me a bunch of real-world grounding” mixed in.
>
> I’d love suggestions here.

I'll try to make sure I'm running a D&D.Sci scenario over both of the spans you mentioned: data-science-y attendees would get a chance to test their data-science-y skills against small but tricky problems with knowable right answers, and non-data-science-y attendees would probably still get something out of spectating (especially if they make a point of trying to predict which participants are closest to said right answer).

(. . . and if anyone else has some kind of [inference|decision]-centric moderately-but-not-excessively-demanding public puzzle/challenge they've been meaning to run, those spans look like the time to do it.)

[-]AnnaSalamon · 20d

Thanks; I appreciate this thought and offer! I'm not sure how well "internet things" can co-exist with the sort of in-person "be fully present" thing that seems to help workshops do their magic, but, per CFAR's recommendations, I'm gonna set a 5-minute timer later today and think on the best way to do it before declaring it impossible :)

[-]abstractapplic · 19d

Fwiw, the scenarios don't have to be solved collaboratively online, and in fact most players play most of them solo. For that matter, they don't need internet access: would-be players could make sure they have the problem description & the dataset & their favorite analysis tools downloaded, then cut the wifi.

(. . . unless "be fully present" rules out laptops too, in which case yeah nvm.)

[-]AnnaSalamon · 19d

Would it work from print-outs?

[-]abstractapplic · 19d

It could, for a game with an unusually small & clean dataset (I'm thinking in particular of On The Construction Of Impossible Structures and How The Grinch Pessimized Christmas) . . . but realistically a LWer solving a problem like that on paper would spend the entire time lamenting that they weren't using a computer, which doesn't seem like a mental state conducive to personal growth. So nvm.

(I do have other thoughts on potential epistemic grounding activities but they're all obvious: board games,  2-4-6 tests[1], pub quizzes with confidence intervals attached, etc.)

  1. ^

    With different rules than the original 2-4-6 test, obviously.
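    (For readers unfamiliar with the original: here's a minimal sketch of the 2-4-6 task's structure. The hidden rule shown is the classic one from Wason's experiment; per the footnote, a real exercise would swap in a different rule.)

```python
# Minimal sketch of a 2-4-6-style hidden-rule game (Wason's task).
# The rule below is the classic original; a real exercise would vary it.

def hidden_rule(a: int, b: int, c: int) -> bool:
    """The experimenter's secret rule: any strictly increasing triple."""
    return a < b < c

# Players propose triples and are told only whether each fits the rule.
for triple in [(2, 4, 6), (4, 8, 12), (1, 2, 3), (3, 2, 1)]:
    a, b, c = triple
    print(triple, "fits" if hidden_rule(a, b, c) else "does not fit")

# The lesson: players who only test triples their pet hypothesis predicts
# will fit (e.g. "evens ascending by 2") collect endless confirmation and
# never falsify it; the winning move is to test triples you expect the
# rule to reject.
```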

[-]cousin_it · 20d (edited)

This was pleasant to read! You seem to be shifting toward some conservative vibes (in the sense of appreciating the nice things about the past, not in the sense of the Republican party).

To me it feels like there's a bit of tension between doing lots of purely mental exercises, like Hamming questions, and trying to be more "whole". One idea I have is that you become more "whole" by physically doing stuff while having the right kind of focus. But it's a bit tricky to explain what it feels like. I'll try...

For example, when drawing I can easily get into overthinking; but if I draw a quick sketch with my eyes closed, just from visual imagination, it frees me up. Or when playing an instrument, I can easily get into overthinking; but when playing with a metronome, or matching tones with a recording, I get into flow and it feels like improving and relaxing at the same time. Or to take a silly example, I've found that running makes me tense, but skipping (not with a rope, just skipping along the street for a bit) is a happy thing and I feel good afterward. So maybe this feeling that you're looking for isn't a mind thing, but a mind-body connection thing.

[-]AnnaSalamon · 20d

Thanks; I appreciate this thought, particularly the examples bit.

[-]Sherrinford · 16d

I really welcome the announcement that CFAR is restarting. When I attended a workshop, I liked the participants, the lecturers, the atmosphere, and the impact of committing time to work on problems that participants had previously procrastinated on. That said, a bunch of thoughts and questions:

  • I am not sure whether there is really some specific "rationality magic" about these workshops. The CFAR technique collection contains cool techniques, but it does not really feel that different from what you might do in a time-management/micro-habits/GTD/whatever workshop, combined with some things that seem like group coaching, psychological process consulting, or things that at least feel a little woo.
  • There might be a specific group dynamic going on in these workshops that has to do with the commitment atmosphere, self-expectations, selection effects, and the payment of $5000. This may get some people to become productive or whatever, but I assume it can also be unhealthy for others (note that not all unhealthy developments are on the level of psychosis or mania or whatever).
  • I attended a free workshop in Prague in 2022. So maybe some of the effects were different there. Nonetheless, I would like to know what insights you generated with those workshops (assuming that that was evaluated systematically). I think they were held for generating data.
  • It seems positive that "circling" is not mentioned as a “CFAR classic”.
[-]Chris Lakin · 20d

Congratulations!

[-]AnnaSalamon · 20d

(CFAR's website is several years out of date, so please ignore it for now; I'll have it up-to-date-ish in a day or two.)

[-]AnnaSalamon · 19d

I have now updated the website, so feel free to stop ignoring it. (There are still some changes we're planning to make sometime in the next month or so, eg adding an FAQ and more staff book picks and the ability to take coaching clients. But the current website should be accurate, if a bit spartan. If you notice something wrong on it, we do want to know.)

[-]kave · 11d

I know many of you folks care a lot about how AI goes. I'm curious how you connect that with – or actively disconnect that from – the new workshops.

The question I'm most interested in: do you have a set of values you intend the workshops to do well by, that don't involve AI, and that you don't intend to let AI pre-empt?[1][2]

I'm also interested in any thinking you have about how the workshops support the role of x-risk, but if I could pick one question, it'd be the former.

  1. ^

    At least in a given workshop. Perhaps you'd stop doing the workshops overall if your thoughts about AI changed

  2. ^

    Barring edge cases, like someone trying to build an AGI in the basement or whatever

[-]AnnaSalamon · 18d

CFAR now has an X.com account, https://x.com/CFARonX. If you happen to be up for following us on there, it might help convince X.com that we're an actual organization and not a spambot, which would be nice for us.

(Weirdly, we "upgraded" to a paid account and it responded to this by freezing our ability to edit our profile photo or handle until verified, which I wish I'd anticipated.)

[-]Raemon · 18d

currently when I click that link I get "Page not found"

[-]Ben Pace · 18d

The period had become part of the URL. I have fixed it.

[-]AnnaSalamon · 18d

Thanks; fixed.

[-]Screwtape · 13d

I have a lot of interest in the data collection puzzle.

Object Level Questions

My last best writeup of the problem is the Unofficial 2024 LessWrong Community Census, in one of the fishing expeditions. My strategy has been to ask about things that might make people more rational (e.g. going to CFAR workshops, reading The Sequences, etc) and ask questions to test people's rationality (e.g. conjunction fallacy, units of exchange, etc) and then check if there are any patterns.

There's always the good ol' self-report on comfort with techniques, but I've been trying to collect questions that are objective evaluations. A partial collection of my best:

  • Calibration questions ("What are your odds that the population of Japan is >100 million?")
  • Conjunction fallacy questions (Ask group A "What are your odds Russia and Ukraine are still at war in 2026?" and ask group B "What are your odds Putin is dead and Russia and Ukraine are still at war in 2026?")
  • Units of Exchange questions (See the "Values and Dutch Booking" section of the census for one way I test that.)
  • Argument by authority ("Do you agree with Scott Alexander that Ritalin has less risk of Parkinson's than Adderall?" where this is a mistake he's since admitted.)
  • Brainstorm count ("How many unaccustomed uses of objects in this room for combat can you come up with?")

Still, self-reports aren't worthless.
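(For concreteness, here's a minimal sketch of how the calibration and conjunction-fallacy questions above could be scored; all responses and numbers below are hypothetical illustrations, not actual census data or pipeline code.)

```python
# Minimal sketch: scoring calibration answers with Brier scores, and
# checking the two-group conjunction-fallacy design described above.
# All responses below are hypothetical, not actual census data.

def brier_score(prob: float, outcome: bool) -> float:
    """Squared error between a stated probability and what actually happened."""
    return (prob - (1.0 if outcome else 0.0)) ** 2

# Calibration: "population of Japan is >100 million" is true (~124 million),
# so a lower mean Brier score means better-calibrated respondents.
stated_probs = [0.9, 0.6, 0.95, 0.3]  # hypothetical respondents' answers
mean_brier = sum(brier_score(p, True) for p in stated_probs) / len(stated_probs)
print(f"mean Brier score: {mean_brier:.3f}")

# Conjunction check: group B's event ("Putin dead AND war continues") is a
# strict subset of group A's ("war continues"), so coherent answers should
# give B a lower average probability than A. If B averages higher, the
# aggregate answers exhibit the conjunction fallacy.
group_a = [0.70, 0.50, 0.60]  # hypothetical P(war continues into 2026)
group_b = [0.75, 0.55, 0.80]  # hypothetical P(Putin dead AND war continues)
if sum(group_b) / len(group_b) > sum(group_a) / len(group_a):
    print("Aggregate answers exhibit the conjunction fallacy.")
```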

Meta, how do we find good questions?

I'm tempted to ask people their goals, ask who's succeeding at their goals or at common goals, and then operate as though that's a useful proxy. There's a fair number of people who say they want a well paying job and a happy relationship, and other people who have those things. Selection effects are sneaky though, and I don't trust my ability to sort out people who are doing well financially because of CFAR's good teachings from the people who were able to attend CFAR because they were already doing well financially. 

On a meta level, I feel pretty excited about different groups that are trying to increase rationality asking each other's questions. That is, if ESPR had a question, CFAR had another question, and the Guild of the Rose had a third question, I think it'd be great if each of them asked their attendees all three questions. Even better in my view to add a few organizations that are adjacent but not really aiming at that goal; ACX Everywhere or Manifold, for instance. Those would be control groups. The different organizations are doing different things, and if ESPR starts doing better on the evaluation questions than Guild of the Rose then maybe the Guild starts borrowing more from ESPR's approach. If ACX Everywhere attendees have better calibration than Metaculus, then we notice we're confused. I've been doing this for the ULWC Census already, and I'd be interested in adding it to after-event surveys.

Are there one or two questions CFAR wants to ask, or has historically asked, that you'd like to add to that collection? Put another way, what are the couple of evaluation questions you think CFAR alumni will do better on relative to, say, ACX Everywhere attendees?

CFAR update, and New CFAR workshops
by AnnaSalamon

Hi all! After about five years of hibernation and quietly getting our bearings,[1] CFAR will soon be running two pilot mainline workshops, and may run many more, depending how these go.

First, a minor name change request 

We would like now to be called “A Center for Applied Rationality,” not “the Center for Applied Rationality.” Because we’d like to be visibly not trying to be the one canonical locus.[2]

Second, pilot workshops! 

We have two, and are currently accepting applications / sign-ups:

  • Nov 5–9, in California;
  • Jan 21–25, near Austin, TX;

Apply here. (If you're interested in the workshop but not sure you want to come, you're welcome to apply; it can be a good way to talk to us about the workshops and get a sense of things.)

Third, a bit about what to expect if you come

The workshops will have a familiar form factor:

  • 4.5 days (arrive Wednesday evening; depart Sunday night or Monday morning).
  • ~25 participants, plus a few volunteers.
  • 5 instructors.
  • Immersive, on-site, with lots of conversation over meals and into the evenings.

I like this form factor, because:

  • Ideal length: Usually, people spend the first two days settling in, getting used to this particular set of strangers, etc. With a 4.5-day workshop, that still leaves us 2.5 days to have exceptionally present/earnest conversations. This length lets people settle into really deep conversational threads, without getting too exhausted.
  • Creates a local social context: Much of human thinking is social. We will let ourselves think a certain way when we have conversation-partners who are up for thinking that way with us, and when we can see a social context where many people are doing it in a way that seems healthy/happy.
  • It helps CFAR staff do gradient descent on how to make the social contexts we want, as well as on how to “teach” particular "rationality techniques.”
    • There’s a bunch of features we desire from the social context: people can make new, lasting friendships; can have earnest conversations about stuff they care about; can be and feel free, conscious, and in touch with what matters to them; can acquire lastingly more traction on their “Hamming Questions,” etc. Workshops are a good format for learning how to create particular social contexts, and how to help people become consciously empowered about particular aspects of being human. (I think.)

Many classic classes, with some new stuff and a subtly different tone:

Like CFAR’s previous workshops, the new workshops are jam-packed with considerably more content than most people expect from 4.5 days.

This includes:

1) Many “CFAR classics,” probably including: Inner Simulator, TAPs, Goal-Factoring, Focusing, Resolve Cycles, CoZE lab, and Hamming Questions. (There's no need to look at this stuff before coming; I’m only linking in case you want to get an idea.)

2) A tone shift (vs the classic workshops) to more of a “rationality hobbyist convention, with visitors from many philosophical schools.” In both our newer (less polished) classes and our remakes of some classics, we’re emphasizing aspects of the human condition that some of us felt were underexplored in the Sequences and in previous CFAR workshops. Notably:

  • a) Pride in craftsmanship, and how this makes it easier to do good work;
  • b) Where people get “energy” from, for doing hard, uncertain, or socially courageous work;
  • c) Feedback loops, along the lines of David Deutsch’s falsificationism;
  • d) Christopher Alexander’s design patterns, and the role of beauty, and of the interplay of parts and wholes, in functional structures;
  • e) Friedrich Hayek’s model of how knowledge is amalgamated across an economy (and how this depends on respecting natural property rights), taken as a metaphor for actions within a mind;
  • f) How keeping “surprise logs” about our own actions and reactions can help us become aware of more of ourselves over time;
  • g) [Your idea goes here, maybe? Because you bring it and show it to us and others.]

If you want, you’ll get assistance locating the most fundamental moves in your own patterns of thinking, and distilling these patterns into a thing you and others can practice consciously (even where they don’t match ours).

(Someone might ask: if there are varied schools of thought present, not all based in the Sequences, what makes it a “rationality” convention? My answer is that it’s a “rationality” convention because we care a lot about forming true beliefs, and about building large-scale models that make coherent, accurate predictions even when taken literally. Some people do talk about “auras” or “reincarnation” in ways that help them describe or fit some local pattern, but at the end of the day these things are not physically literal, and you get bad predictions if you think they are, and we want to keep our eye on that ball while geeking out about the full range of the human condition.)

3) A first two days packed with "content" (mostly classic material, with some new), followed by a (pilot, not yet honed) second half aimed at helping you integrate the skills with one another, with your prior skills, and with your everyday life. Our goal here is to get your CFAR-style/"5-minute-timer-style" skills to coexist with "tortoise skills," with slow patterns of self-observation and of bringing things slowly to consciousness, and with whatever relationships and slow projects you care about.

There will also be nature walks, a chance to chill around a fire pit, and other unhurried time to just hang out.

Who might want to come / why might a person want to come?

You might like to come if any of these are true:

  • A big house full of rationality hobbyists geeking out for four days sounds like your idea of a good time;
  • You want to experience the classic CFAR workshop, and missed it last time around (this one isn’t identical, but it has most of the best bits);
  • You want to support and shape this particular attempt at a rationality scene (with your time, ideas, and workshop fee).

Who probably shouldn’t come?

These rationality workshops are not for everyone. In particular:

  • People who don’t want to be around people quite this many hours. (As in the past, most participants stay in a large house with many other staff and participants, take shared classes with many paired exercises, and socialize over meals and evenings. Sleeping arrangements are usually shared rooms. You can always step away for breaks, but it’s still a lot of people-time.)
  • People who have an object-level project they don’t want to step away from. (CFAR workshops might disrupt your progress in two ways: by taking five days (plus transit and recovery time) during which you can’t do much work on your normal stuff, and by prompting a bunch of “thinking about thinking” that risks disrupting a productive groove.) If this is you, it may be better to wait until a retreat feels more appealing.
  • People with a history of mania, hypomania, or psychosis. (There’s some evidence that everything from meditation retreats to philosophy books to CFAR workshops may trigger mania or psychosis in folks with tendencies in that direction. If you’re vulnerable in this direction, it’s probably best to not come, or at least to talk to your psychiatrist before deciding.) (For related reasons, please do not bring cannabis or other recreational drugs to a workshop, regardless of your personal risk factors; or at minimum don't bring it to share.)
  • People who hate it when folks who don’t understand them try to tell them how to think anyhow. (We try not to be blindly full of ourselves, but we don’t always succeed.) 

Cost:

We want the workshop fees to cover the marginal cost to CFAR of running these workshops, plus a little of the “standing costs” of running an organization (trying out curriculum on volunteers beforehand so we can refine it, etc.). We are therefore charging:

  • $5,000 if you make over $170k/year (that threshold is for people without dependents; it’s somewhat higher with dependents).[3]
  • Sliding scale amounts between $2,000 and $5,000 depending on your income.
  • $2,000 if you’re under 25, or make less than $75k/year.

If you can’t afford $2k and you believe you’ll bring a lot to the workshop, you’re welcome to apply for financial aid and we’ll see what we can do. Likewise, if you really don’t want to pay what the sliding scale would ask and your presence would add substantial value, you’re welcome to apply for financial aid, and we will consider it.
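For concreteness, here is a minimal sketch of how such a sliding scale could be computed. The post gives only the endpoints and the income cutoffs; the linear interpolation between them is my assumption, not CFAR's stated formula:

```python
# A minimal sketch of one way the sliding scale *could* work. Only the
# endpoints ($2,000 and $5,000) and income cutoffs come from the post;
# the linear interpolation between them is an assumption.

def workshop_fee(income: float, age: int) -> float:
    if age < 25 or income < 75_000:
        return 2_000.0
    if income > 170_000:
        return 5_000.0
    # Assumed: interpolate linearly between the two published endpoints.
    fraction = (income - 75_000) / (170_000 - 75_000)
    return 2_000.0 + fraction * 3_000.0

print(workshop_fee(120_000, 30))  # ~3421 under the interpolation assumption
```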

Why this cost:

The above includes room and board. Running and developing CFAR workshops costs us quite a bit; charging at this level should allow us to roughly break even, so we can keep doing this sustainably. I don’t necessarily claim our classes will be worth it to you, although I do think some will get much value from coming. (If you come and, two weeks after returning home, you think your experiences at the workshop haven’t digested into something you find worth it, you can request a refund if you like – CFAR offered this historically, and we intend to keep that part.)

(We are working with an all-very-part-time staff, and plan to keep doing it this way, as I now suspect "doing very-part-time curriculum development and teaching for CFAR" can be healthy, but needs to be mixed with other stuff. (Eliezer said this first, but I didn't believe him.) This decreases total costs some, but it's still expensive.)

How did we prepare these workshops? And the workshops’ epistemic status.

Historical CFAR (2012–2020) ran about sixty retreats of four or more days, of various kinds, and did its best to improve them by gradient descent. We also thought hard, tried things informally in smaller settings, read work from others who’d tried things, and learned especially from Eliezer’s Sequences/R:AZ.

These latest workshops came from that heritage, plus me having something of an existential crisis in 2020[4] (and reading Hayek, Christopher Alexander, and others, and playing around), and other instructors having their own experiences. We’ve been experimenting with the new material (different ones of us, in different contexts), but much less so far than with the old material; in that way these are more like the CFAR workshops of 2012/2013.

What alternatives are there to coming to a workshop?

We here at CFAR believe in goal factoring (sometimes).

If your reason for considering coming to a workshop is that you’d like to boost a “rationality movement” in some form, you might also consider:

  • Starting or supporting a local meetup or rationality practice group;
  • Contributing to online rationality discussions;
  • Donating financially to Lightcone (which TBC is not CFAR).

If your reason for considering coming is that you’d like a retreat-style break from your daily life, or a chance to reflect, you might also consider:

  • Organizing a camping trip or retreat with friends, perhaps one where you try techniques from the CFAR handbook, or attempt earnest discussions about life.

If your reason is that you’d like to get better at forming true beliefs, or achieving stuff, you might consider:

  • Making a list of what exactly you’d like to get better at, and doing written weekly/monthly/quarterly reviews of how things are progressing and what habits/TAPs you might want to try next, perhaps with a bit of coaching mixed in from some rationalist you think could help.

From my POV, the CFAR retreat is on the Pareto frontier for this kind of thing. But of course, opinions vary.

Some unsolved puzzles, in case you have helpful comments:

Puzzle: How to get enough “grounding data,” as people tinker with their own mental patterns

One of the healthiest things about Burning Man, IMO, is that at the same time that people are messing around with personal identity and sex and drugs (not necessarily healthy), many of them are also trying to eg repair complicated electronics for art pieces in the middle of the desert without spare parts (healthy; this exposes their new mental postures to many “is this working?” checks that are grounded in the physical world).

At CFAR workshops, people often become conscious of new ways their minds can work, and new things they can try. But we don’t have enough “and now I’ll try to repair my beautiful electronic sculpture, which I need to do right now because the windstorm just blew it all apart, and which will incidentally give me a bunch of real-world grounding” mixed in.

I’d love suggestions here.

Puzzle: How to help people become, or at least stay, “intact,” in several ways 

There are several features of “humans in human-traditional contexts, who haven’t tried to mess with their functioning with ‘techniques’” that I admire and would love to help people boost (if I knew how, and if people wanted this), or that I’d at least like to avoid eroding much.

Among these:

  • “Mental grip strength”: the ability to try really hard on something, or to keep noticing that something is awful if it is awful, rather than quickly reorienting to some easier state.
  • “Organism-level wholeness”: many parts of the psyche (including ones I have no conscious handle on) are in sync with one another.
  • “Living in a full-color world, full of normal human caring, and connected to ancestral humanity. A world that is stable, and that one is not about to dissociate out of.”

    (Many people today, especially high-level people in the Bay Area, seem to me sort of… abstract, dissociated, cobbled-together-on-purpose-via-conscious-understanding-of-algorithms compared to the people in older books and movies. I’d like more of the normal/historical human thing.)

Puzzle: What data to collect, or how to otherwise see more of what’s happening

This puzzle is a general one. But practical suggestions for what to ask people about (or what data to otherwise collect), so as to discern how they’re doing, what impact we’re having, etc., are appreciated.

Thanks for reading!

  1. ^

    (With some one-off and small pilot workshops mixed in, and four Prague workshops in 2022, and with the successful spinning off of LARC/Bramble, a small non-profit you’ve probably never heard of that is running its own tiny bespoke workshops.)

  2. ^

    ETA: except that, as several people pointed out after posting, this is sometimes grammatically inconvenient or requires extra mental effort. So, new request: please do this when it’s easy, but feel free to say “the” when needed for clarity/ease. And we’ll keep thinking on names.

  3. ^

    For context, CFAR charged $3900 from 2012 to 2020; per the US Bureau of Labor Statistics’ CPI data, $3900 when we started is roughly $5500 today.
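    A minimal sketch of that inflation adjustment; the CPI-U index values below are approximate figures I'm supplying for illustration, not numbers from the post:

```python
# A minimal sketch of the standard CPI inflation adjustment:
# adjusted = original * (CPI_now / CPI_then).
CPI_2012 = 229.6   # approximate CPI-U annual average, 2012 (illustrative)
CPI_2025 = 322.0   # approximate CPI-U, 2025 (illustrative)
print(3900 * CPI_2025 / CPI_2012)  # ~5470, i.e. roughly $5,500
```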

  4. ^

    Briefly: I became worried that strategies for getting lots of people to sync up informationally, like the Democrats’ strategy, were predictably useless in certain ways, and that there was too much of that in my efforts with CFAR and with recruitment for MIRI. I made an attempt to write about this in Narrative Syncing and in My low-quality thoughts on why CFAR didn’t get farther, although I’m not satisfied with either piece. (I also think fear and urgency helped create tricky dynamics; from my POV I addressed some of this in What should you change in response to an "emergency"? And AI risk.)
