Our old workshop had a hyper/agitated/ungrounded energy running through it: “do X and you can be cool and rational like HPMOR!Harry”; “do X and you can maybe help with whether we’ll all die.”
This also seems like an important factor in making it easier for alumni to get pulled into cults — a culture that rewards urgency/desperation to Fix Something ⇒ finding more appeal in questionable exotic offers of power. (Not unique to CFAR, of course — that urgency/desperation is a deeper thread in rationalist culture, and something people might come in with from the start — but I would guess CFAR / this energy acted as a vector for it.)
Agreed! Although Eliezer has bunches of [something in the vicinity of urgency/desperation] and is better than even [most people without that] at avoiding "finding more appeal in questionable exotic offers of power". Which I'd like to understand.
In the 2012-2020 workshops, our “CFAR techniques” seemed to help people do 5-minute-timer or insight-based things, but seemed to some of us to make it harder, or at least not easier, to eg:
- Get physical exercise
I think I dispute this? What did you observe that leads you to think this?
It certainly did not have this impact on me. Admittedly there was a period in which I didn't go as hard at strength training as I could have, out of a (by my lights) misapplied notion of attending to all parts of me. But I've consistently gotten physical exercise over the past 10 years, with occasional lapses that have rarely lasted longer than a week or two. That was not mostly due to CFAR, but I sure did use CFAR techniques at various points to build, maintain, and restart my exercise habit.
I'd also say I substantially upped my physical exercise / found the habit easier to sustain post-CFAR.
Timeless decision theory as an alliance of selves / deciding once to decide always was particularly motivating.
Adele argued recently that a rationality curriculum worthy of the name would leave folks less vulnerable to psychosis, and that many current rationalists (CFAR alums and otherwise) are appallingly vulnerable to psychosis. After thinking about it some, I agree.
I want to quote (and endorse, and claim as important) the start of @Eli Tyre's reply at the time:
For what it's worth, I think this is directionally correct, and important, but I don't necessarily buy it as worded.
Sometimes advanced techniques / tools allow power users to do more than they otherwise would be able to, but also break basic-level stuff for less advanced users. There are some people who are able to get a lot more out of their computers with a Linux install, but also, for most people, trying to use and work with Linux can totally interfere with pretty basic stuff that "just worked" when using Windows, or (if you do it wrong) just break your machine, leaving them without the tools to fix it.
It's correspondingly not that surprising to me if power tools for making big changes to people's epistemologies sometimes have the effect of making some people worse at the basics. (Though obviously, if this is the case, a huge priority needs to be attending to and mitigating this dynamic.)
... actually, the rest of that reply is a good comment on "ambiguous impact on health":
That said, I think that the rationality project broadly construed has often fallen into a failure mode of trying to do radically ambitious stuff without first solidly mastering the boring and bog standard basics. This led to often undershooting, not just our ambitions, but the more boring baselines.
We aimed to be faster than science. But, in practice, I think we often didn't meet the epistemic standards of a reasonably healthy scientific subfield.
If I invest substantial effort in rationality development in the future, I intend to first focus on doing the basics really well before trying for superhuman rationality.
(This post is part of a sequence of year-end efforts to invite real conversation about CFAR; you’ll find more about our workshops, as well as our fundraiser, at What’s going on at CFAR? Updates and Fundraiser and at More details on CFAR's new workshops)
In part of that post, we discuss the main thing that bothered me about our past workshop and why I think it is probably fixed now (though we’re still keeping an eye out). Here, I list the biggest remaining known troubles with our workshops and our other major workshop-related to-do items.
Your thoughts as to what’s really up with these and how to potentially address them (or what cheap investigations might get us useful info) are most welcome.
Ambiguous impact on health
(Current status: ?)
In the 2012-2020 workshops, our “CFAR techniques” seemed to help people do 5-minute-timer or insight-based things, but seemed to some of us to make it harder, or at least not easier, to eg:
This seems unfortunate.
I’m mildly hopeful the changes we’ve made to our new workshops will also help with this. My reasons for hope:
However, I won’t be surprised if this is still a problem. If so, we’ll need to fix it.
Unclear mechanism of action; lack of "piecewise checkability"
(Current status: unsolved)
Magic happens at the workshops (people seem to “wake up” a bit, look around, and e.g. notice they hate doing the ironing but could probably pay someone to do it, or they’re bored of their normal activities but can change them, or their parents may die someday and now is a good time to reach out, or their own habits are made of examinable, changeable pieces just like the outside world is. And this is great!). But afterwards, it’s hard (for them and for us) to describe the workshop and its mechanism of action to someone who hasn’t been there. Sometimes it’s even hard to describe to someone who has been there, if they’ve had years to forget.[2]
The “hard to remember / hard to describe” property makes it difficult to know:
People also sometimes wonder: are CFAR workshops just fun parties, dressed up as rationality training? I think they aren’t (and those who’ve wondered this aloud in my hearing mostly haven’t been to one).[3] But:
We at CFAR don’t know how to do this. Yet.
Habits that make it easier for alumni to get pulled into cults
(Current status: ?)
As mentioned in our main post: if workshop guests practice deferring to us about what weird things to do with their minds (especially if they do so for extended periods, based on wispy claims about long-term payoffs such as “this’ll help with AI risk somehow”), this risks setting some up to later try deferring to people running more obviously unhealthy cults. I speak from experience.
(My guess, too, is that our broader cultural influence may have spread some of this unfortunate pattern to the wider culture of the rationalist community, not just to alums. People would come to the workshop, observe a bunch of healthy normal-looking people having fun doing “rationality exercises,” and come away with the cultural belief that introspection and self-modification are cool, normal, and useful. Which they sometimes are. But we need more precision in the transmitted cultural pattern.)
We’ve got to find a way to make our workshops not set people up for bad patterns here. Our current emphasis on “remember the buck stops with you; check whether it is producing fruits you directly feel good about” may help. So may finding ways to maintain, as our and guests’ central focus, the discovery of checkable stuff in the outside world and/or the building of outside stuff with obvious feedback loops.
Outward-directedness
(Current status: minor progress)
In addition to perhaps assisting with cult resistance, it would also be nice for other reasons if our workshops can become more outward-directed -- more about noticing and building neat things in the world, rather than pure introspection/self-modification.
More things I want for the workshops
Here are some additional things I would like to have (or have more of) at future CFAR workshops:
A SPARC instructor told me that the head of a different math program for young people had complained to him that SPARC or HPMOR seemed to mess up people’s ability to be deeply interested in textbooks, or in other gradual acquisition of deep math knowledge, in favor of immediate cleverness / insights / reconceptualizations. (It’s been some years since I heard this; I might be botching the details a bit. Logan Strohl’s work on “tortoise skills” seems to me to be a response to independently noticing a similar need.)
Thanks to Dan Keys for persistently raising this fact to my attention and convincing me of its importance.
The reason I think workshops don't just work by being fun parties: alumni often have a characteristic "CFAR alumni" skillset they didn't come in with. For example, if most people attempt a task (e.g., to mow the lawn) and find it's not in their direct reach (e.g. because the lawnmower is broken and all nearby stores are closed for Thanksgiving), they'll decide it's impossible. If CFAR alumni are in the same situation, they may (sensibly) decide it's too expensive to be worth it, but they'll usually be aware that there are other avenues they could try (e.g. phoning gardeners from Craigslist and attempting to rent their lawnmower).
Former CFAR instructor Kenzi Ashkie and I used to observe and discuss these skills in our alums, including months and years after they attended.
Adele argued recently that a rationality curriculum worthy of the name would leave folks less vulnerable to psychosis, and that many current rationalists (CFAR alums and otherwise) are appallingly vulnerable to psychosis. After thinking about it some, I agree. I’m hoping our “respecting who-ness” thread and pride or self-esteem threads help some; there is also much else we can try.