The ability to notice and respect self-boundaries feels particularly important to me.
That seems right.
I wish I had a clearer notion of what "self" means, here.
I tried asking myself “What [skills / character traits / etc] might reduce risk of psychosis, or might indicate a lack of vulnerability to psychosis, while also being good?”
(The “while also being good” criterion is meant to rule out things such as “almost never changing one’s mind about anything major” that for all I know might be a protective factor, but that I don’t want for myself or for other people I care about.)
I restricted myself to longer-term traits. (That is: I’m imagining “psychosis” as a thing that happens when *both* (a) a person has weak structures in some way; and (b) a person has high short-term stress on those structures, eg from having had a major life change recently or having taken a psychedelic or something. I’m trying to brainstorm traits that would help with (a), controlling for (b).)
It actually hadn’t occurred to me to ask myself this question before, so thank you, Adele. (By contrast, I had put effort into reducing (b) in cases where someone is already heading in a mildly psychosis-like direction, eg the first aid stuff I mentioned earlier.)
—
My current brainstorm:
(1) The thing Nathaniel Branden calls “self-esteem,” and gives exercises for developing in The Six Pillars of Self-Esteem. (Note that this is a much cooler thing than what my elementary school teachers seemed to mean by the word.)
(2) The ability to work on long-term projects successfully for a long time. (Whatever that’s made of.)
(3) The ability to maintain long-term friendships and collaborations. (Whatever that’s made of.)
(4) The ability to notice / tune into and respect other people’s boundaries (or organizations’ boundaries, or etc). Where by a “boundary” I mean: (a) stuff the person doesn’t consent to, that common practice or natural law says they’re the authority about (e.g. “I’m not okay with you touching my hand”; “I’m not willing to participate in conversations where I’m interrupted a lot”) OR (b) stuff that’ll disable the person’s usual modes/safeguards/protections/conscious-choosing-powers (?except in unusually wholesome cases of enthusiastic consent).
(5) Anything good that allows people to have a check of some sort on local illusions or local impulses. Eg:
(5a) Tempo stuff: Getting regular sleep, regular exercise, having deep predictable rhythms to one’s life (eg times of day for eating vs for not-eating; times of week for working vs for not-working; times of year for seeing extended family and times for reflecting). Having a long memory, and caring about thoughts and purposes that extend across time.
(5b) Embeddedness in a larger world, eg
I'll do this; thank you. In general, please don't assume I've done all the obvious things (in any domain); it's easy to miss stuff, and it's cheap for me to briefly read advice I turn out not to need.
I’ll try here to summarize (my guess at) your views, Adele. Please let me know what I’m getting right and wrong. And also if there are points you care about that I left out.
I think you think:
(1) Psychotic episodes are quite bad for people when they happen.
(2) They happen a lot more (than gen population base rates) around the rationalists.
(2a) They also happen a lot more (than gen population base rates) among “the kinds of people we attract.” You’re not sure whether we’re above the base rate for “the kinds of people who would be likely to end up here.” You also don’t care much about that question.
(3) There are probably things we as a community can tractably do to significantly reduce the number of psychotic episodes, in a way that is good or not-bad for our goals overall.
(4) People such as Brent caused/cause psychotic episodes sometimes, or increase their rate in people with risk factors or something.
(5) You’re not sure whether CFAR workshops were more psychosis-risky than other parts of the rationalist community.
(6) You think CFAR leadership, and leadership of the rationality community broadly, had and has a duty to try to reduce the number of psychotic episodes in the rationalist community at large, not just events happening at / directly related to CFAR workshops.
(6b) You also think CFAR leadership failed to perform this duty.
(7) You think you can see something of the mechanisms whereby psyches sometimes have psychotic episodes, and that this view affords some angles for helping prevent such episodes.
(8) Separately from (7), you think psychotic episodes are in some way related to poor epistemics (e.g., psychotic people form really false models of a lot of basic things), and you think it should probably be possible to create “rationality techniques” or "cogsec techniques" or something that simultaneously improve most people’s overall epistemics and reduce people’s vulnerability to psychosis.
Thanks; fixed.
CFAR now has an X.com account, https://x.com/CFARonX. If you happen to be up for following us on there, it might help convince X.com that we're an actual organization and not a spambot, which would be nice for us.
(Weirdly, we "upgraded" to a paid account and it responded to this by freezing our ability to edit our profile photo or handle until verified, which I wish I'd anticipated.)
You're right. Oops!
I added a footnote above modifying our request to "when it's easy/convenient." Eg as mattmacdermott notes below, we can at least use it as a tagline ("Signed, Anna from A Center for ...").
I have now updated the website, so feel free to stop ignoring it. (There are still some changes we're planning to make sometime in the next month or so, eg adding an FAQ and more staff book picks and the ability to take coaching clients. But the current website should be accurate, if a bit spartan. If you notice something wrong on it, we do want to know.)
I appreciate you taking the time to engage with me here; I imagine this must be a pretty frustrating conversation for you in some ways. Thank you.
No, I mean, I do honestly appreciate you engaging, and my grudgingness is gone now that we aren't putting the long-winded version under the post about pilot workshops (and I don't mind if you later put some short comments there). Not frustrating. Thanks.
And please feel free to be as persistent or detailed or whatever as you have any inclination toward.
(To give a bit more context on why I appreciate it: my best guess is that old CFAR workshops did both a lot of good and a significant amount of damage, by which I mostly don't mean psychosis; I mostly mean smaller kinds of damage to people's thinking habits or to ways the social fabric could've formed. A load-bearing piece of my hope of doing better this time is to try to have everything visible unless we have a good reason not to (a "good reason" like [personal privacy of a person who isn't in power], which is why I'm not naming the specific people who had manic/psychotic episodes; not like [wanting CFAR not to look bad]), and to try to set up a context where people really do share concerns and thoughts. I'm not wholly sure how to do that, but I'm pretty sure you're helping here.)
I'll have more comments tomorrow or sometime.
I'm somehow wanting to clarify the difference between a "bridging heuristic" and solving a bucket error. If a person is to be able to hope for "an AI pause" and "not totalitarianism" at the same time (like cousin_it), they aren't making a bucket error.
But, they/we might still not know how to try to harness social energy toward "an AI pause" without harnessing social energy toward "let any government that says it's pro-pause move toward totalitarianism, with AI safety as a fig leaf".
The bridging heuristic I'd want would somehow involve built-in delimiters, so that if a social coalition gathered momentum behind the heuristic, the coalition wouldn't be exploitable -- its members would know what lines, if crossed, meant that the people who had co-opted the name of the coalition were no longer fighting for the coalition's real values.
Like, if a good free speech organization backs Alice's legal right to say [really dumb/offensive thing], the organization manages to keep track that its deal is "defend anybody's legal right to say anything," rather than "build a coalition for [really dumb/offensive thing]"; it doesn't get confused and switch to supporting [really dumb/offensive thing]. Adequate ethical heuristics around [good thing X, eg AI safety] would let us build social momentum toward [X] without it getting co-opted by [bad things that try to say they're X].