I appreciate Zoe Curzi's revelations of her experience with Leverage.  I know how hard it is to speak up when no or few others do, and when people are trying to keep things under wraps.

I haven't posted much publicly about my experiences working as a researcher at MIRI (2015-2017) or around CFAR events, to a large degree because I've been afraid.  Now that Zoe has posted about her experience, I find it easier to do so, especially after the post was generally well-received by LessWrong.

I felt moved to write this, not just because of Zoe's post, but also because of Aella's commentary:

I've found established rationalist communities to have excellent norms that prevent stuff like what happened at Leverage. The times where it gets weird is typically when you mix in a strong leader + splintered, isolated subgroup + new norms. (this is not the first time)

This seemed to me to be definitely false, upon reading it.  Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).

I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases.  I also caution against blame in general, in situations like these, where many people (including me!) contributed to the problem, and have kept quiet for various reasons.  With good reason, it is standard for truth and reconciliation events to focus on restorative rather than retributive justice, and include the possibility of forgiveness for past crimes.

As a roadmap for the rest of the post, I'll start by describing some background, describe some trauma symptoms and mental health issues I and others have experienced, and describe the actual situations that these mental events were influenced by and "about" to a significant extent.

Background: choosing a career

After I finished my CS/AI Master's degree at Stanford, I faced a choice of what to do next.  I had a job offer at Google for machine learning research and a job offer at MIRI for AI alignment research.  I had also previously considered pursuing a PhD at Stanford or Berkeley; I'd already done undergrad research at CoCoLab, so this could have easily been a natural transition.

I'd decided against a PhD on the basis that research in industry was a better opportunity to work on important problems that impact the world; since then I've gotten more information from insiders that academia is a "trash fire" (not my quote!), so I don't regret this decision.

I was faced with a decision between Google and MIRI.  I knew that at MIRI I'd be taking a pay cut.  On the other hand, I'd be working on AI alignment, an important problem for the future of the world, probably significantly more important than whatever I'd be working on at Google.  And I'd get an opportunity to work with smart, ambitious people, who were structuring their communication protocols and life decisions around the content of the LessWrong Sequences.

These Sequences contained many ideas that I had developed or discovered independently, such as functionalist theory of mind, the idea that Solomonoff Induction was a formalization of inductive epistemology, and the idea that one-boxing in Newcomb's problem is more rational than two-boxing.  The scene attracted thoughtful people who cared about getting the right answer on abstract problems like this, making for very interesting conversations.

Research at MIRI was an extension of such interesting conversations to rigorous mathematical formalism, making it very fun (at least for a time).  Some of the best research I've done was at MIRI (reflective oracles, logical induction, others).  I met many of my current friends through LessWrong, MIRI, and the broader LessWrong Berkeley community.

When I began at MIRI (in 2015), there were ambient concerns that it was a "cult"; this was a set of people with a non-mainstream ideology that claimed that the future of the world depended on a small set of people that included many of them.  These concerns didn't seem especially important to me at the time.  So what if the ideology is non-mainstream as long as it's reasonable?  And if the most reasonable set of ideas implies high impact from a rare form of research, so be it; that's been the case at times in history.

(Most of the rest of this post will be negative-valenced, like Zoe's post; I wanted to put some things I liked about MIRI and the Berkeley community up-front.  I will be noting parts of Zoe's post and comparing them to my own experience, which I hope helps to illuminate common patterns; it really helps to have an existing different account to prompt my memory of what happened.)

Trauma symptoms and other mental health problems

Back to Zoe's post.  I want to disagree with a frame that says that the main thing that's bad was that Leverage (or MIRI/CFAR) was a "cult".  This makes it seem like what happened at Leverage is much worse than what could happen at a normal company.  But, having read Moral Mazes and talked to people with normal corporate experience (especially in management), I find that "normal" corporations are often quite harmful to the psychological health of their employees, e.g. causing them to have complex PTSD symptoms, to see the world in zero-sum terms more often, and to have more preferences for things to be incoherent.  Normal startups are commonly called "cults", with good reason.  Overall, there are both benefits and harms of high-demand ideological communities ("cults") compared to more normal occupations and social groups, and the specifics matter more than the general class of something being "normal" or a "cult", although the general class affects the structure of the specifics.

Zoe begins by listing a number of trauma symptoms she experienced.  I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.

The psychotic break was in October 2017, and involved psychedelic use (as part of trying to "fix" multiple deep mental problems at once, which was, empirically, overly ambitious); although people around me to some degree tried to help me, this "treatment" mostly made the problem worse, so I was placed in 1-2 weeks of intensive psychiatric hospitalization, followed by 2 weeks in a halfway house.  This was followed by severe depression lasting months, and less severe depression from then on, which I still haven't fully recovered from.  I had PTSD symptoms after the event and am still recovering.

During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation.  I was catatonic for multiple days, afraid that by moving I would cause harm to those around me.  This is in line with scrupulosity-related post-cult symptoms.

Talking about this is to some degree difficult because it's normal to think of this as "really bad".  Although it was exceptionally emotionally painful and confusing, the experience taught me a lot, very rapidly; I gained and partially stabilized a new perspective on society and my relation to it, and to my own mind.  I have much more ability to relate to normal people now, who are, for the most part, also traumatized.

(Yes, I realize how strange it is that I was more able to relate to normal people by occupying an extremely weird mental state where I thought I was destroying the world and was ashamed and suicidal regarding this; such is the state of normal Americans, apparently, in a time when suicidal music is extremely popular among youth.)

Like Zoe, I have experienced enormous post-traumatic growth.  To quote the song "I Am Woman": "Yes, I'm wise, but it's wisdom born of pain.  I guess I've paid the price, but look how much I've gained."

While most people around MIRI and CFAR didn't have psychotic breaks, there were at least 3 other cases of psychiatric institutionalization of people in the social circle immediately around MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis.  There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events, including a relatively exclusive AI-focused one.

I heard that the paranoid person in question was concerned about a demon inside him, implanted by another person, trying to escape.  (I knew the other person in question, and their own account was consistent with attempting to implant mental subprocesses in others, although I don't believe they intended anything like this particular effect).  My own actions while psychotic later that year were, though physically nonviolent, highly morally confused; I felt that I was acting very badly and "steering in the wrong direction", e.g. in controlling the minds of people around me or subtly threatening them, and was seeing signs that I was harming people around me, although none of this was legible enough to seem objectively likely after the fact.  I was also extremely paranoid about the social environment, being unable to sleep normally due to fear.

There are even cases of suicide in the Berkeley rationality community associated with scrupulosity and mental self-improvement (specifically, Maia Pasek/SquirrelInHell, and Jay Winterford/Fluttershy, both of whom were long-time LessWrong posters; Jay wrote an essay about suicidality, evil, domination, and Roko's basilisk months before the suicide itself).  Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz.  (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)

The cases discussed are not always of MIRI/CFAR employees, so they're hard to attribute to the organizations themselves, even if they were clearly in the same or a nearby social circle.  Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization.  Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR.  (This diffusion of responsibility, of course, doesn't help when there are actual crises, mental health or otherwise.)

Obviously, for every case of poor mental health that "blows up" and is noted, there are many cases that aren't.  Many people around MIRI/CFAR and Leverage, like Zoe, have trauma symptoms (including "cult after-effect symptoms") that aren't known about publicly until the person speaks up.

Why do so few speak publicly, and after so long?

Zoe discusses why she hadn't gone public until now.  She first cites fear of response:

Leverage was very good at convincing me that I was wrong, my feelings didn't matter, and that the world was something other than what I thought it was. After leaving, it took me years to reclaim that self-trust.

Clearly, not all cases of people trying to convince each other that they're wrong are abusive; there's an extra dimension of institutional gaslighting, people telling you something you have no reason to expect they actually believe, people being defensive and blocking information, giving implausible counter-arguments, trying to make you doubt your account and agree with their bottom line.

Jennifer Freyd writes about "betrayal blindness", a common problem where people hide from themselves evidence that their institutions have betrayed them.  I experienced this around MIRI/CFAR.

Some background on AI timelines: At the Asilomar Beneficial AI conference, in early 2017 (after AlphaGo was demonstrated in late 2016), I remember another attendee commenting on a "short timelines bug" going around.  Apparently a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years.

This trend in belief included MIRI/CFAR leadership; one person commented that he noticed his timelines trending only towards getting shorter, and decided to update all at once.  I've written about AI timelines in relation to political motivations before (long after I actually left MIRI).

Perhaps more important to my subsequent decisions, the AI timelines shortening triggered an acceleration of social dynamics.  MIRI became very secretive about research.  Many researchers were working on secret projects, and I learned almost nothing about these.  I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact.  Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.

I had disagreements with the party line, such as on when human-level AGI was likely to be developed and about security policies around AI, and there was quite a lot of effort to convince me of their position, that AGI was likely coming soon and that I was endangering the world by talking openly about AI in the abstract (not even about specific new AI algorithms). Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness [EDIT: Eliezer himself and Sequences-type thinking, of course, would aggressively disagree with the epistemic methodology advocated by this person].  I experienced a high degree of scrupulosity about writing anything even somewhat critical of the community and institutions (e.g. this post).  I saw evidence of bad faith around me, but it was hard to reject the frame for many months; I continued to worry about whether I was destroying everything by going down certain mental paths and not giving the party line the benefit of the doubt, despite its increasing absurdity.

Like Zoe, I was definitely worried about fear of response.  I had paranoid fantasies about a MIRI executive assassinating me.  The decision theory research I had done came to life, as I thought about the game theory of submitting to a threat of a gun, in relation to how different decision theories respond to extortion.

This imagination, though extreme (and definitely reflective of a cognitive error), was to some degree reinforced by the social environment.  I mentioned the possibility of whistle-blowing on MIRI to someone I knew, who responded that I should consider talking with Chelsea Manning, a whistleblower who is under high threat.  There was quite a lot of paranoia at the time, both among the "establishment" (who feared being excluded or blamed) and the "dissidents" (who feared retaliation by institutional actors).  (I would, if asked to take bets, have bet strongly against actual assassination, but I did fear other responses.)

More recently (in 2019), multiple masked protesters at a CFAR event (handing out pamphlets critical of MIRI and CFAR) had a SWAT team called on them (by camp administrators, not CFAR people, although a CFAR executive had called the police previously about this group); they were arrested and are now facing the possibility of long jail time.  While this group of people (Ziz and some friends/associates) chose an unnecessarily risky way to protest, hearing about this made me worry about violently authoritarian responses to whistleblowing, especially when I was under the impression that a CFAR-adjacent person had called the cops to say the protesters had a gun (which they didn't have), which is the way I heard the story the first time.

Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past events secretively.  This matches my experience.

Like Zoe, I care about the people I interacted with during the time of the events (who are, for the most part, colleagues who I learned from), and I don't intend to cause harm to them through writing about these events.

Zoe discusses an unofficial NDA people signed as they left, agreeing not to talk badly of the organization.  While I wasn't pressured to sign an NDA, there were significant security policies discussed at the time (including the one about researchers not asking each other about research).  I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous (the blog post in question would have been similar to the AI forecasting work I did later, here and here; judge for yourself how dangerous this is).  This made it hard to talk about the silencing dynamic; if you don't have the freedom to speak about the institution and limits of freedom of speech, then you don't have freedom of speech.

(Is it a surprise that, after over a year in an environment where I was encouraged to think seriously about the possibility that simple actions such as writing blog posts about AI forecasting could destroy the world, I would develop the belief that I could destroy everything through subtle mental movements that manipulate people?)

Years before, MIRI had a non-disclosure agreement that members were pressured to sign, as part of a legal dispute with Louie Helm.

I was certainly socially discouraged from revealing things that would harm the "brand" of MIRI and CFAR, by executive people.  There was some discussion at the time of the possibility of corruption in EA/rationality institutions (e.g. Ben Hoffman's posts criticizing effective altruism, GiveWell, and the Open Philanthropy Project); a lot of this didn't end up on the Internet due to PR concerns.

Someone who I was collaborating with at the time (Michael Vassar) was commenting on social epistemology and the strengths and weaknesses of various people's epistemology and strategy, including people who were leaders at MIRI/CFAR.  Subsequently, Anna Salamon said that Michael was causing someone else at MIRI to "downvote Eliezer in his head" and that this was bad because it meant that the "community" would not agree about who the leaders were, and would therefore have akrasia issues due to the lack of agreement on a single leader in their head telling them what to do.  (Anna says, years later, that she was concerned about bias in selectively causing downvotes rather than upvotes; however, at the time, based on what was said, I had the impression that the primary concern was about coordination around common leadership rather than bias specifically.)

This seemed culty to me and some friends; it's especially evocative in relation to Julian Jaynes' writing about Bronze Age cults, which details a psychological model in which idols/gods give people voices in their heads telling them what to do.

(As I describe these events in retrospect they seem rather ridiculous, but at the time I was seriously confused about whether I was especially crazy or in-the-wrong, and the leadership was behaving sensibly.  If I were the type of person to trust my own judgment in the face of organizational mind control, I probably wouldn't have been hired in the first place; everything I knew about how to be hired would point towards having little mental resistance to organizational narratives.)

Strange psycho-social-metaphysical hypotheses in a group setting

Zoe gives a list of points showing how "out of control" the situation at Leverage got.  This is consistent with what I've heard from other ex-Leverage people.

The weirdest part of the events recounted is the concern about possibly-demonic mental subprocesses being implanted by other people. As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors ("know-how") and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver's conscious goals. According to IFS-like psychological models, it's common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it's hard to rule out based on physics or psychology.

As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR. These strange experiences are, as far as I can tell, part of a more general social phenomenon around that time period; I recall a tweet commenting that the election of Donald Trump convinced everyone that magic was real.

Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR-adjacent community.  While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible.  (I noted at the time that there might be a sense in which different people have "auras", in a way no less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.)

As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.

The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with.  Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.

Alternatively, like me, they can explore these metaphysics while:

  • losing days of sleep
  • becoming increasingly paranoid and anxious
  • feeling delegitimized and gaslit by those around them, unable to communicate their actual thoughts with those around them
  • fearing involuntary psychiatric institutionalization
  • experiencing involuntary psychiatric institutionalization
  • having almost no real mind-to-mind communication during "treatment"
  • learning primarily to comply and to play along with the incoherent, shifting social scene (there were mandatory improv classes)
  • being afraid of others in the institution, including being afraid of sexual assault, which is common in psychiatric hospitals
  • believing the social context to be a "cover up" of things including criminal activity and learning to comply with it, on the basis that one would be unlikely to exit the institution within a reasonable time without doing so

Being able to discuss somewhat wacky experiential hypotheses, like the possibility of people spreading mental subprocesses to each other, in a group setting, and have the concern actually taken seriously as something that could seem true from some perspective (and which is hard to definitively rule out), seems much more conducive to people's mental well-being than refusing to have that discussion, leaving people to struggle with (what they think is) mental subprocess implantation on their own.  Leverage definitely had large problems with these discussions, and perhaps tried to reach more intersubjective agreement about them than was plausible (leading to over-reification, as Zoe points out), but these problems seem less severe than those resulting from refusing to have the discussions at all, such as psychiatric hospitalization and jail time.

"Psychosis" doesn't have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Laing's work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.

World-saving plans and rarity narratives

Zoe cites the fact that Leverage has a "world-saving plan" (which included taking over the world) and considered Geoff Anders and Leverage to be extremely special, e.g. Geoff being possibly the best philosopher ever:

Within a few months of joining, a supervisor I trusted who had recruited me confided in me privately, “I think there’s good reason to believe Geoff is the best philosopher who’s ever lived, better than Kant. I think his existence on earth right now is an historical event.”

Like Leverage, MIRI had a "world-saving plan".  This is no secret; it's discussed in an Arbital article written by Eliezer Yudkowsky.  Nate Soares frequently talked about how it was necessary to have a "plan" to make the entire future ok, to avert AI risk; this plan would need to "backchain" from a state of no AI risk and may, for example, say that we must create a human emulation using nanotechnology that is designed by a "genie" AI, which does a narrow task rather than taking responsibility for the entire future; this would allow the entire world to be taken over by a small group including the emulated human. [EDIT: See Nate's clarification, the small group doesn't have to be MIRI specifically, and the upload plan is an example of a plan rather than a fixed super-plan.]

I remember taking on more and more mental "responsibility" over time, noting the ways in which people other than me weren't sufficient to solve the AI alignment problem, and I had special skills, so it was uniquely my job to solve the problem.  This ultimately broke down, and I found Ben Hoffman's post on responsibility to resonate (which discusses the issue of control-seeking).

The decision theory of backchaining and taking over the world is somewhat beyond the scope of this post.  There are circumstances where backchaining is appropriate, and "taking over the world" might be necessary, e.g. if there are existing actors already trying to take over the world and none of them would implement a satisfactory regime.  However, there are obvious problems with multiple actors each attempting to control everything, which are discussed in Ben Hoffman's post.

This connects with what Zoe calls "rarity narratives".  There were definitely rarity narratives around MIRI/CFAR.  Our task was to create an integrated, formal theory of values, decisions, epistemology, self-improvement, etc ("Friendliness theory"), which would help us develop Friendly AI faster than the rest of the world combined was developing AGI (which was, according to leaders, probably in less than 20 years).  It was said that a large part of our advantage in doing this research so fast was that we were "actually trying" and others weren't.  It was stated by multiple people that we wouldn't really have had a chance to save the world without Eliezer Yudkowsky (obviously implying that Eliezer was an extremely historically significant philosopher).

Though I don't remember people saying explicitly that Eliezer Yudkowsky was a better philosopher than Kant, I would guess many would have said so.  No one there, as far as I know, considered Kant worth learning from enough to actually read the Critique of Pure Reason in the course of their research; I only did so years later, and I'm relatively philosophically inclined.  I would guess that MIRI people would consider a different set of philosophers relevant, e.g. would include Turing and Einstein as relevant "philosophers", and I don't have reason to believe they would consider Eliezer more relevant than these, though I'm not certain either way.  (I think Eliezer is a world-historically-significant philosopher, though not as significant as Kant or Turing or Einstein.)

I don't think it's helpful to oppose "rarity narratives" in general.  People need to try to do hard things sometimes, and actually accomplishing those things would make the people in question special, and that isn't a good argument against trying the thing at all.  Intellectual groups with high information integrity, e.g. early quantum mechanics people, can have a large effect on history.  I currently think the intellectual work I do is pretty rare and important, so I have a "rarity narrative" about myself, even though I don't usually promote it.  Of course, a project claiming specialness while displaying low information integrity is, effectively, asking for more control and resources than it can beneficially use.

Rarity narratives can have the effects of making a group of people more insular, concentrating relevance around itself rather than learning from other sources (in the past or the present), centering local social dynamics on a small number of special people, and increasing pressure on people to try to do (or pretend to try to do) things beyond their actual abilities; Zoe and I both experienced these effects.

(As a hint to evaluating rarity narratives yourself: compare Great Thinker's public output to what you've learned from other public sources; follow citations and see where Great Thinker might be getting their ideas from; read canonical great philosophy and literature; get a quantitative sense of how much insight is coming from which places throughout spacetime.)

The object-level specifics of each case of world-saving plan matter, of course; I think most readers of this post will be more familiar with MIRI's world-saving plan, especially since Zoe's post provides few object-level details about the content of Leverage's plan.

Debugging

Rarity ties into debugging; if what makes us different is that we're Actually Trying and the other AI research organizations aren't, then we're making a special psychological claim about ourselves, that we can detect the difference between actually and not-actually trying, and cause our minds to actually try more of the time.

Zoe asks whether debugging was "required"; she notes:

The explicit strategy for world-saving depended upon a team of highly moldable young people self-transforming into Elon Musks.

I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes".  This part of the plan was the same [EDIT: Anna clarifies that, while some people becoming like Elon Musk was some people's plan, there was usually acceptance of people not changing themselves; this might to some degree apply to Leverage as well].

Self-improvement was a major focus around MIRI and CFAR, and at other EA orgs.  It often used standard CFAR techniques, which were taught at workshops.  It was considered important to psychologically self-improve to the point of being able to solve extremely hard, future-lightcone-determining problems.

I don't think these are bad techniques, for the most part.  I think I learned a lot by observing and experimenting on my own mental processes.  (Zoe isn't saying Leverage's techniques are bad either, just that you could get most of them from elsewhere.)

Zoe notes a hierarchical structure where people debugged people they had power over:

Trainers were often doing vulnerable, deep psychological work with people with whom they also lived, made funding decisions about, or relied on for friendship. Sometimes people debugged each other symmetrically, but mostly there was a hierarchical, asymmetric structure of vulnerability; underlings debugged those lower than them on the totem pole, never their superiors, and superiors did debugging with other superiors.

This was also the case around MIRI and CFAR.  A lot of debugging was done by Anna Salamon, head of CFAR at the time; Ben Hoffman noted that "every conversation with Anna turns into an Anna-debugging-you conversation", which resonated with me and others.

There was certainly a power dynamic of "who can debug whom"; the more advanced psychologist offers therapy to others and can point out when they're being "defensive", while not accepting the same from them.  This power dynamic is also present in normal therapy, although the profession has norms, such as only getting therapy from strangers, that change the situation.

How beneficial or harmful this was depends on the details.  I heard that "political" discussions at CFAR (e.g. determining how to resolve conflicts between people at the organization, which could result in people leaving the organization) were mixed with "debugging" conversations, in a way that would make it hard for people to focus primarily on the debugged person's mental progress without imposing pre-determined conclusions.  Unfortunately, when there are few people with high psychological aptitude around, it's hard to avoid "debugging" conversations having political power dynamics, although it's likely that the problem could have been mitigated.

[EDIT: See PhoenixFriend's pseudonymous comment, and replies to it, for more on power dynamics including debugging-related ones at CFAR specifically.]

It was really common for people in the social space, including me, to have a theory about how other people are broken, and how to fix them, by getting them to understand a deep principle you do and they don't.  I still think most people are broken and don't understand deep principles that I or some others do, so I don't think this was wrong, although I would now approach these conversations differently.

A lot of the language from Zoe's post, e.g. "help them become a master", resonates.  There was an atmosphere of psycho-spiritual development, often involving Kegan stages.  There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy [EDIT: see Duncan's comment estimating that the actual amount of interaction between CFAR and MAPLE was pretty low even though there was some overlap in people].

Although I wasn't directly financially encouraged to debug people, I infer that CFAR employees were, since instructing people was part of their job description.

Other issues

MIRI did have less time pressure imposed by the organization itself than Leverage did, despite the deadline implied by the AGI timeline; I had no issues with absurdly over-booked calendars.  I vaguely recall that CFAR employees were overworked especially around workshop times, though I'm pretty uncertain of the details.

Many people's social lives, including mine, were spent mostly "in the community"; much of this time was spent on "debugging" and other psychological work.  Some of my most important friendships at the time, including one with a housemate, were formed largely around a shared interest in psychological self-improvement.  There was, therefore, relatively little work-life separation (which has upsides as well as downsides).

Zoe recounts an experience with having unclear, shifting standards applied, with the fear of ostracism.  Though the details of my experience are quite different, I was definitely afraid of being considered "crazy" and marginalized for having philosophy ideas that were too weird, even though weird philosophy would be necessary to solve the AI alignment problem.  I noticed more people saying I and others were crazy as we were exploring sociological hypotheses that implied large problems with the social landscape we were in (e.g. people thought Ben Hoffman was crazy because of his criticisms of effective altruism). I recall talking to a former CFAR employee who was scapegoated and ousted after failing to appeal to the winning internal coalition; he was obviously quite paranoid and distrustful, and another friend and I agreed that he showed PTSD symptoms [EDIT: I infer scapegoating based on the public reason given being suspicious/insufficient; someone at CFAR points out that this person was paranoid and distrustful while first working at CFAR as well].

Like Zoe, I experienced myself and others being distanced from old family and friends, who didn't understand how high-impact the work we were doing was.  Since leaving the scene, I am more able to talk with normal people (including random strangers), although it's still hard to talk about why I expect the work I do to be high-impact.

An ex-Leverage person I know comments that "one of the things I give Geoff the most credit for is actually ending the group when he realized he had gotten in over his head. That still left people hurt and shocked, but did actually stop a lot of the compounding harm."  (While Geoff is still working on a project called "Leverage", the initial "Leverage 1.0" ended with most of the people leaving.) This is to some degree happening with MIRI and CFAR, with a change in the narrative about the organizations and their plans, although the details are currently less legible than with Leverage.

Conclusion

Perhaps one lesson to take from Zoe's account of Leverage is that spending relatively more time discussing sociology (including anthropology and history), and less time discussing psychology, is more likely to realize benefits while avoiding problems.  Sociology is less inherently subjective and meta than psychology, having intersubjectively measurable properties such as events in human lifetimes and social network graph structures.  My own thinking has certainly gone in this direction since my time at MIRI, to great benefit.  I hope this account I have written helps others to understand the sociology of the rationality community around 2017, and that this understanding helps people to understand other parts of the society they live in.

There are, obviously from what I have written, many correspondences, showing a common pattern for high-ambition ideological groups in the San Francisco Bay Area.  I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who become convinced by their experience to do fake research instead), although I don't know the specifics as well.  EAs generally think that the vast majority of charities are doing low-value and/or fake work.  I also know that San Francisco startup culture produces cult-like structures (and associated mental health symptoms) with regularity.  It seems more productive to, rather than singling out specific parties, think about the social and ecological forces that create and select for the social structures we actually see, which include relatively more and less cult-like structures.  (Of course, to the extent that harm is ongoing due to actions taken by people and organizations, it's important to be able to talk about that.)

It's possible that after reading this, you think this wasn't that bad.  Though I can only speak for myself here, I'm not sad that I went to work at MIRI instead of Google or academia after college.  I don't have reason to believe that either of these environments would have been better for my overall intellectual well-being or my career, despite the mental and social problems that resulted from the path I chose.  Scott Aaronson, for example, blogs about "blank-faced", non-self-explaining authoritarian bureaucrats being a constant problem in academia.  Venkatesh Rao writes about the corporate world, and the picture presented is one of a simulation constantly maintained through improv.

I did grow from the experience in the end.  But I did so in large part by being very painfully aware of the ways in which it was bad.

I hope that those that think this is "not that bad" (perhaps due to knowing object-level specifics around MIRI/CFAR justifying these decisions) consider how they would find out whether the situation with Leverage was "not that bad", in comparison, given the similarity of the phenomena observed in both cases; such an investigation may involve learning object-level specifics about what happened at Leverage.  I hope that people don't scapegoat; in an environment where certain actions are knowingly being taken by multiple parties, singling out certain parties has negative effects on people's willingness to speak without actually producing any justice.

Aside from whether things were "bad" or "not that bad" overall, understanding the specifics of what happened, including harms to specific people, is important for actually accomplishing the ambitious goals these projects are aiming at; there is no reason to expect extreme accomplishments to result without very high levels of epistemic honesty.

Comments (960)

I want to add some context I think is important to this.

Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.

Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don't think he thinks they're worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it's especially galling that they're just as bad).  Since then, he's tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combinat... (read more)

devi

Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC

Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I'll try to explain some context for the record.

In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than concern about malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which in retrospect I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, reinforcing each other's views. I believe (less certain) I also broached the topic with Michael and/or Anna at some point which probably went... (read more)

I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he's "causing psychotic breaks" and "jailbreaking people" through conversation, "that listening too much to Vassar [causes psychosis], predictably") isn't obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.

Yes, I agree with you that all of this is very awkward.

I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.

But we have to admit at least small violations of it even to get the concept of "cult". Not just the sort of weak cults we're discussing here, but even the really strong cults like Heaven's Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven's Gate is bad for them, and leave. When we use the word "cult", we're implicitly agreeing that this doesn't always work, and we're bringing in creepier and less comprehensible ideas like "charisma" and "brainwashing" and "cognitive dissonance".

(and the same thing with the concept of "emotionally abusive relationship")

I don't want to call the Vassarites a cult because I'm sure someone will confront me with a Cult Checklist that they don't meet, but I think that it's not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it's weird that you can get tha... (read more)

It seems to me like in the case of Leverage, working 75 hours per week reduced the time they could have used to use Reason to conclude that they were in a system that was bad for them.

That's very different from someone having a few conversations with Vassar and then adopting a new belief, spending a lot of time reasoning about it alone, the belief being stable without being embedded in a strong environment that makes independent thought hard by keeping people busy.

A cult is by its nature a social institution, not just a meme that someone can pass around in a few conversations.

Viliam
Perhaps the proper word here might be "manipulation" or "bad influence".

I think "mind virus" is fair. Vassar spoke a lot about how the world as it is can't be trusted. I remember that many of the people in his circle spoke, seemingly apropos of nothing, about how bad involuntary commitment is, so that by the time someone was psychotic their relationship with psychiatry and anyone who would want to turn to psychiatry to help them was poisoned. Within the envelope of those beliefs you can keep a lot of other beliefs safe from scrutiny. 

ChristianKl
The thing with "bad influence" is that it's pretty value-laden. In a religious town, the biology teacher who tells the children about evolution, and explains how it makes sense that our history goes back a lot further than a few thousand years, is reasonably described as a bad influence by the parents. The biology teacher gets the children to doubt the religious authorities. Those children can then be a bad influence on others by getting them to doubt authorities too. In a similar way, Vassar gets people to question other authorities and social conventions, and those ideas can then be passed on. Vassar speaks about things like Moral Mazes. Memes like that make people distrust institutions; they are the kind of bad influence that can get people to quit their jobs. Talking about the biology teacher as if they intend to start an evolution cult feels a bit misleading.

It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.

Let's consider a disjunction: 1: There isn't a big effect here, 2: There is a big effect here.

In case 1:

  • It might make sense to discourage people from talking too much about "charisma", "auras", "mental objects", etc, since they're pretty fake, really not the primary factors to think about when modeling society.
  • The main problem with the relevant discussions at Leverage is that they're making grandiose claims of mind powers and justifying e.g. isolating people on the basis of these, not actual mental influence.
  • The case made against Michael, that he can "cause psychotic breaks" by talking with people sometimes (or, in the case of Eric B, by talking sometimes with someone who is talking sometimes with the person in question), has no merit. People are making up grandiose claims about Michael to justify scapegoating him, it's basically a witch hunt. We should have a much more moderated, holistic picture where ther
... (read more)

I agree I'm being somewhat inconsistent; I'd rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I'm trying to figure out what went on in these cases in more detail and will probably want to ask you a lot of questions by email if you're open to that.

jessicata
Yes, I'd be open to answering email questions.

This misses the fact that people’s ability to negatively influence others might vary very widely, making it so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.

TekhneMakre
If it's reasonable to worry about the .01%, it's reasonable to ask how the ability varies. There's some reason, some mechanism. This is worth discussing even if it's hard to give more than partial, metaphorical hypotheses. And if there are these .01% of very strong influencers, that is still an exception to strong liberal individualism.
jessicata
That would still admit some people at Leverage having significant mental influence, especially if they got into weird mental tech that almost no one gets into.

A lot of the weirdness is downstream of them encountering "body workers" who are extremely good at e.g. causing mental effects by touching people's back a little; these people could easily be extremal, and Leverage people learned from them. I've had sessions with some post-Leverage people where it seemed like really weird mental effects were happening in some implicit channel (like, I feel a thing poking at the left side of my consciousness and the person says, "oh, I just did an implicit channel thing, maybe you felt that"). I've never experienced effects like that (without drugs, and not obviously on drugs either, though the comparison is harder) with others, including with Michael, Anna, or normal therapists. This could be "placebo" in a way that makes it ultimately not that important, but still, if we're admitting that 0.01% of people have these mental effects then it seems somewhat likely that this includes some Leverage people.

Also, if the 0.01% is disproportionately influential (which, duh), then getting more detailed models than "charisma" is still quite important.
[comment deleted]

One important implication of "cults are possible" is that many normal-seeming people are already too crazy to function as free citizens of a republic.

In other words, from a liberal perspective, someone who can't make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren't competent to make their own life decisions. They're already not free, but in the grip of whatever attractor they found first.

Personally I bite the bullet and admit that I'm not living in a society adequate to support liberal democracy, but instead something more like what Plato's Republic would call tyranny. This is very confusing because I was brought up to believe that I lived in a liberal democracy. I'd very much like to, someday.

I think there are less extreme positions here. Like "competent adults can make their own decisions, but they can't if they become too addicted to certain substances." I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.  

Benquo
I think the principled liberal perspective on this is Bryan Caplan's: drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so. I don't think that many people are "fundamentally incapable of being free." But it seems like some people here are expressing grievances that imply that either they themselves or some others are, right now, not ready for freedom of association. The claim that someone is dangerous enough that they should be kept away from "vulnerable people" is a declaration of intent to deny "vulnerable people" freedom of association for their own good. (No one here thinks that a group of people who don't like Michael Vassar shouldn't be allowed to get together without him.)

drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.

I really don't think this is an accurate description of what is going on in people's mind when they are experiencing drug dependencies. I've spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went through great lengths trying to tie himself to various masts to stop, and generally expressed a strong preference for somehow being able to self-modify the addiction away, but ultimately failed to do so. 

Of course, things might be different for different people, but at least in the one case where I have a very large amount of specific data, this seems like it's a pretty bad model of people's preferences. Based on the private notebooks of his that I found after his death, this also seemed to be his position in purely introspective contexts without obvious social desirability biases. My sense is that he would have strongly preferred someone to somehow take control away from him, in this specific domain of his life.

Benquo
This seems like some evidence that the principled liberal position is false - specifically, that it is not self-ratifying. If you ask some people what their preferences are, they will express a preference for some of their preferences to be thwarted, for their own good. Contractarianism can handle this sort of case, but liberal democracy with inalienable rights cannot, and while liberalism is a political philosophy, contractarianism is just a policy proposal, with no theory of citizenship or education.
NancyLebovitz
https://en.wikipedia.org/wiki/Olivier_Ameisen A sidetrack, but a French surgeon found that Baclofen (a muscle relaxant) cured his alcoholism by curing the craving. He was surprised to find that it cured compulsive spending when he didn't even realize he had a problem. He had a hard time raising money for an official experiment, and it came out inconclusive, and he died before the research got any further.  
Jayson_Virissimo
This is more-or-less Aristotle's defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).

Aristotle seems (though he's vague on this) to be thinking in terms of fundamental attributes, while I'm thinking in terms of present capacity, which can be reduced by external interventions such as schooling.

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

*As far as I know I didn't know any such people before 2020; it's very easy for members of the educated class to mistake our bubble for statistical normality.

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

This is very interesting to me! I'd like to hear more about how the two groups' behaviors look different, and also your thoughts on what's the difference that makes the difference: what are the pieces of "being brought up to go to college" that lead to one class of reactions?

I have talked to Vassar; while he has a lot of "explicit control over conversations", which could be called charisma, I'd hypothesize that the fallout is actually from his ideas (the charisma/intelligence making him able to credibly argue for them).

My hypothesis is the following:  I've met a lot of rationalists + adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of a lot of these people's identity ("I'm an EA person, thus I'm a good person doing important work"). Two anecdotes to illustrate this:
- I recently argued with a committed EA person. Eventually, I started feeling almost bad about arguing (even though we're both self-declared rationalists!) because I realised that my line of reasoning questioned his entire life. His identity was built deeply on EA; his job was selected to maximize money to give to charity.
- I had a conversation with a few unemployed rationalist computer scientists. I suggested we might start a company together. One response I got: "Only if it works on the alignment problem, everything else is irrelevant to me".

Vassar very persuasively argues against EA and work done at MIRI/CFAR... (read more)

mic

What are your or Vassar's arguments against EA or AI alignment? This is only tangential to your point, but if EA and AI alignment are not important, I'd like to know about it.

The general argument is that EAs are not really doing what they say they do. One example from Vassar would be that when it comes to COVID-19, there seems to have been relatively little effective work by EAs. In contrast, Vassar considered giving prisoners access to personal protective equipment the most important thing, and organized effectively for that to happen.

EAs created in EA Global an environment where someone who wrote a good paper warning about the risks of gain-of-function research doesn't address that directly, but only talks around it in order to focus on more meta-issues. Instead of having conflicts with people doing gain-of-function research, the EA community mostly ignored its problems and funded work that's in less conflict with the establishment. There's nearly no interest in learning from those errors in the EA community, and people would rather avoid conflicts.

If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage-related information.

AI alignment is important but just because one "works on AI risk" doesn't mean that the work actually decreases AI risk. Tying your personal identity to being someone who works to d... (read more)

NancyLebovitz
Did Vassar argue that existing EA organizations weren't doing the work they said they were doing, or that EA as such was a bad idea? Or maybe that it was too hard to get organizations to do it?

He argued

(a) EA orgs aren't doing what they say they're doing (e.g. cost effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it's hard to get organizations to do what they say they do

(b) Utilitarianism isn't a form of ethics, it's still necessary to have principles, as in deontology or two-level consequentialism

(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn't well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved

(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact

If you want, for example, the criticism of GiveWell: Ben Hoffman was employed at GiveWell and had experiences suggesting that the process by which their reports are made has epistemic problems. If you want the details, talk to him.

The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell hired normal corporate people, and the dynamics that the Immoral Mazes sequence describes played themselves out.

Vassar's actions themselves are about doing altruistic things more directly, by looking for the most powerless people who need help and working to help them. In the COVID case he identified prisoners and then worked on making PPE available to them.

You might say his thesis is that "effective" in EA is about adding a management layer for directing interventions, and that management layer has the problems the Immoral Mazes sequence describes. According to Vassar, someone who wants to be altruistic shouldn't delegate their judgments of what's effective, and thus warrants support, to other people.

[comment deleted]
jefftk
Link? I'm not finding it
ChristianKl
https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=zqcynfzfKma6QKMK9

I think what you're pointing to is:

I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting)

I'm getting a bit pedantic, but I wouldn't gloss this as "CEA used legal threats to cover up Leverage related information". Partly because the original bit is vague, but also because "cover up" implies that the goal is to hide information.

For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.

In that timeframe, CEA and Leverage were running the Pareto Fellowship together. If you read the common-knowledge post, you find people saying they were misled by CEA, because the announcement didn't mention that the Pareto Fellowship was largely run by Leverage.

On their mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved, saying only: "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers."

That does look to me like hiding information about the cooperation between Leverage and CEA. 

I do think that publicly presuming that people who hide information have something to hide is useful. If there's nothing to hide, I'd love to know what happened back then, or who thinks what happened should stay hidden. At minimum, I think that CEA withholding the information that the people who went to its programs spent their time in what now appears to be a cult is something CEA should be open about on its mistakes page.

Yep, I think CEA has in the past straightforwardly misrepresented things (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to avoid mentioning Leverage's history with Effective Altruism. I think this was bad, and continues to be bad.

My initial thought on reading this was 'this seems obviously bad', and I assumed this was done to shield CEA from reputational risk.

Thinking about it more, I could imagine an epistemic state I'd be much more sympathetic to: 'We suspect Leverage is a dangerous cult, but we don't have enough shareable evidence to make that case convincingly to others, or we aren't sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don't feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can't say anything we expect others to find convincing. So we'll have to just steer clear of the topic for now.'

Still seems better to just not address the subject if you don't want to give a fully accurate account of it. You don't have to give talks on the history of EA!

I think the epistemic state of CEA was some mixture of something pretty close to what you list here and something closer to "Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth".

"Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth"

That has the corollary: "We don't expect EAs to care enough about the truth/being transparent for this to be a huge reputational risk for us."

It does look weird to me that CEA doesn't include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:

Hi CEA,

On https://www.centreforeffectivealtruism.org/our-mistakes I see "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers. We realized during and after the program that senior management did not provide enough oversight of the program. For example, reports by some applicants indicate that the interview process was unprofessional and made them deeply uncomfortable."

Is there a reason that the mistakes page does not mention the involvement of Leverage in the Pareto Fellowship? [1]

Jeff

[1] https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=znudKxFhvQxgDMv7k

7jefftk
They wrote back, linking me to https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QcdhTjqGcSc99sNN ("we're working on a couple of updates to the mistakes page, including about this")

Yep, I think the situation is closer to what Jeff describes here, though honestly I don't actually know, since people tend to get cagey when the topic comes up.

9ChristianKl
I talked with Geoff, and according to him there's no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.

Huh, that's surprising, if by that he means "no contracts between anyone currently at Leverage and anyone at CEA". I currently still think that's the case, though I also don't see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees who used to work at CEA and current CEA employees?

4ChristianKl
What he said is compatible with ex-CEA people still being bound by the NDAs they signed while they were at CEA. I don't think anything happened that releases ex-CEA people from those NDAs. The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if it had an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn't unilaterally lift the settlement contract. Public pressure on CEA seems to be necessary to get the information out in the open.

Talking with Vassar feels very intellectually alive; maybe like a high density of insight porn. I imagine that the people Ben talks about wouldn't get much enjoyment out of insight porn either, so that emotional impact isn't there.

There's probably also an element that plenty of people who can normally follow an intellectual conversation can't keep up in a conversation with Vassar, and come away from one with a bunch of different ideas that lack order in their minds. I imagine that sometimes there's an idea overload that prevents people from critically thinking through some of the ideas.

If you have a person who hasn't gone to college, they are used to encountering people who make intellectual arguments that go over their head, and they have a way of dealing with that.

From meeting Vassar, I don't feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff). 

This seems mostly right; they're more likely to think "I don't understand a lot of these ideas, I'll have to think about this for a while" or "I don't understand a lot of these ideas, he must be pretty smart and that's kinda cool" than to feel invalidated by this and try to submit to him in lieu of understanding.

The people I know who weren't brought up to go to college have more experience navigating concrete threats and dangers, which can't be avoided through conformity, since the system isn't set up to take care of people like them. They have to know what's going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.

In general this means that they're much more comfortable with the kind of confrontation Vassar engages in, than high-class people are.

7Hazard
This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.
4NancyLebovitz
This is interesting to me because I was brought up to go to college, but I didn't take it seriously (plausibly from depression or somesuch), and I definitely think of him as a guy with an interesting perspective. Okay, a smart guy with an interesting perspective, but not a god. It had never occurred to me before that maybe people who were brought up to assume they were going to college might generally have a different take on the world than I do.

I talked and corresponded with Michael a lot during 2017–2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don't think you're being fair.

"jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself)

I'm confident this is only a Ziz-ism: I don't recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from him.

again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs [...] describing how it was a Vassar-related phenomenon

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea giv... (read more)

I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop, and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their "it's correct to be freaking out about learning your entire society is corrupt and gaslighting" shtick.

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)

[...]

Michael is a charismatic guy who has strong views and argues forcefully for them. That's not the same thing as having mysterious mind powers to "make people paranoid" or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I'm sure he'd be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.

I more or les... (read more)

Thing 0:

Scott.

Before I actually make my point I want to wax poetic about reading SlateStarCodex.

In some post whose name I can't remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, "Huh, I need to only be convinced by true things."

This is extremely relatable to my lived experience. I am a stereotypical "high-functioning autist." I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.

To the degree that "rationality styles" are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.

Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.

Thing 1:

Imagine two world models:

  1. Some people want to act as perfect nth-order cooperating utilitarians, but can't because of human limitations. They are extremely scrupulous, so they feel
... (read more)

I enjoyed reading this. Thanks for writing it. 

One note though: I think this post (along with most of the comments) isn't treating Vassar as a fully real person with real choices. It (also) treats him like some kind of 'force in the world' or 'immovable object'. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I'm glad you yourself were able to "With basic rationality skills, avoid contracting the Vassar, then [heal] the damage to [your] social life." 

But I am worried about people treating him like a force of nature that you make contact with and then just have to deal with whatever the effects of that are. 

I think it's pretty immoral to de-stabilize people to the point of maybe-insanity, and I think he should try to avoid it, to whatever extent that's in his capacity, which I think is a lot. 

"Vassar's ideas are important and many are correct. It just happens to be that he might drive you insane."

I might think this was a worthwhile tradeoff if I actually believed the 'maybe insane' part was unavoidable, and I do not believ... (read more)

I think that treating Michael Vassar as an unchangeable force of nature is the right way to go—for the purposes of discussions precisely like this one. Why? Because even if Michael himself can (and chooses to) alter his behavior in some way (regardless of whether this is good or bad or indifferent), nevertheless there will be other Michael Vassars out there—and the question remains, of how one is to deal with arbitrary Michael Vassars one encounters in life.

In other words, what we’ve got here is a vulnerability (in the security sense of the word). One day you find that you’re being exploited by a clever hacker (we decline to specify whether he is a black hat or white hat or what). The one comes to you and recommends a patch. But you say—why should we treat this specific attack as some sort of unchangeable force of nature? Rather we should contact this hacker and persuade him to cease and desist. But the vulnerability is still there…

You can either have a discussion that focuses on an individual, in which case it makes sense to model them with agency, or you can have more general threat models.

If, however, you mix the two, you are likely to get confused in both directions: you will project ideas from your threat model onto the person, and you will take random aspects of the individual into your threat model that aren't typical of the threat.

I am not sure how much 'not destabilize people' is an option that is available to Vassar.

My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.

Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of "you are expected to behave better for status reasons look at my smug language"-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, "Vassar, just only say things that you think will have a positive effect on the person." 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.

In the pathological case of Vassar, I think the naive strategy of "just say the thing you think is true" is still correct.

Menta... (read more)

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication.

My suggestion for Vassar is not to 'try not to destabilize people' exactly. 

It's to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he's interacted with about what it's like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you're speaking to as though they're a full, real human—not a pair of ears to be talked into or a mind to insert things into). When he talks theory, I often get the sense he is talking "at" rather than talking "to" or "with". The listener practically disappears or is reduced to a question-generating machine that gets him to keep saying things. 

I expect this process could take a long time / run into issues along the way, and so I don't think it should be rushed. Not expecting a quick change. But claiming there's no available option seems wildly wrong to me. People aren't fixed points and generally shouldn't be treated as such. 

This is actually very fair. I think he does kind of insert information into people.

I never really felt like a question-generating machine, more like a pupil at the foot of a teacher who is trying to integrate the teacher's information.

I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.

Thanks!

6ChristianKl
I think I interacted with Vassar four times in person, so I might get some things wrong here, but I think that he's pretty dissociated from his body, which closes a normal channel of perceiving impacts on the person he's speaking with. This looks to me like some bodily process generating stress/pain and being a cause of dissociation. It might take a bodyworker to fix whatever goes on there and create the conditions for perceiving the other person better. Beyond that, Circling might be an environment in which one can learn to interact with others as humans who have their own feelings, but that would require opening up to the Circling frame.
4ChristianKl
You are making a false dichotomy here. You are assuming that everything that has a negative effect on a person is manipulation. As Vassar himself sees the situation, people believe a lot of lies for reasons of fitting in socially in society. From that perspective, getting people to stop believing in those lies will make it harder for them to fit socially into society. If you got a Nazi guard at Auschwitz into a state where the moral issue of their job couldn't be dissociated anymore, that's very predictably going to have a negative effect on that prison guard. Vassar's position would be that it would be immoral to avoid talking about the truth about the nature of the guard's job out of a motivation to make life easier for the guard.
3Benquo
I think this line of discussion would be well served by marking a natural boundary in the cluster "crazy." Instead of saying "Vassar can drive people crazy" I'd rather taboo "crazy" and say: Personally I care much more, maybe lexically more, about the upside of minds learning about their situation, than the downside of mimics going into maladaptive death spirals, though it would definitely be better all round if we can manage to cause fewer cases of the latter without compromising the former, much like it's desirable to avoid torturing animals, and it would be desirable for city lights not to interfere with sea turtles' reproductive cycle by resembling the moon too much.
6pjen
My problem with this comment is it takes people who:

* can't verbally reason without talking things through (and are currently stuck in a passive role in a conversation)

and who:

* respond to a failure of their verbal reasoning
* under circumstances of importance (in this case moral importance)
* and conditions of stress, induced by
  * trying to concentrate while in a passive role
  * failing to concentrate under conditions of high moral importance

by simply doing as they are told, and it assumes they are incapable of reasoning under any circumstances. It also then denies people who are incapable of independent reasoning the right to be protected from harm.
5mathenjoyer
EDIT: Ben is correct to say we should taboo "crazy." This is a very uncharitable interpretation (entirely wrong). The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren't as positive utility as they thought. (entirely wrong) I also don't think people interpret Vassar's words as a strategy and implement incoherence. Personally, I interpreted Vassar's words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away. I think the other people just no longer have a coalitional strategy installed and don't know how to function without one. This is what happened to me and why I repeatedly lashed out at others when I perceived them as betraying me, since I no longer automatically perceived them as on my side. I rebuilt my rapport with those people and now have more honest relationships with them. (still endorsed) Beyond this, I think your model is accurate.

The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought.

“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.

And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.

If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of business is to do everything in one’s power to fix that (really quite severe and glaring) bug in one’s psyche—and only then to attempt any substantive projects in the service of world-saving, people-helping, or otherwise doing anything really consequential.

5mathenjoyer
Thank you for echoing common sense!
-1Benquo
What is psychological collapse? For those who can afford it, taking it easy for a while is a rational response to noticing deep confusion; continuing to take actions based on a discredited model would be less appealing, and people often become depressed when they keep confusedly trying to do things that they don't want to do. Are you trying to point to something else? What specific claims turned out to be false? What counterevidence did you encounter?

Specific claim: the only nontrivial obstacle in front of us is not being evil

This is false. Object-level stuff is actually very hard.

Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)

This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person's language. Ideas and institutions have the agency; they wear people like skin.

Specific claim: this is how to take over New York.

Didn't work.

4Benquo
I think this needs to be broken up into two claims:

1. If we execute strategy X, we'll take over New York.
2. We can use straightforward persuasion (e.g. appeals to reason, profit motive) to get an adequate set of people to implement strategy X.

Claim 2 has been falsified decisively. The plan to recruit candidates by appealing to people's explicit incentives failed, there wasn't a good alternative, and as a result there wasn't a chance to test the other parts of the plan (claim 1). That's important info and worth learning from in a principled way. Definitely I won't try that sort of thing again in the same way, and it seems like I should increase my credence both that plans requiring people to respond to economic incentives by taking initiative to play against type will fail, and that I personally might be able to profit a lot by taking initiative to play against type, or by investing in people who seem like they're already doing this, as long as I don't have to count on other unknown people acting similarly in the future.

But I find the tendency to respond to novel multi-step plans that would require someone to take initiative by sitting back and waiting for the plan to fail, and then saying, "see? novel multi-step plans don't work!" extremely annoying. I've been on both sides of that kind of transaction, but if we want anything to work out well, we have to distinguish cases of "we / someone else decided not to try" as a different kind of failure from "we tried and it didn't work out."
3mathenjoyer
This is actually completely fair. So is the other comment.
0Benquo
This seems to be conflating the question of "is it possible to construct a difficult problem?" with the question of "what's the rate-limiting problem?". If you have a specific model for how to make things much better for many people by solving a hard technical problem before making substantial progress on human alignment, I'd very much like to hear the details. If I'm persuaded I'll be interested in figuring out how to help. So far this seems like evidence to the contrary, though, as it doesn't look like you thought you could get help making things better for many people by explaining the opportunity.
8Unreal
To the extent I'm worried about Vassar's character, I am as equally worried about the people around him. It's the people around him who should also take responsibility for his well-being and his moral behavior. That's what friends are for. I'm not putting this all on him. To be clear. 

I think it's a fine way of thinking about mathematical logic, but if you try to think this way about reality, you'll end up with views that make internal sense and are self-reinforcing but don't follow the grain of facts at all. When you hear such views from someone else, it's a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.

The people who actually know their stuff usually come off very different. Their statements are carefully delineated: "this thing about power was true in 10th century Byzantium, but not clear how much of it applies today".

Also, just to comment on this:

It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype.

I think it's somewhat changeable. Even for people like us, there are ways to make our processing more "fuzzy". Deliberately dimming some things, rounding others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; a... (read more)

5mathenjoyer
On the third paragraph: I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.)

Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See "Safety in numbers" by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe environment. Am getting to that.)

I have no problem enjoying warm fuzzies. I had problems with them after first talking with Vassar, but I re-equilibrated. Warm fuzzies are good, helpful, and worth purchasing. I am not a perfect utilitarian. However, it is important that when you buy fuzzies instead of utils, as Scott would put it, you know what you are buying. Many will sell fuzzies and market them as utils. I sometimes round things; that is not inherently bad.

Dimming things is not good. I like being alive. From a functionalist perspective, the degree to which I am aroused (with respect to the senses and the mind) is the degree to which I am a real, sapient being. Dimming is sometimes terminally valuable as relaxation, and instrumentally valuable as sleep, but if you believe in Life, Freedom, Prosperity And Other Nice Transhumanist Things, then dimming being bad in most contexts follows as a natural consequence.

On the second paragraph: This is because people compartmentalize. After studying a thing for a long time, people will grasp deep nonverbal truths about that thing. Sometimes they are wrong; without the legibility of elucidation, false ideas gained this way are difficult to destroy. Sometimes they are right! Mathematical folklore is an example: it is literally metis among mathematicians. Highly knowledgeable and epistemically skilled people delineate. Sometimes the natural delineation is "this is tru

I mostly see where you're coming from, but I think the reasonable answer to "point 1 or 2 is a false dichotomy" is this classic, uh, tumblr quote (from memory):

"People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail."

This goes especially if the thing that comes after "just" is "just precommit."

My expectation is that interaction with Vassar is that the people who espouse 1 or 2 expect that the people interacting are incapable of precommitting to the required strength. I don't know if they're correct, but I'd expect them to be, because I think people are just really bad at precommitting in general. If precommitting was easy, I think we'd all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.

This is a very good criticism! I think you are right about people not being able to "just."

My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or criticize popular institutions. Perhaps it is the case that people genuinely tried to make a strategy but automatically rejected my toy strategies as false. I do not think it is, based on "vibe" and on the arguments that people are making, such as "argument from cult."

I think you are actually completely correct about those strategies being bad. Instead, I failed to point out that I expect a certain level of mental robustness-to-nonsanity from people literally called "rationalists." This comes off as sarcastic but I mean it completely literally.

Precommitting isn't easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as "five minutes of actually trying" and alkjash's "Hammertime." Humans have a small component of behavior that is agentic, and a huge component of behavior that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social... (read more)

2Hazard
I found many things you shared useful. I also expect that because of your style/tone you'll get downvoted :(
-46xtz05qw

Michael is very good at spotting people right on the verge of psychosis

...and then pushing them.

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate. [EDIT: Or not. Zack makes a fair point.] He is not even hiding it, if you listen carefully.

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate.

Because high-psychoticism people are the ones who are most likely to understand what he has to say.

This isn't nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn't like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky's writing (like spending $28,000 distributing copies of Harry Potter and the Methods to math contest winners): why, they're preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!

I mean, technically, yes. But in Yudkowsky and friends' worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they're going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?

There's a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don't have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.

If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you'd object to that targeting strategy even though they'd be able to make an argument structurally the same as your comment.

Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it's even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky and others, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this already, e.g. people who think machine learning / deep learning are important.

In general this seems really expected and unobjectionable? "If I'm trying to convince people of X, I'm going to find people who already believe a lot of the prerequisites for understanding X and who might already assign X a non-negligible prior." This is how pretty much all systems of ideas spread; I have trouble thinking of a counterexample.

I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?

If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn't care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.

The way I can make sense of seeking out high-psychoticism people being morally equivalent to seeking out high-IQ systematizers is if I drain any normative valence from "psychotic" and imagine there is a spectrum from autistic to psychotic. On this spectrum, the extreme autistic is exclusively focused on exactly one thing at a time, and is incapable of cognition that has to take into account context, especially context they aren't already primed to have in mind; the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or the other could be very helpful in different contexts.

See also: indexicality.

On the other hand, back in my reflective beliefs, I think psychosis is a much scarier failure mode than "autism," on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, a supporting infrastructure of some kind for processing the psychotic state without losing the plot (social or cultural would work, but whatever).

jessicata
I wouldn't find it objectionable. I'm not really sure what morally relevant distinction is being pointed at here; apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.

Well, I don't think it's obviously objectionable, and I'd have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like "we'd all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we're talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren't generally either truth-tracking or good for them" seems plausible to me. But I think it's obviously not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.

dxu
I don't have well-formed thoughts on this topic, but one factor that seems relevant to me has a core that might be verbalized as "susceptibility to invalid methods of persuasion", which seems notably higher in the case of people with high "apocalypticism" than people with the other attributes described in the grandparent. (A similar argument applies in the case of people with high "psychoticism".)
jessicata
That might be relevant in some cases, but it seems unobjectionable in both the psychoticism case and the apocalypse case. I would predict that LW people cluster together on personality measurements like OCEAN and Eysenck; it's by default easier to write for people of a personality similar to your own. Also, people notice high rates of Asperger's-like characteristics around here, which are correlated with Jewish ethnicity and transgenderism (both also frequent around here).
Unreal
It might not be nefarious. But it might also not be very wise. I question Vassar's wisdom, if what you say about his motives is indeed true. I question whether he has the appropriate feedback loops in place to ensure he is not exacerbating harms. I question whether he's appropriately seeking that feedback rather than turning away from the kinds he finds overwhelming, distasteful, unpleasant, or doesn't know how to integrate. I question how much work he's done on his own shadow, and whether it isn't inadvertently acting out in ways that are harmful. I question whether he has good friends he trusts who would let him know, bluntly, when he is out of line with integrity and ethics, or whether he has "shadow stuff" that he's not seeing. I don't think this needs to be hashed out in public, but I hope people working closer to him on these things have the wisdom and integrity to do the right thing.
ChristianKl
Rumor has it that https://www.sfgate.com/news/bayarea/article/Man-Gets-5-Years-For-Attacking-Woman-Outside-13796663.php is due to drugs that Vassar recommended. In the OP, that case gets blamed on CFAR's environment without any mention of that part. When talking about whether or not CFAR is responsible for that story, factors like that seem to me to matter quite a bit. I'd love for anyone who's nearer to confirm/deny the rumor and fill in missing pieces.

As I mentioned elsewhere, I was heavily involved in that incident for a couple of months after it happened, and I looked for causes that could help with the defense. AFAICT, no drugs were taken in the days leading up to the mental health episode or arrest (or the people who took drugs with him lied about it).