I appreciate Zoe Curzi's revelations of her experience with Leverage.  I know how hard it is to speak up when no or few others do, and when people are trying to keep things under wraps.

I haven't posted much publicly about my experiences working as a researcher at MIRI (2015-2017) or around CFAR events, to a large degree because I've been afraid.  Now that Zoe has posted about her experience, I find it easier to do so, especially after the post was generally well-received by LessWrong.

I felt moved to write this, not just because of Zoe's post, but also because of Aella's commentary:

I've found established rationalist communities to have excellent norms that prevent stuff like what happened at Leverage. The times where it gets weird is typically when you mix in a strong leader + splintered, isolated subgroup + new norms. (this is not the first time)

This seemed to me to be definitely false, upon reading it.  Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).

I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases.  I also caution against blame in general, in situations like these, where many people (including me!) contributed to the problem, and have kept quiet for various reasons.  With good reason, it is standard for truth and reconciliation events to focus on restorative rather than retributive justice, and include the possibility of forgiveness for past crimes.

As a roadmap for the rest of the post, I'll start by describing some background, describe some trauma symptoms and mental health issues I and others have experienced, and describe the actual situations that these mental events were influenced by and "about" to a significant extent.

Background: choosing a career

After I finished my CS/AI Master's degree at Stanford, I faced a choice of what to do next.  I had a job offer at Google for machine learning research and a job offer at MIRI for AI alignment research.  I had also previously considered pursuing a PhD at Stanford or Berkeley; I'd already done undergrad research at CoCoLab, so this could have easily been a natural transition.

I'd decided against a PhD on the basis that research in industry was a better opportunity to work on important problems that impact the world; since then I've gotten more information from insiders that academia is a "trash fire" (not my quote!), so I don't regret this decision.

I was faced with a decision between Google and MIRI.  I knew that at MIRI I'd be taking a pay cut.  On the other hand, I'd be working on AI alignment, an important problem for the future of the world, probably significantly more important than whatever I'd be working on at Google.  And I'd get an opportunity to work with smart, ambitious people, who were structuring their communication protocols and life decisions around the content of the LessWrong Sequences.

These Sequences contained many ideas that I had developed or discovered independently, such as functionalist theory of mind, the idea that Solomonoff Induction was a formalization of inductive epistemology, and the idea that one-boxing in Newcomb's problem is more rational than two-boxing.  The scene attracted thoughtful people who cared about getting the right answer on abstract problems like this, making for very interesting conversations.

Research at MIRI was an extension of such interesting conversations to rigorous mathematical formalism, making it very fun (at least for a time).  Some of the best research I've done was at MIRI (reflective oracles, logical induction, others).  I met many of my current friends through LessWrong, MIRI, and the broader LessWrong Berkeley community.

When I began at MIRI (in 2015), there were ambient concerns that it was a "cult"; this was a set of people with a non-mainstream ideology that claimed that the future of the world depended on a small set of people that included many of them.  These concerns didn't seem especially important to me at the time.  So what if the ideology is non-mainstream as long as it's reasonable?  And if the most reasonable set of ideas implies high impact from a rare form of research, so be it; that's been the case at times in history.

(Most of the rest of this post will be negative-valenced, like Zoe's post; I wanted to put some things I liked about MIRI and the Berkeley community up-front.  I will be noting parts of Zoe's post and comparing them to my own experience, which I hope helps to illuminate common patterns; it really helps to have an existing different account to prompt my memory of what happened.)

Trauma symptoms and other mental health problems

Back to Zoe's post.  I want to disagree with a frame that says that the main thing that's bad was that Leverage (or MIRI/CFAR) was a "cult".  This makes it seem like what happened at Leverage is much worse than what could happen at a normal company.  But, having read Moral Mazes and talked to people with normal corporate experience (especially in management), I find that "normal" corporations are often quite harmful to the psychological health of their employees, e.g. causing them to have complex PTSD symptoms, to see the world in zero-sum terms more often, and to have their preferences become more incoherent.  Normal startups are commonly called "cults", with good reason.  Overall, there are both benefits and harms of high-demand ideological communities ("cults") compared to more normal occupations and social groups, and the specifics matter more than the general class of something being "normal" or a "cult", although the general class affects the structure of the specifics.

Zoe begins by listing a number of trauma symptoms she experienced.  I have, personally, experienced most of the symptoms on her list of cult after-effects in 2017, even before I had a psychotic break.

The psychotic break was in October 2017, and involved psychedelic use (as part of trying to "fix" multiple deep mental problems at once, which was, empirically, overly ambitious); although people around me to some degree tried to help me, this "treatment" mostly made the problem worse, so I was placed in 1-2 weeks of intensive psychiatric hospitalization, followed by 2 weeks in a halfway house.  This was followed by severe depression lasting months, and less severe depression from then on, which I still haven't fully recovered from.  I had PTSD symptoms after the event and am still recovering.

During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation.  I was catatonic for multiple days, afraid that by moving I would cause harm to those around me.  This is in line with scrupulosity-related post-cult symptoms.

Talking about this is to some degree difficult because it's normal to think of this as "really bad".  Although it was exceptionally emotionally painful and confusing, the experience taught me a lot, very rapidly; I gained and partially stabilized a new perspective on society and my relation to it, and to my own mind.  I have much more ability to relate to normal people now, who are, for the most part, also traumatized.

(Yes, I realize how strange it is that I was more able to relate to normal people by occupying an extremely weird mental state where I thought I was destroying the world and was ashamed and suicidal regarding this; such is the state of normal Americans, apparently, in a time when suicidal music is extremely popular among youth.)

Like Zoe, I have experienced enormous post-traumatic growth.  To quote a song, "I am Woman": "Yes, I'm wise, but it's wisdom born of pain.  I guess I've paid the price, but look how much I've gained."

While most people around MIRI and CFAR didn't have psychotic breaks, there were at least 3 other cases of psychiatric institutionalizations of people in the social circle immediately surrounding MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis.  There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events including a relatively exclusive AI-focused one.

I heard that the paranoid person in question was concerned about a demon inside him, implanted by another person, trying to escape.  (I knew the other person in question, and their own account was consistent with attempting to implant mental subprocesses in others, although I don't believe they intended anything like this particular effect).  My own actions while psychotic later that year were, though physically nonviolent, highly morally confused; I felt that I was acting very badly and "steering in the wrong direction", e.g. in controlling the minds of people around me or subtly threatening them, and was seeing signs that I was harming people around me, although none of this was legible enough to seem objectively likely after the fact.  I was also extremely paranoid about the social environment, being unable to sleep normally due to fear.

There are even cases of suicide in the Berkeley rationality community associated with scrupulosity and mental self-improvement (specifically, Maia Pasek/SquirrelInHell, and Jay Winterford/Fluttershy, both of whom were long-time LessWrong posters; Jay wrote an essay about suicidality, evil, domination, and Roko's basilisk months before the suicide itself).  Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz.  (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)

The cases discussed are not always of MIRI/CFAR employees, so they're hard to attribute to the organizations themselves, even if they were clearly in the same or a nearby social circle.  Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization.  Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR.  (This diffusion of responsibility, of course, doesn't help when there are actual crises, mental health or otherwise.)

Obviously, for every case of poor mental health that "blows up" and is noted, there are many cases that aren't.  Many people around MIRI/CFAR and Leverage, like Zoe, have trauma symptoms (including "cult after-effect symptoms") that aren't known about publicly until the person speaks up.

Why do so few speak publicly, and after so long?

Zoe discusses why she hadn't gone public until now.  She first cites fear of response:

Leverage was very good at convincing me that I was wrong, my feelings didn't matter, and that the world was something other than what I thought it was. After leaving, it took me years to reclaim that self-trust.

Clearly, not all cases of people trying to convince each other that they're wrong are abusive; the extra dimension of institutional gaslighting involves people telling you something you have no reason to expect they actually believe, people being defensive and blocking information, giving implausible counter-arguments, and trying to make you doubt your account and agree with their bottom line.

Jennifer Freyd writes about "betrayal blindness", a common problem where people hide from themselves evidence that their institutions have betrayed them.  I experienced this around MIRI/CFAR.

Some background on AI timelines: At the Asilomar Beneficial AI conference, in early 2017 (after AlphaGo was demonstrated in late 2016), I remember another attendee commenting on a "short timelines bug" going around.  Apparently a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years.

This trend in belief included MIRI/CFAR leadership; one person commented that he noticed his timelines trending only towards getting shorter, and decided to update all at once.  I've written about AI timelines in relation to political motivations before (long after I actually left MIRI).

Perhaps more important to my subsequent decisions, the AI timelines shortening triggered an acceleration of social dynamics.  MIRI became very secretive about research.  Many researchers were working on secret projects, and I learned almost nothing about these.  I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact.  Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.

I had disagreements with the party line, such as on when human-level AGI was likely to be developed and about security policies around AI, and there was quite a lot of effort to convince me of their position, that AGI was likely coming soon and that I was endangering the world by talking openly about AI in the abstract (not even about specific new AI algorithms). Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness [EDIT: Eliezer himself and Sequences-type thinking, of course, would aggressively disagree with the epistemic methodology advocated by this person].  I experienced a high degree of scrupulosity about writing anything even somewhat critical of the community and institutions (e.g. this post).  I saw evidence of bad faith around me, but it was hard to reject the frame for many months; I continued to worry about whether I was destroying everything by going down certain mental paths and not giving the party line the benefit of the doubt, despite its increasing absurdity.

Like Zoe, I was definitely worried about fear of response.  I had paranoid fantasies about a MIRI executive assassinating me.  The decision theory research I had done came to life, as I thought about the game theory of submitting to a threat of a gun, in relation to how different decision theories respond to extortion.

This imagination, though extreme (and definitely reflective of a cognitive error), was to some degree reinforced by the social environment.  I mentioned the possibility of whistle-blowing on MIRI to someone I knew, who responded that I should consider talking with Chelsea Manning, a whistleblower who is under high threat.  There was quite a lot of paranoia at the time, both among the "establishment" (who feared being excluded or blamed) and "dissidents" (who feared retaliation by institutional actors).  (I would, if asked to take bets, have bet strongly against actual assassination, but I did fear other responses.)

More recently (in 2019), multiple masked protesters at a CFAR event (handing out pamphlets critical of MIRI and CFAR) had a SWAT team called on them (by camp administrators, not CFAR people, although a CFAR executive had previously called the police about this group); they were arrested and are now facing the possibility of long jail time.  While this group of people (Ziz and some friends/associates) chose an unnecessarily risky way to protest, hearing about this made me worry about violently authoritarian responses to whistleblowing, especially since, the first time I heard the story, I was under the impression that a CFAR-adjacent person had called the cops to say the protesters had a gun (which they didn't have).

Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past events secretively.  This matches my experience.

Like Zoe, I care about the people I interacted with during the time of the events (who are, for the most part, colleagues who I learned from), and I don't intend to cause harm to them through writing about these events.

Zoe discusses an unofficial NDA people signed as they left, agreeing not to talk badly of the organization.  While I wasn't pressured to sign an NDA, there were significant security policies discussed at the time (including the one about researchers not asking each other about research).  I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous (the blog post in question would have been similar to the AI forecasting work I did later, here and here; judge for yourself how dangerous this is).  This made it hard to talk about the silencing dynamic; if you don't have the freedom to speak about the institution and limits of freedom of speech, then you don't have freedom of speech.

(Is it a surprise that, after over a year in an environment where I was encouraged to think seriously about the possibility that simple actions such as writing blog posts about AI forecasting could destroy the world, I would develop the belief that I could destroy everything through subtle mental movements that manipulate people?)

Years before, MIRI had a non-disclosure agreement that members were pressured to sign, as part of a legal dispute with Louie Helm.

I was certainly socially discouraged from revealing things that would harm the "brand" of MIRI and CFAR, by executive people.  There was some discussion at the time of the possibility of corruption in EA/rationality institutions (e.g. Ben Hoffman's posts criticizing effective altruism, GiveWell, and the Open Philanthropy Project); a lot of this didn't end up on the Internet due to PR concerns.

Someone who I was collaborating with at the time (Michael Vassar) was commenting on social epistemology and the strengths and weaknesses of various people's epistemology and strategy, including people who were leaders at MIRI/CFAR.  Subsequently, Anna Salamon said that Michael was causing someone else at MIRI to "downvote Eliezer in his head" and that this was bad because it meant that the "community" would not agree about who the leaders were, and would therefore have akrasia issues due to the lack of agreement on a single leader in their head telling them what to do.  (Anna says, years later, that she was concerned about bias in selectively causing downvotes rather than upvotes; however, at the time, based on what was said, I had the impression that the primary concern was about coordination around common leadership rather than bias specifically.)

This seemed culty to me and some friends; it's especially evocative in relation to Julian Jaynes' writing about bronze age cults, which details a psychological model in which idols/gods give people voices in their head telling them what to do.

(As I describe these events in retrospect they seem rather ridiculous, but at the time I was seriously confused about whether I was especially crazy or in-the-wrong, and the leadership was behaving sensibly.  If I were the type of person to trust my own judgment in the face of organizational mind control, I probably wouldn't have been hired in the first place; everything I knew about how to be hired would point towards having little mental resistance to organizational narratives.)

Strange psycho-social-metaphysical hypotheses in a group setting

Zoe gives a list of points showing how "out of control" the situation at Leverage got.  This is consistent with what I've heard from other ex-Leverage people.

The weirdest part of the events recounted is the concern about possibly-demonic mental subprocesses being implanted by other people. As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors ("know-how") and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver's conscious goals. According to IFS-like psychological models, it's common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it's hard to rule out based on physics or psychology.

As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR. These strange experiences are, as far as I can tell, part of a more general social phenomenon around that time period; I recall a tweet commenting that the election of Donald Trump convinced everyone that magic was real.

Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community.  While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible.  (I noted at the time that there might be a sense in which different people have "auras" in a way that is not less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.)

As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.

The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with.  Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.

Alternatively, like me, they can explore these metaphysics while:

  • losing days of sleep
  • becoming increasingly paranoid and anxious
  • feeling delegitimized and gaslit by those around them, unable to communicate their actual thoughts with those around them
  • fearing involuntary psychiatric institutionalization
  • experiencing involuntary psychiatric institutionalization
  • having almost no real mind-to-mind communication during "treatment"
  • learning primarily to comply and to play along with the incoherent, shifting social scene (there were mandatory improv classes)
  • being afraid of others in the institution, including being afraid of sexual assault, which is common in psychiatric hospitals
  • believing the social context to be a "cover up" of things including criminal activity and learning to comply with it, on the basis that one would be unlikely to exit the institution within a reasonable time without doing so

Being able to discuss somewhat wacky experiential hypotheses, like the possibility of people spreading mental subprocesses to each other, in a group setting, and have the concern actually taken seriously as something that could seem true from some perspective (and which is hard to definitively rule out), seems much more conducive to people's mental well-being than refusing to have that discussion, so they struggle with (what they think is) mental subprocess implantation on their own.  Leverage definitely had large problems with these discussions, and perhaps tried to reach more intersubjective agreement about them than was plausible (leading to over-reification, as Zoe points out), but they seem less severe than the problems resulting from refusing to have them, such as psychiatric hospitalization and jail time.

"Psychosis" doesn't have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Liang's work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.

World-saving plans and rarity narratives

Zoe cites the fact that Leverage has a "world-saving plan" (which included taking over the world) and considered Geoff Anders and Leverage to be extremely special, e.g. Geoff being possibly the best philosopher ever:

Within a few months of joining, a supervisor I trusted who had recruited me confided in me privately, “I think there’s good reason to believe Geoff is the best philosopher who’s ever lived, better than Kant. I think his existence on earth right now is an historical event.”

Like Leverage, MIRI had a "world-saving plan".  This is no secret; it's discussed in an Arbital article written by Eliezer Yudkowsky.  Nate Soares frequently talked about how it was necessary to have a "plan" to make the entire future ok, to avert AI risk; this plan would need to "backchain" from a state of no AI risk and may, for example, say that we must create a human emulation using nanotechnology that is designed by a "genie" AI, which does a narrow task rather than taking responsibility for the entire future; this would allow the entire world to be taken over by a small group including the emulated human. [EDIT: See Nate's clarification, the small group doesn't have to be MIRI specifically, and the upload plan is an example of a plan rather than a fixed super-plan.]

I remember taking on more and more mental "responsibility" over time, noting the ways in which people other than me weren't sufficient to solve the AI alignment problem, and I had special skills, so it was uniquely my job to solve the problem.  This ultimately broke down, and I found Ben Hoffman's post on responsibility to resonate (which discusses the issue of control-seeking).

The decision theory of backchaining and taking over the world is somewhat beyond the scope of this post.  There are circumstances where back-chaining is appropriate, and "taking over the world" might be necessary, e.g. if there are existing actors already trying to take over the world and none of them would implement a satisfactory regime.  However, there are obvious problems with multiple actors each attempting to control everything, which are discussed in Ben Hoffman's post.

This connects with what Zoe calls "rarity narratives".  There were definitely rarity narratives around MIRI/CFAR.  Our task was to create an integrated, formal theory of values, decisions, epistemology, self-improvement, etc ("Friendliness theory"), which would help us develop Friendly AI faster than the rest of the world combined was developing AGI (which was, according to leaders, probably in less than 20 years).  It was said that a large part of our advantage in doing this research so fast was that we were "actually trying" and others weren't.  It was stated by multiple people that we wouldn't really have had a chance to save the world without Eliezer Yudkowsky (obviously implying that Eliezer was an extremely historically significant philosopher).

Though I don't remember people saying explicitly that Eliezer Yudkowsky was a better philosopher than Kant, I would guess many would have said so.  No one there, as far as I know, considered Kant worth learning from enough to actually read the Critique of Pure Reason in the course of their research; I only did so years later, and I'm relatively philosophically inclined.  I would guess that MIRI people would consider a different set of philosophers relevant, e.g. would include Turing and Einstein as relevant "philosophers", and I don't have reason to believe they would consider Eliezer more relevant than these, though I'm not certain either way.  (I think Eliezer is a world-historically-significant philosopher, though not as significant as Kant or Turing or Einstein.)

I don't think it's helpful to oppose "rarity narratives" in general.  People need to try to do hard things sometimes, and actually accomplishing those things would make the people in question special, and that isn't a good argument against trying the thing at all.  Intellectual groups with high information integrity, e.g. early quantum mechanics people, can have a large effect on history.  I currently think the intellectual work I do is pretty rare and important, so I have a "rarity narrative" about myself, even though I don't usually promote it.  Of course, a project claiming specialness while displaying low information integrity is, effectively, asking for more control and resources than it can beneficially use.

Rarity narratives can have the effects of making a group of people more insular, concentrating relevance on the group itself rather than learning from other sources (in the past or the present), centering local social dynamics on a small number of special people, and increasing pressure on people to try to do (or pretend to try to do) things beyond their actual abilities; Zoe and I both experienced these effects.

(As a hint to evaluating rarity narratives yourself: compare Great Thinker's public output to what you've learned from other public sources; follow citations and see where Great Thinker might be getting their ideas from; read canonical great philosophy and literature; get a quantitative sense of how much insight is coming from which places throughout spacetime.)

The object-level specifics of each case of world-saving plan matter, of course; I think most readers of this post will be more familiar with MIRI's world-saving plan, especially since Zoe's post provides few object-level details about the content of Leverage's plan.


Rarity ties into debugging; if what makes us different is that we're Actually Trying and the other AI research organizations aren't, then we're making a special psychological claim about ourselves, that we can detect the difference between actually and not-actually trying, and cause our minds to actually try more of the time.

Zoe asks whether debugging was "required"; she notes:

The explicit strategy for world-saving depended upon a team of highly moldable young people self-transforming into Elon Musks.

I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes".  This part of the plan was the same [EDIT: Anna clarifies that, while some people becoming like Elon Musk was some people's plan, there was usually acceptance of people not changing themselves; this might to some degree apply to Leverage as well].

Self-improvement was a major focus around MIRI and CFAR, and at other EA orgs.  It often used standard CFAR techniques, which were taught at workshops.  It was considered important to psychologically self-improve to the point of being able to solve extremely hard, future-lightcone-determining problems.

I don't think these are bad techniques, for the most part.  I think I learned a lot by observing and experimenting on my own mental processes.  (Zoe isn't saying Leverage's techniques are bad either, just that you could get most of them from elsewhere.)

Zoe notes a hierarchical structure where people debugged people they had power over:

Trainers were often doing vulnerable, deep psychological work with people with whom they also lived, made funding decisions about, or relied on for friendship. Sometimes people debugged each other symmetrically, but mostly there was a hierarchical, asymmetric structure of vulnerability; underlings debugged those lower than them on the totem pole, never their superiors, and superiors did debugging with other superiors.

This was also the case around MIRI and CFAR.  A lot of debugging was done by Anna Salamon, head of CFAR at the time; Ben Hoffman noted that "every conversation with Anna turns into an Anna-debugging-you conversation", which resonated with me and others.

There was certainly a power dynamic of "who can debug who"; to be a more advanced psychologist is to be offering therapy to others, being able to point out when they're being "defensive", when one wouldn't accept the same from them.  This power dynamic is also present in normal therapy, although the profession has norms such as only getting therapy from strangers, which change the situation.

How beneficial or harmful this was depends on the details.  I heard that "political" discussions at CFAR (e.g. determining how to resolve conflicts between people at the organization, which could result in people leaving the organization) were mixed with "debugging" conversations, in a way that would make it hard for people to focus primarily on the debugged person's mental progress without imposing pre-determined conclusions.  Unfortunately, when there are few people with high psychological aptitude around, it's hard to avoid "debugging" conversations having political power dynamics, although it's likely that the problem could have been mitigated.

[EDIT: See PhoenixFriend's pseudonymous comment, and replies to it, for more on power dynamics including debugging-related ones at CFAR specifically.]

It was really common for people in the social space, including me, to have a theory about how other people are broken, and how to fix them, by getting them to understand a deep principle that you grasp and they don't.  I still think most people are broken and don't understand deep principles that I or some others do, so I don't think this was wrong, although I would now approach these conversations differently.

A lot of the language from Zoe's post, e.g. "help them become a master", resonates.  There was an atmosphere of psycho-spiritual development, often involving Kegan stages.  There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy [EDIT: see Duncan's comment estimating that the actual amount of interaction between CFAR and MAPLE was pretty low even though there was some overlap in people].

Although I wasn't directly financially encouraged to debug people, I infer that CFAR employees were, since instructing people was part of their job description.

Other issues

MIRI did have less time pressure imposed by the organization itself than Leverage did, despite the deadline implied by the AGI timeline; I had no issues with absurdly over-booked calendars.  I vaguely recall that CFAR employees were overworked especially around workshop times, though I'm pretty uncertain of the details.

Many people's social lives, including mine, were spent mostly "in the community"; much of this time was spent on "debugging" and other psychological work.  Some of my most important friendships at the time, including one with a housemate, were formed largely around a shared interest in psychological self-improvement.  There was, therefore, relatively little work-life separation (which has upsides as well as downsides).

Zoe recounts an experience with having unclear, shifting standards applied, with the fear of ostracism.  Though the details of my experience are quite different, I was definitely afraid of being considered "crazy" and marginalized for having philosophy ideas that were too weird, even though weird philosophy would be necessary to solve the AI alignment problem.  I noticed more people saying I and others were crazy as we were exploring sociological hypotheses that implied large problems with the social landscape we were in (e.g. people thought Ben Hoffman was crazy because of his criticisms of effective altruism). I recall talking to a former CFAR employee who was scapegoated and ousted after failing to appeal to the winning internal coalition; he was obviously quite paranoid and distrustful, and another friend and I agreed that he showed PTSD symptoms [EDIT: I infer scapegoating based on the public reason given being suspicious/insufficient; someone at CFAR points out that this person was paranoid and distrustful while first working at CFAR as well].

Like Zoe, I experienced myself and others being distanced from old family and friends, who didn't understand how high-impact the work we were doing was.  Since leaving the scene, I am more able to talk with normal people (including random strangers), although it's still hard to talk about why I expect the work I do to be high-impact.

An ex-Leverage person I know comments that "one of the things I give Geoff the most credit for is actually ending the group when he realized he had gotten in over his head. That still left people hurt and shocked, but did actually stop a lot of the compounding harm."  (While Geoff is still working on a project called "Leverage", the initial "Leverage 1.0" ended with most of the people leaving.) This is to some degree happening with MIRI and CFAR, with a change in the narrative about the organizations and their plans, although the details are currently less legible than with Leverage.


Perhaps one lesson to take from Zoe's account of Leverage is that spending relatively more time discussing sociology (including anthropology and history), and less time discussing psychology, is more likely to realize benefits while avoiding problems.  Sociology is less inherently subjective and meta than psychology, having intersubjectively measurable properties such as events in human lifetimes and social network graph structures.  My own thinking has certainly gone in this direction since my time at MIRI, to great benefit.  I hope this account I have written helps others to understand the sociology of the rationality community around 2017, and that this understanding helps people to understand other parts of the society they live in.

There are, obviously from what I have written, many correspondences, showing a common pattern for high-ambition ideological groups in the San Francisco Bay Area.  I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who became convinced by their experience to do fake research instead), although I don't know the specifics as well.  EAs generally think that the vast majority of charities are doing low-value and/or fake work.  I also know that San Francisco startup culture produces cult-like structures (and associated mental health symptoms) with regularity.  It seems more productive to, rather than singling out specific parties, think about the social and ecological forces that create and select for the social structures we actually see, which include relatively more and less cult-like structures.  (Of course, to the extent that harm is ongoing due to actions taken by people and organizations, it's important to be able to talk about that.)

It's possible that after reading this, you think this wasn't that bad.  Though I can only speak for myself here, I'm not sad that I went to work at MIRI instead of Google or academia after college.  I don't have reason to believe that either of these environments would have been better for my overall intellectual well-being or my career, despite the mental and social problems that resulted from the path I chose.  Scott Aaronson, for example, blogs about "blank faced" non-self-explaining authoritarian bureaucrats being a constant problem in academia.  Venkatesh Rao writes about the corporate world, and the picture presented is one of a simulation constantly maintained through improv.

I did grow from the experience in the end.  But I did so in large part by being very painfully aware of the ways in which it was bad.

I hope that those that think this is "not that bad" (perhaps due to knowing object-level specifics around MIRI/CFAR justifying these decisions) consider how they would find out whether the situation with Leverage was "not that bad", in comparison, given the similarity of the phenomena observed in both cases; such an investigation may involve learning object-level specifics about what happened at Leverage.  I hope that people don't scapegoat; in an environment where certain actions are knowingly being taken by multiple parties, singling out certain parties has negative effects on people's willingness to speak without actually producing any justice.

Aside from whether things were "bad" or "not that bad" overall, understanding the specifics of what happened, including harms to specific people, is important for actually accomplishing the ambitious goals these projects are aiming at; there is no reason to expect extreme accomplishments to result without very high levels of epistemic honesty.


I want to add some context I think is important to this.

Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.

Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don't think he thinks they're worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it's especially galling that they're just as bad).  Since then, he's tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combinat...

Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC

Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I'll try to explain some context for the record.

In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than concern about malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which in retrospect I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, reinforcing each other's views. I believe (though I'm less certain) I also broached the topic with Michael and/or Anna at some point, which probably went...

I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he's "causing psychotic breaks" and "jailbreaking people" through conversation, "that listening too much to Vassar [causes psychosis], predictably") isn't obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.

Yes, I agree with you that all of this is very awkward.

I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.

But we have to admit at least small violations of it even to get the concept of "cult". Not just the sort of weak cults we're discussing here, but even the really strong cults like Heaven's Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven's Gate is bad for them, and leave. When we use the word "cult", we're implicitly agreeing that this doesn't always work, and we're bringing in creepier and less comprehensible ideas like "charisma" and "brainwashing" and "cognitive dissonance".

(and the same thing with the concept of "emotionally abusive relationship")

I don't want to call the Vassarites a cult because I'm sure someone will confront me with a Cult Checklist that they don't meet, but I think that it's not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it's weird that you can get tha...

It seems to me like in the case of Leverage, working 75 hours per week reduced the time they could have used Reason to conclude that they were in a system that was bad for them.

That's very different from someone having a few conversations with Vassar, then adopting a new belief and spending a lot of time reasoning about it alone, with the belief staying stable without being embedded in a strong environment that makes independent thought hard by keeping people busy.

A cult is by its nature a social institution, not just a meme that someone can pass around by having a few conversations.

Perhaps the proper word here might be "manipulation" or "bad influence".

I think "mind virus" is fair. Vassar spoke a lot about how the world as it is can't be trusted. I remember that many of the people in his circle spoke, seemingly apropos of nothing, about how bad involuntary commitment is, so that by the time someone was psychotic their relationship with psychiatry and anyone who would want to turn to psychiatry to help them was poisoned. Within the envelope of those beliefs you can keep a lot of other beliefs safe from scrutiny. 

The thing with "bad influence" is that it's a pretty value-laden notion. In a religious town, the biology teacher who tells the children about evolution, and explains how it makes sense that our history goes back a lot further than a few thousand years, is reasonably described as a bad influence by the parents.  The biology teacher gets the children to doubt the religious authorities, and those children can then be a bad influence on others by also getting them to doubt authorities.  In a similar way, Vassar gets people to question authorities and social conventions, and those ideas can then be passed on.  Vassar speaks about things like Moral Mazes; memes like that make people distrust institutions, and they are the kind of bad influence that can get people to quit their jobs.  Talking about the biology teacher as if they intend to start an evolution cult feels a bit misleading.

It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.

Let's consider a disjunction: 1: There isn't a big effect here, 2: There is a big effect here.

In case 1:

  • It might make sense to discourage people from talking too much about "charisma", "auras", "mental objects", etc, since they're pretty fake, really not the primary factors to think about when modeling society.
  • The main problem with the relevant discussions at Leverage is that they're making grandiose claims of mind powers and justifying e.g. isolating people on the basis of these, not actual mental influence.
  • The case made against Michael, that he can "cause psychotic breaks" by talking with people sometimes (or, in the case of Eric B, by talking sometimes with someone who is talking sometimes with the person in question), has no merit. People are making up grandiose claims about Michael to justify scapegoating him, it's basically a witch hunt. We should have a much more moderated, holistic picture where ther
...

I agree I'm being somewhat inconsistent, I'd rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I'm trying to figure out what went on in these cases in more details and will probably want to ask you a lot of questions by email if you're open to that.

Yes, I'd be open to answering email questions.

This misses the fact that people’s ability to negatively influence others might vary very widely, making it so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.

If it's reasonable to worry about the .01%, it's reasonable to ask how the ability varies. There's some reason, some mechanism. This is worth discussing even if it's hard to give more than partial, metaphorical hypotheses. And if there are these .01% of very strong influencers, that is still an exception to strong liberal individualism.

That would still admit some people at Leverage having significant mental influence, especially if they got into weird mental tech that almost no one gets into. A lot of the weirdness is downstream of them encountering "body workers" who are extremely good at e.g. causing mental effects by touching people's back a little; these people could easily be extremal, and Leverage people learned from them. I've had sessions with some post-Leverage people where it seemed like really weird mental effects were happening in some implicit channel (like, I feel a thing poking at the left side of my consciousness and the person says, "oh, I just did an implicit channel thing, maybe you felt that"); I've never experienced effects like that (without drugs, and not obviously on drugs either, though the comparison is harder) with others, including with Michael, Anna, or normal therapists. This could be "placebo" in a way that makes it ultimately not that important, but still: if we're admitting that 0.01% of people have these mental effects, then it seems somewhat likely that this includes some Leverage people. Also, if the 0.01% are disproportionately influential (which, duh), then getting more detailed models than "charisma" is still quite important.

One important implication of "cults are possible" is that many normal-seeming people are already too crazy to function as free citizens of a republic.

In other words, from a liberal perspective, someone who can't make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren't competent to make their own life decisions. They're already not free, but in the grip of whatever attractor they found first.

Personally I bite the bullet and admit that I'm not living in a society adequate to support liberal democracy, but instead something more like what Plato's Republic would call tyranny. This is very confusing because I was brought up to believe that I lived in a liberal democracy. I'd very much like to, someday.

I think there are less extreme positions here. Like "competent adults can make their own decisions, but they can't if they become too addicted to certain substances." I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.  

I think the principled liberal perspective on this is Bryan Caplan's: drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so. I don't think that many people are "fundamentally incapable of being free." But it seems like some people here are expressing grievances that imply that either they themselves or some others are, right now, not ready for freedom of association. The claim that someone is dangerous enough that they should be kept away from "vulnerable people" is a declaration of intent to deny "vulnerable people" freedom of association for their own good. (No one here thinks that a group of people who don't like Michael Vassar shouldn't be allowed to get together without him.)

drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.

I really don't think this is an accurate description of what is going on in people's mind when they are experiencing drug dependencies. I've spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went through great lengths trying to tie himself to various masts to stop, and generally expressed a strong preference for somehow being able to self-modify the addiction away, but ultimately failed to do so. 

Of course, things might be different for different people, but at least in the one case where I have a very large amount of specific data, this seems like it's a pretty bad model of people's preferences. Based on the private notebooks of his that I found after his death, this also seemed to be his position in purely introspective contexts without obvious social desirability biases. My sense is that he would have strongly preferred someone to somehow take control away from him, in this specific domain of his life.

This seems like some evidence that the principled liberal position is false - specifically, that it is not self-ratifying. If you ask some people what their preferences are, they will express a preference for some of their preferences to be thwarted, for their own good. Contractarianism can handle this sort of case, but liberal democracy with inalienable rights cannot, and while liberalism is a political philosophy, contractarianism is just a policy proposal, with no theory of citizenship or education.
https://en.wikipedia.org/wiki/Olivier_Ameisen

A sidetrack, but a French surgeon found that Baclofen (a muscle relaxant) cured his alcoholism by curing the craving. He was surprised to find that it also cured compulsive spending, when he hadn't even realized he had a problem. He had a hard time raising money for an official experiment, which came out inconclusive, and he died before the research got any further.

This is more-or-less Aristotle's defense of (some cases of) despotic rule: it benefits those who are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).

Aristotle seems (though he's vague on this) to be thinking in terms of fundamental attributes, while I'm thinking in terms of present capacity, which can be reduced by external interventions such as schooling.

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

*As far as I know I didn't know any such people before 2020; it's very easy for members of the educated class to mistake our bubble for statistical normality.

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

This is very interesting to me! I'd like to hear more about how the two groups' behavior differs, and also your thoughts on what's the difference that makes the difference: what are the pieces of "being brought up to go to college" that lead to one class of reactions?

I have talked to Vassar; while he has a lot of "explicit control over conversations", which could be called charisma, I'd hypothesize that the fallout is actually from his ideas (the charisma/intelligence making him able to credibly argue them).

My hypothesis is the following:  I've met a lot of rationalists + adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of a lot of these people's identity ("I'm an EA person, thus I'm a good person doing important work"). Two anecdotes to illustrate this:
- I'd recently argued against a committed EA person. Eventually, I started feeling almost-bad about arguing (even though we're both self-declared rationalists!) because I'd realised that my line of reasoning questioned his entire life. His identity was built deeply on EA, his job was selected to maximize money to give to charity. 
- I'd had a conversation with a few unemployed rationalist computer scientists. I suggested we might start a company together. One answer I got: "Only if it works on the alignment problem; everything else is irrelevant to me."

Vassar very persuasively argues against EA and work done at MIRI/CFAR...

What are your or Vassar's arguments against EA or AI alignment? This is only tangential to your point, but I'd like to know about it if EA and AI alignment are not important.

The general argument is that EAs are not really doing what they say they do. One example from Vassar would be that when it comes to COVID-19 there seems to have been relatively little effective work by EAs. In contrast, Vassar considered giving prisoners access to personal protective equipment the most important thing, and organized effectively to make that happen.

At EA Global, EAs created an environment where someone who wrote a good paper warning about the risks of gain-of-function research doesn't address that directly but only talks about it indirectly, focusing on more meta-issues. Instead of having conflicts with people doing gain-of-function research, the EA community mostly ignored its problems and funded work that's in less conflict with the establishment. There's nearly no interest in learning from those errors in the EA community; people would rather avoid conflicts.

If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information. 

AI alignment is important, but just because one "works on AI risk" doesn't mean that the work actually decreases AI risk. Tying your personal identity to being someone who works to d...

Did Vassar argue that existing EA organizations weren't doing the work they said they were doing, or that EA as such was a bad idea? Or maybe that it was too hard to get organizations to do it?

He argued

(a) EA orgs aren't doing what they say they're doing (e.g. cost effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it's hard to get organizations to do what they say they do

(b) Utilitarianism isn't a form of ethics, it's still necessary to have principles, as in deontology or two-level consequentialism

(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn't well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved

(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact

If, for example, you want the criticism of GiveWell: Ben Hoffman was employed at GiveWell and had experiences suggesting that the process by which their reports are made has epistemic problems. If you want the details, talk to him.

The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporate people, and the dynamics that the immoral maze sequence describes played themselves out.

Vassar's actions themselves are about doing altruistic things more directly: looking for the most powerless people who need help and working to help them. In the COVID case he identified prisoners and then worked on making PPE available to them.

You might say his thesis is that "effective" in EA is about adding a management layer for directing interventions, and that management layer has the problems the immoral maze sequence describes. According to Vassar, someone who wants to be altruistic shouldn't delegate their judgment of what's effective, and thus warrants support, to other people.

Link? I'm not finding it

I think what you're pointing to is:

I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting)

I'm getting a bit pedantic, but I wouldn't gloss this as "CEA used legal threats to cover up Leverage related information". Partly because the original bit is vague, but also because "cover up" implies that the goal is to hide information.

For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.

In that timeframe, CEA and Leverage were running the Pareto Fellowship together. If you read the common knowledge post, you find people saying they were misled by CEA because the announcement didn't mention that the Pareto Fellowship was largely run by Leverage.

On their mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved, saying only: "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers."

That does look to me like hiding information about the cooperation between Leverage and CEA. 

I do think that publicly presuming that people who hide information have something to hide is useful. If there's nothing to hide, I'd love to know what happened back then, or who thinks what happened should stay hidden. At a minimum, I think that CEA withholding the information that the people who went to its programs spent their time in what now appears to be a cult is something CEA should be open about on its mistakes page.

Yep, I think CEA has in the past straightforwardly misrepresented (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage's history with Effective Altruism. I think this was bad, and continues to be bad.

My initial thought on reading this was 'this seems obviously bad', and I assumed this was done to shield CEA from reputational risk.

Thinking about it more, I could imagine an epistemic state I'd be much more sympathetic to: 'We suspect Leverage is a dangerous cult, but we don't have enough shareable evidence to make that case convincingly to others, or we aren't sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don't feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can't say anything we expect others to find convincing. So we'll have to just steer clear of the topic for now.'

Still seems better to just not address the subject if you don't want to give a fully accurate account of it. You don't have to give talks on the history of EA!

I think the epistemic state of CEA was some mixture of something pretty close to what you list here, and something that I would put closer to something more like "Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth".

"Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth"

That has the corollary: "We don't expect EAs to care enough about the truth/being transparent that this is a huge reputational risk for us."

It does look weird to me that CEA doesn't include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:


On https://www.centreforeffectivealtruism.org/our-mistakes I see "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers. We realized during and after the program that senior management did not provide enough oversight of the program. For example, reports by some applicants indicate that the interview process was unprofessional and made them deeply uncomfortable."

Is there a reason that the mistakes page does not mention the involvement of Leverage in the Pareto Fellowship? [1]


[1] https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=znudKxFhvQxgDMv7k

They wrote back, linking me to https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QcdhTjqGcSc99sNN ("we're working on a couple of updates to the mistakes page, including about this")

Yep, I think the situation is closer to what Jeff describes here, though, I honestly don't actually know, since people tend to get cagey when the topic comes up.

I talked with Geoff, and according to him there's no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.

Huh, that's surprising, if by that he means "no contracts between anyone currently at Leverage and anyone at CEA". I currently still think it's the case, though I also don't see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees? 

What he said is compatible with ex-CEA people still being bound by the NDAs they signed when they were at CEA. I don't think anything happened that releases ex-CEA people from those NDAs. The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if it has an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn't unilaterally lift the settlement contract. Public pressure on CEA seems to be necessary to get the information out in the open.

Talking with Vassar feels very intellectually alive; maybe something like a high density of insight porn. I imagine that the people Ben talks about wouldn't get much enjoyment out of insight porn either, so that emotional impact isn't there.

There's probably also an element that plenty of people who can normally follow an intellectual conversation can't keep up in a conversation with Vassar, and are then left after the conversation with a bunch of different ideas that lack order in their minds. I imagine that sometimes there's an idea overload that prevents people from critically thinking through some of the ideas.

A person who hasn't gone to college is used to encountering people who make intellectual arguments that go over their head, and has a way to deal with that.

From meeting Vassar, I don't feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff). 

This seems mostly right; they're more likely to think "I don't understand a lot of these ideas, I'll have to think about this for a while" or "I don't understand a lot of these ideas, he must be pretty smart and that's kinda cool" than to feel invalidated by this and try to submit to him in lieu of understanding.

The people I know who weren't brought up to go to college have more experience navigating concrete threats and dangers, which can't be avoided through conformity, since the system isn't set up to take care of people like them. They have to know what's going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.

In general this means that they're much more comfortable with the kind of confrontation Vassar engages in, than high-class people are.

This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.
This is interesting to me because I was brought up to go to college, but I didn't take it seriously (plausibly from depression or somesuch), and I definitely think of him as a guy with an interesting perspective. Okay, a smart guy with an interesting perspective, but not a god. It had never occurred to me before that maybe people who were brought up to assume they were going to college might generally have a different take on the world than I do.

I talked and corresponded with Michael a lot during 2017–2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don't think you're being fair.

"jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself)

I'm confident this is only a Ziz-ism: I don't recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from him.

again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs [...] describing how it was a Vassar-related phenomenon

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea giv... (read more)

I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop and I ended out emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their "it's correct to be freaking about learning your entire society is corrupt and gaslighting" shtick. 

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)


Michael is a charismatic guy who has strong views and argues forcefully for them. That's not the same thing as having mysterious mind powers to "make people paranoid" or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I'm sure he'd be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.

I more or les... (read more)

Thing 0:


Before I actually make my point I want to wax poetic about reading SlateStarCodex.

In some post whose name I can't remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, "Huh, I need to only be convinced by true things."

This is extremely relatable to my lived experience. I am a stereotypical "high-functioning autist." I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.

To the degree that "rationality styles" are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.

Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.

Thing 1:

Imagine two world models:

  1. Some people want to act as perfect nth-order cooperating utilitarians, but can't because of human limitations. They are extremely scrupulous, so they feel
... (read more)

I enjoyed reading this. Thanks for writing it. 

One note though: I think this post (along with most of the comments) isn't treating Vassar as a fully real person with real choices. It (also) treats him like some kind of 'force in the world' or 'immovable object'. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I'm glad you yourself were able to "With basic rationality skills, avoid contracting the Vassar, then [heal] the damage to [your] social life." 

But I am worried about people treating him like a force of nature that you make contact with and then just have to deal with whatever the effects of that are. 

I think it's pretty immoral to de-stabilize people to the point of maybe-insanity, and I think he should try to avoid it, to whatever extent that's in his capacity, which I think is a lot. 

"Vassar's ideas are important and many are correct. It just happens to be that he might drive you insane."

I might think this was a worthwhile tradeoff if I actually believed the 'maybe insane' part was unavoidable, and I do not believ... (read more)

I think that treating Michael Vassar as an unchangeable force of nature is the right way to go—for the purposes of discussions precisely like this one. Why? Because even if Michael himself can (and chooses to) alter his behavior in some way (regardless of whether this is good or bad or indifferent), nevertheless there will be other Michael Vassars out there—and the question remains, of how one is to deal with arbitrary Michael Vassars one encounters in life.

In other words, what we’ve got here is a vulnerability (in the security sense of the word). One day you find that you’re being exploited by a clever hacker (we decline to specify whether he is a black hat or white hat or what). The one comes to you and recommends a patch. But you say—why should we treat this specific attack as some sort of unchangeable force of nature? Rather we should contact this hacker and persuade him to cease and desist. But the vulnerability is still there…

I think you can either have a discussion that focuses on an individual, in which case it makes sense to model them with agency, or you can build more general threat models.

If, however, you mix the two, you are likely to get confused in both directions: you will project ideas from your threat model onto the person, and you will take random aspects of the individual into your threat model that aren't typical of the threat.

I am not sure how much 'not destabilize people' is an option that is available to Vassar.

My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.

Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of "you are expected to behave better for status reasons look at my smug language"-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, "Vassar, just only say things that you think will have a positive effect on the person." 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.

In the pathological case of Vassar, I think the naive strategy of "just say the thing you think is true" is still correct.

Menta... (read more)

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication.

My suggestion for Vassar is not to 'try not to destabilize people' exactly. 

It's to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he's interacted with about what it's like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you're speaking to as though they're a full, real human—not a pair of ears to be talked into or a mind to insert things into). When he talks theory, I often get the sense he is talking "at" rather than talking "to" or "with". The listener practically disappears or is reduced to a question-generating machine that gets him to keep saying things. 

I expect this process could take a long time / run into issues along the way, and so I don't think it should be rushed. Not expecting a quick change. But claiming there's no available option seems wildly wrong to me. People aren't fixed points and generally shouldn't be treated as such. 

This is actually very fair. I think he does kind of insert information into people.

I never really felt like a question-generating machine, more like a pupil at the foot of a teacher who is trying to integrate the teacher's information.

I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.


I think I interacted with Vassar four times in person, so I might get some things wrong here, but I think he's pretty dissociated from his body, which closes a normal channel for perceiving impacts on the person he's speaking with. This looks to me like some bodily process generating stress or pain and causing the dissociation. It might need a bodyworker to fix whatever goes on there, to create the conditions for perceiving the other person better. Beyond that, Circling might be an environment in which one can learn to interact with others as humans who have their own feelings, but that would require opening up to the Circling frame.
You are making a false dichotomy here: you are assuming that everything that has a negative effect on a person is manipulation. As Vassar himself sees the situation, people believe a lot of lies for reasons of fitting in socially. From that perspective, getting people to stop believing those lies will make it harder for them to fit into society. If you got a Nazi guard at Auschwitz into a state where the moral issue of their job could no longer be dissociated from, that would very predictably have a negative effect on that guard. Vassar's position would be that it would be immoral to avoid talking about the true nature of the guard's job when talking with them, out of a motivation to make the guard's life easier.
I think this line of discussion would be well served by marking a natural boundary in the cluster "crazy." Instead of saying "Vassar can drive people crazy," I'd rather taboo "crazy" and say: personally, I care much more, maybe lexically more, about the upside of minds learning about their situation than about the downside of mimics going into maladaptive death spirals. Though it would definitely be better all round if we can manage to cause fewer cases of the latter without compromising the former, much as it's desirable to avoid torturing animals, and desirable for city lights not to interfere with sea turtles' reproductive cycle by resembling the moon too much.
My problem with this comment is that it takes people who:

* can't verbally reason without talking things through (and are currently stuck in a passive role in a conversation)

and who respond to:

* a failure of their verbal reasoning
* under circumstances of importance (in this case moral importance)
* and conditions of stress, induced by trying to concentrate while in a passive role, and failing to concentrate under conditions of high moral importance

by simply doing as they are told, and it assumes they are incapable of reasoning under any circumstances. It also then denies people who are incapable of independent reasoning the right to be protected from harm.
EDIT: Ben is correct to say we should taboo "crazy."

This is a very uncharitable interpretation. (entirely wrong)

The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren't as positive utility as they thought. (entirely wrong)

I also don't think people interpret Vassar's words as a strategy and implement incoherence. Personally, I interpreted Vassar's words as factual claims and then tried to implement a strategy based on them. When I was surprised by reality a bunch, I updated away. I think the other people just no longer have a coalitional strategy installed and don't know how to function without one. This is what happened to me, and why I repeatedly lashed out at others when I perceived them as betraying me, since I no longer automatically perceived them as on my side. I rebuilt my rapport with those people and now have more honest relationships with them. (still endorsed)

Beyond this, I think your model is accurate.

The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought.

“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.

And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.

If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of business is to do everything in one’s power to fix that (really quite severe and glaring) bug in one’s psyche—and only then to attempt any substantive projects in the service of world-saving, people-helping, or otherwise doing anything really consequential.

Thank you for echoing common sense!
What is psychological collapse? For those who can afford it, taking it easy for a while is a rational response to noticing deep confusion; continuing to take actions based on a discredited model would be less appealing; and people often become depressed when they keep confusedly trying to do things that they don't want to do. Are you trying to point to something else? What specific claims turned out to be false? What counterevidence did you encounter?

Specific claim: the only nontrivial obstacle in front of us is not being evil

This is false. Object-level stuff is actually very hard.

Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)

This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person's language. Ideas and institutions have the agency; they wear people like skin.

Specific claim: this is how to take over New York.

Didn't work.

I think this needs to be broken up into two claims:

1. If we execute strategy X, we'll take over New York.
2. We can use straightforward persuasion (e.g. appeals to reason, profit motive) to get an adequate set of people to implement strategy X.

Claim 2 has been falsified decisively. The plan to recruit candidates by appealing to people's explicit incentives failed, there wasn't a good alternative, and as a result there wasn't a chance to test the other parts of the plan (claim 1). That's important info, and worth learning from in a principled way. Definitely I won't try that sort of thing again in the same way, and it seems like I should increase my credence both that plans requiring people to respond to economic incentives by taking initiative to play against type will fail, and that I personally might be able to profit a lot by taking initiative to play against type, or by investing in people who seem like they're already doing this, as long as I don't have to count on other unknown people acting similarly in the future.

But I find the tendency to respond to novel multi-step plans that would require someone to take initiative by sitting back and waiting for the plan to fail, and then saying, "see? novel multi-step plans don't work!", extremely annoying. I've been on both sides of that kind of transaction, but if we want anything to work out well we have to distinguish cases of "we / someone else decided not to try" as a different kind of failure from "we tried and it didn't work out."
This is actually completely fair. So is the other comment.
This seems to be conflating the question of "is it possible to construct a difficult problem?" with the question of "what's the rate-limiting problem?". If you have a specific model for how to make things much better for many people by solving a hard technical problem before making substantial progress on human alignment, I'd very much like to hear the details. If I'm persuaded I'll be interested in figuring out how to help. So far this seems like evidence to the contrary, though, as it doesn't look like you thought you could get help making things better for many people by explaining the opportunity.
To the extent I'm worried about Vassar's character, I am as equally worried about the people around him. It's the people around him who should also take responsibility for his well-being and his moral behavior. That's what friends are for. I'm not putting this all on him. To be clear. 

I think it's a fine way to think about mathematical logic, but if you try to think this way about reality, you'll end up with views that make internal sense and are self-reinforcing but don't follow the grain of facts at all. When you hear such views from someone else, it's a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.

The people who actually know their stuff usually come off very different. Their statements are carefully delineated: "this thing about power was true in 10th century Byzantium, but not clear how much of it applies today".

Also, just to comment on this:

It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype.

I think it's somewhat changeable. Even for people like us, there are ways to make our processing more "fuzzy". Deliberately dimming some things, rounding others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; a... (read more)

On the third paragraph: I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.)

Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See "Safety in Numbers" by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe environment. Am getting to that.)

I have no problem enjoying warm fuzzies. I had problems with them after first talking with Vassar, but I re-equilibrated. Warm fuzzies are good, helpful, and worth purchasing. I am not a perfect utilitarian. However, it is important that when you buy fuzzies instead of utils, as Scott would put it, you know what you are buying. Many will sell fuzzies and market them as utils.

I sometimes round things; that is not inherently bad. Dimming things is not good. I like being alive. From a functionalist perspective, the degree to which I am aroused (with respect to the senses and the mind) is the degree to which I am a real, sapient being. Dimming is sometimes terminally valuable as relaxation, and instrumentally valuable as sleep, but if you believe in Life, Freedom, Prosperity And Other Nice Transhumanist Things, then dimming being bad in most contexts follows as a natural consequence.

On the second paragraph: This is because people compartmentalize. After studying a thing for a long time, people will grasp deep nonverbal truths about that thing. Sometimes they are wrong; without the legibility of the elucidation, false ideas gained this way are difficult to destroy. Sometimes they are right! Mathematical folklore is an example: it is literally metis among mathematicians. Highly knowledgeable and epistemically skilled people delineate. Sometimes the natural delineation is "this is tru

I mostly see where you're coming from, but I think the reasonable answer to "point 1 or 2 is a false dichotomy" is this classic, uh, tumblr quote (from memory):

"People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail."

This goes especially if the thing that comes after "just" is "just precommit."

My expectation is that interaction with Vassar is that the people who espouse 1 or 2 expect that the people interacting are incapable of precommitting to the required strength. I don't know if they're correct, but I'd expect them to be, because I think people are just really bad at precommitting in general. If precommitting was easy, I think we'd all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.

This is a very good criticism! I think you are right about people not being able to "just."

My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or criticize popular institutions. Perhaps it is the case that people genuinely tried to make a strategy but automatically rejected my toy strategies as false. I do not think it is, based on "vibe" and on the arguments that people are making, such as "argument from cult."

I think you are actually completely correct about those strategies being bad. Instead, I failed to point out that I expect a certain level of mental robustness-to-nonsanity from people literally called "rationalists." This comes off as sarcastic but I mean it completely literally.

Precommitting isn't easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as "five minutes of actually trying" and alkjash's "Hammertime." Humans have a small component of behavior that is agentic, and a huge component of behavior that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social... (read more)

I found many of the things you shared useful. I also expect that because of your style/tone you'll get downvoted :(

Michael is very good at spotting people right on the verge of psychosis

...and then pushing them.

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate. [EDIT: Or not. Zack makes a fair point.] He is not even hiding it, if you listen carefully.

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate.

Because high-psychoticism people are the ones who are most likely to understand what he has to say.

This isn't nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn't like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky's writing (like spending $28,000 distributing copies of Harry Potter and the Methods to math contest winners): why, they're preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!

I mean, technically, yes. But in Yudkowsky and friends' worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they're going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?

There's a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don't have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.

If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you'd object to that targeting strategy even though they'd be able to make an argument structurally the same as your comment.

Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it's even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky etc, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. people who think machine learning / deep learning are important.

In general this seems really expected and unobjectionable? "If I'm trying to convince people of X, I'm going to find people who already believe a lot of the pre-requisites for understanding X and who might already assign X a non-negligible prior". This is how pretty much all systems of ideas spread, I have trouble thinking of a counterexample.

I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?

If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn't care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.

The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high-IQ systematizers, is if I drain any normative valence from "psychotic," and imagine there is a spectrum from autistic to psychotic. In this spectrum the extreme autistic is exclusively focused on exactly one thing at a time, and is incapable of cognition that has to take into account context, especially context they aren't already primed to have in mind, and the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or another could be very helpful in different contexts.

See also: indexicality.

On the other hand, back in my reflective beliefs, I think psychosis is a much scarier failure mode than "autism," on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, a supporting infrastructure of some kind for processing the psychotic state without losing the plot (social or cultural would work, but whatever).

I wouldn't find it objectionable. I'm not really sure what morally relevant distinction is being pointed at here, apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.

Well, I don't think it's obviously objectionable, and I'd have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like "we'd all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we're talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren't generally either truth-tracking or good for them" seems plausible to me. But I think it's not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.

I don't have well-formed thoughts on this topic, but one factor that seems relevant to me has a core that might be verbalized as "susceptibility to invalid methods of persuasion", which seems notably higher in the case of people with high "apocalypticism" than people with the other attributes described in the grandparent. (A similar argument applies in the case of people with high "psychoticism".)
That might be relevant in some cases but seems unobjectionable both in the psychoticism case and the apocalypse case. I would predict that LW people cluster together in personality measurements like OCEAN and Eysenck, it's by default easier to write for people of a similar personality to yourself. Also, people notice high rates of Asperger's-like characteristics around here, which are correlated with Jewish ethnicity and transgenderism (also both frequent around here).
It might not be nefarious.  But it might also not be very wise.  I question Vassar's wisdom, if what you say is indeed true about his motives.  I question whether he's got the appropriate feedback loops in place to ensure he is not exacerbating harms. I question whether he's appropriately seeking that feedback rather than turning away from the kinds he finds overwhelming, distasteful, unpleasant, or doesn't know how to integrate.  I question how much work he's done on his own shadow and whether it's not inadvertently acting out in ways that are harmful. I question whether he has good friends he trusts who would let him know, bluntly, when he is out of line with integrity and ethics or if he has 'shadow stuff' that he's not seeing.  I don't think this needs to be hashed out in public, but I hope people are working closer to him on these things who have the wisdom and integrity to do the right thing. 
Rumor has it that https://www.sfgate.com/news/bayarea/article/Man-Gets-5-Years-For-Attacking-Woman-Outside-13796663.php was due to drugs Vassar recommended. In the OP that case does get blamed on CFAR's environment without any mention of that part. When talking about whether or not CFAR is responsible for that story, factors like that seem to me to matter quite a bit. I'd love it if anyone who's nearer could confirm/deny the rumor and fill in missing pieces.

As I mentioned elsewhere, I was heavily involved in that incident for a couple months after it happened, and I looked for causes that could help with the defense. AFAICT, no drugs were taken in the days leading up to the mental health episode or arrest (or the people who took drugs with him lied about it).

I, too, asked people questions after that incident and failed to locate any evidence of drugs.

As I heard this story, Eric was actively seeking mental health care on the day of the incident, and should have been committed before it happened, but several people (both inside and outside the community) screwed up. I don't think anyone is to blame for his having had a mental break in the first place.

I now have some better-sourced information from a friend who's actually in good contact with Eric. I'm also quite certain that there were no drugs involved, and that this isn't a case of any one person being mainly responsible for what happened; rather, multiple people made bad decisions. I'm currently hoping that Eric will tell his side himself, so that there's less indirection about the information sourcing, so I'm not saying more about the details at this point in time.

Edit: The following account is a component of a broader and more complex narrative. While it played a significant role, it must be noted that there were numerous additional challenges concurrently affecting my life. Absent these complicating factors, the issues delineated in this post alone may not have precipitated such severe consequences. Additionally, I have made minor revisions to the third-to-last bullet point for clarity.

It is pertinent to provide some context to parts of my story that are relevant to the ongoing discussions.

  • My psychotic episode was triggered by a confluence of factors, including acute physical and mental stress, as well as exposure to a range of potent memes. I have composed a detailed document on this subject, which I have shared privately with select individuals. I am willing to share this document with others who were directly involved or have a legitimate interest. However, a comprehensive discussion of these details is beyond the ambit of this post, which primarily focuses on the aspects related to my experiences at Vassar.
  • During my psychotic break, I believed that someone associated with Vassar had administered LSD to me. Although I no longer hold thi
...

Thank you for sharing such personal details for the sake of the conversation.

Thanks for sharing the details of your experience. FYI, I had a trip earlier in 2017 where I had the thought "Michael Vassar is God" and told a couple people about this; it was overall a good trip, not causing paranoia afterwards, etc.

If I'm trying to put my finger on a real effect here, it's related to how Michael Vassar was one of the initial people who set up the social scene (e.g. running singularity summits and being executive director of SIAI), being on the more "social/business development/management" end relative to someone like Eliezer; so if you live in the scene, which can be seen as a simulacrum, the people most involved in setting up the scene/simulacrum have the most aptitude at affecting memes related to it, just as a world-simulator programmer has more aptitude at affecting the simulation than people within the simulation (though to a much lesser degree, of course).

As a related example, Von Neumann was involved in setting up post-WWII US Modernism, and is also attributed extreme mental powers by modernism (e.g. extreme creativity in inventing a wide variety of fields); in creating the social system, he also has more memetic influence within that system, and could more effectively change its boundaries e.g. in creating new fields of study.

2017 would be the year Eric's episode happened as well. Did this result in multiple conversations about "Michael Vassar is God" that Eric might then have picked up when he hung around the group?
I don't know, some of the people were in common between these discussions so maybe, but my guess would be that it wasn't causal, only correlational. Multiple people at the time were considering Michael Vassar to be especially insightful and worth learning from.
I haven't used the word god myself nor have heard it used by other people to refer to someone who's insightful and worth learning from. Traditionally, people learn from prophets and not from gods.
Can someone please clarify what is meant in this context by 'Vassar's group', or the term 'Vassarites' used by others? My intuition previously was that Michael Vassar had no formal 'group' or institution of any kind, and that it was just more like 'a cluster of friends who hung out together a lot', but this comment makes it seem like something more official.

While "Vassar's group" is informal, it's more than just a cluster of friends; it's a social scene with lots of shared concepts, terminology, and outlook (although of course not every member holds every view and members sometimes disagree about the concepts, etc etc). In this way, the structure is similar to social scenes like "the AI safety community" or "wokeness" or "the startup scene" that coordinate in part on the basis of shared ideology even in the absence of institutional coordination, albeit much smaller. There is no formal institution governing the scene, and as far as I've ever heard Vassar himself has no particular authority within it beyond individual persuasion and his reputation.

Median Group is the closest thing to a "Vassarite" institution, in that its listed members are 2/3 people who I've heard/read describing the strong influence Vassar has had on their thinking and 1/3 people I don't know, but AFAIK Median Group is just a project put together by a bunch of friends with similar outlook and doesn't claim to speak for the whole scene or anything.

As a member of that cluster I endorse this description.

Michael and I are sometimes-housemates and I've never seen or heard of any formal "Vassarite" group or institution, though he's an important connector in the local social graph, such that I met several good friends through him.
Eli Tyre:
Thank you very much for sharing. I wasn't aware of any of these details.
Scott Alexander:
If this information isn't too private, can you send it to me? scott@slatestarcodex.com
I have sent you the document in question. As the contents are somewhat personal, I would prefer that it not be disseminated publicly. However, I am amenable to it being shared with individuals who have a valid interest in gaining a deeper understanding of the matter.

I feel pretty defensive reading and responding to this comment, given a previous conversation with Scott Alexander where he said his professional opinion would be that people who have had a psychotic break should be on antipsychotics for the rest of their life (to minimize risks of future psychotic breaks). This has known severe side effects like cognitive impairment and brain shrinkage and lacks evidence of causing long-term improvement. When I was on antipsychotics, my mental functioning was much lower (noted by my friends) and I gained weight rapidly. (I don't think short-term use of antipsychotics was bad, in my case)

It is in this context that I'm reading that someone talking about the possibility of mental subprocess implantation ("demons") should be "treated as a psychological emergency", when the Eric Bryulant case had already happened, and talking about the psychological processes was necessary for making sense of the situation. I feared involuntary institutionalization at the time, quite a lot, for reasons like this.

If someone expresses opinions like this, and I have reason to believe they would act on them, then I can't believe myself to have freedom of speech. That ...

I don't remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. E.g. this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerously psychotic again, then lifetime antipsychotics are probably their best bet. In a case like that, I would want the patient's buy-in, i.e. if they were medicated after a psychotic episode I would advise them of the reasons why continued antipsychotic use was recommended in their case; if they said they didn't want it, we would explore why, given the very high risk level; and if they still said they didn't want it, then I would follow their direction.

I didn't get a chance to talk to you during your episode, so I don't know exactly what was going on. I do think that psychosis should be thought of differently than just "weird thoughts that might be true", as more of a whole-body n...

I don’t remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

Ok, the opinions you've described here seem much more reasonable than what I remember, thanks for clarifying.

I do think that psychosis should be thought of differently than just “weird thoughts that might be true”, since it’s a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom.

I agree, yes. I think what I was afraid of at the time was being called crazy and possibly institutionalized for thinking somewhat weird thoughts that people would refuse to engage with, and showing some signs of anxiety/distress that were in some ways a reaction to my actual situation. By the time I was losing sleep etc, things were quite different at a physiological level and it made sense to treat the situation as a psychiatric emergency.

If you can show someone that they're making errors that correspond to symptoms of mild psychosis, then telling them that and suggesting corresponding therapies to help with the underlying problem seems pretty reasonable.

Thanks, if you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.

I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom.

If psychosis is caused by an underlying physiological/biochemical process, wouldn't that suggest that e.g. exposure to Leverage Research wouldn't be a cause of it?

If being part of Leverage is causing less reality-based thoughts and nudging someone into mild psychosis, I would expect that being part of some other group could cause more reality-based thoughts and nudge someone away from mild psychosis. Why would causation be possible in one direction but not the other?

I guess another hypothesis here is that some cases are caused by social/environmental factors and others are caused by biochemical factors. If that's true, I'd expect changing someone's environment to be more helpful for the former sort of case.

[probably old-hat [ETA: or false], but I'm still curious what you think] My (background unexamined) model of psychosis-> schizophrenia is that something, call it the "triggers", sets a person on a trajectory of less coherence / grounding; if the trajectory isn't corrected, they just go further and further. The "triggers" might be multifarious; there might be "organic" psychosis and "psychic" psychosis, where the former is like what happens from lead poisoning, and the latter is, maybe, what happens when you begin to become aware of some horrible facts. If your brain can rearrange itself quickly enough to cope with the newly known reality, your trajectory points back to the ground. If it can't, you might have a chain reaction where (1) horrible facts you were previously carefully ignoring, are revealed because you no longer have the superstructure that was ignore-coping with them; (2) your ungroundedness opens the way to unepistemic beliefs, some of which might be additionally horrifying if true; (3) you're generally stressed out because things are going wronger and wronger, which reinforces everything.

If this is true, then your statement:

I think if someone has mild psychosis a...
Rafael Harth:
There is this basic idea (I think from an old blogpost that Eliezer wrote) that if someone says there are goblins in the closet, dismissing them outright is confusing rationality with trust in commonly held claims, whereas the truly rational thing is to just open the closet and look. I think this is correct in principle but not applicable in many real-world cases. The real reason why even rational people routinely dismiss many weird explanations for things isn't that they have sufficient evidence against them, it's that the weird explanation is inconsistent with a large set of high confidence beliefs that they currently hold. If someone tells me that they can talk to their deceased parents, I'm probably not going to invest the time to test whether they can obtain novel information this way; I'm just going to assume they're delusional because I'm confident spirits don't exist. That said, if that someone helped write the logical induction paper, I personally would probably hear them out regardless of how weird the thing sounds. Nonetheless, I think it remains true that dismissing beliefs without considering the evidence is often necessary in practice.
If someone tells me that they can talk to their deceased parents, I'm probably not going to invest the time to test whether they can obtain novel information this way; I'm just going to assume they're delusional because I'm confident spirits don't exist.

This is failing to track ambiguity in what's being referred to. If there's something confusing happening--something that seems important or interesting, but that you don't yet have words to articulate well--then you try to say what you can (e.g. by talking about "demons"). In your scenario, you don't know exactly what you're dismissing. You can confidently dismiss, in the absence of extraordinary evidence, that (1) their parents's brains have been rotting in the ground, and (2) they are talking with their parents, in the same way you talk to a present friend; you can't confidently dismiss, for example, that they are, from their conscious perspective, gaining information by conversing with an entity that's naturally thought of as their parents (which we might later describe as: they have separate structure in them, not integrated with their "self", that encoded thought patterns from their parents, blah blah blah etc.). You can say "oh well yes of course if it's *just a metaphor* maybe I don't want to dismiss them", but the point is that from a partially pre-theoretic confusion, it's not clear what's a metaphor, and it requires further work to disambiguate what's a metaphor.

As the joke goes, there's nothing crazy about talking to dead people. When dead people respond, then you start worrying.

I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.

Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.

Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve.

This is really, really serious. If this happened to someone closer to me I'd be out for blood, and probably legal prosecution.

Let's not minimize how fucked up this is.

Olivia, Devi and I all talked to people other than Michael Vassar, such as Anna Salamon. We gravitated towards the Berkeley community, which was started around Eliezer's writing. None of us are calling for blame, ostracism, or cancelling of Michael. Michael helped all of us in ways no one else did. None of us have a motive to pursue a legal case against him. Ziz's sentence you quoted doesn't implicate Michael in any crimes.

The sentence is also misleading given Devi didn't detransition afaik.

Jessicata, I will be blunt here. This article you wrote was [EDIT: expletive deleted] misleading. Perhaps you didn't do it on purpose; perhaps this is what you actually believe. But from my perspective, you are an unreliable narrator.

Your story, original version:

  • I worked for MIRI/CFAR
  • I had a psychotic breakdown, and I believed I was super evil
  • the same thing also happened to a few other people
  • conclusion: MIRI/CFAR is responsible for all this

Your story, updated version:

  • I worked for MIRI/CFAR
  • then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil
  • I actually used the drugs
  • I had a psychotic breakdown, and I believed I was super evil
  • the same thing also happened to a few other people
  • conclusion: I still blame MIRI/CFAR, and I am trying to downplay Vassar's role in this

If you can't see how these two stories differ, then... I don't have sufficiently polite words to describe it, so let's just say that to me these two stories seem very different.

Lest you accuse me of gaslighting, let me remind you that I am not doubting any of the factual statements you made. (I actually tried to...

I could be very wrong, but the story I currently have about this myself is that Vassar himself was a different and saner person before he used too much psychedelics. :( :( :(

Non-agenda'd question: about when did you notice changes in him?

Eliezer Yudkowsky:
My autobiographical episodic memory is nowhere near good enough to answer this question, alas.

Do you have a timeline of when you think that shift happened? That might make it easier for other people who knew Vassar at the time to say whether their observation matched yours.

That... must have hurt a lot.

(I hope your story is right.)

I saw him make some questionable drug-use decisions at Burning Man in 2011 and 2012, including larger-than-normal doses, and I don't think I saw all of it.
A lot of people take a lot of drugs at big events like Burning Man with little issue. In my observation, it's typically the overly frequent and/or targeted psychedelic use that causes such big changes, at least in those who start off fairly stable.
you publicly describe your suffering as a way to show people that MIRI/CFAR is evil.

Could you expand more on this? E.g. what are a couple sentences in the post that seem most trying to show this.

Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for.

I appreciate the thrust of your comment, including this sentence, but also this sentence seems uncharitable, like it's collapsing down stuff that shouldn't be collapsed. For example, it could be that the MIRI/CFAR/etc. social field could set up (maybe by accident, or even due to no fault of any of the "central" people) the conditions where "psychosis" is the best of the bad available options; in which case it makes sense to attribute causal fault to the social field, not to a person who e.g. makes that clear to you, and therefore more proximally causes your breakdown. (Of course there's disagreement about whether that's the state of the world, but it's not necessarily incoherent.)

I do get the sense that jessicata is relating in a funny way to Michael Vassar, e.g. by warping the narrative around him while selectively posing as "just trying to state facts" in relation to other narrative fields; but this is hard to tell, since it's also what it might look like if Michael Vassar was systematically scapegoated, and jessicata is reporting more direct/accurate (hence less bad-seeming) observations.

Where did jessicata corroborate this sentence "then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil" ? 

I should note that, as an outsider, the main point I recall Eliezer making in that vein is that he used Michael Vassar as a model for the character who was called Professor Quirrell. As an outsider, I didn't see that as an unqualified endorsement - though I think your general message should be signal-boosted.

The claim that Michael Vassar is substantially like Quirrell seems strange to me. Where did you get the claim that Eliezer modelled Quirrell after Vassar? To make the claim a bit more based on public data, take Vassar's TEDx talk. I think it gives a good impression of how Vassar thinks. There are official statistics claiming a high life expectancy for Jordan, so I think there's a good chance that Vassar actually believes what he says there. If you look deeper, however, Jordan's life expectancy is not as high as Vassar asserts. Given that the video is in the public record, that's an error anybody can find who tries to check what Vassar is saying. I don't think it's in Vassar's interest to give a public talk like that with claims that are easily found to be wrong by fact-checking. Quirrell wouldn't have made an error like this; he is a lot more controlled. Eliezer made Vassar president of the precursor of MIRI. That's a strong signal of trust and endorsement.

Eliezer has openly said Quirrell's cynicism is modeled after a mix of Michael Vassar and Robin Hanson.

But from my perspective, you are an unreliable narrator.

I appreciate you're telling me this given that you believe it. I definitely am in some ways, and try to improve over time.

then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil

I said in the text that (a) there were conversations about corruption in EA institutions, including about the content of Ben Hoffman's posts, (b) I was collaborating with Michael Vassar at the time, (c) Michael Vassar was commenting about social epistemology. I admit that connecting points (a) and (c) would have made the connection clearer, but it wouldn't have changed the text much.

In cases where someone was previously part of a "cult" and later says it was a "cult" and abusive in some important ways, there has to be a stage where they're thinking about how bad the social context was, and practically always, that involves conversations with other people who are encouraging them to look at the ways their social context is bad. So my having conversations where people try to convince me CFAR/MIRI are evil is expected given what el...

What I'm saying is that the Berkeley community should be. Supplying illicit drugs is a crime (but perhaps the drugs were BYO?). IDK if doing so and negligently causing permanent psychological injury is a worse crime, but it should be.

I'm not going to comment on drug usage in detail for legal reasons, except to note that there are psychedelics legal in some places, such as marijuana in CA.

It doesn't make sense to attribute unique causal responsibility for psychotic breaks to anyone, except maybe to the person it's happening to. There are lots of people all of us were talking to in that time period who influenced us, and multiple people were advocating psychedelic use. Not all cases happened to people who were talking significantly with Michael around the time. As I mentioned in the OP, as I was becoming more psychotic, people tried things they thought might help, which generally didn't, and they could have done better things instead. Even causal responsibility doesn't imply blame, e.g. Eliezer had some causal responsibility due to writing things that attracted people to the Berkeley scene where there were higher-variance psychological outcomes. Michael was often talking with people who were already "not ok" in important ways, which probably affects the statistics.

Please see my comment on the grandparent.

I agree with Jessica's general characterization that this is better understood as multi-causal rather than the direct cause of actions by one person.

Relevant bit of social data: Olivia is the most irresponsible-with-drugs person I've ever met, by a sizeable margin; and I know of one specific instance (not a person named in your comment or any other comments on this post) where Olivia gave someone an ill-advised drug combination and they had a bad time (though not a psychotic break).

I don't remember specific names, but something similar happened at one of the first rationality minicamps. Technically, this was not about drugs but about supplements (i.e. completely legal things), but there was someone mixing various kinds of powders and saying "yeah, trust me, I have a lot of experience with this, I did a lot of research, it is perfectly safe to take a dose this high, really", and then an ambulance had to be called. So, I assume you mean that Olivia goes even further than this, right?

My memory of the RBC incident you're referring to is that it wasn't supplements that did it; it was a caffeine overdose from energy drinks leading into a panic attack. But there were certainly a lot of supplements around, and they could've played a role I didn't know about.

When I say that I believe Olivia is irresponsible with drugs, I'm not excluding the unscheduled supplements, but the story I referred to involved the scheduled kind.

I've posted an edit/update above after talking to Vassar.

A question for the 'Vassarites', if they will: were you doing anything like the "unihemispheric sleep" exercise (self-inducing hallucinations/dissociative personalities by sleep deprivation) the Zizians are described as doing?

No. All sleep deprivation was unintentional (anxiety-induced in my case).

I banned him from SSC meetups for a combination of reasons including these

If you make bans like these, it would be worth communicating them to the people organizing SSC meetups. Especially when bans are made for the safety of meetup participants, not communicating them seems very strange to me.

After he left the Bay Area, Vassar lived in Berlin for a while. For decisions about whether to make an effort to integrate someone like him (and invite him to LW and SSC meetups), that kind of information is valuable. Bay Area people not sharing it, while claiming to have done something that would work in practice like a ban, feels misleading.

For reasons I don't fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything's kind of been frozen in place since then.

I think Vassar left the Bay Area more than a year before COVID happened. As far as I remember, his stated reasoning was something along the lines of everyone in the Bay Area getting mindkilled by leftish ideology.

It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn't publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.

If there are bans that are supposed to be enforced, mentioning that in the mails that go out to organizers for an ACX Everywhere event would make sense. I'm not 100% sure that I got all the mails, because Ruben forwarded mails for me (I normally organize LW meetups in Berlin and support Ruben with the SSC/ACX meetups), but in those there was no mention of the word ban.

I don't think it needs to be public, but having such information in a mail like the one from Aug 23 would likely be necessary for a good portion of the meetup organizers to know that there's an expectation that certain people aren't welcome.

https://www.lesswrong.com/posts/iWWjq5BioRkjxxNKq/michael-vassar-at-the-slatestarcodex-online-meetup seems to have happened after that point in time. Vassar not only attended a Slate Star Codex online meetup but was central to it, presenting his thoughts.

I organized that, so let me say that:

  • That online meetup, or the invitation to Vassar, was not officially affiliated with or endorsed by SSC. Any responsibility for inviting him is mine.
  • I have conversed with him a few times, as follows:
  • I met him in Israel around 2010. He was quite interesting, though he did try to get me to withdraw my retirement savings to invest with him. He was somewhat persuasive. During our time in conversation, he made some offensive statements, but I am perhaps less touchy about such things than the younger generation.
  • In 2012, he explained Acausal Trade to me, and that was the seed of this post. That discussion was quite sensible and I thank him for that.
  • A few years later, I invited him to speak at LessWrong Israel. At that time I thought him a mad genius -- truly both. His talk was verging on incoherence, with flashes of apparent insight.
  • Before the online meetup, in 2021, he insisted on a preliminary talk; he made statements that produced twinges of persuasiveness. (Introspecting on that is kind of interesting, actually.) I stayed with it for 2 or more hours before begging off, because it was fascinating in a way. I was able to analyze
... (read more)

It seems that despite organizing multiple SSC events, you had no knowledge that Vassar was banned from SSC events. Nor did anyone reading the event announcement know, such that they could have told you that Vassar was banned before the event happened.

To me that suggests there's a problem: information about who's banned isn't being shared with those organizing meetups in an effective way, so that a ban has the consequences one would expect it to have.

It might be useful to have a global blacklist somewhere. Possible legal consequences, if someone decides to sue you for libel. (Perhaps the list should only contain the names, not the reasons?) EDIT: Nevermind. There are more things I would like to say about this, but this is not the right place. Later I may write a separate article explaining the threat model I had in mind.

Legal threats matter a great deal for what can be done in a situation like this. When it comes to a "global blacklist" there's the question of governance: who decides who's on and who isn't? When it comes to SSC or ACX meetups the governance question is clear: anybody who's organizing a meetup under those labels should follow Scott's guidance. That however only works if that information is communicated to meetup organizers.

So, it's been a long time since I actually commented on Less Wrong, but since the conversation is here...

Hearing about this is weird for me, because I feel like, compared to the opinions I heard about him from other people in the community, I kind of... always had uncomfortable feelings about Mike Vassar? And I say this without having had direct personal contact with him except, IIRC, maybe one meetup I attended where he was there and we didn't talk directly, although we did occasionally participate in some of the same conversations online.


By all accounts, it sounds like he's always been quite charismatic in person, and this isn't the first time I've heard someone describe him as a "wizard." But empirically, there are some people who're very charismatic who propagate some really bad ideas, and whose impacts on the lives of people around them, or on society at large, can be quite negative. As of the last time I was paying attention to him, I wouldn't have expected Mike Vassar to have that negative an effect on the lives of the people around him, but I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought ... (read more)

I met Vassar once. He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd. Something about his delivery was so captivating that it took me a while to "shake off the fairy dust" and realize just how silly some of his claims were, even when it should have been obvious from the start. Moreover, his worldview seemed heavily based on paranoid / conspiracy-theory type of thinking. So, yes, I'm not too surprised by Scott's revelations about him.

He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd.

Yeah, it definitely didn't work on me. I believe I wrote this thread shortly after my one-and-only interaction with him, in which he said a lot of things that made me very skeptical but that I couldn't easily refute, or had much time to think about before he would move on to some other topic. (Interestingly, he actually replied in that thread even though I didn't mention him by name.)

It saddens me to learn that his style of conversation/persuasion "works" on many people who otherwise seem very smart and capable (and even self-selected for caring about being rational). It seems like pretty bad news as far as what kind of epistemic situation humanity is in (e.g., how easily we will be manipulated by even slightly-smarter-than-human AIs / human-AI systems).

Oh, this is because the OP that I was replying to did mention him by name:

I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought to be taken.

Heh, the same feeling here. I didn't have much opportunity to interact with him in person. I remember repeatedly hearing praise about how incredibly smart he is (from people whom I admired), then trying to find something smart written by him, and feeling unimpressed and confused, like maybe I wasn't reading the right texts or I failed to discover the hidden meaning that people smarter than me have noticed.

Hypothesis 1: I am simply not smart enough to recognize his greatness. I can recognize people one level above me, and they can recognize people one level above them, but when I try to understand someone two levels above me, it's all gibberish to me.

Hypothesis 2: He is more persuasive in person than in writing. (But once he impressed you in person, you will now see greatness in his writing, too. Maybe because of the halo effect. Maybe because now you understand the hidden layers of what he actually meant by that.) Maybe he is more persuasive in person because he can make his message optimized for the receiver; which might be a good thing... (read more)

Not a direct response to you, but if anyone who hasn't talked to Vassar wants an example of Vassar-conversation that may be easier to understand or get some sense from than most examples would be (though it'll have a fair bit in it that'll probably still seem false/confusing), you might try Spencer Greenberg's podcast with Vassar.

As a datapoint: I listened to that podcast 4 times, and took notes 3 of those 4 times, to try and clearly parse what he's saying. I certainly did not fully succeed. 

My notes.

It seems like he said some straightforwardly contradictory things? For instance, that strong conflict theorists trust their own senses and feelings more, but also trust them less?

I would really like to understand what he's getting at by the way, so if it is clearer for you than it is for me, I'd actively appreciate clarification.

i tried reading / skimming some of that summary
it made me want to scream
what a horrible way to view the world / people / institutions / justice
i should maybe try listening to the podcast to see if i have a similar reaction to that

Seeing as how you posted this 9 days ago, I hope you did not bite off more than you could chew, and I hope you do not want to scream anymore.

In Harry Potter the standard practice seems to be to "eat chocolate" and perhaps "play with puppies" after exposure to ideas that are both (1) possibly true, and (2) very saddening to think about.

Then there is Gendlin's Litany (and please note that I am linking to a critique, not to unadulterated "yay for the litany" ideas) which I believe is part of Lesswrong's canon somewhat on purpose. In the critique there are second and third thoughts along these lines, which I admire for their clarity, and also for their hopefulness.

Ideally [a better version of the Litany] would communicate: “Lying to yourself will eventually screw you up worse than getting hurt by a truth,” instead of “learning new truths has no negative consequences.”

This distinction is particularly important when the truth at hand is “the world is a fundamentally unfair place that will kill you without a second thought if you mess up, and possibly even if you don’t.”

EDIT TO CLARIFY: The person who goes about their life ignoring the universe’s Absolute Neutrality is very fundamentally

... (read more)

There's also these 2 podcasts which cover quite a variety of topics, for anyone who's interested:
You've Got Mel - With Michael Vassar
Jim Rutt Show - Michael Vassar on Passive-Aggressive Revolution

I haven't seen/heard anything particularly impressive from him either, but perhaps his 'best work' just isn't written down anywhere?
My impression as an outsider (I met him once and heard and read some things people were saying about him) was that he seemed smart but also seemed like kind of a kook...

I have replied to this comment in a top-level post.

Ziz's perspective here gives you a pretty detailed example of how this social trick works (i.e. spontaneously pretend something someone else did was objectionable, and use it as an excuse to throw a fit/leave, to make the other person walk on eggshells or chase you).

Since comments get occluded, you should refer to an edit/update somewhere at the top if you want it to be seen by those who already read your original comment.
Is this the highest rated comment on the site?

Okay, meta: This post has over 500 comments now and it's really hard to keep a handle on all of the threads. So I spent the last 2 hours trying to outline the main topics that keep coming up. Most top-level comments are linked to but some didn't really fit into any category, so a couple are missing; also apologies that the structure is imperfect.

Topic headers are bolded and are organized very roughly in order of how important they seem (both to me personally and in terms of the amount of air time they've gotten). 

... (read more)

This is hugely helpful, a great community service! Thanks so much, mingyuan.

I find that something in me really revolts at this post, so epistemic status… not-fully-thought-through-emotions-are-in-charge?

Full disclosure: I am good friends with Zoe; I lived with her for the four months leading up to her post, and was present to witness a lot of her processing and pain. I’m also currently dating someone named in this post, but my reaction to this was formed before talking with him.

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away. If the points in the post felt more compelling, then I’d probably be more down for an argument of “we should bin these together and look at this as a whole”, but as it stands the stuff listed in here feels like it’s describing something significantly less damaging, and of a different kind of damage. I’m also annoyed that this post relies so heavily on Zoe’s, and the comparison feels like it cheapens what Zoe went through. I keep having a recurring thought that the author must have utterly failed to understand the intensity of the very direct impact from Leverage’s operations on Zoe. Mo... (read more)

I want to note that this post (top-level) now has more than 3x the number of comments that Zoe's does (or nearly 50% more comments than the Zoe+BayAreaHuman posts combined, if you think that's a more fair comparison), and that no one has commented on Zoe's post in 24 hours. [ETA: This changed while I was writing this comment. The point about lowered activity still stands.]

This seems really bad to me — I think that there was a lot more that needed to be figured out wrt Leverage, and this post has successfully sucked all the attention away from a conversation that I perceive to be much more important. 

I keep deleting sentences because I don't think it's productive to discuss how upset this makes me, but I am 100% with Aella here. I was wary of this post to begin with and I feel something akin to anger at what it did to the Leverage conversation.

I had some contact with Leverage 1.0 — had some friends there, interviewed for an ops job there, and was charted a few times by a few different people. I have also worked for both CFAR and MIRI, though never as a core staff member at either organization; and more importantly, I was close friends with maybe 50% of the people who worked at ... (read more)

It seems like it's relatively easy for people to share information in the CFAR+MIRI conversation. On the other hand, for the people who actually have the most central information to share in the Leverage conversation, it's not as easy to share it.

In many cases I would expect that private in-person conversations are needed to progress the Leverage debate, and that just takes time. Those people at Leverage who want to write up their own experience likely benefit from time to do that.

Practically, helping Anna get an overview of the timeline of members and funders, and getting people to share stories with Aella, seems to be the way forward, and that's largely not about leaving LW comments.

I agree with the intent of your comment, mingyuan, but perhaps the asymmetry in activity on this post is simply due to the fact that there are an order of magnitude (or several orders of magnitude?) more people with some/any experience and interaction with CFAR/MIRI (especially CFAR) compared to Leverage?

I think some of it has got to be that it's somehow easier to talk about CFAR/MIRI, rather than a sheer number of people thing. I think Leverage is somehow unusually hard to talk about, such that maybe we should figure out how to be extraordinarily kind/compassionate/gentle to anyone attempting it, or something.

I agree that Leverage has been unusually hard to talk about bluntly or honestly, and I think this has been true for most of its existence.

I also think the people at the periphery of Leverage are starting to absorb the fact that they systematically had things hidden from them. That may be giving them new pause before engaging with Leverage as a topic.

(I think that seems potentially fair, and considerate. To me, it doesn't feel like the same concern applies in engaging about CFAR. I also agree that there were probably fewer total people exposed to Leverage, at all.)

...actually, let me give you a personal taste of what we're dealing with?

The last time I chose to talk straightforwardly and honestly about Leverage, with somebody outside of it? I had to hard-override an explicit but non-legal privacy agreement*, to get a sanity check. When I was honest about having done so shortly thereafter, I completely and permanently lost one of my friendships as a result.

Lost-friend says they were traumatized as a result of me doing this. That having "made the mistake of trusting me" hurt their relationships with other Leveragers. That at the time, they wished they'd lied to me, which stung.

I t... (read more)

I'm finally out about my story here! But I think I want to explain a bit of why I wasn't being very clear, for a while.

I've been "hinting darkly" in public rather than "telling my full story" due to a couple of concerns:

  1. I don't want to "throw ex-friend under the bus," to use their own words! Even friend's Leverager partner (who they weren't allowed to visit, if they were "infected with objects") seemed more "swept-up in the stupidity" than "malicious." I don't know how to tell my truth, without them feeling drowned out. I do still care about that. Eurgh.

  2. Via models that come out of my experience with Brent: I think this level of silence, makes the most sense if some ex-Leveragers did get a substantial amount of good out of the experience (sometimes with none of the bad, sometimes alongside it), and/or if there's a lot of regrettable actions taken by people who were swept up in this at the time, by people who would ordinarily be harmless under normal circumstances. I recognize that bodywork was very helpful to my friend, in working through some of their (unrelated) trauma. I am more than a little reluctant to put people through the sort of mob-driven invalidation I felt, in the

... (read more)
Any thoughts on why this was coming about in the culture? If anyone feels that way (like the lost friend) and wants to talk to me about it, I'd be interested in learning more about it.

* I could tell that this had some concerning toxic elements, and I needed an outside sanity-check. I think under the circumstances, this was the correct call for me. I do not regret picking the particular person I chose as a sanity-check. I am also very sympathetic to other people not feeling able to pull this, given the enormous cost to doing it at the time. This is not a strong systematic assessment of how I usually treat privacy agreements. My harm-assessment process is usually structured a bit like this, with some additional pressure from an "agreement-to-secrecy," and also factors in the meta-secrecy-agreements around "being able to be held to secrecy agreements" and "being honest about how well you can be held to secrecy agreements." No, I don't feel like having a long discussion about privacy policies right now. But if you care? My thoughts on information-sharing policy were valuable enough to get me into the 2019 Review. If you start on this here, I will ignore you.

The fact that the people involved apparently find it uniquely difficult to talk about is a pretty good indication that Leverage != CFAR/MIRI in terms of cultishness/harms etc.

Yes; I want to acknowle