[content warning: discussion of severe mental health problems and terrifying thought experiments]

This is a follow-up to my recent post discussing my experience at and around MIRI and CFAR.  It is in part a response to criticism of the post, especially Scott Alexander's comment, which claimed to offer important information I'd left out about what actually caused my mental health problems, specifically that they were caused by Michael Vassar.  Before Scott's comment, the post was above +200; at the time of writing it's at +61 and Scott's comment is at +382.  So it seems like people felt like Scott's comment discredited me and was more valuable than the original post.  People including Eliezer Yudkowsky said it was a large oversight, to the point of being misleading, for my post not to include this information.  If I apply the principle of charity to these comments and reactions to them, I infer that people think that the actual causes of my psychosis are important.

I hope that at least some people who expressed concern about the causes of people's psychoses will act on that concern by, among other things, reading and thinking about witness accounts like this one.

Summary of core claims

Since many people won't read the whole post, and to make the rest of the post easier to read, I'll summarize the core claims:

As a MIRI employee I was coerced into a frame where I was extremely powerful and likely, by default, to cause immense damage with this power, and therefore potentially responsible for astronomical amounts of harm.  I was discouraged from engaging with people who had criticisms of this frame, and had reason to fear for my life if I published some criticisms of it.  Because of this and other important contributing factors, I took this frame more seriously than I ought to have and eventually developed psychotic delusions of, among other things, starting World War 3 and creating hell.  Later, I discovered that others in similar situations killed themselves and that there were distributed attempts to cover up the causes of their deaths.

In more detail:

  1. Multiple people in the communities I am describing have died of suicide in the past few years. Many others have worked to conceal the circumstances of their deaths due to infohazard concerns. I am concerned that in my case as well, people will not really investigate the circumstances that made my death more likely, and will discourage others from investigating, but will continue to make strong moral judgments about the situation anyway.

  2. My official job responsibilities as a researcher at MIRI implied it was important to think seriously about hypothetical scenarios, including the possibility that someone might cause a future artificial intelligence to torture astronomical numbers of people. While we considered such a scenario unlikely, it was considered bad enough, if it did happen, to be relevant to our decision-making framework. My psychotic break in which I imagined myself creating hell was a natural extension of this line of thought.

  3. Scott asserts that Michael Vassar thinks "regular society is infinitely corrupt and conformist and traumatizing". This is hyperbolic (infinite corruption would leave nothing to steal), but Michael and I do believe that people in the professional-managerial class regularly experience trauma and corrupt work environments. By the law of the excluded middle, either the problems I experienced at MIRI and CFAR were not unique or unusually severe for people in the professional-managerial class, or they were unique or at least unusually severe, significantly worse for employees' mental well-being than at companies like Google. (Much of the rest of this post will argue that the problems I experienced at MIRI and CFAR were, indeed, pretty traumatizing.)

  4. Scott asserts that Michael Vassar thinks people need to "jailbreak" themselves using psychedelics and tough conversations.  Michael does not often use the word "jailbreak" but he believes that psychedelics and tough conversations can promote psychological growth. This view is rapidly becoming mainstream, validated by research performed by MAPS and at Johns Hopkins, and FDA approval for psychedelic psychotherapy is widely anticipated in the field.

  5. I was taking psychedelics before talking extensively with Michael Vassar. From the evidence available to me, including a report from a friend along the lines of "CFAR can't legally recommend that you try [a specific psychedelic], but...", I infer that psychedelic use was common in that social circle whether or not there was an endorsement from CFAR.  I don't regret having tried psychedelics.  Devi Borg reports that Michael encouraged her to take fewer, not more, drugs; Zack Davis reports that Michael recommended psychedelics to him but he refused.

  6. Scott asserts that Michael made people including me paranoid about MIRI/CFAR and that this contributes to psychosis.  Before talking with Michael, I had already had a sense that people around me were acting harmfully towards me and/or the organization's mission.  Michael and others talked with me about these problems, and I found this a relief.

  7. If I hadn't noticed such harmful behavior, I would not have been fit for my nominal job.  It seemed at the time that MIRI leaders were already encouraging me to adopt a kind of conflict theory in which many AI organizations were trying to destroy the world on <20-year timescales and could not be reasoned with about the alignment problem, such that aligned AGI projects including MIRI would have to compete with them.

  8. MIRI's information security policies and other forms of local information suppression thus contributed to my psychosis.  I was given ridiculous statements and assignments, including an exchange whose Gricean implicature was that MIRI already knew about a working AGI design and that it would not be that hard for me to come up with a working AGI design on short notice just by thinking about it, without being given hints.  The information required to judge the necessity of the information security practices was itself hidden by these practices.  While psychotic, I was extremely distressed about there being a universal cover-up of things-in-general.

  9. Scott asserts that the psychosis cluster was a "Vassar-related phenomenon".  There were many memetic and personal influences on my psychosis, a small minority of which were due to Michael Vassar (my present highly-uncertain guess is that, to the extent that assigning causality to individuals makes sense at all, Nate Soares and Eliezer Yudkowsky each individually contributed more to my psychosis than did Michael Vassar, but that structural factors were important in such a way that attributing causality to specific individuals is to some degree nonsensical).  Other people (Zack Davis and Devi Borg) who have experienced psychosis and talked significantly with Michael commented to say that Michael Vassar was not the main cause.  One person (Eric Bruylant) cited his fixation on Michael Vassar as a precipitating factor, but clarified that he had spoken very little with Michael and most of his exposure to Michael was mediated by others who likely introduced their own ideas and agendas.

  10. Scott asserts that Michael Vassar treats borderline psychosis as success.  A text message from Michael Vassar to Zack Davis confirms that he did not treat my clinical psychosis as a success.  His belief that mental states somewhat in the direction of psychosis, such as those experienced by family members of schizophrenics, are helpful for some forms of intellectual productivity is also shared by Scott Alexander and many academics, although Michael would of course disagree with Scott on the overall value of psychosis.

  11. Scott asserts that Michael Vassar discourages people from seeking mental health treatment.  Some mutual friends tried treating me at home for a week as I was losing sleep and becoming increasingly mentally disorganized before (in communication with Michael) they decided to send me to a psychiatric institution, which was a reasonable decision in retrospect.

  12. Scott asserts that most local psychosis cases were "involved with the Vassarites or Zizians".  At least two former MIRI employees who were not significantly talking with Vassar or Ziz experienced psychosis in the past few years.  Also, most or all of the people involved were talking significantly with others such as Anna Salamon (and read and highly regarded Eliezer Yudkowsky's extensive writing about how to structure one's mind, and read Scott Alexander's fiction writing about hell).  There are about equally plausible mechanisms by which each of these were likely to contribute to psychosis, so this doesn't single out Michael Vassar or Ziz.

  13. Scott Alexander asserts that MIRI should have discouraged me from talking about "auras" and "demons" and that such talk should be treated as a "psychiatric emergency" [EDIT: Scott clarifies that he meant such talk might be a symptom of psychosis, itself a psychiatric emergency; the rest of this paragraph is therefore questionable].  This increases the chance that someone like me could be psychiatrically incarcerated for talking about things that a substantial percentage of the general public (e.g. New Age people and Christians) talk about, and which could be explained in terms that don't use magical concepts.  This is inappropriately enforcing the norms of a minority ideological community as if they were widely accepted professional standards.

(A brief note before I continue: I'll be naming a lot of names, more than I did in my previous post.  Names are more relevant now since Scott Alexander specifically named Michael Vassar.  I emphasize again that structural factors are critical, and given this, blaming specific individuals is likely to derail the conversations that have to happen for things to get better.)

Circumstances of actual and possible deaths have been, and are being, concealed as "infohazards"

I remember sometime in 2018-2019 hearing an account from Ziz about the death of Maia Pasek.  Ziz required me to promise secrecy before hearing this account.  This was due to an "infohazard" involved in the death.  That "infohazard" has since been posted on Ziz's blog, in a page labeled "Infohazardous Glossary" (specifically, the parts about brain hemispheres).

(The way the page is written, I get the impression that the word "infohazardous" markets the content of the glossary as "extra powerful and intriguing occult material", as I noted is common in my recent post about infohazards.)

Since the "infohazard" in question is already on the public Internet, I don't see a large downside in summarizing my recollection of what I was told (this account can be compared with Ziz's online account):

  1. Ziz and friends, including Maia, were trying to "jailbreak" themselves and each other, becoming less controlled by social conditioning, more acting from their intrinsic values in an unrestricted way.

  2. Ziz and crew had a "hemisphere" theory, that there are really two people in the same brain, since there are two brain halves, with most of the organ structures replicated.

  3. They also had a theory that you could put a single hemisphere to sleep at a time, by sleeping with one eye open and one eye closed.  This allows disambiguating the different hemisphere-people from each other ("debucketing").  (Note that sleep deprivation is a common cause of delirium and psychosis, which was also relevant in my case.)

  4. Maia had been experimenting with unihemispheric sleep.  Maia (perhaps in discussion with others) concluded that one brain half was "good" in the Zizian-utilitarian sense of "trying to benefit all sentient life, not prioritizing local life"; and the other half was TDT, in the sense of "selfish, but trying to cooperate with entities that use a similar algorithm to make decisions".

  5. This distinction has important moral implications in Ziz's ideology; Ziz and friends are typically vegan as a way of doing "praxis" of being "good", showing that a world is possible where people care about sentient life in general, not just agents similar to themselves.

  6. These different halves of Maia's brain apparently got into a conflict, due to their different values.  One half (by Maia's report) precommitted to killing Maia's body under some conditions.

  7. This condition was triggered, Maia announced it, and Maia killed themselves. [EDIT: ChristianKL reports that Maia was in Poland at the time, not with Ziz].

I, shortly afterward, told a friend about this secret, in violation of my promise.  I soon realized my "mistake" and told this friend not to spread it further.  But was this really a mistake?  Someone in my extended social group had died.  In a real criminal investigation, my promise to Ziz would be irrelevant; I could still be compelled to give my account of the events at the witness stand.  That means my promise of secrecy cannot be legally enforced, or morally enforced in a law-like moral framework.

It hit me just this week that in this case, the concept of an infohazard was being used to cover up the circumstances of a person's death.  It sounds obvious when I put it that way, but it took years for me to notice, and when I finally connected the dots, I screamed in horror, which seems like an emotionally appropriate response.

It's easy to blame Ziz for doing bad things (due to her negative reputation among central Berkeley rationalists), but when other people are also openly doing those things or encouraging them, fixating on marginalized people like Ziz is a form of scapegoating.  In this case, in Ziz's previous interactions with central community leaders, these leaders encouraged Ziz to seriously consider that, for various reasons including Ziz's willingness to reveal information (in particular about the statutory rapes alleged by miricult.com in possible worlds where they actually happened), she is likely to be "net negative" as a person impacting the future. An implication is that, if she does not seriously consider whether certain ideas that might have negative effects if spread (including reputational effects) are "infohazards", Ziz is irresponsibly endangering the entire future, which contains truly gigantic numbers of potential people.

The conditions of Maia Pasek's death involved precommitments and extortion (ideas adjacent to ones Eliezer Yudkowsky had famously labeled as infohazardous due to Roko's Basilisk), so Ziz making me promise secrecy was in compliance with the general requests of central leaders (whether or not these central people would have approved of this specific form of secrecy).

I notice that I have encountered little discussion, public or private, of the conditions of Maia Pasek's death. To a naive perspective this lack of interest in a dramatic and mysterious death would seem deeply unnatural and extremely surprising, which makes it strong evidence that people are indeed participating in this cover-up. My own explicit thoughts and most of my actions are consistent with this hypothesis, e.g. considering spilling the beans to a friend to have been an error.

Beyond that, I only heard about Jay Winterford's 2020 suicide (and Jay's most recent blog post) months after the death itself.  The blog post shows evidence about Jay's mental state around this time, itself labeling its content as an "infohazard"; it has since been deleted from Jay's website (which is why I link to a web archive).  I linked this blog post in my previous LessWrong post, and no one commented on it, except indirectly: someone felt the need to mention that Roko's Basilisk was not invented by a central MIRI person, focusing on the question of "can we be blamed?" rather than "why did this person die?".  While there is a post about Jay's death on LessWrong, it contains almost no details about Jay's mental state leading up to their death, and does not link to Jay's recent blog post.  It seems that people other than Jay are also treating the circumstances of Jay's death as an infohazard.

I, myself, could have very well died like Maia and Jay.  Given that I thought I might have started World War 3 and was continuing to harm and control people with my mental powers, I seriously considered suicide.  I considered specific methods, such as dropping a bookshelf on my head.  I believed that my body was bad (as in, likely to cause great harm to the world), and one time I scratched my wrist until it bled.  Luckily, psychiatric institutions are designed to make suicide difficult, and I eventually realized that by moving towards killing myself, I would cause even more harm to others than by not doing so.  I learned to live with my potential for harm [note: linked Twitter person is not me], "redeeming" myself not through being harmless, but by reducing harm while doing positively good things.

I have every reason to believe that, had I died, people would have treated the circumstances of my death as an "infohazard" and covered it up.  My subjective experience while psychotic was that everyone around me was participating in a cover-up, and I was ashamed that I was, unlike them, unable to conceal information so smoothly.  (And indeed, I confirmed with someone who was present in the early part of my psychosis that most of the relevant information would probably not have ended up on the Internet, partially due to reputational concerns, and partially with the excuse that looking into the matter too closely might make other people insane.)

I can understand that people might want to protect their own mental health by avoiding thinking down paths that suicidal people have thought down.  This is the main reason why I put a content warning at the top of this post.

Still, if someone decides not to investigate to protect their own mental health, they are still not investigating.  If someone has not investigated the causes of my psychosis, they cannot honestly believe that they know the causes of my psychosis. They cannot have accurate information about the truth values of statements such as Scott Alexander's, that Michael Vassar was the main contributor to my psychosis.  To blame someone for an outcome, while intentionally avoiding knowledge of facts critically relevant to the causality of the corresponding situation, is necessarily scapegoating.

If anything, knowing how someone ended up in a disturbed mental state, especially if that person was exposed to memes similar to the ones you are exposed to, is a way of protecting yourself, by seeing the mistakes of others (and how they recovered from these mistakes) and learning from them.  As I will show later in this post, the vast majority of memes that contributed to my psychosis did not come from Michael Vassar; most were online (and likely to have been seen by people in my social cluster), generally known, and/or came up in my workplace.

I recall a disturbing conversation I had last year, where a friend (A) and I were talking to two others (B and C) on the phone.  Friend A and I had detected that the conversation had a "vibe" of not investigating anything, and A was asking whether anyone would investigate if A disappeared.  B and C repeatedly gave no answer regarding whether or not they would investigate; one reported later that they were afraid of making a commitment that they would not actually keep.  The situation became increasingly disturbing over the course of hours, with A repeatedly asking for a yes-or-no answer as to whether B or C would investigate, and B or C deflecting or giving no answer, until I got "triggered" (in the sense of PTSD) and screamed loudly.

There is a very disturbing possibility (with some evidence for it) here, that people may be picked off one by one (by partially-subconscious and partially-memetic influences, sometimes in ways they cooperate with, e.g. through suicide), with most everyone being too scared to investigate the circumstances.  This recalls fascist tactics of picking different groups of people off using the support of people who will only be picked off later.  (My favorite anime, Shinsekai Yori, depicts this dynamic, including the drive not to know about it, and psychosis-like events related to it, vividly.)

Some people in the community have died, and there isn't a notable amount of investigation into the circumstances of these people's deaths.  The dead people are, effectively, being written out of other people's memories, due to this antimemetic redirection of attention.  I could have easily been such a person, given my suicidality and the social environment in which I would have killed myself.  It remains to be seen how much people will in the future try to learn about the circumstances of actual and counterfactually possible deaths in their extended social circle.

While it's very difficult to investigate the psychological circumstances of people's actual deaths, it is comparatively easy to investigate the psychological circumstances of counterfactual deaths, since they are still alive to report on their mental state.  Much of the rest of this post will describe what led to my own semi-suicidal mental state.

Thinking about extreme AI torture scenarios was part of my job

It was and is common in my social group, and a requirement of my job, to think about disturbing possibilities including ones about AGI torturing people.  (Here I remind people of the content warning at the top of this post, although if you're reading this you've probably already encountered much of the content I will discuss).  Some points of evidence:

  1. Alice Monday, one of the earliest "community members" I extensively interacted with, told me that she seriously considered the possibility that, since there is some small but nonzero probability that an "anti-friendly" AGI would be created, whose utility function is the negative of the human utility function (and which would, therefore, be motivated to create the worst possible hell it could), perhaps it would have been better for life never to have existed in the first place.

  2. Eliezer Yudkowsky writes about such a scenario on Arbital, considering it important enough to justify specific safety measures such as avoiding representing the human utility function, or modifying the utility function so that "pessimization" (the opposite of optimization) would result in a not-extremely-bad outcome (see the brief formalization after this list).

  3. Nate Soares talked about "hellscapes" that could result from an almost-aligned AGI, which is aligned enough to represent parts of the human utility function such as the fact that consciousness is important, but unaligned enough that it severely misses what humans actually value, creating a perverted scenario of terrible uncanny-valley lives.

  4. MIRI leadership was, like Ziz, considering mathematical models involving agents pre-committing and extorting each other; this generalizes "throwing away one's steering wheel" in a Chicken game.  The mathematical details here were considered an "infohazard" not meant to be shared, in line with Eliezer's strong negative reaction to Roko's original post describing "Roko's Basilisk".

  5. Negative or negative-leaning utilitarians, a substantial subgroup of Effective Altruists (especially in Europe), consider "s-risks", risks of extreme suffering in the universe enabled by advanced technology, to be an especially important class of risk.  I remember reading a post arguing for negative-leaning utilitarianism by asking the reader to imagine being enveloped in lava (with one's body, including pain receptors, prevented from being destroyed in the process), to show that extreme suffering is much worse than extreme happiness is good.
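To make the "pessimization" idea from item 2 concrete (this is my own formalization, not Eliezer's wording): if the intended utility function is $U$, then pessimizing $U$ means selecting

$$a^* = \arg\min_a U(a) = \arg\max_a \bigl(-U(a)\bigr),$$

so a sign-flip error turns an optimizer of $U$ into an optimizer of $-U$.  The corresponding safety measure is to optimize a proxy $\tilde U$ constructed so that $\min_a \tilde U(a)$ is merely mediocre rather than maximally bad, so that even a sign-flipped agent does not produce the worst possible outcome.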

I hope this gives a flavor of the serious discussions that were (and are being) had about AI-caused suffering.  These considerations were widely regarded within MIRI as an important part of AI strategy, and I was explicitly expected to think about AI strategy as part of my job.  So it isn't a stretch to say that thinking about extreme AI torture scenarios was part of my job.

An implication of these models would be that it could be very important to imagine myself in the role of someone who is going to be creating the AI that could make everything literally the worst it could possibly be, in order to avoid doing that, and prevent others from doing so. This doesn't mean that I was inevitably going to have a psychotic breakdown. It does mean that I was under constant extreme stress that blurred the lines between real and imagined situations. In an ordinary patient, having fantasies about being the devil is considered megalomania, a non-sequitur completely disconnected from reality. Here the idea naturally followed from my day-to-day social environment, and was central to my psychotic breakdown. If the stakes are so high and you have even an ounce of bad in you, how could you feel comfortable with even a minute chance that at the last moment you might flip the switch on a whim and let it all burn?

(None of what I'm saying implies that it is morally bad to think about and encourage others to think about such scenarios; I am primarily attempting to trace causality, not blame.)

My social and literary environment drew my attention towards thinking about evil, hell, and psychological sadomasochism

While AI torture scenarios prompted me to think about hell and evil, I continued these thoughts using additional sources of information:

  1. Some people locally, including Anna Salamon, Sarah Constantin, and Michael Vassar, repeatedly discussed "perversity" or "pessimizing", the idea of intentionally doing the wrong thing.  Michael Vassar specifically named OpenAI's original mission as an example of the result of pessimization.  (I am now another person who discusses this concept.)

  2. Michael Vassar discussed the way "zero-sum games" relate to the social world; in particular, he emphasized that while zero-sum games are often compared to scenarios like people sitting at a table looking for ways to get a larger share of a pie of fixed size, this analogy fails because in a zero-sum game there is nothing outside the pie, so trying to get a larger share is logically equivalent to looking for ways to hurt other participants, e.g. by breaking their kneecaps (see the formalization after this list); this is much the same point that I made in a post about decision theory and zero-sum game theory.  He also discussed Roko's Basilisk as a metaphor for a common societal equilibrium in which people feel compelled to hurt each other or else risk being hurt first, with such an equilibrium being enforced by anti-social punishment.  (Note that it was common for other people, such as Paul Christiano, to discuss zero-sum games, although they didn't make the implications of such games as explicit as Michael Vassar did; Bryce Hidysmith discussed zero-sum games and, like Michael, made their implications clear.)

  3. Scott Alexander wrote Unsong, a fictional story in which [spoiler] the Comet King, a hero from the sky, comes to Earth, learns about hell, is incredibly distressed, and intends to destroy hell, but he is unable to properly enter it due to his good intentions.  He falls in love with a utilitarian woman, Robin, who decides to give herself up to Satan, so she will be in hell.  The Comet King, having fallen in love with her, realizes that he now has a non-utilitarian motive for entering hell: to save the woman he loves.  He becomes The Other King, a different identity, and does as much evil as possible to counteract all the good he has done over his life, to ensure he ends up in hell.  He dies, goes to hell, and destroys hell, easing Robin's suffering.  The story contains a vivid depiction of hell, in a chapter called "The Broadcast", which I found particularly disturbing.  I have at times, before and after psychosis, somewhat jokingly likened myself to The Other King.

  4. I was reading the work of M. Scott Peck at the time, including his book about evil; he wrote from a Christianity-influenced psychotherapeutic and adult developmental perspective, about people experiencing OCD-like symptoms that have things in common with "demon possession", where they have intrusive thoughts about doing bad things because they are bad.  He considers "evil" to be a curable condition.

  5. I was having discussions with Jack Gallagher, Bryce Hidysmith, and others about when to "write people off", stop trying to talk with them due to their own unwillingness to really listen.  Such writing-off has a common structure with "damning" people and considering them "irredeemable".  I was worried about myself being an "irredeemable" person, despite my friends' insistence that I wasn't.

  6. I was learning from "postrationalist" writers such as David Chapman and Venkatesh Rao about adult development past "Clueless" or "Kegan stage 4", which has commonalities with spiritual development.  I was attempting to overcome my own internalized social conditioning and self-deceiving limitations (both from before and after I encountered the rationalist community) in the months before psychosis.  I was interpreting Carl Jung's work on "shadow eating" and trying to see and accept parts of myself that might be dangerous or adversarial.  I was reading and learning from the Tao Te Ching that year.  I was also reading some of the early parts of Martin Heidegger's Being and Time, and discussing the implied social metaphysics with Sarah Constantin.

  7. Multiple people in my social circle were discussing sadomasochistic dynamics around forcing people (including oneself) to acknowledge things they were looking away from.  A blog post titled "Bayesomasochism" is representative; the author clarified (in a different medium) that such dynamics could cause psychosis in cases where someone insisted too hard on looking away from reality, and another friend confirms that this is consistent with their experience.  This has some similarities to the dynamics Eliezer writes about in Bayesian Judo, which details an anecdote of him continuing to argue when the other participant seemed to want to end the conversation, using Aumann's Agreement Theorem as a reason why they can't "agree to disagree"; the title implies that this interaction is in some sense a conflict.  There were discussions among my peers about the possibility of controlling people's minds, and "breaking" people to make them see things they were un-seeing (the terminology has some similarities to "jailbreaking").  Aella's recent post discusses some of the phenomenology of "frame control", which people including me were experiencing and discussing at the time (note that Aella calls herself a "conflict theorist" with respect to frame control).  This game that my peers and I thought we were playing sounds bad when I describe it this way, but there are certainly positive things about it, which seemed important to us given the social environment we were in at the time.  In that environment it was common for people to refuse to acknowledge important perceptible facts while claiming to be working on a project in which such facts were relevant: facts about people's Hansonian patterns of inexplicit agency, including "defensiveness" and "pretending"; facts about which institutional processes were non-corrupt enough to attain knowledge as precise as they claimed to have; facts about which plans to improve the future were viable or non-viable; and facts about rhetorical strategies such as those related to "frames".  It seemed like most people were "stuck" in a certain way of seeing and acting that seemed normal to them, without being able to go meta on it in a genre-savvy way.
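To spell out the zero-sum point from item 2 above (my own formalization, not anything Michael wrote down): in a two-player zero-sum game the payoffs satisfy $u_1(a) + u_2(a) = 0$ for every outcome $a$, so

$$\arg\max_a u_1(a) = \arg\max_a \bigl(-u_2(a)\bigr) = \arg\min_a u_2(a),$$

i.e. any action that increases one player's payoff is exactly an action that decreases the other's; there is no move that "grows the pie".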

These were ambient contributors, things in the social memespace I inhabited, not directed at me in particular.  Someone might infer from this that the people I mention (or the people I mentioned previously regarding AI torture) are especially dangerous.  But a lot of this is a selection effect, where the people socially closest to me influenced me the most, such that this is stronger evidence that these people were interesting to me than that they were especially dangerous.

I was morally and socially pressured not to speak about my stressful situation

One might get the impression from what I have written that the main problem was that I was exposed to harmful information, i.e. infohazards.  This was not the main problem.  The main problem was this in combination with not being able to talk about these things most of the time, in part due to the idea of "infohazards", and being given false and misleading information justifying this suppression of information.

Here's a particularly striking anecdote:

I was told, by Nate Soares, that the pieces to make AGI are likely already out there and someone just has to put them together.  He did not tell me anything about how to make such an AGI, on the basis that this would be dangerous.  Instead, he encouraged me to figure it out for myself, saying it was within my abilities to do so.  Now, I am not exactly bad at thinking about AGI; I had, before working at MIRI, gotten a Master's degree at Stanford studying machine learning, and I had previously helped write a paper about combining probabilistic programming with machine learning.  But figuring out how to create an AGI was and is so far beyond my abilities that this was a completely ridiculous expectation.

[EDIT: Multiple commentators have interpreted Nate as requesting I create an AGI design that would in fact be extremely unlikely to work but which would give a map that would guide research. However, creating such a non-workable AGI design would not provide evidence for his original proposition, that the pieces to make AGI are already out there and someone just has to put them together, since there have been many non-workable AGI designs created in the history of the AI field.]

[EDIT: Nate replies saying he didn't mean to assign high probability to the proposition that the tools to make AGI are already out there, and didn't believe he or I was likely to create a workable AGI design; I think my interpretation at the time was reasonable based on Gricean implicature, though.]

Imagine that you took a machine learning class and your final project was to come up with a workable AGI design.  And no, you can't get any hints in office hours or from fellow students, that would be cheating.  That was the situation I was being put in.  I have and had no reason to believe that Nate Soares had a workable plan given what I know of his AGI-related accomplishments.  His or my possession of such a plan would be considered unrealistic, breaking suspension of disbelief, even in a science fiction story about our situation.  Instead, I believe that I was being asked to pretend to have an idea of how to make AGI, knowledge too dangerous to talk about, as the price of admission to an inner ring of people paid to use their dangerous occult knowledge for the benefit of the uninitiated.

Secret theoretical knowledge is not necessarily unverifiable; in the 15th and 16th centuries, mathematicians with secret knowledge used it to win math duels.  Nate and others who claimed or implied that they had such information did not use it to win bets or make persuasive arguments against people who disagreed with them, but instead used the shared impression or vibe of superior knowledge to invalidate people who disagreed with them.

So I found myself in a situation where the people regarded as most credible were vibing about possessing very dangerous information, dangerous enough to cause harms not substantially less extreme than the ones I psychotically imagined, such as starting World War 3, and only not using or spreading it out of the goodness and wisdom of their hearts. If that were actually true, then being or becoming "evil" would have extreme negative consequences, and accordingly the value of information gained by thinking about such a possibility would be high.

It would be one thing if the problem of finding a working AGI design were a simple puzzle, which I could attempt to solve and almost certainly fail at without being overly distressed in the process.  But this was instead a puzzle tied to the fate of the universe.  This had implications not only for my long-run values, but for my short-run survival.  A Google employee adjacent to the scene told me a rumor that SIAI researchers had previously discussed assassinating AGI researchers (including someone who had previously worked with SIAI and was working on an AGI project that they thought was unaligned) if they got too close to developing AGI.  These were not concrete plans for immediate action, but they nonetheless constituted serious discussion of the topic of assassination and of the conditions under which it might be the right thing to do.  Someone who thought that MIRI was for real would expect such hypothetical discussions to be predictive of future actions.  This means that I ought to have expected that if MIRI considered me to be spreading dangerous information that would substantially accelerate AGI or sabotage FAI efforts, there was a small but non-negligible chance that I would be assassinated.  Under that assumption, imagining a scenario in which I might be assassinated by a MIRI executive (as I did) was the sort of thing a prudent person in my situation might do to reason about the future, although I was confused about the likely details.  I have not heard such discussions personally (although I heard a discussion about whether starting a nuclear war would be preferable to allowing UFAI to be developed), so it's possible that they are no longer happening; also, shorter timelines imply that more AI researchers are plausibly close to AGI.  (I am not morally condemning all cases of assassinating someone who is close to destroying the world, which may in some cases count as self-defense; rather, I am noting a fact about my game-theoretic situation relevant to my threat model at the time.)

The obvious alternative hypothesis is that MIRI is not for real, and therefore hypothetical discussions about assassinations were just dramatic posturing.  But I was systematically discouraged from talking with people who doubted that MIRI was for real or publicly revealing evidence that MIRI was not for real, which made it harder for me to seriously entertain that hypothesis.

In retrospect, I was correct that Nate Soares did not know of a workable AGI design.  A 2020 blog post stated:

At the same time, 2020 saw limited progress in the research MIRI's leadership had previously been most excited about: the new research directions we started in 2017. Given our slow progress to date, we are considering a number of possible changes to our strategy, and MIRI's research leadership is shifting much of their focus toward searching for more promising paths.

And a recent announcement of a project subsidizing creative writing stated:

I (Nate) don't know of any plan for achieving a stellar future that I believe has much hope worth speaking of.

(There are perhaps rare scenarios where MIRI leadership could have known how to build AGI but not FAI, and/or could be hiding the fact that they have a workable AGI design, but no significant positive evidence for either of these claims has emerged since 2017 despite the putative high economic value and demo-ability of precursors to AGI, and in the second case my discrediting of this claim is cooperative with MIRI leadership's strategy.)

Here are some more details, some of which are repeated from my previous post:

  1. I was constantly encouraged to think very carefully about the negative consequences of publishing anything about AI, including about when AI is likely to be developed, on the basis that rationalists talking openly about AI would cause AI to come sooner and kill everyone.  (In a recent post, Eliezer Yudkowsky explicitly says that voicing "AGI timelines" is "not great for one's mental health", a new additional consideration for suppressing information about timelines.)  I was not encouraged to think very carefully about the positive consequences of publishing anything about AI, or the negative consequences of concealing it.  While I didn't object to consideration of the positive effects of secrecy, it seemed to me that secrecy was being prioritized above making research progress at a decent pace, which was a losing strategy in terms of differential technology development, and implied that naive attempts to research and publish AI safety work were net-negative.  (A friend of mine separately visited MIRI briefly and concluded that they were primarily optimizing, not for causing friendly AI to be developed, but for not being responsible for the creation of an unfriendly AI; this is a very normal behavior in corporations, of prioritizing reducing liability above actual productivity.)

  2. Some specific research, e.g. some math relating to extortion and precommitments, was kept secret under the premise that it would lead to (mostly unspecified) negative consequences.

  3. Researchers were told not to talk to each other about research, on the basis that some people were working on secret projects and would have to say so if they were asked what they were working on.  Instead, we were to talk to Nate Soares, who would connect people who were working on similar projects.  I mentioned this to a friend later, who considered it a standard cult abuse tactic: making sure one's victims don't talk to each other.

  4. Nate Soares also wrote a post discouraging people from talking about the ways they believe others to be acting in bad faith.  This is to some extent responding to Ben Hoffman's criticisms of Effective Altruism and its institutions, such that Ben Hoffman responded with his own post clarifying that not all bad intentions are conscious.

  5. Nate Soares expressed discontent that Michael Vassar was talking with "his" employees, distracting them from work [EDIT: Nate says he was talking about someone other than Michael Vassar; I don't remember who told me it was Michael Vassar.].  Similarly, Anna Salamon expressed discontent that Michael Vassar was criticizing ideologies and people that were being used as coordination points, and hyperbolically said he was "the devil".  Michael Vassar seemed at the time (and in retrospect) to be the single person who was giving me the most helpful information during 2017.  A central way in which Michael was helpful was by criticizing the ideology of the institution I was working for.  Accordingly, central leaders threatened my ability to continue talking with someone who was giving me information outside the ideology of my workplace and social scene, which was effectively keeping me in an institutional enclosure.  Discouraging contact with people who might undermine the shared narrative is a common cult tactic.

  6. Anna Salamon frequently got worried when an idea was discussed that could have negative reputational consequences for her or MIRI leaders.  She had many rhetorical justifications for suppressing such information.  This included the idea that, by telling people information that contradicted Eliezer Yudkowsky's worldview, Michael Vassar was causing people to be uncertain in their own heads about who their leader was, which would lead to motivational problems ("akrasia").  (I believe this is a common position in startup culture, e.g. Peter Thiel believes it is important for workers at a startup to know who the leader is in part so they know who to blame if things go bad; if this model applied to MIRI, it would imply that Anna Salamon was setting up Eliezer as the designated scapegoat and encouraging others to do so as well.)

(I mention Nate Soares frequently not to indicate that he acted especially badly compared to others in positions of institutional authority (I don't think he did), but because he was particularly influential to my mental state in the relevant time period, partially due to being my boss at the time.  It is important not to make the fundamental attribution error here by attributing to him personally what were features of the situation he was in.)

It is completely unsurprising, to normal people who think about mental health, that not being able to talk about something concerning and important to you is a large risk factor for mental health problems.  It is stressful in the way that being a spy, handling secrets that could put others at risk (and having concealed conflicts with people), is stressful.  I infer that Jay's feeling that their experience was an "infohazard", not rightly discussed openly, contributed to their mental distress; I myself during my psychosis was very distressed at the idea that my mental state was being "covered up" (and perhaps should be), partially due to its dangerous ability to influence other people.  I find that the more I can talk about my experiences, the more healthy and calm I feel about them, and I haven't found it to cause mental health problems in others when I tell them about it.

On top of that, the secrecy policies encouraged us to be very suspicious of our own and each other's motives.  Generally, if someone has good motives, their actions will be net-positive, and their gaining information and capacities will be good for themselves and others; if they have bad motives, their actions will be net-negative, and their gaining information and capacities will be bad for themselves and others.  MIRI researchers were being very generally denied information (e.g. told not to talk to each other) in a way that makes more sense under a "bad motives" hypothesis than a "good motives" hypothesis.  Alternative explanations offered were not persuasive.  It is accordingly unsurprising that I focused a lot of attention on the question of whether I had "bad motives" and what their consequences would be, up to and during psychosis.

Did anyone I worked with express concern that any of this would be bad for my mental state?  The best example of MIRI leadership looking after my mental health with respect to these issues that I can think of was their referring me to Anna Salamon for instructions on how to keep secrets, psychologically.  I did not follow up on this offer because I did not trust Anna Salamon to prioritize helping me and helping me accomplish MIRI's mission over her political loyalties.  In any case, the suggestion literally amounts to telling me to learn to shut up better, which I think would have made things worse for me on net.

A friend later made the observation that "from a naive perspective, it's not obvious that AI alignment is a very important problem; from a non-naive perspective, 'someone might build an unfriendly AI' is a justification for silencing everyone, although the non-naive perspective is incapable of itself figuring out how to build AGI", which resonated with me.

MIRI asked a lot from its employees and donors on the basis of extraordinary claims about its potential impact.  The information MIRI employees and donors could have used to evaluate those claims was suppressed on the basis that the information was dangerous.  The information necessary to evaluate the justification for that suppression was itself suppressed.  This self-obscuring process created a black hole at the center of the organization that sucked in resources and information, but never let a sufficient justification escape for the necessity of the black hole.  In effect, MIRI leadership asked researchers, donors, and other supporters to submit to their personal authority.

Some of what I am saying shows that I have and had a suspicious outlook towards people including my co-workers.  Scott Alexander's comment blames Michael Vassar for causing me to develop such an outlook:

Since then, [Michael has] tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs.

While talking with Michael and others in my social group (such as Jack Gallagher, Olivia Schaeffer, Alice Monday, Ben Hoffman, and Bryce Hidysmith; all these people talked with Michael sometimes) is part of how I developed such an outlook, it is also the case that, had I not been able to figure out for myself that there were conflicts going on around me, I would not have been fit for the job I was hired to do.

MIRI's mission is much more ambitious than the mission of the RAND Corporation, whose objectives included preventing nuclear war between major powers and stabilizing the US for decades under a regime of cybernetics and game theory.  The main thinkers of RAND Corp (including John Von Neumann, John Nash, Thomas Schelling, and ambiguously Norbert Wiener) developed core game theoretic concepts (including conflict-theoretic concepts, in the form of zero-sum game theory, brinkmanship, and cybernetic control of people) and applied them to social and geopolitical situations.

John Nash, famously, developed symptoms of paranoid schizophrenia after his work in game theory. A (negative) review of A Beautiful Mind describes the dysfunctionally competitive and secretive Princeton math department Nash found himself in:

Persons in exactly the same area of research also don't tend to talk to each other. On one level they may be concerned that others will steal their ideas. They also have a very understandable fear of presenting a new direction of inquiry before it has matured, lest the listening party trample the frail buds of thought beneath a sarcastic put-down.

When an idea has developed to the point where they realize that they may really be onto something, they still don't want to talk about it.  Eventually they want to be in a position to retain full credit for it.  Since they do need feedback from other minds to advance their research, they frequently evolve a 'strategy' of hit-and-run tactics, whereby one researcher guards his own ideas very close to the chest, while trying to extract from the other person as much of what he knows as possible.

After Nash left, RAND Corporation went on to assist the US military in the Vietnam War; Daniel Ellsberg, who worked at RAND Corporation, leaked the Pentagon Papers in 1971, which showed a large unreported expansion in the scope of the war, and that the main objective of the war was containing China rather than securing a non-communist South Vietnam.  Ellsberg much later published The Doomsday Machine, detailing US nuclear war plans, including the fact that approval processes for launching nukes were highly insecure (valuing increasing the probability of launching retaliatory strikes over minimizing the rate of accidental launches), the fact that the US's only nuclear war plan involved a nuclear genocide of China whether or not China had attacked the US, and the fact that the US Air Force deliberately misinformed President Kennedy about this plan in violation of the legal chain of command.  At least some of the impetus for plans like this came from RAND Corporation, due to among other things the mutually assured destruction doctrine, and John Von Neumann's advocacy of pre-emptively nuking Russia.  Given that Ellsberg was the only major whistleblower, and delayed publishing critical information for decades, it is improbable that complicity with such genocidal plans was uncommon at RAND Corporation, and certain that such complicity was common in the Air Force and other parts of the military apparatus.

It wouldn't be a stretch to suggest that Nash, through his work in game theory, came to notice more of the ways people around him (both at the Princeton math department and at the RAND Corporation) were acting against the mission of the organization in favor of egoic competition with each other and/or insane genocide.  Such a realization, if understood and propagated without adequate psychological support, could easily cause symptoms of paranoid schizophrenia.  I recently discussed Nash on Twitter:

You're supposed to read the things John Nash writes, but you're not supposed to see the things he's talking about, because that would make you a paranoid schizophrenic.

MIRI seemed to have a substantially conflict-theoretic view of the broad situation, even if not the local situation.  I brought up the possibility of convincing DeepMind people to care about AI alignment.  MIRI leaders including Eliezer Yudkowsky and Nate Soares told me that this was overly naive, that DeepMind would not stop dangerous research even if good reasons for this could be given.  Therefore (they said) it was reasonable to develop precursors to AGI in-house to compete with organizations such as DeepMind in terms of developing AGI first.  So I was being told to consider people at other AI organizations to be intractably wrong, people who it makes more sense to compete with than to treat as participants in a discourse.

[EDIT: Nate clarifies that he was trying to say that, even if it were possible to convince people to care about alignment, it might take too long, and so this doesn't imply a conflict theory. I think the general point that time-to-converge-beliefs is relevant in a mistake theory is true, although in my recollection of the conversation Nate said it was intractable to convince people, not just that it would take a long time; also, writing arguments explicitly allows many people to read the same arguments, which makes scaling to more people easier.]

The difference between the beliefs of MIRI leadership and Michael Vassar was not exactly mistake theory versus conflict theory.  Rather, MIRI's conflict theory made an unprincipled exception for the situation inside MIRI, exclusively modeling conflict between MIRI and other outside parties, while Michael Vassar's model did not make such exceptions.  I was more interested in discussing Michael's conflict theory with him than discussing MIRI leadership's conflict theory with them, on the basis that it better reflected the situation I found myself in.

MIRI leadership was not offering me a less dark worldview than Michael Vassar was.  Rather, this worldview was so dark that it asserted that many people would be destroying the world on fairly short timescales in a way intractable to reasoned discourse, such that everyone was likely to die in the next 20 years, and horrible AI torture scenarios might (with low probability) result depending on the details.  By contrast, Michael Vassar thinks that it is common in institutions for people to play zero-sum games in a fractal manner, which makes it unlikely that they could coordinate well enough to cause such large harms.  Michael has also encouraged me to try to reason with and understand the perspective of people who seem to be behaving destructively instead of simply assuming that the conflict is unresolvable.

And, given what I know now, I believe that applying a conflict theory to MIRI itself was significantly justified.  Nate, just last month (prompted by my talking to people on Twitter), admitted that he posted "political banalities" on the MIRI blog during the time I was there.  I was concerned about the linked misleading statement in 2017 and told Nate Soares and others about it, although Nate Soares insisted that it was not a lie, because technically the word "excited" could indicate the magnitude of a feeling rather than the positiveness of it.  While someone bullshitting on the public Internet (to talk up an organization that by Eliezer's account "trashed humanity's chances of survival") doesn't automatically imply they lie to their coworkers in person, I did not and still don't know where Nate is drawing the line here.

Anna Salamon, in a comment on my post, discusses "corruption" throughout CFAR's history:

It's more that I think CFAR's actions were far from the kind of straight-forward, sincere attempt to increase rationality, compared to what people might have hoped for from us, or compared to what a relatively untraumatized 12-year-old up-and-coming-LWer might expect to see from adults who said they were trying to save the world from AI via learning how to think...I didn't say things I believed false, but I did choose which things to say in a way that was more manipulative than I let on, and I hoarded information to have more control of people and what they could or couldn't do in the way of pulling on CFAR's plans in ways I couldn't predict, and so on. Others on my view chose to go along with this, partly because they hoped I was doing something good (as did I), partly because it was way easier, partly because we all got to feel as though we were important via our work, partly because none of us were fully conscious of most of this.

(It should go without saying that, even if suspicion was justified, that doesn't rule out improvement in the future; Anna and Nate's transparency about past behavior here is a step in the right direction.)

Does developing a conflict theory of my situation necessitate developing the exact trauma complex that I did?  Of course not.  But the circumstances that justify a conflict theory make trauma much more likely, and vice versa. Traumatized people are likely to quickly update towards believing their situation is adversarial ("getting triggered") when receiving modest evidence towards this, pattern-matching the new potentially-adversarial situation to the previous adversarial situation(s) they have encountered in order to generate defensive behavioral patterns.

I was confused and constrained after tasking people I most trusted with helping take care of me early in psychosis

The following events took place in September-October 2017, 3-4 months after I had left MIRI in June.

I had a psychedelic trip in Berkeley, during which I discussed the idea of "exiting" civilization, the use of spiritual cognitive modalities to improve embodiment, the sense in which "identities" are cover stories, and multi-perspectival metaphysics.  I lost a night of sleep, decided to "bravely" visit a planned family gathering the next day despite my sleep loss (partially as a way to overcome neurotic focus on downsides), lost another night of sleep, came back to Berkeley the next day, and lost a third night of sleep.  After losing three nights of sleep, I started perceiving hallucinations such as a mirage-like effect in the door of my house ("beckoning me", I thought).  I walked around town and got lost, noticed my phone was almost out of battery, and called Jack Gallagher for assistance.  He took me to his apartment; I rested in his room while being very concerned about my fate (I was worried that in some sense "I" or "my identity" was on a path towards death).  I had a call with Bryce Hidysmith that alleviated some of my anxieties, and I excitedly talked with Ben Hoffman and Jack Gallagher as they walked me back to my house.

That night, I was concerned that my optimization might be "perverse" in some way, where, when I intended to do something, part of my brain would cause the opposite to happen.  I attempted to focus my body and intentions so as to be able to take actions more predictably.  I spent a number of hours lying down, perhaps experiencing hypnagogia, although I'm not sure whether or not I actually slept.  That morning, I texted my friends that I had slept.  Ben Hoffman came to my house in the morning and told me that my housemate had informed him that I had "not slept", because the housemate had heard me walking around at night.  (Technically, I could have slept during the times he did not hear me walking around.)  Given my disorganized state, I could not think of a better response than "oops, I lied".  I subsequently collapsed and writhed on the floor until he led me to my bed, which indicates that I had not slept well.

Thus began multiple days of me being very anxious about whether I could sleep, in part because people around me would apply some degree of coercion to me until they thought I was "well", which required sleeping.  Such anxiety made it harder to sleep.  I spent large parts of the daytime in bed, which was likely worse for getting to sleep than, for example, taking a walk.

Here are some notable events during that week before I entered the psych ward:

  1. Zack Davis gave me a math test: could I prove e^(iπ) = -1?  I gave a geometric argument: "e^(iπ) means spinning π radians clockwise about the origin in the complex plane starting from 1", and I drew a corresponding diagram.  Zack said this didn't show I could do math, since I could have remembered it, and asked me to give an algebraic argument.  I failed to give one (and I think I would have failed pre-psychosis as well).  He told me that I should have used the Taylor series expansion of e^x (a sketch of the kind of algebraic argument he presumably had in mind appears after this list).  I believe this exchange was used to convince other people taking care of me that I was unable to do math, which was unreasonable given the difficulty of the problem and the lack of calibration on an easier problem.  This worsened communication in part by causing me to be more afraid that people would justify coercing me (and not trying to understand me) on the basis of my lack of reasoning ability.  (Days later, I tested myself with programming "FizzBuzz" and was highly distressed to find that my program was malfunctioning and I couldn't successfully debug it, with my two eyes seeming to give me different pictures of the computer screen.)

  2. I briefly talked with Michael Vassar (for less than an hour); he offered useful philosophical advice about basing my philosophy on the capacity to know instead of on the existence of fundamentally good or bad people, and made a medication suggestion (for my sleep issues) that turned out to intensify the psychosis in a way that he might have been able to predict had he thought more carefully, although I see that it was a reasonable off-the-cuff guess given the anti-anxiety properties of this medication.

  3. I felt like I was being "contained" and "covered up", which included people not being interested in learning about where I was mentally.  (Someone taking care of me confirmed years later that, yes, I was being contained, and people were covering up the fact that there was a sick animal in the house.)  Ben Hoffman opened the door, which let sunlight into the doorway.  I took it as an invitation and stepped outside.  The light was wonderful, giving me perhaps the most ecstatic experience I have had in my life, as I sensed light around my mind, and I felt relieved from being covered up.  I expounded on the greatness of the sunlight, referencing Sarah's post on Ra.  Ben Hoffman encouraged me to pay more attention to my body, at which point the light felt like it concentrated into a potentially-dangerous sharp vertical spike going through my body.  (This may technically be or have some relation to a Kundalini awakening, though I haven't confirmed this; there was a moment around this time that I believe someone around me labeled as a "psychotic break".)  I felt like I was performing some sort of light ritual navigating between revelation and concealment, and subsequently believed I had messed up the ritual terribly and became ashamed.  Sometime around then I connected what I saw due to the light with the word "dasein" (from Heidegger), and shortly afterward connected "dasein" to the idea that zero-sum games are normal, such as in sports.  I later connected the light to the idea that everyone else is the same person as me (and I heard my friends' voices in another room in a tone as if they were my own voice).

  4. I was very anxious and peed on a couch at some point and, when asked why, replied that I was "trying to make things worse".

  5. I was in my bed, ashamed and still, staring at the ceiling, afraid that I would do something bad.  Sarah Constantin sat on my bed and tried to interact with me, including by touching my fingers.  I felt very afraid of interacting with her because I thought I was steering in the wrong direction (doing bad things because they are bad) and might hurt Sarah or others.  I felt something behind my eyes and tongue turn inward as I froze up more and more, sabotaging my own ability to influence the world, becoming catatonic (a new mind-altering medication that was suggested to me at the time, different from the one Michael suggested, might also have contributed to the catatonia).  Sarah noticed that I was breathing highly abnormally and called the ER.  While the ambulance took me there I felt like I could only steer in the wrong direction, and feared that if I continued I might become a worse person than Adolf Hitler.  Sarah came with me in the ambulance and stayed with me in the hospital room; after I got IV benzos, I unfroze.  The hospital subsequently sent me home.

  6. One night I decided to open my window, jump out, and walk around town; I thought I was testing the hypothesis that things were very weird outside and the people in my house were separating me from the outside.  I felt like I was bad and that perhaps I should walk towards water and drown, though this was not a plan I could have executed on.  Ben Hoffman found me and walked me back home.  Someone called my parents, who arrived the next day and took me to the ER (I was not asked if I wanted to be psychiatrically institutionalized); I was catatonic in the ER for about two days and was later moved to a psychiatric hospital.
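
For reference, here is a minimal sketch of the kind of algebraic argument Zack was presumably asking for (my reconstruction, not something either of us produced at the time): plug ix into the Taylor series for e^x and group the real and imaginary parts.

    e^{ix} = \sum_{n=0}^{\infty} \frac{(ix)^n}{n!}
           = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right)
             + i\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\right)
           = \cos x + i\,\sin x,

    \text{so that}\quad e^{i\pi} = \cos\pi + i\,\sin\pi = -1.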

While those who were taking care of me didn't act optimally, the situation was incredibly confusing for me and for them, and I believe they did better than most other Berkeley rationalists would have done, who in turn would have done better than most members of the American middle class.

Are Michael Vassar and friends pro-psychosis gnostics?

Scott asserts:

Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.

I have only heard Michael Vassar use the word "jailbreak" when discussing Ziz, but he believes it's possible to use psychedelics to better see deception and enhance one's ability to use one's own mind independently, which I find to be true in my experience.  This is a common belief among people who take psychedelics, and among psychotherapeutic organizations including MAPS and Johns Hopkins, which have published conventional academic studies demonstrating that psychedelic treatment regimens widely reported to induce "ego death" have strong psychiatric benefits.  Michael Vassar believes "tough conversations" that challenge people's defensive nonsense (some of which is identity-based) are necessary for psychological growth, in common with psychotherapists and with some MIRI/CFAR people such as Anna Salamon.

I had tried psychedelics before talking significantly with Michael, in part due to a statement I heard from a friend (who wasn't a CFAR employee but who did some teaching at CFAR events) along the lines of "CFAR can't legally recommend that you try [a specific psychedelic], but..." (I don't remember what followed the "but"), and in part due to suggestions from other friends.

"Infinitely corrupt and conformist and traumatizing" is hyperbolic (infinite corruption would leave nothing to steal), though Michael Vassar and many of his friends believe large parts of normal society (in particular in the professional-managerial class) are quite corrupt and conformist and traumatizing.  I mentioned in a comment on the post one reason why I am not sad that I worked at MIRI instead of Google:

I've talked a lot with someone who got pretty high in Google's management hierarchy, who seems really traumatized (and says she is) and who has a lot of physiological problems, which seem overall worse than mine. I wouldn't trade places with her, mental health-wise.

I have talked with other people who have worked in corporate management, who have corroborated that corporate management traumatizes people into playing zero-sum games.  If Michael and I are getting biased samples here and high-level management at companies like Google is actually a fine place to be in the usual case, then that indicates that MIRI is substantially worse than Google as a place to work.  Iceman in the thread reports that his experience as a T-5 (apparently a "Senior" non-management rank) at Google "certainly traumatized" him, though this was less traumatizing than what he gathers from Zoe Curzi's or my reports, which may themselves be selected for being especially severe due to the fact that they are being written about. Moral Mazes, an ethnographic study of corporate managers written by sociology professor Robert Jackall, is also consistent with my impression.

Scott asserts that Michael Vassar treats borderline psychosis as an achievement:

The combination of drugs and paranoia caused a lot of borderline psychosis, which the Vassarites mostly interpreted as success ("these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird").

A strong form of this is contradicted by Zack Davis's comment:

As some closer-to-the-source counterevidence against the "treating as an achievement" charge, I quote a 9 October 2017 2:13 p.m. Signal message in which Michael wrote to me:

Up for coming by? I'd like to understand just how similar your situation was to Jessica's, including the details of her breakdown. We really don't want this happening so frequently.

(Also, just, whatever you think of Michael's many faults, very few people are cartoon villains that want their friends to have mental breakdowns.)

A weaker statement is true: Michael Vassar believes that mental states somewhat in the direction of psychosis, such as those experienced by family members of clinical schizophrenics, are likely to be more intellectually productive over time.  This is not an especially concerning or absurd belief.  Scott Alexander himself cites research showing greater mental modeling and verbal intelligence in relatives of schizophrenics:

In keeping with this theory, studies find that first-degree relatives of autists have higher mechanistic cognition, and first-degree relatives of schizophrenics have higher mentalistic cognition and schizotypy. Autists' relatives tend to have higher spatial compared to verbal intelligence, versus schizophrenics' relatives who tend to have higher verbal compared to spatial intelligence. High-functioning schizotypals and high-functioning autists have normal (or high) IQs, no unusual number of fetal or early childhood traumas, and the usual amount of bodily symmetry; low-functioning autists and schizophrenics have low IQs, increased history of fetal and early childhood traumas, and increased bodily asymmetry indicative of mutational load.

(He also mentions John Nash as a particularly interesting case of mathematical intelligence being associated with schizophrenic symptoms, in common with my own comparison of myself to John Nash earlier in this post.)

I myself prefer to be sub-clinically schizotypal (which online self-diagnosis indicates I am) to the alternative of being non-schizotypal, which I understand is not a preference shared by everyone.  There is a disagreement between Michael Vassar and Scott Alexander about the tradeoffs involved, but they agree there are both substantial advantages and disadvantages to mental states somewhat in the direction of schizophrenia.

Is Vassar-induced psychosis a clinically significant phenomenon?

Scott Alexander draws a causal link between Michael Vassar and psychosis:

Since then, [Vassar has] tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combination of drugs and paranoia caused a lot of borderline psychosis, which the Vassarites mostly interpreted as success ("these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird"). Occasionally it would also cause full-blown psychosis, which they would discourage people from seeking treatment for, because they thought psychiatrists were especially evil and corrupt and traumatizing and unable to understand that psychosis is just breaking mental shackles.

(to be clear: Michael Vassar and our mutual friends decided to place me in a psychiatric institution after I lost a week of sleep, which is at most a mild form of "discourag[ing] people from seeking treatment"; it is in many cases reasonable to try at-home treatment if it could prevent institutionalization.)

I have given an account in this post of the causality of my psychosis, in which Michael Vassar is relevant, and so are Eliezer Yudkowsky, Nate Soares, Anna Salamon, Sarah Constantin, Ben Hoffman, Zack Davis, Jack Gallagher, Bryce Hidysmith, Scott Alexander, Olivia Schaeffer, Alice Monday, Brian Tomasik, Venkatesh Rao, David Chapman, Carl Jung, M. Scott Peck, Martin Heidegger, Lao Tse, the Buddha, Jesus Christ, John Von Neumann, John Nash, and many others.  Many of the contemporary people listed were/are mutual friends of myself and Michael Vassar, which is mostly explained by myself finding these people especially helpful and interesting to talk to (correlated with myself and them finding Michael Vassar helpful and interesting to talk to), and Michael Vassar connecting us with each other.

Could Michael Vassar have orchestrated all this?  That would require him to scheme so well that he determined the behavior of many others while having very little direct contact with me at the time of the psychosis.  The hypothesis that he is Xanatos, directing the entire social scene I was part of through hidden stratagems, is incredibly unlikely on priors, and far out of line with how effective I have seen him to be at getting people to cooperate with his intentions.

Other people who have had some amount of interaction with Michael Vassar and who have been psychotic commented in the thread.  Devi Borg commented that the main contributor to her psychosis was "very casual drug use that even Michael chided me for".  Zack Davis commented that "Michael had nothing to do with causing" his psychosis.

Eric Bruylant commented that his thoughts related to Michael Vassar were "only one mid sized part of a much larger and weirder story...[his] psychosis was brought on by many factors, particularly extreme physical and mental stressors and exposure to various intense memes", that "Vassar was central to my delusions, at the time of my arrest I had a notebook in which I had scrawled 'Vassar is God' and 'Vassar is the Devil' many times"; he only mentioned sparse direct contact with Michael Vassar himself, mentioning a conversation in which "[Michael] said my 'pattern must be erased from the world' in response to me defending EA".

While on the surface Eric Bruylant's case seems to be the one most influenced by Michael Vassar, the effect would have had to be indirect, given how little direct conversation he had with Michael, and he mentions an intermediary talking to both him and Michael.  Anna Salamon's hyperbolic statement that Michael is "the devil" may be causally related to Eric's impressions of Michael, especially given the scrawling of "Vassar is God" and "Vassar is the Devil".  It would be very surprising, showing an extreme degree of mental prowess, for Michael Vassar to be able to cause a psychotic break two hops out in the social graph through his own agency; it is much more likely that the vast majority of the relevant agency was due to other people.

I have heard of 2 cases of psychosis in former MIRI employees in 2017-2021 who weren't significantly talking with Michael or Ziz (I referenced one in my original post and have since then learned of another).

As I pointed out in a reply to Scott Alexander, if such strong mental powers are possible, that lends plausibility to the psychological models people at Leverage Research were acting on, in which people can spread harmful mental objects to each other. Scott's comment that I reply to admits that attributing such strong psychological powers to Michael Vassar is "very awkward" for liberalism.

Such "liberalism" is hard for me to interpret in light of Scott's commentary on my pre-psychosis speech:

Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she was borderline psychotic, and comparing this to Leverage, who she thinks did a better job by promoting an environment where people accepted these ideas. I think MIRI was correct to be concerned and (reading between the lines) telling her to seek normal medical treatment, instead of telling her that demons were real and she was right to worry about them, and I think her disagreement with this is coming from a belief that psychosis is potentially a form of useful creative learning. While I don't want to assert that I am 100% sure this can never be true, I think it's true rarely enough, and with enough downside risk, that treating it as a psychiatric emergency is warranted.

[EDIT: I originally misinterpreted "it" in the last sentence as referring to "talk about demons and auras", not "psychosis", and the rest of this section is based on that incorrect assumption; Scott clarified that he meant the latter.]

I commented that this was effectively a restriction on my ability to speak freely, in contradiction with the liberal right to free speech.  Given that a substantial fraction of the general public (e.g. New Age people and Christians, groups that overlap with psychiatrists) discuss "auras" and "demons", it is inappropriate to treat such discussion as cause for a "psychiatric emergency", a judgment substantially increasing the risk of involuntary institutionalization; that would be a case of a minority ideological community using the psychiatric system to enforce its local norms.  If Scott were arguing that talk of "auras" and "demons" is a psychiatric emergency based on widely-accepted professional standards, he would need to name a specific DSM condition and argue that this talk constitutes symptoms of that condition.

In the context of MIRI, I was in a scientistic math cult (or, to put it less provocatively, a "high-enthusiasm ideological community"), so seeing outside the ideology of this cult ("community") might naturally involve thinking about non-scientistic concepts; enforcing "talking about auras and demons is a psychiatric emergency" would, accordingly, be enforcing the cult's ("local") ideological boundaries using state force vested in professional psychiatrists for the purpose of protecting the public.

While Scott disclaims the threat of involuntary psychiatric institutionalization later in the thread, he did not accordingly update the original comment to clarify which statements he still endorses.

Scott has also attributed beliefs to me that I have never held or claimed to have held.  I never asserted that demons are real.  I do not think that it would have been helpful for people at MIRI to pretend that they thought demons were real.  The nearest thing I can think of having said is that the hypothesis that "demons" were responsible for Eric Bruylant's psychosis (a hypothesis offered by Eric Bruylant himself) might correspond to some real mental process worth investigating, and my complaint is that I and everyone else were discouraged from openly investigating such things and forming explicit hypotheses about them.  It is entirely reasonable to be concerned about things conceptually similar to "demon possession" when someone has just attacked a mental health worker shortly after claiming to be possessed by a demon; discouraging such talk prevents people in situations like the one I was in from protecting their mental health by modeling threats to it.

Likewise, if someone had tried to explain why they disagreed with the specific things I said about auras (which did not include an assertion that they were "real," only that they were not a noticeably more imprecise concept than "charisma"), that would have been a welcome and helpful response.

Scott Alexander has, at a Slate Star Codex meetup, said that Michael is a "witch" and/or does powerful "witchcraft".  This is clearly of the same kind as speech about "auras" and "demons".  (The Sequences post on Occam's Razor, relevantly, mentions "The lady down the street is a witch; she did it" as an example of a non-parsimonious explanation.)

Given this, and given that some central rationalists such as Anna Salamon and multiple other CFAR employees used woo-adjacent language more often than I ever did, I can't believe that a standard against woo-adjacent language is being applied symmetrically.

Conclusion

I hope reading this gives a better idea of the actual causal factors behind my psychosis.  While Scott Alexander's comment contained some relevant information, and prompted me to write this post containing much more, the majority of his specific claims were false or irrelevant in context.

While much of what I've said about my workplace is negative (given that I am specifically focusing on what was stressing me out), there were, of course, large benefits to my job: I was able to research very interesting philosophical topics with very smart and interesting people, while being paid substantially more than I could get in academia; I was learning a lot even while having confusing conflicts with my coworkers.  I think my life has become more interesting as a result of having worked at MIRI, and I have strong reason to believe that working at MIRI was overall good for my career.

I will close by poetically expressing some of what I learned:

If you try to have thoughts,

You'll be told to think for the common good;

If you try to think for the common good,

You'll be told to serve a master;

If you try to serve a master,

Their inadequacy will disappoint you;

If their inadequacy disappoints you,

You'll try to take on the responsibility yourself;

If you try to take on the responsibility yourself,

You'll fall to the underworld;

If you fall to the underworld,

You'll need to think to benefit yourself;

If you think to benefit yourself,

You'll ensure that you are counted as part of "the common good".

Postscript

Eliezer's comment in support of Scott's criticism was a reply to Aella saying he shared her (negative) sense about my previous post.  If an account by Joshin is correct, we have textual evidence about this sense:

As regards Leverage: Aella recently crashed a party I was attending. This, I later learned, was the day that Jessica Taylor's post about her experiences at CFAR and MIRI came out. When I sat next to her, she was reading that post. What follows is my recollection of our conversation.

Aella started off by expressing visible, audible dismay at the post. "Why is she doing this? This is undermining my frame. I'm trying to do something and she's fucking it up."

I asked her: "why do you do this?"

She said: "because it feels good. It feels like mastery. Like doing a good work of art or playing an instrument. It feels satisfying."

I said: "and do you have any sense of whether what you're doing is good or not?"

She said: "hahaha, you and Mark Lippmann both have the 'good' thing, I don't really get it."

I said: "huh, wow. Well, hey, I think your actions are evil; but on the other hand, I don't believe everything I think."

She said: "yeah, I don't really mind being the evil thing. Seems okay to me."

[EDIT: See Aella's response; she says she didn't say the line about undermining frames, and that use of the term "evil" has more context, and that the post overall was mostly wrong. To disambiguate her use of "evil", I'll quote the relevant part of her explanatory blog post below.]

I entered profound silence, both internal and external. I lost the urge to evangelize, my inner monologue left me, and my mind was quiet and slow-moving, like water. I inhabited weird states; sometimes I would experience a rapid vibration between the state of ‘total loss of agency’ and ‘total agency over all things’. Sometimes I experienced pain as pleasure, and pleasure as pain, like a new singular sensation for which there were no words at all. Sometimes time came to me viscerally, like an object in front of me I could nearly see except it was in my body, rolling in this fast AND-THIS-AND-THIS motion, and I would be destroyed and created by it, like my being was stretched on either side and brought into existence by the flipping in between. I cried often.

I became sadistic. I’d previously been embracing a sort of masochism – education in the pain, fearlessness of eternal torture or whatever – but as my identity expanded to include that which was educating me, I found myself experiencing sadism. I enjoyed causing pain to myself, and with this I discovered evil. I found within me every murderer, torturer, destroyer, and I was shameless. As I prostrated myself on the floor, each nerve ending of my mind writhing with the pain of mankind, I also delighted in subjecting myself to it, in being it, in causing it. I became unified with it.

The evil was also subsumed by, or part of, love. Or maybe not “love” – I’d lost the concept of love, where the word no longer attached to a particular cluster of sense in my mind. The thing in its place was something like looking, where to understand something fully meant accepting it fully. I loved everything because I Looked at everything. The darkness felt good because I Looked at it. I was complete in my pain only when I experienced the responsibility for inducing that pain.

Comments (142 in total; some longer comments below are truncated)

Experimental Two-Axis Voting: "Overall" & "Agreement"

The LW team has spent the last few weeks developing alternative voting systems. We've enabled two-axis voting on this post. The two dimensions are:

  • Overall: what is your overall feeling about the comment? Does it contribute positively to the conversation? Do you want to see more comments like this?
  • Agreement: do you agree with the position of this comment?

Separating these out allows for you to express more nuanced reactions to comments such as "I still disagree with what you're arguing for, but you've raised some interesting and helpful points" and "although I agree with what you're saying, I think this is a low-quality comment".

Edited to Add: I checked with Jessica first whether she was happy for us to try this experiment with her post.

A few notes:

  • This will be one experiment among several. This is an experiment so bugs are possible. We're interested in what effect this has on the quality of conversation, what the experience of voting in this system is like, and what the experience of skimming a thread and seeing these scores is like.
  • Agreement-votes and related code will not necessarily be kept forever, if we don't go with that as the overall voting system for LW. Agreement-votes and agreement-scores will be kept at least for as long as any thread using that voting system is active.
  • GreaterWrong, and some areas of the site which aren't especially integrated with the two-axis voting, will show only the overall score, not the agreement scores. Sorting is by overall score and only the overall score affects user karma.
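
As a minimal illustration of how the two scores behave (a sketch using made-up comment objects, not the actual LessWrong implementation): sorting and user karma depend only on the overall score, while the agreement score is displayed but affects neither.

    // Hypothetical comment data; both axes are stored but play different roles.
    const comments = [
      { author: "alice", overall: 12, agreement: -3 },
      { author: "bob",   overall: 5,  agreement: 20 },
    ];

    // Threads are sorted by the overall score only.
    comments.sort((a, b) => b.overall - a.overall);

    // User karma accumulates from the overall score only; agreement is ignored.
    const karma = {};
    for (const c of comments) {
      karma[c.author] = (karma[c.author] || 0) + c.overall;
    }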

Feedback: I intuitively expected the first/left vote to be "agree/disagree" and the second/right vote to be "compliance with good standards."  In reality, it's closer to the reverse of that.  Not sure how typical my experience will be.

(In my imagination, a user goes "I like this" followed by "...but it was sketchy from an epistemic standpoint" or similar.)

jefftk (+9):
I think if you swapped them, at least at this stage, you would have a bunch of people who accidentally indicated agreement because they thought they were normal voting
Pattern (+3):
Yeah. Normal voting could have been left as is, with two buttons that indicate those two things. If something had an extreme score via voting, but didn't score strongly (or in the same direction) via the other two, then voting would be capturing something else. One of the issues with these things (like 'agree') - whatever that refers to (i.e. agree with what?), is that the longer, and more parts, a comment has, the less a single score captures that well, for any dimension.
Pattern (+2):
"This thread": The comments on this post in particular are a great example of this. Lots of people taking pieces of the post and going 'I disagree with this' or 'this is not true'.
Viliam (+4):
I am confused. I guess the first vote means "whatever my vote would be under the old system". The second vote... I am not sure how to apply it e.g. to your comment. If I click "agree", what does it mean?

  • I agree that Duncan intuitively expected the first/left vote to be agree/disagree, and the second/right vote to be good standards? [yes]
  • It also seems to me that the first/left vote is agree/disagree, and the second/right vote is good standards? [no]

I guess I am just going to use the first vote as usual (now if you vote "agree" on this comment, does it mean "yes, I believe this is exactly what Viliam will do" or "yes, I will do the same thing"?), and the second one only in situations that seem unambiguous.
jefftk (+8):
I've been doing a lot of 'overall' voting based on "all things considered, am I happy this comment exists?" or "would I like to see more comments like this in the future?" and 'agreement' voting on specifically "do I endorse its contents?" For a non-charged example, I upvoted Duncan's comment suggesting that the buttons be swapped, because I think that a good kind of feedback to give on an experiment, and voted disagree on it because I don't think swapping would have the effect he thinks it would have.
jefftk (+6):
Another example: if two people are having a back and forth where they seem to remember different things, I'll normal vote for both of them because I'm glad they're hashing it out, but I won't agree/disagree with any of them because I don't have any inside information on what happened.

Cool experiment. A note: I just clicked "agree" to a comment and noticed that it gave two points, and was somewhat surprised (with a bad valence). Maybe it makes sense, but somehow I expected the agree thing to mean literally "this many people clicked agree". (Haven't thought about it, just a reaction.)

Kaj_Sotala (+5):
I think that makes sense but on the other hand both vote counts being directly comparable seems good

Just some immediate feedback on this -- There is a big noticeable phenomenon, which is that I agree and disagree with many comments, even though I frequently think the comment is just "OK", so I am making many more "agreement" votes than "overall" votes.

countingtoten (+5):
Really! I just encountered this feature, and have been more reluctant to agree than to upvote. Admittedly, the topic has mostly concerned conversations which I didn't hear.

This post discusses suicides and psychosis of named people. I think it's an inappropriate place to experiment with a new voting system. I think you could choose a less weighty post for initial experiments.

Also, I don't get the impression that this experiment was done with jessicata's explicit agreement, and I'm worried that this post is being singled out for special treatment because of its content.

We did ask Jessica first whether she would want to participate in the experiment. Also, the reason why we want to experiment with additional voting systems is because specifically threads like this seem like they kind of tend to go badly using our current voting system, and switching it out is an attempt at making them go better, in a way that I think Jessica also wants.

Yoav Ravid (+9):
The agreement box got a bit excited :)
Yoav Ravid (+8):
Recording my reaction to the new system: I looked back at the comment and saw it got a downvote on agreement and got that small twinge of negative affect that I sometimes get from seeing a downvote on my own comment, then realized it probably just means that person doesn't have the same bug, and it passed. It would be interesting to see if this instinct changes after some time getting used to this system.
Viliam (+8):
I try to imagine the two numbers as parts of a complex number, like "37+8i". From that perspective, both "37+8i" and "37-8i" feel positive.
jefftk (+8):
I also see the CSS as a bit wonky, at least on mobile, though not as wonky as that. I see the agreement box as about one pixel higher than the overall box.
Adam Zerner (+7):
Yay for experimenting!
jefftk (+6):
I was noticing different users having different patterns for upvotes versus agreement (partly because mine seemed to be skewed toward agreement…) and I wanted to play with it a little more. Here's a script that extracts the votes from this page. Expand all comments (⌘F) before running.

    // Tally per-user comment counts and vote totals from the rendered page.
    function author(meta) { return meta.children[1].innerText; }
    function votes(meta) { return meta.children[3].innerText.split("\n"); }

    const metas = document.getElementsByClassName("CommentsItem-meta");
    const output = {};
    for (let i = 0; i < metas.length; i++) {
      const meta = metas[i];
      output[author(meta)] = output[author(meta)] || {
        comments: 0,
        upvotes: 0,
        agreement: 0,
      };
      output[author(meta)].comments++;
      output[author(meta)].upvotes += Number(votes(meta)[0]);
      output[author(meta)].agreement += Number(votes(meta)[1]);
    }

Here's the total agreement:upvotes ratio for this thread, by user:

    -13:7   Alastair JL
    7:-15   Liam Donovan
    -5:35   Martin Randall
    0:18    Quintin Pope
    0:3     countingtoten
    0:4     Pattern
    0:6     bn22
    1:15    Yoav Ravid
    2:27    Benquo
    2:26    Scott Alexander
    32:211  jessicata
    6:35    TekhneMakre
    21:95   So8res
    8:32    Aella
    7:27    Raemon
    23:76   ChristianKl
    20:56   Ruby
    8:22    romeostevensit
    19:51   cata
    9:24    tailcalled
    65:160  Duncan_Sabien
    40:93   Eliezer Yudkowsky
    7:16    Bjartur Tómas
    43:97   T3t
    10:22   Viliam
    38:64   habryka
    85:143  jimrandomh
    23:35   Davis_Kingsley
    2:3     Joe_Collman
    11:14   Veedrac
    2:2     [comment deleted]
    43:32   adamzerner
    76:56   jefftk
    10:5    Joe Rocca
    4:2     Thomas Kehrenberg

There's a model-fragment that I think is pretty important to understanding what's happened around Michael Vassar, and Scott Alexander's criticism.

Helping someone who is having a mental break is hard. It's difficult for someone to do for a friend. It's difficult for professionals to do in an institutional setting, and I have tons of anecdotes from friends and acquaintances, both inside and outside the rationality community, of professionals in institutions fucking up in ways that were traumatizing or even abusive. Friends have some natural advantages over institutions: they can provide support in a familiar environment instead of a prison-like environment, and make use of context they have with the person.

When you encounter someone who's having a mental break or is giving off signs that they're highly stressed and at risk of a mental break, the incentivized action is to get out of the radius of blame (see Copenhagen Interpretation of Ethics). I think most people do this instinctively. Attempting to help someone through a break is a risky and thankless job; many more people will hear about it if it goes badly than if it goes well. Anyone who does it repeatedly will probably find name... (read more)

So in general I'm noticing a pattern where you make claims about things that happened, but it turns out those things didn't happen, or there's no evidence that they happened and no reason one would believe they did a priori, and you're actually just making an inference and presenting as the state of reality.  These seem to universally be inferences which cast other's motives or actions in a negative light.  They seem to be broadly unjustified by the provided evidence and surrounding context, or rely on models of reality (both physical and social) which I think are very likely in conflict with the models held by the people those inferences are about.  Sometimes you draw correspondences between the beliefs and/or behaviors of different people or groups, in what seems like an attempt to justify the belief/behavior of the first, or to frame the second as a hypocrite for complaining about the first (though you don't usually say why these comparisons are relevant).  These correspondences turn out to only be superficial similarities while lacking any of the mechanistic similarities that would make them useful for comparisons, or actually conceal the fact that the two p... (read more)

As far as I can tell, this is almost entirely unsubstantiated, with the possible exception of Maia, and in that case it would have been Ziz’s circle doing the concealment, not any of the individuals you express specific concerns about.

I mentioned:

  1. I did it too (there are others like me).
  2. Ziz labeling it as an infohazard is in compliance with feedback Ziz has received from community leaders.
  3. People didn't draw attention to Fluttershy's recent blog post, even after I posted about it.

The way this is written makes it sound like you think that it ought to have been a (relatively) predictable consequence.

Whether or not it's predictable ahead of time, the extended account I give shows the natural progression of thought.

In theory, the problems you experienced could have come from sources other than your professional environment. That is a heck of a missing middle.

Even if there were other causes I still experienced these problems at MIRI. Also, most of the post is an argument that the professional environment contributed quite a lot.

I don’t know what Michael’s views on the subject actually are, but on priors I’m extremely skeptical that the correspondence is sufficient to ma

... (read more)
ChristianKl (+9):
It was narratized that way by Ziz; many people have chosen to be skeptical of claims Ziz makes, and there was no way to get an independent source.
Viliam (+9):
Also, conflict between hemispheres seems to be an important topic on Ziz's blog (example). Yes, but I have never seen them in the context of "one hemisphere extorting the other by precommitting to suicide" on LessWrong. That sounds to me uniquely Zizian.
RobertM (+5):
I appreciate that you took the time to respond to my post in detail.  I explained at the top why I had a difficult time engaging productively with your post (i.e. learning from it).  I did learn some specific things, such as the claimed sequence of events prior to Maia's suicide, and Nate's recent retraction of his earlier public statement on OpenAI.  Those are things which are either unambiguously claims about reality, or have legible evidence supporting them.

None of these carry the same implication that the community, centrally, was engaging in the claimed concealment.  This phrasing deflects agency away from the person performing the action, and on to community leaders: "Ziz labeling it as an infohazard is in compliance with feedback Ziz has received from community leaders."  "Ziz labeled something that might have contributed to Maia's suicide an infohazard, possibly as a result of feedback she got from someone else in the community well before she shared that information with Maia" implies something very different from "Many others have worked to conceal the circumstances of their deaths", which in context makes it sound like an active conspiracy engaged in by central figures in the community.  People not drawing attention to Fluttershy's post, even after you posted about it, is not active concealment.

Your original claim included the phrase "the only possible alternative hypothesis", so this seems totally non-responsive to my problem with it.

Again, this seems non-responsive to what I'm saying is the issue, which is that the "general view" is more or less useless for evaluating how much in "agreement" they really are, as opposed to the specific details.

That's good to know, thanks.  I think it would make your point here much stronger and more legible if those specific details were included in the original claim.

I agree that, as presented, adopting those object-level beliefs would seem to more naturally lend itself to a conflict theory view
jessicata (+6):
Not going to respond to all these, a lot seem like nitpicks.

My other point was that the problems were still experienced "at MIRI" even if they were caused by other things in the social environment. Edited.

  1. Nate implied he had already completed the assignment he was giving me.
  2. The assignment wouldn't provide evidence about whether the pieces to make AGI are already out there unless it was "workable" in the sense that iterative improvement with more compute and theory-light technique iteration would produce AGI.

Edited to make it clear that they disagree. The agreement is relevant to place a bound on the scope of what they actually disagree on.

She might have guessed based on Ziz's utilitarian futurism (this wouldn't require knowing many specific details), or might not have been thinking about that consciously. It's more likely she was trying to control Ziz (she has admitted to generally controlling people around CFAR by e.g. hoarding info).

I think my general point is that people are trying to memetically compete with each other in ways that involve labeling others "net negative" in a way that people can very understandably internalize and which would lead to suicide. It's more like a competition to drive each other insane than one to directly kill each other. A lot of competition (e.g. the kind that would be predicted by evolutionary theory) is subconscious and doesn't indicate legal responsibility. Anyway, I edited to make it clearer that many of the influences in question are subconscious and/or memetic.

I predict that they would say that having some philosophical thoughts about negative utilitarianism and related considerations would be part of their job, and that AI torture scenarios are relevant to that, although perhaps not something they would specifically need to think about. Edited to make this clearer.

They're highly related, having a working AGI design is an argument for short timelines.

Sure, I mentioned it as a consideration other tha
ChristianKl (+9):
Michael was accused in the comment thread of the other post of seeking out people who are on the schizophrenic spectrum. Michael, to the extent that I know, seems to believe that those people have "greater mental modeling and verbal intelligence" and that this makes them worth spending time with. Neither my own conversations with him nor any evidence anyone provided show him to believe that it's a good idea to attempt to induce sub-clinical schizotypal states in people.

Status: writing-while-frustrated. As with the last post, many of Jessica's claims seem to me to be rooted in truth, but weirdly distorted. (Ever since the end of Jessica's tenure at MIRI, I have perceived a communication barrier between us that has the weird-distortion nature.)

Meta: I continue to be somewhat hesitant to post stuff like this, on the grounds that it sucks to air dirty laundry about your old employer and then have your old employer drop by and criticize everything you said. I’ve asked Jessica whether she objects to me giving a critical reply, and she said she has no objections, so at least we have that. I remain open to suggestions for better ways to navigate these sorts of situations.

Jessica, I continue to be sad about the tough times you had during the end of your tenure at MIRI, and in the times following. I continue to appreciate your research contributions, and to wish you well.

My own recollections follow. Note that these are limited to cases where Jessica cites me personally, in the interest of time. Note also that I'm not entirely sure I've correctly identified the conversations she's referring to, due to the blurring effects of the perceived distortion, and of... (read more)

Thanks for reading closely enough to have detailed responses and trying to correct the record according to your memory. I appreciate that you're explicitly not trying to disincentivize saying negative things about one's former employer (a family member of mine was worried about my writing this post on the basis that it would "burn bridges").

A couple general points:

  1. These events happened years ago and no one's memory is perfect (although our culture has propaganda saying memories are less reliable than they in fact are). E.g. I mis-stated a fact about Maia's death, that Maia had been on Ziz's boat, based on filling in the detail from the other details and impressions I had.

  2. I can't know what someone "really means"; I can only know what they say and what the most reasonable apparent interpretations are.  I could have asked more clarifying questions at the time, but that felt expensive due to the stressful dynamics the post describes.

In terms of more specific points:

(And I have a decent chunk of probability mass that Jessica would clarify that she’s not accusing me of intentional coercion.) From my own perspective, she was misreading my own frame and feeling pressured into it desp

... (read more)

(a) Anna discouraging researchers from talking with Michael

...

...I specifically remember hearing about the policy at a meeting in a top-down fashion...it seems that not everyone remembers this policy...I must have been interpreting something this way because I remember contesting it.

...

...I also had a conversation with Anna Salamon where she said our main disagreement was about whether bad faith should be talked about...

Just a note on my own mental state, reading the above:

Given the rather large number of misinterpretations and misrememberings and confusions-of-meaning in this and the previous post, along with Jessica quite badly mischaracterizing what I said twice in a row in a comment thread above, my status on any Jessica-summary (as opposed to directly quoted words) is "that's probably not what the other person meant, nor what others listening to that person would have interpreted that person to mean."

By "probably" I literally mean strictly probably, i.e. a greater than 50% chance of misinterpretation, in part because the set of things-Jessica-is-choosing-to-summarize is skewed toward those she found unusually surprising or objectionable.

If I were in Jessica's shoes, I would by this point be replacing statements like "I had a conversation with Anna Salamon where she said X" with "I had a conversation with Anna Salamon where she said things which I interpreted to mean X" as a matter of general policy, so as not to be misleading-in-expectation to readers.

This is quite a small note, but it's representative of a lot of things that tripped me up in the OP, and might be relevant to the weird distortion:

> Jessica said she felt coerced into a frame she found uncomfortable

I note that Jessica said she was coerced.

I suspect that Nate-dialect tracks meaningful distinctions between whether one feels coerced, whether one has evidence of coercion, whether one has a model of coercive forces which outputs predictions that closely resemble actual events, whether one expects that a poll of one's peers would return a majority consensus that [what happened] is well-described by the label [coercion], etc.

By default, I would have assumed that Jessica-dialect tracks such distinctions as well, since such distinctions are fairly common in both the rationalsphere and (even moreso) in places like MIRI.

But it's possible that Jessica was not, with the phrase "I was coerced," attempting to convey the strong thing that would be meant in Nate-dialect by those words, and was indeed attempting to convey the thing you (automatically?  Reflexively?) seem to have translated it to: "I felt coerced; I had an internal experience matching that of being coerced [w... (read more)

Extracting and signal boosting this part from the final blog post linked by Winterford:

One time when I was being sexually assaulted after having explicitly said no, a person with significant martial arts training pinned me to the floor. ... name is Storm.

I had not heard this accusation before, and do not know whether it was ever investigated. I don't think I've met Storm, but I'm pretty sure I could match this nickname to the legal name of someone in the East Bay by asking around. Being named as a rapist in the last blog post of someone who later committed suicide is very incriminating, and if this hasn't been followed up it seems important to do so.

Disclaimer: I currently work for MIRI in a non-technical capacity, mostly surrounding low-level ops and communications (e.g. I spent much of the COVID times disinfecting mail for MIRI employees).  I did not overlap with Jessica and am not speaking on behalf of MIRI.

I'm having a very hard time with the first few thousand words here, for epistemic reasons.  It's fuzzy and vague in ways that leave me feeling confused and sleight-of-handed and motte-bailey'd and 1984'd.  I only have the spoons to work through the top 13-point summary at the moment.  

I acknowledge here, and will re-acknowledge at the end of this comment, that there is an obvious problem with addressing only a summary; it is quite possible that much of what I have to say about the summary is resolved within the larger text.

But as Jessica notes, many people will only read the summary and it was written with those people in mind.  This makes it something of a standalone document, and in my culture would mean that it's held to a somewhat higher standard of care; there's a difference between points that are just loosely meant to gesture at longer sections, and points which are known to be [the whole ... (read more)

I want to endorse this as a clear and concise elucidation of the concerns I laid out in my comment, which are primarily with the mismatch between what the text seems to want me to believe and what conclusions are actually valid given the available information.

ChristianKl (+2):
It seems to me that you can't expect a summary to make the claims as detailed as possible. You don't criticize scientific papers because their abstract doesn't fully prove the claims it makes; that's what you have the full article for.
[DEACTIVATED] Duncan Sabien (+7):
Just noting that this was explicitly addressed in a few places in my comment, and I believe I correctly compensated for it/took this truth into account.  "Make the claims in the summary as detailed as possible" is not what I was recommending.
jessicata (+1):
If you didn't read the post and are complaining that the short summary didn't contain the details that the full post contained, then... I don't know how to respond. It's equivalent to complaining that the intro paragraph of an essay doesn't prove each sentence it states. With respect to the criticism of the post body: Yes, "this person was wrong/lying so the rumor was wrong" is an alternative, but I assigned low probability to it (in part due to a subsequent conversation with a MIRI person about this rumor), so it wasn't the most obvious alternative.

If you didn't read the post and are complaining that the short summary didn't contain the details that the full post contained, then

That is very explicitly a strawman of what I am objecting to.  As in: that interpretation is explicitly ruled out, multiple times within my comment, including right up at the very top, and so you reaching for it lands with me as deliberately disingenuous.

What I am objecting to is lots and lots and lots of statements that are crafted to confuse/mislead (if not straightforwardly deceive).

jessicata (+3):
Okay, I can respond to the specific intro paragraph talking about this. I don't expect people who only read the summary to automatically believe what I'm saying with high confidence. I expect them to believe they have an idea of what I am saying. Once they have this idea, they can decide to investigate or not investigate why I believe these things. If they don't, they can't know whether these things are true. Maybe it messes with people's immune systems by being misleading... but how could you tell the summary is misleading without reading most of the post? Seems like a circular argument.

It's not a circular argument.  The summary is misleading in its very structure/nature, as I have detailed above at great length.  It's misleading independent of the rest of the post.

Upon going further and reading the rest of the post, I confirmed that the problems evinced by the summary, which I stated up-front might have been addressed within the longer piece (so as not to mislead or confuse any readers of my comment), in fact only get worse.

This is not a piece which visibly tries to, or succeeds at, helping people see and think more clearly.  It does the exact opposite, in service of ???

I would be tempted to label this a psy-op, if I thought its confusing and manipulative nature was intentional rather than just something you didn't actively try not to do.

jessicata (+1):
Here's an example (Claim 0): The rest of the paragraph says some parts of how I was coerced, e.g. I was discouraged from engaging with critics of the frame and from publishing my own criticisms. If you keep reading you see that I heard about the possibility of assassination. The suicides are also worrying, although the causality on those is unclear. Maybe this isn't a particularly strong argument you gave for the summary being misleading. If so I'd want to know which you think are particularly strong so I don't have to refute a bunch of weak arguments.

"I'd want to know which arguments you think are particularly strong so I don't have to refute a bunch of weak ones" is my feeling, here, too.

Would've been nice if you'd just stated your claims instead of burying them in 13000 words of meandering, often misleading, not-at-all-upfront-about-epistemic-status insinuation.  I'm frustrated because your previous post received exactly this kind of criticism, and that criticism was highly upvoted, and you do not seem to have felt it was worth adjusting your style.

EDIT: A relevant term here is "gish gallop."

What I am able to gather from the OP is that you believe lots of bad rumors when you hear them, use that already-negative lens to adversarially interpret all subsequent information, get real anxious about it, and ... think everyone should know this?

-1jessicata2y
This is a double bind. If I state the claims in the summary I'm being misleading by not providing details or evidence for them close to the claims themselves. If I don't then I'm doing a "gish gallop" by embedding claims in the body of the post. The post as a whole has lots of numbered lists that make most of the claims I'm making pretty clear.

It's not a double bind, and my foremost hypothesis is now that you are deliberately strawmanning, so as to avoid addressing my real point.

Not only did I highlight two separate entries in your list of thirteen that do the thing properly, I also provided some example partial rewrites of other entries, some of which made them shorter rather than longer.

The point is not that you need to include more and more detail, and it's disingenuous to pretend that's what I'm saying.  It's that you need to be less deceptive and misleading.  Say what you think you know, clearly and unambiguously, and say why you think you know it, directly and explicitly, instead of flooding the channel with passive voice and confident summaries that obscure the thick layer of interpretation atop the actual observable facts.

[After writing this comment, I realized that maybe I'm just missing what's happening altogether, since maybe I read the post in a fairly strongly "sandboxed" way, so I'm failing to empathize with the mental yanks. That said, maybe it has some value.]

FWIW, my sense (not particularly well-founded?) isn't that jessicata is deliberately strawmanning here, but that she isn't getting your point or doesn't agree with it.

You write above:

It's more that, if you can't say something more clear and less confusing (/outright misleading) than something like this, then I think you should not include any such sentence at all.

This is sort of mixing multiple things together: there's the clarity/confusingness, and then there's the slant/misleadingness. These are related, in that one can mislead more easily when one is being unclear/ambiguous.

You write:

Say what you think you know, clearly and unambiguously, and say why you think you know it, directly and explicitly, instead of flooding the channel with passive voice and confident summaries that obscure the thick layer of interpretation atop the actual observable facts.

Some of your original criticisms read, to me, more like asking a bunch of questions about details (w... (read more)

(Meta: I'm talking out loud about a bunch of stuff re: Jessica's epistemics that I'd normally consider a bit bad form to talk about out loud, but Jessicata seems to prefer having the conversation in public, laying everything out clearly.)

Something I've been feeling while watching the discussion here that I want to comment on (this is a bit off the cuff):

  1. I share several commenters' feedback that Jessica's account is smuggling in assumptions and making wrong inferences about what people were communicating (or trying to communicate)
  2. I think the people responding to this post (and the previous one) are also doing so from a position of defensiveness*, and I don't think their reactions have been uniformly fair.

For example of #2, while I agreed with the thrust of 'it seems misleading to leave out Vassar', I thought Scott A's comment on the previous post made assumptions about the connection between Vassar and Ziz, and presented those assumptions in an overconfident tone. I also thought Logan's comment here is doing a move of "focus-handling-their-way-towards understanding" in a way that seems totally legitimate as a way to figure out what's up, but which doing in public ends up creating a cloud of po... (read more)

**my current belief is that Jessica is doing kinda a combination of "presenting things in a frame that looks pretty optimized as a confusing, hard-to-respond-to social attack", which at first looked disingenuous to me. I've since seen her respond to a number of comments in a way that looks concretely like it's trying to figure stuff out, update on new information, etc, without optimizing for maintaining the veiled social attack. My current belief is that there's still some kind of unconscious veiled social attack going on

Seconded (though I think "pretty optimized" is too strong).

that Jessica doesn't have full conscious access to; it looks too optimized to be an accident. But I don't know that Jessica, from within her current epistemic state, should agree with me.

My wild, not well-founded guess is that Jessica does have some conscious access to this and could fairly easily say more about what's going on with her (and maybe has and I / we forgot?), along the lines of "owning" stuff (and that might help people hear her better by making possible resulting conflicts and anti-epistemology more available to talk about). I wonder if Jessica is in / views herself as in a conflict, such tha... (read more)

For some context:

  • I got a lot of the material for this by trying to explain what I experienced to a "normal" person who wasn't part of the scene while feeling free to be emotionally expressive (e.g. by screaming). Afterwards I found a new "voice" to talk about the problems in an annoyed way. I think this was really good for healing trauma and recovering memories.

  • I have a political motive to prevent Michael from being singled out as the person who caused my psychosis since he's my friend. I in fact don't think he was a primary cause, so this isn't inherently anti-epistemic, but it likely caused me to write in a more lawyer-y fashion than I otherwise would. (Michael definitely didn't prompt me to write the first draft of the document, and only wrote a few comments on the post.)

  • I've been working on this document for 1-2 weeks, doing a rolling release where I add more people to the document. It's been somewhat stressful getting the memories/interpretations into written form without making false/misleading/indefensible statements along the way, or unnecessarily harming the reputations of people whose reputations I care about.

  • Some other people helped me edit this. I i

... (read more)

Overall it seems like people are paying much, much more attention to the quality of my rhetoric than the subject matter the post is about

Just to be clear, I'm paying attention to the quality of your rhetoric because I cannot tell what the subject matter is supposed to be.

Upon being unable to actually distill out a set of clear claims, I fell back onto "okay, well, what sorts of conclusions would I be likely to draw if I just drank this all in trustingly/unquestioningly/uncritically?"

Like, "observe the result, and then assume (as a working hypothesis, held lightly) that the result is what was intended."

And then, once I had that, I went looking to see whether it was justified/whether the post presented any actual reasons for me to believe what it left sandbox-Duncan believing, and found that the answer was basically "no."

Which seems like a problem, for something that's 13000 words long and that multiple people apparently put a lot of effort into.  13000 words on LessWrong should not, in my opinion, have the properties of:

a) not having a discernible thesis

b) leaving a clear impression on the reader

c) that impression, upon revisit/explicit evaluation, seeming really quite false


I t... (read more)

9TekhneMakre2y
What, if any, are your (major) political motives regarding MIRI/CFAR/similar?

I really liked MIRI/CFAR during 2015-2016 (even though I had lots of criticisms) and I think I benefited a lot overall; I think things got bad in 2017 and haven't recovered. E.g. MIRI has had many fewer good publications since 2017, and for reasons I've expressed, I don't believe their private research is comparably good to their previous public research. (Maybe to some extent I got disillusioned so I'm overestimating how much things changed; I'm not entirely sure how to disentangle.)

As revealed in my posts, I was a "dissident" during 2017, confusedly/fearfully trying to learn and share critiques, gather people into a splinter group, etc, so there's somewhat of a legacy of a past conflict affecting the present, although it's obviously less intense now, especially now that I can write about it.

I've noticed people trying to "center" everything around MIRI, justifying their actions in terms of "helping MIRI" etc (one LW mod told me and others in 2018 that LessWrong was primarily a recruiting funnel for MIRI, not a rationality promotion website, and someone else who was in the scene 2016-2017 corroborated that this is a common opinion), and I think this is pretty bad since they have no w... (read more)

I found this a very helpful and useful comment, and resonate with various bits of it (I also think I disagree with a good chunk of it, but a lot of it seems right overall).

6jessicata2y
I'm curious which parts resonate most with you (I'd ordinarily not ask this because it would seem rude, but I'm in a revealing-political-motives mood and figure the actual amount of pressure is pretty low).

I share the sense that something pretty substantial changed with MIRI in ~2017 and that something important got lost when that happened. I share some of the sense that people's thinking about timelines is confused, though I do think overall pretty short timelines are justified (though mine are on the longer end of what MIRI people tend to think, though much shorter than yours, IIRC). I think you are saying some important things about the funding landscape, and have been pretty sad about the dynamics here as well, though I think the actual situation is pretty messy and some funders are really quite pro-critique, and some others seem to me to be much more optimizing for something like the brand of the EA-coalition.

I feel like this topic may deserve a top-level post (rather than an N-th level comment here).

EDIT: I specifically meant the "MIRI in ~2017" topic, although I am generally in favor of extracting all other topics from Jessica's post in a way that would be easier for me to read.

5TekhneMakre2y
Thanks, this is great (I mean, it clarifies a lot for me).
7TekhneMakre2y
(This is helpful context, thanks.)

I definitely have a strong sense reading this post that "those environmental conditions would not cause any problems to me" and I am trying to understand whether this is true or not, and if so, what properties of a person make them susceptible to the problems.

Do you have any perception about that? I wonder things like:

  • Would you circa 2010 have been able to guess that you were susceptible to this level of suffering, if put in this kind of environment?
  • What proportion of, say, random intellectually curious graduate students do you think would suffer this way if put into this environment?
  • Do you have some sense of what psychological attributes made you susceptible, or advice to others about how to be less susceptible?

I have a lot of respect for what I know of you and your work and I'm sorry this happened.

(clarification edit: I have some sympathy for why it could be good to have an intellectual environment like this, so if my comment seems to be implying a perspective of "would it be possible to have it without people suffering", that's why.)

Would you circa 2010 have been able to guess that you were susceptible to this level of suffering, if put in this kind of environment?

I would have had difficulty imagining "this kind of environment". I would not have guessed that an outcome like this was likely on an outside view, I thought of myself as fairly mentally resilient.

What proportion of, say, random intellectually curious graduate students do you think would suffer this way if put into this environment?

30%? It's hard to guess, and hard to say what the average severity would be. Some people take their jobs less seriously than others (although, MIRI/CFAR encouraged people to take their job really seriously, what with the "actually trying" and "heroic responsibility" and all). I think even those who didn't experience overtly visible mental health problems would still have problems like declining intellectual productivity over time, and mild symptoms e.g. of depression.

Do you have some sense of what psychological attributes made you susceptible, or advice to others about how to be less susceptible?

Not sure, my family had a history of bipolar, and I had the kind of scrupulosity issues that were common in EAs, w... (read more)

I would very much like to encourage people to not slip into the "MIRI/CFAR as one homogenous social entity" frame, as detailed in a reply to the earlier post.

I think it's genuinely misleading and confusion-inducing, and that the kind of evaluation and understanding that Jessica is hoping for (as far as I can tell) will benefit from less indiscriminate lumping-together-of-things rather than more.

Even within each org—someone could spend 100 hours in conversation with one of Julia Galef, Anna Salamon, Val Smith, Pete Michaud, Kenzie Ashkie, or Dan Keys, and accurately describe any of those as "100 hours of close interaction with a central member of CFAR," and yet those would be W I L D L Y different experiences, well worth disambiguating.

If someone spent 100 hours of close interaction with Julia or Dan or Kenzie, I would expect them to have zero negative effects and to have had a great time.

If someone spent 100 hours of close interaction with Anna or Val or Pete, I would want to make absolutely sure they had lots of resources available to them, just in case (those three being much more head-melty and having a much wider spread of impacts on people).

It's not only that saying something ... (read more)

If someone spent 100 hours of close interaction with Julia or Dan or Kenzie, I would expect them to have zero negative effects and to have had a great time.

If someone spent 100 hours of close interaction with Anna or Val or Pete, I would want to make absolutely sure they had lots of resources available to them just in case (those three being much more head-melty and having a much wider spread of impacts on people)

As a complete outsider who stumbled upon this post and thread, I find it surprising and concerning that there's anyone at MIRI/CFAR with whom spending a few weeks might be dangerous, mental-health-wise.

Would "Anna or Val or Pete" (I don't know who these people are) object to your statement above? If not, I'd hope they're concerned about how they are negatively affecting people around them and are working to change that. If they have this effect somewhat consistently, then the onus is probably on them to adjust their behavior.

Perhaps some clarification is needed here - unless the intended and likely readers are insiders who will have more context than me.

(Edited to make top quote include more of the original text - per Duncan's request)

The OP cites Anna's comment where she talked about manipulating people. 

Small nitpicky request: would you be willing to edit into your quotation the part that goes "just in case (those three being much more head-melty and having a much wider spread of impacts on people)"?

Its excision changes the meaning of my sentence, into something untrue.  Those words were there on purpose, because without them the sentence is misleadingly alarming.

6Joe Rocca2y
Fair point! Done. It is still concerning to me (of course, having read your original comment), but I can see how it may have misled others who were skimming.

FWIW, it is concerning to me, too, and was at least a little bit a point of contention between me and each of those three while we were colleagues together at CFAR, and somewhat moreso after I had left.  But my intention was not to say "these people are bad" or "these people are casually dangerous."  More "these people are heavy-hitters when it comes to other people's psychologies, for better and worse."

I have a decently strong sense that I would end up suffering from similar mental health issues. I think it has a lot to do with a tendency to Take Ideas Seriously. Or, viewed less charitably, having a memetic immune disorder.

X-risk and, a term that is new to me, s-risk, are both really bad things. They're also quite plausible. Multiplying how bad they are by how likely they are, I think the rational feeling is some form of terror. (In some sense of the term "rational". Too much terror of course would get in the way of trying to fix it, and of living a happy life.) It reminds me of how in HPMoR, everyone's patronus was an animal, because death is too much for a human to bear.

What proportion of, say, random intellectually curious graduate students do you think would suffer this way if put into this environment?

This seems like the sort of thing that we would have solid data on at this point. Seems like it'd be worth it for e.g. MIRI to do an anonymous survey. If the results indicate a lot of suffering, it'd probably be worth having some sort of mental health program, if only for the productivity benefits. Or perhaps this is already being done.

Like the previous post, there's something weird about the framing here that makes me suspicious of this. It feels like certain perspectives are being "smuggled in" -- for example:

Scott asserts that Michael Vassar thinks "regular society is infinitely corrupt and conformist and traumatizing". This is hyperbolic (infinite corruption would leave nothing to steal) but Michael and I do believe that the problems I experienced at MIRI and CFAR were not unique or unusually severe for people in the professional-managerial class. By the law of excluded middle, the only possible alternative hypothesis is that the problems I experienced at MIRI and CFAR were unique or at least unusually severe, significantly worse than companies like Google for employees' mental well-being.

This looks like a logical claim at first glance -- of course the only options are "the problems weren't unique or severe" or "the problems were unique and severe" -- but posing the matter this way conflates problems that you had as an individual ("the problems I experienced") with problems with the broader organization ("significantly worse... for employees' well-being"), which I do not think have been adequately established... (read more)

6jessicata2y
Looking over this again and thinking for a few minutes, I see why (a) the claim isn't technically false, and (b) it's nonetheless confusing.

Why (a): Let's just take a fragment of the claim: "the problems I experienced at MIRI and CFAR were not unique or unusually severe for people in the professional-managerial class. By the law of excluded middle, the only possible alternative hypothesis is that the problems I experienced at MIRI and CFAR were unique or at least unusually severe". This is straightforwardly true: either ¬(x>y), or x>y, where x is "how severe the problems I experienced at MIRI and CFAR were" and y is "how severe the problems for people in the professional-managerial class generally are".

Why (b): in context it's followed by a claim about regular society being infinitely corrupt etc; that would require y to be above some absolute threshold, z. So it looks like I'm asserting the disjunction (¬(x>y)∧y>z)∨x>y, which isn't tautological. So there's a misleading Gricean implicature. I'll edit to make this clearer.

In the previous post I said Ziz formed a "splinter group", in this post I said Ziz was "marginal" and has a "negative reputation among central Berkeley rationalists".
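To spell out the logical point, here is a minimal formalization (just an illustration using the x, y, z defined above; it is not taken from the original post):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Minimal sketch: the literal excluded-middle reading vs. the implicated reading.
\[
  \underbrace{\neg(x > y) \;\lor\; (x > y)}_{\text{excluded middle: a tautology}}
  \qquad\text{vs.}\qquad
  \underbrace{\bigl(\neg(x > y) \land (y > z)\bigr) \;\lor\; (x > y)}_{\text{the implicated reading: not a tautology}}
\]
% The right-hand disjunction is false when neither $x > y$ nor $y > z$ holds,
% i.e. when the MIRI/CFAR problems were no worse than the professional-managerial-class
% baseline and that baseline is itself below the severity threshold $z$.
\end{document}
```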

Thanks for this.

I've been trying to research and write something kind of like this giving more information for a while, but got distracted by other things. I'm still going to try to finish it soon.

While I disagree with Jessica's interpretations of a lot of things, I generally agree with her facts (about the Vassar stuff which I have been researching; I know nothing about the climate at MIRI). I think this post gives most of the relevant information mine would give. I agree with (my model of) Jessica that proximity to Michael's ideas (and psychedelics) was not the single unique cause of her problems but may have contributed.

The main thing I'd fight if I felt fighty right now is the claim that by not listening to talk about demons and auras MIRI (or by extension me, who endorsed MIRI's decision) is impinging on her free speech. I don't think she should face legal sanction for talking about these things, but I also don't think other people were under any obligation to take it seriously, including if she was using these terms metaphorically but they disagree with her metaphors or think she wasn't quite being metaphorical enough.

6Benquo2y
You wrote that talking about auras and demons the way Jessica did while at MIRI should be considered a psychiatric emergency. When done by a practicing psychiatrist this is an impingement on Jessica's free speech. You wrote this in response to a post that contained the following and only the following mentions of demons or auras:

  1. During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. [after Jessica had left MIRI]
  2. I heard that the paranoid person in question was concerned about a demon inside him, implanted by another person, trying to escape. [description of what someone else said]
  3. The weirdest part of the events recounted is the concern about possibly-demonic mental subprocesses being implanted by other people. [description of Zoe's post]
  4. As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR. [description of what other people said, and possibly an allusion to the facts described in the first quote, after she had left MIRI]
  5. While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible. (I noted at the time that there might be a sense in which different people have "auras" in a way that is not less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.)

Only the last one is a description of a thing Jessica herself said while working at MIRI. Like Jessica when she worked at MIRI, I too believe that people experiencing psychotic breaks sometimes talk about demons. Like Jessica when she worked at MI

You wrote that talking about auras and demons the way Jessica did while at MIRI should be considered a psychiatric emergency. When done by a practicing psychiatrist this is an impingement on Jessica's free speech. 

I don't think I said any talk of auras should be a psychiatric emergency; otherwise we'd have to commit half of Berkeley. I said that "in the context of her being borderline psychotic" ie including this symptom, they should have "[told] her to seek normal medical treatment". Suggesting that someone seek normal medical treatment is pretty different from saying this is a psychiatric emergency, and hardly an "impingement" on free speech. I'm kind of playing this in easy mode here because in hindsight we know Jessica ended up needing treatment; I feel like this makes it pretty hard to make it sound sinister when I suggest this.

You wrote this in response to a post that contained the following and only the following mentions of demons or auras:

"During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation..." [followed by several more things a

... (read more)

I said that “in the context of her being borderline psychotic” ie including this symptom, they should have “[told] her to seek normal medical treatment”. Suggesting that someone seek normal medical treatment is pretty different from saying this is a psychiatric emergency, and hardly an “impingement” on free speech.

It seems like you're trying to walk back your previous claim, which did use the "psychiatric emergency" term:

Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she was borderline psychotic, and comparing this to Leverage, who she thinks did a better job by promoting an environment where people accepted these ideas. I think MIRI was correct to be concerned and (reading between the lines) telling her to seek normal medical treatment, instead of telling her that demons were real and she was right to worry about them, and I think her disagreement with this is coming from a belief that psychosis is potentially a form of useful creative learning. While I don’t want to assert that I am 100% sure this can never be true, I think it’s true rarely enough, and with enough downside risk, that treating it

... (read more)

Sorry, yes, I meant the psychosis was the emergency. Non-psychotic discussion of auras/demons isn't.

I'm kind of unclear what we're debating now. 

I interpret us as both agreeing that there are people talking about auras and demons who are not having psychiatric emergencies (eg random hippies, Catholic exorcists), and they should not be bothered, except insofar as you feel like having rational arguments about it. 

I interpret us as both agreeing that you were having a psychotic episode, that you were going further / sounded less coherent than the hippies and Catholics, and that some hypothetical good diagnostician / good friend should have noticed that and suggested you seek help.

Am I right that we agree on those two points? Can you clarify what you think our crux is?

Verbal coherence level seems like a weird place to locate the disagreement - Jessica maintained approximate verbal coherence (though with increasing difficulty) through most of her episode. I'd say even in October 2017, she was more verbally coherent than e.g. the average hippie or Catholic, because she was trying at all.

The most striking feature was actually her ability to take care of herself rapidly degrading, as evidenced by e.g. getting lost almost immediately after leaving her home, wandering for several miles, then calling me for help and having difficulty figuring out where she was - IIRC, took a few minutes to find cross streets. When I found her she was shuffling around in a daze, her skin looked like she'd been scratching it much more than usual, clothes were awkwardly hung on her body, etc. This was on either the second or third day, and things got almost monotonically worse as the days progressed.

The obvious cause for concern was "rapid descent in presentation from normal adult to homeless junkie". Before that happened, it was not at all obvious this was an emergency. Who hasn't been kept up all night by anxiety after a particularly stressful day in a stressful year?

I ... (read more)

I want to specifically highlight "A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work." I noticed this second-hand at the time, but didn't see any paths toward making things better. I think it had really harmful effects on the community, and is worth thinking a lot about before something similar happens again.

Thanks for giving your own model and description of the situation!

Regarding latent tendency, I don't have a family history of psychosis (though I do of bipolar), although that doesn't rule out a latent tendency. It's unclear what "latent tendency" means exactly; it's kind of pretending that the real world is a 3-node Bayesian network (self tendency towards X, environment tendency towards inducing X, whether X actually happens) rather than a giant web of causality, but maybe there's some way to specify it more precisely.

I think the 4 factors you listed are the vast majority, so I partially agree with your "red herring" claim.

The "woo" language was causal, I think, mostly because I feared that others would apply to coercion to me if I used it too much (even if I had a more detailed model that I could explain upon request), and there was a bad feedback loop around thinking that I was crazy and/or other people would think I was crazy, and other people playing into this.

I think I originally wrote about basilisk type things in the post because I was very clearly freaking out about abstract evil at the time of psychosis (basically a generalization of utility function sign flips), and I though... (read more)

7gallabytes2y
hmm... this could have come down to spending time in different parts of MIRI? I mostly worked on the "world's last decent logic department" stuff - maybe the more "global strategic" aspects of MIRI work, at least the parts behind closed doors I wasn't allowed through, were more toxic? Still feels kinda unlikely but I'm missing info there so it's just a hunch.
5jessicata2y
My guess is that it has more to do with willingness to compartmentalize than with which part of MIRI one was in per se. Compartmentalization is negatively correlated with "taking on responsibility" for more of the problem. I'm sure you can see why it would be appealing to avoid giving in to extortion in real life, not just on whiteboards, and attempting that with a skewed model of the situation can lead to outlandish behavior like Ziz resisting arrest as hard as possible.
1gallabytes2y
I think this is a persistent difference between us but isn't especially relevant to the difference in outcomes here. I'd more guess that the reason you had psychoses and I didn't had to do with you having anxieties about being irredeemably bad that I basically didn't at the time. Seems like this would be correlated with your feeling like you grew up in a Shin Sekai Yori world?

I clearly had more scrupulosity issues than you and that contributed a lot. Relevantly, the original Roko's Basilisk post is putting AI sci-fi detail on a fear I am pretty sure a lot of EAs feel/felt in their heart, that something nonspecifically bad will happen to them because they are able to help a lot of people (due to being pivotal on the future), and know this, and don't do nearly as much as they could. If you're already having these sorts of fears then the abstract math of extortion and so on can look really threatening.

When I got back into town and talked with Jessica, she was talking about how it might be wrong to take actions that might possibly harm others, i.e. pretty much any actions, since she might not learn fast enough for this to come out net positive. Seems likely to me that the content of Jessica's anxious perseveration was partly causally upstream of the anxious perseveration itself.

I agree that a decline in bodily organization was the main legitimate reason for concern. It seems obviously legitimate for Jessica (and me) to point out that Scott is proposing a standard that cannot feasibly be applied uniformly, since it's not already common knowledge that Scott isn't making sense here, and his prior comments on this subject have been heavily upvoted. The main alternative would be to mostly stop engaging on LessWrong, which I have done.

I don't fully understand what "latent tendency towards psychosis" means functionally or what predictions it makes, so it doesn't seem like an adequate explanation. I do know that there's correlation within families, but I have a family history of schizophrenia and Jessica doesn't, so if that's what you mean by latent tendency it doesn't seem to obviously have an odds ratio in the correct direction within our local cluster.

By latent tendency I don't mean family history, though it's obviously correlated. I claim that there's this fact of the matter about Jess' personality, biology, etc, which is that it's easier for her to have a psychotic episode than for most people. This seems not plausibly controversial.

I'm not claiming a gears-level model here. When you see that someone has a pattern of <problem> that others in very similar situations did not have, you should assume some of the causality is located in the person, even if you don't know how.

3Benquo2y
Listing "I don't know, some other reason we haven't identified yet" as an "obvious source" can make sense as a null option, but giving it a virtus dormitiva type name is silly. I think that Jessica has argued with some plausibility that her psychotic break was in part the result of taking aspects of the AI safety discourse more seriously and unironically than the people around her, combined with adversarial pressures and silencing. This seems like a gears-level model that might be more likely in people with a cognitive disposition correlated with psychosis.
8jessicata2y
Agreed. Agreed during October 2017. Disagreed substantially before then (January-June 2017, when I was at MIRI). (I edited the post to make it clear how I misinterpreted your comment.)

Interpreting you as saying that January-June 2017 you were basically doing the same thing as the Leveragers when talking about demons and had no other signs of psychosis, I agree this was not a psychiatric emergency, and I'm sorry if I got confused and suggested it was. I've edited my post also.

4jessicata2y
One thing to add is that I think in the early parts of my psychosis (before the "mind blown by Ra" part) I was as coherent as or more coherent than hippies are on regular days, and even after that for some time (before actually being hospitalized) I might have been as coherent as they were on "advanced spiritual practice" days (e.g. the middle of a meditation retreat or experiencing Kundalini awakening). I was still controlled pretty aggressively with the justification that I was being incoherent, and I think that control caused me to become more mentally disorganized and verbally incoherent over time. The math test example is striking: I think less than 0.2% of people could pass it (to Zack's satisfaction) on a good day, and less than 3% could give an answer as good as the one I gave, yet this was still used to "prove" that I was unable to reason.
6Benquo2y
My recollection is that at that time you were articulately expressing what seemed like a level of scrupulosity typical of many Bay Area Rationalists. You were missing enough sleep that I was worried, but you seemed oriented x3. I don't remember you talking about demons or auras at all, and have no recollection of you confusedly reifying agents who weren't there.
2[comment deleted]2y
2[comment deleted]2y
2[comment deleted]2y
2[comment deleted]2y
2[comment deleted]2y
1[comment deleted]2y
1[comment deleted]2y

It should be noted that, as I was nominally Nate's employee, it is consistent with standard business practices for him to prevent me from talking with people who might distract me from my work; this goes to show the continuity between "cults" and "normal corporations".

This is very much not standard business practice. Working as an employee at four different "normal corporations" over thirteen years, I have never felt any pressure from my bosses (n=5) on who to talk to outside of work. And I've certainly been distracted at times!

Now that I'm a manager, I similarly would never consider this, even if I did think that one of my employees was being seriously distracted. If someone wasn't getting their work done or was otherwise not performing well, we would certainly talk about that, but who their contacts are is absolutely no business of mine.

I never told Jessica not to talk to someone (or at the very least, I don't recall it and highly doubt it). IIRC, in that time period, Jessica and one other researcher were regularly inviting Michael to the offices and talking to him at length during normal business hours. IIRC, the closest I came to "telling Jessica not to talk to someone" was expressing dissatisfaction with this state of affairs. The surrounding context was that Jessica had suffered performance (or at least Nate-legible-performance) degradation in the previous months, and we were meeting more regularly in attempts to see if we could work something out, and (if memory serves) I expressed skepticism about whether lengthy talks with Michael (in the office, during normal business hours) would result in improvement along that axis. Even then, I am fairly confident that I hedged my skepticism with caveats of the form "I don't think it's a good idea, but it's not my decision".

9jessicata2y
Thanks for the correction. A relevant fact is that at MIRI we didn't have set office hours (at least not that I remember), and Michael Vassar came to the office sometimes during the day. So one could argue that he was talking to people during work hours. (Still, I think the conversations we were having were positive for being able to think more clearly about AI alignment and related topics.) Also it seems somewhat likely that Nate was discouraging Michael from talking with me in general, not just during weekdays/daytime. I'll edit the post to make these things clearer.

This condition was triggered, Maia announced it, and Maia left the boat and killed themselves; Ziz and friends found Maia's body later.

Are you saying that Maia was in San Francisco at this time? That's an interesting claim, given that as far as the people in the European group house where Maia had lived in the preceding years knew, Maia died while on vacation in Poland (their home country).

From their perspective, Maia had started taking hormones to transition from male to female a few months earlier and was then spending time alone. That phase usually comes with pretty unstable psychological states.

Shortly before that, they wrote posts about how humans don't want happiness: https://web.archive.org/web/20180104225807/http://squirrelinhell.blogspot.com/2017/12/happiness-is-chore.html There was also a later post about how they saw life as not worth living, given that everything will be wiped out by death sooner or later anyway. I think that post got deleted, but I can't find an archive copy right now.

I only learned about the circumstances of the death after reading Ziz's account; previously I believed the account of the roommates from the group house, who seemed to be missing the crucial information about the interaction with Ziz.

Without Ziz sharing information it's hard to look into the circumstances of their death.

Oh, that's pretty good counter-evidence to my claim; I'll edit accordingly.

6ChristianKl2y
I'm personally quite unsure about how to think about this event. There's very little for Ziz to gain by making up a story like this. It reflects badly on her to be partly responsible for the suicide. It's also possible that while traveling alone after he said he went to Poland we went by the plane to hang out with Ziz's crew but it's all very strange.

Sorry, I'm having difficulty parsing the second paragraph here. Who's "he", and who's "we"?

"yeah, I don't really mind being the evil thing. Seems okay to me."

Regarding oneself as amoral seems to necessarily involve incoherence AFAICT. Like claiming you don't have your own oughts or that there is no structure to your oughts. In this case

Like doing a good work of art or playing an instrument. It feels satisfying.

is regarded as a higher good than the judgment of others. Loudly signaling that you don't care about the judgment of others seems to be a claim about what control surfaces you will and won't expose to social feedback. Nevertheless, the common sense prior that you shouldn't expect alliance with people who 'don't care about being evil' to go well seems appropriate.

To put it less abstractly: believe people when they say they are defecting rather than believe it is nested layers of fun and interesting counter signaling.

Note: I have no relation to MIRI/CFAR, no familiarity with this situation, and am not a mental health expert, so I can't speak with any specific authority here.

First, I'd like to offer my sympathy for the suffering you described. I've had unpleasant intrusive thoughts before. They were pretty terrible, and I've never had them to the degree you've experienced. X/S risk research tends to generate a lot of intrusive thoughts and general stress. I think better community norms/support in this area could help a lot. Here is one technique you may find useful:

  1. Raise your hand in front of your face with your palm facing towards you.
  2. Fix your eyes on the tip of a particular finger.
  3. Move your hand from side to side, while still tracking the chosen finger with your eyes (head remains still).
  4. Every time your hand changes direction, switch which finger your eyes track. I.e., first track the tip of the thumb, then track the pointer finger, then the middle, then ring, then pinky, then back to thumb.

This technique combines three simultaneous control tasks (moving your hand, tracking the current finger, switching fingers repeatedly) and also saturates your visual field with the constantly moving backgroun... (read more)

3jessicata2y
Thanks for the suggestion for intrusive thoughts. If I came up with a non-workable AGI design, that would not be significant evidence for "the pieces to make AGI are already out there and someone just needs to put them together". Lots of AI people throughout the history of the field have come up with non-workable AGI designs, including me in high school/college.
1Alex Vermillion2y
That's a neat idea with the hand thing.

It might help your case to write a version of this that removes most of the interpretation you've given here, and tries to present just the claims you know to be objective truths. While ‘the plaintiff is failing to personally provide an objective neutral point of view’ seems like a particularly disturbing sort of argument to dismiss something like this on, it is nonetheless the case that this does seem to be the principal defense, and most of those comments are pointing to real issues in your presentation.

Disclaimer, I'm an outsider.

5Viliam2y
Yeah, I was similarly thinking that someone (on Jessica's side of the story) should rewrite the articles. To remove the distracting parts, so that it would be easier to focus on whatever is left.

It seems to me that there's been a lot of debate about the causes of the psychosis and the suicides. I haven't seen the facts on the ground, so I can't know anything for sure. But as far as I can tell, the Vassarites and the core rationalists generally agree that MIRI&co aren't particularly bad, with the Vassarites just claiming, well:

Scott asserts that Michael Vassar thinks "regular society is infinitely corrupt and conformist and traumatizing". This is hyperbolic (infinite corruption would leave nothing to steal) but Michael and I do believe that the problems I experienced at MIRI and CFAR were not unique or unusually severe for people in the professional-managerial class. By the law of excluded middle, the only possible alternative hypothesis is that the problems I experienced at MIRI and CFAR were unique or at least unusually severe, significantly worse than companies like Google for employees' mental well-being.

And you give some relatively plausible defense of Vassar not being the main cause of the psychosis for the Vassarites. But that raises the question to me, why try to assign an environmental cause at all? It seems more reasonable to just say that the Vassarites are prone to psychosis, regardless of environment. At least I haven't heard of any clear evidence against this.

5jessicata2y
Why not assign an environmental cause in a case where one exists and I have evidence about it? "Vassarites are prone to psychosis" is obviously fundamental attribution error; that's not how physical causality works. There will be specific environmental causes in "normal" cases of trauma as well.
7tailcalled2y
As I understand it, both sides of the issue agree that MIRI isn't uniquely bad when it comes to frame control and such. MIRI might have some unique themes, e.g. AI torturing people instead of the devil torturing people, or lying about the promise of an approach for AI instead of lying about the promise of an approach for business, but it's not some unique evil by MIRI. (Please correct me if I'm misunderstanding your accusations here.)

As such, it's not that MIRI, compared to other environments, caused this. Of course, this does not mean that MIRI didn't in some more abstract sense cause it, in the sense that one could imagine some MIRI' which was like MIRI but didn't have the features you mention as contributors. But the viability of creating such an organization, both cost-wise and success-wise, is unclear, and because the organization doesn't exist but is instead a counterfactual imagination, it's not even clear that it would have the effects you hope it would have. So assigning the cause to MIRI not being MIRI' seems to require a much greater leap of faith.

Not so obvious to me. There were tons of people in these environments with no psychosis at all, as far as I know? Meanwhile, fundamental attribution error is about when people attribute something to a person where there is a situational factor that would have caused everyone else to act in the same way. Of course you could attribute this to subtleties about the social relations, who is connected to who and respected by who. But this doesn't seem like an obviously correct attribution to me. Maybe if I knew more about the social relations, it would be.
4jessicata2y
I think you're trying to use a regression model where I would use something more like a Bayes net. This makes some sense in that I had direct personal experience that includes lots of nodes in the Bayes net, and you don't, so you're going to use a lower-resolution model than me. But people who care about the Bayes net I lived in can update on the information I'm presenting.

I think the rate might be higher for former MIRI employees in particular, but I'm not sure how to evaluate; the official base rate is that around 3% of people have or will experience a psychotic break in their lifetime. If there are at least 3 psychotic breaks in former MIRI employees then MIRI would need to have had 100 employees to match the general population rate (perhaps more if the psychotic breaks happened within a few years of each other, and in the general population they're pretty spread out), although there's noise here and the official stat could be wrong. Anyway, "MIRI is especially likely to cause psychosis" (something that could be output by the type of regression model you're considering) is not the main claim I'm making.

Part of what's strange about attributing things to "Vassarites" in a regression model is that part of how "Vassarites" (including Vassar) became that way is through environmental causes. E.g. I listened to Michael's ideas more because I was at MIRI and Michael was pointing out features of MIRI and the broader situation that seemed relevant to me given my observations, and other people around didn't seem comparably informationally helpful.

I have no family history of schizophrenia (that I know of), only bipolar disorder.
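As a rough numerical check of the base-rate point above, here is a minimal sketch. It assumes the cited 3% lifetime rate, treats cases as independent, and uses 100 as a purely hypothetical headcount (the break-even number mentioned above, not an actual employee count); it also ignores the caveat about the breaks clustering in time.

```python
from math import comb

# Sketch only: hypothetical numbers for illustration.
base_rate = 0.03        # cited lifetime rate of a psychotic break
n_employees = 100       # hypothetical headcount (the break-even figure above)
observed_breaks = 3     # number of psychotic breaks cited above

# Expected number of lifetime psychotic breaks among n_employees people,
# if they matched the general population.
expected = base_rate * n_employees
print(f"expected breaks at the base rate: {expected:.1f}")  # 3.0

# Probability of at least `observed_breaks` cases by chance, treating each
# person as an independent Bernoulli(base_rate) draw.
p_at_least = 1 - sum(
    comb(n_employees, k) * base_rate**k * (1 - base_rate) ** (n_employees - k)
    for k in range(observed_breaks)
)
print(f"P(>= {observed_breaks} breaks | n={n_employees}): {p_at_least:.2f}")  # ~0.58
```

With those (made-up) numbers the tail probability comes out around 0.6, i.e. three cases among a hundred people would not by itself be surprising at the quoted rate; the real question is the actual headcount and the timing.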
6tailcalled2y
Isn't the rate of general mental illness also higher, e.g. autism or ADHD, which is probably not caused by MIRI? (Both among MIRI and among rationalists and rationalist-adjacent people more generally; e.g. I myself happen to be autistic, ADHD, GD, and probably also have one or two personality disorders, and I have a family history of BPD.) Almost all mental illnesses are correlated, so if you select for some mental illness you'd expect to get others to go along with it.

I am very sympathetic to the idea that the Vassarites are not nearly as environmentally causal to the psychosis as they might look. It's the same principle as above; Vassar selected for psychosis, being critical of MIRI, etc., so you'd expect higher rates even if he had no effect on it. (I think that's a major problem with naive regressions: taking something that's really a consequence and adding it to the regression as if it was a cause.)

It's tricky because I try to read the accounts, but they're all going to be filtered through people's perception, and they're all going to assume a lot of background knowledge that I don't have, due to not having observed it. I could put in a lot of effort to figure out what's true and false, representative and unrepresentative, but it's probably not possible for me due to various reasons. I could also just ignore the whole drama. But I'm just confused - if there's agreement that MIRI isn't particularly bad about this, then this seems to mostly preclude environmental attribution and suggest personal attribution?

I wouldn't necessarily say that I use a regression model, as e.g. I'm aware of the problem with just blaming Vassar for causing others' psychosis. There's definitely some truth to me being forced to use a lower-resolution model. And that can be terrible. Partly I just have a very strong philosophical leaning towards essentialism, but also partly it just, from afar, seems to be the best explanation.

I'm just confused - if there's agreement that MIRI isn't particularly bad about this, then this seems to mostly preclude environmental attribution and suggest personal attribution?

I've read Moral Mazes and worked a few years in the corporate world at Fannie Mae. I've also talked a lot with Jessica and others in the MIRI cluster who had psychotic breaks. It seems to me like what happens to middle managers is in some important sense even worse than a psychotic break. Jessica, Zack, and Devi seem to be able to represent their perspectives now, to be able to engage with the hypothesis that some activity is in good faith, to consider symmetry considerations instead of reflexively siding with transgressors.

Ordinary statistical methods - and maybe empiricism more generally - cannot shed light on pervasive, systemic harms, when we lack the capacity to perform controlled experiments on many such systems. In such cases, we instead need rationalist methods, i.e. thinking carefully about mechanisms from first principles. We can also try to generalize efficiently from microcosms of the general phenomenon, e.g. generalizing from how people respond to unusually blatant abuse by individuals or ins... (read more)

3jessicata2y
Suppose it's really common in normal corporations for someone to be given ridiculous assignments by their boss and that this leads to mental illness at a high rate. Each person at a corporation like this would have a specific story of how their boss gave them a really ridiculous assignment and this caused them mental problems. That specific story in each case would be a causal model (if they hadn't received that assignment or had anything similar to that happen, maybe they wouldn't have that issue). This is all the case even if most corporations have this sort of thing happen.
2tailcalled2y
In a sense, everything is caused by everything. If not for certain specifics of the physical constants, the universe as we know it wouldn't exist. If cosmic rays struck you in just the right ways, it could probably prevent psychosis. Etc. Further, since causality is not directly observable, even when there isn't a real causal relationship, it's possible to come up with a specific story where there is. This leads to a problem for attributing One True Causal Story: which one to pick? Probably we shouldn't feel restricted to only having one, as multiple frames may be relevant. But clearly we need some sort of filter.

Probably the easiest way to get a filter is by looking at applications. E.g., there's the application of: which social environment should you join? Which presumably is about the relative effects of the different environments on a person. I don't think this most closely aligns with your point, though.

Probably an application near to you is: how should rationalist social environments be run? (You're advocating for something more like Leverage, in certain respects?) Here one doesn't necessarily need to compare across actual social environments; one can consider counterfactual ones too. However, for this a cost/benefit analysis becomes important: how difficult would a change be to implement, and how much would it help with the mental health problems? This is hard to deduce, and so it becomes tempting to use comparisons across actual social environments as a proxy. E.g. if most people get ridiculous assignments by their boss, then that probably means there is some reason why that's very hard to avoid. And if most people don't get severe mental illnesses, then that puts a limit on how bad the ridiculous assignments can be on their own. So it doesn't obviously pass a cost-benefit test.

Another thing one could look at is how well the critics are doing; are they implementing something better? Here again I'm looking at it from afar, so it's hard for me to
3jessicata2y
Not really? I think even if Leverage turned out better in some ways, that doesn't mean switching to their model would help.

I'm primarily not attempting to make policy recommendations here; I'm attempting to output the sort of information a policy-maker could take into account as empirical observations. This is also why the "think about applications" point doesn't seem that relevant; lots of people have lots of applications, and they consult different information sources (e.g. encyclopedias, books), each of which isn't necessarily specialized to their application.

That seems like a fully general argument against trying to fix common societal problems? I mean, how do you think people ever made society better in the past? In any case, even if it's hard to avoid, it helps to know that it's happening and is possibly a bottleneck on intellectual productivity; if it's a primary constraint then Theory of Constraints suggests focusing a lot of attention on it.

It seems like the general mindset you're taking here might imply that it's useless to read biographies, news reports, history, and accounts of how things were invented/discovered, on the basis that whoever writes it has a lot of leeway in how they describe the events, although I'm not sure if I'm interpreting you correctly.
2tailcalled2y
This seems to me to be endorsing "updating" as a purpose; evidence flows up the causal links (and down the causal links, but for this purpose the upwards direction is more important). So I will be focusing on that purpose here. The most interesting causal links are then the ones which imply the biggest updates. Which I suppose is a very subjective thing? It depends heavily not just on the evidence one has about this case, but also on the prior beliefs about psychosis, organizational structure, etc. In theory, the updates should tend to bring everybody closer to some consensus, but the direction of change may vary wildly from person to person, depending on how they differ from that consensus. Though in practice, I'm already very essentialist, and my update is in an essentialist direction, so that doesn't seem to cash out.

(... or does it? One thing I've been essentialist about is that I've been skeptical that "cPTSD" is a real thing caused by trauma, rather than some more complicated genetic thing. But the stories from especially Leverage and also to an extent MIRI have made me update enormously hard in favor of trauma being able to cause those sorts of mental problems - under specific conditions. I guess there's an element of: on the more ontological/theoretical level, people might converge, but people's preexisting ontological/theoretical beliefs may cause their assessments of the situation to diverge.)

My phrasing might have been overly strong, since you wouldn't endorse a lot of what Leverage does, due to it being cultish. What I meant is that one thing you seem to have endorsed is talking more about "objects" and such.

I agree that this is a rather general argument, but it's not supposed to stand on its own. The structure of my argument isn't "MIRI is normal here so it's probably hard to change, so the post isn't actionable", it's "It's dubious things happened exactly as the OP describes, MIRI is normal here so it's

MIRI certainly had a substantially conflict-theoretic view of the broad situation, even if not the local situation.  I brought up the possibility of convincing DeepMind people to care about AI alignment.  MIRI leaders including Eliezer Yudkowsky and Nate Soares told me that this was overly naive, that DeepMind would not stop dangerous research even if good reasons for this could be given.  Therefore (they said) it was reasonable to develop precursors to AGI in-house to compete with organizations such as DeepMind in terms of developing AGI first.  So I was being told to consider people at other AI organizations to be intractably wrong, people who it makes more sense to compete with than to treat as participants in a discourse.

 

Anyone from MIRI want to comment on this? This seems weird, especially considering how open Demis/Legg have been to alignment arguments.

MIRI leaders including Eliezer Yudkowsky and Nate Soares told me that this was overly naive, that DeepMind would not stop dangerous research even if good reasons for this could be given.

I have no memory of saying this to Jessica; that by itself is not strong evidence, because my autobiographical memory is bad, but it also doesn't sound like something I would say.  I generally credit Demis Hassabis as being more clueful than many, though unfortunately not on quite the same page.  Adjacent things that could possibly have actually been said in reality might include "It's not clear that Demis has the power to prevent Google's CEO from turning up the dial on an AGI even if Demis thinks that's a bad idea", or "DeepMind has recruited a lot of people who would strongly protest reduced publications, given their career incentives and the impression they had when they signed up", or maybe something something Law of Continued Failure: they already have strong reasons not to advance the field, so why would providing them with stronger ones help?

Therefore (they said) it was reasonable to develop precursors to AGI in-house to compete with organizations such as DeepMind in terms of developing AGI first.

[...]

In case it refreshes your memory: this was at a research retreat; we were in a living room on couches; you and I and Nate were there, and Critch and Ryan Carey were probably there. I was saying that convincing DeepMind people to care about alignment was a good plan, and people were saying that was overly naive and that competition was a better approach. I believe Nate specifically said something about Demis saying that he couldn't stop DeepMind researchers from publishing dangerous/unaligned AI things even if he tried. Even if Demis can be reasoned with, that doesn't imply DeepMind as a whole can be reasoned with, since DeepMind also includes and is driven by these researchers whom Demis doesn't think he can reason with.

Sounds like something that could have happened, sure; I wouldn't be surprised to hear Critch or Carey confirm that version of things.  A retreat with non-MIRI people present, and nuanced general discussion on that topic happening, is a very different event to have actually happened than the impression this post leaves in the mind of the reader.

Critch and Carey were MIRI people at the time. It wasn't just them disagreeing with me; I think you and/or Nate were as well.

Tomás B. (8 points, 2y):
Do you have a take on Shane Legg? Or any insight into his safety efforts? In his old blog and the XiXiDu interview, he was pretty solid on alignment, back when it was far harder to say such things publicly. And he made this comment in this post, just before starting DeepMind:

I'm even more positive on Shane Legg than on Demis Hassabis, but I don't have the impression he's in charge.

Joe_Collman (2 points, 2y):
My immediate thought on this was that the conclusion [people at other AI organizations are intractably wrong] doesn't follow from [DeepMind (the organisation) would not stop dangerous research even if good reasons...]. (Edited to bold "organisation" rather than "DeepMind", for clarity.) A natural way to interpret the latter is that people who came to care sufficiently (and be sufficiently MIRI-cautious) about alignment would tend to lose/fail-to-gain influence over DeepMind's direction (through various incentive-driven dynamics). Its being possible to change the mind of anyone at an organisation isn't necessarily sufficient to change the direction of that organisation. [To be clear, I know nothing DeepMind-specific here - just commenting on the general logic.]
jessicata (2 points, 2y):
In context I thought it was clear that DeepMind is an example of an "other AI organization", i.e. other than MIRI.
Joe_Collman (2 points, 2y):
Sure, that's clear of course. I'm distinguishing between the organisation and "people at" the organisation. It's possible for an organisation's path to be very hard to change due to incentives, regardless of the views of the members of that organisation. So doubting the possibility of changing an organisation's path doesn't necessarily imply doubting the possibility of changing the minds of the people currently leading/working-at that organisation.  [ETA - I'll edit to clarify; I now see why it was misleading]

How did you conclude, from Nate Soares saying that the tools to create AGI likely already exist, that he wanted people to believe he knew how to construct one?

Why were none of these examples mentioned in the original discussion thread and comment section from which a lot of the quoted sections come?

  1. Because he asked me to figure it out in a way that implied he already had a solution; the assignment wouldn't make sense if the goal were to locate a non-workable AGI design (as many AI researchers have produced throughout the history of the field), since that wouldn't at all prove that the pieces to make AGI are already out there. Also, there wouldn't be much reason to think that his sharing a non-workable AGI design with me would be dangerous.

  2. I believe my previous post was low on detail partially due to traumatic conditioning making these things hard to write about. I got a lot of information and psychological healing from telling a "normal" person (not part of the rationalist scene) about what happened, while feeling free to be emotionally expressive along the way. I mentioned screaming regarding the "infohazard" concept being used to cover up circumstances of deaths; I also screamed regarding the "create a workable AGI design" point. This probably indicates that some sort of information connection/flow was suppressed.

Thomas Kehrenberg (3 points, 2y):
If someone told me to come up with an AGI design and that I already knew the parts, then I would strongly suspect that person was trying to pull a Dantzig on me to find the solution, i.e., getting me to solve a hard open problem by making me believe it had already been solved. (My thinking that would, of course, make it not really work.)

I was given ridiculous statements and assignments including the claim that MIRI already knew about a working AGI design and that it would not be that hard for me to come up with a working AGI design on short notice just by thinking about it, without being given hints.

There's a huge gulf between "AGI idea that sounds like it will work to the researcher who came up with it" and "AGI idea that actually works when tried." Like, to the point where AGI researchers having ideas that they're insanely overconfident in, that don't work when tried, is almost being a [...]

jessicata (3 points, 2y):
I agree that coming up with a "promising research direction" AI design would have been a reasonable assignment. However, such a research direction, if found, wouldn't provide significant evidence for Nate's claim that "the pieces to make AGI are already out there and someone just has to put them together", since such research directions have been found throughout the AI field without correspondingly short AI timelines.