Zoe Curzi's Experience with Leverage Research

by Ilverin · 1 min read · 13th Oct 2021 · 110 comments

186 points

Leverage Research · Community
Personal Blog
Some comments are truncated due to high volume.

Thanks for your courage, Zoe!

Personally, I've tried to maintain anonymity in online discussion of this topic for years. I dipped my toe into openly commenting last week, and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post". Firstly, I very much don't appreciate my ability to maintain anonymity being narrowed like this. Rather, anonymity is a helpful defense in any sensitive online discussion, not least this one. But secondly, yes, I am throwaway/anonymoose - I posted anonymously because I didn't want to suffer adverse consequences from friends who got more involved than me. But I'm not throwaway2, anonymous, or BayAreaHuman - those three are bringing evidence that is independent of me, at least.

I only visited Leverage for a couple months, back in 2014. One thing that resonated strongly with me about your post is that the discussion is badly confused by lack of public knowledge and strong narratives, about whether people are too harsh on Leverage, what biases one might have, and so on. This is why I think we often retreat to jus... (read more)

What's frustrating about still hearing noisy debate on this topic, so many years later, is that Leverage being a really bad org seems overdetermined at this point. On the one hand, if I ranked MIRI, CFAR, CEA, FHI, and several startups I've visited, in terms of how reality-distorting they can be, Leverage would score ~9, while no other would surpass ~7. (It manages to be nontransparent and cultlike in other ways too!) On the other hand, their productive output was... also like a 2/10? It's indefensible. But still only a fraction of the relevant information is in the open.

One thing to note is that if you "read the room" instead of only looking at the explicit arguments, it's noticeable that a lot of people left Leverage and the new org ("Leverage 2.0") completely switched research directions, which to me seems like tacit acknowledgement that their old methods etc aren't as good.

As far as people leaving organizations goes, I'd love to have good data for MIRI, CFAR, CEA, and FHI.

I think I could write down a full history of employment for all of these orgs (except maybe FHI, on which I've kept fewer tabs), in an hour or two of effort. It's somewhat costly for me (in terms of time), but if lots of people are interested, I would be happy to do it.

I'm personally interested, and also I think having information like this collected in one place makes it much easier for everyone to understand the history and shape of the movement. IMO an employment history of those orgs would make for a very valuable top-level post.

AppliedDivinityStudies (7 points, 1d): Very interested

As someone who's been close to these orgs: some had a few related issues, but Leverage seemed much more extreme in many of these dimensions to me.

However, now there are like 50 small EA/rationalist groups out there, and I am legitimately worried about quality control.

I generally worry about all kinds of potential bad actors associating themselves with EA/rationalists.

There seems to be a general pattern where new people come to an EA/LW/ACX/whatever meetup or seminar, trusting the community, and there they meet someone who abuses this trust and tries to extract free work / recruit them for their org / abuse them sexually. The new person trusts them as representatives of the EA/rationalist community (which they can easily pretend to be), while the actual representatives of the EA/rationalist community probably don't even notice that this happens, or maybe feel like it's not their job to go reminding everyone "hey, don't blindly trust everyone you meet here".

I assume the illusion of transparency plays a big role here, where the existing members generally know who is important and who is a nobody, who plays a role in the movement and who is just hanging out there, what kind of behavior is approved and what kind is not... but the new member has no idea about anything, and may assume that if someone acts high-status then the person actually is high-status in the movement, and that whatever such a person does has the approval of the community.

To put it bluntly... (read more)

I very much agree about the worry. My original comment was meant to make the easiest case quickly, but I think more extensive cases apply too. For example, I'm sure there have been substantial problems even in the other notable orgs, and we should expect there to continue to be. (I'm not saying this based on particular evidence about these orgs, more that the base rate for similar projects seems bad, and these orgs don't strike me as absolutely above these issues.)

One solution (of a few) that I’m in favor of is to just have more public knowledge about the capabilities and problems of orgs.

I think it’s pretty easy for orgs of about any quality level to seem exciting to new people and recruit them or take advantage of them. Right now, some orgs have poor reputations among those “in the know” (generally for producing poor quality output), but this isn’t made apparent publicly.[1] One solution is to have specialized systems that actually present negative information publicly; this could be public rating or evaluation systems.

This post by Nuno was partially meant as a test for this:

https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist... (read more)

ozziegooen (9 points, 10h): For what it’s worth, I think this is true for basically all intense and moral communities out there. The EA/rationalist groups generally seem better than many religious and intense political groups in these areas, to me. However, even “better” is probably not at all good enough.
Said Achmiz (2 points, 7h): What are “intense” and/or “moral” communities? And, why is it (or is it?) a good thing for a community to be “moral” and/or “intense”?
ChristianKl (5 points, 6h): There are certain goals for which having a moral or intense community is helpful. Whether or not I want to live in such a community, I consider it okay for other people to build those communities. On the other hand, building cults is not okay in the same sense. Intense communities also generally focus on something where otherwise there's not much focus in society, increase cognitive diversity, and are thus able to produce certain kinds of innovations that wouldn't happen with less cognitive diversity.
ozziegooen (2 points, 37m): I was just thinking of the far right-wing and left-wing in the US; radical news organizations and communities. Q-anon, some of the radical environmentalists, conspiracy groups of all types. Many intense religious communities. I'm not making a normative claim about the value of being "moral" and/or "intense", just saying that I'd expect moral/intense groups to have some of the same characteristics and challenges.
ChristianKl (4 points, 1d): It seems to me that quality control has always been an issue with some groups, no matter how many groups there were.
ozziegooen (2 points, 39m): Agreed, though I think that the existence of many groups makes it a more obvious problem, and a more complicated problem.
Evan_Gaensbauer (5 points, 2d): Leverage Research hosted a virtual open house and AMA a couple weeks ago for their relaunch as a new kind of organization that has been percolating for the last couple years. I attended. One subject Geoff and I talked about was the debacle that was the article in The New York Times (NYT) on Scott Alexander from several months ago. I expressed my opinion that: 1. Scott Alexander could have managed his online presence much better than he did, on and off, for a number of years. 2. Scott Alexander and the rationality community in general could have handled the situation much better than they did. 3. Those are parts of this whole affair that too few in the rationality community have been willing to face, acknowledge, or discuss in terms of what can be learned from the mistakes made. 4. Nonetheless, NYT was the instigating party in whatever part of the situation constituted a conflict between NYT and Scott Alexander and his supporters, and NYT is the party that should be held more accountable and is more blameworthy, if anyone wants to make it about blame. Geoff nodded, mostly in agreement, and shared his own perspective on the matter, which I won't share. Yet if Geoff considers NYT to have done one or more things wrong in that case… You yourself, Ryan, never made any mistake of posting your comments online in a way that might make it easier for someone else to de-anonymize you. If you made any mistake, it's that you didn't anticipate how adeptly Geoff would apparently infer or discern your identity. I expect it wouldn't have been so hard for Geoff to figure out it was you, because you would have shared information about the internal activities at Leverage Research that only a small number of people would have had access to. Yet that's not something you should have had to anticipate. A presumption of good faith in a community or organization entails a common assumption that nobody would do that to their peers. Whatever Geoff himsel…

Based on how you wrote your comment, it seems that the email you received may have come across as intimidating.

I think the important information here is how Geoff / Leverage Research handled similar criticism in the past. (I have no idea. I assume both you and Ryan probably know more about this.) As they say, past behavior is the best predictor of future behavior. The wording of the e-mail is not so important.

Evan_Gaensbauer (2 points, 20h): I previously was not as aware that this is a pattern in how so many people have experienced responses to criticism from Geoff and Leverage in the past.

Many of these things seem broadly congruent with my experiences at Pareto, although significantly more extreme. Especially: ideas about psychology being arbitrarily changeable, Leverage having the most powerful psychology/self-improvement tools, Leverage being approximately the only place you could make real progress, extreme focus on introspection and other techniques to 'resolve issues in your psyche', (one participant's 'research project' involved introspecting about how they changed their mind for 2 months) and general weird dynamics (e.g. instructors sleeping with fellows; Geoff doing lectures or meeting individually with participants in a way that felt very loaded with attempts to persuade and rhetorical tricks), and paranoia (for example: participants being concerned that the things they said during charting/debugging would be used to blackmail or manipulate them; or suspecting that the private slack channels for each participant involved discussion of how useful the participants were in various ways and how to 'make use of them' in future). On the other hand, I didn't see any of the demons/objects/occult stuff, although I think people were excited about 'energy healers'/'body work', not actually believing that there was any 'energy' going on, but thinking that something interesting in the realm of psychology/sociology was going on there. Also, I benefitted from the program in many ways, many of the techniques/attitudes were very useful, and the instructors generally seemed genuinely altruistic and interested in helping fellows learn.

More thoughts:

I really care about the conversation that’s likely to ensue here, like probably a lot of people do.

I want to speak a bit to what I hope happens, and to what I hope doesn’t happen, in that conversation. Because I think it’s gonna be a tricky one.

What I hope happens:

  • Curiosity
  • Caring
  • Compassion
  • Interest in understanding both the specifics of what happened at Leverage, and any general principles it might hint at about human dynamics, or human dynamics in particular kinds of groups.

What I hope doesn’t happen:

  • Distancing from uncomfortable data.
  • Using blame and politics to distance from uncomfortable data.
  • Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.

This is LessWrong; let’s show the world how curiosity/compassion/inquiry is done!

Thanks, Anna!

As a LessWrong mod, I've been sitting and thinking about how to make the conversation go well for days now and have been stuck on what exactly to say.  This intention setting is a good start.

I think to your list I would add judging each argument and piece of data on its merits, i.e., updating on evidence even if it pushes against the position we currently hold.

Phrased alternatively, I'm hoping we don't treat arguments as soldiers: accepting bad arguments because they favor our preferred conclusion, rejecting good arguments because they don't support our preferred conclusion. I think there's a risk in cases like this of knowing which side you're on and then accepting and rejecting all evidence accordingly.

Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.

Are you somehow guaranteeing or confidently predicting that others will not take them in a politicized way, use them as an excuse for false judgments, or otherwise cause harm to those sharing the true relevant facts? If not, why are you asking people not to refrain from sharing such facts?

(My impression is that it is sheer optimism, bordering on wishful thinking, to expect such a thing, that those who have such a fear are correct to have such a fear, and so I am confused that you are requesting it anyway.)

Thanks for the clarifying question, and the push-back. To elaborate my own take: I (like you) predict that some (maybe many) will take shared facts in a politicized way, will use them as an excuse for false or uncareful judgments, etc. I am not guaranteeing, nor predicting, that this won’t occur.

I am intending to myself do inference and conversation in a way that tries to avoid these “politicized speech” patterns, even if it turns out politically costly or socially awkward for me to do so. I am intending to make some attempt (not an infinite amount of effort, but some real effort, at some real cost if needed) to try to make it easier for others to do this too, and/or to ask it of others who I think may be amenable to being asked this, and/or to help coalesce a conversation in what I take to be a better pattern if I can figure out how to do so. I also predict, independently of my own efforts, that a nontrivial number of others will be trying this.

If “reputation management” is a person’s main goal, then the small- to medium-sized efforts I can hope to contribute toward a better conversation, plus the efforts I’m confident in predicting independently of mine, would be insufficien... (read more)

rohinmshah (4 points, 2d): It sounds like you are predicting that the people who are sharing true relevant facts have values such that the long-term benefits to the group overall outweigh the short-term costs to them. In particular, it's a prediction about their values (alongside a prediction of what the short-term and long-term effects are). I'll just note that, on my view of the short-term and long-term effects, it seems pretty unclear whether by my values I should share additional true relevant information, and I lean towards it being negative. Of course, you have way more private info than me, so perhaps you just know a lot more about the short-term and long-term effects. I'm also not a fan of requests that presume that the listener is altruistic, and willing to accept personal harm for group benefit. I'm not sure if that's what you meant -- maybe you think in the long term sharing of additional facts would help them personally, not just help the group. Fwiw I don't have any particularly relevant facts. I once tagged along with a friend to a party that I later (i.e. during or after the party) found out was at Leverage. I've occasionally talked briefly with people who worked at Leverage / Paradigm / Reserve. I might have once stopped by a poster they had at EA Global. I don't think there have been other direct interactions with them.

I'm also not a fan of requests that presume that the listener ...

From my POV, requests, and statements of what I hope for, aren't advice. I think they don't presume that the listener will want to do them or will be advantaged by them, or anything much else about the listener except that it's okay to communicate my request/hope to them. My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people, and I guess I'm also assuming that my words won't be assumed to be a trustworthy voice of authorities that know where the person's own interests are, or something. That I can be just some person, hoping and talking and expecting to be evaluated by equals.

Is it that you think these assumptions of mine are importantly false, such that I should try to follow some other communication norm, where I more try to only advocate for things that will turn out to be in the other party's interests, or to carefully disclaim if I'm not sure what'll be in their interests? That sounds tricky; I'm not peoples' parents and they shouldn't trust... (read more)

I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people,

I feel like this assumption seems false. I do predict that (at least in the world where we didn't have this discussion) your statement would create a social expectation for the people to report true, relevant facts, and that this social expectation would in fact move people in the direction of reporting true, relevant facts.

I immediately made the inference myself on reading your comment. There was no choice in the matter, no execution of a deliberate strategy on my part, just an inference that Anna wants people to give the facts, and doesn't think that fear of reprisal is particularly important to care about. Well, probably, it's hard to remember exactly what I thought, but I think it was something like this. I then thought about why this might be, and how I might have misunderstood. In hindsight, the explanation you gave above should have occurred to me, that is the sort of thing that people who speak literally would do, but it did not.

I think there are lots of LWers who, like me, make these sorts of inferences automatically. (And I note that these kinds of inferences a... (read more)

Evan_Gaensbauer (9 points, 2d): Those making requests for others to come forward with facts in the interest of a long(er)-term common good could find norms that serve as assurance or insurance that someone will be protected against potential retaliation against their own reputation. I can't claim to know much about setting up effective norms for defending whistleblowers though.
TekhneMakre (2 points, 2d): If someone takes you as an authority, then they're likely to take your wishes as commands. Imagine a CEO saying to her employees, "What I hope happens: ... What I hope doesn't happen: ...", and the (vocative/imperative mood) "Let's show the world...". That's only your responsibility insofar as you're somehow collaborating with them to have them take you as an authority; but it could happen without your collaboration. IMO no, but you could, say, ask LW to make a "comment signature" feature, and then have every comment you make link, in small font, to the comment you just made.

I read Anna's request as an attempt to create a self-fulfilling prophecy. It's much easier to bully a few individuals than a large crowd.

Rob Bensinger (7 points, 2d): Yeah, I also read Anna as trying to create/strengthen local norms to the effect of 'whistleblowers, truth-tellers, and people-saying-the-emperor-has-no-clothes are good community members and to-be-rewarded/protected'. That doesn't make reprisals impossible, but I appreciated the push (as I interpreted it). I also interpreted Anna as leading by example to some degree -- a lot of orgs wouldn't have their president join a public conversation like this, given the reputational risks. If I felt like Anna was taking on zero risk but was asking others to take on lots of risk, I may have felt differently. Saying this publicly also (in my mind) creates some accountability for Anna to follow through. Community leaders who advocate value X and then go back on their word are in much more hot water than ones who quietly watch bad things happen. E.g., suppose this were happening on the EA Forum. People might assume by default that CEA or whoever is opposed to candor about this topic, because they're worried hashing things out in public could damage the EA-brand (or whatever). This creates a default pressure against open and honest truth-seeking. Jumping in to say 'no, actually, having this conversation here is good, and it seems good to try to make it as real as we can' can relieve a lot of that perceived pressure, even if it's not a complete solution. I perceive Anna as trying to push in that direction on a bunch of recent threads (e.g., here [https://www.lesswrong.com/posts/oEC92fNXPj6wxz8dd/how-to-think-about-and-deal-with-openai?commentId=7k3C3QTue4Bqjhi2o] ).
Rob Bensinger (7 points, 2d): I'm not sure what I think of Rohin's interpretation. My initial gut feeling is that it's asking too much social ownership of the micro [https://medium.com/@ThingMaker/in-defense-of-punch-bug-68fcec56cd6b], or asking community leaders to baby the community too much, or spend too much time carefully editing their comments to address all possible errors (with the inevitable result that community leaders say very little and the things they say are more dead and safe). It's not that I particularly object to the proposed rephrasings, more just that I have a gut-level sense that this is in a reference class of a thousand other similarly-small ways community leaders can accidentally slightly nudge folks in the wrong direction. In this particular case, I'd rather expect a little more from the community, rather than put this specific onus on Anna. I agree there's an empirical question of how socially risky it actually is to e.g. share negative stuff about Leverage in this thread. I'm all in favor of a thread to try to evaluate that question (which could also switch to PMs as needed if some people don't feel safe participating), and I see the argument for trying to do that first, since resolving that could make it easier to discuss everything else. I just think people here are smart and independent enough to not be 'coerced' by Anna if she doesn't open the conversation with a bunch of 'you might suffer reprisals' warnings (which does have a bit of a self-fulfilling-prophecy ring to it, though I think there are skillful ways to pull it off).
rohinmshah (6 points, 2d): You're reading too much into my response. I didn't claim that Anna should have this extra onus. I made an incorrect inference, was confused, asked for clarification, was still confused by the first response (honestly I'm still confused by that response), understood after the second response, and then explained what I would have said if I were in her place when she asked about norms. (Yes, I do in fact think that the specific thing said had negative consequences. Yes, this belief shows in my comments. But I didn't say that Anna was wrong/bad for saying the specific thing, nor did I say that she "should" have done something else. Assuming for the moment that the specific statement did have negative consequences, what should I have done instead?) (On the actual question, I mostly agree that we probably have too many demands on public communication, such that much less public communication happens than would be good.) I also would have been fine with "I hope people share additional true, relevant facts". The specific phrasing seemed bad because it seemed to me to imply that the fear of reprisal was wrong. See also here [https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=wcZjMnddQM2qKQCiw] .
Rob Bensinger (4 points, 2d): OK, thanks for the correction! :]
TekhneMakre (1 point, 3d): Of course there's also the possibility that it's worth it. E.g. because people could then notice who is doing a rush-to-judgement thing or confirmation-bias-y thing. (This even holds if there's threat of personal harm to fact-sharers, though personal harm looks like something you added to the part you quoted.)
rohinmshah (5 points, 2d): I agree that's possible, but then I'd say something like "I would love to know additional true relevant facts, but I recognize there are real risks to this and only recommend people do this if they think the benefits are worth it". Analogy: it could be worth it for an employee to publicly talk about the flaws of their company / manager (e.g. because then others know not to look for jobs at that company), even though it might get them fired. In such a situation I would say something like "It would be particularly helpful to know about the flaws of company X, but I recognize there are substantial risks involved and only recommend people do this if they feel up to it". I would not say "I hope people don't refrain from speaking up about the flaws of company X out of fear that they might be fired", unless I had good reason to believe they wouldn't be fired, or good reason to believe that it would be worth it on their values (though in that case presumably they'd speak up anyway).
TekhneMakre (1 point, 2d): Thanks. I'm actually still not sure what you're saying. Hypothesis 1: you're saying, stating "I hope person A does X" implies a non-dependence on person A's information, which implies the speaker has a lot of hidden evidence (enough to make their hope unlikely to change given A's evidence). And, people might infer that there's this hidden evidence, and update on it, which might be a mistake. Hypothesis 2: you're pointing at something about how "do X, even if you have fear" is subtly coercive / gaslighty, in the sense of trying to insert an external judgement to override someone's emotion / intuition / instinct. E.g. "out of fear" might subtly frame an aversion as a "mere emotion". (Maybe these are the same...)
rohinmshah (2 points, 2d): Hypothesis 2 feels truer than hypothesis 1. (Just to state the obvious: it is clearly not as bad as the words "coercion" and "gaslighting" would usually imply. I am endorsing the mechanism, not the magnitude-of-badness.) I agree that hypothesis 1 could be an underlying generator of why the effect in hypothesis 2 exists. I think I am more confident in the prediction that these sorts of statements do influence people in ways-I-don't-endorse, than in any specific mechanism by which that happens.
TekhneMakre (1 point, 2d): Okay.

I hesitated a bit before saying this? I thought it might add a little bit of clarity, so I figured I'd bring it up.

(Sorry it got long; I'm still not sure what to cut.)

There are definitely some needs-conflicts. Between (often distant) people who, in the face of this, feel the need to cling to the strong reassurance that "this could not possibly happen to them"/"they will definitely be protected from this," and would feel reassured at seeing Strong Condemning Action as soon as possible...

...and "the people who had this happen." Who might be best-served, if they absorbed that there is always some risk of this sort of shit happening to people. For them, it would probably be best if they felt their truth was genuinely heard, and took away some actionable lessons about what to avoid, without updating their personal identity to "victim" TOO much. And in the future, embraced connections that made them more robust against attaching to this sort of thing in the future.

("Victim" is just not a healthy personal identity in the long-term, for most people.)


Sometimes, these needs are so different, that it warrants having different forums of discussion. But there is some overlap in these needs (w... (read more)

ChristianKl (9 points, 1d): There's also the need to learn from what happened, so that when designing organizations in the future the same mistakes aren't repeated.

I would like it if we showed the world how accountability is done, and given your position, I find it disturbing that you have omitted this objective. That is, if I wanted to deflect the conversation away from accountability, I think I would write a post similar to yours. 

I would like it if we showed the world how accountability is done

So would I. But to do accountability (as distinguished from scapegoating, less-epistemic blame), we need to know what happened, and we need to accurately trust each other (or at least most of each other) to be able to figure out what happened, and to care what actually happened.

The “figure out what happened” and “get in a position where we can have a non-fucked conversation” steps come first, IMO.

I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.

Though, accountability is admittedly a weak point of mine, so I might be missing/omitting something. Maybe spell it out if so?

farp (7 points, 2d): To clarify: goal divergence between whom? Geoff and Zoe? Zoe and me? Me and you?
deluks917 (6 points, 3d): This reaction has been predictable for years IMO. As usual, a reasonable response required people to go public. There is no internal accountability process. Luckily things have been made public.

Epistemic status: I have not been involved with Leverage Research in any way, and have no knowledge of what actually happened beyond what's been discussed on LessWrong. This comment is an observation I have after reading the post.

I had just finished reading Pete Walker's Complex PTSD before coming across this post. In the book, the author describes a list of calm, grounded thoughts to respond to inner critic attacks. A large part of healing is for the survivor to internalize these thoughts so they can psychologically defend themselves.

I see a stark contrast between what the CPTSD book tries to instill and the ideas Leverage Research tried to instill, per Zoe's account. It's as if some of the programs at Leverage Research were trying to unravel almost all of one's sense of self.

A few examples:

Perfectionism

From the CPTSD book:

I do not have to be perfect to be safe or loved in the present. I am letting go of relationships that require perfection. I have a right to make mistakes. Mistakes do not make me a mistake.

From the post:

We might attain his level of self-efficacy, theoretical & logical precision, and strategic skill only once we were sufficiently transformed via the use

... (read more)

I imagine a lot of people want to say a lot of things about Leverage and the dynamics around it, except it’s difficult or costly/risky or hard-to-imagine-being-heard-about or similar.

If anyone is up for saying a bit about how that is for you personally (about what has you reluctant to try to share stuff to do with Leverage, or with EA/Leverage dynamics or whatever, that in some other sense you wish you could share — whether you had much contact with Leverage or not), I think that would be great and would help open up space.

I’d say err on the side of including the obvious.

My contact with Leverage over the years was fairly insignificant, which is part of why I don’t feel like it’s right for me to participate in this discussion. But there are some things that have come to mind, and since Anna’s made space for that, I’ll note them now. I still think it’s not really my place to say anything, but here’s my piece anyway. I’m speaking only for myself and my own experience.

I interviewed for an ops position at Leverage/Paradigm in early 2017, when I was still in college. The process took maybe a couple months, and the in-person interview happened the same week as my CFAR workshop; together these were my first contact with the Bay community. Some of the other rationalists I met that week warned me against Leverage in vague terms; I discussed their allegations with the ops team at my interview and came away feeling satisfied that both sides had a point.

I had a positive experience at the interview and with the ops team and their hiring process in general. The ops lead seemed to really believe in me and recommended me to other EA orgs after I didn’t get hired at Paradigm, and that was great. My (short-term) college boyfriend had a good relationship with Leverage... (read more)

The obsession with reputation control is super concerning to me, and I wonder how this connects up with Leverage's poor reputation over the years.

Like, I could imagine two simplified stories...

Story 1:

  • Leverage's early discoveries and methods were very promising, but the inferential gap was high -- they really needed a back-and-forth with someone to properly communicate, because everyone had such different objections and epistemic starting points. (This is exactly the trouble MIRI had in its early comms -- if you try to anticipate which objections will be salient to the reader, you'll usually miss the mark. And if you do this a lot, you miss the mark and are long-winded.)
  • Because of this inferential gap, Leverage acquired a very bad reputation with a bunch of people who (a) misunderstood its reasoning, and then (b) didn't get why Leverage wasn't investing more into public comms.
  • Leverage then responded by sharing less and trying to reset its public reputation to 'normal'. It wasn't trying to become super high-status, just trying to undo the damage already done / prevent things from further degrading as rumors mutated over time. Unfortunately, its approach was heavy-handed and incompet
... (read more)

I interacted with Leverage some over the years. I felt like they had useful theory and techniques, and was disappointed that it was difficult to get access to their knowledge. I enjoyed their parties. I did a Paradigm workshop. I knew people in Leverage to a casual degree.

What's live for me now is that when the other recent post about Leverage was published, I was subjected to strong, repeated pressure by someone close to Geoff to have the post marked as flawed, and asked to lean on BayAreaHuman to approximately retract the post or acknowledge its flaws. (This request was made of me in my new capacity as head of LessWrong.) "I will make a fuss" is what I was told. I agreed that the post has flaws (I commented to that effect in the thread) and this made me feel the pressure wasn't illegitimate despite being unpleasant. Now it seems to be part of a larger concerning pattern.

Further details seem pertinent, but I find myself reluctant to share them (and already apprehensive that this more muted description will have the feared effect) because I just don't want to damage the relationship I have with the person who was pressuring me. I'm unhappy about it, but I still value that relations... (read more)

With Geoff himself (with whom I personally have had a casual positive relationship) I feel more actual fear of being critical or in any way taking the side against Leverage. I predict that if I do so, I'll be placed on the list of adversaries. And something like, just based on the reaction to the Common knowledge post, Leverage is very agenty when it comes to their reputation. Or I don't know, I don't fear any particularly terrible retribution myself, but I am loath to make "enemies".

If you do make enemies in this process, in trying to help us make sense of the situation: count me among the people you can call on to help.

Brainstorming more concrete ideas: if someone makes a GoFundMe to try to offset any financial pressure/punishment Leverage-adjacent people might experience from sharing their stories, I'll be very happy to contribute.

I'm unhappy about it, but I still value that relationship

Positive reinforcement for finding something you could say that (1) protects this sort of value at least somewhat and (2) opens the way for aggregation of the metadata, so to speak; like without your comment, and other hypothetical comments that haven't happened yet for similar reasons, the pattern could go unnoticed.


I wonder if there's an extractable social norm / conceptual structure here. Something like separating [the pattern which your friend was participating in] from [your friend as a whole, the person you have a relationship with]. Those things aren't separate exactly, but it feels like it should make sense to think of them separately, e.g. to want to be adversarial towards one but not the other. Like, if there's a pattern of subtly suppressing certain information or thoughts, that's adversarial, and we can be agnostic about the structure/location of the agency behind that pattern while still wanting to respond appropriately in the adversarial frame.

My current feelings are a mixture of the following: 

  • I disagree with a lot of the details of what many people have said (both people who had bad experiences and people defending their Leverage experiences and giving positive testimonials), and feel like expressing my take has some chance of making those people feel like their experiences are invalidated, or at least spark some conflict of some type
  • I know that Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand, and that makes me both feel like I can trust many fewer things in the discussion, and makes me personally more hesitant to share some things (while also feeling like that's kind of cowardly, but I haven't yet had the time to really work through my feelings here, which in itself has some chilling effects that I feel uncomfortable with, etc.)
  • On the other side, there have been a lot of really vicious and aggressive attacks on anyone saying anything pro-Leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It's also
... (read more)

Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand

I assume there isn't a public record of this anywhere? Could I hear more details about what was said? This sounds atrocious to me.

I similarly feel that I can't trust the exculpatory or positive evidence about Leverage much so long as I know there's pressure to withhold negative information. (Including informal NDAs and such.)

On the other side, there have been a lot of really vicious and aggressive attacks on anyone saying anything pro-Leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It's also been more of a crowd-driven phenomenon, which makes it less predictable and more scary.

I agree with this too, and think it's similarly terrible, but harder to blame any individual for (and harder to fix).

I assume it's to a large extent an extreme example of the 'large inferential gaps + true beliefs that sound weird' afflicting a lot of EA orgs, including MIRI. Though if Leverage has been screwed up for a long time, some of that public reaction may also have been watered over the years by true rumors spreading about the org.

farp (1 point, 2d): Let's stand up for the truth regardless of threats from Geoff/Leverage, and let's stand up for the truth regardless of the mob. Let's stand up for the truth! Maintaining some aura of neutrality or impartiality at the expense of the truth would be IMO quite obviously bad. I think that it is seen as not very normative on LW to say "I know things, confidential things I will not share, and because of that I have a very [bad/good] impression of this person or group". But IMO it's important to surface. Vouching is an important social process.
ChristianKl (6 points, 1d): It seems that your account was registered just to participate in this discussion and you withhold your personal identity. If you sincerely believe that information should be shared, why are you withholding yourself while telling other people to take risks?

I have no private information to share. I think there is an obvious difference between asking powerful people in the community to stand up for the truth, and asking some rando commentator to de-anonymize. 

Ruby (4 points, 1d): Anna is attempting to make people comfortable having this difficult conversation about Leverage by first inviting them just to share what factors are affecting their participation. Oliver is kindly obliging and saying what's going through his mind. This seems like a good approach to me for getting the conversation going. Once people have shared what's going through their minds–and probably these need to be received with limited judgmentality–the group can then understand the dynamics at play and figure out how to proceed having a productive discussion. All that to say, I think it's better to hold off on pressuring people or saying their reactions aren't normative [1] in this sub-thread. Generally, I think having this whole conversation well requires a gentleness and patience in the face of the severe, hard-to-talk-about situation. Or to be direct, I think your comments in this thread have been brusque/pushy in a way that's hurting the conversation (others feel free to chime in if that seems wrong to them). [1] For what it's worth, I think disclosing that your stance is informed by private info is good and proper.

I think your comments in this thread have been brusque/pushy in a way that's hurting the conversation (others feel free to chime in if that seems wrong to them).

I mentioned in a different comment that I've appreciated some of farp's comments here for pushing back against what I see as a missing mood in this conversation (acknowledgment that the events described in Zoe's account are horrifying, as well as reassurance that people in leadership positions are taking the allegations seriously and might take some actions in response). I also appreciate Ruby's statement that we shouldn't pressure or judge people who might have something relevant to say.

The unitofcaring post on mediators and advocates seems relevant here. I interpret farp (edit: not necessarily in the parent comment, but in various other comments in this thread) as saying that they'd like to see more advocacy in this thread instead of just mediation. I am not someone who has any personal experiences to share about Leverage, but if I imagine how I'd personally feel if I did, I think I agree.

On mediators and advocates: I think order-of-operations MATTERS.

You can start seeking truth, and pivot to advocate, as UOC says.

What people often can't do easily is start with advocate, and pivot to truth.

And with something like this? What you advocated early can do a lot to color both what and who you listen to, and who you hear from.

Rob Bensinger (7 points, 15h): I liked Farp's "Let's stand up for the truth" comment, and thought it felt appropriate. (I think for different reasons than "mediators and advocates" -- I just like people bluntly stating what they think, saying the 'obvious', and cheerleading for values that genuinely deserve cheering for. I guess I didn't expect Ollie to feel pressured-in-a-bad-way by the comment, even if he disagrees with the implied advice.)

I will talk about my own bit with Leverage later, but I don't feel like it's the right time to share it yet.

(But fwiw: I do have some scars, here. I have a little bit of skin in this one. But most of what I'm going to talk about, comes from analogizing this with a different incident.)

A lot of the position I naturally slide into around this, which I have... kind of just embraced, is of trying to relate hard to the people who:

  • WERE THERE
  • May have received a lot of good along with the bad
  • May have developed a very complicated and narratively-unsatisfying opinion because of that, which feels hard to defend
  • Are very sensitized to condemning mob-speak. Because they've been told, again and again, that anything good they got out of the above, will be swept out with the bathwater if the bad comes to light.
    • This sort of thing only stays covered up for this long, if there was a lot of pressure and plausible-sounding arguments pointing in the direction of "say nothing." The particular forms of that, will vary.
    • Core Leverage seems pretty willing to resort to manipulation and threats? And despite me generally trying so hard to avoid this vibe: I want to condemn that outright.
    • Also, in any othe
... (read more)

I was once in a similar position, due to my proximity to a past (different) thing. I kinda ended up excruciatingly sensitive, to how some things might read or feel to someone who was close, got a lot of good out of it (with or without the bad), and mostly felt like there was no way their account wouldn't be twisted into something unrecognizable. And who may be struggling, with processing an abrupt shift in their own personal narrative --- although I sincerely hope the 2 years of processing helped to make this less of a thing? But if you are going through it anyway, I am sorry.

And... I want this to go right. It didn't go right then; not entirely. I think I got yelled at by someone I respect, the first time I opened up about it. I'm not quite sure how to make this less scary for them? But I want it to be.

The people I know who got swept up in this includes some exceptionally nice people. There is at least one of them, who I would ordinarily call exceptionally sane. Please don't feel like you're obligated to identify as a bad person, or as a victim, because you were swept up in this. Just because some people might say it about you, doesn't make it who you are.

TekhneMakre (9 points, 15h): An abstract note: putting stock in anonymous accounts potentially opens wider a niche for false accounts, because anonymity prevents doing induction about trustworthiness across accounts by one person. (I think anonymity is a great tool to have, and don't know if this is practically a problem; I just want to track the possibility of this dynamic, and appreciate the additional value of a non-anonymous account.)

One tool here is for a non-anonymous person to vouch for the anonymous person (because they know the person, and/or can independently verify the account).

TekhneMakre (1 point, 15h): True. A maybe not-immediately-obvious possibility: someone playing Aella's role of posting anonymous accounts could offer the following option: if you give an account and take this option, then if the poster later finds out that you seriously lied, they have the option to de-anonymize you. The point being, in the hypothetical where the account is egregiously false, the accounter's reputation still takes a hit; and so, these accounts can be trusted more. If there's no possibility of de-anonymization, then the account can only be trusted insofar as you trust the poster's ability to track accounters' trustworthiness. Which seems like a more complicated+difficult task. (This might be a terrible thing to do, IDK.)
Spiracular (8 points, 14h): I get VERY creepy vibes from this proposal, and want to push back hard on it. Although, hm... I think "lying" and "enemy action" are different? Enemy action occasionally warrants breaking contracts back, after they didn't respect yours. Whereas if there is ZERO lying-through-negligence in accounts of PERSONAL EXPERIENCES, we can be certain we set the bar-of-entry far too high.
TekhneMakre (-5 points, 14h)

In the past, I've been someone who has found it difficult and costly to talk about Leverage and the dynamics around it, or organizations that are or have been affiliated with effective altruism, though the times I've spoken up I've done more than others. I would have done it more but the costs were that some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general and discouraged me from speaking up more often again with what sometimes amounted to nothing more than peer pressure. 

That was a few years ago. For lots of reasons, it's easier, less costly, and less risky for me now, and easier not to feel fear. I don't know yet what I'll say regarding any or all of this related to Leverage, because I don't have any sense of how I might be prompted or provoked to respond. Yet I expect I'll have more to say, and I don't yet have any particular feelings about what I might share as relevant. I'm sensitive to how my statements might impact others, but for myself personally I feel almost indifferent.

I appreciate this invitation. I'll re-link to some things I already said on my own stance: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QKKnepsMoZmmhGSe

Beyond what I laid out there:

  • It was challenging being aware of multiple stories of harm, and feeling compelled to warn people interacting with Geoff, but not wanting to go public with surprising new claims of harm. (I did mention awareness of severe harm very understatedly in the post. I chose instead to focus on "already known" properties that I feel substantially raise the prior on the actually-observed type of harm, and to disclose in the post that my motivation in cherry-picking those statements was to support pattern-matching to a specific template of harm).

  • After posting, it was emotionally a bit of a drag to receive comments that complained that the information-sharing attempt was not done well enough, and comparatively few comments grateful for attempting to share what I could, as best I could, to the best of my ability at the time, although the upvote patterns felt encouraging. I was pretty much aware that that was what was going to happen. In general, "flinc

... (read more)
TekhneMakre (3 points, 13h): I don't have anything to add, but I just want to say I felt a pronounced pang of warmth/empathy towards you reading this part. Not sure why; something about fear/bravery/aloneness/fog-of-war.
Linch (9 points, 2d): My general feeling about this is that the information I know is either well-known or otherwise "not my story to tell." I've had very few direct interactions with Leverage except applying to Pareto, a party or two, and some interactions with Leverage employees (not Geoff) and volunteers. As is common with human interactions, I appreciated many but not all of my interactions. Like many people in the extended community, I've been exposed to a non-overlapping subset of accounts/secondhand rumors of varying degrees of veracity. For some things it's been long enough that I can't track the degree of confidence I'm supposed to keep, and under which conditions, so it seems better to err on the side of silence. At any rate, it's ultimately not my story/tragedy. My own interactions with Leverage have not been personally noticeably harmful or beneficial.

EDIT: This comment described a bunch of emails between me and Leverage that I think would be relevant here, but I misremembered something about the thread (it was from 2017) and I'm not sure if I should post the full text so people can get the most accurate info (see below discussion), so I've deleted it for now. My apologies for the confusion

Rob Bensinger (2 points, 2d): ?!?!?!?!?!?!?!?!?!
Rob Bensinger (6 points, 2d): Update: Looks like the thing I was surprised by didn't happen. Confusion noticed, I guess!
Aella (0 points, 2d): Would you happen to have/be willing to share those emails?
alyssavance (9 points, 2d): I have them, but I'm generally hesitant to share emails as they normally aren't considered public. I'd appreciate any arguments on this, pro or con.

I generally feel reasonably comfortable sharing unsolicited emails, unless the email makes some kind of implicit request to not be published, that I judge at least vaguely valid. In general I am against "default confidentiality" norms, especially for requests or things that might be kind of adversarial. I feel like I've seen those kinds of norms weaponized in the past in ways that seem pretty bad, and think that while there is a generally broad default expectation of unsolicited private communication being kept confidential, it's not a particularly sacred protection in my mind (unless explicitly or implicitly requested, in which case I think I would talk to the person first to get a more fully comprehensive understanding of why they requested confidentiality, and would generally err on the side of not publishing, though would feel comfortable overcoming that barrier given sufficient adversarial action).

unless the email makes some kind of implicit request to not be published

What does "implicit request" mean here? There are a lot of email conversations where no one writes a single word that's alluding to 'don't share this', but where it's clearly discussing very sensitive stuff and (for that reason) no one expects it to be posted to Hacker News or whatever later.

Without having seen the emails, I'm guessing Leverage would have viewed their conversation with Alyssa as 'obviously a thing we don't want shared and don't expect you to share', and I'm guessing they'd confirm that now if asked?

I do think that our community is often too cautious about sharing stuff. But I'm a bit worried about the specific case of 'normalizing big infodumps of private emails where no one technically said they didn't want the emails shared'.

(Maybe if you said more about why it's important in this specific case? The way you phrased it sort of made it sound like you think this should be the norm even for sensitive conversations where no one did anything terrible, but I assume that's not your view.)

habryka (2 points, 2d): I don't know; it's kind of complicated, enough that I could probably write a sequence on it, and I'm not even sure I would have full introspective access into what I would feel comfortable labeling as an "implicit request". I could write some more detail, but it's definitely a matter of degree, and the weaker the level of implicit request, the weaker the reason for sharing needs to be, with some caveats about adjusting for people's communication skills, adversarial nature of the communication, adjusting for biases, etc.

I would just ask the other party whether they are OK to share rather than speculating about what the implicit expectation is.

Hi everyone. I wanted to post a note to say first, I find it distressing and am deeply sorry that anyone had such bad experiences. I did not want or intend this at all.

I know these events can be confusing or very taxing for all the people involved, and that includes me. They draw a lot of attention, both from those with deep interest in the matter, where there may be very high stakes, and from onlookers with lower context or less interest in the situation. To hopefully reduce some of the uncertainty and stress, I wanted to share how I will respond.

My current plan (after writing this note) is to post a comment about the above-linked post. I have to think about what to write, but I can say now that it will be brief and positive. I’m not planning to push back or defend. I think the post is basically honest and took incredible courage to write. It deserves to be read.

Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.

It may be useful to address the Leverage/Rationality relation or the Leverage/EA relation as well, but discussion of that might distract us from what is most important right now.

Given what the post said about the NDA that people signed when leaving, it seems to me like explicitly releasing people from that NDA (maybe with a provision to anonymize names of other people) would be very helpful for having a productive discussion that can integrate the experiences of many people into public knowledge and create a shared understanding of what happened.

throwaway2456 (6 points, 20h): Edit: I was going to leave the original comment, to provide context to Vaniver's reply. But it started receiving upvotes that brought it above "-1", making it a more prominent bad example of community norms. I think the upvotes indicate importance in the essence of the questions, but their form was ill-considered and rushed to judgement. In compromise, I've tried to rewrite them more neutrally and respectfully to all involved. I may revisit them a few more times. * What is the relationship between Leverage and Reserve, and related individuals and entities? * Under what conditions does restitution to ex-Leveragers make sense? Under what conditions does it make sense for leadership to divest themselves of resources? * In arguendo, what could restitution or divestment concretely look like?

I wanted to note that I think this comment both a) raises a good point (should Leverage pay restitution to people who were hurt by it? Why and how much?) and b) does so in a way that I think is hostile and assumes way more buy-in than it has (or than it would need to get support for its proposal).

First, I think most observers are still in "figuring out what's happened" mode. Was what happened with Zoe unusually bad or typical, predictable or a surprise? I think it makes sense to hear more stories before jumping to judgment, because the underlying issue isn't that urgent, and the more context we have, the wiser a decision we can make.

Second, I think a series of leading questions asked to specific people in public looks more like norm enforcement than it does like curious information-gathering, and I think the natural response is suspicion and defensiveness. [I think we should go past the defensiveness and steelman.]

Third, I do think that it makes sense for people to make things right with money when possible; I think that this should be proportional to damages done and expectations of care, rather than just 'who has the money.' Suppose, pulling these numbers out of a hat, the total damage done to L... (read more)

In retrospect, I apologize for the strident tone and questions in my original comment. I am personally worried about further harm, in uses of money or power by Anders, and from Zoe's post it seems like there were a handful or more of other people hurt. If money or tokens are possibly causally downstream of harm, restitution might reduce further harm and address harm that has already taken place. The community is doing ongoing information gathering, though, and my personal rush to judgement isn't keeping pace with that. I'll leave my above comment as is, since it's already received a constructive reply.

Good stuff. Very similar to DeMille's interview about Hubbard. As an aside, I love how the post rejects the usual positive language about "openness to experience" and calls the trait what it is: openness to influence.

My own experience is somewhat like Linch's here, where mostly I'm vaguely aware of some things that aren't my story to tell.

For most of the past 9ish years I'd found Leverage "weird/sometimes off-putting, but not obviously more so than other rationality orgs." I have gotten personal value out of the Leverage suite of memes and techniques (Belief Reporting was a particularly valuable thing to have in my toolkit).

I've received one bit of secondhand info about "An ex-Leverage employee (not Zoe) had an experience that seemed reasonable to describe as 'the bad kind of cult that was actually harmful'." I was told this in the past couple of years as part of a decision-making process where it seemed relevant, and was asked not to share it further. I think it makes sense to share this much meta-data in this context.

While I'm not hugely involved, I've been reading OB/LW since the very beginning. I've likely read 75% of everything that's ever been posted here.

So, I'm way more clued-in to this and related communities than your average human being and...I don't recall having heard of Leverage until a couple of weeks ago.

I'm not exactly sure what that means with regard to PR-esque type considerations.

However. Fair or not, I find that, having read the recent stuff, I've got an ugh field that has extended to slightly include LW. (I'm not sure what it means to "include LW"...it's just a website. My first stab at an explanation is that it's more like "people engaged in community-type stuff who know IRL lots of other people who communicate on LW", but that's not exactly right either.)

I think it'd be good to have some context on why any of this is relevant to LessWrong. The whole thing is generating a ton of activity and it feels like it just came out of nowhere. 

Personally I think this story is an important warning about how people with an LW-adjacent mindset can death spiral off the deep end. This has happened around this community multiple times, not just at Leverage (I know of at least one other prominent example and suspect there are more), so we should definitely watch out for this and/or think about how to prevent this kind of thing.

What's the other prominent example you have in mind?

I am referring to the cause of this incident. This seems like a possibly good source for more information, but I only skimmed it, so I don't vouch for the content.

7TekhneMakre2dThanks.

Leverage has always been at least socially adjacent to LW and EA (the earliest discussion I find is in 2012), and they hosted the earliest EA summits in 2013-2014 (before CEA started running EA Global).

4Dustin3dHaving seen it, I have a very vague recollection of maybe having read that at the time. Still, the amount of activity on the recent posts about Leverage seems to me out of all proportion to previous mentions/discussions.

Also, for the extended Leverage diaspora and people who are somehow connected, LessWrong is probably the most obvious place to have this discussion, even if people familiar with Leverage make up only a small proportion of people who normally contribute here.

There are other conversations happening on Facebook and Twitter but they are all way more fragmented than the ones here.

I originally chose LessWrong, instead of some other venue, to host the Common Knowledge post primarily because (1) I wanted to create a publicly-linkable document pseudonymously, and (2) I expected high-quality continuation of information-sharing and collaborative sense-making in the comments.

As someone part of the social communities, I can confirm that Leverage was definitely a topic of discussion for a long time around Rationalists and Effective Altruists. That said, often the discussion went something like, "What's up with Leverage? They seem so confident, and take in a bunch of employees, but we have very little visibility." I think I experienced basically the same exact conversation about them around 10 times, along these lines.

As people from Leverage have said, several Rationalists/EAs were very hostile around the topic of Leverage, particularly in the last ~4 years or so. (I've heard stories of people getting shouted at, at a conference, just for saying they worked at Leverage.) On the other hand, they definitely had support from a few rationalist/EA orgs and several higher-ups of different kinds.

They've always been secretive, and some of the few public threads didn't go well for them, so it's not too surprising to me that they've had a small LessWrong/EA Forum presence.

I've personally very much enjoyed mostly staying away from the controversy, though very arguably I made a mistake there.

(I should also note that I had friends who worked at or close to Leverage, I attended like 2 events there early on, and I applied to work there around 6 years ago.)

9Evan_Gaensbauer2dFor what it's worth, my opinion is that you sharing your perspective is the opposite of making a mistake.

Sorry, edited. I meant that it was a mistake for me to keep away before, not now.

(That said, this post is still quite safe. It's not like I have scandalous information, more that, technically I (or others) could do more investigation to figure out things better.)

8Evan_Gaensbauer1dYeah, at this point, everyone coming together to sort this out, as a way of building a virtuous spiral where speaking up feels safe enough that it doesn't even need to be a courageous thing to do or whatever, is the kind of thing I think your comment also represents and what I was getting at.

A 2012 CFAR workshop included "Guest speaker Geoff Anders presents techniques his organization has used to overcome procrastination and maintain 75 hours/week of productive work time per person." He was clearly connected to the LW-sphere if not central to it.

Re: @Ruby on my brusqueness

LW/EA has more "world saving" orgs than just Leverage. Implicit in "world saving" orgs, IMO, is that we should tolerate some impropriety for the greater good. Or that we should handle things quietly in order to not damage the greater mission.

I think that our "world saving" orgs ask a lot of trust from the broader community -- MIRI is a very clear example. I'm not really trying to condemn secrecy; I am just pointing out that trust is asked of us.

I recognize that this is inflammatory but I don't see a reason to beat around the bush:
Leverage really seems like a cult. It seems like an unsafe institution doing harmful things. I am not sure how much this stuff about Leverage is really news to people involved in our other "world saving" orgs. I think probably not much. I don't want "world saving" orgs to have solidarity. If you want my trust you have to sell out the cult leaders, the rapists, etcetera, regardless of whether it might damage your "world saving" mission. I'm not confident that that's occurring.

IMO, is that we should tolerate some impropriety for the greater good.

I agree!

I am just pointing out that trust is asked of us.

I agree!

Leverage really seems like a cult. It seems like an unsafe institution doing harmful things.

Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0 (remote team, focus on science history rather than psychology, 4 people).

I am not sure how much this stuff about Leverage is really news to people involved in our other "world saving" orgs.

The information in Zoe's Medium post was significant news to me and others I've spoken to. 

(saying the below for general clarity, not just in response to you)

I think everyone (?) in this thread is deeply concerned, but we're hoping to figure out what exactly happened, what went wrong and why (and what maybe to do about it). To do that investigation and postmortem, we can't skip to sentencing (forgive me if that's not your intention, but it reads a bit to me like that's what you want to happen), nor would it be epistemically virtuous or just to do so.

Some major new information came to light, people need time to process it, surface other releva... (read more)

To do that investigation and postmortem, we can't skip to sentencing (forgive me if that's not your intention, but it reads a bit to me like that's what you want to happen), nor would it be epistemically virtuous or just to do so.

I super agree with this, but also want to note that I feel appreciation for farp's comments here. The conversation on this page feels to me like it has a missing mood: I found myself looking for comments that said something like "wow, this account is really horrifying and tragic; we're taking these claims really seriously, and are investigating what actions we should take in response". Maybe everyone thinks that that's obvious, and so instead is emphasizing the part where we're committed to due process and careful thinking and avoiding mob dynamics. But I think it's still worth stating explicitly, especially from those in leadership positions in the community. I found myself relieved just reading Ruby's response here that "everyone in this thread is deeply concerned".

I super agree with this, but also want to note that I feel appreciation for farp's comments here.

Fair!

I found myself looking for comments that said something like "wow, this account is really horrifying and tragic; we're taking these claims really seriously, and are investigating what actions we should take in response"

My models of most of the people I know in this thread feel that way. I can say on my own behalf that I found Zoe's account shocking. I found it disturbing to think that was going on with people I knew and interacted with. I find it disturbing that, if this really is true, it did not surface until now (or was ignored until now). I'm disturbed that Leverage's weirdness (and usually I'm quite okay with weirdness) turned out to enable and hide terrible things, at least for one person and likely more. I'm saddened that it happened, because based on the account, it seems like Leverage were trying to accomplish some ambitious, good things, and I wish we lived in a world where the "red flags" (group-living, mental experimentation, etc.) could be safely ignored in the service of great things.

Suddenly I am in a world more awful than the one I thought I was in, and I'm trying to reorient. Something went wrong and something different needs to happen now. Though I'm confident it will, it's just a matter of ensuring we pick the right different thing. 

Thank you, I really appreciate this response. I did guess that this was probably how you and others (like Anna, whose comments have been very measured) felt, but it is really reassuring to have it explicitly verbally confirmed, and not just have to trust that it's probably true.

2Rob Bensinger15h+1

The information in Zoe's Medium post was significant news to me and others I've spoken to. 

That's a good thing to assert. 
It seems preeeetty likely that some leaders in the community knew more or less what was up. I want people to care about whether that is true or not.

To do that investigation and postmortem, we can't skip to sentencing

I get this sentiment, but at the same time I think it's good to be clear about what is at stake. It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us. 

Simply put, if I were a victim, I would want to speak up for the sake of accountability, not shared examination and learning. If I spoke up and found that everyone agreed the behavior was bad, but that we all learned from it and are ready to move on, I would be pretty upset by that. And my understanding is that this is how the community's leaders have handled other episodes of abuse (based on 0 private information, only public / secondhand information).

But I am coming into this with a lot of assumptions as an outsider. If these assumptions don't resonate with any people who are closer to the situation then I apologize. Regardless sorry for stirring shit up with not much concrete to say. 

3Ruby17hI would be quite surprised if the people I would call leaders knew of things that were as severe as Zoe's account and "did nothing". I care a lot whether that's true. My intention was to say that we don't have reason to believe there is harm actively occurring right now that we need to intervene on immediately. A day or two to figure things out is fine. Based on what Zoe said plus general models of these situations, I believe how victims feel is likely complicated. I'm hesitant to make assumptions here. (Btw, see here [https://medium.com/@anonleverage/leverage-accounts-1cd1b6335303] for where some people are trying to set up an anonymous database of experiences at Leverage). I might suggest creating another post (so as to not interfere too much with this one) detailing what you believe to be the case so that we can discuss and figure out any systematic issues.
6farp1dThat's my context. However I agree that my contributions haven't been very high EV in that I'm very far on the outside of a delicate situation and throwing my weight around. So I won't keep trying to intervene / subtextually post.
4Dustin18hOn one level I think this is correct, but...I also think it's possibly a little naïve. In a world consisting only of "us", the people who think this world saving needs to be done and who think like "we" do, your statement becomes more true. In a world wherein the vast majority of people think the world saving we're talking about is unimportant, or bad, or evil, your statement requires closer and closer to perfect secrecy and insularity to remain true.