263 comments

Thanks for your courage, Zoe!

Personally, I've tried to maintain anonymity in online discussion of this topic for years. I dipped my toe into openly commenting last week, and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post". Firstly, I very much don't appreciate my ability to maintain anonymity being narrowed like this; anonymity is a helpful defense in any sensitive online discussion, not least this one. But yes, throwaway/anonymoose is me - I posted anonymously to avoid adverse consequences from friends who got more involved than me. But I'm not throwaway2, anonymous, or BayAreaHuman - those three are bringing evidence that is independent from me, at least.

I only visited Leverage for a couple months, back in 2014. One thing that resonated strongly with me about your post is that the discussion is badly confused by lack of public knowledge and strong narratives, about whether people are too harsh on Leverage, what biases one might have, and so on. This is why I think we often retreat to just stating "basic" or "comm...

What's frustrating about still hearing noisy debate on this topic, so many years later, is that Leverage being a really bad org seems overdetermined at this point. On the one hand, if I ranked MIRI, CFAR, CEA, FHI, and several startups I've visited in terms of how reality-distorting they can be, Leverage would score ~9, while no other would surpass ~7. (It manages to be nontransparent and cultlike in other ways too!) On the other hand, their productive output was... also like a 2/10? It's indefensible. But still, only a fraction of the relevant information is in the open.

One thing to note is that if you "read the room" instead of only looking at the explicit arguments, it's noticeable that a lot of people left Leverage, and the new org ("Leverage 2.0") completely switched research directions, which to me seems like tacit acknowledgement that their old methods etc. aren't as good.

As far as people leaving organizations goes, I'd love to have good data for MIRI, CFAR, CEA, and FHI.

I think I could write down a full history of employment for all of these orgs (except maybe FHI, which I've kept fewer tabs on), in an hour or two of effort. It's somewhat costly for me (in terms of time), but if lots of people are interested, I would be happy to do it.

I'm personally interested, and also I think having information like this collected in one place makes it much easier for everyone to understand the history and shape of the movement. IMO an employment history of those orgs would make for a very valuable top-level post.

8 AppliedDivinityStudies · 3y
Very interested
1 NicholasKross · 2y
I would like to read this very much, as I want to go into technical AI alignment work and such a document would be very helpful.

Full-time at CFAR in Oct 2015 when Pete Michaud and I arrived:

Anna Salamon, Val Smith, Kenzi Amodei, Julia Galef, Dan Keys, Davis Kingsley

 

Full-time at one point or another during my tenure:

Morgan Davis, Renshin Lee, Harmanas Chopra, Adom Hartell, Lyra Sancetta

(Kenzi, Julia, Davis, and Val all left while I was there, in that order.)

 

Notable part-timers (e.g. welcome at CFAR's weekly colloquium):

Steph Zolayvar, Qiaochu Yuan, Gail Hernandez

 

At CFAR in Oct 2018 when I left:

Anna Salamon (part time), Tim Telleen-Lawton, Dan Keys, Jack Carroll, Elizabeth Garrett, Adam Scholl, Luke Raskopf, Eli Tyre (part time), Logan Strohl (part time)

 

... I may have missed an Important Person or two but that's a decent initial sketch of those three years.

1 Eli Tyre · 3y
I think I should also be in the list of notable part-timers?
4 [DEACTIVATED] Duncan Sabien · 3y
You're listed as part time at CFAR when I left.
2 Eli Tyre · 3y
I guess I don't understand your categories. I would guess that I should be on both sub-lists. [shrug]

As someone who's been close to these, some had a few related issues, but Leverage seemed much more extreme along many of these dimensions to me.

However, now there are like 50 small EA/rationalist groups out there, and I am legitimately worried about quality control.

I generally worry about all kinds of potential bad actors associating themselves with EA/rationalists.

There seems to be a general pattern where new people come to an EA/LW/ACX/whatever meetup or seminar, trusting the community, and there they meet someone who abuses this trust and tries to extract free work / recruit them for their org / abuse them sexually, and the new person trusts them as representatives of the EA/rationalist community (they can easily pretend to be), while the actual representatives of EA/rationalist community probably don't even notice that this happens, or maybe feel like it's not their job to go reminding everyone "hey, don't blindly trust everyone you meet here".

I assume the illusion of transparency plays a big role here, where the existing members generally know who is important and who is a nobody, who plays a role in the movement and who is just hanging out there, what kind of behavior is approved and what kind is not... but the new member has no idea about anything, and may assume that if someone acts high-status then the person actually is high-status in the movement, and that whatever such person does has an approval of the community.

To put it bluntly...

I very much agree about the worry. My original comment was meant to make the easiest case quickly, but I think more extensive cases apply too. For example, I'm sure there have been substantial problems even in the other notable orgs, and in expectation we should expect there to continue to be. (I'm not saying this based on particular evidence about these orgs; more that the base rate for similar projects seems bad, and these orgs don't strike me as absolutely above these issues.)

One solution (of a few) that I’m in favor of is to just have more public knowledge about the capabilities and problems of orgs.

I think it’s pretty easy for orgs of about any quality level to seem exciting to new people and recruit them or take advantage of them. Right now, some orgs have poor reputations among those “in the know” (generally for producing poor quality output), but this isn’t made apparent publicly.[1] One solution is to have specialized systems that actually present negative information publicly; this could be public rating or evaluation systems.

This post by Nuno was partially meant as a test for this:

https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist...

[1] I don’t particularly blame them, consider the alternative.

I think the alternative is actually much better than silence!

For example I think the EA Hotel is great and that many "in the know" think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information to the EA Hotel and to the community. I wish that more would be surfaced. 

Simply put, if you are actually trying to make a good org, being silently blackballed by those "in the know" is actually not so fun. Of course there are other considerations, such as backlash, but IDK, I think transparency is good from all sorts of angles. The opinions of those "in the know" matter; they lead, and I think it's better for everyone if that leadership happens in the light.

Another thing to do, of course, would be to just do some amounts of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have. 

I think this is more than warranted at this point, yeah. I wonder who might be trusted enough to lead something like that.

I agree that it would have been really nice for grantmakers to communicate with the EA Hotel more, and other orgs more, about their issues. This is often a really challenging conversation to have ("we think your org isn't that great, for these reasons"), and we currently have very few grantmaker hours for the scope of the work, so I think grantmakers don't have much time now to spend on this. However, there does seem to be a real gap here to me. I represent a small org and have been around other small orgs, and the lack of communication with small grantmakers is a big issue. (And I probably have it much easier than most groups, knowing many of the individuals responsible)

I think the fact that we have so few grantmakers right now is a big bottleneck that I'm sure basically everyone would love to see improved. (The situation isn't great for current grantmakers, who often have to work long hours). But "figuring out how to scale grantmaking" is a bit of a separate discussion. 

Around making the information public specifically, that's a whole different matter. Imagine the value proposition: "If you apply to this grant, and get turned down, we'll write about why we don't like it publicly for everyone to see." Fewer people would apply, and many would complain a whole lot when it happens. The LTFF already gets flak for writing somewhat-candid information on the groups they do fund.

(Note: I was a guest manager on the LTFF for a few months, earlier this year)

Fewer people would apply, and many would complain a whole lot when it happens. The LTFF already gets flak for writing somewhat-candid information on the groups they do fund.

I think it would be very interesting to have a fund with that policy. Yes, it might result in fewer people applying, but people applying despite that might itself be a signal that their project is worth funding.

"If you apply to this grant, and get turned down, we'll write about why we don't like it publically for everyone to see."

I feel confident that Greg of EA Hotel would very much prefer this in the case of EA Hotel. It can be optional, maybe.

2 ozziegooen · 3y
That's good to know. I imagine grantmakers would be skeptical about people who would say "yes" to an optional form. Like, they say they're okay with the information being public, but when it actually goes out, some of them will complain about it, leading to a lot of extra time. However, some of our community seems unusually reasonable, so perhaps there's some way to make it viable.

To put it bluntly, EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously -- from the perspective of a potential abuser, this is ripe fruit ready to be taken, it is even obvious what sales pitch you should use on them.

For what it’s worth, I think this is true for basically all intense and moral communities out there. The EA/rationalist groups generally seem better than many religious and intense political groups in these areas, to me. However, even “better” is probably not at all good enough.

-2 Said Achmiz · 3y
What are “intense” and/or “moral” communities? And, why is it (or is it?) a good thing for a community to be “moral” and/or “intense”?
5 ChristianKl · 3y
There are certain goals for which having a moral or intense community is helpful. Whether or not I want to live in such a community, I consider it okay for other people to build those communities. On the other hand, building cults is not okay in the same sense. Intense communities also generally focus on something that otherwise gets little focus in society, increase cognitive diversity, and are thus able to produce certain kinds of innovations that wouldn't happen with less cognitive diversity.
4 ozziegooen · 3y
I was just thinking of the far right-wing and left-wing in the US; radical news organizations and communities. Q-anon, some of the radical environmentalists, conspiracy groups of all types. Many intense religious communities. I'm not making a normative claim about the value of being "moral" and/or "intense", just saying that I'd expect moral/intense groups to have some of the same characteristics and challenges.
0 farp · 3y
I think it matters a lot whether this is true, and there is widely known evidence that it isn't true. For example Brent Dill and (if you are willing to believe victims) Robert Lecnik. Your post is well said, and I am also very worried about EA/rat spaces as a fruitful space for predatory actors.

Which thing are you claiming here? I am a bit confused by the double negative (you're saying there's "widely known evidence that it isn't true that representatives don't even notice when abuse happens", I think; might you rephrase?).

I've made stupid and harmful errors at various times, and e.g. should've been much quicker on the uptake about Brent, and asked more questions when Robert brought me info about his having been "bad at consent", as he put it. I don't wish to be, and don't think I should be, one of the main people trying to safeguard victims' rights; I don't think I have the needed eyes/skill for it. (Separately, I am not putting in the time and effort required to safeguard a community of many hundreds, nor is anyone that I know of, nor do I know if we know how, or if there's much agreement on what kinds of 'safeguarding' are even good ideas, so there are whole piles of technical debt and gaps in common knowledge and so on here.)

Nonetheless, I don't and didn't view abuse as acceptable, nor did I intend to tolerate serious harms. Parts of Jay's account of the meeting with me are inaccurate (differ from what I'm really pretty sure I remember, and also from what Robert and his hu...

-5 farp · 3y
5 Viliam · 3y
If that's so, then it's very bad, and I feel like some people should receive a wake-up slap. I live on the opposite side of the planet, and I usually only learn about things after they have already exploded. Sometimes I wonder if anything would be different if I lived where most of the action happens. Generally, it seems like they should import some adults into the Bay Area. As far as I know, in the Vienna community we do not tolerate this type of behavior. (Anyone feel free to correct me if I am wrong, publicly or privately at your choice.)
4 ChristianKl · 3y
It seems to me that quality control has always been an issue with some groups, no matter how many groups there were.
2 ozziegooen · 3y
Agreed, though I think that the existence of many groups makes it a more obvious problem, and a more complicated problem.
8 orellanin · 1y
Hi! In the past few months I've been participating in Leverage Research/EA discourse on Twitter. Now there is one Twitter thread discussing your involvement as throwaway/anonymoose: https://twitter.com/KerryLVaughan/status/1585319237018681344 (with a subthread starting at https://twitter.com/ohabryka/status/1586084766020820992 discussing anti-doxxing norms and linking back to EA Forum comments).

One piece of information that's missing is why you used two throwaway accounts instead of one (and in particular, why you used one to reply to the other one, as alleged by Kerry Vaughan in https://twitter.com/KerryLVaughan/status/1585319243985424384 ). Can you tell me about your reasoning behind that decision?

(If that matters, I am not affiliated with any Leverage-adjacent org and I am not a throwaway account for a different EA Forum user.)
7 RyanCarey · 1y
Hi Orellanin,

In the early stages, I had in mind that the more info any individual anon-account revealed, the more easily one could infer what time they spent at Leverage, and therefore their identity. So while I don't know for certain, I would guess that I created anonymoose to disperse this info across two accounts.

When I commented on the Basic Facts post as anonymoose, it was not my intent to contrive a fake conversation between two entities with separate voices. I think this is pretty clear from anonymoose's comment, too - it's in the same bulleted and dry format that throwaway uses, so it's an immediate possibility that throwaway and anonymoose are one and the same. I don't know why I used anonymoose there. Maybe due to carelessness, or maybe because I lost access to throwaway. (I know that at one time an update to the forum login interface did rob me of access to my anon-account, but I'm not sure if this was when that happened.)
4 Evan_Gaensbauer · 3y
Leverage Research hosted a virtual open house and AMA a couple weeks ago for their relaunch as a new kind of organization, which has been percolating for the last couple of years. I attended. One subject Geoff and I talked about was the debacle that was the article in The New York Times (NYT) on Scott Alexander from several months ago. I expressed my opinion that:

1. Scott Alexander could have managed his online presence much better than he did, on and off, for a number of years.
2. Scott Alexander and the rationality community in general could have handled the situation much better than they did.
3. Those are parts of this whole affair that too few in the rationality community have been willing to face, acknowledge, or discuss in terms of what can be learned from the mistakes made.
4. Nonetheless, NYT was the instigating party in whatever part of the situation constituted a conflict between NYT and Scott Alexander and his supporters, and NYT is the party that should be held more accountable and is more blameworthy, if anyone wants to make it about blame.

Geoff nodded, mostly in agreement, and shared his own perspective on the matter, which I won't share. Yet if Geoff considers NYT to have done one or more things wrong in that case, ...

You yourself, Ryan, never made any mistake of posting your comments online in a way that might make it easier for someone else to de-anonymize you. If you made any mistake, it's that you didn't anticipate how adeptly Geoff would apparently infer or discern your identity. I expect it wouldn't be so hard for Geoff to figure out it was you, because you would have shared information about the internal activities at Leverage Research that only a small number of people would have had access to.

Yet that's not something you should have had to anticipate. A presumption of good faith in a community or organization entails a common assumption that nobody would do that to their peers. Whatever Geoff himself has been thinking about...

Based on how you wrote your comment, it seems that the email you received may have come across as intimidating.

I think the important information here is how Geoff / Leverage Research handled similar criticism in the past. (I have no idea. I assume both you and Ryan probably know more about this.) As they say, past behavior is the best predictor of future behavior. The wording of the e-mail is not so important.

0 Evan_Gaensbauer · 3y
I previously wasn't as aware of this pattern of how so many people have experienced responses to criticism from Geoff and Leverage in the past.

Timeline Additions

I rather liked the idea of making a timeline!

Geoff has a short doc on the timing of changes in org structure, but it currently doesn't include much else.

Depending on how discussion here goes, I might transfer/transform this into its own post in the future. Will link it here, if so.


Preamble

Nobody has talked much in public about the most dysfunctional things, yet? I am going to switch strategies out of dark-hinting and anonymity at this point, and put my cards down on the table.

This will be a sketch of the parts of this story that I know about. I do not have exact dates, and these are just broad-strokes of some of the key incidents here.

And not all of these are my story to tell? So sometimes, I will really only feel comfortable providing the broad-strokes.

(If someone has a better 2-3 sentence summary, or the full story, for some of these? Do chime in.)

These are each things I feel pretty solid about believing in. I think these incidents belong somewhere on any good consensus-timeline, but are not the full set of relevant events.

(I only have about 3-6 relevant contacts at the moment, but I've gotten at least 2 points of confirmation on each of these. It was not...

Threads Roundup

  • Several things under the LW Leverage Tag
  • Leverage Basic Facts EA Post & comment thread
    • I discovered this one a little late? Still flipping through it.
  • BayAreaHuman LW Post
    • By now, I have been able to confirm every single concrete point made in that post seems true or reasonable, to myself or at least one of my contacts (not always two). The tone is slightly-aggressive, but seems generally truth-seeking, to me.
    • I think it leans more towards characterizing dysfunctional late-L1, than early-L1? But not strictly.
    • Someone, probably Geoff (it's apparently the kind of thing he does, confirmed by 2+ people), sent out emails to friends of Leverage framing it as an unwarranted attack and encouraging flooding the comment thread with people's positive experiences.
      • I do not like that he did this! I know someone else, who intends to write up something more thorough about this. But if they don't, I am likely to comment on it myself, after saving evidence and articulating my thoughts.
      • EDIT: I do think a lot of the positive accounts are honest! I am not accusing any commenter of lying. My concern here is selective reporting, and something of a concentration of force dynami...
6 Viliam · 2y
I have some difficulty understanding the descriptions by former Leverage members. Inferential distance, but even if you tell me what the words refer to, I am not sure I am painting my near-mode picture correctly.

Like, when you say "bodywork", now I imagine something like one person giving the other person a massage, where both participants believe that this action not only relaxes the body, but also helps to remove some harmful memes from the mind. -- Is this a strawman? Or is it a reasonable first approximation (which of course misses some important nuance)?

For me, getting these things right feels like having an insight into how the organization actually works, on a social level. Approximate descriptions are okay. If massaging someone's left shoulder helps them overcome political mindkilling, and massaging someone's right shoulder protects them from Roko's Basilisk, don't tell me! You have the NDA, and I don't actually care about this level of detail. Keep your secret tech! I just want to understand the dynamic, like if someone talks to a stranger and later feels like the person may have cast some curse on them, the reasonable response is to schedule a massage.

From all descriptions I have read so far, yours felt the most helpful in this direction. Thank you!
6 Spiracular · 2y
My impression is that Leverage's bodywork is something closer to what other people call "energy work," which probably puts it... closer to Reiki than massage? But I never had it done to me, and I don't super understand it myself! Pretty low confidence in even this answer.
4 Richard_Kennaway · 2y
Cult symptom! Invented terminology for invented, fictitious entities.
Ruby · 2y · 150

The observation might be correct but I don't love the tone. It has some feeling of "haha, got you!" that doesn't feel appropriate to these discussions.

4 Richard_Kennaway · 2y
Point taken, but I stand by the observation.

After discussing the matter with some other (non-Leverage) EAs, we've decided to wire $15,000 to Zoe Curzi (within 35 days).

A number of ex-Leveragers seem to be worried about suffering (financial, reputational, etc.) harm if they come forward with information that makes Leverage look bad (and some also seem worried about suffering harm if they come forward with information that makes Leverage look good). This gift to Zoe is an attempt to signal support for people who come forward with accounts like hers, so that people in Zoe's reference class are more inclined to come forward.

We've temporarily set aside $85,000 in case others write up similar accounts -- in particular, accounts where it would be similarly useful to offset the incentives against speaking up. We plan to use our judgment to assess reports on a case-by-case basis, rather than having an official set of criteria. (It's hard to design formal criteria that aren't gameable, and we were a bit wary of potentially setting up an incentive for people to try to make up false bad narratives about organizations, etc.)

Note that my goal isn't to evaluate harms caused by Leverage and try to offset such harms. Instead, it's trying to ...

Note that my goal isn't to evaluate harms caused by Leverage and try to offset such harms. Instead, it's trying to offset any incentives against sharing risky honest accounts like Zoe's.

I like the careful disambiguation here.

FWIW, I independently proposed something similar to a friend in the Lightcone office last week, with an intention that was related to offsetting harm.  My reasoning:

There's often a problem in difficult "justice" situations, where people have only a single bucket for "make the sufferer feel better" and "address the wrong that was done."

This is quite bad—it often causes people to either do too little for victims, or too much to offenders, because they're trying to achieve two goals at once and one goal dominates the calculation.  Not helping someone materially because the harm proved unintentional, or punishing the active party way in excess of what they "deserve" because that's what it takes to make the injured party feel better, that sort of thing.

Separating it out into "we're still figuring out the Leverage situation but in the meantime, let's try to make this person's life a little better" is excellent.

Reiterating that I understand that's not what you are doing, here.  But I think that would separately have also been a good thing.

A few quick thoughts:

1) This seems great, and I'm impressed by the agency and speed.

2) From reading the comments, it seems like several people were actively afraid of how Leverage could retaliate. I imagine the same holds for accusations/whistleblowing involving other organizations. I think this is both very, very bad and unnecessary; as a whole, the community is much more powerful than any individual group, so it seems poorly managed when the community is scared of a specific group. Resources should be spent to cancel this out.

In light of this, if more money were available, it seems easy to justify a fair bit more. Or, even better, something like: "We'll help fund lawyers in case you're attacked legally, or anti-harassment teams if you're harassed or trolled." This is similar to how the EFF helps in cases where small people/groups are attacked by big companies.

I don't mean to complain; I think any steps here, especially ones taken so quickly, are fantastic.

3) I'm afraid this will get lost in this comment section. I'd be excited about a list of "things to keep in mind" like this to be repeatedly made prominent somehow. For example, I could imagine that at community events or similar, there could be ...

Edit: (One person reading this reports below that this made them more reluctant to come forward with their story, and so that seems bad to me. I have mentally updated as a result. More relevant discussion below.) 

I notice that there's not that much information public about what Geoff actually Did and Did Not Do. Or what he instigated and what he did not. Or what he intended or what he did not intend. 

Um, I would like more direct evidence of what he actually did and did not do. This is cruxy for me in terms of what should happen next. 

Right now, based just on the Medium post, one plausible take is that the people in Geoff's immediate circle may have been taking advantage of their relative power in the hierarchy to abuse the people under them. 

See this example from Zoe:

A few weeks after this big success, this person told me my funding was in question — they had done all they could do to train me and thought I might be too blocked to sufficiently progress into a Master on the project. They and Geoff were questioning my commitment to and understanding of the project, and they had concerns about my debugging trajectory.

"They and Geoff" makes it sound like Zoe's super... (read more)

The most directly 'damning' thing, as far as I can tell, is Geoff pressuring people to sign NDAs.


I received an email from a Paradigm board member on behalf of Paradigm and Leverage that aims to provide some additional clarity on the information-sharing situation here. Since the email specifies that it can be shared, I've uploaded it to my Google Drive (with some names and email addresses redacted). You can view it here.

The email also links to the text of the information-sharing agreement in question with some additional annotations.

[Disclosure: I work at Leverage, but did not work at Leverage during Leverage 1.0. I'm sharing this email in a personal rather than a professional capacity.]

I do applaud explicitly clarifying that people are free to share their own experiences.

Thanks for sharing this!

I believe this is public information if I look for your 990s, but could you or someone list the Board members of Leverage / Paradigm, including changes over time? 

I don't know how realistic this worry is, but I'm a bit worried about scenarios like:

  1. A signatory doesn't share important-to-share info because they interpret the Information Arrangement doc (even with the added comments) as too constraining.

    My sense is that there's still a lot of ambiguity about exactly how to interpret parts of the agreement? And although the doc says it "is meant to be based on norms of good behavior in society" I don't see a clause explicitly allowing people's personal consciences to supersede the agreement. (I might just have missed it.)
     
  2. Or: A signatory doesn't share important-to-share info because they see the original agreement as binding, not the new "clarifications and perspective today" comments.

    (I don't know how scrupulous ex-Leveragers are about sticking to signed informal agreements, but if the agreement has moral force, I could imagine some people going 'the author can't arbitrarily reinterpret the agreement post facto, when the agreement didn't specify that you have this power'.

    Indeed, signing a document with binding moral force seems pretty risky to me if the author has lots of leeway to later reinterpret what parts of the agreement mean. But ma...
3 ChristianKl · 2y
It seems to me "this is not a legal agreement" is basically such a clause.

It seems that at the end of Leverage 1.0 the groups were in conflict. There's a strong interest in that conflict not playing out in a way where different people publish each other's private information and then retaliate in kind. It might very well be that plenty of the ex-Leveragers don't speak out because they are afraid that private information about them will be openly published in retaliation if they do.

Given that there's a section "(10) Expected lessening", it seems strange to me to see the original agreement as infinitely binding.

[...] The most important thing we want to clarify is that as far as we are concerned, at least, individuals should feel free to share their experiences or criticise Geoff or the organisations.

[... T]his document was never legally binding, was only signed by just over half of you, and almost none of you are current employees, so you are under no obligation to follow this document or the clarified interpretation here. [...]

I'm really happy to see this! Though I was momentarily confused by the "so" here -- why would there be less moral obligation to uphold an agreement, just because the agreement isn't legally binding, some other people involved didn't sign it, and the signatory has switched jobs? Were those stipulated as things that would void the agreement?

My current interpretation is that Matt's trying to say something more like 'We never took this agreement super seriously and didn't expect you to take it super seriously either, given the wording; we just wanted it as a temporary band-aid in the immediate aftermath of Leverage 1.0 dissolving, to avoid anyone taking hasty action while tensions were still high. Here's a bunch of indirect signs that the agreement is no big deal and doesn't have moral force years later in a very different context: (blah).' It's Bayesian evidence that the agreement is no big deal, not a deductive proof that the agreement is ~void. Is that right?

Another thing I want to mentally watch out for: 

It might be tempting for some ex-Leverage people to use Geoff as the primary scapegoat rather than implicating themselves fully. So as more stories come out, I plan to be somewhat delicate with the evidence. The temptation to scapegoat a leader is pretty high, and may even seem justifiable in an "ends justify the means" kind of thinking. 

I don't seem to personally be OK with using misleading information or lies to bolster a case against a person, even if this ends up "saving" a lot of people. (I don't think it actually saves them... people should come to grips with their own errors, not hide behind a fallback person.) 

So... Leverage, I'm looking at you as a whole community! You're not helpless peons of Geoff Anders. 

When spiritual gurus go out of control, it's not a one-man operation; there are collaborators, enablers, people who hid information, yes-men and sycophants, those too afraid to do the right thing or speak out against wrongdoing, those too protective of personal benefits they may be receiving (status, friends, food, housing), etc. 

There's stages of 'coming to terms' with something difficult. And a v... (read more)

I basically agree with this.

But also, I think pretty close to ZERO people who were deeply affected (aside from Zoe, who hasn't engaged beyond the post) have come forward in this thread. And I... guess we should talk about that.

I know firsthand that there were some pretty bad experiences in the incident that tore Leverage 1.0 apart, which nobody appears to feel able to talk about.

I am currently not at all optimistic that we're managing to balance this correctly? I also want this to go right. I'm not quite sure how to do it.

That's pretty fair. I am open to taking down this comment, or other comments I've made. (Not deleting them forever, I'll save them offline or something.) Your feedback is helpful here and revealing to me, and I feel myself updating because of it. 

I have commented somewhere else that I do not like LessWrong for this discussion... because a) it seems a bad venue for justice to be served, b) it strips out a bunch of context data that I personally think is super relevant (including the emotional and physical layers), and c) LW is absolutely not a place designed for healing or reconciliation... and it also seems only 'okay' for sense-making as a community. It is maybe better for sense-making at the individual intellectual level. So... I guess LW isn't my favorite place for this discussion to be happening... I wonder what you think. 

(Separately) I care about folks from Leverage. I am very fond of the ones I've met. Zoe charted me once, and I feel fondly about that. I've been charted a number of times at Leverage, and it was good, and I personally love CT charting / Belief Reporting and use, reference, and teach it to others to this day. Although it's my own version now. I went to a Paradigm workshop once, as well as several parties or gatherings. 

My felt sense of my time at the workshop (especially during more casual hang-out-y parts of it) is like a sense of sad distance... like, oh I would like to be friends with these people... but mentally / emotionally they seem "hard to access." 

I'm feeling compassion towards the ones who have suffered and are suffering. I don't need to be personal friends with anyone, but ... if there's a way I can be of service, I am interested. 

Open and free invitation: If anyone involved in the Leverage stuff in some way wants someone to hold space for you as you process things, I am open to offer that, over Zoom, in a confidential manner. (I am not very involved in the community normally, as I am committed to being at the Monastic Academy in Vermont for a long while, ... (read more)

Since it's mostly just pointers to stuff I've already said/implied... I'll throw out a quick comment.

I would like it if somebody started something like a carefully-moderated private Facebook group, mostly of core people who were there, to come to grips with their experiences? I think this could be good.

I am slightly concerned that people who are still in the grips of "Leverage PR campaigning" tendencies, will start trying to take it over or otherwise poison the well? (Edit: Or conversely, that people who still feel really hurt or confused about it might lash out more than I'd wish. I personally, am more worried about the former.) I still think it might be good, overall.

Be sure to be clear EARLY about who you are inviting, and who you are excluding! It changes what people are willing to talk about.

...I am not personally the right person to do this, though.

(It is too easy to "other" me, if that makes sense.)


I feel like one of the only things the public LW thread could do here?

Is ensuring public awareness of some of the unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms, and showing a public ramp-down of opportunities to do so in the future.

Along with doing what we can, to signal that we generally stand against people over-simplistically demonizing the people and organizations involved in this.

... unreasonably-strong reality/truth-suppressive pressures that were at play here, that there were some ways in which secrecy agreements were leveraged pretty badly to avoid accountability for harms ... 

Hmm. This seems worth highlighting. 

The NDAs (plus pressure to sign) point to this. 

... 

( The rest of this might be triggering to anyone who's been through gaslighting / culty experiences. Blunt descriptions of certain forms of control and subjugation. ) 

...

The rest of the truth-suppressive measures I can only speculate. Here's a list of possible speculative mechanisms that come to mind, some of which were corroborated by Zoe's report but not all:

  • Group hazing or activities that cause collective shame, making certain things hard to admit to oneself and others (plus, inserting a bucket error where 'shameful activity' is bucketed with 'the whole project' or something)
    • This could include implanting group delusions that are shameful to admit. 
  • Threats to one's physical person or loved ones for revealing things
  • Threats to one's reputation or ability to acquire resources for revealing things
  • Deprivation used to negatively / positively reinforce certain behaviors
... (read more)
TekhneMakre (3y, 5 points):
  • "You can't rely on your perspective / Everything is up for grabs." All of your mental content--ideas, concepts, motions, etc.--is potentially good (and should be leaned on more heavily, overriding others) / bad (and should be ignored / downvoted / routed around / destroyed / pushed against), and more openness to change is better, and there's no solid place from which you can stand and see things. Of course, this is in many ways true and useful; but leaning into this creates much more room for others to selectively up/downvote stuff in you to avoid you reaching conclusions they don't want you to reach; or more likely, up/downvote conclusions, and have you rearrange yourself to harmonize with those judgements.
  • Trolling Hope placed in the project / leadership. Like: I care deeply that things go well in the world; the only way I concretely see that might happen is through this project; so if this project is doomed, then there's no Hope; so I may as well bet everything on worlds where the project isn't doomed; so worlds where the project is doomed are irrelevant; so I don't see / consider / admit X if X implies that the project is doomed, since X is entirely about irrelevant worlds.
  • Emotional reward conditioning. (This one is simple or obvious, but I think it's probably actually a significant portion of many of these sorts of situations.) When you start to say information I don't like, I'm angry at you, annoyed, frustrated, dismissive, scornful, derisive, insulting, blank-faced, uninterested, condescending, disgusted, creeped out, pained, hurt, etc. When you start to hide information I don't like, or expound the opposite, I'm pleasant, endeared, happy, admiring, excited, etc. Conditioning shades into + overlaps other tactics like stonewalling (blank-faced, aiming at learned helplessness), shaming, and running interference (changing the subject), but conditioning has a particular systematic effect of making you "walk on eggshells" about certain things and

Do you have a suggestion for another forum that you think would be better? 

In particular, do you have pointers to online forums that do incorporate the emotional and physical layers ("in a non-toxic way", he adds, thinking of twitter). Or do you think that the best way to do this is just not online at all?

Unreal (3y, 3 points):
CFAR's recent staff reunion seemed to do all right. It wasn't, like, optimized for safety or making sure everyone was heard equally or something like that, but such features could be added if desired. Having skilled third-party facilitators seemed good.

Oh, you said 'online'. Uhhh.

Online fishbowl Double Cruxes would get us like... 30% of the way there maybe? Private / invite-only ones?

One could run an online Group Process like thing too. Invite a group of people into a Zoom call, and facilitate certain breakout sessions? Ideally with facilitation in each breakout group?

I am not thinking very hard about it. We need a lot of skill points in the community to make such things go well. I'm not sure how many skill points we're at. 
Spiracular (3y, 4 points):
Meta: I think it makes some good points. I do not think it was THAT bad, and I think the discussion was good. I would keep it up, but it's your call. Possibly adding an "Edit: (further complicated thoughts)" at the top? (Respect for thinking about it, though.)

I see what you're doing? And I really appreciate that you are doing it.

...but simultaneously? You are definitely making me feel less safe to talk about my personal shit.

(My position on this is, and has always been: "I got a scar from Leverage 1.0. I am at least somewhat triggered; on both that level, and by echoes from a past experience. I am scared that me talking about my stuff, rather than doing my best to make and hold space, will scare more centrally-affected people off. And I know that some of those people, had an even WORSE experience than I did. In what was, frankly, a surreal and really awful experience for me.")

weft (3y):

Multiple times on this thread I've seen you make the point about figuring out what responsibility should fall on Geoff, and what should be attributed to his underlings.

I just want to point out that it is a pattern for powerful bad actors to be VERY GOOD at never explicitly giving a command for a bad thing to happen, while still managing to get all their followers on board and doing the bad thing that they only hinted at/ set up incentive structures for, etc.

I wanted to immediately agree. Now I'm pausing...

It seems good to try to distinguish between:

  • Well-meaning but flawed leader sets up a system or culture that has blatant holes that allow abuse to happen. This was unintentional but they were careless or blind or ignorant, and this resulted in harm. (In this case, the leader should be held accountable, but there's decent hope for correction.) 
    • Of course, some of the 'flawed' thing might be shadow stuff, in which case it might be slippery and difficult to see, and the leader may have various coping mechanisms that make accountability difficult. I think this is often the case with leaders, and as far as I can tell, most leaders have shadow stuff, and it negatively impacts their groups, to varying degrees. (I'm worried about Geoff in this case because I think intelligence + competence + shadow stuff is a lot more difficult. The more intelligent and powerful you are, the longer you can keep outmaneuvering attempts to get you to see your own shadow; I've seen this kind of thing, it's bad.) 
  • The leader is not well-meaning and is deliberately exploitative in an intentional way. They created a system that was designed to exploit peopl
... (read more)
TekhneMakre (3y, 9 points):
I'm not sure about this, and I don't think you were trying to say this, but I doubt that the two categories you gave usefully cover the space, even at this level of abstraction. Someone could be "well-meaning" in the sense of all their explicit, and even all their conscious, motives being compassionate, life-oriented, etc., while still systematically, agentically, cybernetically, motivatedly causing and amplifying harm. I think you were getting at this in the sub-bullet-point, but the sort of person I'm describing would both meet the description "well-meaning; unintentional harm" and also the one from your second bullet-point.

Maybe I'm just saying, I don't know what you (or I, or anyone) mean by "well-meaning": I don't know what it is to be well-meaning, and I don't know how we would know, and I don't know what predictions to make if someone is well-meaning or not. (I'm not saying it's not a thing, it's very clearly a thing; it's just that I want to develop our concepts more, because at least my concepts are pushed past the breaking point in abusive situations.) For example, someone might both (1) have never once consciously explicitly worked out any strategy or design to make it easier to harm people, and (2) across contexts, take actions that reliably develop/assemble a social field where people are being systematically harmed, and not update on information about how to not do that.

Maybe it would help to distinguish "categories of essence" from "categories of treatment". Like, if someone is so drowning in their shadow that they reliably, proactively, systematically harm people, then a category-of-essence question is "in principle, is there information that could update them to stop doing this?", and a category-of-treatment question is "regardless of what they really are, we are going to treat them exactly like we'd treat a conscious, malevolent, deliberate exploiter".
Unreal (3y, 7 points):
I appreciate the added discernment here. This is definitely the kind of conversation I'd like to be having!

Agree. I was including that in 'shadow stuff'.

The main difference between well-meaning and not, I think for me, is that the well-meaning person is willing to start engaging in conversations or experimenting with new systems in order to help the problems be less. Even though it's in their shadow and they cannot see it and it might take a lot to convince them, after some time period (which could be years!), they are game enough to start making changes, trying to see it, etc.

I believe Anna S is an example of such a well-meaning person, but also I think it took her a pretty long time to come to grips with the patterns? I think she's still in the process of discerning it? But this seems normal. Normal human-level thing. Not a sociopathic Epstein thing.

More controversially perhaps, I think Brent Dill has the potential to see and eat his shadow (cuz I think he actually cares about people and I've seen his compassion), but as you put it, he is "so drowning in his shadow that he reliably, systematically harms people." And I actually think it's the compassionate thing to do to prevent him from harming more people.

So where does Geoff fall here? I am still in that inquiry. 
ChristianKl (3y, 3 points):
While this is true, cult environments by their nature often allow other bad actors besides the leader to rise into positions of power within them. I think the Osho community is a good example. Given that Osho himself was the one who revealed his community running the biggest bioterror attack on the US, which otherwise likely wouldn't have been discovered, it doesn't seem to me that he was the person most responsible for it, but rather his right hand at the time.

As far as cult dynamics go, it's not only the leader getting his followers to do things; various followers also act in ways where they treat the leader as a guru, whether or not the leader wants that to happen, which in turn often affects the mindset and actions of the leader. At the moment it's for example unclear to me to what extent CEA shares part of the responsibility for enabling Leverage. 

My current sense? Is that both Unreal and I are basically doing a mix of "take an advocate role" and "using this as an opportunity to get some of what the community got wrong last time -with our own trauma- right." But for different roles, and for different traumas.

It seemed worth being explicit and calling this out. (I don't necessarily think this is bad? I also think both of us seem to have done a LOT of "processing our own shit' already, which helps.)

But doing this is... exhausting for me, all the same. I also, personally, feel like I've taken up too much space for a bit. It's starting to wear on me in ways I don't endorse.

I'm going to take a step back from this for a week, and get myself to focus on living the rest of my life. After a week, I will circle back. In fact, I COMMIT to circling back.


And honestly? I have told several people about the exact nature of my Leverage trauma. I will tell at least several more people about it, before all of this is over.

It's not going to vanish. I've already ensured that it can't. I can't quite commit to "going full public," because that might be the wrong move? But I will not rest on this until I have done something broadly equivalent.

I a... (read more)

Many of these things seem broadly congruent with my experiences at Pareto, although significantly more extreme. Especially: ideas about psychology being arbitrarily changeable; Leverage having the most powerful psychology/self-improvement tools; Leverage being approximately the only place you could make real progress; extreme focus on introspection and other techniques to 'resolve issues in your psyche' (one participant's 'research project' involved introspecting about how they changed their mind for 2 months); general weird dynamics (e.g. instructors sleeping with fellows; Geoff doing lectures or meeting individually with participants in a way that felt very loaded with attempts to persuade and rhetorical tricks); and paranoia (for example: participants being concerned that the things they said during charting/debugging would be used to blackmail or manipulate them, or suspecting that the private slack channels for each participant involved discussion of how useful the participants were in various ways and how to 'make use of them' in future).

On the other hand, I didn't see any of the demons/objects/occult stuff, although I think people were excited about 'energy healers'/'body work' -- not actually believing that there was any 'energy' going on, but thinking that something interesting in the realm of psychology/sociology was going on there.

Also, I benefitted from the program in many ways, many of the techniques/attitudes were very useful, and the instructors generally seemed genuinely altruistic and interested in helping fellows learn.

Epistemic status: I have not been involved with Leverage Research in any way, and have no knowledge of what actually happened beyond what's been discussed on LessWrong. This comment is an observation I have after reading the post.

I had just finished reading Pete Walker's Complex PTSD before coming across this post. In the book, the author describes a list of calm, grounded thoughts to respond to inner critic attacks. A large part of healing is for the survivor to internalize these thoughts so they can psychologically defend themselves.

I see a stark contrast between what the CPTSD book tries to instill and the ideas Leverage Research tried to instill, per Zoe's account. It's as if some of the programs at Leverage Research were trying to unravel almost all of one's sense of self.

A few examples:

Perfectionism

From the CPTSD book:

I do not have to be perfect to be safe or loved in the present. I am letting go of relationships that require perfection. I have a right to make mistakes. Mistakes do not make me a mistake.

From the post:

We might attain his level of self-efficacy, theoretical & logical precision, and strategic skill only once we were sufficiently transformed via the use

... (read more)

More thoughts:

I really care about the conversation that’s likely to ensue here, like probably a lot of people do.

I want to speak a bit to what I hope happens, and to what I hope doesn’t happen, in that conversation. Because I think it’s gonna be a tricky one.

What I hope happens:

  • Curiosity
  • Caring
  • Compassion
  • Interest in understanding both the specifics of what happened at Leverage, and any general principles it might hint at about human dynamics, or human dynamics in particular kinds of groups.

What I hope doesn’t happen:

  • Distancing from uncomfortable data.
  • Using blame and politics to distance from uncomfortable data.
  • Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.

This is LessWrong; let’s show the world how curiosity/compassion/inquiry is done!

Thanks, Anna!

As a LessWrong mod, I've been sitting and thinking about how to make the conversation go well for days now and have been stuck on what exactly to say.  This intention setting is a good start.

I think to your list I would add judging each argument and piece of data on its merits, i.e., updating on evidence even if it pushes against the position we currently hold.

Phrased alternatively, I'm hoping we don't treat arguments as soldiers: accepting bad arguments because they favor our preferred conclusion, rejecting good arguments because they don't support our preferred conclusion. I think there's a risk in cases like this of knowing which side you're on and then accepting and rejecting all evidence accordingly.

Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.

Are you somehow guaranteeing or confidently predicting that others will not take them in a politicized way, use them as an excuse for false judgments, or otherwise cause harm to those sharing the true relevant facts? If not, why are you asking people not to refrain from sharing such facts?

(My impression is that it is sheer optimism, bordering on wishful thinking, to expect such a thing, that those who have such a fear are correct to have such a fear, and so I am confused that you are requesting it anyway.)

Thanks for the clarifying question, and the push-back. To elaborate my own take: I (like you) predict that some (maybe many) will take shared facts in a politicized way, will use them as an excuse for false or uncareful judgments, etc. I am not guaranteeing, nor predicting, that this won’t occur.

I am intending to myself do inference and conversation in a way that tries to avoid these “politicized speech” patterns, even if it turns out politically costly or socially awkward for me to do so. I am intending to make some attempt (not an infinite amount of effort, but some real effort, at some real cost if needed) to try to make it easier for others to do this too, and/or to ask it of others who I think may be amenable to being asked this, and/or to help coalesce a conversation in what I take to be a better pattern if I can figure out how to do so. I also predict, independently of my own efforts, that a nontrivial number of others will be trying this.

If “reputation management” is a person’s main goal, then the small- to medium-sized efforts I can hope to contribute toward a better conversation, plus the efforts I’m confident in predicting independently of mine, would be insufficien... (read more)

Rohin Shah (3y, 3 points):
It sounds like you are predicting that the people who are sharing true relevant facts have values such that the long-term benefits to the group overall outweigh the short-term costs to them. In particular, it's a prediction about their values (alongside a prediction of what the short-term and long-term effects are).

I'll just note that, on my view of the short-term and long-term effects, it seems pretty unclear whether by my values I should share additional true relevant information, and I lean towards it being negative. Of course, you have way more private info than me, so perhaps you just know a lot more about the short-term and long-term effects.

I'm also not a fan of requests that presume that the listener is altruistic, and willing to accept personal harm for group benefit. I'm not sure if that's what you meant -- maybe you think in the long term sharing of additional facts would help them personally, not just help the group.

Fwiw I don't have any particularly relevant facts. I once tagged along with a friend to a party that I later (i.e. during or after the party) found out was at Leverage. I've occasionally talked briefly with people who worked at Leverage / Paradigm / Reserve. I might have once stopped by a poster they had at EA Global. I don't think there have been other direct interactions with them.

I'm also not a fan of requests that presume that the listener ...

From my POV, requests, and statements of what I hope for, aren't advice. I think they don't presume that the listener will want to do them or will be advantaged by them, or anything much else about the listener except that it's okay to communicate my request/hope to them. My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people, and I guess I'm also assuming that my words won't be assumed to be a trustworthy voice of authorities that know where the person's own interests are, or something. That I can be just some person, hoping and talking and expecting to be evaluated by equals.

Is it that you think these assumptions of mine are importantly false, such that I should try to follow some other communication norm, where I more try to only advocate for things that will turn out to be in the other party's interests, or to carefully disclaim if I'm not sure what'll be in their interests? That sounds tricky; I'm not peoples' parents and they shouldn't trust... (read more)

I'm assuming listeners will only do things if they don't mind doing them, i.e. that my words won't coerce people,

I feel like this assumption seems false. I do predict that (at least in the world where we didn't have this discussion) your statement would create a social expectation for the people to report true, relevant facts, and that this social expectation would in fact move people in the direction of reporting true, relevant facts.

I immediately made the inference myself on reading your comment. There was no choice in the matter, no execution of a deliberate strategy on my part, just an inference that Anna wants people to give the facts, and doesn't think that fear of reprisal is particularly important to care about. Well, probably, it's hard to remember exactly what I thought, but I think it was something like this. I then thought about why this might be, and how I might have misunderstood. In hindsight, the explanation you gave above should have occurred to me, that is the sort of thing that people who speak literally would do, but it did not.

I think there are lots of LWers who, like me, make these sorts of inferences automatically. (And I note that these kinds of inferences a... (read more)

Those making requests for others to come forward with facts in the interest of a long(er)-term common good could find norms that serve as assurance or insurance that someone will be protected against potential retaliation against their own reputation. I can't claim to know much about setting up effective norms for defending whistleblowers, though.

TekhneMakre (3y, 2 points):
If someone takes you as an authority, then they're likely to take your wishes as commands. Imagine a CEO saying to her employees, "What I hope happens: ... What I hope doesn't happen: ...", and the (vocative/imperative mood) "Let's show the world...". That's only your responsibility insofar as you're somehow collaborating with them to have them take you as an authority; but it could happen without your collaboration.

IMO no, but you could, say, ask LW to make a "comment signature" feature, and then have every comment you make link, in small font, to the comment you just made.

I read Anna's request as an attempt to create a self-fulfilling prophecy. It's much easier to bully a few individuals than a large crowd.

Yeah, I also read Anna as trying to create/strengthen local norms to the effect of 'whistleblowers, truth-tellers, and people-saying-the-emperor-has-no-clothes are good community members and to-be-rewarded/protected'. That doesn't make reprisals impossible, but I appreciated the push (as I interpreted it).

I also interpreted Anna as leading by example to some degree -- a lot of orgs wouldn't have their president join a public conversation like this, given the reputational risks. If I felt like Anna was taking on zero risk but was asking others to take on lots of risk, I may have felt differently.

Saying this publicly also (in my mind) creates some accountability for Anna to follow through. Community leaders who advocate value X and then go back on their word are in much more hot water than ones who quietly watch bad things happen.

E.g., suppose this were happening on the EA Forum. People might assume by default that CEA or whoever is opposed to candor about this topic, because they're worried hashing things out in public could damage the EA brand (or whatever). This creates a default pressure against open and honest truth-seeking. Jumping in to say 'no, actually, having this conversat... (read more)

Rob Bensinger (3y, 7 points):
I'm not sure what I think of Rohin's interpretation. My initial gut feeling is that it's asking too much social ownership of the micro, or asking community leaders to baby the community too much, or to spend too much time carefully editing their comments to address all possible errors (with the inevitable result that community leaders say very little, and the things they say are more dead and safe).

It's not that I particularly object to the proposed rephrasings; it's more that I have a gut-level sense that this is in a reference class of a thousand other similarly small ways community leaders can accidentally slightly nudge folks in the wrong direction. In this particular case, I'd rather expect a little more from the community than put this specific onus on Anna.

I agree there's an empirical question of how socially risky it actually is to e.g. share negative stuff about Leverage in this thread. I'm all in favor of a thread to try to evaluate that question (which could also switch to PMs as needed if some people don't feel safe participating), and I see the argument for trying to do that first, since resolving that could make it easier to discuss everything else.

I just think people here are smart and independent enough to not be 'coerced' by Anna if she doesn't open the conversation with a bunch of 'you might suffer reprisals' warnings (which does have a bit of a self-fulfilling-prophecy ring to it, though I think there are skillful ways to pull it off).
Rohin Shah · 3y · 6
You're reading too much into my response. I didn't claim that Anna should have this extra onus. I made an incorrect inference, was confused, asked for clarification, was still confused by the first response (honestly I'm still confused by that response), understood after the second response, and then explained what I would have said if I were in her place when she asked about norms.

(Yes, I do in fact think that the specific thing said had negative consequences. Yes, this belief shows in my comments. But I didn't say that Anna was wrong/bad for saying the specific thing, nor did I say that she "should" have done something else. Assuming for the moment that the specific statement did have negative consequences, what should I have done instead?)

(On the actual question, I mostly agree that we probably have too many demands on public communication, such that much less public communication happens than would be good.)

I also would have been fine with "I hope people share additional true, relevant facts". The specific phrasing seemed bad because it seemed to me to imply that the fear of reprisal was wrong. See also here.
Rob Bensinger · 3y · 4
OK, thanks for the correction! :]
TekhneMakre · 3y · 1
Of course there's also the possibility that it's worth it. E.g. because people could then notice who is doing a rush-to-judgement thing or confirmation-bias-y thing. (This even holds if there's threat of personal harm to fact-sharers, though personal harm looks like something you added to the part you quoted.)
Rohin Shah · 3y · 5
I agree that's possible, but then I'd say something like "I would love to know additional true relevant facts, but I recognize there are real risks to this and only recommend people do this if they think the benefits are worth it".

Analogy: it could be worth it for an employee to publicly talk about the flaws of their company / manager (e.g. because then others know not to look for jobs at that company), even though it might get them fired. In such a situation I would say something like "It would be particularly helpful to know about the flaws of company X, but I recognize there are substantial risks involved and only recommend people do this if they feel up to it".

I would not say "I hope people don't refrain from speaking up about the flaws of company X out of fear that they might be fired", unless I had good reason to believe they wouldn't be fired, or good reason to believe that it would be worth it on their values (though in that case presumably they'd speak up anyway).
TekhneMakre · 3y · 1
Thanks. I'm actually still not sure what you're saying.

Hypothesis 1: you're saying that stating "I hope person A does X" implies a non-dependence on person A's information, which implies the speaker has a lot of hidden evidence (enough to make their hope unlikely to change given A's evidence). And people might infer that there's this hidden evidence, and update on it, which might be a mistake.

Hypothesis 2: you're pointing at something about how "do X, even if you have fear" is subtly coercive / gaslighty, in the sense of trying to insert an external judgement to override someone's emotion / intuition / instinct. E.g. "out of fear" might subtly frame an aversion as a "mere emotion".

(Maybe these are the same...)
Rohin Shah · 3y · 2
Hypothesis 2 feels truer than hypothesis 1. (Just to state the obvious: it is clearly not as bad as the words "coercion" and "gaslighting" would usually imply. I am endorsing the mechanism, not the magnitude-of-badness.)

I agree that hypothesis 1 could be an underlying generator of why the effect in hypothesis 2 exists. I think I am more confident in the prediction that these sorts of statements do influence people in ways-I-don't-endorse than in any specific mechanism by which that happens.
TekhneMakre · 3y · 1
Okay.

I hesitated a bit before saying this? I thought it might add a little bit of clarity, so I figured I'd bring it up.

(Sorry it got long; I'm still not sure what to cut.)

There are definitely some needs-conflicts. Between (often distant) people who, in the face of this, feel the need to cling to the strong reassurance that "this could not possibly happen to them"/"they will definitely be protected from this," and would feel reassured at seeing Strong Condemning Action as soon as possible...

...and "the people who had this happen." Who might be best-served, if they absorbed that there is always some risk of this sort of shit happening to people. For them, it would probably be best if they felt their truth was genuinely heard, and took away some actionable lessons about what to avoid, without updating their personal identity to "victim" TOO much. And in the future, embraced connections that made them more robust against attaching to this sort of thing in the future.

("Victim" is just not a healthy personal identity in the long-term, for most people.)


Sometimes, these needs are so different, that it warrants having different forums of discussion. But there is some overlap in these needs (w... (read more)

There's also the need to learn from what happened, so that when designing organizations in the future the same mistakes aren't repeated. 

farp · 3y · 130

I would like it if we showed the world how accountability is done, and given your position, I find it disturbing that you have omitted this objective. That is, if I wanted to deflect the conversation away from accountability, I think I would write a post similar to yours. 

I would like it if we showed the world how accountability is done

So would I. But to do accountability (as distinguished from scapegoating, less-epistemic blame), we need to know what happened, and we need to accurately trust each other (or at least most of each other) to be able to figure out what happened, and to care what actually happened.

The “figure out what happened” and “get in a position where we can have a non-fucked conversation” steps come first, IMO.

I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.

Though, accountability is admittedly a weak point of mine, so I might be missing/omitting something. Maybe spell it out if so?

farp · 3y · 7
To clarify: goal divergence between whom? Geoff and Zoe? Zoe and me? Me and you?
sapphire · 3y · 6
This reaction has been predictable for years IMO. As usual, a reasonable response required people to go public. There is no internal accountability process. Luckily things have been made public.

Some thoughts related to this topic:

*

For someone familiar with Scientology, the similarities are quite funny. There is a unique genius who develops a new theory of human mind called [Dianetics | Connection Theory]. For people familiar with psychology, it's mostly a restatement of some dubious existing theories, with huge simplifications and little evidence. But many people have their minds blown.

The genius starts a group with the goal of providing almost-superpowers such as [perfect memory | becoming another Elon Musk] to his followers, with the ultimate goal of saving the planet. The followers believe this is the only organization capable of achieving such a goal. They must regularly submit to having their thoughts checked at [auditing | debugging], where their sincerity is verified using [e-meter | Belief Reporting]. When the leader runs out of useful or semi-useful ideas to teach, there is always the unending task of exorcising the [body thetans | demons].

The former members are afraid of consequences if they speak about their experience in the organization.

*

Some people expressed epistemic frustration about a situation that seems important to understand correctly, but information is... (read more)

I wish there were more facts about Leverage out in actual common knowledge.

One thing I’d find really helpful, and that I suspect might be helpful broadly for untangling what happened and making parts of it obvious / common knowledge, is if I/someone/a group could assemble a Leverage timeline that included:

  • Who worked there in different years. When they came and left.
  • Who was dating whom at different years, in cases where both parties worked at Leverage and at least one was within leadership.
  • Funding cycles: when funding from different sources was applied for and/or received; what the within-Leverage narrative was for what was needed to get the funding.
  • Maybe anything else broad and simple/factual/obvious about that time period.

If anyone wants to give me any of this info, either anonymously or with your name attached, I’d be very glad to help assemble this into a timeline. I’m also at least as enthusiastic about anyone else doing this, and would be glad to pay a small amount for someone’s time if that would help. Maybe it could also be cobbled together in common here, if anyone is willing to contribute some of these basic facts.

Is anyone up for collaborating toward this in some form? I’m hoping it might be easier than some kinds of sorting-through, and like it might make some of the harder stuff easier once done.

habryka · 3y · 8
I would be happy to contribute my part of this, with the memory I have. I think I could cover a decent amount of the questions above, though I would also likely get some things wrong, so I wouldn't be a totally reliable observer.
Eli Tyre · 3y · 8
Same for me.

Zoe - I don’t know if you want me to respond to you or not, and there’s a lot I need to consider, but I do want to say that I’m so, so sorry. However this turns out for me or Leverage, I think it was good that you wrote this essay and spoke out about your experience.

It’s going to take me a while to figure out everything that went wrong, and what I did wrong, because clearly something really bad happened to you, and it is in some way my fault. In terms of what went wrong on the project, one throughline I can see was arrogance, especially my arrogance, which affected so many of the choices we made. We dismissed a lot of the actually useful advice and tools and methods from more typical sources, and it seems that blocking out society made room for extreme and harmful narratives that should have been tempered by a lot more reality. It’s terrible that you felt like your funding, or ability to rest, or take time off, or choose how to interact with your own mind were compromised by Leverage’s narratives, including my own. I totally did not expect this, or the negative effects you experienced after leaving, though maybe I would have, had I not narrowed my attention and basically gotten way too stuck in theoryland.

I agree with you that we shouldn’t skip steps. I’ve updated accordingly. Again I’m truly sorry. I really wanted your experience on the project to be good.

Edit: I got a request to cut the chaff and boil this down to discrete actionables. Let me do that.

  1. Will you release everyone from any NDAs?

  2. Will you step down from any management roles (e.g. Leverage and Paradigm)?

  3. Will you state, for the record, that you commit to not threatening* anyone who comes forward with reports that you do not like, in the course of this process?

I get the sense that you have made people afraid to stand against you, historically. Engaging in any further threats, seems likely to impede all of our ability to make sense of, and come to terms with, whatever happened. It could also be quite incriminating on its own.

* For full points, commit to also not make any strong stealthy attempts to socially discredit people.

There's good ways to do this kind of thing and bad ways. I feel that this is a bad way? Unless I'm missing a lot of context about what's happening here. 

Other ways to go about this:

  • Hire a third-party mediator to connect aggrieved parties with Geoff
  • Have a mutual trusted friend mediate conversations between aggrieved parties and Geoff
  • Geoff and ex-Leverage staff do a postmortem of some kind
  • Leverage creates an accountability system through which it collects data and feedback

I want to suggest that Geoff doesn't need to respond to Spiracular's requests because they contain a lot of assumptions, in the same way the question "Where were you on the night of the murder" contains a lot of assumptions. And this is a bad way to go about justice. Unless, again, I'm missing a bunch of context. 

For whatever it's worth, I think "No" is a pretty acceptable answer to some of these.


"No, for reasons X, Y, Z" is a pretty ordinary answer to the NDA concern. I'd still like to see that response.

"Leverage 2.0 was deliberately structured to avoid a lot of the drawbacks of Leverage 1.0" is something I actually think is TRUE. The fact that Leverage 1.0 was sunsetted deliberately is something that I thought actually reflected well on both Geoff and the people there.

I think from that, an argument could be made that stepping down is not necessary. I can't say I would necessarily agree with it, but I think the argument could be made.


Most of my stance, is that currently most people are too SCARED to talk. And this is actually really worrying to me.

I don't think "introducing a mediator," who would be spending about half of their time with Geoff --the epicenter of a lot of that fear-- would actually solve that problem completely. It would surprise me a lot if it worked here.


My #1 most desired commitment, right now? Is actually #3, and I maybe should have put it first.

A commitment to, in the future, not go after people and especially not to threaten them, for talking about their experiences.

That by itself, would be quite meaningful to me.

Well, I am at least gonna name a fraction of the assumptions that are implied by this set of requests. I am not asking you to do anything about this, but I am going to name them out loud, in the hopes that people come away more conscious of what other assumptions might be present. 

  • Geoff was the center of the problem and, by himself, should be held accountable 
  • If Geoff agrees to this, he is also agreeing on behalf of Leverage itself, including current members and potentially even past members. Meaning that if not-Geoff people break or violate these commitments, Geoff himself should be held responsible
  • Geoff has a meaningful degree of control over what other people do or do not do / say or do not say
  • The people who are scared of retaliation of some kind are mostly afraid of Geoff in particular
  • People's views of Geoff's willingness and ability to retaliate are basically correct / their fears are justified
  • The aggrieved parties should put the mass of the blame on Geoff
  • They should feel better if Geoff agrees to these requests
  • Geoff is totally free to say "no" to these requests on a public internet forum, and this won't cause a bunch of misunderstanding / assumption of guilt if he d
... (read more)

(In the Duncan-culture version of LW, comments like the above are both commonplace and highly appreciated.  I mention this because Unreal has mentioned having a tough time with LW, and imo the above comment demonstrates solidly central LW virtue.)

I appreciate this too. I think this form of push-back, is a potentially highly-productive one.

I may need to think for a bit about how to respond? But it seemed worth expressing my appreciation for it, first.

Meta-note: I tried the longer-form gentler one? But somebody ELSE complained about that structure.

(A piece of me recognizes that I can't make everybody happy here, but it's a little annoying.)

Spiracular · 3y · 2
That last sub-point is a little vague, so let me clarify my personal cut-off on this. Others may disagree.

I wouldn't object to seeing the occasional brief overt statement coming directly from Geoff that his recollection doesn't match someone else's interpretation.

I would object to any further encouragement of things that resemble the "strong, repeated pressure by someone close to Geoff to have the post marked as flawed" that Ruby described. Consistently denouncing the latter going forward would be very helpful.
Ruby · 3y · 120

I want to clarify that using the word "threat" in my case would cause one to overestimate the severity of the pressure I experienced by 5-20x or something (more so than "strong pressure" would). Not that the word is strictly wrong, but the connotations of it are too strong. I might end up listing the actual behaviors in a bit, maybe after more dialog with the person in question.

Spiracular · 3y · 2
When I said "last sub-point?" I was referring to "make any strong stealthy attempts to socially discredit people," not "threaten" (by which I mean, "threaten"). I was deliberately treating "no threats" as minimum, and "no strong social pressure" as extra-credit.
Ruby · 3y · 2
Ah, gotcha. I misunderstood the meaning of "sub-point".
Spiracular · 3y · 2
I recognize it took some courage to talk about this in the first place, and I don't want to discount that. I am glad that you said something. ...but I also don't want to lose track of this thread.

Edit: I got a request to boil this down, so I separated it to that thread.

And reading the room? I think there is, broadly speaking, a lot of fear of you. And I think part of why that is true, is because you cultivated that.

You have noticed that you made some errors which blinded you to the consequences of some of your actions, and I think that's a good start? I hope you might be able to agree with me that this attitude of fear is probably blinding you to the reporting of any further harms.

I recognize processing takes time, and there hasn't been a lot of time yet. But also, I think somebody needed to say this to your face, and it might as well be me.

How do you want to help wind down this aura of fear, which I think is still blinding not just most of us, but also YOU, to a lot of the full reality of what happened?

(And it might well be, that you will help with this by saying almost nothing and going after no-one. But if so? I think it would help, if you briefly committed to that outright.)
Spiracular · 3y · 2
I appreciate hearing from you about some of what you probably got wrong.

I'm pretty sure that a lot of this started out relatively benignly, and spiraled? I agree with your impression that arrogance was at least one of several pressures that made it hard to see that things were going in a bad direction. A lot of invisible guard-rails were dropped or traded away over time, and the absence of a certain amount of reality-checking made it very hard to fix after things had veered off the rails.

I hope your account contributes to making people less likely to make similar errors in the future.

(I would also be very unhappy, if I ever saw you having a substantial amount of power over people again though, fwiw.)
[comment deleted] · 3y · −23

I had thought about saying this earlier, for fairness/completeness, but didn't get around to it. I've heard some people feeling wary of speaking positively of Leverage out of vague worry of reprisal.

So... I do want to note 

a) I got a lot of personal value from interacting with Geoff personally. In some sense I'm an agent who tries to do ambitious things because of him. He looked at my early projects (Solstice in particular), he understood them, and told me he thought they were valuable. This was an experience that would later feed into my thoughts in this post.

b) I also have gotten some good techniques from the Leverage ecosystem. I'm not 100% sure which ideas came from where, but Belief Reporting in particular has been a valuable tool in my toolkit.

(none of this is meant to be evidence about a bunch of other claims in this thread. Just wanted to somewhat offset the arguments-are-soldiers default)

Piggybacking with additional accurate (albeit somewhat-tangential) positive statements, with a hope of making it seem more possible to say true positive and negative things about Leverage (since I've written mostly negative things, and am writing another negative thing as we speak):

The 2014 EA Retreat, run by Leverage, is still by far the best multi-org EA or rationalist event I've ever been to, and I think it had lots of important positive effects on EA.

I imagine a lot of people want to say a lot of things about Leverage and the dynamics around it, except it’s difficult or costly/risky or hard-to-imagine-being-heard-about or similar.

If anyone is up for saying a bit about how that is for you personally (about what has you reluctant to try to share stuff to do with Leverage, or with EA/Leverage dynamics or whatever, that in some other sense you wish you could share — whether you had much contact with Leverage or not), I think that would be great and would help open up space.

I’d say err on the side of including the obvious.

Ruby · 3y · 890

I interacted with Leverage some over the years. I felt like they had useful theory and techniques, and was disappointed that it was difficult to get access to their knowledge. I enjoyed their parties. I did a Paradigm workshop. I knew people in Leverage to a casual degree.

What's live for me now is that when the other recent post about Leverage was published, I was subjected to strong, repeated pressure by someone close to Geoff to have the post marked as flawed, and asked to lean on BayAreaHuman to approximately retract the post or acknowledge its flaws. (This request was made of me in my new capacity as head of LessWrong.) "I will make a fuss" is what I was told. I agreed that the post has flaws (I commented to that effect in the thread) and this made me feel the pressure wasn't illegitimate despite being unpleasant. Now it seems to be part of a larger concerning pattern.

Further details seem pertinent, but I find myself reluctant to share them (and already apprehensive that this more muted description will have the feared effect) because I just don't want to damage the relationship I have with the person who was pressuring me. I'm unhappy about it, but I still value that relations... (read more)

With Geoff himself (with whom I personally have had a casual positive relationship) I feel more actual fear of being critical or in any way taking the side against Leverage. I predict that if I do so, I'll be placed on the list of adversaries. And something like, just based on the reaction to the Common knowledge post, Leverage is very agenty when it comes to their reputation. Or I don't know, I don't fear any particularly terrible retribution myself, but I'm loath to make "enemies".

If you do make enemies in this process, in trying to help us make sense of the situation: count me among the people you can call on to help.

Brainstorming more concrete ideas: if someone makes a GoFundMe to try to offset any financial pressure/punishment Leverage-adjacent people might experience from sharing their stories, I'll be very happy to contribute.

I'm unhappy about it, but I still value that relationship

Positive reinforcement for finding something you could say that (1) protects this sort of value at least somewhat and (2) opens the way for aggregation of the metadata, so to speak; like without your comment, and other hypothetical comments that haven't happened yet for similar reasons, the pattern could go unnoticed.


I wonder if there's an extractable social norm / conceptual structure here. Something like separating [the pattern which your friend was participating in] from [your friend as a whole, the person you have a relationship with]. Those things aren't separate exactly, but it feels like it should make sense to think of them separately, e.g. to want to be adversarial towards one but not the other. Like, if there's a pattern of subtly suppressing certain information or thoughts, that's adversarial, and we can be agnostic about the structure/location of the agency behind that pattern while still wanting to respond appropriately in the adversarial frame.

My contact with Leverage over the years was fairly insignificant, which is part of why I don’t feel like it’s right for me to participate in this discussion. But there are some things that have come to mind, and since Anna’s made space for that, I’ll note them now. I still think it’s not really my place to say anything, but here’s my piece anyway. I’m speaking only for myself and my own experience.

I interviewed for an ops position at Leverage/Paradigm in early 2017, when I was still in college. The process took maybe a couple months, and the in-person interview happened the same week as my CFAR workshop; together these were my first contact with the Bay community. Some of the other rationalists I met that week warned me against Leverage in vague terms; I discussed their allegations with the ops team at my interview and came away feeling satisfied that both sides had a point.

I had a positive experience at the interview and with the ops team and their hiring process in general. The ops lead seemed to really believe in me and recommended me to other EA orgs after I didn’t get hired at Paradigm, and that was great. My (short-term) college boyfriend had a good relationship with Leverage... (read more)

The obsession with reputation control is super concerning to me, and I wonder how this connects up with Leverage's poor reputation over the years.

Like, I could imagine two simplified stories...

Story 1:

  • Leverage's early discoveries and methods were very promising, but the inferential gap was high -- they really needed a back-and-forth with someone to properly communicate, because everyone had such different objections and epistemic starting points. (This is exactly the trouble MIRI had in its early comms -- if you try to anticipate which objections will be salient to the reader, you'll usually miss the mark. And if you do this a lot, you miss the mark and are long-winded.)
  • Because of this inferential gap, Leverage acquired a very bad reputation with a bunch of people who (a) misunderstood its reasoning, and then (b) didn't get why Leverage wasn't investing more into public comms.
  • Leverage then responded by sharing less and trying to reset its public reputation to 'normal'. It wasn't trying to become super high-status, just trying to undo the damage already done / prevent things from further degrading as rumors mutated over time. Unfortunately, its approach was heavy-handed and incompet
... (read more)

Based on broad-strokes summaries said to me by ex-Leveragers (though admittedly not first-hand experience), I would say that the statement "Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers" rings true to what I have heard.

Some things mentioned to me by Leverage people as typical/archetypal of Geoff's attitude include being willing to lie to people outside Leverage, feeling attacked or at risk of being attacked, and viewing adjacent non-Leverage groups within the broader EA sphere as enemies.

Thanks! To check: did one or more of the ex-Leveragers say Geoff said he was willing to lie? Do you have any detail you can add there? The lying one surprises me more than the others, and is something I'd want to know.

Here is an example:

  • Zoe's report says of the information-sharing agreement "I am the only person from Leverage who did not sign this, according to Geoff who asked me at least three times to do so, mentioning each time that everyone else had (which read to me like an attempt to pressure me into signing)."

  • I have spoken to another Leverage member who was asked to sign, and did not.

  • The email from Matt Fallshaw says the document "was only signed by just over half of you". Note the recipients list includes people (such as Kerry Vaughan) who were probably never asked to sign because they were not present, but I would believe that such people are in the minority; so this isn't strict confirmation, but just increased likelihood, that Geoff was lying to Zoe.

This is lying to someone within the project. I would subjectively anticipate higher willingness to lie to people outside the project, but I don't have anything tangible I can point to about that.

I am more confident that what I heard was "Geoff exhibits willingness to lie". I also wouldn't be surprised if what I heard was "Geoff reports being willing to lie". I didn't tag the information very carefully.

My current feelings are a mixture of the following: 

  • I disagree with a lot of the details of what many people have said (both people who had bad experiences and people defending their Leverage experiences and giving positive testimonials), and feel like expressing my take has some chance of making those people feel like their experiences are invalidated, or at least spark some conflict of some type
  • I know that Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand, and that makes me both feel like I can trust many fewer things in the discussion, and makes me personally more hesitant to share some things (while also feeling like that's kind of cowardly, but I haven't yet had the time to really work through my feelings here, which in itself has some chilling effects that I feel uncomfortable with, etc.)
  • On the other side, there have been a lot of really vicious and aggressive attacks to anyone saying anything pro-leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It's also
... (read more)

Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand

I assume there isn't a public record of this anywhere? Could I hear more details about what was said? This sounds atrocious to me.

I similarly feel that I can't trust the exculpatory or positive evidence about Leverage much so long as I know there's pressure to withhold negative information. (Including informal NDAs and such.)

On the other side, there have been a lot of really vicious and aggressive attacks to anyone saying anything pro-leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It's also been more of a crowd-driven phenomenon, which makes it less predictable and more scary.

I agree with this too, and think it's similarly terrible, but harder to blame any individual for (and harder to fix).

I assume it's to a large extent an extreme example of the 'large inferential gaps + true beliefs that sound weird' afflicting a lot of EA orgs, including MIRI. Though if Leverage has been screwed up for a long time, some of that public reaction may also have been watered over the years by true rumors spreading about the org.

farp · 3y · 2
Let's stand up for the truth regardless of threats from Geoff/Leverage, and let's stand up for the truth regardless of the mob. Let's stand up for the truth!

Maintaining some aura of neutrality or impartiality at the expense of the truth would be IMO quite obviously bad.

I think that it is seen as not very normative on LW to say "I know things, confidential things I will not share, and because of that I have a very [bad/good] impression of this person or group". But IMO it's important to surface. Vouching is an important social process.
ChristianKl · 3y · 8
It seems that your account was registered just to participate in this discussion, and you withhold your personal identity. If you sincerely believe that information should be shared, why are you withholding yourself while telling other people to take risks?
farp · 3y · 100

I have no private information to share. I think there is an obvious difference between asking powerful people in the community to stand up for the truth, and asking some rando commentator to de-anonymize. 

Ruby · 3y
Anna is attempting to make people comfortable having this difficult conversation about Leverage by first inviting them just to share what factors are affecting their participation. Oliver is kindly obliging and saying what's going through his mind. This seems like a good approach to me for getting the conversation going. Once people have shared what's going through their minds (and probably these need to be received with limited judgmentality), the group can then understand the dynamics at play and figure out how to proceed having a productive discussion.

All that to say, I think it's better to hold off on pressuring people or saying their reactions aren't normative [1] in this sub-thread. Generally, I think having this whole conversation well requires a gentleness and patience in the face of the severe, hard-to-talk-about situation. Or to be direct, I think your comments in this thread have been brusque/pushy in a way that's hurting the conversation (others feel free to chime in if that seems wrong to them).

[1] For what it's worth, I think disclosing that your stance is informed by private info is good and proper.

I think your comments in this thread have been brusque/pushy in a way that's hurting the conversation (others feel free to chime in if that seems wrong to them).

I mentioned in a different comment that I've appreciated some of farp's comments here for pushing back against what I see as a missing mood in this conversation (acknowledgment that the events described in Zoe's account are horrifying, as well as reassurance that people in leadership positions are taking the allegations seriously and might take some actions in response). I also appreciate Ruby's statement that we shouldn't pressure or judge people who might have something relevant to say.

The unitofcaring post on mediators and advocates seems relevant here. I interpret farp (edit: not necessarily in the parent comment, but in various other comments in this thread) as saying that they'd like to see more advocacy in this thread instead of just mediation. I am not someone who has any personal experiences to share about Leverage, but if I imagine how I'd personally feel if I did, I think I agree.

On mediators and advocates: I think order-of-operations MATTERS.

You can start seeking truth, and pivot to advocate, as UOC says.

What people often can't do easily is start with advocate, and pivot to truth.

And with something like this? What you advocated early can do a lot to color both what and who you listen to, and who you hear from.

farp · 3y
The entire thesis of the post is that you want a mixture of advocacy and mediation in the community. So if your proposal is that we all mediate, and then pivot to advocacy, I think that is not at all what UOC says.  Not that I super endorse the prescription / dichotomy that the post makes to begin with.
Rob Bensinger · 3y
I liked Farp's "Let's stand up for the truth" comment, and thought it felt appropriate. (I think for different reasons than "mediators and advocates" -- I just like people bluntly stating what they think, saying the 'obvious', and cheerleading for values that genuinely deserve cheering for. I guess I didn't expect Ollie to feel pressured-in-a-bad-way by the comment, even if he disagrees with the implied advice.)
farp · 3y

Thanks. Your comments and mayleaf's do mean a lot to me. Also, I was surprised by negative reaction to that comment and didn't really expect it to come off as admonishment or pressure. Love 2 cheerlead \o/

farp · 3y
I have thought about this UOC post and it has grown on me. The fact is that I believe Zoe, and I believe her experience is not some sort of anomaly. But I am happy to advocate for her just on principle. Geoff has many more resources and much at stake. Zoe just has (IMO) the truth and bravery, and little to gain but peace. Justice for Geoff just doesn't need my assistance, but justice for Zoe might.

So I am happy to blindly ally with Zoe and any other victims. And yes, I would like others to do the same, and broadcast that we will fight for them. Otherwise they are entering a potentially shitty-looking fight, with little to gain, against somebody with everything to lose. I don't demand that no mediation take place, but if I want to plant my flag, that's my business. It's not like I am doing anything dishonest in the course of my advocacy.

And to be completely frank, as an advocate for the victims, I don't really want Anna Salamon to be one of the major mediators here. I don't think she's got a good track record with CFAR stuff at all -- I have mentioned Robert Lecnik a few times already. I think Kelsey's post is right -- mediators need to seem impartial. For me, Anna can't serve this role. I couldn't say how representative I am.

I will be happy to contribute financially to Zoe's legal defense, if Geoff decides to take revenge.

In the meanwhile, I am curious about what actually happened. The more people talk, the better.

I appreciate this invitation. I'll re-link to some things I already said on my own stance: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QKKnepsMoZmmhGSe

Beyond what I laid out there:

  • It was challenging being aware of multiple stories of harm, and feeling compelled to warn people interacting with Geoff, but not wanting to go public with surprising new claims of harm. (I did mention awareness of severe harm very understatedly in the post. I chose instead to focus on "already known" properties that I feel substantially raise the prior on the actually-observed type of harm, and to disclose in the post that my motivation in cherry-picking those statements was to support pattern-matching to a specific template of harm).

  • After posting, it was emotionally a bit of a drag to receive comments complaining that the information-sharing attempt was not done well enough, and comparatively few comments grateful that I had attempted to share what I could, as best I could, at the time, although the upvote patterns felt encouraging. I was pretty much aware that that was what was going to happen. In general, "flinc

... (read more)

Since it sounds like just-upvotes might not be as strong a signal of endorsement as positive engagement...

I want to say that I really appreciate and respect that you were willing to come forward, with facts that were broadly-known in your social graph, but had been systematically excluded from most people's models.

And you were willing to do this in a pretty adversarial environment! You had to deal with a small invisible intellectual cold-war that ensued, almost alone, without backing down. This counts for even more.


I do have a little bit of sensitive insider information, and on the basis of that: Both your posts and Zoe's have looked very good-faith to me.

In a lot of places, they accord with or expand on what I know. There are a few parts I was not close enough to confirm, but they have broadly looked right to me.

I also have a deep appreciation for Zoe's calling out that different corners of Leverage had very different experiences with it. Because they did! Not all time-slices or sub-groups within it experienced the same problems.

This is probably part of why it was so easy to systematically play people's personal experiences against each other: since Geoff or others controlled the context through which Leverage was experienced, they could systematically bias whose reports were heard.

(Although I think it will be harder in the future to engage in this kind of bullshit, now that a lot of people are aware of the pattern.)


To those who had one of the better firsthand experiences of Leverage:

I am still interested in hearing your bit! But if you are only engaging with this due to an inducement that probably includes a sampling-bias, I appreciate you including that detail.

(And I am glad to see people in this broader thread, being generally open about that detail.)

TekhneMakre · 3y
I don't have anything to add, but I just want to say I felt a pronounced pang of warmth/empathy towards you reading this part. Not sure why, something about fear/bravery/aloneness/fog-of-war.

I will talk about my own bit with Leverage later, but I don't feel like it's the right time to share it yet.

(But fwiw: I do have some scars, here. I have a little bit of skin in this one. But most of what I'm going to talk about, comes from analogizing this with a different incident.)

A lot of the position I naturally slide into around this, which I have... kind of just embraced, is of trying to relate hard to the people who:

  • WERE THERE
  • May have received a lot of good along with the bad
  • May have developed a very complicated and narratively-unsatisfying opinion because of that, which feels hard to defend
  • Are very sensitized to condemning mob-speak. Because they've been told, again and again, that anything good they got out of the above, will be swept out with the bathwater if the bad comes to light.
    • This sort of thing only stays covered up for this long if there was a lot of pressure and plausible-sounding arguments pointing in the direction of "say nothing." The particular forms of that will vary.
    • Core Leverage seems pretty willing to resort to manipulation and threats? And despite me generally trying so hard to avoid this vibe: I want to condemn that outright.
    • Also, in any othe
... (read more)

I was once in a similar position, due to my proximity to a past (different) thing. I kinda ended up excruciatingly sensitive to how some things might read or feel to someone who was close, got a lot of good out of it (with or without the bad), and mostly felt like there was no way their account wouldn't be twisted into something unrecognizable. And who may be struggling with processing an abrupt shift in their own personal narrative, although I sincerely hope the 2 years of processing helped to make this less of a thing? But if you are going through it anyway, I am sorry.

And... I want this to go right. It didn't go right then; not entirely. I think I got yelled at by someone I respect, the first time I opened up about it. I'm not quite sure how to make this less scary for them? But I want it to be.

The people I know who got swept up in this include some exceptionally nice people. There is at least one of them whom I would ordinarily call exceptionally sane. Please don't feel like you're obligated to identify as a bad person, or as a victim, because you were swept up in this. Just because some people might say it about you doesn't make it who you are.

While I realize I've kinda de-facto "taken a side" by this point (and probably limited who will talk to me as a result)? I was mispronouncing Geoff's name, before this hit; this is pretty indicative of how little I knew him personally. I started out mostly caring about having the consequences-for-him be reached based off of some kind of reasonable assessment, and not caring too much about having it turn out one way or another. I still feel more invested in there being a good process, and in what will generate the best outcomes for the people who worked under him (or will ever work under him), than anything else.

Compared to Brent's end-result of "homeless with health-problems in Hawaii"? The things I've asked for have felt mild. But I also knew that if I wasn't handling mentioning them, somebody else probably would. In my eyes, we probably needed someone outside of the Leverage ecosystem who knew a lot of the story (despite the substantial information-hiding efforts) to be handling this part of the response.

Pushing for people to publish the information-hiding agreement, and proposing that Geoff maybe shouldn't have a position with a substantial amount of power over others (at lea... (read more)

TekhneMakre · 3y
An abstract note: putting stock in anonymous accounts potentially opens wider a niche for false accounts, because anonymity prevents doing induction about trustworthiness across accounts by one person. (I think anonymity is a great tool to have, and don't know if this is practically a problem; I just want to track the possibility of this dynamic, and appreciate the additional value of a non-anonymous account.)

One tool here is for a non-anonymous person to vouch for the anonymous person (because they know the person, and/or can independently verify the account).

TekhneMakre · 3y
True. A maybe not-immediately-obvious possibility: someone playing Aella's role of posting anonymous accounts could offer the following option: if you give an account and take this option, then if the poster later finds out that you seriously lied, they have the option to de-anonymize you. The point being, in the hypothetical where the account is egregiously false, the accounter's reputation still takes a hit; and so, these accounts can be trusted more. If there's no possibility of de-anonymization, then the account can only be trusted insofar as you trust the poster's ability to track accounters' trustworthiness. Which seems like a more complicated+difficult task. (This might be a terrible thing to do, IDK.)
Spiracular · 3y
I get VERY creepy vibes from this proposal, and want to push back hard on it. Although, hm... I think "lying" and "enemy action" are different? Enemy action occasionally warrants breaking contracts back, after they didn't respect yours. Whereas if there is ZERO lying-through-negligence in accounts of PERSONAL EXPERIENCES, we can be certain we set the bar-of-entry far too high.
Viliam · 3y
Depends on the algorithm to determine whether "you seriously lied". Imagine a hypothetical situation where telling the truth puts you in danger, but you read this offer, think "well, I am telling the truth, so they will protect my anonymity", and describe truthfully your version. Unluckily for you, your opponent lied, and was more convincing than you. Afterwards, because your story contradicts the accepted version of events, it seems that you were lying, accusing unfairly the people who are deemed innocent. As a punishment for "seriously lying", your identity is exposed. If people with sensitive information suspect that something like this could happen, then it defeats the purpose of the proposal.
TekhneMakre · 3y
Yeah, that seems like a big potential flaw. (Which could just mean, no one should stick their neck out like that.) I'm imagining that there's only potential benefit here in cases where the accounter also has strong trust in the poster, such that they think the poster almost certainly won't be falsely convinced that a truth is an egregious lie. In particular, the agreement isn't about whether the court of public opinion decides it was a lie, just the poster's own opinion. (The poster can't be held accountable to that by the public, unless the public changes its mind again, but the poster can at least be held accountable by the accounter.) (We could also worry that this option would only be taken by accounters with accounts that are infeasible to ever reveal as egregious lies, which would be a further selection bias, though this is sort of going down a hypothetical rabbit hole.)

In the past, I've been someone who has found it difficult and costly to talk about Leverage and the dynamics around it, or about organizations that are or have been affiliated with effective altruism, though on the occasions I have spoken up I've said more than most. I would have done it more, but the costs were that some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general, and discouraged me from speaking up again with what sometimes amounted to nothing more than peer pressure.

That was a few years ago. For lots of reasons, it's now easier, less costly, and less risky for me, and easier not to feel fear. I don't know yet what I'll say regarding any or all of this related to Leverage, because I don't have any sense of how I might be prompted or provoked to respond. Yet I expect I'll have more to say, and I don't yet have any particular feelings about what I might share as relevant. I'm sensitive to how my statements might impact others, but for myself personally I feel almost indifferent.

My general feeling about this is that the information I know is either well-known or otherwise "not my story to tell." 

I've had very few direct interactions with Leverage except applying to Pareto, a party or two, and some interactions with Leverage employees (not Geoff) and volunteers.  As is common with human interactions, I appreciated many but not all of my interactions.

Like many people in the extended community, I've been exposed to a non-overlapping subset of accounts/secondhand rumors of varying degrees of veracity. For some things it's been long enough that I can't track the degree of confidences I'm supposed to keep, and under which conditions, so it seems better to err on the side of silence. 

At any rate, it's ultimately not my story/tragedy. My own interactions with Leverage have not been personally noticeably harmful or beneficial.

Avi · 3y

FYI - Geoff will be talking about the history of Leverage and related topics on Twitch tomorrow (Saturday, October 23rd 2021) starting at 10am PT (USA West Coast Time). Apparently Anna Salamon will be joining the discussion as well.

Geoff's Tweet

Text from the Tweet (for those who don't use Twitter):

"Hey folks — I'm going live on Twitch, starting this Saturday. Join me, 10am-1pm PT:
twitch.tv/geoffanders
This first stream will be on the topic of the history of my research institute, Leverage Research, and the Rationality community, with @AnnaWSalamon as a guest."

Yep. I hope this isn’t bad to do, but I am doing it.

Avi · 3y
I'm sure it'll be fine :-) I'm not involved in this in any way, but from the comments I've seen of yours in these threads you've shown great honesty and openness with everything.
Ben Pace · 3y
I’d be more inclined to join if I could ask questions. I’ve not twitched before, my sense is the chat is a bit ephemeral to the video. Is the intention that it is mostly going to be you two talking? Edit: Slightly changed tone of comment to avoid potentially sounding flippant.
LarissaRowe · 3y
Geoff is answering chat questions at the moment (at least until 11 AM PT) so if you have any questions you should consider joining.

Unfortunately for me, there is apparently no video recording available on Twitch for this stream? (There are two short clips, but not the full broadcast.) 

If anyone has a link to it, could you include it here? That'd be great!

Alas, no. I'm pretty bummed about it, because I thought the conversation was rather good, but Geoff pushed the "save recording" button after it was started and that didn't work.

Based on the fact that Twitch is counter-intuitive about recording (it's caught me out before too) and the technical issues at the start, I made a backup recording just in case. It's only audio, but I hope it helps:

https://drive.google.com/file/d/1Af1dl-v7Q7uJhdX8Al9FsrJDBc4BqM_f/view?usp=sharing

habryka · 2y
Thank you so much! This is great!
Rob Bensinger · 2y
!!! BRILLIANT. I thought the conversation was quite important, and your foresight has saved it for the community's memory. Thank you so much. :) This should be signal-boosted for all the people who missed the stream.
Lulie · 2y
Update: I’ve disabled public access by request. Geoff said (here) he’s going to post the recording to his website.

I re-listened to Anna and Geoff's conversation, which is the main part of the audio that I found interesting. Timestamps for that conversation:

1:57:57 - Early EA history, the EA Summit, and early EA/Leverage interactions
2:13:34 - Narrative addiction and leaders being unable to talk to each other
2:17:20 - Early Geoff cooperativeness
2:19:58 - Possible causes for EA becoming more narrative-addicted
2:22:35 - Conflict causing group insularity
2:24:50 - Anna on narrative businesses, narrative pyramid schemes, and disagreements
2:28:28 - Geoff on narratives, morale and the epistemic sweet spot
2:30:08 - Anna on trying to block out things that would weaken the narrative, and external criticism of Leverage
2:36:30 - More on early Geoff cooperativeness
2:41:44 - 'Stealing donors', Leverage's weird vibe (non-materialism?), Anna/Geoff's early interactions, 'writing off' on philosophical grounds, and keeping weird things at arm's length
2:50:00 - The value of looking at historical details, and narrative addiction collapse
2:52:30 - Geoff wants out of the rationality community; PR and associations; and disruptive narratives

TekhneMakre · 3y
Shoot. They did try at the beginning and thought they were recording.

A few other points, additional to my other comment (these are half-remembered, rephrased, and presumably missing parts and context):

* Anna hypothesizes that Geoff was selecting who he talked with and worked with and hired in part based on them being "not too big", so that he could intellectually dominate them. She tells a story where she and Nate went (in 2017? 2019?) to talk with Geoff, and Anna+Nate thought talking was good or fruitful or something, but Geoff seemed uninterested, and maybe that's because Anna and Nate are "too big".
* Geoff describes trying, 2010±2, to get EA orgs to team up / combine, but finding lack of interest. I got a (speculative) bit of a sense of frame control battles going on in the shadows, like Geoff said something like "well when you think in detail about ambitious plans, you tend to see ways in which other people could fit into them", and I could imagine his overtures having a subtle sense of trying to capture or define a frame, like some proportion of "here's how you fit into my plan" rather than "here's a common goal we're aiming at, here's synergies between our strategies, also let's continuously double crux about crucial things". (It would be bad to punish people for having ambitious plans that involve other people; it would be good to understand how to navigate "provisional plans" that can go in a direction and gain from coordination, while also remaining deeply open to members doing surprising things that upturn the plans, as well as not taking over their soul etc.) ETA: Anna's comment here seems to be counterevidence: [Anna] was like "yes, that matches my memory and perception; I remember you [Geoff] and Leverage seeming unusually interested in getting specific collaborations or common projects that might support your goals + other groups' goals at once, going, and more than other groups, and trying to support cooperation in this way"[...].
* Geoff descri

Hi everyone. I wanted to post a note to say first, I find it distressing and am deeply sorry that anyone had such bad experiences. I did not want or intend this at all.

I know these events can be confusing or very taxing for all the people involved, and that includes me. They draw a lot of attention, both from those with deep interest in the matter, where there may be very high stakes, and from onlookers with lower context or less interest in the situation. To hopefully reduce some of the uncertainty and stress, I wanted to share how I will respond.

My current plan (after writing this note) is to post a comment about the above-linked post. I have to think about what to write, but I can say now that it will be brief and positive. I’m not planning to push back or defend. I think the post is basically honest and took incredible courage to write. It deserves to be read.

Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.

It may be useful to address the Leverage/Rationality relation or the Leverage/EA relation as well, but discussion of that might distract us from what is most important right now.

Given what the post said about the NDA that people signed when leaving, it seems to me like explicitly releasing people from that NDA (maybe with a provision to anonymize names of other people) would be very helpful for having a productive discussion that can integrate the experiences of many people into public knowledge and create a shared understanding of what happened.

BlueMarlin · 2y
Geoff, in relation your recent livestream, which was on the topic of helping to craft good incentives so people can speak up, could you comment on the state of the NDAs, and, if people have not yet been released from them, whether or not you will explicitly release people from them in order to facilitate discussion about Leverage? And if not, why not? Given your emphasis on symmetry (of incentivizing both positive and negative accounts), it would seem obviously necessary to release people from an agreement to "be generally positive about each other" (the very first agreement in the document), which they may still feel bound by, in order to unbias incentives. Cf. the questions and concerns raised in Rob's comment, which remain pertinent.

Hi Geoff—have you posted the brief response comment anywhere yet?

I would also be interested in knowing a timeline for the response.

Geoff_Anders · 3y
Yes, here: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=3gMWA8PjoCnzsS7bB

Separately, I’m going to write a letter in my role as Executive Director of Leverage Research on the topic of harms from our previous psychology research and the structure of the organization.

Geoff, has this letter been published yet? And if not, when will it be published?

It was published this evening. Here is a link to the letter, and here is the announcement on Twitter.

Freyja · 2y
Thank you for keeping that promise; I imagine it wasn’t easy to write.
Freyja · 2y
I would also be very interested in a timeline for this.
throwaway2456 · 3y
* For Geoff or Reserve: What is the relationship between Leverage and Reserve, and related individuals and entities?
* For everyone: Under what conditions does restitution to ex-Leveragers make sense? Under what conditions does it make sense for leadership to divest themselves of resources?
* For everyone: Arguendo, what could restitution or divestment concretely look like?

Edit: I was going to leave the original comment, to provide context to Vaniver's reply. But it started receiving upvotes that brought it above "-1", making it a more prominent bad example of community norms. I think the upvotes indicate importance in the essence of the questions, but their form was ill-considered and rushed to judgement. In compromise, I've tried to rewrite them more neutrally and respectfully to all involved. I may revisit them a few more times.

I wanted to note that this comment both a) raises a good point (should Leverage pay restitution to people who were hurt by it? Why and how much?) and b) does so in a way that I think is hostile and assumes far more buy-in than it has (or would need to get support for its proposal).

First, I think most observers are still in "figuring out what's happened" mode. Was what happened with Zoe unusually bad or typical, predictable or a surprise? I think it makes sense to hear more stories before jumping to judgment, because the underlying issue isn't that urgent and the more context, the wiser a decision we can make.

Second, I think a series of leading questions asked to specific people in public looks more like norm enforcement than it does like curious information-gathering, and I think the natural response is suspicion and defensiveness. [I think we should go past the defensiveness and steelman.]

Third, I do think that it makes sense for people to make things right with money when possible; I think that this should be proportional to damages done and expectations of care, rather than just 'who has the money.' Suppose, pulling these numbers out of a hat, the total damage done to L... (read more)

In retrospect, I apologize for the strident tone and questions in my original comment. I am personally worried about further harm, in uses of money or power by Anders, and from Zoe's post it seems there were a handful to many more people hurt. If money or tokens are possibly causally downstream of harm, restitution might reduce further harm and address harm that's already taken place. The community is still gathering information, though, and my personal rush to judgement isn't keeping pace with that. I'll leave my above comment as is, since it's already received a constructive reply.

Ben Pace · 3y
I appreciate you addressing Vaniver's concerns about your comment.
farp · 3y
The counter argument would be: Suppose we do not think it should be profitable to start a cult and get rich. If we enforce the norm "if we find out you started a cult and got rich off it, you only get to be 90% rich instead of 100% rich", well, that is not very powerful. Maybe the rest should go to actually-effective charity or something. That said, a norm where we say "you don't get to be rich anymore" is sort of moot when ultimately Geoff has all the Leverage 🥁💥
farp · 3y
I am sad that you have deleted your original comment because it was my favorite comment in this whole page! Your updated version, by comparison, is much worse (no offense).  Look, I think once you are trying to express the idea "I think you should pay millions of dollars to the people you have very badly harmed", you should not be so concerned about whether you are doing so in a "hostile" way. I hope we can all appreciate the comedy in this even if you think neutrality is ultimately better. I agree that your new version is more norm-conformant, but I am curious if you think it is an equally thought-provoking / persuasive / useful presentation of the ideas. I also think that your new version is inadequate for leaving out the important context that Reserve probably made a lot of money.

Here's anonymous submission of Leverage's Basic Information Acknowledgement Checklist document. The submitter said "The text of this document has been copied word for word from the original, except with names redacted."

https://we.tl/t-KaDXP3vrW3

I can confirm that this document is legitimate as I've seen a more recent version of the same checklist.

Leverage Research is planning to review and revise its information management policy, as soon as we have time.

Relatedly, a LessWrong user recently reached out to us directly for information about our information management policies and agreements. During the conversation, it became clear that it was difficult for them, as someone seeking information, to formulate which questions to ask and difficult for us as an organization to determine what answers they might find useful, given the differences in information and context. As a result of this conversation, we concluded it might be useful to figure out how to help people request the information that they are looking for, while at the same time protecting the institute’s time, ownership of research, and ability to carry out its mission. 

As part of this, we have now set up a request form on our website where it is possible to make information requests of the organization. We expect to respond to genuine inquiries with answers, updates to our FAQ (forthcoming), the release of documents, and more, as our other responsibilities permit.

EDIT: This comment described a bunch of emails between me and Leverage that I think would be relevant here, but I misremembered something about the thread (it was from 2017) and I'm not sure if I should post the full text so people can get the most accurate info (see below discussion), so I've deleted it for now. My apologies for the confusion

Aella · 3y
Would you happen to have/be willing to share those emails?
alyssavance · 3y
I have them, but I'm generally hesitant to share emails as they normally aren't considered public. I'd appreciate any arguments on this, pro or con

I generally feel reasonably comfortable sharing unsolicited emails, unless the email makes some kind of implicit request to not be published that I judge at least vaguely valid. In general I am against "default confidentiality" norms, especially for requests or things that might be kind of adversarial; I feel like I've seen those kinds of norms weaponized in the past in ways that seemed pretty bad. While there is a generally broad default expectation that unsolicited private communication will be kept confidential, it's not a particularly sacred protection in my mind. If confidentiality were explicitly or implicitly requested, I would talk to the person first to get a more comprehensive understanding of why they requested it, and would generally err on the side of not publishing, though I would feel comfortable overcoming that barrier given sufficiently adversarial action.

unless the email makes some kind of implicit request to not be published

What does "implicit request" mean here? There are a lot of email conversations where no one writes a single word that's alluding to 'don't share this', but where it's clearly discussing very sensitive stuff and (for that reason) no one expects it to be posted to Hacker News or whatever later.

Without having seen the emails, I'm guessing Leverage would have viewed their conversation with Alyssa as 'obviously a thing we don't want shared and don't expect you to share', and I'm guessing they'd confirm that now if asked?

I do think that our community is often too cautious about sharing stuff. But I'm a bit worried about the specific case of 'normalizing big infodumps of private emails where no one technically said they didn't want the emails shared'.

(Maybe if you said more about why it's important in this specific case? The way you phrased it sort of made it sound like you think this should be the norm even for sensitive conversations where no one did anything terrible, but I assume that's not your view.)

2habryka3y
I don't know, kind of complicated, enough that I could probably write a sequence on it, and not even sure I would have full introspective access into what I would feel comfortable labeling as an "implicit request". I could write some more detail, but it's definitely a matter of degree, and the weaker the level of implicit request, the weaker the reason for sharing needs to be, with some caveats about adjusting for people's communication skills, adversarial nature of the communication, adjusting for biases, etc.
5Spiracular3y
I want to throw out that while I am usually SUPER on team "explicit communication norms", the rule-nuances of the hardest cases might sometimes work best if they are a little chaotic & idiosyncratic. I personally think there might be something mildly-beneficial and protective, about having "adversarial case detected" escape-clauses that vary considerably from person-to-person. (Otherwise, a smart lawful adversary can reliably manipulate the shit out of things.)
[-]cata3y150

I would just ask the other party whether they are OK to share rather than speculating about what the implicit expectation is.

2Rob Bensinger3y
?!?!?!?!?!?!?!?!?!
6Rob Bensinger3y
Update: Looks like the thing I was surprised by didn't happen. Confusion noticed, I guess!

Off the cuff thoughts from me listening to the Twitch conversation between Anna and Geoff:

  • I think Geoff, more than he's seeing clearly, disagrees or at least in the past disagreed with the claim that using narratives to boost morale--specifically, deemphasizing information that contradicts a narrative plan--is basically just bad in the long run. Would be better to have deeper understanding of what morale is.
  • Geoff describes being harmed by some sort of initial rejection by the rationality/EA community (around 2011? 2010?). This suggests, to me, a (totally conjectural!) story where he got into an escalating narrative cold war with the rationality community: first he perceives (possibly correctly) that the community rejects him, and thereby cuts off his ability to work with people for projects he thinks are good; then, he corrects for this with narrative pushback--basically, firmly reemphasizing his positive vision or whatever. Then people in the community sense this as narrative distortion / deception, and react (more or less consciously) with further counter-distortion. (Where the mechanism is like, they sense something fishy but don't know how to say "Geoff is slightly distortin
... (read more)

Thanks! I would love follow-up on LW to the twitch stream, if anyone wants to. There were a lot of really interesting things being said in the text chat that we didn't manage to engage with, for example. Unfortunately the recording was lost, which is a shame because IMO it was a great conversation.

TekhneMakre writes:

This suggests, to me, a (totally conjectural!) story where [Geoff] got into an escalating narrative cold war with the rationality community: first he perceives (possibly correctly) that the community rejects him…

This seems right to me

Anna says there were in the early 2010s rumors that Leverage was trying to fundraise from "other people's donors", and that Leverage/Geoff was trying to recruit, whether ideologically or as employees, people from other EA/rationality orgs.

Yes. My present view is that Geoff’s reaching out to donors here was legit, and my and others’ complaints were not; donors should be able to hear all the pitches, and it’s messed up to think of “person reached out to donor X to describe a thingy X might want to donate to” as a territorial infringement.

This seems to me like an example of me and others escalating the “narrative cold ... (read more)

I have video of the first 22 minutes, but at the end I switched into my password manager (not showing passwords on screen, but a series of sites where I'm registered), so I wouldn't want to publicly post the video, but I'm open to sharing it with individual people if someone wants to write something referencing it.

I wish I had been clearer about how to do screen recording in a way that only captures one browser window...

How about posting the audio?

4ChristianKl2y
Geoff asked me to leave public publication to him. I sent him my video with the last minute (where I had personal information) cut off. Given that I do think Geoff made a good effort to be cooperative, and there's no attempt to assert that something happened during the stream that didn't happen, I see no reason to unilaterally publish it publicly.

Noting that it has been 9 days and Geoff has not yet followed through on publishing the 22-minute video. Thankfully, however, a complete audio recording has been made available by another user.

I notice that my comment score above is now zero. I would like others to know that I visited Geoff's website prior to posting my comment to ensure my comment was accurate, and that these links appeared after my above comment.

5habryka2y
I did indeed misunderstand that! I didn't downvote, but my misunderstanding did cause me to not upvote. 
4ChristianKl2y
Geoff wrote me six days ago that he put it on his website. 
4BlueMarlin2y
It is possible that I missed the link, in which case I apologize, although I am surprised because I did check the website. It doesn't seem that the web archive can verify timestamps. I am glad I wrote my comments anyway, so that now the links have been shared here on LW, which I don't think they were before, and since Lulie's recording that I linked above seems to have been taken down.

Geoff was interested in publishing a transcript and a video, so I think Geoff would be happy with you publishing the audio from the recording you have.

3Rob Bensinger2y
Hope to see this posted soon! I missed the first hour of the Twitch video. (Though I'm guessing the part I saw, Geoff and Anna talking, was the most valuable part.)
7Eli Tyre2y
Is that to say that you have audio of the whole conversation, and video of the first 20 minutes?

I have a recording of 22 minutes. The last minute includes me switching into my password manager and thus I cut it off from the video that I passed on.

2TurnTrout2y
I think the question is: Why not send the audio from after the 22 minute mark? Then we won't be able to see the password manager.
4ChristianKl2y
I don't have anything after the 22 minute mark. I have a recording of 22 minutes and passed on 21 minutes of it. At the time, I didn't want to focus my cognitive resources on figuring out recording but on the actual content (and you can actually see me writing my comment ;) in the video).
2TurnTrout2y
Makes sense, thanks for clarifying and for sharing what you have.
5TekhneMakre3y
A few more half-remembered notes from the conversation: https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research?commentId=e8vL8nyTGwDLGnR3r#Yrk2375Jt5YTs2CQg

If this is true, it does strike me as important and interesting. Speaking from a very abstract viewpoint not strongly grounded in observations, I'll speculate: One contributor, naturally, would be fear of false hope. One is (correctly) afraid of hope because hope somewhat entails investment and commitment. Fear of false hope could actually make hope be genuinely false, even when there could have been true hope. This happens because hope is to some extent a decision, so *expecting* you and others in the future to not collaborate in some way, also *constitutes a decision* to not collaborate in that way. If you will in the future behave in accordance with a plan, then it's probably correct to behave now in accordance with the plan; and if you will not, then it's probably correct to not now. (I tried to meditate on this in the footnotes to my post Hope and False Hope.) (Obviously most things aren't very subject to this belief-plan mixing, and things where we can separate beliefs from plans are very useful for building foundations, but some non-separable things are important, e.g. open-ended collaboration.)

This feels maybe related to a comment you Anna made in the conversation about Geoff seeming somewhat high on a dimension of manic-ness or something, and he said others have said he seems hypomanic. The story being, Geoff is more hopeful and hope-based in general, explaining why he sought collaboration, and caused collective hope in EA, and ended up feeling he had to defend his org's hope against hope-destroyers (which hope he referred to as "morale").

I kind of get the impression, based on public conversations, that some people (e.g. Eliezer) get stuck with disagreements because the real reasons for their beliefs are ideas that they don't want to spread, e.g. ideas abo

Geoff describes being harmed by some sort of initial rejection by the rationality/EA community (around 2011? 2010?).

One of the interesting things about that timeframe is that a lot of the stuff is online; here's the 2012 discussion (Jan 9th, Jan 10th, Sep 19th), for example. (I tried to find his earliest comment that I remembered, but I don't think it was with the Geoff_Anders account or it wasn't on LessWrong; I think it was before Leverage got started, and people responded pretty skeptically then also?)

6TekhneMakre3y
Thanks! One takeaway: Eliezer's interaction with Geoff does seem like Eliezer was making some sort of mistake. Not sure what the core is, but, one part is like conflating [evidence, the kind that can be interpersonally verified] with [evidence, the kind that accumulates subconsciously as many abstract percepts and heuristics, which can be observably useful while still pre-theoretic, pre-legible]. Like, maybe Eliezer wants to only talk with people where either (1) they already have enough conceptual overlap that abstract cutting-edge theories also always cash out as perceivable predictions, or (2) aren't trying to share pre-legible theories. But that's different from Geoff making some terrible incurable mistake of reasoning. (Terrible incurable mistakes are certainly correlated with illegibility, but that's not something to Goodhart.)

I'm sort of surprised that you'd interpret that as a mistake. It seems to me like Eliezer is running a probabilistic strategy, which has both type I and type II errors, and so a 'mistake' is something like "setting the level wrong to get a bad balance of errors" instead of "the strategy encountered an error in this instance." But also I don't have the sense that Eliezer was making an error.

0TekhneMakre2y
It sounds like this describes every strategy? I guess you mean, he's explicitly taking into account that he'll make errors, and playing the probabilities to get good expected value. So this makes sense, like I'm not saying he was making a strategic mistake by not, say, working with Geoff. I'm saying: sounds like he's conflating shareable and non-shareable evidence. Geoff could have seen a bunch of stuff and learned heuristics that he couldn't articulately express other than with silly-seeming "bright-line psychoanalytic rules written out in English". Again, it can make sense to treat this as "for my purposes, equivalent to being obviously wrong". But like, it's not really equivalent, you just *don't know* whether the person has hidden evidence.
3Taran2y
Even if all you have is a bunch of stuff and learned heuristics, you should be able to make testable predictions with them. Otherwise, how can you tell whether they're any good or not? Whether the evidence that persuaded you is sharable or not doesn't affect this.

For example, you might have a prior that a new psychotherapy technique won't outperform a control because you've read like 30 different cases where a leading psychiatrist invented a new therapy technique, reported great results, and then couldn't train anyone else to get the same results he did. That's my prior, and I suspect it's Eliezer's, but if I wanted to convince you of it I'd have a tough time because there's not really a single crux, just those 30 different cases that slowly accumulated. And yet, even though I can't share the source of my belief, I can use it to make concrete testable predictions: when they do an RCT for the 31st therapy technique, it won't outperform the control.

Geoff-in-Eliezer's-anecdote has not reached this point. This is especially bad for a developing theory: if Geoff makes a change to CT, how will he tell if the new CT is better or worse than the old one? Geoff-replying-to-Eliezer takes this criticism seriously, and says he can make concrete, if narrow, predictions about specific people he's charted.
2TekhneMakre2y
Certainly. But you might not be able to make testable predictions for which others will readily agree with your criteria for judgement. In the exchange, Geoff gives some "evidence", and in other places he gives additional "evidence". It's not really convincing to me, but it at least has the type signature of evidence. Eliezer responds: This is eliding that Geoff probably has significant skill in identifying more detail of how beliefs and goals interact, beyond just what someone would know if they heard about cognitive dissonance theory. Like basically I'm saying that if Eliezer sat with Geoff for a few hours through a few sessions of Geoff doing his thing with some third person, Eliezer would see Geoff behave in a way that suggests falsifiable understanding that Eliezer doesn't have. (Again, not saying he should have done that or anything.)
9Unreal3y
Well, the video is lost. But my friend Ben Pace (do you know him? he is great) was kind enough to take notes on what he said specifically in response to my question.

My question was something like: "Why do you think some people are afraid of retaliation from you? Have you made any threats? Have you ever retaliated against a Leverage associate?" This is not the exact wording but close enough. I used the words "spiteful, retaliatory, or punishing" so he repeats that in his answer.

I also explicitly told him he didn't have to answer any of these questions, like I wasn't demanding him to answer them.

I am pasting Geoff's response below.
7Unreal3y
Anna asked a relevant follow-up question. She said something like: I expect picketing to be [a more balanced response] because it's a public action. What about [non-public] (hidden) acts of retaliation?

I saw some of his reaction to this before my internet cut out again. (I think he could have used a hug in that moment... or maybe just me, maybe I could use a hug right now.) 😣

From the little glimpses I got (pretty much only during the first hour Q&A section), I got this sense (this is my own feelings and intuitions speaking):

* I did not sense him being 'in cooperate mode' on the object level, but he seemed to be 'picking cooperate' on a meta level. He was trying to act according to good principles. E.g. by doing the video at all, and the way he tried to answer Qs by saying only true things. He tried not to come from a defensive place.
* He seemed to keep to his own 'side of the street'. Did not try to make claims about others, did not really offer models of others, did not speculate. I think he may have also been doing the same thing with the people in the chat? (I dunno tho, I didn't see 90%.) Seems 'cleaner' to do it this way and avoids a lot of potential issues (like saying something that's someone else's to say). But meh, it's also too bad we didn't get to see his models about the people.
3BlueMarlin3y
I don't think it's bad of you. It seemed to me that he was deflecting or redirecting many of the points Anna was trying to get at.

Good stuff. Very similar to DeMille's interview about Hubbard. As an aside, I love how the post rejects the usual positive language about "openness to experience" and calls the trait what it is: openness to influence.

While I'm not hugely involved, I've been reading OB/LW since the very beginning. I've likely read 75% of everything that's ever been posted here.

So, I'm way more clued-in to this and related communities than your average human being and...I don't recall having heard of Leverage until a couple of weeks ago.

I'm not exactly sure what that means with regard to PR-esque type considerations.

However.  Fair or not, I find having read the recent stuff I've got an ugh field that extended to slightly include LW.  (I'm not sure what it means to "include LW"...it's just a website.  My first stab at an explanation is it's more like "people engaged in community type stuff who know IRL lots of other people who communicate on LW", but that's not exactly right either.)

I think it'd be good to have some context on why any of this is relevant to LessWrong. The whole thing is generating a ton of activity and it feels like it just came out of nowhere. 

Personally I think this story is an important warning about how people with a LW-adjacent mindset can death spiral off the deep end. This is something that has happened around this community multiple times, not just in Leverage (I know of at least one other prominent example and suspect there are more), so we should definitely watch out for this and/or think about how to prevent this kind of thing.

What's the other prominent example you have in mind?

I am referring to the cause of this incident. This seems like a possibly good source for more information, but I only skimmed it, so I don't vouch for the content.

7TekhneMakre3y
Thanks.

Leverage has always been at least socially adjacent to LW and EA (the earliest discussion I find is in 2012), and they hosted the earliest EA summits in 2013-2014 (before CEA started running EA Global).

4Dustin3y
Having seen it, I have a very vague recollection of maybe having read that at the time.  Still, the amount of activity on the recent posts about Leverage seems to me all out of proportion with previous mentions/discussions.  

Also, for the extended Leverage diaspora and people who are somehow connected, LessWrong is probably the most obvious place to have this discussion, even if people familiar with Leverage make up only a small proportion of people who normally contribute here.

There are other conversations happening on Facebook and Twitter but they are all way more fragmented than the ones here.

I originally chose LessWrong, instead of some other venue, to host the Common Knowledge post primarily because (1) I wanted to create a publicly-linkable document pseudonymously, and (2) I expected high-quality continuation of information-sharing and collaborative sense-making in the comments.

As someone part of the social communities, I can confirm that Leverage was definitely a topic of discussion for a long time around Rationalists and Effective Altruists. That said, often the discussion went something like, "What's up with Leverage? They seem so confident, and take in a bunch of employees, but we have very little visibility." I think I experienced basically the same exact conversation about them around 10 times, along these lines.

As people from Leverage have said, several Rationalists/EAs were very hostile around the topic of Leverage, particularly in the last ~4 years or so. (I've heard stories of people getting shouted at just for saying they worked at Leverage at a conference). On the other hand, they definitely had support by a few rationalists/EA orgs and several higher-ups of different kinds.

They've always been secretive, and some of the few public threads didn't go well for them, so it's not too surprising to me that they've had a small LessWrong/EA Forum presence.

I've personally very much enjoyed mostly staying away from the controversy, though very arguably I made a mistake there.

(I should also note that I had friends who worked at or worked close to Leverage, I attended like 2 events there early on, and I applied to work from there around 6 years ago)

3Evan_Gaensbauer3y
For what it's worth, my opinion is that you sharing your perspective is the opposite of making a mistake.

Sorry, edited. I meant that it was a mistake for me to keep away before, not now.

(That said, this post is still quite safe. It's not like I have scandalous information, more that, technically I (or others) could do more investigation to figure out things better.)

6Evan_Gaensbauer3y
Yeah. At this point, everyone coming together to sort this out, as a way of building a virtuous spiral where speaking up feels safe enough that it doesn't even need to be a courageous thing to do or whatever, is the kind of thing I think your comment also represents and what I was getting at.
[-]agc3y240

A 2012 CFAR workshop included "Guest speaker Geoff Anders presents techniques his organization has used to overcome procrastination and maintain 75 hours/week of productive work time per person." He was clearly connected to the LW-sphere if not central to it.

My own experience is somewhat like Linch's here, where mostly I'm vaguely aware of some things that aren't my story to tell.

For most of the past 9ish years I'd found Leverage "weird/sometimes-offputting, but not obviously moreso than other rationality orgs." I have gotten personal value out of the Leverage suite of memes and techniques (Belief Reporting was a particularly valuable thing to have in my toolkit). 

I've received one bit of secondhand info about "An ex-leverage employee (not Zoe) had an experience that seemed reasonable to describe as 'the bad kind of cult that was actually harmful'." I was told this as part of a decisionmaking process where it seemed relevant, and asked not to share it further in the past couple years. I think it makes sense to share this much meta-data in this context.

[-]farp3y130

Re: @Ruby on my brusqueness

LW/EA has more "world saving" orgs than just Leverage. Implicit to "world saving" orgs, IMO, is that we should tolerate some impropriety for the greater good. Or that we should handle things quietly in order to not damage the greater mission. 

I think that our "world saving" orgs ask a lot of trust from the broader community -- MIRI is a very clear example. I'm not really trying to condemn secrecy I am just pointing out that trust is asked of us.

I recognize that this is inflammatory but I don't see a reason to beat around the bush:
Leverage really seems like a cult. It seems like an unsafe institution doing harmful things. I am not sure how much this stuff about Leverage is really news to people involved in our other "world saving" orgs. I think probably not much. I don't want "world saving" orgs to have solidarity. If you want my trust you have to sell out the cult leaders, the rapists, etcetera, regardless of whether it might damage your "world saving" mission. I'm not confident that that's occurring.

[-]Ruby3y120

IMO, is that we should tolerate some impropriety for the greater good.

I agree!

I am just pointing out that trust is asked of us.

I agree!

Leverage really seems like a cult. It seems like an unsafe institution doing harmful things.

Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0 (remote team, focus on science history rather than psychology, 4 people).

I am not sure how much this stuff about Leverage is really news to people involved in our other "world saving" orgs.

The information in Zoe's Medium post was significant news to me and others I've spoken to. 

(saying the below for general clarity, not just in response to you)

I think everyone (?) in this thread is deeply concerned, but we're hoping to figure out what exactly happened, what went wrong and why (and what maybe to do about it). To do that investigation and postmortem, we can't skip to sentencing (forgive me if that's not your intention, but it reads a bit to me that that's what you want to be happening), nor would it be epistemically virtuous or just to do so. 

Some major new information came to light, people need time to process it, surface other releva... (read more)

To do that investigation and postmortem, we can't skip to sentencing (forgive me if that's not your intention, but it reads a bit to me that that's what you want to be happening), nor would it be epistemically virtuous or just to do so. 

I super agree with this, but also want to note that I feel appreciation for farp's comments here. The conversation on this page feels to me like it has a missing mood: I found myself looking for comments that said something like "wow, this account is really horrifying and tragic; we're taking these claims really seriously, and are investigating what actions we should take in response". Maybe everyone thinks that that's obvious, and so instead is emphasizing the part where we're committed to due process and careful thinking and avoiding mob dynamics. But I think it's still worth stating explicitly, especially from those in leadership positions in the community. I found myself relieved just reading Ruby's response here that "everyone in this thread is deeply concerned".

[-]Ruby3y210

I super agree with this, but also want to note that I feel appreciation for farp's comments here.

Fair!

I found myself looking for comments that said something like "wow, this account is really horrifying and tragic; we're taking these claims really seriously, and are investigating what actions we should take in response"

My models of most of the people I know in this thread feel that way. I can say on my own behalf that I found Zoe's account shocking. I found it disturbing to think that was going on with people I knew and interacted with. I find it disturbing that, if this really is true, it did not surface until now. (Or was ignored until now?) I'm disturbed that Leverage's weirdness (and usually I'm quite okay with weirdness) turned out to enable and hide terrible things, at least for one person and likely more. I'm saddened that it happened, because based on the account, it seems like Leverage were trying to accomplish some ambitious, good things, and I wish we lived in a world where the "red flags" (group-living, mental experimentation, etc.) could be safely ignored in the service of great things.

Suddenly I am in a world more awful than the one I thought I was in, and I'm trying to reorient. Something went wrong and something different needs to happen now. Though I'm confident it will, it's just a matter of ensuring we pick the right different thing. 

Thank you, I really appreciate this response. I did guess that this was probably how you and others (like Anna, whose comments have been very measured) felt, but it is really reassuring to have it explicitly verbally confirmed, and not just have to trust that it's probably true.

Sorry, only just now saw that I was mentioned by name here. I agree that Zoe's experiences were horrifying and sad, and that it's worth quite a bit to try to spare others that kind of thing. Not mangling peoples' souls matters, rather a lot, both intrinsically (because people matter) and instrumentally (because we need integrity if we want to do anything real and sustained).

3Rob Bensinger3y
+1
[-]farp3y150

The information in Zoe's Medium post was significant news to me and others I've spoken to. 

That's a good thing to assert. 
It seems preeeetty likely that some leaders in the community knew more or less what was up. I want people to care about whether that is true or not.

To do that investigation and postmortem, we can't skip to sentencing

I get this sentiment, but at the same time I think it's good to be clear about what is at stake. It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us. 

Simply put, if I were a victim, I would want to speak up for the sake of accountability, not shared examination and learning. If I spoke up and found that everyone agreed the behavior was bad, but we all learned from it and are ready to move on, I would be pretty upset by that. And my understanding is that this is how the community's leaders have handled other episodes of abuse (based on 0 private information, only public / second hand information).

But I am coming into this with a lot of assumptions as an outsider. If these assumptions don't resonate with people who are closer to the situation, then I apologize. Regardless, sorry for stirring shit up with not much concrete to say.

It's easy for me to interpret comments like "Reminder that Leverage 1.0 is defunct and it seems very unlikely that the same things are going on with Leverage 2.0" as essentially claiming that, while post-mortems are useful, the situation is behind us.

Given my high priors on "past behavior is the best predictor of future behavior", I would assume that the greatest difference will be better OPSEC and PR. Also, more resources to silence critics.

8Ruby3y
I would be quite surprised if the people I would call leaders knew of things that were as severe as Zoe's account and "did nothing". I care a lot whether that's true.

My intention was to say that we don't have reason to believe there is harm actively occurring right now that we need to intervene on immediately. A day or two to figure things out is fine.

Based on what Zoe said plus general models of these situations, I believe how victims feel is likely complicated. I'm hesitant to make assumptions here. (Btw, see here for where some people are trying to set up an anonymous database of experiences at Leverage).

I might suggest creating another post (so as to not interfere too much with this one) detailing what you believe to be the case so that we can discuss and figure out any systematic issues.
6farp3y
Look uhhh I believe at the very least the most basic claims about how Anna handled Robert Lecnik. 👍 (non sarcastic)
2philh3y
(This renders on my phone as an o with a not-umlaut-but-similar over it followed by a D, and I don't know whether that's what it was intended to look like and I just don't know what it means, or if it's intended to look different than that.)
3farp3y
It's a thumbs-up emoji on macOS. 👍
2ChristianKl3y
Having a database run by an anonymous person for that purpose seems to be very questionable. Zoe's edited her post to reference Aella as a point person for people who want to share their stories, so that's likely the best place.
[-]Ruby3y110

That is the database run by Aella. By anonymous I meant it's anonymous for the posters.

6farp3y
That's my context. However I agree that my contributions haven't been very high EV in that I'm very far on the outside of a delicate situation and throwing my weight around. So I won't keep trying to intervene / subtextually post.
4Dustin3y
On one level I think this is correct, but... I also think it's possibly a little naïve.

In the potential world which consists of only "us", the people who think this world saving needs done, and who think like "we" do, your statement becomes more true. In the world we live in, wherein the vast majority of people think the world saving we're talking about is unimportant, or bad, or evil, your statement requires closer and closer to perfect secrecy and insularity to remain true.