I've spoken to people recently who were unaware of some basic facts about Leverage Research 1.0; facts that are more-or-less "common knowledge" among people who spent time socially adjacent to Leverage, and are not particularly secret or surprising in Leverage-adjacent circles, but aren't publicly attested in any one place.

Today, Geoff Anders and Leverage 2.0 are moving into the "Progress Studies" space, and seeking funding in this area (for example, Geoff recently received a small grant from Emergent Ventures). This seems like an important time to contribute to common knowledge about Leverage 1.0.

You might conclude that I'm trying to discredit people who were involved, but that's not my aim here. My friends who were involved in Leverage 1.0 are people whom I greatly respect. Rather, I just keep being surprised that people haven't heard certain specific, more-or-less legible facts about the past that seem well-known or obvious to me, and that I feel should be taken into account when evaluating Leverage as a player in the current landscape. I would like to create here a publicly-linkable document containing these statements.

Facts that are common knowledge among people I know:

  • Members of Leverage 1.0 lived and worked in the same Leverage-run building, an apartment complex near Lake Merritt. (Living there was not required, but perhaps half the members did, and new members were particularly encouraged to.)

  • Participation in the project involved secrecy / privacy / information-management agreements. People were asked to sign an agreement that prohibited publishing almost anything (for example, in one case someone I know was sternly reprimanded for starting a personal blog on unrelated topics without permission).

  • Geoff developed a therapy technique, "charting". He says he developed it based on his novel and complete theory of psychology, called "Connection Theory". In my estimation, "charting" is in the same rough family of psychotherapy techniques as Internal Family Systems, Coherence Therapy, Core Transformation, and similar. Like those techniques, it leads to shifts in clients' beliefs and moods. I know people from outside Leverage who did charting sessions with a "coach" from Paradigm Academy, and reported it helped them greatly. I've also heard people who did lots of charting within Leverage report that it led to dissociation and fragmentation that they have found difficult to reverse.

  • Members who were on payroll were expected to undergo charting/debugging sessions with a supervisory "trainer", and to "train" other members. The role of trainer is something like "manager + therapist": that is, both "is evaluating your job performance" and "is doing therapy on you".

  • Another type of practice done at the organization, and offered to some people outside the organization, was "bodywork", which involved physical contact between the trainer and the trainee. "Bodywork" could in other contexts be a synonym for "massage", but that's not what's meant here; descriptions I heard of sessions sounded to me more like "energy work". People I've spoken to say it was reported to produce deeper and less legible change.

  • Using psychological techniques to experiment on one another, and on the "sociology" of the group itself, was a main purpose of the group. It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one's belief structure, and experimental group dynamics.

  • The stated purpose of the group was to discover more theories of human behavior and civilization by "theorizing", while building power, and then literally take over US and/or global governance (the vibe was "take over the world"). The purpose of gaining global power was to lead to better coordination and better outcomes for humanity.

  • The narrative within the group was that they were the only organization with a plan that could possibly work, and the only real shot at saving the world; that outside the organization, there was no possibility of succeeding at one's goal of saving the world.

  • Many in the group felt that Geoff was among the best and most powerful "theorists" in the world. Geoff's power and prowess as leader was a central theme.

  • Paradigm Academy is a for-profit entity, and Leverage is a non-profit entity. Both were part of “the ecosystem”, which was the Geoff-led project behind Paradigm and Leverage. Reserve (a cryptocurrency) was founded by ecosystem members, with a goal of raising money for Leverage/Paradigm.

  • [substantial edits, moved to end of list] Geoff, as the leader of the organization, dated employees/subordinates. I'm aware of 3 women he had a sexual or otherwise non-platonic relationship with over the course of 10 years. I have no reason to believe these were non-consensual; I view these as questionable management decisions, not necessarily tangible harms. I refer people to Larissa's comment. The specific section on "Dating policies" is clear, stated by a formal spokesperson for the organization, and accords with my understanding. I do not have evidence of any further pattern of non-platonic interactions with employees. I am glad that the nonexistence of any policy on dating within the reporting chain of the organization is now a matter of official record.

    • [annoyed, editorializing] I almost regret including this bullet point, as I feel it is drastically changing what people are modeling Leverage as, in an overall inaccurate direction. I feel if I removed it, this post would give a more accurate overall impression.

Why these particular facts?

One reason I feel it is important to make these particular facts more legibly known is that they pertain to the characteristics of a "high-demand group" (a more specific term than "cult", since people claim all kinds of subcultures and ideologies are a "cult").

You can compare some of the above bullets with the ICSA checklist of characteristics: https://www.icsahome.com/articles/characteristics.

There are many good reasons to structure groups in ways that have some of these characteristics, and to get involved in groups that have these characteristics. But it alarms me if the presence of these characteristics is simply not known by people interacting with Geoff or with Leverage 2.0 in its new and updated mission, and so this information is not taken into account in an evaluation.

How I know these things

Between 2016 and 2018 I became friends with a few Leverage members. I do not feel I was harmed by Leverage in any substantive way. None of the facts above are things that I got from a single point-of-contact; everything I state above is largely already known among people who were socially adjacent to Leverage when I was around.

Focus on structural properties, not impacts or on-net "worth-it-ness"

I try to focus my points above on structural facts about how the organization was set up, rather than what the result was.

I know former members who feel severely harmed by their participation in Leverage 1.0. I also know former members who view Leverage 1.0 as having been a deeply worthwhile experiment in world-improving. I don't think it's even remotely clear how "good" or "bad" the on-net impact of Leverage 1.0 was, and I don't aim here to speak to that. Nor do I aim to judge whether that organization structure was, or was not, "worth trying" because of the potential of "enormous upside".

I do worry about "ends justify the means" reasoning when evaluating whether a person or project was or wasn't "good for the world" or "worth supporting". This seems especially likely when using an effective-altruism-flavored lens in which only a few people/organizations/interventions matter orders of magnitude more than others. If one believes that a project is one of very few that could possibly matter, that the future of humanity is at stake, and that the project is doing something new and experimental that current civilization is inadequate for, there is a risk of using those beliefs to extend unwarranted tolerance to structurally unsound organizational decisions, including those typical of "high-demand groups" (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on), without proportionate concern for the risks of structuring an organization that way.

Going forward

I'm posting this anonymously because, at the moment, this is all I have to say and I don't want to discuss the topic at length. Also, I don't want to become known as someone saying things this organization might find unflattering. If you happen to know who wrote this post, please don't spread that knowledge. I have asked in advance for a LW moderator to vouch in a comment that I'm someone known to them, who they broadly trust to be epistemically reasonable, and to have written good posts in the past.

If anyone would like to share other information about Leverage 1.0, feel free to do so in the comments section.

213 comments
Aella:

Here's a long, detailed account of a Leverage experience which, to me, reads as significantly more damning than the above post: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b
 

Miscellaneous first-pass thoughts:

Geoff had everyone sign an unofficial NDA upon leaving agreeing not to talk badly about Leverage

I really don't like this. Could I see the NDA somehow? If the wording equally forbids sharing good and bad stuff about Leverage, then I'm much less bothered by this. Likewise if the wording forbids going into certain details, but lets former staff criticize Leverage at a sufficient level of abstraction.

Otherwise, this seems very epistemically distorting to me, and in a direction that things already tend to be distorted (there's pressure against people saying bad stuff about their former employer). How am I supposed to form accurate models of Leverage if former employees can't even publicly say 'yeah, I didn't like working at Leverage'??

One of my supervisors would regularly talk about this as a daunting but inevitable strategic reality (“obviously we’ll do it, and succeed, but seems hard”).

"It" here refers to 'taking over the US government', which I assume means something like 'have lots of smart aligned EAs with very Leverage-y strategic outlooks rise to the top decision-making ranks of the USG'. If I condition on 'Leverage staff have a high probability ... (read more)

???? I'm so confused about what happened here. The aliens part (as stated) isn't a red flag for me, but the Kant thing seems transparently crazy to me. I have to imagine there's something being lost in translation here, and missing context for why people didn't immediately see that this person was having a mental breakdown?

FWIW, my own experience is that people often miss fairly blatant psychotic episodes; so I'm not sure how Leverage-specific the explanation needs to be for this one. For example, once I came to believe that an acquaintance was having a psychotic episode and suggested he see a psychiatrist; the psychiatrist agreed. A friend who'd observed most of the same data I had asked me how I'd known. I said it was several things, but that the bit where our acquaintance said God was talking to him through his cereal box was one of the tip-offs from my POV. My friend's response was "oh, I thought that was a metaphor." I know several different stories like this one, including a later instance where I was among those who missed what in hindsight was fairly blatant evidence that someone was psychotic, none of which involved weird group-level beliefs or practices.

I'd guess that the people in question had a mostly normal air to them during the episode, just starting to say weird things?

Most people's conception of a psychotic episode probably involves a sense of the person acting like a stereotypical obviously crazy person on the street. Whereas if it's someone they already know and trust, just acting slightly more eccentric than normal, people seem likely to filter everything the person says through a lens of "my friend's not crazy so if they do sound crazy, it's probably a metaphor or else I'm misunderstanding what they're trying to say".

AnnaSalamon:
Yes.

???? I'm so confused about what happened here. The aliens part (as stated) isn't a red flag for me, but the Kant thing seems transparently crazy to me. I have to imagine there's something being lost in translation here, and missing context for why people didn't immediately see that this person was having a mental breakdown?

I would imagine that other people saw his relationship to Kant as something like Kant being a Shoulder Advisor, maybe with additional steps to make it feel more real.

In an environment where some people do seances and use crystals to clean negative energy, they might have thought that if you believe in the realness of rituals, things get more effective. So someone who manages to get into the position of literally believing they are talking to Kant, instead of just to some mental abstraction of Kant, would be seen as more powerful.

I do think they messed up here by not understanding why truth is valuable, but I can see how things played out that way.

If I condition on 'Leverage staff have a high probability of succeeding here', then I could imagine that a lot of the factors justifying confidence are things that I don't know about (e.g., lots of people already in high-ranking positions who are quietly very Leverage-aligned). But absent a lot of hidden factors like that, this seems very overconfident to me, and I'm surprised if this really was a widespread Leverage view.

They seem to have believed that they could turn people into having Musk-level competence. A hundred people with Musk-level competence might execute a plan like the one Cummings proposed to successfully take over the US government.

If they really could transform people in that way, that might be reasonable. Stories like Zoe's, however, suggest that they didn't really have that ability, and that instead their experiments dissolved into strange infighting and losing touch with reality.

ChristianKl:
Interestingly, my comment further down that asks for details about the information-sharing practices has very few upvotes ( https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=qqAFyqZrfAdHsuBz4 ). It seems like most people reading this thread are more interested in upvoting judgements than in requests for information.
PhilGoetz:
To me, saying that someone is a better philosopher than Kant seems less crazy than saying that saying that someone is a better philosopher than Kant seems crazy.

Isn't the thing Rob is calling crazy that someone "believed he was learning from Kant himself live across time", rather than believing that e.g. Geoff Anders is a better philosopher than Kant?

Yeah, I wasn't talking about the 'better than Kant' thing.

Regarding the 'better than Kant' thing: I'm not particularly in awe of Kant, so I'm not shocked by the claim that lots of random people have better core philosophical reasoning skills than Kant (even before we factor in the last 240 years of philosophy, psychology, etc. progress, which gives us a big unfair advantage vs. Kant).

The part I'm (really quite) skeptical of is "Geoff is the best philosopher who’s ever lived". What are the major novel breakthroughs being gestured at here?

Linch:
It's more crazy after you load in the context that people at Leverage think Kant is more impressive than e.g. Jeremy Bentham.

CFAR recently hosted a “Speaking for the Dead” event, where a bunch of current and former staff got together to try to name as much as we could of what had happened at CFAR, especially anything that there seemed to have been (conscious or unconscious) optimization to keep invisible.

CFAR is not dead, but we took the name anyhow from Orson Scott Card’s novel by the same name, which has quotes like:

“...and when their loved ones died, a believer would arise beside the grave to be the Speaker for the Dead, and say what the dead one would have said, but with full candor, hiding no faults and pretending no virtues.”

“A strange thing happened then. The Speaker agreed with her that she had made a mistake that night, and she knew when he said the words that it was true, that his judgment was correct. And yet she felt strangely healed, as if simply saying her mistake were enough to purge some of the pain of it. For the first time, then, she caught a glimpse of what the power of speaking might be. It wasn’t a matter of confession, penance, and absolution, like the priests offered. It was something else entirely. Telling the story of who she was, and then realizing that she was no longer th

...

I felt strong negative emotions reading the above comment.

I think that the description of CFAR’s recent speaking-for-the-dead leaves readers feeling positive and optimistic and warm-fuzzy about the event, and about its striving for something like whole truth.

I do believe Anna's report that it was healing and spacious for those who were there, and I share Anna's hope that something similarly good can happen re: a Leverage conversation.

But I think I see the description of the event as trying to say something like “here’s an example of the sort of good thing that is possible.”

And I wanted anyone updating on that particular example to know that I was invited to the event, and declined the invitation, explaining that I genuinely could not cause myself to believe that I was actually welcome, or that it would be safe for me to be there.

This is a fact about me, not about the event.  But it seems relevant, and I believe it changes the impression left by the above comment to be more accurate in a way that feels important.

(I was not the only staff alumnus absent, to be clear.)

I ordinarily would not have left this comment at all, because it feels dangerously ... out of control, or somethi...

The former curriculum director and head-of-workshops for the Center For Applied Rationality would not be welcome or safe at a CFAR event?

What the **** is going on?

It sounds to me like mission failure, but I suppose it could also just be eccentric people not knowing how to get along (which isn't so much different?) 😕

[DEACTIVATED] Duncan Sabien:
It's not just people not knowing how to get along. I am trying to navigate between Scylla and Charybdis, here; trying to adhere to normal social norms of live-and-let-live and employers and employees not badmouthing each other without serious justification and so forth.  Trying to be honest and candid without starting social wars. But it's not just people not knowing how to get along.  It's something much closer to the gestalt of this comment, although please note that I directly replied to that comment with a lot of disagreements on the level of fact.
Spiracular:
I had to read this a few times before I pieced it together, so I wanted to make sure to clarify this publicly. You are NOT saying this public forum is the place for that. Correct? You are proposing that it might be nice, if someone else pulled this together? Perhaps as something like a carefully-moderated facebook group, or an event.

(I think this would require a good moderator, or it will generate more drama than it solves. It would have to be someone who does NOT have "Leverage PR firm vibes," and needs a lot of early clarity about who will not be invited. Also? Work out early what your privacy policy is! And be clear about how much it intends to be reports-oriented or action-oriented, and do not change that status later. People sometimes make these mistakes, and it's awful.)

Because on the off-chance that you didn't mean that... I did have some contact with the Leverage strangeness here. But despite that, I have remarkably few social ties that would keep me from "saying what I think about it." I still feel seriously reluctant to get into it, on a public forum like this. I imagine that some others would have an even harder time.

That's right; I am daydreaming of something very difficult being brought together somehow, in person or in writing (probably slightly less easily-visible-across-the-whole-internet writing, if in writing). I’d be interested in helping but don’t have the know-how on my own to pull it off. I agree with you there’re lots of ways to try this and make things worse; I expect it's key to have very limited ambitions and to be clear about how very much one is not attempting/promising.

TekhneMakre:
This is an agreeable target, and also, it seems like we have to keep open hypotheses under which many kinds of detail are systematically not shared. E.g., if someone spent some years self-flagellating for remembering details that would contradict a narrative, those details might have not fully crystallized into verbalizable memories. So more detail is better, of course, but assuming that the ("default") asymptote of more detail will be sufficient for anything is fraught, not that anyone made that assumption.

I vouch that this person is both a LW user who has written IMO some good posts and a member of in-person rationalist/longtermist/EA communities who is in good standing.

Edit: This comment is not meant as an endorsement (nor is this a disendorsement) of the content of the post. I generally support LWers and rationalists being able to post pseudonymously and have their identity as longstanding members of the various communities verified.

Using psychological techniques to experiment on one another, and on the "sociology" of the group itself, was a main purpose of the group. It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one's belief structure, and experimental group dynamics.

The Pareto program felt like it had substantial components of this type of social/psychological experimentation, but participants were not aware of this in advance and did not give informed consent. Some (maybe most?) Pareto fellows, including me, were not even aware that Leverage was involved in any way in running the program until they arrived, and found out they were going to be staying in the Leverage house.

CEA regards it as one of our mistakes that the Pareto Fellowship was a CEA program, but our senior management didn't provide enough oversight of how the program was being run. To Beth and other participants or applicants who found it misleading or harmful in some way - we're sorry.

Why doesn't the mistake page say anything about Leverage being involved with the Pareto Fellowship? Is that a statement that this part wasn't seen as a mistake?

Sorry I missed this - we're working on a couple of updates to the mistakes page, including about this. I can let you know once the new text is up.

The new text is finally up: https://www.centreforeffectivealtruism.org/our-mistakes

Do you have a link for more description of the Pareto program?

The basic outline is:

  • There were ~20 Fellows, mostly undergrad-aged, with one younger and a few older.

  • Fellows stayed in the Leverage house for ~3 months in summer 2016 and did various trainings, followed by a mentored project to apply what they had learnt from the trainings.

  • Training was mostly based on Leverage ideas but also included fast-forward versions of the CFAR workshop and the 80k workshop. Some of the content was taught by Leverage staff and some by CEA staff who were very 'in Leverage's orbit'.

  • I think most fellows felt that it was really useful in various ways, but also weird and sketchy and maybe harmful in various other ways.

  • Several fellows ended up working for Leverage afterwards; the whole thing felt like a bit of a recruiting drive.

Hi all, former Leverage 1.0 employee here. 

The original post and some of the comments seem epistemically low quality to me compared to the typical LessWrong standard. In particular, on top of a lot of insinuations, there are some false facts. This seems especially problematic given that the post is billed as common knowledge. 

There’s a lot of dispute and hate directed towards Leverage, which frankly, has made me hesitant to defend it online. However, a friend of mine in the community recently said something to the effect of, “Well, no former Leverage employee has ever defended it on the attack posts, which I take as an indication of silent agreement.” 

That rattled me and so I’ve decided to weigh in. I typically stay quiet about Leverage online because I don’t know how to say nuanced or positive things without fear of that blowing back on me personally. For now, I’d ask to remain anonymous, but if it ever seems like people are willing to approach the Leverage topic differently, I intend to put my name on this post. I don’t expect my opinion alone (especially anonymously) to substantially change anything, but I hope it will be considered and incorporated into a coheren...

Thank you for this.

In retrospect, I could've done more in my post to emphasize:

  1. Different members report very different experiences of Leverage.

  2. Just because these bullets enumerate what is "known" (and "we all know that we all know") among "people who were socially adjacent to Leverage when I was around", does not mean it is 100% accurate or complete. People can "all collectively know" something that ends up being incomplete, misleading, or even basically false.

I think my experience really mismatched the picture of Leverage described by OP.

I fully believe this.

It's also true that I had at least 3 former members, plus a large handful of socially-adjacent people, look over the post, and they all affirmed that what I had written was true to their experience; fairly obvious or uncontroversial; and they expected would be held to be true by dozens of people. Comments on this post attest to this, as well.

I don't advocate for an epistemic standard in which a single person, doing anything less than a singlehanded investigative journalistic dive, is expected to do more than that, epistemic-verification-wise, before sharing their current understanding publicly and soliciting more i...

I don't advocate for an epistemic standard in which a single person, doing anything less than a singlehanded investigative journalistic dive, is expected to do more than that, epistemic-verification-wise, before sharing their current understanding publicly and soliciting more information in the comments.

Sure, but you called the post “Common Knowledge Facts”. If you’d called the post “Me and my friends’ beliefs about Leverage 1.0” or “Basic claims I believe about Leverage 1.0”, then that would IMO be a better match for the content and less of a claim to universality (i.e., that everyone should assume the content of the post as consensus and only question it if strong counter-evidence comes in).

Right now, for someone to disagree with the post, they’re in a position where they’re challenging the “facts” of the situation that “everyone knows”. In contrast I think the reality is that if people bring forward their personal impressions as different to the OP, this should in large part be treated as more data, and not a challenge.

Completely fair. I've removed "facts" from the title, and changed the sub-heading "Facts I'd like to be common knowledge" (which in retrospect is too pushy a framing) to "Facts that are common knowledge among people I know".

I totally and completely endorse and co-sign "if people bring forward their personal impressions as different to the OP, this should in large part be treated as more data, and not a challenge."

Appreciate you editing the post, that seems like an improvement to me.

Ruby:
It feels like the "common knowledge" framing is functioning as some form of evidence claim? "Evidence for the truth of these statements is that lots of people believe them." And if it's true that lots of people believe them, that is legitimate Bayesian evidence. At the same time, it's kind of hard to engage with, and I think saying "everyone knows" makes it feel harder to argue with.

A framing I like (although I'm not sure it entirely helps here with ease of engagement) is the "this is what I believe and how I came to believe it" approach, as advocated here. So you'd start off with "I believe Leverage Research 1.0 has many of the properties of a high-demand group such as", proceeding to "I believe this because of X things I observed and Y things that I heard and were corroborated by groups A and B", etc.

I appreciate hearing clearly what you'd prefer to engage with.

I also feel that this response doesn't adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people's desire for privacy.

( ... which makes me feel sad, discouraged, and frustrated. It comes across as "why didn't you just say X", when there are in fact strong reasons why I couldn't "just" say X.)

By "tactically adversarial", I mean that Geoff has an incredibly strong incentive to suppress clarity, and make life harder for people contributing to clarity. Zoe's post goes into more detail about specific fears.

By "desire for privacy", I mean I can't publicly lay out a legible map of where I got information from, or even make claims that are specific enough that they could've only come from one person, because the first-hand sources do not want to be identifiable.

Unlike former members, Pareto fellows, workshop attendees, and other similar commenters here, I did not personally experience anything first-hand that is "truly mine to share".

It was very difficult for me to create a document that I felt comfortable making public, without feeling I was compromising the identity of any primary...

Ruby:

I'm very sorry. Despite trying to closely follow this thread, I missed your reply until now.

I also feel that this response doesn't adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people's desire for privacy.

You're right, it doesn't. I wasn't that aware or thinking about those elements as much as I could have been. Sorry for that.

It was very difficult for me to create a document that I felt comfortable making public...

It makes sense now that this is the document you ended up writing. I do appreciate you went to the effort to write up a critical document to bring important concerns. It is valuable and important that people do so.

My hope is that inch by inch, step by step, more and more truth and clarity can come out, as more and more people become comfortable sharing their personal experience.

Hear, hear.

--

If you'll forgive me suggesting again what you should have written, I'm thinking the adversarial context might have been it. If I had read that you were aware of a number of severe harms that weren't publicly known, but that you couldn't say anything more specific because of fears of retribution and the need to protect privacy–that would have been a large and important update to me regarding Leverage. And it might have got a conversation going into the situation to figure out whether and what information was being suppressed.

But it's easier to say that in hindsight.

Thanks, this all helps. At the time, I felt that writing this with the meta-disclosures you're describing would've been a tactical error. But I'll think on this more; I appreciate the input, it lands better this time.

I did write both "I know former members who feel severely harmed" and "I don't want to become known as someone saying things this organization might find unflattering". But those are both very, very understated, and purposefully de-emphasized.

Another former Leverage employee here. I agree with the bullet points in Prevlev's post. And my experience of Leverage broadly matches theirs.

This is great, and straightforward, and I’m glad you joined the conversation. Thank you.

It would be useful to have a clarification of these points, to know how different of an org you actually encountered, compared to the one I did when I (briefly) visited in 2014.

It is not true that people were expected to undergo training by their manager.

OK, but did you have any assurance that the information from charting was kept confidential from other Leveragers? I got the impression Geoff charted people he raised money from, for example, so it at least raises the question of whether information gleaned from debugging might be discussed with that person's manager.

“being experimented on” was not my primary purpose in joining nor would I now describe it as a main focus of my time at Leverage. 

OK, but would you agree that a primary activity of Leverage was to do psych/sociology research, and that a major (>=50%) methodology for that was self-experimentation?

I did not find the group to be overly focused on “its own sociology.”

OK, but would you agree that at least ~half of the group spent at least ~half of their time studying psychology and/or sociology, using the group as subjects?

The stated purpose of Leverage 1.0 was not to literally take over the US and/or global governance

…

+1 for the detail. Right now there's very little like this explained publicly (or accessible in other ways to people like myself). I found this really helpful.

I agree that the public discussion on the topic has been quite poor.

This is subjective and all, but I met Geoff Anders at our 2012 CFAR workshop, I absolutely had the "this person wants to be a cult leader" vibe from him then, and I've been telling people as much for the entire time since. (To the extent of hurting my previously good friendships with two increasingly-Leverage-enmeshed people, in the mid-2010s.)

I don't know why other people's cult-leader-wannabe-detectors are set so differently from mine, but it's a similar (though less deadly) version of how I quickly felt about a certain person [don't name him, don't summon him] who's been booted from the Berkeley community for good reason.

He's also told me, deadpan, that he would like to be starting a cult if he wasn't running Leverage.

matt (3y, 6 points):

I've read this comment several times, and it seems open to interpretation whether RyanCarey is mocking orthonormal for presenting weak evidence by presenting further obviously weak evidence, or whether RyanCarey is presenting weak evidence believing it to be strong.

Just to lean on the scales a little here, towards readers taking from these two comments (Ryan's and orthonormal's) what I think could (should?) be taken from them… An available interpretation of orthonormal's comment is that orthonormal:

1. had a first impression of Geoff that was negative,
2. then backed that first impression so hard that they "[hurt their] previously good friendships with two increasingly-Leverage-enmeshed people" (which seems to imply: backed that first impression against the contrary opinions of two friends who were in a position to gather increasingly overwhelmingly more information by being in a position to closely observe Geoff and his practices),
3. while telling people of their first impression "for the entire time since" (for which, absent other information about orthonormal, it is an available interpretation that orthonormal engaged in what could be inferred to be hostile gossip based on very little information and in the face of an increasing amount of evidence (from their two friends) that their first impression was false (assuming that orthonormal's friends were themselves reasonable people)).
4. (In this later comment) orthonormal then reports interacting with Geoff "a few times since 2012" (and reports specific memory of one conversation, I infer with someone other than Geoff, about orthonormal's distrust of Leverage) (for which it is an available interpretation that orthonormal gathered much less information than their "Leverage-enmeshed" friends would have gathered over the same period, stuck to their first impression, and continued to engage in hostile gossip).

Those who know orthonormal may know that this interpretation is unreasonable given their knowledge o…

As in, 5+ years ago, around when I'd first visited the Bay, I remember meeting up 1:1 with Geoff in a cafe. One of the things I asked, in order to understand how he thought about EA strategy, was what he would do if he wasn't busy starting Leverage. He said he'd probably start a cult, and I don't remember any indication that he was joking whatsoever. I'd initially drafted my comment as "he told me, unjokingly", except that it's a long time ago, so I don't want to give the impression that I'm quite that certain.

accumulated 30 points of karma from what seems to me to be… unimpressive as presented?

I upvoted on the value of the comment as additional source data (IIRC when the comment had much lower karma). This value shouldn't be diminished by questionable interpretation/attitude bundled with it, since the interpretation can be discarded, but the data can't be magicked up.

This is a general consideration that applies to communications that provoke a much stronger urge to mute them, for example those that defend detestable positions. If such communications bring you new relevant data, even data that doesn't significantly change your understanding of the situation, they are still precious, the effects of processing them and not ignoring them sum up over all such instances. (I think the comment to this post most rich in relevant data is prevlev-anon's, which I strong-upvoted.)

[DEACTIVATED] Duncan Sabien (3y, 7 points):

This makes sense to me in my first pass of thinking about it, and I agree.

There's something subtle and extremely hard to pull off (perhaps impossible) in: "in the wishing world, what do we think a shared voting policy should be, such that the aggregate of everyone voting consistently according to that policy leaves all comments in approximately the same order that a single extremely perceptive and high-quality reasoner would rank them?" As opposed to comments just trending toward infinities.
Vladimir_Nesov (3y, 5 points):
This works out for the earlier top level comments (that see similar voter turnout), the absolute numbers just scale with popularity of the post. If something is not in its place in your ideal ranking, it's possible to use the vote to move it that way. Vote weights do a little bit to try improving the quality (or value lock-in) of the ranking. One issue with the system is the zero equilibrium on controversial things, with the last voters randomly winning irrespective of the actual distribution of opinion. It's unclear how to get something more informative for such situations, but this should be kept in mind as a use case for any reform.
matt (3y, 2 points):

I'm trying to apply the ITT to your position, and I'm pretty sure I'm failing (and for the avoidance of doubt I believe that you are generally very well informed, capable and are here engaging in good faith, so I anticipate that the failing is mine, not yours). I hope that you can help me better understand your position:

My background assumptions (not stated or endorsed by you): Conditional on a contribution (a post, a comment) being all of (a) subject to a reasonably clear interpretation (for the reader alone, if that is the only value the reader is optimising for, or otherwise for some (weighted?) significant portion of the reader community), (b) with content that is relevant and important to a question that the reader considers important (most usually the question under discussion), and (c) that is substantially true, and it is evident that it is true from the content as it is presented (for the reader alone, or the reader community), then…

My agreement with the value that I think you're chasing: … I agree that there is at least an important value at stake here, and the reader upvoting a contribution that meets those conditions may serve that important value.

Further elaboration of my background assumptions: If (a) (clear interpretation) is missing, then the reader won't know there's value there to reward, or must (should?) at least balance the harms that I think are clear from the reader or others misinterpreting the data offered. If (b) (content is relevant) is missing, then… perhaps you like rewarding random facts? I didn't eat breakfast this morning. This is clear and true, but I really don't expect to be rewarded for sharing it. If (c) (evident truth) is missing, then either (not evident) you don't know whether to reward the contribution or not, or (not true) surely the value is negative?

My statement of my confusion: Now, you didn't state these three conditions, so you obviously get to reject my claim of their importance… yet I've pretty roundly convinc…

There is an important class of claims detailed enough to either be largely accurate or intentional lies, their distortion can't be achieved with mere lack of understanding or motivated cognition. These can be found even in very strange places, and still be informative when taken out of context.

The claim I see here is that orthonormal used a test for dicey character with reasonable precision. The described collateral damage of just one positive reading signals that it doesn't trigger all the time, and there was at least one solid true positive. The wording also vaguely suggests that there aren't too many other positive readings, in which case the precision is even higher than the collateral damage signals.

Since base rate is lower than the implied precision, a positive reading works as evidence. For the opposite claim, that someone has an OK character, evidence of this form can't have similar strength, since the base rate is already high and there is no room for precision to get significantly higher.
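The odds form of Bayes' rule makes this asymmetry concrete. A rough sketch with entirely made-up numbers (the base rate and detector error rates below are illustrative assumptions, not estimates about any actual person or anyone's actual "detector"):

```python
# Hypothetical numbers, purely illustrative.
base_rate = 0.02   # assumed P(dicey character) in the population
tpr = 0.5          # assumed P(detector fires | dicey character)
fpr = 0.05         # assumed P(detector fires | OK character)

prior_odds = base_rate / (1 - base_rate)

# A positive reading multiplies the odds by tpr/fpr = 10:
pos_odds = prior_odds * (tpr / fpr)
pos_posterior = pos_odds / (1 + pos_odds)
print(round(pos_posterior, 3))  # 0.169 -- a positive reading moves 2% to ~17%

# A negative reading can't be similarly strong, because the base rate
# of "OK character" is already high: the likelihood ratio is near 1.
neg_odds = prior_odds * ((1 - tpr) / (1 - fpr))
neg_posterior = neg_odds / (1 + neg_odds)
print(round(neg_posterior, 3))  # 0.011 -- barely below the 2% base rate
```

Under these assumptions, the positive reading is roughly an 8x update while the negative reading is nearly a wash, which is the shape of the argument being made.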

It's still not strong evidence, and directly it's only about character in the sense of low-level intuitive and emotional inclinations. This is in turn only weak evidence of actual behavio…

The culture of Homo Sabiens often clashes pretty hard with the culture of LessWrong, so I can't speak to how this will shake out overall.

But in the culture of Homo Sabiens, and in the-version-of-LessWrong-built-and-populated-by-Duncans, this is an outstanding comment, exhibiting several virtues, and also explicitly prosocial in its treatment of orthonormal and RyanCarey in the process of disagreement (being careful and explicit, providing handholds, preregistering places where you might be wrong, distinguishing between claims about the comments and about the overall people, being honest about hypotheses and willing to accept social disapproval for them, etc.)

I have strong-upvoted and hope further interaction with RyanCarey and orthonormal and other commenters both a) happens, and b) goes well for all involved.  I would try to engage more substantively, but I'm currently trying to kill a motte-and-bailey elsewhere.

Kerry Vaughan (3y, −43 points):

> some of the people who don’t like us
https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-facts-about-leverage-research-1-0-1?commentId=jSCFY2ypMpvAZr8sy

> However, I would also like to note that Leverage 1.0 has historically been on the receiving end of substantial levels of bullying, harassment, needless cruelty, public ridicule, and more by people who were not engaged in any legitimate epistemic activity. I do not think this is OK. I intend to call out this behavior directly when I see it. I would ask that others do so as well.
https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-facts-about-leverage-research-1-0-1?commentId=hqDXAtk6cnqDStkGC

 

It would be sad if people came away with the idea that the OP was motivated by hate, jealousy, or tribalism. I think the OP is motivated out of deep compassion for the wider community.

Leverage keeps coming up because Geoff Anders (and associates) emit something epistemically and morally corrosive and are gaslighting the commons about it. And Geoff keeps trying to disingenuously hit the reset button and hide it, to exploit new groups of people. That’s what people are responding to and trying to counteract in posts like the OP.

Leverage keeps coming up because Geoff Anders (and associates) emit something epistemically and morally corrosive and are gaslighting the commons about it. And Geoff keeps trying to disingenuously hit the reset button and hide it, to exploit new groups of people. That’s what people are responding to and trying to counteract in posts like the OP.

This seems pretty unfair to me and I believe we’re trying quite hard to not hide the legacy of Leverage 1.0. For example, we (1) specifically chose to keep the Leverage name; (2) are transparent about our intention to stand up for Leverage 1.0; and (3) Geoff’s association with Leverage 1.0 is quite clear from his personal website. Additionally, given the state of Leverage’s PR after Leverage 1.0 ended, the decision to keep the name was quite costly and stemmed from a desire to preserve the legacy of Leverage 1.0.

You know, I'm not necessarily a great backer of Leverage Research, especially some of its past projects, but I feel the level of criticism that it has faced relative to other organizations in the space is a bit bizarre. Many of the things that Leverage is criticized for (such as being secretive, seeing themselves at least in part as saving the world, investing in projects that look crazy to intelligent outsiders, etc.) in my view apply to many rationalist/EA organizations. This is not to say that those other organizations are wrong to do these things necessarily, just that it's weird to me that people go after Leverage-in-particular for reasons that often don't seem to be consistently applied to other projects in the space.

(I have never been an employee of Leverage Research, though at one point they were potentially interested in recruiting me and I was not interested; at another point I checked in re: potentially working there but didn't like the sound of the projects they seemed to be recruiting for at the time.)


EDIT 10/13: My original comment was written before the Medium post from Zoe Curzi. The contents of that Medium post are very concerning to me and seem very unlike what I've encountered in other rationalist or EA organizations.

Ruby (3y):

The new Medium post does imply that Leverage cannot be simply lumped with other EA/Rationalist orgs (I too haven't heard anything that concerning reported of any other org), but I don't think that invalidates your original point that the criticisms in this post, as written, could be levelled at many orgs. (I actually wrote such a damning-sounding list for LessWrong/Lightcone).

I agree, but I wanted to be clear that my original comment was largely in reply to the original post and in my view does not much apply to the Medium post, which I consider much more specific and concerning criticism.

Ruby (3y, 5 points):
Entirely fair!

My own strong agreement with the content makes it hard to debias my approval here, but I want to generally massively praise edits that explicitly cross out the existing comment, and state that they've changed their minds, and why they've done so.

(There are totally good reasons to retract without comment, of course, and I'm glad that LW now offers this option. I'm just giving Davis credit for putting his update out there like this.)

Wanna +1 all these things are points I've heard from people who were at Leverage, also. I also have a more negative opinion of Leverage than might be implied by the points alone, for the record.

Speaking personally, based on various friendships with people within Leverage, attending a Leverage-hosted neuroscience reading group for a few months, and having attended a Paradigm Academy weekend workshop.

I think Leverage 1.0 was a genuine good-faith attempt at solving various difficult coordination problems. I can’t say they succeeded or failed; Leverage didn’t obviously hit it out of the park, but I feel they were at least wrong in interesting, generative ways that were uncorrelated with the standard and more ‘boring’ ways most institutions are wrong. Lots of stories I heard sounded weird to me, but most interesting organizations are weird and have fairly strict IP protocols so I mostly withhold judgment.

The stories my friends shared did show a large focus on methodological experimentation, which has benefits and drawbacks. Echoing some of the points, I do think when experiments are done on people, and they fail, there can be a real human cost. I suspect some people did have substantially negative experiences from this. There’s probably also a very large set of experiments where the result was something like, “I don’t know if it was good, or if was bad, but something feels dif…

inventing and spreading various rationality techniques

Besides belief reporting, which rationality techniques did they invent and spread into the community, for which they should get credit?

Goal factoring is another that comes to mind, but people who worked at CFAR or Leverage would know the ins and outs of the list better than I.

My understanding is that Geoff Anders and Andrew Critch each independently invented goal factoring, and had even been using the same diagramming software to do it! (I'm not sure which one of them first brought it to CFAR.)

Geoff Anders was the first one to teach it at CFAR workshops, I think in 2013. This is the first time I've heard claims of independent invention, at the time all the CFAR people who mentioned it were synced on the story that Anders was a guest instructor teaching a technique that Leverage had developed. (Andrew Critch worked at CFAR at the time. I don't specifically remember whether or not I heard anything about goal factoring from him.)

Anna & Val taught goal factoring at the first CFAR workshop (May 2012). I'm not sure if they used the term "goal factoring" at the workshop (the title on the schedule was "Microeconomics 1: How to have goals"), but that's what they were calling it before the workshop including in passing on LW. Geoff attended the third CFAR workshop as a participant and first taught goal factoring at the fourth workshop (November 2012), which was also the first time the class was called "Goal Factoring". Geoff was working on similar stuff before 2012, but I don't know enough of the pre-2012 history to know if there was earlier cross-pollination between Geoff & CFAR folks.

Critch developed aversion factoring.

In this video from March 2014 https://www.youtube.com/watch?v=k255UjGEO_c Andrew Critch says he developed "Aversion factoring".

lincolnquirk (3y, 7 points):
I believe this. Aversion factoring is a separate insight from goal factoring.
Raj Thimmiah (3y, 7 points):
Do you have a link to more info on how they do goal factoring/what software they were using?

When I learned it from Geoff in 2011, they were recommending yEd Graph Editor. The process is to generally write things you do or want to do as nodes, and then connect them to each other using "achieves or helps to achieve" edges (i.e., if you go to work, that achieves making money, which achieves other things you want).
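The diagram structure described here is just a small directed graph. A minimal sketch (the goals, edges, and function names below are invented examples for illustration; yEd itself stores diagrams in its own GraphML format, not anything like this code):

```python
# Sketch of a goal-factoring diagram as a directed graph.
# Nodes are things you do or want; an edge A -> B means
# "A achieves or helps to achieve B". All examples are invented.
from collections import defaultdict

edges = defaultdict(list)

def helps(action, goal):
    """Record that `action` achieves or helps to achieve `goal`."""
    edges[action].append(goal)

helps("go to work", "make money")
helps("make money", "pay rent")
helps("make money", "travel")
helps("exercise", "health")

def terminal_goals(edges):
    """Goals with no outgoing edges -- things valued for their own
    sake in the diagram, rather than as means to something else."""
    targets = {g for goals in edges.values() for g in goals}
    return sorted(targets - set(edges))

print(terminal_goals(edges))  # ['health', 'pay rent', 'travel']
```

Tracing which nodes everything ultimately feeds into is, roughly, the "factoring" step: it surfaces what a given activity is actually for.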

Alex K. Chen (parrot) (1y, 1 point):
When was the precursor to the first EAG? Before 2015?

facts that are more-or-less "common knowledge" among people who spent time socially adjacent to Leverage


Yup, sounds right. As someone who visited the rationality community in the bay a bunch in 2013-2018, almost nothing listed in the bullet points was a surprise to me, and off-hand I can think of dozens of other people who I would assume also know almost everything written above. (I'm sure there are more such people, that I haven't met or wouldn't remember.)

I don't have anything in particular to say about the implications of these facts, just seemed worth mentioning this thing re common knowledge.

(The main thing I hadn't heard about was the sexual relationships bullet point.)

man, i'm kinda mad about something going on with this "knowledge" word. i'd really like to insert some space in here between "lots of people believe a thing" and "lots of people know a thing".

i believed most of the bullet points in a low-confidence, easy-to-change-my-mind kind of way. the real thing is that all the bullet points have been widely rumored. it's not the case that all those rumoring people had justified true belief that everyone else had justified true belief about the bullet points, or whatever. if you announce a bunch of rumors with the word "knowledge" attached, it increases people's confidence and a bunch of switches in their mind flip from "here's a hypothesis i'm holding lightly because it came from the rumor mill" over to "yeah i wasn't surprised to hear those things, yet now i'm even more sure of them".

and like, i do recognize that in the vernacular, "common knowledge" (everyone knows everyone knows) isn't really distinguished from a weaker thing that might be called "common belief" (everyone at-least-somewhat-believes everyone at-least-somewhat-believes). but that doesn't mean we should go around conflating such things all to hell like normal people do.

ugh blerg grump. i am kind of exasperated. i guess i really want the top level post to own a bunch more of its shit, epistemically.

and i didn't really mean to direct all of that right at you, Malcolm, your comment just helped the blergness snap into place in my head enough that i ended up typing things.

Thanks for this. I think these distinctions are important.

Let me clarify: In this post when I say "Common knowledge among people who spent time socially adjacent to Leverage", what I mean is:

  • I heard these directly from multiple different Leverage members.
  • When I said these to others, they shared they had also heard the same things directly from other Leverage members, including members other than the ones I had spoken to.
  • I was in groups of people where we all discussed that we had all heard these things directly from Leverage members. Some of these discussions included Leverage members who affirmed these things.

I believe there are several dozen people in the set of people this is true of.

So I did mean "People in my circles all know that we all know these things", and by "know" I meant "believe, with sourcing to multiple independent first-hand witnesses".

I do not count you as being in the "common knowledge" set, as your self-report is that you lightly believed these based on third-hand information that was "widely rumored". Rather than having been directly told it by a member; witnessing others being directly told it by members; and having people tell you they were directly tol…

MalcolmOcean (3y, 6 points):

Glad to have helped your blergness snap into place—not taking it personally. I share your concerns here in the specific case and in the general case re the word "knowledge"! And that people understanding the difference between "common knowledge" and other things is important.

More accurately maybe I could say "this matches what I understand to be the widespread model of Leverage known by dozens of people to be held among those dozens". Some of it I observed directly or was told by Leverage folks myself, though, so "rumor" doesn't feel like an adequate descriptor from my vantage point.

Hi, I'm Olli Payne. I first encountered Leverage in person during the summer of 2018 and worked at Paradigm from August 2019 through April 2020.

I moved to the Bay from NYC in April 2018, after hearing about communities there (EA, Rationality, Leverage, Futurism, etc) that are focused on thinking long-term and having a large positive impact, something that resonated with me and my goals. After attending several EA meetups, I went to a few EA Global afterparties, including one at Leverage's Lake Merritt apartment.

I'd already started to hang out with Leverage employees who I'd met at the afterparty when I requested to be invited to a Paradigm workshop. I attended the workshop in June of 2018 and after finding the tools incredibly useful, I began to pursue a job at Leverage.

During the year before I was hired at Paradigm, I made many friends with employees of both Paradigm and Leverage. We went bouldering, saw movies, played video games, tried to perfect the baking of pies... I'm very happy to say that I'm still close with many of these friends.

This was my take-away from being around Leverage 1.0:

The organization and its members did have the stated goal of "world-saving," but that phras…

Participation in the project involved secrecy / privacy / information-management agreements. 

How strong were those agreements? How much were the participants allowed to share privately with friends, family or outside therapists?

Yup. I have known all of these things since 2018-2019, and know or know of maybe a few dozen people who also know these things. I’m glad this bare minimum is being discussed openly, publicly.

Secondhand, I have a very negative view of at least some parts of what happened in Leverage 1.0. My best guess is that the relationships and events that some people have (mostly privately) described as controlling or abusive were not evenly distributed across the whole organisation. So it would have been straightforward for someone to be working at Leverage and never see or get deeply involved with situations that a handful of people have, in private or in semi-public conversations, described clearly as cultic abuse. It seems like there are on the order of dozens of people who probably had a roughly fine time being involved in Leverage for many years, and at least a handful of people who report much more negative experiences.

(I’m @utotranslucence on Twitter; never officially had a LessWrong account before but been around the Bay Area community since 2017. I attended one Paradigm training weekend in early 2018 and some parties at the Lake Merritt building but most of my knowledge comes from conversations with friends who did work there, and there are plenty of things I still don’t know with great clarity.)

LarissaRowe (3y, 1 point):

Hi Freyja, I just wanted to reply to this to let you know that it is totally plausible to me that some people who were involved in Leverage 1.0 or any affiliated organizations might have had pretty bad experiences, especially towards the end. I haven't heard any specific cases personally, but by all accounts, there were some pretty intense group dynamics and I can very much imagine that could have been quite harmful to people.

I'm not saying this is the same and I don't want to speak for anyone else's experience, but I've been involved in intense ideological work cultures in the past myself. When everyone involved cares deeply about something, it can be really horrible when it goes wrong. This is why it's very plausible to me that something similar might have happened here.

I really don't want anyone's negative experiences to get lost or overlooked because of the tribal fight taking place between Leverage and some of the people who don't like us. I said in my post that I want to defend the people in Leverage 1.0 who feel like they've been constantly harassed and maligned over the years. But I want to defend them from disingenuous attacks; that does not mean dismissing anyone from Leverage 1.0 with a genuine negative experience. I want to ensure that people who had negative experiences can have their voices heard, that any wrongdoings and harms are addressed, and that we as an organization learn and improve.

I'm going to send you a private message on LessWrong in case you would like to talk about any of this. I understand if you decide you don't want to spend the emotional energy or don't feel comfortable talking to me, but if there is anything I can do that would make it okay for you, or people you're in touch with, to have a conversation with me, I'd like to try.

It seems plausible that in the future, if there aren't already, there will be many groups that use the language and terminology of rationality to serve more self-interested and orthogonal objectives.

I do worry about "ends justify the means" reasoning when evaluating whether a person or project was or wasn't "good for the world" or "worth supporting". This seems especially likely when using an effective-altruism-flavored lens that holds that only a few people/organizations/interventions will matter orders of magnitude more than others. If one believes that a project is one of very few projects that could possibly matter, and the future of humanity is at stake—and also believes the project is doing something new/experimental that current civilization is inadequate for—there is a risk of using that belief to extend unwarranted tolerance of structurally-unsound organizational decisions, including those typical of "high-demand groups" (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on) without proportionate concern for the risks of structuring an organization in that way.

There is (roughly) a sequences post for that. :P

For those who think the above description reads like one of a typical cult, it's worth reading how a description of an actual cult reads.

There's currently a cult trying to take over a place that hosts personal development seminars (and I know people personally who went there for seminars unrelated to the cult).

https://metamoderna.org/how-a-psychedelic-sex-cult-infiltrated-a-german-ecovillage/

My name is Joe Corabi. I am a philosophy professor at Saint Joseph’s University in Philadelphia and a longtime friend of Geoff Anders.  I have known Geoff since we were grad students together at Rutgers.  We have also collaborated over the years on a number of philosophical projects, both related to and separate from Leverage Research. 

I have been a volunteer off and on at Leverage since its founding and I wanted to share my experience of Leverage in the hopes that it provides some unique evidence about the organization and its history.  I was troubled by the recent Less Wrong post and I spoke to Geoff about the possibility of writing something that can hopefully provide some additional context for those looking to evaluate the situation. 

I was initially drawn to Leverage by the enthusiasm of its members and Geoff’s vision for the organization.  In my view, which is from someone who has spent over 20 years studying philosophy in an intensive way, Geoff is a highly skilled philosopher who has both an expert knowledge of the field and a sensitivity to methodological concerns, the combination of which is quite rare.  In my view, professional ana…

Figured I'd chime in too—I'm Jordan Alexander and I was one of the Pareto Fellows back in 2016. I've been involved with the EA and rationality community to various degrees since then (Stanford EA, internship at CHAI, active GWWC pledge) so I thought I'd give my account of the program. I recognize that other people may have had different experiences during the program and that there may have been issues that I was not personally aware of as a participant in the program.  

As for my relationship with Leverage: I have a few friends at Leverage, though we're not in close contact. I participated in Paradigm Coaching (essentially a combination of personal and professional one-on-one coaching) for a few months at the end of 2019 and found it incredibly helpful while working on the mundane problem known as "job-hunting". Finally, one of my friends at Leverage reached out and asked me if I was interested in sharing my experience at the Pareto Fellowship after this post popped up. Frankly I'm annoyed that I have to do this but it seems unfair that these sorts of posts reappear every year. I work as a software engineer and have no professional or financial ties to Leverage. 

Here's an…

Given the comments that have surfaced, it sounds like my annoyance at these posts was unjustified, and that I 1) underestimated how long it takes for structural weaknesses to surface and have effects that are clearly visible to outsiders, and 2) underrated how valuable it was to open a space for people to share their experiences with Leverage. Glad that the original post was able to do this in a way that preserved anonymity for the people who understandably needed it.

I also want to highlight that while I still stand by my personally positive experience at the Pareto Fellowship in 2016, this is not meant to be a universal account of events [and certainly not of Leverage Research]; a proper judgement of the program itself would involve polling a representative sample of former Pareto Fellows.

Finally, I recognize that it's especially difficult to recount experiences when someone has experienced deep trauma so thanks to Zoe Curzi for the courage involved in telling her story and to anyone else sharing their experiences, anonymously or otherwise.

[-] Ruby · 3y · 110

Thanks for taking the time to recount your experiences there.

I do want to register that I expect the experience afforded to fellows as part of a few-month program to be different, and milder, than what long-term employees would experience.

Am I crazy or was something really similar to this, with the same thing of asking for a LW moderator to vouch, posted like a year ago? I didn't immediately find it by searching.

https://forum.effectivealtruism.org/posts/qYbqX3jX4JnTtHA5f/leverage-research-reviewing-the-basic-facts

This is also a useful resource, and the pingbacks link to other resources.

I want to gesture at "The Plan", linked from Gregory Lewis's comment (https://forum.effectivealtruism.org/posts/qYbqX3jX4JnTtHA5f/leverage-research-reviewing-the-basic-facts?commentId=8goitqWAZfEmEDrBT), as supporting evidence for the explicit "take over the world" vibe: it lays out how exactly beneficial outcomes for humanity were meant to result from the project. It's best viewed as a PDF.

3 · Sniffnoy · 3y
I'm not involved with the Bay Area crowd but I remember seeing things about how Leverage is a scam/cult years ago; I was surprised to learn it's still around...? I expected most everyone would have deserted it after that...

This reminds me of the focusing/circling/NVC discussions, one group (to which I belonged) was like "this is obviously culty mindfuckery, can't you see" and the other group couldn't see, and arguments couldn't bridge that gap. It's like how some people can recognize bullying and others will say "boys will be boys", while looking at the exact same situation.

I can verify that I saw some version of their document The Plan[1] (linked in the EA Forum post below) in either 2018 or 2019, while discussing Leverage IRL with someone rationalist-adjacent whom I don't want to doxx. While I don't have firsthand knowledge (so you might want to treat this as hearsay), my interlocutor did, and told me that they [Leverage] believed they were the only ones with a workable plan, along with the veneration of Geoff.

[1]: I don't remember all of the exact details, but I do remember the shape of the flowchart and that looks like it. It's possible that my interlocutor also got it from Gregory Lewis, but I don't think so.

I'm not sure what the meaning, if any, of the following fact is, but: I notice that I would feel very positively about Leverage as it's portrayed here if there weren't relationships with multiple younger subordinates (e.g. if the leader had been monogamously married), and as it is I feel mildly negative about it on net.

That wasn't necessary evidence for me. The secrecy + "charting/debugging" + "the only organization with a plan that could possibly work, and the only real shot at saving the world" is (if true) adequate to label the organization a cult (in the colloquial sense). These are all ideas/systems/technologies that are consistently and systematically used by manipulative organizations to break a person's ability to think straight. Any two of these might be okay if used extremely carefully (psychiatry uses secrecy + debugging), but having all three brings it solidly into cult territory. Also, psychiatry has lots of rules to prevent abuse, including public, well-established ethical standards.

Are Leverage's standard operating procedures auditable knowledge to outsiders? If not, this is the mother of all red flags and we should default to "cult".

Edit: LarissaRowe didn't reply to this comment because Leverage doesn't have a leg to stand on.

Edit ×2: Shaming someone into a response violates the norms of Less Wrong. The first edit was a mistake. I apologize.

psychiatry uses secrecy 

In psychiatry there's no secrecy for treatment protocols and there are no secrecy rules for patients that prevent them from sharing about their experience.

That's a good point. The psychiatrist (who has power) is sworn to secrecy but the patient (who is vulnerable) isn't.

-1 · TekhneMakre · 3y
> "the only organization with a plan that could possibly work, and the only real shot at saving the world"

It's definitely a warpy sort of belief. The issue to me, and why I could still feel positively about such an organization, is that the strong default for people and organizations might be a strong false lack of hope. In which case, it might be correct to have what seems like a delusional bubble of exceptionalism. It still seems to have some significant bad effects, and is still probably partly delusional, but if we don't know how to get the magic of Hope without some delusion I don't think that means we should throw away Hope.

> Are Leverage's standard operating procedures auditable knowledge to outsiders?

It would be nice to live in a world where this standard were good and feasible, but I don't think we do. Not holding this standard does open us up for the possibility of all sorts of abuse hiding in relative secrecy, but unfortunately I don't see how to avoid that risk without becoming ineffective.

I think the things you point out are big risk factors, but to me don't seem to indicate a "poison" in the telos of the organization. Whereas sexual/romantic stuff seems like significant evidence towards "poison", in the sense of "it would actually be bad if these people were in power".

The real problem is having the belief that you are the only organization with a plan that might work, while at the same time requiring secrecy that cuts participants off from outside feedback that might make them doubt that this is the case. If you then add strong self-modification techniques that also strengthen the belief, that's no good environment.

7 · TekhneMakre · 3y
I'm not sure how to pinpoint disagreement here. I think it's bad, possibly very bad, to have delusional beliefs like this. But I think by default we don't already know how to decouple belief from intention. Saying "we're the only ones with a plan to save the world that might work" is part belief (e.g., it implies that you expect to always find fatal flaws in others' world-saving plans), and part intention (as in, I'm going to make myself have a plan that might work).

We also can't by default decouple belief from caring. Specialization can be interpreted as a belief that being a certain way is the best way for you to be; it's not true, objectively, but it results in roughly the same actions. The intention to make your plans work, and caring about the worlds in which you can possibly succeed, is good, and if we can't decouple these things, it might be worth having false beliefs (though of course it's also extremely worth becoming able to decouple belief from caring and intention, and ameliorating the negative effects on the margin by forming separate beliefs about things that you are able to decouple, e.g. using explicit reason to figure out whether someone else's plan might work, even if intuitively you're "sure" that no one else's plan could work).

I think it's clearly bad to prevent feedback for the sake of protecting "beliefs". But secrecy makes sense for other reasons. (Intentions matter because they affect many details of the implementation, which can add up to large overall effects on the outcomes.)
6 · ChristianKl · 3y
I think there are two kinds of secrecy. One is about not answering every question that outsiders have. The other is about forbidding insiders from sharing information with the outside.

Power easily corrupts processes, and playing around with strong self-modification is playing with a lot of power. Secrecy has easily visible benefits because you reduce your attack surface, but it has its costs, and it's generally wise to be skeptical of versions of it that prevent insiders from sharing information that's not of a personal nature when doing radical projects.

>"the
>Are

Formatting note — if you put a space between the '>' and the next character, it'll format correctly as a proper block quote.

7 · Vladimir_Nesov · 3y
Planning for success doesn't require knowledge of success, doesn't get better if you believe things that can't be known. Hope is a good concept for this situation: a risk of success where the probability of success needn't be significant, it's the value of success that makes hope relevant. Hope makes sense as a concept of curiosity more than as one of decision making, so that you are not vulnerable to misleading expected utility calculations, but get some guidance for filling in the chart of possible plans, taking steps towards enacting them.
2 · TekhneMakre · 3y
Yeah, if I follow you, I think I agree that Hope is most essential in the realm of curiosity. It seems like Leverage was/is aimed at realms that are deeply ontologically uncertain (what are the possibilities for using my mind radically more effectively, what really matters for affecting the world), which entails that curiosity and probing possibility-space is a nearly permanent central feature of what they're trying to do. To say it more concretely, asking a really weird question and trying out really weird answers might feel intuitively more appealing if you think that you're exceptional, and if you think your social context is exceptionally able to pick up on weird but true/important results.
5 · Vladimir_Nesov · 3y
More appealing compared to what alternative? Don't stand still, do the work. There is rarely a reason to prefer a particular step of a large journey over all other steps. That's the character of curiosity.
1 · TekhneMakre · 3y
Hm, it seems like you're arguing against the stance I'm describing, where my main point is just that this is a stance many people take. I sometimes find that I've been taking a stance like this; when I reflect on it I've never agreed with it, but that doesn't mean it's not happening. Maybe you're rejecting putting effort into accommodating this stance, rather than unraveling it?
2 · Vladimir_Nesov · 3y
Formulating what might be going on gives something specific to talk about. But then what's the point to settling on an emotional valence? Discussing the error seems interesting, regardless of what attitude that props up. The patch I proposed actually preserves the positive qualities, isn't a demonstration of their absence.
3 · TekhneMakre · 3y
> There is rarely a reason to prefer a particular step of a large journey over all other steps. That's the character of curiosity.

I didn't get the essence of your proposal from this. Could you phrase this as advice to, for example, Elon Musk (taking Elon as an example of someone who's making good use of slightly delusional "beliefs" about his plans, while still remaining very solidly in contact with reality)?
6 · ChristianKl · 3y
Elon is one of the least delusional people. Not many people start companies the way Elon did while believing there's only a ten percent chance of success. Elon sets goals that often won't be achieved, but that's not the same as having delusional beliefs.
6 · TekhneMakre · 3y
I agree he's exceptionally well in contact with reality. But also part of his "setting goals" involves making "predictions" about timelines. Which are very often wrong, quantitatively (while being correct "in spirit" in the sense that they achieve the goal, just later than "predicted").
2 · ChristianKl · 3y
Elon generally is not public about the likelihood of various events in timelines and speaks about his timelines as being optimistic guesses. 
4 · Vladimir_Nesov · 3y
When a civilization gets curious, each individual only gets to work on a few observations, and most of these observations are not going to be foreknowably more important than others, or useful in isolation from others that are not even anticipated at the time, yet the whole activity is worthwhile. So absence of a reason to pursue a particular activity compared to other activities is no reason for not taking it seriously. It's only presence of a reason to take up a different activity that warrants change.
1 · TekhneMakre · 3y
What if there's an abundance of specific reasons to take up various activities, and which ones you want to invest in seems to depend heavily on "follow through", i.e. "are people going to keep working on this"?
4 · Vladimir_Nesov · 3y
With some transitivity of preference and a world that's not perpetually chaotically unsettled, people or organizations should be able to find something to work on for which they have no clearly better alternatives. My point is that this is good and worth doing well even when there is no reason to see what they are currently doing as clearly better than the other things they might've been doing instead. And if not enough people work on something, it won't get done, which is OK if there is no reason to prefer it to other things people are actually working on (assuming that neglectedness is not forgotten as a reason to prefer something).
1 · TekhneMakre · 3y
Well, one might prefer that something rather than nothing gets done. In which case it matters whether other people will work on it. In particular, when an organization with multiple people "decides" to do something, that's tied up with believing that they will work on it, which affects motivation to work on it. So, if you believe that you're doing an "objectively" better plan, in particular you think that other people will recognize that your plan is good, and will want to work on it; so your belief is tied up with acting in a way that will be successful if other people will continue your work.

It provides an alternative version for the motivation of the entire project. More disturbingly, the alternative seems to explain some facts better, such as why after all that work and money spent, after all the grandiose secret plans, there is still no tangible output.

EDIT: The part "no tangible output" was not fair, I apologize for that. I am not updating the comment, because it would feel like moving the goalpost.

I appreciate the edit, Viliam.

I know that it was a meme about Leverage 1.0 that it was impossible to understand, but I think that is pretty unfair today. If anyone is curious, here are some relevant links:

We're no longer engaged with the Rationality community so this information might not have become common knowledge. Hopefully, this helps.

I added a sub-bullet to the main post, to clarify my epistemic status on that point.

I have now made an even more substantial edit to that bullet point.

I think Bismarck Analysis (a consulting company), Paradigm Academy (training), and Reserve (a cryptocurrency) all came out of Leverage.

1 · [comment deleted] · 3y

I should clarify upfront that I am not a rationalist, and am not a fan of LessWrong. 

That said, I have some experience when it comes to... this sort of thing.

So when I was a little younger, I was the figurehead and leader of a sex cult. (Oddly enough, I did this without ever really understanding that it was, in fact, a sex cult. One of my best friends described this as a "Jerry Smith plot", which I found hilarious.) This cult was, in practice, a discord server focused around my erotic hypnosis work. I copied the model from another server that was definitely a sex cult, and tried to strip out all of the culty elements and just leave the aesthetics (because a lot of us liked the aesthetics). But you really can't reconstitute the structure of a high-control group without, in various ways, reconstituting the behavior of a high-control group. It doesn't work - the culty shit works its way back in if you're not extremely careful. And I was not careful, for reasons that may be obvious if you think about the perks one gets as the figurehead of a sex cult. A lot of people got hurt.

Why bring that up? Because hoo boy does this tick a lot of similar boxes. 

A lot of things scream "hig... (read more)

There's a lot going on in this comment, but I note with interest that this is the first time I've seen someone weigh in on questions of cultish behavior from the perspective of a former cult leader. 

I'm fascinated with the claim that if you take on the outer facade of a cult, you now have a strong incentive gradient to turn up the cultishness (maybe because you're now drawing in people who are looking for more of that, and driving away anyone who's put off by it). Obviously the claim needs more than one person's testimony, but it makes sense.

I wonder if some early red flags with Leverage (living together with your superiors who also did belief reporting sessions with you, believing Geoff's theories were the word of god, etc) were explicitly laughed off as "oh, haha, we know we're not a cult, so we can chuckle about our resemblances to cults".

I think from a world and historical perspective, dating subordinates is a very common thing. The American cult bundle of traits is much more specific and rare. For me, the first red flag is shared housing for followers of the idea. Any movement that does it is already kind of weird to me (including the rationalist movement). If there's also some kind of group psychological exercise, that takes it all the way to "nope" (again, including some parts of the rationalist movement).

4 · agrippa · 3y
I will say that the EA Hotel, during my 7 months of living there, was remarkably non-cult-like.  You would think otherwise given Greg's forceful, charismatic presence /j

Hi BayAreaHuman, 

I just posted an update on behalf of Leverage Research to LessWrong along with an invite to an AMA with Leverage Research next weekend, as it seems from the comments that there isn’t a lot of common knowledge about our current work or other aspects of our history. I encourage people to read this for additional context, and I hope the OP will be able to update this post to incorporate some of that.

I also want to briefly address some of the items raised here.

 

Information management policies

Leverage Research has for a long time been concerned about the potential negative consequences of the potential misuse of knowledge garnered from research. These concerns are widely shared for research in the hard sciences (e.g., nuclear physics), but are valid as well for the social sciences. 

Starting in 2012, Leverage Research had an information management policy designed to prevent negative unintended consequences from the premature dissemination of information. Our information policy from 2012-2016 required permission for the release of longform information on the internet. We had an information approval team, with most information release requests being approved. ... (read more)

If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at larissa@leverageresearch.org or larissa.e.rowe@gmail.com

I would suggest that anything in this vein should be reported to Julia Wise, as I believe she is a designated person for reporting concerns about community health, harmful behaviours, abuse, etc. She is unaffiliated with Leverage, and is a trained social worker.

[-] guzey · 3y · 1590

(deleted)

This was indeed a big screwup on my part.  Again, I'm really sorry I broke your trust.

To add detail about my mistake:

When you asked if you could confidentially send me a draft of your post about Will's book to check, I said yes.

The next week you sent me a couple more emails with different versions of the draft. When I saw that the draft was 18 pages of technical material, I realized I wasn't going to be a good person to review it. That's when I forwarded to someone on Will's team asking if they could look at it instead of me. 

I should never have done that, because your original email asked me not to share it with anyone. For what it’s worth, the way that this happened is that when I was deciding what to do with the last email in the chain, I didn't remember and didn't check that the first email in the chain requested confidentiality. This was careless of me, and I’m very sorry about it.

I think the underlying mistake I made was not having this kind of situation flagged as sensitive in my mind, which contributed to my forgetting the original confidentiality request. If the initial email had been about some more personal situation, I am much more sure it would have been flagged in my mind as confidential. But because this was a critique of a book, I had it flagged as something like “document review” in my mind. This doesn’t excuse my mistake - and any breach of trust is a serious problem given my role - but I hope it helps show that it wasn’t intentional.

I now try to be much more careful about situations where I might make a similar mistake.

I've now added info on this to the post about being a contact person and to CEA's mistakes page.

[-] idle · 3y · 160

Personally, I don't really blame you or think less of you for this screwup. I never got the impression that you are the sort of person who should be sent confidential book review drafts. Maybe you'd disagree, but that seems like a misunderstanding of your role to me.

It seemed clear to me that you made yourself available to confidential reports regarding conflict, abuse, and community health. Not disagreements with a published book. It makes sense that you didn't have a habit of mentally flagging those emails as confidential.

Regardless, I trust that you've been more careful since then, and I appreciate how clearly you own up to this mistake.

I want to offer my +1 that I strongly believe Julia's trustworthy for reports regarding Leverage.

9 · ChristianKl · 2y
I would generally expect that if I give someone access to a draft of any kind and they want to forward it to someone else, they put the author of the draft in CC. Even in the absence of a promise of confidentiality, I consider sharing someone's draft without their permission, while withholding the information that you shared it, bad behavior.
6 · ChristianKl · 3y
Saying "my mistake wasn't intentional but accidental" doesn't, of course, show that it wasn't intentional. The only thing that would show that is accepting consequences meaningful enough that it doesn't look like your mistake benefited CEA.

Saying "I'm sorry I broke your trust" without engaging in any consequences for it feels cheap. To me such a mistake feels like you owe something to guzey.

One thing you could have done if you actually cared would have been to advocate for guzey in this exchange even if that goes against your personal positions.

Only admitting the mistake at comments and not in a more visible manner also doesn't feel like you treat it seriously enough. It likely deserves the same treatment as the mistakes on https://www.centreforeffectivealtruism.org/our-mistakes

Only admitting the mistake at comments and not in a more visible manner also doesn't feel like you treat it seriously enough. It likely deserves the same treatment as the mistakes on https://www.centreforeffectivealtruism.org/our-mistakes

For what it's worth, I do think this is probably a serious enough mistake to go on this page.

Wow, that is very bad. Personally I'd still trust Julia as someone to report harms from Leverage to, mostly from generally knowing her and knowing her relationship to Leverage, but I can see why you wouldn't.

One of the negative consequences of our information policy, as we have learned, is the way it made some regular interactions with people outside of the relevant information circles more difficult than intended.

 

Is Leverage willing to grant a blanket exemption from the NDAs which people evidently signed, to rectify the potential ongoing harms of not having information available? If not, can you share the text of the NDAs?

Hi Larissa -

Dangers and harms from psychological practices

Please consider that the people who most experienced harms from psychological practices at Leverage may not feel comfortable emailing that information to you. Given what they experienced, they might reasonably expect the organization to use any provided information primarily for its own reputational defense, and to discredit the harmed parties.

Dating policies

Thank you for the clarity here.

Charting/debugging was always optional

This is not my understanding. My impression is that a strong expectation was established by individual trainers with their trainees. And that charting was generally done during the hiring process. Even if the stated policy was that it was not required/mandatory.

It seems that Leverage is currently planning to publish a bunch of their techniques, and from Leverage's point of view there are considerations that releasing the techniques could be dangerous for people using them. To me that does suggest a sincere desire to use provided information in a useful way.

See from https://www.lesswrong.com/posts/3GgoJ2nCj8PiD4FSi/updates-from-leverage-research-history-and-recent-progress :

If you are interested in being involved in the beta testing of the starter pack, or if you have experienced negative effects from psychological experimentation, including with rationality training, meditation, circling, Focusing, IFS, Leverage’s charting or belief reporting tools (or word-of-mouth copies of these tools), or similar techniques please do reach out to us at contact@leverageresearch.org. We are keen to gain as much information as possible on the harms and dangers as we prepare to release our psychology research.

If there are particular people who feel that they have been damaged, it would be great to still have a way that the information reaches Leverage. Maybe, a third-party could be found to mediate the conversation?

Is there anything else you could think of that would be a credible signal that Leverage is sincere about seeking the information about harms?

9 · ooverthinkk · 3y
Why is this getting downvotes? It's a constructive comment containing a good idea (mediation to address concerns) and pointing at a source of transparency, which everyone here has been asking for.

I'm not a rationalist, and I'm new to actually saying anything on LW (despite lurking for 4ish years now - and yes, I made this alt today), but it seems like this would be the type of community to be more open-minded about a topic than what I'm seeing. By "what I'm seeing" I mean people are just throwing rocks and being unwilling to find any way to work with someone who's trying to address the concerns of the OP and commenters.

I didn't downvote ChristianKl's comment, but I feel like it's potentially a bit naive.

>Is there anything else you could think of that would be a credible signal that Leverage is sincere about seeking the information about harms?

In my view, the question isn't so much about whether they genuinely don't want harms to happen (esp. because harming people psychologically often isn't even good for growing the organization, not to mention the reputational risks). I feel like the sort of thing ChristianKl pointed out is just a smart PR move given what people already think about Leverage, and, conditional on the assumption that Leverage is concerned about their reputation in EA, it says nothing about genuine intentions.

Instead, what I'd be curious to know is whether they have the integrity to be proactively transparent about past mistakes, radically changed course when it comes to potentially harmful practices, and refrain from using any potentially harmful practices in cases where it might be advantageous on a Machiavellian-consequentialist assessment. To ascertain those things, one needs to go beyond looking at stated intentions. "Person/organization says nice-sounding thing, so they seem genuinely concerned about nice aims, therefore stop being so negative" is a really low bar and probably leads to massive over-updating in people who are prone to being too charitable. 

4 · ChristianKl · 3y
I didn't argue that it says something about good intentions. My main argument is that it's useful to cooperate with Leverage on releasing their techniques with the safety warnings that are warranted given past problems, instead of not doing that, which increases the chances that people will use the techniques in a way that messes them up.

I do consider belief reporting to be a very valuable invention, and I think it's plausible that this is true for more of what Leverage produced. A technique like belief reporting allows for scientific experiments that weren't possible before. Information gathered from the experiments already run can quite plausibly help other people avoid harm when integrated into the starter kit that they develop.
0 · Kerry Vaughan · 3y
I think skepticism about nice words without difficult-to-fake evidence is warranted, but I also think some of this evidence is already available. For example, I think it's relatively easy to verify that Leverage is a radically different organization today. The costly investments we've made in history of science research provide the clearest example, as does the fact that we're no longer pursuing any new psychological research.

I think the fact that it is now a four-person remote organization doing mostly research on the history of science, as opposed to an often-live-in organization with dozens of employees doing intimate psychological experiments as well as following various research paths, tells me that you are essentially a different organization; the only commonalities are the name and the fact that Geoff is still the leader.

If you hover over the karma counter, you can see that the comment is sitting at -2 with 12 votes, which means that there is a significant disagreement on how to judge it, not agreement that it should go away.

(It makes some sense to oppose somewhat useful things that aren't as useful as they should be, or as safe as they should be, I think that is the reason for this reaction. And then there is the harmful urge to punish people who don't punish others, or might even dare suggest talking to them.)

6 · gjm · 3y
What are your personal connections, if any, to Leverage Research (either "1.0" or "2.0")?

I'd rather not say, for the sake of my anonymity - something which is important to me because this:

However, I would also like to note that Leverage 1.0 has historically been on the receiving end of substantial levels of bullying, harassment, needless cruelty, public ridicule, and more by people who were not engaged in any legitimate epistemic activity. I do not think this is OK. I intend to call out this behavior directly when I see it. I would ask that others do so as well.

is a real concern. I've seen it firsthand - people associated with Leverage being ostracized, bullied, and made to feel very unwelcome and uncomfortable at social events and in online spaces by people in nearby communities, including this one.

It seems like a real risk to me that any amount of personal information I give will be used to discover my identity, and I'll be subject to the same.


Which, by the way, is despicable, and I find it alarming that only one person (besides Kerry) in this thread has acknowledged this behavior pattern.

I said in another comment that I didn't make an alt to come here and "defend Leverage" - this instance is the exception to that. These people are human beings.

(quote from Kerry's co...)

[-]gjm · 3y

If people are being bullied, that's extremely bad, and if you see that and call it out you're doing a noble thing.

But all I've seen in this thread -- I can't comment on e.g. what happens in person in the Bay Area, since that's thousands of miles away from where I am -- is people saying negative things about Leverage Research itself and not about individuals associated with it, with the single exception of the person in charge of Leverage, who fairly credibly deserves some criticism if the negative things being said about the organization are correct.

Bullying people is cruel and harmful. I'm not so sure there's anything wrong with "bullying" an organization. Especially if that organization is doing harm, or if there is good reason to think it is likely to do harm in the future.

I've seen someone from a different org, but with a similar valence in the community, get treated quite poorly at a party when they let their association be known. It was like the questioner stopped seeing them as a person with feelings and only treated them as an extension of the organization. I felt gross watching it and regret not saying anything at the time. 

It seems overwhelmingly likely to me that Leveragers faced the same thing, and also that some members lumped some legitimate criticisms or refusals to play along in with this unacceptable treatment, because that's a human thing to do. 

ETA: I talked to the person in question and they don't remember this, so apparently it made a bigger emotional impression on me than them (they remembered a different convo at the same event that seemed like the same kind of thing, but didn't report it being particularly unpleasant). I maintain that if I were regularly subject to what I saw it would have been quite painful, and imagine that to be true for at least some other people.

I'm not so sure there's anything wrong with "bullying" an organization.

There's a pragmatic question of building reliable theory of what's going on, which requires access to the facts. Even trivial inconvenience for those who have the facts in communicating them does serious damage to this process's ability to understand what's going on.

The most valuable facts are those that contradict the established narrative of the theory; only those can actually be relevant for improving it, for there is no improvement without change. Seeing a narrative that contradicts the facts someone has is already disheartening, so everything else that could possibly be done to make sharing easier, and not make it harder, should absolutely be done.

[-]ooverthinkk · 3y
Yes, but imagine for a second that you worked at Leverage, and you're reading this thread (noting that I'd be surprised if several people from both 1.0 and 2.0 were not). Do you think that, whether they had a negative experience or a positive experience, they would feel comfortable commenting on that here? (This is the relevant impact of the things mentioned in my previous comment.) No. Of course not. Because the overpowering narrative in this thread, regardless of the goals or intentions of the OP, is "Leverage was/is a cult".

No one accused of being in a cult is going to come into the community of their accusers and say a word. Of course, with the exception of two people in 2.0 who have posted here, one of whom is a representative who has been accused of plotting to coerce and manipulate victims, and the other of whom has been falsely accused of trying to hide their identity in the thread. And this is despite Leverage's efforts to become more legible and transparent.

If someone who worked there had negative experiences as a result, then, of course, they may not want to post publicly in an environment where the initiative that they once put their time, energy, and effort into is being so highly criticized, and in some cases, again, blatantly accused of being a literal cult or what I would call a "strawman's term" for a cult. They also may not want to air their concerns with their ex-employers in this public setting. And on the other hand, if someone who worked there had positive experiences, they are left to watch as, once again, the discourse of this group disallows them from giving input without figuratively burning them at the stake for supporting something that they personally experienced and had no issue with.

And these are just the first few things that came to mind for me when considering why they may not be present in this conversation. My main concern here is that this space doesn't allow them to speak AT ALL without serious repercussions, and t
[-]gjm · 3y
I don't know how comfortable any given person would feel commenting here. I do know that Kerry Vaughan, who is with Leverage now, has evidently felt comfortable enough to comment. I have no idea who you are but it seems fairly apparent that you have some association with Leverage, and you evidently feel comfortable enough to comment.

You say that one of those people (presumably meaning Kerry) "has been accused of plotting to coerce and manipulate victims". I can't find anywhere where anyone has made any such accusation. I can't find any instance of "coerce" or any of its forms other than in your comment above. I find two other instances of "manipulate" and related words; one is specifically about Geoff Anders (who so far as I know is not the same person as Kerry Vaughan) and the other is talking generally about psychological manipulation and doesn't make any accusations about specific people.

You say that the other person (presumably meaning you) "has been falsely accused of trying to hide their identity", but so far as I can make out you are openly trying to hide your identity (on the grounds that if people could tell who you are then you would be mistreated on account of being associated with Leverage).

(I have to say that I'm a bit confused by the anonymity thing. Are you concerned that if you were onymous then people "in real life" would read what you say here, realise that you're associated with Leverage, and mistreat you? Or that if you were onymous then people here would recognize your name, realise that you're associated with Leverage, and mistreat you? Or something else? The first would make sense only if "in real life" you were concealing whatever associations you have with Leverage, which I have to say would itself be a bit concerning; the second would make sense only if knowing your name would make people in this thread think you more closely associated with Leverage than they already think you, and unless you're Literal Geoff Anders or something that
[-]David Hornbein · 3y
No comment on your larger point but  "You are in a cult" is absolutely an accusation directed at the person. I can understand moral reasons why someone might wish for a world in which people assigned blame differently, and technical reasons why this feature of the discourse makes purely descriptive discussions unhelpfully fraught, but none of that changes the empirical fact that "You are in a cult" functions as an accusation in practice, especially when delivered in a public forum. I expect you'll agree if you recall specific conversations-besides-this-one where you've heard someone claim that another participant is in a cult.
[-]gjm · 3y
Maybe you're right. So, same question as for ooverthinkk: suppose you think some organization that people you know belong to is a cult, or has some of the same bad features as cults. What should you do? (It seems to me that ooverthinkk feels that at least some of what is being said in this thread about Leverage is morally wrong, and I hope there's some underlying principle that's less overreaching than "never say that anything is cult-like" and less special-pleading than "never say bad things about Leverage" -- but I don't yet understand what that underlying principle is.)
[-]ooverthinkk · 3y
(edit: moved to the correct reply area)
[-]ooverthinkk · 3y
The first person was Larissa, the second person was Kerry. The "anonymity thing" does not fall under the first category. I'd just prefer, as I stated before, not to be targeted "in real life" for my views on this thread. The "bullying" that I'm referring to happened/happens outside of this thread, and is in no way limited to instances of people being accused of being "in a cult".  
[-]gjm · 3y
D'oh! I'd forgotten that Larissa had commented here too. My apologies.

As I've said, I have no knowledge of any bullying that may or may not be occurring elsewhere (especially in person in the Bay Area), and if anyone's getting bullied then that's bad. If that isn't common knowledge, then there's a problem. But the things in this thread that you've taken exception to don't seem to me to come close to bullying. (Obviously, though, they could be part of a general pattern of excessive hostility to all things Leverage.)

Do you think OP was wrong to post what they did? If so, is that because you think the things they've said about Leverage are factually wrong, or because you think people who think they see an organization behaving in potentially harmful ways shouldn't say so, or what?

If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at larissa@leverageresearch.org.

Bullshit. This is not how you prevent abuse of power. This is how you cover it up.


These concerns are widely shared for research in the hard sciences (e.g., nuclear physics), but are valid as well for the social sciences.

Social science infohazards are not a thing, because they must be implemented by an organization to work, and organizations leak like a sieve. Even nuclear secrets leak. This demand for secrecy is a blatant excuse used to obstruct oversight and to prevent peer review. What you're doing is the opposite of science.

[-]Kerry Vaughan · 3y
Interestingly, "peer review" occurs pretty late in the development of scientific culture. It's not something we see in our case studies on early electricity, for example, which currently cover the period between 1600 and 1820. What we do see throughout this history is the norm of researchers sharing their findings with others interested in the same topics.

It's an open question whether Leverage 1.0 violated this norm. On the one hand, they had a quite vibrant and open culture around their findings internally and did seek out others who might have something to offer to their project. On the other hand, they certainly didn't make any of this easily accessible to outsiders. I'm inclined to think they violated some scientific norms in this regard, but I think the work they were doing is pretty clearly science, albeit early-stage science.
[-]lsusr · 3y
I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.

If "it's not unscientific because it merely takes science back 200-400 years" is the best defense that LEVERAGE ITSELF can give for its own epistemic standards, then any claims it has to scientific rigor are laughable. 1600 was the time of William Shakespeare.

Edit: I'm not saying that science in 1600 was laughable. I'm saying that performing 1600-style science today is laughable.

I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.

I'm not hiding my connection to Leverage which is why I used my real name, mentioned that I work at Leverage in other comments, and used "we" in connection with a link to Leverage's case studies. I used "they" to refer to Leverage 1.0 since I didn't work at Leverage during that time.

I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.

To be fair, KV was open about that association in both previous comments, using 'we' in the first and including this disclaimer in the second --

(I currently work at Leverage research but did not work at Leverage during Leverage 1.0 (although I interacted with Leverage 1.0 and know many of the people involved). Before working at Leverage I did EA community building at CEA between Summer 2014 and early 2019.)

-- which also seems to explain the use of 'they' in KV's third comment, which referred specifically to "Leverage 1.0".

(I hope this goes without saying on LW, but I don't mean this as a general defense of Leverage or of KV's opinions. I know nothing about either beyond what I've read here, and I haven't even read all the relevant comments. Personally I wouldn't get involved with an organisation like Leverage.)

[-]ChristianKl · 3y
The problem is that current academic standards lead to fields like psychology being very unproductive. Experimenting with going back to scientific norms from before the great stagnation is one way to work toward scientific progress.
[-]Raemon · 3y
(This account is the same Kerry btw, my guess is Kerry happened to try logging in with google, which doesn't actually connect to existing accounts)
[-]Kerry Vaughan · 3y
I don't think that's my account actually. It's entirely possible that I never created a LW account before now.

When I hear that a few people within Leverage ended up with serious negative consequences because of charting, it's unclear to me from the outside what that means.

It's my understanding that Leverage did a lot of experiments. It could be that some experiments ended up messing up some of the participants. It could also be that "normal charting", without any experiments, messed people up.

I would offer that "normal charting" as offered to external clients was being done in a different incentive landscape than "normal charting" as conducted on trainees within the organization. I mean both incentives on the trainer, and incentives on the trainee.

Concretely, incentives-wise:

  • The organization has an interest in ensuring that the trainee updates their mind and beliefs to accord with what the organization thinks is right/good/true, what the organization thinks makes a person "effective", and what the organization needs from the member.
  • The trainee may reasonably believe they could be de-funded, or at least reduced in status/power in the org, if they do not go along.

Hi all,

During my years in the Bay I spent some of my time as an employee of Paradigm, a Leverage 1.0 affiliate. I also spent a good amount of time living and hanging out at the Leverage house/offices.

I'm writing here from a coffeeshop in Berlin because...why? I think because I get frustrated by the balance of coverage that Leverage gets. When I consider what sorts of things produce value, they tend to start off being very high-variance. They tend to have very weird-seeming history. 

For instance: Whispers have it that – before AI X-Risk was a respectable, well-known cause backed by people like Elon Musk – a high school drop-out named "Eliezer Yudkowsky" wrote a Harry Potter fan-fiction to bring hundreds of people into a rationalist movement that might someday save the world from runaway algorithms. Did you know Trump-supporter Peter Thiel was an early funder of one of its main organizations?! Did you know that many rationalists have become affiliated with Neoreaction, an alt-right group with members that support authoritarianism?!! Don't get me started on a different now-respectable org – one staffed by many rationalists – that bootstrapped itself in part through astroturf...

Follow-up: I wanted to acknowledge that some other people who spent time at Leverage had much worse experiences than I did. I don't want to downplay that. My experience may have been unique since I focused on building an external company and since my social circle in the Bay Area was mostly non-Leveragers. 

All that said, I still stand by what I wrote above. I was reacting mainly to the original post wearing a guise of objectivity. I think I would have no gripe with it if the title was, "I have beef with Leverage and so here are some biasing facts I'd like to highlight about them" – though, to be fair, that's a really long title, and also I could be projecting.

[-]Ruby · 3y

I think Leverage is worthy of deep criticisms (and thought so even before yesterday's Medium post) but also what you say about "guise of objectivity" is something that bothered me about this post and I'm glad you voiced it.

oh ps, I'm sure this has already been mentioned in the 100+ comments I haven't read, but it's weird to call Leverage a "high-demand" group since – during my time there – people were regularly complaining about basically having too much freedom. I can't remember a single day there that anyone demanded I do anything, in the way a manager demands things of employees or a guru makes demands of disciples. (Actually there might have been a few where, eg, there was a mandatory meeting that we all install a good password manager so we don't get hacked. But often people missed these "mandatory" meetings.) Most days I just did whatever I wanted. Often people felt like they were floating and wanted *more* directives.

My current model is that this changed around 2017 or so. At least my sense was that people from before that time often had tons of side-projects and free time, and afterwards people seemed to have really full schedules and were constantly busy.

[-]polyphony · 3y
Oh you might be right. I think around 2017 was when the overall thing started to separate into subgroups, some of which I remember having stronger requirements (eg do one presentation every X weeks or something). Around that time I was off doing Reserve, which largely got started in New York, and wasn't so in touch with the rest of the "ecosystem" in the Bay Area. OK, yeah, I think this makes me not a reliable commentator on the 2017-on period. 
[-]polyphony · 3y
Maybe one thing worth mentioning on this: If my memory serves correctly, Reserve was founded with the goal of funding existential-risk work. This included funding the Leverage ecosystem (since many of the members thought of themselves as working on solutions that would help with X-Risk) but would have also included other people and orgs.

Update from Leverage Research: a reminder about our AMA & other ways to get updates
For anyone in this thread who still has questions about Leverage Research, I just wanted to remind you about the AMA we are running at our virtual office tomorrow (Saturday, October 2, at 12 PM PT). 

The event is open to anyone interested in our work and is designed to allow people to ask questions about our history, current work, and future plans. See this comment for further details.

Beyond that, we're currently exploring different ways to ensure we hear from people who were part of the Leverage 1.0 ecosystem about their experiences, especially before we release some of our psychology tools and as we write our FAQ on our history (see this post for more details on these two initiatives). This includes looking into neutral third-party moderators and ways of gathering anonymous feedback. If you want to stay up to date on the steps we're taking, or our current work in general, subscribe to our quarterly newsletter or follow us on Twitter.

How do you say "this is a cult" without literally saying the words "this is a cult"? (In the common colloquial sense of the word "cult", as opposed to the historical academic sense of the word.)

I've never heard of this organization until now and I'd be happy never to hear about them in the future. (This isn't a criticism of OP.)

Once I wrote an article about how to unpack "cult" into eight more specific behaviors. It wasn't received well. Ironically, one of the objections was that this would also classify Leverage as a cult. ¯\_(ツ)_/¯

[-]ChristianKl · 3y
No, that was not the objection. My main point was about you asserting a bad binary classification frame. I made no assertion that Leverage fulfilled all the criteria. It was rather the opposite: if Leverage did indeed fulfill all the criteria, then a binary classification of it as a cult wouldn't be a problem.
[-]lsusr · 3y
Hahaha!

It's been helpful for me to think of three signs of a cult:

1. Transformational experiences that can only be gotten through The Way.

2. Leader has Ring of Power that gives access to The Way.

3. Clear In-Group vs. Out-Group dynamics to achieve The Way.

Leverage 1.0 seems to have all three:

1. Transformational experiences through charting and bodywork.

2. Geoff as Powerful Theorist.

3. In-group with Leverage as "only powerful group".

Given this, I'm most curious about what Geoff has done to reflect/improve and what the ~rationalist community would want to see from h...

[-]ChristianKl · 3y
Just because an organization provides transformational experiences doesn't necessarily mean that there's a belief that only the techniques of the organization can provide those experiences. If you ask the Dalai Lama about Christianity, for example, he will grant that it provides transformational experiences for some people and might be good for some people. That's very different from Scientology, which claims that anything Hubbard didn't develop doesn't really provide transformational change.
[-]Viliam · 3y
Is this also very different from Scientology?
[-]ChristianKl · 3y
In high-level Scientology, saving the world means auditing a lot of thetans in your body so that all thetans can be free. Generally, there's a belief that saving the world can only be achieved via Scientology's methods, because it's inherently about doing auditing on a lot of people. Leverage, on the other hand, does not advocate that everyone has to do Leverage techniques to be saved, and I expect that I would find the outcome that Geoff wants to bring about a desirable one.

There is a huge difference between claiming that someone is the best and claiming that someone is the only person who understands how things work, so that everything else is foreign tech to be shunned. If you say someone is among the best, that acknowledges that there are other people worth learning from.

The strategy of Leverage involved doing things like holding EA Global, which is about Leverage having part of its impact through helping other organizations. There's no "you are either with us or against us" vibe, but rather a willingness to help other organizations without requiring them to buy all of Geoff's beliefs, and to interact freely with them.

I think the way the term cult (or euphemisms like “high-demand group”) has been used by the OP and by many commenters in this thread is extremely unhelpful and, I suspect, not in keeping with the epistemic standards of this community.

At its core, labeling a group as a cult is an out-grouping power move used to distance the audience from that group’s perspective. You don’t need to understand their thoughts, explain their behavior, form a judgment on their merits. They’re a cult. 

This might be easier to see when you consider how, from an outside perspec...

This might be easier to see when you consider how, from an outside perspective, many behaviors of the Rationality community that are, in fact, fine might seem cultish. Consider, for example, the numerous group houses, hero-worship of Eliezer, the tendency among Rationalists to hang out only with other Rationalists, the literal take over the world plan (AI), the prevalence of unusual psychological techniques (e.g., rationality training, circling), and the large number of other unusual cultural practices that are common in this community. To the outside world, these are cult-like behaviors. They do not seem cultish to Rationalists because the Rationality community is a well-liked ingroup and not a distrusted outgroup. 

 

I think there's actually been a whole lot of discourse and thought about Are Rationalists A Cult, focusing on some of this same stuff? I think the most reasonable and true answers to this are generally along the lines of "the word 'cult' bundles together some weird but neutral stuff and some legitimately concerning stuff and some actually horrifying stuff, and rationalists-as-a-whole do some of the weird neutral stuff and occasionally (possibly more often tha...

There is a huge difference between "tendency to hang out with other Rationalists" and having mandatory therapy sessions with your supervisor or having to ask for permission to write a personal blog.

Yeah, 'cult' is a vague term often overused. Yeah, a lot of rationality norms can be viewed as cultish. 

How would you suggest referring to an 'actual' cult - or, if you prefer not to use that term at all, how would you suggest we describe something like Scientology or NXIVM? Obviously those are quite extreme, but I'm wondering if there is 'any' degree of group-controlling traits that you would be comfortable assigning the word cult to? Or if I refer to Scientology as a cult, do you consider this an out-grouping power move used to distance people from Scientology's perspective?

This strikes me as an obviously good question and I'm surprised it hasn't been answered.

I think the way the term cult (or euphemisms like “high-demand group”) has been used by the OP and by many commenters in this thread is extremely unhelpful and, I suspect, not in keeping with the epistemic standards of this community.

No. As demonstrated by this comment by Viliam, the word "cult" refers to a well-defined set of practices used to break people's ability to think rationally. Leverage does not deny using these practices. On the contrary, it appears flagrantly indifferent to the abuse potential. Cult techniques of brainwashing are an attractor of human social behavior. Eliezer Yudkowsky warned about this attractor. Your attempt to redefine "cult" more broadly is a signal that you're bullshitting us.

[-]TAG · 3y
It's useful to be able to conceptualise something that is 50% or 90% of the way to becoming a cult, because then you can jump off.
[-]ChristianKl · 3y
Leverage is not doing everything that Viliam described in his post.

Your mind belongs to the group: in the description above there's no mention of people needing to confess sins.

A sacred science: Leverage did not have an intellectual environment that disallowed doubt.

Map over the territory: there's no assertion of that in the common-knowledge facts, and I doubt it's true for Leverage.
[-]Viliam · 3y
They call it "Belief Reporting", it's described in one of the documents that were removed from Internet Archive. The members are (were?) supposed to do it regularly with their manager. That is like "auditing" in Scientology, except instead of using an e-meter they rely on nerds being pathologically honest.
[-]ChristianKl · 3y
There's no inherent need to confess having violated any rules and committed sins in belief reporting. It's a debugging technique, and while you can use any debugging technique to debug someone having committed sins, no one here who has closer information about Leverage charged that they do that.

Scientology actually does force people to confess sins when they commit what they consider ethics violations (Scientology calls their code of conduct "ethics"). Anyone involved in Scientology would easily classify what Scientology does as including a need to confess sins. On the other hand, that's far from how the participants of belief reporting sessions at Leverage likely thought about it. At the moment there's no source saying that anybody in Leverage got the impression that this is what happened to them.

It's quite toxic for rational discussion to make those accusations instead of focusing on the facts that are actually out in the open.
[-]tcheasdfjkl · 3y
What's the content of belief reporting?

I learned belief reporting from a person who attended a Leverage workshop and haven't had any direct face-to-face exposure to Leverage.

Belief reporting is a debugging technique. You have a personal issue you want to address. Then you look at related beliefs. 

Leverage found that if someone sets an intention of "I will tell the truth" and then speaks out a belief like "I'm a capable person" that they don't believe (at a deep level), they will have a physical sensation of resistance.

Afterwards, there's an attempt to trace the belief to its roots. The person can then speak out various forms of "I'm not a capable person because X" and "I'm not a capable person because Y". Then the process gets applied recursively to seek the root. Often this uncovers some confusion at the base of the belief; once the confusion is uncovered, it's possible to work back up the tree to get rid of the "I'm not a capable person" belief and switch it into "I'm a capable person".

This often leads to discovering that one holds, at a deep level, beliefs that one's System 2 considers silly but that are still the base of other beliefs and affect one's actions.

Thanks for the description!

In my opinion, this sounds interesting as a confidential voluntary therapy, but Orwellian when:

Members who were on payroll were expected to undergo charting/debugging sessions with a supervisory "trainer", and to "train" other members. The role of trainer is something like "manager + therapist": that is, both "is evaluating your job performance" and "is doing therapy on you".

So, your supervisor is debugging your beliefs, possibly related to your job performance, and you are supposed to not only tell the truth, but also "seek for the root"... and yet, in your opinion, this does not imply "having to confess violation of the rules or committed sins"?

What exactly happens when you start having doubts about the organization or the leader, and as a result your job performance drops, and then you are having the session with your manager? Would you admit, truthfully, "you know, recently I started having some doubts about whether we are really doing our best to improve the world, or just using the effective altruist community as a fishing pond for people who are idealistic and willing to sacrifice... and I guess these thoughts distract me from my tasks", and then your therapist/manager is going to say... what?

[-]ChristianKl · 3y
Nothing written above suggests that doubt about central strategy would have been seen as a sin, especially when it isn't necessarily System 2 endorsed. It's my understanding that talking about the theories of change through which Leverage is going to have an effect on the world was one of the main activities Leverage engaged in.

Besides, the word "sin" is generally about taking actions that violate the norms of an organization. In the Scientology context, for example, it's a sin to watch a documentary about Scientology on normal TV; in Christianity, masturbation would be a sin. Leverage doesn't have a similar behavioral code that declares certain actions sins that have to be confessed.

Role conflicts between being a manager and a therapist can easily produce problems, but analysing them through a frame of "confessing sins" is not a useful lens for thinking coherently about the involved problems.
[-]tcheasdfjkl · 3y
Interesting, thanks!
[-]ooverthinkk · 3y
You missed the part where this person was pointing out that there is Deliberately Vague Language used by the OP. Imo, this language doesn't create enough of a structure for commenters to construct an adequate dialogue about several sub-topics in this thread. Also, what's "flagrantly indifferent" about Larissa wanting to hear out people who feel wronged? You seem to be quite upset by all of this, why not reach out and let her know? 
[-]cousin_it · 3y
Nah, he's alright. If someone calls a cult a cult, that's not a reason to call them upset. Plus, he writes about plenty of other things; you're the one with the new account made only to defend Leverage.

you're the one with the new account made only to defend Leverage

The social pressure against defending Leverage is in the air, so anonymity shouldn't be held against someone who does that, it's already bad enough that there is a reason for anonymity.

[-]ooverthinkk · 3y
If questioning the "rationality" of the discourse is defending them, then what do you suppose you're doing? I just don't see the goals or values of this community reflected here, and it confuses me. That's why I made this account - to get clarity on what seems to me to be a total anomaly in how rationalist community members (at least as far as signaling goes, I guess) conduct themselves. Because I've only seen what is classifiable as a hysterical response to this topic, the Leverage topic.