All of Kerry Vaughan's Comments + Replies

Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

'Phenomenal consciousness exists'.

Sorry if this comes off as pedantic, but I don't know what this means. The philosopher in me keeps saying "I think we're playing a language game," so I'd like to get as precise as we can. Is there a paper or SEP article or blog post or something that I could read which defines the meaning of this claim or the individual terms precisely? 

Because the logical structure is trivial -- Descartes might just as well have asked 'could a deceiver make 2 + 2 not equal 4?'

[...]

I'd guess also truths of arithmetic, and such? If Geo

... (read more)
TAG · 1mo · 1 karma

It doesn't have to mean anything strange or remarkable. It's basically ordinary waking consciousness. If you are walking around noticing sounds, colours, and smells, that's phenomenal consciousness. As opposed to things that actually are strange, like blindsight or sleepwalking. But it can be overloaded with other, more controversial, ideas, such as the idea that it is incorrigible (how we got on to the subject), or necessarily non-physical.
dxu · 1mo · 6 karma

My view of Descartes' cogito is either that (A) it is a standard claim, in which case all the usual rules apply, including the one about infinite certainty not being allowed, or (B) it is not a standard claim, in which case the usual rules don't apply, but also it becomes less clear that the cogito is actually a thing which can be "believed" in a meaningful sense to begin with.

I currently think (B) is much closer to being the case than (A). When I try to imagine grounding and/or operationalizing the cogito by e.g. designing a computer program that makes the same claim for the same psychological reasons, I run into a dead end fairly quickly, which in my experience is strong evidence that the initial concept was confused and/or incoherent. Here's a quick sketch of my reasoning:

Suppose I have a computer program that, when run, prints "I exist" onto the screen. Moreover, suppose this computer program accomplishes this by means of a simple print statement; there is no internal logic, no if-then conditional structure, that modulates the execution of the print statement, merely the naked statement, which is executed every time the program runs.

Then I ask: is there a meaningful sense in which the text the program outputs is correct? It seems to me, on the one hand, that the program cannot possibly be wrong here. Perhaps the statement it has printed is meaningless, but that does not make it false; and conversely, if the program's output were to be interpreted as having meaning, then it seems obvious that the statement in question ("I exist") is correct, since the program does in fact exist and was run.

But this latter interpretation feels very suspicious to me indeed, since it suggests that we have managed to create a "meaningful" statement with no truth-condition; by hypothesis there is no internal logic, no conditional structure, no checks that the program administers before outputting its claim to exist. This does not (intuitively) seem to me as though it ca [...]
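(A minimal sketch of the program dxu describes; this is an editorial illustration, not part of the original comment, and the output string is as given above:)

```python
# The entire program: one naked print statement. No internal logic and
# no if-then conditional structure modulates its execution; the
# statement runs every time the program runs.
print("I exist")
```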
Rob Bensinger · 1mo · 2 karma

We're all philosophers here, this is a safe space for pedantry. :)

Below, I'll use the words 'phenomenal property' and 'quale' interchangeably. An example of a phenomenal property is the particular redness of a particular red thing in my visual field. Geoff would say he's certain, while he's experiencing it, that this property is instantiated. I would say that there's no such property, though there is a highly similar property that serves all the same behavioral/cognitive/functional roles (and just lacks that extra 'particular redness', and perhaps that extra 'inwardness / inner-light-ness / interiority / subjectivity / perspectivalness' -- basically, lacks whatever aspects make the hard problem seem vastly harder than the 'easy' problems of reducing other mental states to physical ones).

This, of course, is a crazy-sounding view on my part. It's weird that I even think Geoff and I have a meaningful, substantive disagreement. Like, if I don't think that Geoff's brain really instantiates qualia, then what do I think Geoff even means by 'qualia'? How does Geoff successfully refer to 'qualia', if he doesn't have them? Why not just say that 'qualia' refers to something functional?

Two reasons:

* I think hard-problem intuitions are grounded in a quasi-perceptual illusion, not a free-floating delusion. If views like Geoff's and David Chalmers' were grounded in a free-floating delusion, then we would just say 'they have a false belief about their experiences' and stop there. If we're instead positing that there's something analogous to an optical illusion happening in people's basic perception of their own experiences, then it makes structural sense to draw some distinction between 'the thing that's really there' and 'the thing that's not really there, but seems to be there when we fall for the illusion'. I may not think that the latter concept really and truly has the full phenomenal richness that Geoff / Chalmers / et [...]
Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

On reflection, it seems right to me that there may not be a contradiction here. I'll post something later if I conclude otherwise.

(I think I got a bit too excited about a chance to use the old philosopher's move of "what about that claim itself.")

Rob Bensinger · 1mo · 2 karma

:) Yeah, it is an interesting case but I'm perfectly happy to say I'm not-maximally-certain about this.
Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

It's not clear what "I" means here . . .

Oh, sorry, this was a quote from Descartes; it's the closest thing that actually appears in his text to "I think, therefore I am" (which doesn't expressly appear in the Meditations).

Descartes's idea doesn't rely on any claims about persistent psychological entities (that would require the supposition of memory, which Descartes isn't ready to accept yet!). Instead, he postulates an all-powerful entity that is specifically designed to deceive him and tries to determine whether anything at all can be known given... (read more)

Rob Bensinger · 1mo · 3 karma

'Can a deceiver trick a thinker into falsely believing they're a thinker?' has relevantly the same structure as 'Can you pick up a box that's not a box?' -- it deductively follows that 'no', because the thinker's belief in this case wouldn't be false. (Though we've already established that I don't believe in infinite certainty. I forgive Descartes for living 60 years before the birth of Thomas Bayes, however. :) And Bayes didn't figure all this out either.)

Because the logical structure is trivial -- Descartes might just as well have asked 'could a deceiver make 2 + 2 not equal 4?' -- I have to worry that Descartes is sneaking in more content than is in fact deducible here. For example, 'a thought exists, therefore a thinker exists' may not be deductively true, depending on what is meant by 'thought' and 'thinker'. A lot of philosophers have commented that Descartes should have limited his conclusion to 'a thought exists' (or 'a mental event exists'), rather than 'a thinker exists'.

'Phenomenal consciousness exists'.

I'd guess also truths of arithmetic, and such? If Geoff is Bayesian enough to treat those as probabilistic statements, that would be news to me!
Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

I don't think people should be certain of anything

What about this claim itself?

dxu · 1mo · 8 karma

[Disclaimer: not Rob, may not share Rob's views, etc. The reason I'm writing this comment nonetheless is that I think I share enough of Rob's relevant views here (not least because I think Rob's views on this topic are mostly consonant with the LW "canon" view) to explain. Depending on how much you care about Rob's view specifically versus the LW "canon" view, you can choose to regard or disregard this comment as you see fit.]

I don't think this is the gotcha [I think] you think it is. I think it is consistent to hold that (1) people should not place infinite certainty in any beliefs, including meta-beliefs about the normative best way to construct beliefs, and that (2) since (1) is itself a meta-belief, it too should not be afforded infinite certainty.

Of course, this conjunction has the interesting quality of feeling somewhat paradoxical, but I think this feeling doesn't stand up to scrutiny. There doesn't seem to me to be any actual contradiction you can derive from the conjunction of (1) and (2); the first seems simply to be a statement of a paradigm that one currently believes to be normative, and the second is a note that, just because one currently believes a paradigm to be normative, does not necessarily mean that that paradigm is normative. The fact that this second note can be construed as coming from the paradigm itself does not undermine it in my eyes; I think it is perfectly fine for paradigms to exist that fail to assert their own correctness.

I think, incidentally, that there are many people who [implicitly?] hold the negation of the above claim, i.e. they hold that (3) a valid paradigm must be one that has faith in its own validity. The paradigm may still turn out to be false, but this ought not be a possibility that is endorsed from inside the paradigm; just as individuals cannot consistently assert themselves to be mistaken [https://en.wikipedia.org/wiki/Moore%27s_paradox] about something (even if they are in fact mistaken), the inside of a pa [...]
Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

This comment is excellent. I really appreciate it. 

I probably share some of your views on the "no no no no (yes),  no no no no (yes), no no no no (yes)" thing, and we don't want to go too far with it, but I've come to like it more over time. 

(Semi-relatedly: I think I unfairly rejected the Sequences when I first encountered them, for something like this kind of stylistic objection. Coming from a philosophical background I was like "Where are the premises? What is the argument? Why isn't this stated more precisely?" Over time I've come to appreciate the psychological effect of these kinds of writing styles and to value that more than raw precision.)

Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

It seems to me that you're arguing against a view in the family of claims that includes "It seems like the one thing I can know for sure is that I'm having these experiences," but I'm having trouble determining the precise claim you are refuting. I think this is because I'm not sure which claims are meant precisely and which are meant rhetorically or directionally.

Since this is a complex topic with lots of potential distinctions to be made, it might be useful to determine your views on a few different claims in the family of "It seems like the on... (read more)

Rob Bensinger · 1mo · 3 karma

I don't think people should be certain of anything; see How to Convince Me That 2 + 2 = 3 [https://www.lesswrong.com/posts/6FmqiAgS8h4EJm86s/how-to-convince-me-that-2-2-3]; Infinite Certainty [https://www.lesswrong.com/s/FrqfoG3LJeCZs96Ym/p/ooypcn7qFzsMcy53R]; and 0 and 1 Are Not Probabilities [https://www.lesswrong.com/s/FrqfoG3LJeCZs96Ym/p/QGkYCwyC7wTDyt3yT].

We can build software agents that live in virtual environments we've constructed, and we can program the agents to never make certain kinds of mistakes (e.g., never make an invalid reasoning step, or never misperceive the state of tiles they're near). So in that sense, there's nothing wrong with positing 'faculties that always get the right answer in practice', though I expect these to be much harder to evolve than to design.

But a software agent in that environment shouldn't be able to arrive at 100% certainty that one of its faculties is infallible, if it's a smart Bayesian. Even we, the programmers, can't be 100% certain that we programmed the agent correctly. Even an automated proof of correctness won't get us to 100% certainty, because the theorem-prover's source code could always have some error (or the hardware it's running on could have been struck by a stray gamma ray, etc.)

It's not clear what "I" means here, but it seems fine to say that there's some persistent psychological entity roughly corresponding to the phrase "Rob Bensinger". :) I'm likewise happy to say that "thinking", "experience", etc. can be interpreted in (possibly non-joint-carving) ways that will make them pick out real things.
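(A minimal sketch of Rob's point about automated proofs; this is an editorial illustration, not Rob's, and every error rate below is hypothetical. The point is only that the fallible links in the verification chain compound, so rational confidence stays strictly below 1:)

```python
# Hypothetical per-link failure probabilities for a "proven correct" agent.
p_spec_mismatch = 1e-6   # the theorem proved isn't the property we wanted
p_prover_bug    = 1e-7   # the theorem-prover's own source code has an error
p_hardware      = 1e-9   # e.g., a stray gamma ray flips a bit during the run

# Probability that at least one link in the chain fails,
# assuming (optimistically) that the failures are independent.
p_failure = 1 - (1 - p_spec_mismatch) * (1 - p_prover_bug) * (1 - p_hardware)

# Rational confidence in the agent's correctness is capped strictly below 1.
print(f"P(agent actually correct) <= {1 - p_failure:.9f}")  # ~0.999998899
```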
Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

Rob: Where does the reasoning chain from 1 to 3a/3b go wrong in your view? I get that you think it goes wrong in that the conclusions aren't true, but what is your view about which premise is wrong or why the conclusion doesn't follow from the premises?

In particular, I'd be really interested in an argument against the claim "It seems like the one thing I can know for sure is that I'm having these experiences."

TAG · 1mo · 1 karma

If you want to claim some definitive disproof of aspect dualism, a minimal requirement would be to engage with it. I've tried talking to you about it several times, and each time you cut off the conversation at your end.
Rob Bensinger · 1mo · 6 karma

I think that the place the reasoning goes wrong is at 1 ("It seems like the one thing I can know for sure is that I'm having these experiences."). I think this is an incredibly intuitive view, and a cornerstone of a large portion of philosophical thought going back centuries. But I think it's wrong.

(At least, it's wrong -- and traplike -- when it's articulated as "know for sure". I have no objection to having a rather high prior probability that one's experiences are real, as long as a reasonably large pile of evidence to the contrary could change your mind. But from a Descartes-ish perspective, 'my experiences might not be real' is just as absurd as 'my experiences aren't real'; the whole point is that we're supposed to have certainty in our experiences.)

Here's how I would try to motivate 'illusionism is at least possibly true' today, and more generally 'there's no way for a brain to (rationally) know with certainty that any of its faculties are infallible':

_________________________________________________

First, to be clear: I share the visceral impression that my own consciousness is infallibly manifest to me, that I couldn't possibly not be having this experience. Even if all my beliefs are unreliable, the orange quale itself is no belief, and can't be 'wrong'. Sure, it could bear no resemblance to the external world -- it could be a hallucination. But the existence of hallucinations can't be a hallucination, trivially. If it merely 'seems to me', perceptually, as though I'm seeing orange -- well, that perceptual seeming is the orange quale! In some sense, it feels as though there's no 'gap' between the 'knower' and the 'known'. It feels as though I'm seeing the qualia, not some stand-in representation for qualia that could be mistaken.

All of that feels right to me, even after 10+ years of being an illusionist. But when I poke at it sufficiently, I think it doesn't actually make sense.

Intuition pump 1: How would my physical brain, hands, etc. k [...]
Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

OK, excellent this is also quite helpful. 

For both my own thinking and for high-trust conversations, I have a norm that's something like "idea generation before content filter", which is designed to allow one to think uncomfortable thoughts (and sometimes say them) before filtering things out. I don't have this norm for "things I say on the public internet" (or any equivalent norm). I'll have to think a bit about what norms actually seem good to me here.

I think I can be on board with a norm where one is willing to say rude or uncomfortable things provided... (read more)

Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

That seems basically fair. 

An unendorsed part of my intention is to complain about the comment since I found it annoying. Depending on how loudly that reads as being my goal, my comment might deserve to be downvoted to discourage focusing the conversation on complaints of this type.

The endorsed part of my intention is that the LW conversations about Leverage 1.0 would likely benefit from commentary by people who know what actually went on in Leverage 1.0. Unfortunately, the set of "people who have knowledge of Leverage 1.0 and are also comfortable on ... (read more)

Viliam · 1mo · 6 karma

I hope you will feel comfortable here. I think you are following the LW norms quite okay. You seem to take the karma too seriously, but that's what new users are sometimes prone to do; karma is an important signal, but it also inevitably contains noise; in the long term it usually seems to work okay. If that means something for you, your comments are upvoted a lot.

I apologize for the annoying style of my comment. I will try to avoid doing so in the future, though I cannot in good faith make a promise to do so; sorry about that. I sincerely believe that Geoff is a dangerous person, and I view his actions with great suspicion. This is not meant as an attack on you. Feel free to correct me whenever I am factually wrong; I prefer being corrected to staying mistaken. (Also, thanks to both Rob and Said for doing what they believed was the right thing.)

[Biting my tongue hard to avoid a sarcastic response. Trying to channel my inner Duncan. Realizing that I am actually trying to write a sarcastic response using mock-Duncan's voice. Sheesh, this stuff is difficult... Am I being meta-sarcastic now? By the way, Wikipedia says that sarcasm is illegal in North Korea; I am not making this up [https://en.wikipedia.org/wiki/Sarcasm#Legality]...]

I am under the impression that (some) Leverage members signed non-disclosure agreements. Therefore, when I observe the lack of Leverage supporters on LW, there are at least two competing explanations matching the known data, and I am not sure how to decide which one is closer to reality:

* the rationalist community and LW express a negative attitude towards people supporting Leverage, so they avoid the environment they perceive as unfriendly;
* people involved with Leverage cannot speak openly about Leverage... maybe only about some aspects of it, but not discussing Leverage at all helps them stay on the safe side;

and perhaps, also some kind of "null hypothesis" is worth considering, such as:

* LW only attracts a small fraction o [...]
Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

Thanks a lot for taking the time to write this. The revised version makes it clearer to me what I disagree with and how I might go about responding.

An area of overlap that I notice between Duncan-norms and LW norms is sentences like this:
 

(This is not me being super charitable, but: it seems to me that the whole demons-and-crystals thing, which so far has not been refuted, to my knowledge, is also a start.  /snark)

Where the pattern is something like: "I know this is uncharitable/rude, but [uncharitable/rude thing]. Where I come from the caveat i... (read more)

If I tried to make it explicit, I guess the rudeness disclaimer means that the speaker believed there was a politeness-clarity tradeoff, and decided to sacrifice politeness in order to maximize clarity.

If the observer appreciates the extra clarity, and thinks the sacrifice was worth it, the rudeness disclaimer serves as a reminder that they might want to correspondingly reduce the penalty they typically assign for rudeness.

Depending on context, the actual observer may be the addressee and/or a third party. So, if the disclaimer has no effect on you, maybe ... (read more)

Happy to try.

There are sort of two parts to this, but they overlap and I haven't really teased them apart, so sorry if this is a bit muddled.

I think there's a tension between information and adherence-to-norms.

Sometimes we have a rude thought.  Like, it's not just that its easiest expression is rude, it's that the thought itself is fundamentally rude.  The most central example imo is when you genuinely think that somebody is wrong about themselves/their own thought processes/engaging in self-deception/in the grips of a blind spot.  When your... (read more)

Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

As of this writing (November 9, 2021), this comment has 6 Karma across 11 votes. As a newbie to LessWrong with only a general understanding of LessWrong norms, I find it surprising that the comment is positive. I was wondering if those who voted on this comment (or who have an opinion on it) would be interested in explaining what Karma score this comment should have and why.

My view based on my own models of good discussion norms is that the comment is mildly toxic and should be hovering around zero karma or in slightly negative territory for the following reason... (read more)

"6 Karma across 11 votes" is, like, not good. It's about what I'd expect from a comment that is "mildly toxic [but] does raise [a] valid consideration" and "none of the offenses ... are particularly heinous", as you put it. (For better or worse, comments here generally don't get downvoted into the negative unless they're pretty heinous; as I write this only one comment on this post has been voted to zero, and that comment's only response describes it as "borderline-unintelligible".) It sounds like you're interpreting the score as something like qualified a... (read more)

Rob Bensinger · 1mo · 5 karma

FWIW I downvoted Viliam's comment soon after he posted it, and have strong-downvoted it now that it has more karma.

I can't speak to either real-Viliam or the people upvoting or downvoting the comment, but here's my best attempt to rewrite the comment in accordance with Duncan-norms (which overlap with but are not the same as LessWrong norms).  Note that this is based off my best-guess interpretation of what real-Viliam was going for, which may not be what real-Viliam wanted or intended.  Also please note that my attempt to "improve" Viliam's comment should not be taken as a statement about whether or not it met some particular standard (even things that are a... (read more)

Zoe Curzi's Experience with Leverage Research

The most directly 'damning' thing, as far as I can tell, is Geoff pressuring people to sign NDAs.


I received an email from a Paradigm board member on behalf of Paradigm and Leverage that aims to provide some additional clarity on the information-sharing situation here. Since the email specifies that it can be shared, I've uploaded it to my Google Drive (with some names and email addresses redacted). You can view it here.

The email also links to the text of the information-sharing agreement in question with some additional annotations.

[Disclosure: I work at L... (read more)

I don't know how realistic this worry is, but I'm a bit worried about scenarios like:

  1. A signatory doesn't share important-to-share info because they interpret the Information Arrangement doc (even with the added comments) as too constraining.

    My sense is that there's still a lot of ambiguity about exactly how to interpret parts of the agreement? And although the doc says it "is meant to be based on norms of good behavior in society" I don't see a clause explicitly allowing people's personal consciences to supersede the agreement. (I might just have missed it
... (read more)

[...] The most important thing we want to clarify is that as far as we are concerned, at least, individuals should feel free to share their experiences or criticise Geoff or the organisations.

[... T]his document was never legally binding, was only signed by just over half of you, and almost none of you are current employees, so you are under no obligation to follow this document or the clarified interpretation here. [...]

I'm really happy to see this! Though I was momentarily confused by the "so" here -- why would there be less moral obligation to uphold an... (read more)

Thanks for sharing this!

I believe this is public information I could find by looking up your 990s, but could you or someone list the Board members of Leverage / Paradigm, including changes over time?

I do applaud explicitly clarifying that people are free to share their own experiences.

Common knowledge about Leverage Research 1.0

Instead, what I'd be curious to know is whether they have the integrity to be proactively transparent about past mistakes, radically change course when it comes to potentially harmful practices, and refrain from using any potentially harmful practices in cases where it might be advantageous on a Machiavellian-consequentialist assessment.

I think skepticism about nice words without difficult-to-fake evidence is warranted, but I also think some of this evidence is already available.

For example, I think it's relatively easy to verify that Leverage is a radica... (read more)

I think the fact that it is now a four-person remote organization doing mostly research on science, as opposed to an often-live-in organization with dozens of employees doing intimate psychological experiments as well as following various research paths, tells me that you are essentially a different organization, and the only commonalities are the name and the fact that Geoff is still the leader.

Common knowledge about Leverage Research 1.0

This is a good point. I think I reacted too harshly. I've added an apology to orthonormal to the original comment.

Common knowledge about Leverage Research 1.0

Assuming something like this represents your views, Freyja, I think you've handled the situation quite well.

I hope you can see how that is quite different from the comment I was replying to, which is from someone who appears to have met Geoff once. I'm sure you can similarly imagine how you would feel if people made comments like the one from orthonormal about friends of yours without knowing them.

  1. Thank you for scaling back your initial response.
  2. I've interacted with Geoff a few times since 2012, and continued to have that bad feeling about him. 
  3. I wanted to let people know that these impressions started even prior to Leverage, and that I know I'm not retconning my memory, because I remember a specific conversation in summer 2014 about my distrust of Leverage (and I believe that wasn't the first such conversation). This post would not have surprised 2012!me; the signs may have been subjective but they were there.
  4. Without getting to the object leve
... (read more)

He said that he had significant discussions about Geoff with people near Leverage afterwards that damaged those relationships. That suggests that the sense was very strong and he had talked about it with people who actually know him more deeply.

Real people can be, and often are, extremely dangerous, and it is not rude to describe dangerous people as acting in dangerous ways; or if it is, then it is a valuable form of rudeness.

I have a sincere question for you, Kerry, because you seem to be upset by the approach commenters here are taking to talking about this issue and the people involved, and people here are openly discussing the character of your employer, which I can imagine to be really painful.

If your sister or brother or your significant other had become enmeshed in a controlling group and you believed the group and in particular its leader had done them serious psychological harm, how would you want people to talk about the group and its leader in public, after the fact?... (read more)

Common knowledge about Leverage Research 1.0

Leverage keeps coming up because Geoff Anders (and associates) emit something epistemically and morally corrosive and are gaslighting the commons about it. And Geoff keeps trying to disingenuously hit the reset button and hide it, to exploit new groups of people. That’s what people are responding to and trying to counteract in posts like the OP.

This seems pretty unfair to me and I believe we’re trying quite hard to not hide the legacy of Leverage 1.0. For example, we (1) specifically chose to keep the Leverage name; (2) are transparent about our intention ... (read more)

Common knowledge about Leverage Research 1.0

I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.

I'm not hiding my connection to Leverage, which is why I used my real name, mentioned that I work at Leverage in other comments, and used "we" in connection with a link to Leverage's case studies. I used "they" to refer to Leverage 1.0 since I didn't work at Leverage during that time.

Common knowledge about Leverage Research 1.0

I don't think that's my account actually. It's entirely possible that I never created a LW account before now.

Common knowledge about Leverage Research 1.0

This demand for secrecy is a blatant excuse used to obstruct oversight and to prevent peer review. What you're doing is the opposite of science.

Interestingly, "peer review" occurs pretty late in the development of scientific culture. It's not something we see in our case studies on early electricity, for example, which currently cover the period between 1600 and 1820. 

What we do see throughout this history is the norm of researchers sharing their findings with others interested in the same topics. It's an open question whether Leverage 1.0 violated th... (read more)

lsusr · 2mo · 2 karma

I want to draw attention to the fact that "Kerry Vaughan" is a brand new account that has made exactly three comments, all of them on this thread. "Kerry Vaughan" is associated with Leverage. [https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-facts-about-leverage-research-1-0-1?commentId=ThqA9hjghodMbXPHk] "Kerry Vaughan"'s use of "they" to describe Leverage is deliberately misleading.

If "it's not unscientific because it merely takes science back 200-400 years" is the best defense that LEVERAGE ITSELF can give for its own epistemic standards then any claims it has to scientific rigor are laughable. 1600 was the time of William Shakespeare.

Edit: I'm not saying that science in 1600 was laughable. I'm saying that performing 1600-style science today is laughable.
Common knowledge about Leverage Research 1.0

I think the way the term cult (or euphemisms like “high-demand group”) has been used by the OP and by many commenters in this thread is extremely unhelpful and, I suspect, not in keeping with the epistemic standards of this community.

At its core, labeling a group as a cult is an out-grouping power move used to distance the audience from that group’s perspective. You don’t need to understand their thoughts, explain their behavior, form a judgment on their merits. They’re a cult. 

This might be easier to see when you consider how, from an outside perspec... (read more)

Yeah, 'cult' is a vague term often overused. Yeah, a lot of rationality norms can be viewed as cultish. 

How would you suggest referring to an 'actual' cult - or, if you prefer not to use that term at all, how would you suggest we describe something like Scientology or NXIVM? Obviously those are quite extreme, but I'm wondering if there is 'any' degree of group-controlling traits that you would be comfortable assigning the word cult to. Or if I refer to Scientology as a cult, do you consider this an out-grouping power move used to distance people from Scientology's perspective?

This might be easier to see when you consider how, from an outside perspective, many behaviors of the Rationality community that are, in fact, fine might seem cultish. Consider, for example, the numerous group houses, hero-worship of Eliezer, the tendency among Rationalists to hang out only with other Rationalists, the literal take-over-the-world plan (AI), the prevalence of unusual psychological techniques (e.g., rationality training, circling), and the large number of other unusual cultural practices that are common in this community. To the outside worl

... (read more)

I think the way the term cult (or euphemisms like “high-demand group”) has been used by the OP and by many commenters in this thread is extremely unhelpful and, I suspect, not in keeping with the epistemic standards of this community.

No. As demonstrated by this comment by Viliam, the word "cult" refers to a well-defined set of practices used to break people's ability to think rationally. Leverage does not deny using these practices. To the contrary, it appears flagrantly indifferent to the abuse potential. Cult techniques of brainwashing are an attractor of... (read more)

Common knowledge about Leverage Research 1.0

I appreciate the edit, Viliam.

I know that it was a meme about Leverage 1.0 that it was impossible to understand, but I think that is pretty unfair today. If anyone is curious, here are some relevant links:

... (read more)