
Center for Applied Rationality (CFAR)Machine Intelligence Research Institute (MIRI)DramaLeverage Research
Personal Blog


My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

by jessicata
16th Oct 2021
26 min read
[-]Scott Alexander4y*4051

I want to add some context I think is important to this.

Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.

Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don't think he thinks they're worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it's especially galling that they're just as bad).  Since then, he's tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combinat... (read more)

[-]devi4y990

Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC

Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I'll try to explain some context for the record.

In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than concern about malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which, in retrospect, I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, reinforcing each other's views. I believe (less certain) I also broached the topic with Michael and/or Anna at some point which probably went... (read more)

[-]jessicata4y*900

I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he's "causing psychotic breaks" and "jailbreaking people" through conversation, "that listening too much to Vassar [causes psychosis], predictably") isn't obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.

[-]Scott Alexander4y1030

Yes, I agree with you that all of this is very awkward.

I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.

But we have to admit at least small violations of it even to get the concept of "cult". Not just the sort of weak cults we're discussing here, but even the really strong cults like Heaven's Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven's Gate is bad for them, and leave. When we use the word "cult", we're implicitly agreeing that this doesn't always work, and we're bringing in creepier and less comprehensible ideas like "charisma" and "brainwashing" and "cognitive dissonance".

(and the same thing with the concept of "emotionally abusive relationship")

I don't want to call the Vassarites a cult because I'm sure someone will confront me with a Cult Checklist that they don't meet, but I think that it's not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it's weird that you can get tha... (read more)

[-]ChristianKl4y572

It seems to me like in the case of Leverage, working 75 hours per week reduced the time they could have used to apply Reason and conclude that they were in a system that was bad for them.

That's very different from someone having a few conversations with Vassar, adopting a new belief, and spending a lot of time reasoning about it alone, with the belief staying stable without being embedded in a strong environment that makes independent thought hard by keeping people busy.

A cult is by its nature a social institution, not just a meme that someone can pass around in a few conversations.

8Viliam4y
Perhaps the proper word here might be "manipulation" or "bad influence".
[-]Holly_Elmore4y120

I think "mind virus" is fair. Vassar spoke a lot about how the world as it is can't be trusted. I remember that many of the people in his circle spoke, seemingly apropos of nothing, about how bad involuntary commitment is, so that by the time someone was psychotic their relationship with psychiatry and anyone who would want to turn to psychiatry to help them was poisoned. Within the envelope of those beliefs you can keep a lot of other beliefs safe from scrutiny. 

4ChristianKl4y
The thing with "bad influence" is that it's a pretty value-laden notion. In a religious town, the biology teacher who tells the children about evolution and explains how it makes sense that our history goes back a lot further than a few thousand years is reasonably described as a bad influence by the parents. The biology teacher gets the children to doubt the religious authorities. Those children can then also be a bad influence on others by getting them to doubt authorities. In a similar way, Vassar gets people to question other authorities and social conventions, and those ideas can then be passed on. Vassar speaks about things like Moral Mazes. Memes like that make people distrust institutions. These are the kind of bad influences that can get people to quit their jobs. Talking about the biology teacher as if they intend to start an evolution cult feels a bit misleading.
[-]jessicata4y*412

It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.

Let's consider a disjunction: 1: There isn't a big effect here, 2: There is a big effect here.

In case 1:

  • It might make sense to discourage people from talking too much about "charisma", "auras", "mental objects", etc, since they're pretty fake, really not the primary factors to think about when modeling society.
  • The main problem with the relevant discussions at Leverage is that they're making grandiose claims of mind powers and justifying e.g. isolating people on the basis of these, not actual mental influence.
  • The case made against Michael, that he can "cause psychotic breaks" by talking with people sometimes (or, in the case of Eric B, by talking sometimes with someone who is talking sometimes with the person in question), has no merit. People are making up grandiose claims about Michael to justify scapegoating him, it's basically a witch hunt. We should have a much more moderated, holistic picture where ther
... (read more)
[-]Scott Alexander4y310

I agree I'm being somewhat inconsistent; I'd rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I'm trying to figure out what went on in these cases in more detail and will probably want to ask you a lot of questions by email if you're open to that.

8jessicata4y
Yes, I'd be open to answering email questions.
[-]Natália4y230

This misses the fact that people’s ability to negatively influence others might vary very widely, making it so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.

[-]TekhneMakre4y100

If it's reasonable to worry about the .01%, it's reasonable to ask how the ability varies. There's some reason, some mechanism. This is worth discussing even if it's hard to give more than partial, metaphorical hypotheses. And if there are these .01% of very strong influencers, that is still an exception to strong liberal individualism.

5jessicata4y
That would still admit some people at Leverage having significant mental influence, especially if they got into weird mental tech that almost no one gets into. A lot of the weirdness is downstream of them encountering "body workers" who are extremely good at e.g. causing mental effects by touching people's back a little; these people could easily be extremal, and Leverage people learned from them. I've had sessions with some post-Leverage people where it seemed like really weird mental effects are happening in some implicit channel (like, I feel a thing poking at the left side of my consciousness and the person says, "oh, I just did an implicit channel thing, maybe you felt that"), I've never experienced effects like that (without drugs, and not obviously on drugs either though the comparison is harder) with others including with Michael, Anna, or normal therapists. This could be "placebo" in a way that makes it ultimately not that important but still, if we're admitting that 0.01% of people have these mental effects then it seems somewhat likely that this includes some Leverage people. Also, if the 0.01% is disproportionately influential (which, duh), then getting more detailed models than "charisma" is still quite important.
-1[comment deleted]4y
[-]Benquo4y330

One important implication of "cults are possible" is that many normal-seeming people are already too crazy to function as free citizens of a republic.

In other words, from a liberal perspective, someone who can't make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren't competent to make their own life decisions. They're already not free, but in the grip of whatever attractor they found first.

Personally I bite the bullet and admit that I'm not living in a society adequate to support liberal democracy, but instead something more like what Plato's Republic would call tyranny. This is very confusing because I was brought up to believe that I lived in a liberal democracy. I'd very much like to, someday.

[-]Holly_Elmore4y208

I think there are less extreme positions here. Like "competent adults can make their own decisions, but they can't if they become too addicted to certain substances." I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.  

5Benquo4y
I think the principled liberal perspective on this is Bryan Caplan's: drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so. I don't think that many people are "fundamentally incapable of being free." But it seems like some people here are expressing grievances that imply that either they themselves or some others are, right now, not ready for freedom of association. The claim that someone is dangerous enough that they should be kept away from "vulnerable people" is a declaration of intent to deny "vulnerable people" freedom of association for their own good. (No one here thinks that a group of people who don't like Michael Vassar shouldn't be allowed to get together without him.)
[-]habryka4y870

drug addicts have or develop very strong preferences for drugs. The assertion that they can't make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.

I really don't think this is an accurate description of what is going on in people's minds when they are experiencing drug dependencies. I've spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went to great lengths trying to tie himself to various masts to stop, and generally expressed a strong preference for somehow being able to self-modify the addiction away, but ultimately failed to do so.

Of course, things might be different for different people, but at least in the one case where I have a very large amount of specific data, this seems like it's a pretty bad model of people's preferences. Based on the private notebooks of his that I found after his death, this also seemed to be his position in purely introspective contexts without obvious social desirability biases. My sense is that he would have strongly preferred someone to somehow take control away from him, in this specific domain of his life.

8Benquo4y
This seems like some evidence that the principled liberal position is false - specifically, that it is not self-ratifying. If you ask some people what their preferences are, they will express a preference for some of their preferences to be thwarted, for their own good. Contractarianism can handle this sort of case, but liberal democracy with inalienable rights cannot, and while liberalism is a political philosophy, contractarianism is just a policy proposal, with no theory of citizenship or education.
2NancyLebovitz4y
https://en.wikipedia.org/wiki/Olivier_Ameisen A sidetrack, but a French cardiologist found that Baclofen (a muscle relaxant) cured his alcoholism by curing the craving. He was surprised to find that it also cured compulsive spending, when he hadn't even realized he had a problem. He had a hard time raising money for an official experiment, it came out inconclusive, and he died before the research got any further.
2Jayson_Virissimo4y
This is more-or-less Aristotle's defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).
[-]Benquo4y380

Aristotle seems (though he's vague on this) to be thinking in terms of fundamental attributes, while I'm thinking in terms of present capacity, which can be reduced by external interventions such as schooling.

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

*As far as I know I didn't know any such people before 2020; it's very easy for members of the educated class to mistake our bubble for statistical normality.

[-]Hazard4y310

Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.

This is very interesting to me! I'd like to hear more about how the two groups' behavior looks different, and also your thoughts on what the difference is that makes the difference: what are the pieces of "being brought up to go to college" that lead to one class of reactions?

[-]CharlieTheBananaKing4y36-8

I have talked to Vassar. While he has a lot of "explicit control over conversations", which could be called charisma, I'd hypothesize that the fallout is actually from his ideas (the charisma/intelligence making him able to argue them credibly).

My hypothesis is the following:  I've met a lot of rationalists + adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of a lot of these people's identity ("I'm an EA person, thus I'm a good person doing important work"). Two anecdotes to illustrate this:
- I recently argued with a committed EA person. Eventually, I started feeling almost-bad about arguing (even though we're both self-declared rationalists!) because I realised that my line of reasoning questioned his entire life. His identity was built deeply on EA; his job was selected to maximize money to give to charity.
- I had a conversation with a few unemployed rationalist computer scientists. I suggested we might start a company together. One answer I got: "Only if it works on the alignment problem; everything else is irrelevant to me."

Vassar very persuasively argues against EA and work done at MIRI/CFAR... (read more)

[-]mic4y110

What are your or Vassar's arguments against EA or AI alignment? This is only tangential to your point, but if EA and AI alignment are not actually important, I'd like to know about it.

[-]ChristianKl4y250

The general argument is that EAs are not really doing what they say they do. One example from Vassar: when it comes to COVID-19, there seems to have been relatively little effective work by EAs. In contrast, Vassar considered giving prisoners access to personal protective equipment the most important thing and organized effectively to make that happen.

EAs created at EA Global an environment where someone who wrote a good paper warning about the risks of gain-of-function research doesn't address that directly but only talks about it indirectly, focusing on more meta-level issues. Instead of having conflicts with people doing gain-of-function research, the EA community mostly ignored its problems and funded work that's in less conflict with the establishment. There's nearly no interest in the EA community in learning from those errors; people would rather avoid conflicts.

If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage-related information.

AI alignment is important, but just because one "works on AI risk" doesn't mean the work actually decreases AI risk. Tying your personal identity to being someone who works to d... (read more)

9NancyLebovitz4y
Did Vassar argue that existing EA organizations weren't doing the work they said they were doing, or that EA as such was a bad idea? Or maybe that it was too hard to get organizations to do it?
[-]jessicata4y172

He argued

(a) EA orgs aren't doing what they say they're doing (e.g. cost effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it's hard to get organizations to do what they say they do

(b) Utilitarianism isn't a form of ethics; it's still necessary to have principles, as in deontology or two-level consequentialism

(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn't well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved

(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact

[-]ChristianKl4y110

If you want, for example, the criticism of GiveWell: Ben Hoffman was employed at GiveWell, and his experiences there suggest that the process by which their reports are made has epistemic problems. If you want the details, talk to him.

The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporate people, and the dynamics that the Immoral Mazes sequence describes play themselves out.

Vassar's actions themselves are about doing altruism more directly: looking for the most powerless people who need help and working to help them. In the COVID case he identified prisoners and then worked on making PPE available to them.

You might see his thesis as being that "effective" in EA is about adding a management layer for directing interventions, and that management layer has the problems that the Immoral Mazes sequence describes. According to Vassar, someone who wants to be altruistic shouldn't delegate to other people their judgment of what's effective and thus warrants support.

2[comment deleted]4y
6jefftk4y
Link? I'm not finding it
3ChristianKl4y
https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=zqcynfzfKma6QKMK9
[-]jefftk4y110

I think what you're pointing to is:

I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting)

I'm getting a bit pedantic, but I wouldn't gloss this as "CEA used legal threats to cover up Leverage related information". Partly because the original bit is vague, but also because "cover up" implies that the goal is to hide information.

For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.

[-]ChristianKl4y160

In that timeframe, CEA and Leverage were running the Pareto Fellowship together. If you read the common knowledge post, you find people saying they were misled by CEA because the announcement didn't mention that the Pareto Fellowship was largely run by Leverage.

On its mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved, saying only: "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers."

That does look to me like hiding information about the cooperation between Leverage and CEA. 

I do think that publicly presuming that people who hide information have something to hide is useful. If there's nothing to hide, I'd love to know what happened back then, or who thinks what happened should stay hidden. At the minimum, I think that CEA withholding the information that people who went to its programs spent their time in what now appears to be a cult is something that CEA should be open about on its mistakes page.

[-]habryka4y*410

Yep, I think CEA has in the past straightforwardly misrepresented things (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage's history with Effective Altruism. I think this was bad, and continues to be bad.

[-]Rob Bensinger4y210

My initial thought on reading this was 'this seems obviously bad', and I assumed this was done to shield CEA from reputational risk.

Thinking about it more, I could imagine an epistemic state I'd be much more sympathetic to: 'We suspect Leverage is a dangerous cult, but we don't have enough shareable evidence to make that case convincingly to others, or we aren't sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don't feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can't say anything we expect others to find convincing. So we'll have to just steer clear of the topic for now.'

Still seems better to just not address the subject if you don't want to give a fully accurate account of it. You don't have to give talks on the history of EA!

[-]habryka4y270

I think the epistemic state of CEA was some mixture of something pretty close to what you list here, and something that I would put closer to something more like "Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth".

[-]ChristianKl4y130

"Leverage maybe is bad, or maybe isn't, but in any case it looks bad, and I don't think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth"

That has the corollary: "We don't expect EAs to care enough about the truth/being transparent for this to be a huge reputational risk for us."

[-]jefftk4y200

It does look weird to me that CEA doesn't include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:

Hi CEA,

On https://www.centreforeffectivealtruism.org/our-mistakes I see "The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers. We realized during and after the program that senior management did not provide enough oversight of the program. For example, reports by some applicants indicate that the interview process was unprofessional and made them deeply uncomfortable."

Is there a reason that the mistakes page does not mention the involvement of Leverage in the Pareto Fellowship? [1]

Jeff

[1] https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=znudKxFhvQxgDMv7k

7jefftk4y
They wrote back, linking me to https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QcdhTjqGcSc99sNN ("we're working on a couple of updates to the mistakes page, including about this")
[-]habryka4y150

Yep, I think the situation is closer to what Jeff describes here, though I honestly don't actually know, since people tend to get cagey when the topic comes up.

9ChristianKl4y
I talked with Geoff, and according to him there's no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.
[-]habryka4y220

Huh, that's surprising, if by that he means "no contracts between anyone currently at Leverage and anyone at CEA". I currently still think that's the case, though I also don't see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees who used to work at CEA and current CEA employees?

4ChristianKl4y
What he said is compatible with ex-CEA people still being bound by the NDAs they signed while they were at CEA. I don't think anything happened that releases ex-CEA people from those NDAs. The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if it had an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn't unilaterally lift the settlement contract. Public pressure on CEA seems to be necessary to get the information out in the open.
[-]Benquo4y220

The people I know who weren't brought up to go to college have more experience navigating concrete threats and dangers, which can't be avoided through conformity, since the system isn't set up to take care of people like them. They have to know what's going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.

In general this means that they're much more comfortable with the kind of confrontation Vassar engages in, than high-class people are.

7Hazard4y
This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.
[-]ChristianKl4y220

Talking with Vassar feels very intellectually alive, maybe like a high density of insight porn. I imagine that the people Ben talks about wouldn't get much enjoyment out of insight porn either, so that emotional impact isn't there.

There's probably also an element that plenty of people who can normally follow an intellectual conversation can't keep up in a conversation with Vassar, and come away from it with a bunch of different ideas that lack order in their mind. I imagine that sometimes there's an idea overload that prevents people from critically thinking through some of the ideas.

If you have a person who hasn't gone to college, they are used to encountering people who make intellectual arguments that go over their head, and they have a way of dealing with that.

From meeting Vassar, I don't feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff). 

[-]Benquo4y*210

This seems mostly right; they're more likely to think "I don't understand a lot of these ideas, I'll have to think about this for a while" or "I don't understand a lot of these ideas, he must be pretty smart and that's kinda cool" than to feel invalidated by this and try to submit to him in lieu of understanding.

4NancyLebovitz4y
This is interesting to me because I was brought up to go to college, but I didn't take it seriously (plausibly from depression or somesuch), and I definitely think of him as a guy with an interesting perspective. Okay, a smart guy with an interesting perspective, but not a god. It had never occurred to me before that maybe people who were brought up to assume they were going to college might generally have a different take on the world than I do.
[-]Zack_M_Davis4y841

I talked and corresponded with Michael a lot during 2017–2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don't think you're being fair.

"jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself)

I'm confident this is only a Ziz-ism: I don't recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from him.

again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs [...] describing how it was a Vassar-related phenomenon

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea giv... (read more)

[-]Scott Alexander4y930

I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop, and I ended out emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their "it's correct to be freaking out about learning your entire society is corrupt and gaslighting" shtick.

[-]Scott Alexander4y*820

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)

[...]

Michael is a charismatic guy who has strong views and argues forcefully for them. That's not the same thing as having mysterious mind powers to "make people paranoid" or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I'm sure he'd be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.

I more or les... (read more)

[-]mathenjoyer4y390

Thing 0:

Scott.

Before I actually make my point I want to wax poetic about reading SlateStarCodex.

In some post whose name I can't remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, "Huh, I need to only be convinced by true things."

This is extremely relatable to my lived experience. I am a stereotypical "high-functioning autist." I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.

To the degree that "rationality styles" are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.

Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.

Thing 1:

Imagine two world models:

  1. Some people want to act as perfect nth-order cooperating utilitarians, but can't because of human limitations. They are extremely scrupulous, so they feel
... (read more)
[-]Unreal4y550

I enjoyed reading this. Thanks for writing it. 

One note though: I think this post (along with most of the comments) isn't treating Vassar as a fully real person with real choices. It (also) treats him like some kind of 'force in the world' or 'immovable object'. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I'm glad you yourself were able to "With basic rationality skills, avoid contracting the Vassar, then [heal] the damage to [your] social life." 

But I am worried about people treating him like a force of nature that you make contact with and then just have to deal with whatever the effects of that are. 

I think it's pretty immoral to de-stabilize people to the point of maybe-insanity, and I think he should try to avoid it, to whatever extent that's in his capacity, which I think is a lot. 

"Vassar's ideas are important and many are correct. It just happens to be that he might drive you insane."

I might think this was a worthwhile tradeoff if I actually believed the 'maybe insane' part was unavoidable, and I do not believ... (read more)

[-]Said Achmiz4y480

I think that treating Michael Vassar as an unchangeable force of nature is the right way to go—for the purposes of discussions precisely like this one. Why? Because even if Michael himself can (and chooses to) alter his behavior in some way (regardless of whether this is good or bad or indifferent), nevertheless there will be other Michael Vassars out there—and the question remains, of how one is to deal with arbitrary Michael Vassars one encounters in life.

In other words, what we’ve got here is a vulnerability (in the security sense of the word). One day you find that you’re being exploited by a clever hacker (we decline to specify whether he is a black hat or white hat or what). The one comes to you and recommends a patch. But you say—why should we treat this specific attack as some sort of unchangeable force of nature? Rather we should contact this hacker and persuade him to cease and desist. But the vulnerability is still there…

[-]ChristianKl4y221

I think you can either have a discussion that focuses on an individual, in which case it makes sense to model them with agency, or you can have more general threat models.

If you mix the two, however, you are likely to get confused in both directions: you will project ideas from your threat model onto the person, and you will take random aspects of the individual into your threat model that aren't typical of the threat.

[-]mathenjoyer4y150

I am not sure how much 'not destabilize people' is an option that is available to Vassar.

My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.

Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of "you are expected to behave better for status reasons look at my smug language"-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, "Vassar, just only say things that you think will have a positive effect on the person." 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.

In the pathological case of Vassar, I think the naive strategy of "just say the thing you think is true" is still correct.

Menta... (read more)

[-]Unreal4y261

I think that if Vassar tried not to destabilize people, it would heavily impede his general communication.

My suggestion for Vassar is not to 'try not to destabilize people' exactly. 

It's to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he's interacted with about what it's like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you're speaking to as though they're a full, real human—not a pair of ears to be talked into or a mind to insert things into). When he talks theory, I often get the sense he is talking "at" rather than talking "to" or "with". The listener practically disappears or is reduced to a question-generating machine that gets him to keep saying things. 

I expect this process could take a long time / run into issues along the way, and so I don't think it should be rushed. Not expecting a quick change. But claiming there's no available option seems wildly wrong to me. People aren't fixed points and generally shouldn't be treated as such. 

[-]mathenjoyer4y160

This is actually very fair. I think he does kind of insert information into people.

I never really felt like a question-generating machine, more like a pupil at the foot of a teacher who is trying to integrate the teacher's information.

I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.

Thanks!

7ChristianKl4y
I think I interacted with Vassar four times in person, so I might get some things wrong here, but I think that he's pretty dissociated from his body, which closes a normal channel of perceiving impacts on the person he's speaking with. This looks to me like some bodily process generating stress/pain and being a cause for dissociation. It might need a bodyworker to fix whatever goes on there to create the conditions for perceiving the other person better. Beyond that, Circling might be an environment in which one can learn to interact with others as humans who have their own feelings, but that would require opening up to the Circling frame.
5ChristianKl4y
You are making a false dichotomy here. You are assuming that everything that has a negative effect on a person is manipulation. As Vassar himself sees the situation, people believe a lot of lies for reasons of fitting in socially in society. From that perspective, getting people to stop believing those lies will make it harder for them to fit socially into society. If you got a Nazi guard at Auschwitz into a state where the moral issue of their job can't be dissociated anymore, that's very predictably going to have a negative effect on that guard. Vassar's position would be that it would be immoral to avoid talking about the truth of the nature of their job when talking with the guard, out of a motivation to make life easier for the guard.
4Benquo4y
I think this line of discussion would be well served by marking a natural boundary in the cluster "crazy." Instead of saying "Vassar can drive people crazy" I'd rather taboo "crazy" and say: Personally I care much more, maybe lexically more, about the upside of minds learning about their situation, than the downside of mimics going into maladaptive death spirals, though it would definitely be better all round if we can manage to cause fewer cases of the latter without compromising the former, much like it's desirable to avoid torturing animals, and it would be desirable for city lights not to interfere with sea turtles' reproductive cycle by resembling the moon too much.
6pjen4y
My problem with this comment is that it takes people who:
* can't verbally reason without talking things through (and are currently stuck in a passive role in a conversation)
and who:
* respond to a failure of their verbal reasoning
* under circumstances of importance (in this case moral importance)
* and conditions of stress, induced by
  * trying to concentrate while in a passive role
  * failing to concentrate under conditions of high moral importance
by simply doing as they are told - and it assumes they are incapable of reasoning under any circumstances. It also then denies people who are incapable of independent reasoning the right to be protected from harm.
5mathenjoyer4y
EDIT: Ben is correct to say we should taboo "crazy." This is a very uncharitable interpretation (entirely wrong). The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren't as positive utility as they thought. (entirely wrong) I also don't think people interpret Vassar's words as a strategy and implement incoherence. Personally, I interpreted Vassar's words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away. I think the other people just no longer have a coalitional strategy installed and don't know how to function without one. This is what happened to me and why I repeatedly lashed out at others when I perceived them as betraying me, since I no longer automatically perceived them as on my side. I rebuilt my rapport with those people and now have more honest relationships with them. (still endorsed) Beyond this, I think your model is accurate.
[-]Said Achmiz4y510

The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought.

“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.

And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.

If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of business is to do everything in one’s power to fix that (really quite severe and glaring) bug in one’s psyche—and only then to attempt any substantive projects in the service of world-saving, people-helping, or otherwise doing anything really consequential.

5mathenjoyer4y
Thank you for echoing common sense!
-1Benquo4y
What is psychological collapse? For those who can afford it, taking it easy for a while is a rational response to noticing deep confusion; continuing to take actions based on a discredited model would be less appealing, and people often become depressed when they keep confusedly trying to do things that they don't want to do. Are you trying to point to something else? What specific claims turned out to be false? What counterevidence did you encounter?
[-]mathenjoyer4y*230

Specific claim: the only nontrivial obstacle in front of us is not being evil

This is false. Object-level stuff is actually very hard.

Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)

This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person's language. Ideas and institutions have the agency; they wear people like skin.

Specific claim: this is how to take over New York.

Didn't work.

4Benquo4y
I think this needs to be broken up into 2 claims:
1. If we execute strategy X, we'll take over New York.
2. We can use straightforward persuasion (e.g. appeals to reason, profit motive) to get an adequate set of people to implement strategy X.
2 has been falsified decisively. The plan to recruit candidates via appealing to people's explicit incentives failed, there wasn't a good alternative, and as a result there wasn't a chance to test other parts of the plan (1). That's important info and worth learning from in a principled way. Definitely I won't try that sort of thing again in the same way, and it seems like I should increase my credence both that plans requiring people to respond to economic incentives by taking initiative to play against type will fail, and that I personally might be able to profit a lot by taking initiative to play against type, or by investing in people who seem like they're already doing this, as long as I don't have to count on other unknown people acting similarly in the future. But I find the tendency to respond to novel multi-step plans that would require someone to take initiative by sitting back, waiting for the plan to fail, and then saying, "see? novel multi-step plans don't work!" extremely annoying. I've been on both sides of that kind of transaction, but if we want anything to work out well we have to distinguish cases of "we / someone else decided not to try" as a different kind of failure from "we tried and it didn't work out."
3mathenjoyer4y
This is actually completely fair. So is the other comment.
0Benquo4y
This seems to be conflating the question of "is it possible to construct a difficult problem?" with the question of "what's the rate-limiting problem?". If you have a specific model for how to make things much better for many people by solving a hard technical problem before making substantial progress on human alignment, I'd very much like to hear the details. If I'm persuaded I'll be interested in figuring out how to help. So far this seems like evidence to the contrary, though, as it doesn't look like you thought you could get help making things better for many people by explaining the opportunity.
9Unreal4y
To the extent I'm worried about Vassar's character, I am as equally worried about the people around him. It's the people around him who should also take responsibility for his well-being and his moral behavior. That's what friends are for. I'm not putting this all on him. To be clear. 
[-]cousin_it4y240

I think it's a fine way of think about mathematical logic, but if you try to think this way about reality, you'll end up with views that make internal sense and are self-reinforcing but don't follow the grain of facts at all. When you hear such views from someone else, it's a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.

The people who actually know their stuff usually come off very different. Their statements are carefully delineated: "this thing about power was true in 10th century Byzantium, but not clear how much of it applies today".

Also, just to comment on this:

It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype.

I think it's somewhat changeable. Even for people like us, there are ways to make our processing more "fuzzy". Deliberately dimming some things, rounding others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; a... (read more)

5mathenjoyer4y
On the third paragraph: I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.) Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See "Safety in numbers" by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe environment. Am getting to that.) I have no problem enjoying warm fuzzies. I had problems with them after first talking with Vassar, but I re-equilibrated. Warm fuzzies are good, helpful, and worth purchasing. I am not a perfect utilitarian. However, it is important that when you buy fuzzies instead of utils, as Scott would put it, you know what you are buying. Many will sell fuzzies and market them as utils. I sometimes round things, it is not inherently bad. Dimming things is not good. I like being alive. From a functionalist perspective, the degree to which I am aroused (with respect to the senses and the mind) is the degree to which I am a real, sapient being. Dimming is sometimes terminally valuable as relaxation, and instrumentally valuable as sleep, but if you believe in Life, Freedom, Prosperity And Other Nice Transhumanist Things then dimming being bad in most contexts follows as a natural consequence. On the second paragraph: This is because people compartmentalize. After studying a thing for a long time, people will grasp deep nonverbal truths about that thing. Sometimes they are wrong; without the legibility of the elucidation, false ideas such gained are difficult to destroy. Sometimes they are right! Mathematical folklore is an example: it is literally metis among mathematicians. Highly knowledgeable and epistemically skilled people delineate. Sometimes the natural delineation is "this is tru
[-]FeepingCreature4y200

I mostly see where you're coming from, but I think the reasonable answer to "point 1 or 2 is a false dichotomy" is this classic, uh, tumblr quote (from memory):

"People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail."

This goes especially if the thing that comes after "just" is "just precommit."

My expectation regarding interaction with Vassar is that the people who espouse 1 or 2 believe that the people interacting are incapable of precommitting to the required strength. I don't know if they're correct, but I'd expect them to be, because I think people are just really bad at precommitting in general. If precommitting were easy, I think we'd all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.

Reply
[-]mathenjoyer4y100

This is a very good criticism! I think you are right about people not being able to "just."

My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or who criticize popular institutions. Perhaps people genuinely tried to form a strategy but automatically rejected my toy strategies as false. I do not think that is the case, based on "vibe" and on the arguments that people are making, such as "argument from cult."

I think you are actually completely correct about those strategies being bad. What I failed to point out is that I expect a certain level of mental robustness-to-nonsanity from people literally called "rationalists." This comes off as sarcastic, but I mean it completely literally.

Precommitting isn't easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as "five minutes of actually trying" and alkjash's "Hammertime." Humans have a small component of behavior that is agentic, and a huge component of behavior that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social... (read more)

Reply
2Hazard4y
I found many things you shared useful. I also expect that, because of your style/tone, you'll get downvoted :(
-46xtz05qw4y
[-]Viliam4y*220

Michael is very good at spotting people right on the verge of psychosis

...and then pushing them.

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate. [EDIT: Or not. Zack makes a fair point.] He is not even hiding it, if you listen carefully.

Reply
[-]Zack_M_Davis4y160

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate.

Because high-psychoticism people are the ones who are most likely to understand what he has to say.

This isn't nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn't like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky's writing (like spending $28,000 distributing copies of Harry Potter and the Methods to math contest winners): why, they're preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!

I mean, technically, yes. But in Yudkowsky and friends' worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they're going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?

Reply
[-]steven04614y270

There's a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don't have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.

If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you'd object to that targeting strategy even though they'd be able to make an argument structurally the same as your comment.

Reply
[-]jessicata4y211

Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it's even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky etc, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. people who think machine learning / deep learning are important.

In general this seems really expected and unobjectionable? "If I'm trying to convince people of X, I'm going to find people who already believe a lot of the prerequisites for understanding X and who might already assign X a non-negligible prior." This is how pretty much all systems of ideas spread; I have trouble thinking of a counterexample.

I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?

Reply
[-]steven04614y230

If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn't care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.

Reply
[-]PeteMichaud4y100

The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high-IQ systematizers is if I drain any normative valence from "psychotic," and imagine there is a spectrum from autistic to psychotic. On this spectrum, the extreme autistic is exclusively focused on exactly one thing at a time, and is incapable of cognition that has to take into account context, especially context they aren't already primed to have in mind; the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or another could be very helpful in different contexts.

See also: indexicality.

On the other hand, back in my reflective beliefs, I think psychosis is a much scarier failure mode than "autism," on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, a supporting infrastructure of some kind for processing the psychotic state without losing the plot (social or cultural would work, but whatever).

Reply
2jessicata4y
I wouldn't find it objectionable. I'm not really sure what morally relevant distinction is being pointed at here; apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.
[-]steven04614y120

Well, I don't think it's obviously objectionable, and I'd have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like "we'd all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we're talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren't generally either truth-tracking or good for them" seems plausible to me. But I think it's obviously not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.

Reply
3dxu4y
I don't have well-formed thoughts on this topic, but one factor that seems relevant to me has a core that might be verbalized as "susceptibility to invalid methods of persuasion", which seems notably higher in the case of people with high "apocalypticism" than people with the other attributes described in the grandparent. (A similar argument applies in the case of people with high "psychoticism".)
2jessicata4y
That might be relevant in some cases but seems unobjectionable in both the psychoticism case and the apocalypse case. I would predict that LW people cluster together in personality measurements like OCEAN and Eysenck; it's by default easier to write for people of a similar personality to yourself. Also, people notice high rates of Asperger's-like characteristics around here, which are correlated with Jewish ethnicity and transgenderism (both also frequent around here).
4Unreal4y
It might not be nefarious.  But it might also not be very wise.  I question Vassar's wisdom, if what you say is indeed true about his motives.  I question whether he's got the appropriate feedback loops in place to ensure he is not exacerbating harms. I question whether he's appropriately seeking that feedback rather than turning away from the kinds he finds overwhelming, distasteful, unpleasant, or doesn't know how to integrate.  I question how much work he's done on his own shadow and whether it's not inadvertently acting out in ways that are harmful. I question whether he has good friends he trusts who would let him know, bluntly, when he is out of line with integrity and ethics or if he has 'shadow stuff' that he's not seeing.  I don't think this needs to be hashed out in public, but I hope people are working closer to him on these things who have the wisdom and integrity to do the right thing. 
0ChristianKl4y
Rumor has it that https://www.sfgate.com/news/bayarea/article/Man-Gets-5-Years-For-Attacking-Woman-Outside-13796663.php is due to drugs that Vassar recommended. In the OP, that case gets blamed on CFAR's environment without any mention of that part. When talking about whether or not CFAR is responsible for that story, factors like that seem to me to matter quite a bit. I'd love it if anyone who's nearer could confirm/deny the rumor and fill in missing pieces.
[-]Andrew Rettek4y660

As I mentioned elsewhere, I was heavily involved in that incident for a couple months after it happened and I looked for causes that could help with the defense. AFAICT No drugs were taken in the days leading up to the mental health episode or arrest (or people who took drugs with him lied about it).

Reply
[-]AnnaSalamon4y560

I, too, asked people questions after that incident and failed to locate any evidence of drugs.

Reply
[-]jimrandomh4y240

As I heard this story, Eric was actively seeking mental health care on the day of the incident, and should have been committed before it happened, but several people (both inside and outside the community) screwed up. I don't think anyone is to blame for his having had a mental break in the first place.

Reply
[-]ChristianKl4y290

I now got some better-sourced information from a friend who's actually in good contact with Eric. Given that, I'm also quite certain that there were no drugs involved, and that it isn't a case of any one person being mainly responsible for it happening, but of multiple people making bad decisions. I'm currently hoping that Eric will tell his side himself, so that there's less indirection about the information sourcing; I'm not saying more about the details at this point in time.

Reply
[-]humantoo4y*1840

Edit: The following account is a component of a broader and more complex narrative. While it played a significant role, it must be noted that there were numerous additional challenges concurrently affecting my life. Absent these complicating factors, the issues delineated in this post alone may not have precipitated such severe consequences. There is also the inherent stress in concluding that humanity is heading toward an apocalypse that will kill everyone you care about, even when effective support networks are present. Additionally, I have made minor revisions to the second-to-last bullet point for clarity.

It is pertinent to provide some context to parts of my story that are relevant to the ongoing discussions.

  • My psychotic episode was triggered by a confluence of factors, including acute physical and mental stress, as well as exposure to a range of potent memes.
  • During my psychotic break, I believed that someone associated with Vassar had administered LSD to me. Although I no longer hold this belief, I cannot entirely dismiss it. Nonetheless, given my deteriorated physical and mental health at the time, the vividness of my experiences could be attributed to a placebo effect
... (read more)
Reply
[-]Ruby4y370

Thank you for sharing such personal details for the sake of the conversation.

Reply
[-]jessicata4y191

Thanks for sharing the details of your experience. Fyi I had a trip earlier in 2017 where I had the thought "Michael Vassar is God" and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.

If I'm trying to put my finger on a real effect here, it's related to how Michael Vassar was one of the initial people who set up the social scene (e.g. running singularity summits and being executive director of SIAI), being on the more "social/business development/management" end relative to someone like Eliezer; so if you live in the scene, which can be seen as a simulacrum, the people most involved in setting up the scene/simulacrum have the most aptitude at affecting memes related to it, like a world-simulator programmer has more aptitude at affecting the simulation than people within the simulation (though to a much lesser degree of course).

As a related example, Von Neumann was involved in setting up post-WWII US Modernism, and is also attributed extreme mental powers by modernism (e.g. extreme creativity in inventing a wide variety of fields); in creating the social system, he also has more memetic influence within that system, and could more effectively change its boundaries e.g. in creating new fields of study.

Reply1
2ChristianKl4y
2017 would be the year Eric's episode happened as well. Did this result in multiple conversations about "Michael Vassar is God" that Eric might then have picked up when he hung around the group?
3jessicata4y
I don't know, some of the people were in common between these discussions so maybe, but my guess would be that it wasn't causal, only correlational. Multiple people at the time were considering Michael Vassar to be especially insightful and worth learning from.
3ChristianKl4y
I haven't used the word god myself, nor have I heard it used by other people to refer to someone who's insightful and worth learning from. Traditionally, people learn from prophets, not from gods.
[-]Avi4y100

Can someone please clarify what is meant in this context by 'Vassar's group', or the term 'Vassarites' used by others?

My intuition previously was that Michael Vassar had no formal 'group' or institution of any kind, and that it was just more like 'a cluster of friends who hung out together a lot', but this comment makes it seem like something more official.

Reply
[-]David Hornbein4y460

While "Vassar's group" is informal, it's more than just a cluster of friends; it's a social scene with lots of shared concepts, terminology, and outlook (although of course not every member holds every view and members sometimes disagree about the concepts, etc etc). In this way, the structure is similar to social scenes like "the AI safety community" or "wokeness" or "the startup scene" that coordinate in part on the basis of shared ideology even in the absence of institutional coordination, albeit much smaller. There is no formal institution governing the scene, and as far as I've ever heard Vassar himself has no particular authority within it beyond individual persuasion and his reputation.

Median Group is the closest thing to a "Vassarite" institution, in that its listed members are 2/3 people who I've heard/read describing the strong influence Vassar has had on their thinking and 1/3 people I don't know, but AFAIK Median Group is just a project put together by a bunch of friends with similar outlook and doesn't claim to speak for the whole scene or anything.

Reply
[-]Benquo4y280

As a member of that cluster I endorse this description.

Reply
5Benquo4y
Michael and I are sometimes-housemates and I've never seen or heard of any formal "Vassarite" group or institution, though he's an important connector in the local social graph, such that I met several good friends through him.
3Eli Tyre4y
Thank you very much for sharing. I wasn't aware of any of these details.
-22Benquo4y
6Scott Alexander4y
If this information isn't too private, can you send it to me? scott@slatestarcodex.com
[-]jessicata4y530

I feel pretty defensive reading and responding to this comment, given a previous conversation with Scott Alexander where he said his professional opinion would be that people who have had a psychotic break should be on antipsychotics for the rest of their life (to minimize risks of future psychotic breaks). This has known severe side effects like cognitive impairment and brain shrinkage and lacks evidence of causing long-term improvement. When I was on antipsychotics, my mental functioning was much lower (noted by my friends) and I gained weight rapidly. (I don't think short-term use of antipsychotics was bad, in my case)

It is in this context that I'm reading that someone talking about the possibility of mental subprocess implantation ("demons") should be "treated as a psychological emergency", when the Eric Bryulant case had already happened, and talking about the psychological processes was necessary for making sense of the situation. I feared involuntary institutionalization at the time, quite a lot, for reasons like this.

If someone expresses opinions like this, and I have reason to believe they would act on them, then I can't believe myself to have freedom of speech. That ... (read more)

Reply
[-]Scott Alexander4y1040

I don't remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. EG this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerous psychotic again, then lifetime antipsychotics are probably their best bet. In a case like that, I would want the patient's buy-in, ie if they were medicated after a psychotic episode I would advise them of the reasons why continued antipsychotic use was recommended in their case, if they said they didn't want it we would explore why given the very high risk level, and if they still said they didn't want it then I would follow their direction.

I didn't get a chance to talk to you during your episode, so I don't know exactly what was going on. I do think that psychosis should be thought of differently than just "weird thoughts that might be true", as more of a whole-body n... (read more)

Reply
[-]jessicata4y650

I don’t remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

Ok, the opinions you've described here seem much more reasonable than what I remember, thanks for clarifying.

I do think that psychosis should be thought of differently than just “weird thoughts that might be true”, since it’s a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom.

I agree, yes. I think what I was afraid of at the time was being called crazy and possibly institutionalized for thinking somewhat weird thoughts that people would refuse to engage with, and showing some signs of anxiety/distress that were in some ways a reaction to my actual situation. By the time I was losing sleep etc, things were quite different at a physiological level and it made sense to treat the situation as a psychiatric emergency.

If you can show someone that they're making errors that correspond to symptoms of mild psychosis, then telling them that and suggesting corresponding therapies to help with the underlying problem seems pretty reasonable.

Reply
[-]Scott Alexander4y590

Thanks, if you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.

Reply
[-]hg004y110

I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom.

If psychosis is caused by an underlying physiological/biochemical process, wouldn't that suggest that e.g. exposure to Leverage Research wouldn't be a cause of it?

If being part of Leverage is causing less reality-based thoughts and nudging someone into mild psychosis, I would expect that being part of some other group could cause more reality-based thoughts and nudge someone away from mild psychosis. Why would causation be possible in one direction but not the other?

I guess another hypothesis here is that some cases are caused by social/environmental factors and others are caused by biochemical factors. If that's true, I'd expect changing someone's environment to be more helpful for the former sort of case.

Reply
[-]TekhneMakre4y110

[probably old-hat [ETA: or false], but I'm still curious what you think] My (background unexamined) model of psychosis-> schizophrenia is that something, call it the "triggers", sets a person on a trajectory of less coherence / grounding; if the trajectory isn't corrected, they just go further and further. The "triggers" might be multifarious; there might be "organic" psychosis and "psychic" psychosis, where the former is like what happens from lead poisoning, and the latter is, maybe, what happens when you begin to become aware of some horrible facts. If your brain can rearrange itself quickly enough to cope with the newly known reality, your trajectory points back to the ground. If it can't, you might have a chain reaction where (1) horrible facts you were previously carefully ignoring, are revealed because you no longer have the superstructure that was ignore-coping with them; (2) your ungroundedness opens the way to unepistemic beliefs, some of which might be additionally horrifying if true; (3) you're generally stressed out because things are going wronger and wronger, which reinforces everything.

If this is true, then your statement:

. I think if someone has mild psychosis a
... (read more)
Reply
9Rafael Harth4y
There is this basic idea (I think from an old blogpost that Eliezer wrote) that if someone says there are goblins in the closet, dismissing them outright is confusing rationality with trust in commonly held claims, whereas the truly rational thing is to just open the closet and look. I think this is correct in principle but not applicable in many real-world cases. The real reason why even rational people routinely dismiss many weird explanations for things isn't that they have sufficient evidence against them, it's that the weird explanation is inconsistent with a large set of high confidence beliefs that they currently hold. If someone tells me that they can talk to their deceased parents, I'm probably not going to invest the time to test whether they can obtain novel information this way; I'm just going to assume they're delusional because I'm confident spirits don't exist. That said, if that someone helped write the logical induction paper, I personally would probably hear them out regardless of how weird the thing sounds. Nonetheless, I think it remains true that dismissing beliefs without considering the evidence is often necessary in practice.
[-]TekhneMakre4y190
If someone tells me that they can talk to their deceased parents, I'm probably not going to invest the time to test whether they can obtain novel information this way; I'm just going to assume they're delusional because I'm confident spirits don't exist.

This is failing to track ambiguity in what's being referred to. If there's something confusing happening--something that seems important or interesting, but that you don't yet have the words to articulate well--then you try to say what you can (e.g. by talking about "demons"). In your scenario, you don't know exactly what you're dismissing. You can confidently dismiss, in the absence of extraordinary evidence, that (1) their parents' brains have been rotting in the ground, and (2) they are talking with their parents, in the same way you talk to a present friend; you can't confidently dismiss, for example, that they are, from their conscious perspective, gaining information by conversing with an entity that's naturally thought of as their parents (which we might later describe as: they have separate structure in them, not integrated with their "self", that encoded thought patterns from their parents, blah blah blah etc.). You can say "oh well yes of course if it's *just a metaphor* maybe I don't want to dismiss them", but the point is that from a partially pre-theoretic confusion, it's not clear what's a metaphor, and it requires further work to disambiguate what's a metaphor.

Reply
4CronoDAS4y
As the joke goes, there's nothing crazy about talking to dead people. When dead people respond, then you start worrying.
[-]nshepperd4y490

I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.

Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.

Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve.

This is really, really serious. If this happened to someone closer to me I'd be out for blood, and probably legal prosecution.

Let's not minimize how fucked up this is.

Reply
[-]jessicata4y260

Olivia, Devi and I all talked to people other than Michael Vassar, such as Anna Salamon. We gravitated towards the Berkeley community, which was started around Eliezer's writing. None of us are calling for blame, ostracism, or cancelling of Michael. Michael helped all of us in ways no one else did. None of us have a motive to pursue a legal case against him. Ziz's sentence you quoted doesn't implicate Michael in any crimes.

The sentence is also misleading given Devi didn't detransition afaik.

Reply
[-]Viliam4y*1080

Jessicata, I will be blunt here. This article you wrote was [EDIT: expletive deleted] misleading. Perhaps you didn't do it on purpose; perhaps this is what you actually believe. But from my perspective, you are an unreliable narrator.

Your story, original version:

  • I worked for MIRI/CFAR
  • I had a psychotic breakdown, and I believed I was super evil
  • the same thing also happened to a few other people
  • conclusion: MIRI/CFAR is responsible for all this

Your story, updated version:

  • I worked for MIRI/CFAR
  • then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil
  • I actually used the drugs
  • I had a psychotic breakdown, and I believed I was super evil
  • the same thing also happened to a few other people
  • conclusion: I still blame MIRI/CFAR, and I am trying to downplay Vassar's role in this

If you can't see how these two stories differ, then... I don't have sufficiently polite words to describe it, so let's just say that to me these two stories seem very different.

Lest you accuse me of gaslighting, let me remind you that I am not doubting any of the factual statements you made. (I actually tried to... (read more)

Reply
[-]Eliezer Yudkowsky4y*1040

I could be very wrong, but the story I currently have about this myself is that Vassar himself was a different and saner person before he used too much psychedelics. :( :( :(

Reply
[-]orthonormal4y140

Non-agenda'd question: about when did you notice changes in him?

Reply
[-]Eliezer Yudkowsky4y100

My autobiographical episodic memory is nowhere near good enough to answer this question, alas.

Reply
[-]ChristianKl4y180

Do you have a timeline of when you think that shift happened? That might make it easier for other people who knew Vassar at the time to say whether their observation matched yours.

Reply
[-]Viliam4y110

That... must have hurt a lot.

(I hope your story is right.)

Reply
7jimrandomh4y
I saw him make some questionable drug use decisions at Burning Man in 2011 and 2012, including larger-than-normal doses, and I don't think I saw all of it.
2Tenoke4y
A lot of people take a lot of drugs at big events like Burning Man with little issue. In my observation, it's typically the overly frequent and/or targeted psychedelic use that causes such big changes, at least in those who start off fairly stable.
[-]TekhneMakre4y310
you publicly describe your suffering as a way to show people that MIRI/CFAR is evil.

Could you expand more on this? E.g. what are a couple sentences in the post that seem most trying to show this.

Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for.

I appreciate the thrust of your comment, including this sentence, but also this sentence seems uncharitable, like it's collapsing down stuff that shouldn't be collapsed. For example, it could be that the MIRI/CFAR/etc. social field could set up (maybe by accident, or even due to no fault of any of the "central" people) the conditions where "psychosis" is the best of the bad available options; in which case it makes sense to attribute causal fault to the social field, not to a person who e.g. makes that clear to you, and therefore more proximally causes your breakdown. (Of course there's disagreement about whether that's the state of the world, but it's not necessarily incoherent.)

I do get the sense that jessicata is relating in a funny way to Michael Vassar, e.g. by warping the narrative around him while selectively posing as "just trying to state facts" in relation to other narrative fields; but this is hard to tell, since it's also what it might look like if Michael Vassar was systematically scapegoated, and jessicata is reporting more direct/accurate (hence less bad-seeming) observations.

[-]Unreal4y280

Where did jessicata corroborate this sentence "then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil" ? 

[-]countingtoten4y240

I should note that, as an outsider, the main point I recall Eliezer making in that vein is that he used Michael Vassar as a model for the character who was called Professor Quirrell. As an outsider, I didn't see that as an unqualified endorsement - though I think your general message should be signal-boosted.

1ChristianKl4y
The claim that Michael Vassar is substantially like Quirrell seems strange to me. Where did you get the claim that Eliezer modelled Quirrell after Vassar? To ground the claim a bit more in public data, take Vassar's TEDx talk. I think it gives a good impression of how Vassar thinks. There are official statistics that claim a high life expectancy for Jordan, so I think there's a good chance that Vassar actually believes what he says there. If you look deeper, however, Jordan's life expectancy is not as high as Vassar asserts. Given that the video is in the public record, that's an error that anybody who tries to check what Vassar is saying can find. I don't think it's in Vassar's interest to give a public talk like that with claims that are easily found to be wrong by fact-checking. Quirrell wouldn't have made an error like this; he is a lot more controlled. Eliezer made Vassar president of the precursor of MIRI. That's a strong signal of trust and endorsement.
[-]countingtoten4y292

https://yudkowsky.tumblr.com/writing/empathyrespect

[-]Davis_Kingsley4y250

Eliezer has openly said Quirrell's cynicism is modeled after a mix of Michael Vassar and Robin Hanson.

[-]jessicata4y220

But from my perspective, you are an unreliable narrator.

I appreciate you're telling me this given that you believe it. I definitely am in some ways, and try to improve over time.

then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil

I said in the text that (a) there were conversations about corruption in EA institutions, including about the content of Ben Hoffman's posts, (b) I was collaborating with Michael Vassar at the time, (c) Michael Vassar was commenting about social epistemology. I admit that connecting points (a) and (c) would have made the connection clearer, but it wouldn't have changed the text much.

In cases where someone was previously part of a "cult" and later says it was a "cult" and abusive in some important ways, there has to be a stage where they're thinking about how bad the social context was, and practically always, that involves conversations with other people who are encouraging them to look at the ways their social context is bad. So my having conversations where people try to convince me CFAR/MIRI are evil is expected given what el... (read more)

-3nshepperd4y
What I'm saying is that the Berkeley community should be. Supplying illicit drugs is a crime (but perhaps the drugs were BYO?). IDK if doing so and negligently causing permanent psychological injury is a worse crime, but it should be.
[-]jessicata4y270

I'm not going to comment on drug usage in detail for legal reasons, except to note that there are psychedelics legal in some places, such as marijuana in CA.

It doesn't make sense to attribute unique causal responsibility for psychotic breaks to anyone, except maybe to the person it's happening to. There are lots of people all of us were talking to in that time period who influenced us, and multiple people were advocating psychedelic use. Not all cases happened to people who were talking significantly with Michael around the time. As I mentioned in the OP, as I was becoming more psychotic, people tried things they thought might help, which generally didn't, and they could have done better things instead. Even causal responsibility doesn't imply blame, e.g. Eliezer had some causal responsibility due to writing things that attracted people to the Berkeley scene where there were higher-variance psychological outcomes. Michael was often talking with people who were already "not ok" in important ways, which probably affects the statistics.

[-]devi4y130

Please see my comment on the grandparent.

I agree with Jessica's general characterization that this is better understood as multi-causal rather than the direct cause of actions by one person.

[-]jimrandomh4y420

Relevant bit of social data: Olivia is the most irresponsible-with-drugs person I've ever met, by a sizeable margin; and I know of one specific instance (not a person named in your comment or any other comments on this post) where Olivia gave someone an ill-advised drug combination and they had a bad time (though not a psychotic break).

4Viliam4y
I don't remember specific names, but something similar happened at one of the first rationality minicamps. Technically, this was not about drugs but about supplements (i.e. completely legal things), but there was someone mixing various kinds of powders and saying "yeah, trust me, I have a lot of experience with this, I did a lot of research, it is perfectly safe to take a dose this high, really", and then an ambulance had to be called. So, I assume you meant that Olivia goes even further than this, right?
[-]jimrandomh4y150

My memory of the RBC incident you're referring to was that it wasn't supplements that did it, it was a caffeine overdose from energy drinks leading into a panic attack. But there were certainly a lot of supplements around and they could've played a role I didn't know about.

When I say that I believe Olivia is irresponsible with drugs, I'm not excluding the unscheduled supplements, but the story I referred to involved the scheduled kind.

[-]Scott Alexander4y420

I've posted an edit/update above after talking to Vassar.

[-]gwern4y340

A question for the 'Vassarites', if they will: were you doing anything like the "unihemispheric sleep" exercise (self-inducing hallucinations/dissociative personalities by sleep deprivation) the Zizians are described as doing?

[-]jessicata4y440

No. All sleep deprivation was unintentional (anxiety-induced in my case).

[-]Desrtopa4y330

So, it's been a long time since I actually commented on Less Wrong, but since the conversation is here...

Hearing about this is weird for me, because I feel like, compared to the opinions I heard about him from other people in the community, I kind of... always had uncomfortable feelings about Mike Vassar? And I say this without having had direct personal contact with him except, IIRC, maybe one meetup I attended where he was there and we didn't talk directly, although we did occasionally participate in some of the same conversations online.

 

By all accounts, it sounds like he's always been quite charismatic in person, and this isn't the first time I've heard someone describe him as a "wizard." But empirically, there are some very charismatic people who propagate some really bad ideas and whose impacts on the lives of people around them, or on society at large, can be quite negative. As of the last time I was paying attention to him, I wouldn't have expected Mike Vassar to have that negative an effect on the lives of the people around him, but I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought ... (read more)

[-]Vanessa Kosoy4y670

I met Vassar once. He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd. Something about his delivery was so captivating that it took me a while to "shake off the fairy dust" and realize just how silly some of his claims were, even when it should have been obvious from the start. Moreover, his worldview seemed heavily based on a paranoid / conspiracy-theory type of thinking. So, yes, I'm not too surprised by Scott's revelations about him.

[-]Wei Dai4y350

He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd.

Yeah, it definitely didn't work on me. I believe I wrote this thread shortly after my one-and-only interaction with him, in which he said a lot of things that made me very skeptical but that I couldn't easily refute or have much time to think about before he would move on to some other topic. (Interestingly, he actually replied in that thread even though I didn't mention him by name.)

It saddens me to learn that his style of conversation/persuasion "works" on many people who otherwise seem very smart and capable (and even self-selected for caring about being rational). It seems like pretty bad news as far as what kind of epistemic situation humanity is in (e.g., how easily we will be manipulated by even slightly-smarter-than-human AIs / human-AI systems).

7Wei Dai4y
Oh, this is because the OP that I was replying to did mention him by name:
-10[comment deleted]4y
[-]Viliam4y130

I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought to be taken.

Heh, the same feeling here. I didn't have much opportunity to interact with him in person. I remember repeatedly hearing praise about how incredibly smart he is (from people whom I admired), then trying to find something smart written by him, and feeling unimpressed and confused, like maybe I wasn't reading the right texts or I failed to discover the hidden meaning that people smarter than me have noticed.

Hypothesis 1: I am simply not smart enough to recognize his greatness. I can recognize people one level above me, and they can recognize people one level above them, but when I try to understand someone two levels above me, it's all gibberish to me.

Hypothesis 2: He is more persuasive in person than in writing. (But once he impressed you in person, you will now see greatness in his writing, too. Maybe because of the halo effect. Maybe because now you understand the hidden layers of what he actually meant by that.) Maybe he is more persuasive in person because he can make his message optimized for the receiver; which might be a good thing... (read more)

[-]AnnaSalamon4y560

Not a direct response to you, but if anyone who hasn't talked to Vassar is wanting an example of Vassar-conversation that may be easier to understand or get some sense from than most examples would (though it'll have a fair bit in it that'll probably still seem false/confusing), you might try Spencer Greenberg's podcast with Vassar.

[-]Eli Tyre4y220

As a datapoint: I listened to that podcast 4 times, and took notes 3 of those 4 times, to try and clearly parse what he's saying. I certainly did not fully succeed. 

My notes.

It seems like he said some straightforwardly contradictory things? For instance, that strong conflict theorists trust their own senses and feelings more, but also trust them less?

I would really like to understand what he's getting at by the way, so if it is clearer for you than it is for me, I'd actively appreciate clarification.

6Unreal4y
i tried reading / skimming some of that summary it made me want to scream  what a horrible way to view the world / people / institutions / justice  i should maybe try listening to the podcast to see if i have a similar reaction to that 
[-]JenniferRM4y251

Seeing as how you posted this 9 days ago, I hope you did not bite off more than you could chew, and I hope you do not want to scream anymore.

In Harry Potter the standard practice seems to be to "eat chocolate" and perhaps "play with puppies" after exposure to ideas that are both (1) possibly true, and (2) very saddening to think about.

Then there is Gendlin's Litany (and please note that I am linking to a critique, not to unadulterated "yay for the litany" ideas) which I believe is part of Lesswrong's canon somewhat on purpose. In the critique there are second and third thoughts along these lines, which I admire for their clarity, and also for their hopefulness.

Ideally [a better version of the Litany] would communicate: “Lying to yourself will eventually screw you up worse than getting hurt by a truth,” instead of “learning new truths has no negative consequences.”

This distinction is particularly important when the truth at hand is “the world is a fundamentally unfair place that will kill you without a second thought if you mess up, and possibly even if you don’t.”

EDIT TO CLARIFY: The person who goes about their life ignoring the universe’s Absolute Neutrality is very fundamentally

... (read more)
[-]Avi4y110

There's also these 2 podcasts which cover quite a variety of topics, for anyone who's interested:
You've Got Mel - With Michael Vassar
Jim Rutt Show - Michael Vassar on Passive-Aggressive Revolution

4Avi4y
I haven't seen/heard anything particularly impressive from him either, but perhaps his 'best work' just isn't written down anywhere?
7CronoDAS4y
My impression as an outsider (I met him once and heard and read some things people were saying about him) was that he seemed smart but also seemed like kind of a kook...
[-]ChristianKl4y321

I banned him from SSC meetups for a combination of reasons including these

If you make bans like these, it would be worthwhile to communicate them to the people organizing SSC meetups. Especially when bans are made for the safety of meetup participants, not communicating them seems very strange to me.

After he left the Bay Area, Vassar lived for a while in Berlin, and for decisions about whether or not to make an effort to integrate someone like him (and invite him to LW and SSC meetups), that kind of information is valuable. Bay Area people not sharing it, while claiming to do something that would work in practice like a ban, feels misleading.

For reasons I don't fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything's kind of been frozen in place since then.

I think Vassar left the Bay Area more than a year before COVID happened. As far as I remember, his stated reasoning was something along the lines of everyone in the Bay Area getting mindkilled by leftish ideology.

[-]Scott Alexander4y300

It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn't publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.

[-]ChristianKl4y210

If there are bans that are supposed to be enforced, mentioning that in the mails that go out to organizers for an ACX Everywhere event would make sense. I'm not 100% sure that I got all the mails, because Ruben forwarded mails for me (I normally organize LW meetups in Berlin and support Ruben with the SSC/ACX meetups), but in those there was no mention of the word ban.

I don't think it needs to be public, but having such information in a mail like the one from Aug 23 would likely be necessary for a good portion of the meetup organizers to know that there's an expectation that certain people aren't welcome.

[-]ChristianKl4y180

https://www.lesswrong.com/posts/iWWjq5BioRkjxxNKq/michael-vassar-at-the-slatestarcodex-online-meetup seems to have happened after that point in time. Vassar not only attended a Slate Star Codex online meetup but was central to it, presenting his thoughts.

[-]JoshuaFox4y*340

I organized that, so let me say that:

  • That online meetup, and the invitation to Vassar, was not officially affiliated with or endorsed by SSC. Any responsibility for inviting him is mine.
  • I have  conversed with him a few times, as follows:
  • I met him in Israel around 2010. He was quite interesting, though he did try to get me to withdraw my retirement savings to invest with him. He was somewhat persuasive. During our time in conversation, he made some offensive statements, but I am perhaps less touchy about such things than the younger generation.
  • In 2012, he explained  Acausal Trade to me, and that was the seed of  . That discussion was quite sensible and I thank him for that.
  • A few years later, I invited him to speak at LessWrong Israel.  At that time I thought him a mad genius -- truly both.  His talk was verging on incoherence, with flashes of apparent insight.
  • Before the online meetup, 2021, he insisted on a preliminary talk; he made statements that produced twinges of persuasiveness. (Introspecting that is kind of interesting, actually.) I stayed with it for 2 or more hours before begging off, because it was fascinating in a way. I was able to analyze
... (read more)
[-]ChristianKl4y160

It seems that despite organizing multiple SSC events you had no knowledge that Vassar was banned from SSC events. Nor did anyone reading the event announcement know, to the extent that they would tell you Vassar was banned before the event happened.

To me that suggests there's a problem of not sharing information about who's banned with those organizing meetups in an effective way, so that a ban doesn't have the consequences one would expect it to have.

0Viliam4y
It might be useful to have a global blacklist somewhere, though there are possible legal consequences if someone decides to sue you for libel. (Perhaps the list should only contain the names, not the reasons?) EDIT: Nevermind. There are more things I would like to say about this, but this is not the right place. Later I may write a separate article explaining the threat model I had in mind.
5ChristianKl4y
Legal threats matter a great deal for what can be done in a situation like this. When it comes to a "global blacklist" there's the question of governance: who decides who's on it and who isn't. When it comes to SSC or ACX meetups the governance question is clear: anybody who's organizing a meetup under those labels should follow Scott's guidance. That however only works if that information is communicated to meetup organizers.
[-]jessicata4y170

I have replied to this comment in a top-level post.

8lc2y
Ziz's perspective here gives you a pretty detailed example of how this social trick works (i.e. spontaneously pretend something someone else did was objectionable, to make the other person walk on eggshells or chase you).
1FinalFormal21mo
What's even the point of that? Did Vassar do a lot of that type of thing?
6Dr_Manhattan4y
Since comments get occluded you should refer to an edit/update somewhere at the top if you want it to be seen by those who already read your original comment.
1Yoav Ravid4y
Is this the highest rated comment on the site?
[-]mingyuan4y3610

Okay, meta: This post has over 500 comments now and it's really hard to keep a handle on all of the threads. So I spent the last 2 hours trying to outline the main topics that keep coming up. Most top-level comments are linked to but some didn't really fit into any category, so a couple are missing; also apologies that the structure is imperfect.

Topic headers are bolded and are organized very roughly in order of how important they seem (both to me personally and in terms of the amount of air time they've gotten). 

  • Discussion of MIRI/CFAR vs Leverage comparison 
    • Extent to which this post pulls attention away from (and cheapens) the important discussion that was being had about Leverage
      • Aella’s thread
    • Discussion of extent to which the comparison is misleading, and concrete places where the comparison breaks down
      • Eli’s thread
      • Viliam’s subsubthread
      • Habryka’s subthread
      • Vanessa’s thread
      • Different drivers of mental health problems (rationalistthrowaway’s thread)
      • Different norms wrt criticism (Viliam’s subthread)
      • We don’t actually know how bad Leverage was (Freyja’s subthread)
    • Accounts from other MIRI and CFAR employees (current or former)
      • Addressing factual statements
        • orthonormal’s thread
        • Anna
... (read more)
[-]Ruby4y390

This is hugely helpful, a great community service! Thanks so much, mingyuan.

[-]Aella4y2310

I find something in me really revolts at this post, so epistemic status… not-fully-thought-through-emotions-are-in-charge?

Full disclosure: I am good friends with Zoe; I lived with her for the four months leading up to her post, and was present to witness a lot of her processing and pain. I’m also currently dating someone named in this post, but my reaction to this was formed before talking with him.

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away. If the points in the post felt more compelling, then I’d probably be more down for an argument of “we should bin these together and look at this as a whole”, but as it stands the stuff listed in here feels like it’s describing something significantly less damaging, and of a different kind of damage. I’m also annoyed that this post relies so heavily on Zoe’s, and the comparison feels like it cheapens what Zoe went through. I keep having a recurring thought that the author must have utterly failed to understand the intensity of the very direct impact from Leverage’s operations on Zoe. Mo... (read more)

[-]mingyuan4y1770

I want to note that this post (top-level) now has more than 3x the number of comments that Zoe's does (or nearly 50% more comments than the Zoe+BayAreaHuman posts combined, if you think that's a more fair comparison), and that no one has commented on Zoe's post in 24 hours. [ETA: This changed while I was writing this comment. The point about lowered activity still stands.]

This seems really bad to me — I think that there was a lot more that needed to be figured out wrt Leverage, and this post has successfully sucked all the attention away from a conversation that I perceive to be much more important. 

I keep deleting sentences because I don't think it's productive to discuss how upset this makes me, but I am 100% with Aella here. I was wary of this post to begin with and I feel something akin to anger at what it did to the Leverage conversation.

I had some contact with Leverage 1.0 — had some friends there, interviewed for an ops job there, and was charted a few times by a few different people. I have also worked for both CFAR and MIRI, though never as a core staff member at either organization; and more importantly, I was close friends with maybe 50% of the people who worked at ... (read more)

[-]ChristianKl4y250

It seems like it's relatively easy for people to share information in the CFAR+MIRI conversation. On the other hand, for those people who actually have the most central information to share in the Leverage conversation, it's not as easy to share it.

In many cases I would expect that private in-person conversations are needed to move the Leverage debate forward, and that just takes time. Those people at Leverage who want to write up their own experiences likely benefit from time to do that.

Practically, helping Anna get an overview of the timeline of members and funders, and getting people to share stories with Aella, seems to be the way forward, and that's largely not about leaving LW comments.

[-]Avi4y250

I agree with the intent of your comment mingyuan, but perhaps the reason for the asymmetry in activity on this post is simply due to the fact that there are an order of magnitude (or several orders of magnitude?) more people with some/any experience and interaction with CFAR/MIRI (especially CFAR) compared to Leverage?

[-]AnnaSalamon4y190

I think some of it has got to be that it's somehow easier to talk about CFAR/MIRI, rather than a sheer number of people thing. I think Leverage is somehow unusually hard to talk about, such that maybe we should figure out how to be extraordinarily kind/compassionate/gentle to anyone attempting it, or something.

[-]Spiracular4y*330

I agree that Leverage has been unusually hard to talk about bluntly or honestly, and I think this has been true for most of its existence.

I also think the people at the periphery of Leverage are starting to absorb the fact that they systematically had things hidden from them. That may be giving them new pause before engaging with Leverage as a topic.

(I think that seems potentially fair, and considerate. To me, it doesn't feel like the same concern applies in engaging about CFAR. I also agree that there were probably fewer total people exposed to Leverage, at all.)


...actually, let me give you a personal taste of what we're dealing with?

The last time I chose to talk straightforwardly and honestly about Leverage with somebody outside of it? I had to hard-override an explicit but non-legal privacy agreement* to get a sanity check. When I was honest about having done so shortly thereafter, I completely and permanently lost one of my friendships as a result.

Lost-friend says they were traumatized as a result of me doing this. That having "made the mistake of trusting me" hurt their relationships with other Leveragers. That at the time, they wished they'd lied to me, which stung.

I t... (read more)

[-]Spiracular4y190

I'm finally out about my story here! But I think I want to explain a bit of why I wasn't being very clear, for a while.

I've been "hinting darkly" in public rather than "telling my full story" due to a couple of concerns:

  1. I don't want to "throw ex-friend under the bus," to use their own words! Even friend's Leverager partner (who they weren't allowed to visit, if they were "infected with objects") seemed more "swept-up in the stupidity" than "malicious." I don't know how to tell my truth, without them feeling drowned out. I do still care about that. Eurgh.

  2. Via models that come out of my experience with Brent: I think this level of silence, makes the most sense if some ex-Leveragers did get a substantial amount of good out of the experience (sometimes with none of the bad, sometimes alongside it), and/or if there's a lot of regrettable actions taken by people who were swept up in this at the time, by people who would ordinarily be harmless under normal circumstances. I recognize that bodywork was very helpful to my friend, in working through some of their (unrelated) trauma. I am more than a little reluctant to put people through the sort of mob-driven invalidation I felt, in the

... (read more)
7Unreal4y
Any thoughts on why this was coming about in the culture?  If anyone feels that way (like the lost friend) and wants to talk to me about it, I'd be interested in learning more about it. 
5Spiracular4y
* I could tell that this had some concerning toxic elements, and I needed an outside sanity-check. I think under the circumstances, this was the correct call for me. I do not regret picking the particular person I chose as a sanity-check. I am also very sympathetic to other people not feeling able to pull this, given the enormous cost to doing it at the time. This is not a strong systematic assessment of how I usually treat privacy agreements. My harm-assessment process is usually structured a bit like this, with some additional pressure from an "agreement-to-secrecy," and also factors in the meta-secrecy-agreements around "being able to be held to secrecy agreements" and "being honest about how well you can be held to secrecy agreements." No, I don't feel like having a long discussion about privacy policies right now. But if you care? My thoughts on information-sharing policy were valuable enough to get me into the 2019 Review. If you start on this here, I will ignore you.
[-]Avi4y230

The fact that the people involved apparently find it uniquely difficult to talk about is a pretty good indication that Leverage != CFAR/MIRI in terms of cultishness/harms etc.

[-]AnnaSalamon4y220

Yes; I want to acknowledge that there was a large cost here. (I wasn't sure, from just the comment threads; but I just talked to a couple people who said they'd been thinking of writing up some observations about Leverage but had been distracted by this.)

I am personally really grateful for a bunch of the stuff in this post and its comment thread. But I hope the Leverage discussion really does get returned to, and I'll try to lend some momentum that way. Hope some others do too, insofar as some can find ways to actually help people put things together or talk.

[-]Viliam4y160

Seems to me that, given the current situation, it would probably be good to wait maybe two more days until this debate naturally reaches its end, and then restart the debate about Leverage.

Otherwise, we risk having two debates running in parallel, interfering with each other.

The comments section of this post is full of CFAR and MIRI employees attempting to do collaborative truth-seeking. The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!

Then it is good that this debate happened. (Despite my shock when I saw it first.) It's just the timing with regards to the debate about Leverage that is unfortunate.

-9Puxi Deek4y
-36Kenny4y
[-]Eliezer Yudkowsky4y1650

By way of narrowing down this sense, which I think I share, if it's the same sense: leaving out the information from Scott's comment about a MIRI-opposed person who is advocating psychedelic use and causing psychotic breaks in people, and particularly this person talks about MIRI's attempts to have any internal info compartments as a terrible dark symptom of greater social control that you need to jailbreak away from using psychedelics, and then those people have psychotic breaks - leaving out this info seems to be not something you'd do in a neutrally intended post written from a place of grave concern about community dynamics.  It's taking the Leverage affair and trying to use it to make a point, and only including the info that would make that point, and leaving out info that would distract from that point.  And I'm not going to posture like that's terribly bad inhuman behavior, but we can see it and it's okay to admit to ourselves that we see it.

And it's also okay for somebody to think that the original Leverage affair needed to be discussed on its own terms, and not be carefully reframed in exactly the right way to make a point about a higher-profile group the author... (read more)

[-]Benquo4y210

not something you'd do in a neutrally intended post written from a place of grave concern about community dynamics

 

I'm not going to posture like that's terribly bad inhuman behavior, but we can see it and it's okay to admit to ourselves that we see it

These have the tone of allusions to some sort of accusation, but as far as I can tell you're not actually accusing Jessica of any transgression here, just saying that her post was not "neutrally intended," which - what would that mean? A post where Gricean implicature was not relevant?

Can you clarify whether you meant to suggest Jessica was doing some specific harmful thing here or whether this tone is unendorsed?

[-]Eliezer Yudkowsky4y*640

Okay, sure.  If what Scott says is true, and it matches my recollections of things I heard earlier - though I can attest to very little of it of my direct observation - then it seems like this post was written with knowledge of things that would make the overall story arc it showed, look very different, and those things were deliberately omitted.  This is more manipulation than I myself would personally consider okay to use in a situation like this one, though I am ever mindful of Automatic Norms and the privilege of being more verbally facile than others in which facts I can include but still make my own points.

[-]jessicata4y110

See Zack's reply here and mine here. Overall I didn't think the amount of responsibility was high enough for this to be worth mentioning.

[-]Ruby4y750

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away...

I want to second this reaction (basically your entire second paragraph). I have been feeling the same but hadn't worked up the courage to say it.

[-]Freyja4y370

I am also mad at what I see as piggybacking on Zoe's post, downplaying of the harms described in her post, and a subtle redirection of collective attention away from potentially new, timid accounts of things that happened to a specific group of people within Leverage who seem to have a lot of difficulty talking about it.

I hope that the sustained collective attention required to witness, make sense of and address the accounts of harm coming out of the psychology division of Leverage doesn’t get lost as a result of this post being published when it was.

[-]Viliam4y*220

For a moment I actually wondered whether this was a genius-level move by Leverage, but then I decided that I am just being paranoid. But it did derail the previous debate successfully.

On the positive side, I learned some new things. Never heard about Ziz before, for example.

EDIT:

Okay, this is probably silly, but... there is no connection between the Vassarites and Leverage, right? I just realized that my level of ignorance does not justify dismissing a hypothesis so quickly. And of course, everyone knows everyone, but there are different levels of "knowing people", and... you know what I mean, hopefully. I will defer to the judgment of people from the Bay Area on this topic.

[-]habryka4y100

Outside of "these people probably talked to each other like once every few months" I think there is no major connection between Leverage and the Vassarites that I am aware of.

[-]Viliam4y120

Thanks.

I mostly assumed this; I suppose in the opposite case someone probably would have already mentioned that. But I prefer to have it confirmed explicitly.

[-]Eliezer Yudkowsky4y160

+2.

[-]jessicata4y500

The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away.

I'm assuming that sensemaking is easier, rather than harder, with more relevant information and stories shared. I guess if it's pulling the spotlight away, it's partially because it's showing relevant facts about things other than Leverage, and partially because people will be more afraid of scapegoating Leverage if the similarities to MIRI/CFAR are obvious. I don't like scapegoating, so I don't really care if it's pulling the spotlight away for the second reason.

If the points in the post felt more compelling, then I’d probably be more down for an argument of “we should bin these together and look at this as a whole”, but as it stands the stuff listed in here feels like it’s describing something significantly less damaging, and of a different kind of damage.

I don't really understand what Zoe went through, just reading her post (although I have talked with other ex-Leverage people about the events). You don't understand what I went through, either. It was really, really psychologically disturbing. I sound paran... (read more)

[-]ChristianKl4y580

I don't really understand what Zoe went through, just reading her post (although I have talked with other ex-Leverage people about the events). You don't understand what I went through, either. It was really, really psychologically disturbing. I sound paranoid writing what I wrote, but this paranoia affected so many people. 

It would probably have been better if you had focused on your experience and dropped all the talk about Zoe from this post. That would make it easier for the reader to just take the informational value from your experience.

I think your post is still valuable information, but the added narrative layer makes it harder to interact with than it would have been had it focused more on your experience.

[-]Ben Pace4y350

One example for this is comparing Zoe’s mention of someone at Leverage having a psychotic break to the author having a psychotic break. But Zoe’s point was that Leverage treated the psychotic break as an achievement, not that the psychotic break happened. 

From the quotes in Scott's comment, it seems to me also the case that Michael Vassar also treated Jessica's and Ziz's psychoses as an achievement.

[-]Zack_M_Davis4y1060

it seems to me also the case that Michael Vassar also treated Jessica's [...] psycho[sis] as an achievement

Objection: hearsay. How would Scott know this? (I wrote a separate reply about the ways in which I think Scott's comment is being unfair.) As some closer-to-the-source counterevidence against the "treating as an achievement" charge, I quote a 9 October 2017 2:13 p.m. Signal message in which Michael wrote to me:

Up for coming by? I'd like to understand just how similar your situation was to Jessica's, including the details of her breakdown. We really don't want this happening so frequently.

(Also, just, whatever you think of Michael's many faults, very few people are cartoon villains that want their friends to have mental breakdowns.)

[-]Ben Pace4y250

Thanks for the counter-evidence.

[-]Benquo4y280

First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away.

If we're trying to solve problems rather than attack the bad people, then the boundaries of the discussion should be determined by the scope of the problem, not by which people we're saying are bad. If you're trying to attack the bad people without standards or a theory of what's going on, that's just mob violence.

[-]Aella4y380

I... think I am trying to attack the bad people? I'm definitely conflict-oriented around Leverage; I believe that on some important level treating that organization or certain people in it as good-intentioned-but-misguided is a mistake, and a dangerous one. I don't think this is true for MIRI/CFAR; as is summed up pretty well in the last section of Orthonormal's post here. I'm down for the boundaries of the discussion being determined by the scope of the problem, but I perceive the original post here to be outside the scope of the problem. 

I'm also not sure how to engage with your last sentence. I do have theories for what is going on (but regardless I'm not sure if you give a mob a theory that makes it not a mob).

4Benquo4y
This is explicitly opposed to Zoe's stated intentions. Other people, including me and Jessica, also want to reveal and discuss bad behavior, but don't consent to violence in the name of our grievances. Agnes Callard's article is relevant here: I Don’t Want You to ‘Believe’ Me. I Want You to Listen.

We want to reveal problems so that people can try to understand and solve those problems. Transforming an attempt at discussion of abuse into a scapegoating movement silences victims, preventing others from trying to interpret and independently evaluate the content of what they are saying, simplifying it to a bid to make someone the enemy.

Historically, the idea that instead of trying to figure out which behaviors are bad and policing them, we need to quickly attack the bad people, is how we get Holocausts and Stalinist purges. In this case I don't see any upside.
[-]Aella4y560

I perceive you as doing a conversational thing here that I don't like, where you like... imply things about my position without explicitly stating them? Or talk from a heavy frame that isn't explicit? 

  1. Which stated intentions? Where she asks people 'not to bother those who were there'? What thing do you think I want to do that Zoe doesn't want me to do? 
  2. Are you claiming I am advocating violence? Or simply implying it?
  3. Are you trying to argue that I shouldn't be conflict oriented because Zoe doesn't want me to be? The last part feels a little weird for someone to tell me, as I'm good friends with Zoe and have talked with her extensively about this.
  4. I support revealing problems so people can understand and solve them. I also don't like whatever is happening in this original article due to reasons you haven't engaged with.
  5. You're saying transforming an attempt to discuss abuse into scapegoating silences victims, keeps other ppl from evaluating the content, and simplifies it a bid to make someone the enemy. But in the comment you were responding to, I was talking about Leverage, not the author of this post. I view Leverage and co. as bad actors, but you sort of... reframe it to make it sound like I'm using a conflict mindset towards Jessica?
  6. You're also not engaging with the points I made, and you're responding to arguments I don't condone.

I don't really view you as engaging in good faith at this point, so I'm precommitting not to respond to you after this.

[-]Unreal4y550

Flagging that... I somehow want to simultaneously upvote and downvote Benquo's comment here. 

Upvote because I think he's standing for good things. (I'm pretty anti-scapegoating, especially of the 'quickly' kind that I think he's concerned about.) 

Downvote because it seems weirdly in the wrong context, like he's trying to punch at some kind of invisible enemy. His response seems incongruous with Aella's actual deal.  

I have some probability on miscommunication / misunderstanding. 

But also ... why ? are you ? why are your statements so 'contracting' ? Like they seem 'narrowizing' of the discussion in a way that seems like it philosophically tenses with your stated desire for 'revealing problems'. And they also seem weirdly 'escalate-y' like somehow I'm more tense in my body as I read your comments, like there's about to be a fight? Not that I sense any anger in you, but I sense a 'standing your ground' move that seems like it could lead to someone trying to punch you because you aren't budging. 

This is all metaphorical language for what I feel like your communication style is doing here. 

[-]Benquo4y150

Thanks for separating evaluation of content from evaluation of form. That makes it easy for me to respond to your criticism of my form without worrying so much that it's a move to suppress imperfectly expressed criticism.

The true causal answer is that when I perceive someone as appealing to a moralistic framework, I have a tendency to criticize their perspective from inside a moralistic frame, even though I don't independently endorse moralizing. While this probably isn't the best thing I could do if I were perfectly poised, I don't think this is totally pointless either. Attempts to scapegoat someone via moralizing rely on the impression that symmetric moral reasoning is being done, so they can be disrupted by insistent opposition from inside that frame.

You might think of it as standing in territory I think someone else has unjustly claimed, and drawing attention to that fact. One might get punched sometimes in such circumstances, but that's not so terrible; definitely not as bad as being controlled by fear, and it helps establish where recourse/justice is available and where it isn't, which is important information to have! Occasionally bright young people with a moral compass ge... (read more)

[-]Unreal4y130

The true causal answer is that when I perceive someone as appealing to a moralistic framework, I have a tendency to criticize their perspective from inside a moralistic frame, even though I don't independently endorse moralizing.

o

hmmm, well i gotta chew on that more but

Aella seems like a counter-productive person to stand your ground against. I sense her as mainly being an 'advocate' for Zoe. She claims wanting to attack the bad people, but compared with other commenters, I sense less 'mob violence' energy from her and ... maybe more fear that an important issue will be dropped / ignored. (I am not particularly afraid of this; the evidence against Leverage is striking and damning enough that it doesn't seem like it will readily be dropped, even if the internet stops talking about it. In fact I hope to see the internet talking about it a bit less, as more real convos happen in private.) 

I'm a bit worried about the way Scott's original take may have pulled us towards a shared map too quickly. There's also a general anti-jessicata vibe I'm getting from 'the room' but it's non-specific and has a lot to do with karma vote patterns. Naming these here for the sake of group awareness ... (read more)

9Benquo4y
I think it seems hard to find a disagreement because we don't disagree about much here. Aella was being basically cooperative in revealing some details about her motives, as was Logan. But that behavior is only effectively cooperative if people can use that information to build shared maps. I tried to do that in my replies, albeit imperfectly & in a way that picked a bit more of a fight than I ideally would have.

At leisure, I do this. I'm working on a blog post trying to explain some of the structural factors that cause orgs like Leverage to go wrong in the way Zoe described. I've written extensively about both scapegoating and mind control outside the context of particular local conflicts, and when people seem like they're in a helpable state of confusion I try to help them. I spent half an hour today using a massage gun on my belly muscles, which improved my reading comprehension of your comment and let me respond to it more intelligently.

But I'm in an adversarial situation. There are optimizing processes trying to destroy what I'm trying to build, trying to threaten people into abandoning their perspectives and capitulating to violence. It seems like you're recommending that I build new capacities instead of defending old ones. If I'm deciding between those, I shouldn't always get either answer. Instead, for any process damaging me, I should compare these two quantities: (A) the cost of replacement - how much would it cost me to repair the damage or build an equivalent amount of capacity elsewhere? (B) the cost of preventing the damage. I should work on prevention when B<A, and on building when A<B.

Since I expect my adversaries to make use of resources they seize to destroy more of what I care about, I need to count that towards the total expected damage caused (and therefore the cost of replacement). If I'd been able to costlessly pause the world for several hours to relax and think about the problem, I would almost certainly have been able to write a b
6Unreal4y
Well I feel somewhat more relaxed now, seeing that you're engaging in a pretty open and upfront manner. I like Tai Chi :)

The main disagreement I see is that you are thinking strategically and in a results-oriented fashion about actions you should take; you're thinking about things in terms of resource management and cost-benefit analysis. I do not advocate for that. Although I get that my position is maybe weird?

I claim that kind of thinking turns a lot of situations into finite games. Which I believe then contributes to life-ending / world-ending patterns.

...

But maybe a more salient thing: I don't think this situation is quite as adversarial as you're maybe making it out to be? Or like, you seem to be adding a lot to an adversarial atmosphere, which might be doing a fair amount of driving towards more adversarial dynamics in the group in general.

I think you and I are not far apart in terms of values, and so ... I kind of want to help you? But also ... if you're attached to certain outcomes being guaranteed, that's gonna make it hard...
[-]Benquo4y150

I don't understand where guarantees came into this. I don't understand how I could answer a question of the form "why did you do X rather than Y" without making some kind of comparison of the likely outcomes of X and Y.

I do know that in many cases people falsely claim to be comparing costs and benefits honestly, or falsely claim that some resource is scarce, as part of a strategy of coercion. I have no reason to do this to myself but I see many people doing it and maybe that's part of what turned you off from the idea.

On the other hand, there's a common political strategy where a dominant coalition establishes a narrative that something should be provided universally without rationing, or that something should be absolutely prevented without acknowledging taboo tradeoffs. Since this policy can't be implemented as stated, it empowers people in the position to decide which exceptions to make, and benefits the kinds of people who can get exceptions made, at the expense of less centrally connected people.

It seems to me like thinking about tradeoffs is the low-conflict alternative to insisting on guaranteed outcomes.

Generalizing from your objection to thinking about things in terms of r... (read more)

[-]Unreal4y170

Uhhh sorry, the thing about 'guarantees' was probably a mis-speak. 

For reference, I used to be a competitive gamer. This meant I used to use resource management and cost-benefit analysis a lot in my thinking. I also ported those framings into broader life, including how to win social games. I am comfortable thinking in terms of resource constraints, and lived many years of my life in that mode. (I was very skilled at games like MTG, board games, and Werewolf/Mafia.) 

I have since updated to realize how that way of thinking was flawed and dissociated from reality.

I don't understand how I could answer a question of the form "why did you do X rather than Y" without making some kind of comparison of the likely outcomes of X and Y.

I wrote a whole response to this part, but ... maybe I'm missing you. 

Thinking strategically seems fine to the extent that one is aligned with love / ethics / integrity and not acting out of fear, hate, or selfishness. The way you put your predicament caused me to feel like you were endorsing a fear-aligned POV. 

"Since I expect my adversaries to make use of resources they seize to destroy more of what I care about," "But I'm in an adversaria

... (read more)
6Benquo4y
I'm talking about optimizing processes coordinating with copies of themselves, distributed over many people. My blog post Civil Law and Political Drama is a technically precise description of this, though Towards optimal play as Villager in a mixed game adds some color that might be helpful. I don't think my interests are opposed to the autonomous agency of almost anyone. I do think that some common trigger/trauma behavior patterns are coordinating against autonomous human agency.

The gaming detail helps me understand where you're coming from here. I don't think the right way to manage my resource constraints looks very much like playing a game of MtG. I am in a much higher-dimensional environment where most of my time should be spent playing/exploring, or resolving tension patterns that impede me from playing/exploring. My endorsed behavior pattern looks a little more like the process of becoming a good MtG player, or discovering that MtG is the sort of thing I want to get good at. (Though empirically that's not a game it made sense to me to invest in becoming good at - I chose Tai Chi instead for reasons!)

I endorse using the capacities I already have, even when those capacities are imperfect. When responding to social conflict, it would almost always be more efficient and effective for me to try to clarify things out of a sense of open opportunity, than from a fear-based motive. This can be true even when a proper decision-theoretic model of the situation would describe it as an adversarial one with time pressure; I might still protect my interests better by thinking in a free and relaxed way about the problem, than tensing up like a monkey facing a physical threat. But a relaxed attitude is not always immediately available to me, and I don't think I want to endorse always taking the time to detrigger before responding to something in the social domain. Part of loving and accepting human beings as they are, without giving up on intention to make things better,
[-]Unreal4y110

optimizing processes coordinating with copies of themselves, distributed over many people

Question about balance: how do you not end up reifying these in your mind, creating a paranoid sense of 'there be ghosts lurking in shadows' ? 

This question seems central to me because the poison I detect in Vassar-esque-speak is 

a) Memetically more contagious stories seem to include lurking ghosts / demons / shadows because adding a sense of danger or creating paranoia is sticky and salient. Vassar seems to like inserting a sense of 'hidden danger' or 'large demonic forces' into his theories and way of speaking about things. I'm worried this is done for memetic intrigue, viability, and stickiness, not necessarily because it's more true. It makes people want to listen to him for long periods of time, but I don't sense it being an openly curious kind of listening but a more addicted / hungry type of listening. (I can detect this in myself.) 

I guess I'm claiming Vassar has an imbalance between the wisdom/truth of his words and the power/memetic viability of his words. With too much on the side of power. 

b) Reifying these "optimizing processes coordinating" together, maybe "aga... (read more)

[-]Benquo4y100

Question about balance: how do you not end up reifying these in your mind, creating a paranoid sense of ‘there be ghosts lurking in shadows’ ?

Mostly just by trying to think about this stuff carefully, and check whether my responses to it add up & seem constructive. I seem to have been brought up somehow with a deep implicit faith that any internal problem I have, I can solve by thinking about - i.e. that I don't have any internal infohazards. So, once I consciously notice the opportunity, it feels safe to be curious about my own fear, aggression, etc. It seems like many other people don't have this faith, which would make it harder for them to solve this class of problem; they seem to think that knowing about conflicts they're engaged in would get them hurt by making them blameworthy; that looking the thing in the face would mark them for destruction.

My impression is that insofar as I'm paranoid, this is part of the adversarial process I described, which seems to believe in something like ontologically fundamental threats that can't be reduced to specific mechanisms by which I might be harmed, and have to be submitted to absolutely. This model doesn't stand up to a serious e... (read more)

3Unreal4y
Thanks for your level-headed responses. At this point, I have nothing further to talk about on the object-level conversation (but open to anything else you want to discuss).

For information value, I do want to flag that... I'm noticing an odd effect from talking with you. It feels like being under a weighted blanket or a 'numbing' effect. It's neither pleasant nor unpleasant. My sketchpad sense of it is: Leaning on the support of Reason. Something wants me to be soothed, to be reassured, that there is Reasonableness and Order, and it can handle things. That most things can be Solved with ... correct thinking or conceptualization or model-building or something.

So, it's a projection and all, but I don't trust this "thing" whatever it is, much. It also seems to have many advantages. And it may make it pretty hard for me to have a fully alive and embodied conversation with you.

Curious if any of this resonates with you or with anyone else's sense of you, or if I'm off the mark. But um also this can be ignored or taken offline as well, since it's not adding to the overall conversation and is just an interpersonal thing.
4Benquo4y
I did feel inhibited from having as much fun as I'd have liked to in this exchange because it seemed like while you were on the whole trying to make a good thing happen, you were somewhat scared in a triggered and triggerable way. This might have caused the distortion you're describing. Helpful and encouraging to hear that you picked up on that and it bothered you enough to mention.
5Unreal4y
Your response here is really perplexing to me and didn't go in the direction I expected at all. I am guessing there's some weird communication breakdown happening. ¯\_(ツ)_/¯ I guess all I have left is: I care about you, I like you, and I wish well for you. <3 
0Benquo4y
It seems like you're having difficulty imagining that I'm responding to my situation as I understand it, and I don't know what else you might think I'm doing.
5Kaj_Sotala4y
I read the comment you're responding to as suggesting something like "your impression of Unreal's internal state was so different from her own experience of her internal state that she's very confused".
2Benquo4y
I was relying on her self-reports, like https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe#g9vLjj7rpGDH99adj
5[comment deleted]4y
4[comment deleted]4y
8Ben Pace4y
What do you think the problem is that Jessica is trying to solve? (I'm also interested in what problem you think Zoe is trying to solve.)
[-]Jarred Filmer4y240

I empathise with the feeling of slipperiness in the OP; I feel comfortable attributing that to the subject matter rather than malice.

If I had an experience that matched Zoe's to the degree jessicata's did (superficially or otherwise), I'd feel compelled to post it. I found it helpful for the question of whether "insular rationalist group gets weird and experiences rash of psychotic breaks" is a community problem, or just a problem with one stray dude.

[-]Aella4y330

Scott's comment does seem to verify the "insular rationalist group gets weird and experiences rash of psychotic breaks" trend, but it seems to be a different group than the one named in the original post.

[-]romeostevensit4y220

One of the things that can feel like gaslighting in a community that attracts highly scrupulous people is when posting about your interpretation of your experience is treated as a contractual obligation to defend the claims and discuss any possible misinterpretations or consequences of what is a challenging thing to write in the first place.

[-]Aella4y530

I feel like here, and in so many other comments in this discussion, there are important and subtle distinctions being missed. I don't have any intention to unconditionally accept and support all accusations made (I have seen false accusations cause incredible harm and suicidality in people close to me). I do expect people who make serious claims about organizations to be careful about how they do it. I think Zoe's Leverage post easily met my standard, but this post here triggered a lot of warning flags for me, and I find it important to pay attention to those.

[-]Duncan Sabien (Inactive)4y180

Speaking of highly scrupulous...

I think that the phrases "treated as a contractual obligation" and "any possible misinterpretations or consequences" are both hyperbole, if they are (as they seem) intended as fair summaries or descriptions of what Aella wrote above.

I think there's a skipped step here, where you're trying to say that what Aella wrote above might imply those things, or might result in those things, or might be tantamount to those things, but I think it's quite important to not miss that step.

Before objecting to Aella's [A] by saying "[B] is bad!" I think one should justify or at least explicitly assert [A—>B]

[-]romeostevensit4y130

Yes, and to clarify, I am not attempting to imply that there is something wrong with Aella's comment. It's more that this is a pattern I have observed and talked about with others. I don't think people playing a part in a pattern that has some negative side effects should necessarily have a responsibility frame around that, especially given that one literally can't track all the various possible side effects of actions. I see epistemic statuses as partially attempting to give people more affordance for thinking about the possible side effects of the multi-context nature of online comms, and that was used to good effect here; I likely would have had a more negative reaction to Aella's post if it hadn't included the epistemic status.

[-]hg004y180

The community still seems in the middle of sensemaking around Leverage

Understanding how other parts of the community were similar/dissimilar to Leverage seems valuable from a sensemaking point of view.

Lots of parts of the post sort of implicitly present things as important, or ask you to draw conclusions without explicitly pointing out those conclusions.

I think you may be asking your reader to draw the conclusion that this is a dishonest way to write, without explicitly pointing out that conclusion :-) Personally, I see nothing wrong with presenting only observations.

0farp4y
Yeesh. I don't think we should police victims' timing. That seems really evil to me. We should be super skeptical of any attempts to tell people to shut up about their allegations, and "your timing is very insensitive to the real victims" really does not pass the smell test for me.
[-]Viliam4y750

Some context, please. Imagine the following scenario:

  • Victim A: "I was hurt by X."
  • Victim B: "I was hurt by Y."

There is absolutely nothing wrong with this, whether it happens the same day, the next day, or week later. Maybe victim B was encouraged by (reactions to) victim A's message, maybe it was just a coincidence. Nothing wrong with that either.

Another scenario:

  • Victim A: "I was hurt by X."
  • Victim B: "I was also hurt by X (in a different way, on another day etc.)."

This is a good thing to happen; more evidence, encouragement for further victims to come out.

But this post is different in a few important ways. First, Jessicata piggybacks on Zoe's story a lot, insinuating analogies but providing very little actual data. (If you rewrote the article to avoid referring to Zoe, it would be 10 times shorter.) Second, Jessicata repeatedly makes comparisons between Zoe's experience at Leverage and her experience at MIRI/CFAR, and usually concludes that Leverage was less bad (for reasons that seem weird to me, such as that their abuse was legible, or that they provided space for people to talk about demons and exorcise them). Here are some quotes:

I want to disagree with a frame that says th

... (read more)
[-]Aella4y640

I don't think "don't police victims' timing" is an absolute rule; not policing the timing is a pretty good idea in most cases. I think this is an exception. 

And if I wasn't clear, I'll explicitly state my position here: I think it's good to pay close attention to negative effects communities have on its members, and I am very pro people talking about this, and if people feel hurt by an organization it seems really good to have this publicly discussed. 

But I believe the above post did not simply do that. It also did other things: it framed things in ways I perceive as misleading, left out key information relevant to the discussion (as per Eliezer's comment here), and relied very heavily on Zoe's account of Leverage to bring validity to its own claims, when I perceive Leverage as having been both significantly worse and worse in a different category of way. If the above post hadn't done these things, I don't think I would have any issue with the timing.

-28farp4y
-42farp4y
[-]Ben Pace4y*1990

Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness.

Just zooming in on this, which stood out to me personally as a particular thing I'm really tired of. 

If you're not disagreeing with people about important things then you're not thinking. There are many options for how to negotiate a significant disagreement with a colleague, including spending lots of time arguing about it, finding a compromise action, or stopping collaborating with the person (if it's a severe disagreement, which often it can be). But telling someone that by disagreeing they're claiming to be 'better' than another person in some way always feels to me like an attempt to 'control' the speech and behavior of the person you're talking to, and I'm against it.

It happens a lot. I recently overheard someone (who I'd not met before) telling Eliezer Yudkowsky that he's not allowed to have extreme beliefs about AGI outcomes. I don't recall the specific claim, just that EY's probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees w... (read more)

Reply
[-]Eliezer Yudkowsky4y1890

I affirm the correctness of Ben Pace's anecdote about what he recently heard someone tell me.

"How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" - is somebody trolling?  Have they never read anything I've written in my entire life?  Do they have no sense, even, of irony?  Yeah, sure, it's harder to be better at some things than me, sure, somebody might be skeptical about that, but then you ask for evidence or say "Good luck proving that to us all eventually!"  You don't be like, "Do you think you're special?"  What kind of bystander-killing argumentative superweapon is that?  What else would it prove?

I really don't know how I could make this any clearer.  I wrote a small book whose second half was about not doing exactly this.  I am left with a sense that I really went to some lengths to prevent this, I did what society demands of a person plus over 10,000% (most people never write any extended arguments against bad epistemology at all, and society doesn't hold that against them), I was not subtle.  At some point I have to acknowledge that other human beings are their own people... (read more)

Reply1
[-]jessicata4y600

The irony was certainly not lost on me; I've edited the post to make this clearer to other readers.

Reply
[-]Benquo4y209

I'm glad you agree that the behavior Jessica describes is explicitly opposed to the content of the Sequences, and that you clearly care a lot about this. I don't think anyone can reasonably claim you didn't try hard to get people to behave better, or could reasonably blame you for the fact that many people persistently try to do the opposite of what you say, in the name of Rationality.

I do think it would be a very good idea for you to investigate why & how the institutions you helped build and are still actively participating in are optimizing against your explicitly stated intentions. Anna's endorsement of this post seems like reasonably strong confirmation that organizations nominally committed to your agenda are actually opposed to it, unless you're actually checking. And MIRI/CFAR donors seem to for the most part think that you're aware of and endorse those orgs' activities.

When Jessica and another recent MIRI employee asked a few years ago for some of your time to explain why they'd left, your response was:

My guess is that I could talk over Signal voice for 30 minutes or in person for 15 minutes on the 15th, with an upcoming other commitment providing a definite cutoff poi

... (read more)
Reply
[-]habryka4y*590

Anna's endorsement of this post seems like reasonably strong confirmation that organizations nominally committed to your agenda are actually opposed to it,

Presumably Eliezer's agenda is much broader than "make sure nobody tries to socially enforce deferral to high-status figures in an ungrounded way" though I do think this is part of his goals.

The above seems to me like it tries to equivocate between "this is confirmation that at least some people don't act in full agreement with your agenda, despite being nominally committed to it" and "this is confirmation that people are actively working against your agenda". These two really don't strike me as the same, and I really don't like how this comment seems like it tries to equivocate between the two.

Of course, the claim that some chunk of the community/organizations Eliezer created are working actively against some agenda that Eliezer tried to set for them is plausible. But calling the above a "strong confirmation" of this fact strikes me as a very substantial stretch.

Reply
[-]Benquo4y239

It's explicitly opposition to core Sequences content, which Eliezer felt was important enough to write a whole additional philosophical dialogue about after the main Sequences were done. Eliezer's response when informed about it was:

is somebody trolling? Have they never read anything I’ve written in my entire life? Do they have no sense, even, of irony?

That doesn't seem like Eliezer agrees with you that someone got this wrong by accident, that seems like Eliezer agrees with me that someone identifying as a Rationalist has to be trying to get core things wrong to end up saying something like that.

Reply
[-]Sniffnoy4y340

I don't think this follows. I do not see how degree of wrongness implies intent. Eliezer's comment rhetorically suggests intent ("trolling") as a way of highlighting how wrong the person is; he is free to correct me if I am wrong, but I am pretty sure that is not an actual suggestion of intent, only a rhetorical one.

I would say moreover, that this is the sort of mistake that occurs, over and over, by default, with no intent necessary. I might even say that it is avoiding, not committing, this sort of mistake, that requires intent. Because this sort of mistake is just sort of what people fall into by default, and avoiding it requires active effort.

Is it contrary to everything Eliezer's ever written? Sure! But reading the entirety of the Sequences, calling yourself a "rationalist", does not in any way obviate the need to do the actual work of better group epistemology, of noticing such mistakes (and the path to them) and correcting/avoiding them.

I think we can only infer intent like you're talking about if the person in question is, actually, y'know, thinking about what they're doing. But I think people are really, like, acting on autopilot a pretty big fraction of the time; n... (read more)

Reply
4Benquo4y
This seems exactly backwards: if someone makes uncorrelated errors, they are probably unintentional mistakes; if someone makes correlated errors, they are better explained as part of a strategy. I can imagine, after reading the sequences, continuing to have the epistemic modesty bias in my own thoughts, but I don't see how I could have been so confused as to refer to it in conversation as a valid principle of epistemology.
[-]TekhneMakre4y100

Behavior is better explained as strategy than as error, if the behaviors add up to push the world in some direction (along a dimension that's "distant" from the behavior, like how "make a box with food appear at my door" is "distant" from "wiggle my fingers on my keyboard"). If a pattern of correlated error is the sort of pattern that doesn't easily push the world in a direction, then that pattern might be evidence against intent. For example, the conjunction fallacy will produce a pattern of wrong probability estimates with a distinct character, but it seems unlikely to push the world in some specific direction (beyond whatever happens when you have incoherent probabilities). (Maybe this argument is fuzzy on the edges, like if someone keeps trying to show you information and you keep ignoring it, you're sort of "pushing the world in a direction" when compared to what's "supposed to happen", i.e. that you update; which suggests intent, although it's "reactive" rather than "proactive", whatever that means. I at least claim that your argument is too general, proves too much, and would be more clear if it were narrower.)

Reply
5Benquo4y
The effective direction the epistemic modesty / argument from authority bias pushes things, is away from shared narrative as something that dynamically adjusts to new information, and towards shared narrative as a way to identify and coordinate who's receiving instructions from whom. People frequently make "mistakes" as a form of submission, and it shouldn't be surprising that other types of systematic error function as a means of domination, i.e. of submission enforcement.
3TekhneMakre4y
(I indeed find this a more clear+compelling argument and appreciate you trying to make this known.)
4Eli Tyre2y
That does seem right to me. It seems like very often correlated errors are the result of a mistaken, upstream crux. They're making one mistake, which is flowing into a bunch of specific instances. This at least has to be another hypothesis, along with "this is a conscious or unconscious strategy to get what they want."
4Sniffnoy4y
I mean, there is a word for correlated errors, and that word is "bias"; so you seem to be essentially claiming that people are unbiased? I'm guessing that's probably not what you're trying to claim, but that is what I am concluding? Regardless, I'm saying people are biased towards this mistake. Or really, what I'm saying is that it's the same sort of phenomenon that Eliezer discusses here. So it could indeed be construed as a strategy as you say; but it would not be a strategy on the part of the conscious agent, but rather a strategy on the part of the "corrupted hardware" itself. Or something like that -- sorry, that's not a great way of putting it, but I don't really have a better one, and I hope that conveys what I'm getting at. Like, I think you're assuming too much awareness/agency of people. A person who makes correlated errors, and is aware of what they are doing, is executing a deliberate strategy. But lots of people who make correlated errors are just biased, or the errors are part of a built-in strategy they're executing, not deliberately but by default, without thinking about it, one that requires effort not to execute. We should expect someone calling themself a rationalist to be better, obviously, but, IDK, sometimes things go bad? I mean, people don't necessarily fully internalize everything they read, and in some people the "hold on, what am I doing?" reflex can be weak? <shrug> I mean, I certainly don't want to rule out deliberate malice like you're talking about, but neither do I think this one snippet is enough to strongly conclude it.
5Benquo4y
In most cases it seems intentional but not deliberate. People will resist pressure to change the pattern, or find new ways to execute it if the specific way they were engaged in this bias is effectively discouraged, but don't consciously represent to themselves their intent to do it or engage in explicit means-ends reasoning about it.
4Sniffnoy4y
Yeah, that sounds about right to me. I'm not saying that you should assume such people are harmless or anything! Just that, like, you might want to try giving them a kick first -- "hey, constant vigilance, remember?" :P -- and see how they respond before giving up and treating them as hostile.
9lsusr4y
"How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" reads to me as something Eliezer Yudkowsky himself would never write.
-66throwaway462378964y
[-]Peter Wildeford4y320

I don't recall the specific claim, just that EY's probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees with some other thoughtful people on that question, he shouldn't have such confidence.

 

I think people conflate the very reasonable "I am not going to adopt your 95-99% range because other thoughtful people disagree and I have no particular reason to trust you massively more than I trust other people" with the different "the fact that other thoughtful people disagree means there's no way you could arrive at 95-99% confidence", which is false. I think thoughtful people disagreeing with you is decent evidence you are wrong, but it can still be outweighed.

Reply
[-]Alexander4y*230

I sought a lesson we could learn from this situation, and your comment captured such a lesson well.

This is reminiscent of the message of the Dune trilogy. Frank Herbert warns about society's tendencies to "give over every decision-making capacity" to a charismatic leader. Herbert said in 1979:

The bottom line of the Dune trilogy is: beware of heroes. Much better rely on your own judgment, and your own mistakes.

Reply
[-]Eli Tyre4y220

If you're not disagreeing with people about important things then you're not thinking.

This is a great sentence. I kind of want it on a t-shirt.

Reply
[-]Viliam4y100

If you're not disagreeing with people about important things then you're not thinking.

Indeed. And if people object to someone disagreeing with them, that would imply they are 100% certain of being right.

I recently overheard someone (who I'd not met before) telling Eliezer Yudkowsky that he's not allowed to have extreme beliefs about AGI outcomes.

On one hand, this suggests that the pressure to groupthink is strong. On the other hand, this is evidence of Eliezer not being treated as an infallible leader... which I suppose is good news in this avalanche of bad news.

(There is a method to reduce group pressure, by making everyone write their opinion first, and only then tell each other the opinions. Problem is, this stops working if you estimate the same thing repeatedly, because people already know what the group opinion was in the past.)

Reply
[-]Rob Bensinger4y1610

Kate Donovan messaged me to say:

I think four people experiencing psychosis in a period of five years, in a community this large with high rates of autism and drug use, is shockingly low relative to base rates.

[...]

A fast pass suggests that my 1-3% for lifetime prevalence was right, but mostly appearing at 15-35.

And since we have conservatively 500 people in the cluster (a lot more people than that attended CFAR workshops or are in MIRI or CFAR's orbit), 4 is low, given that I suspect the cluster is larger, and I am pretty sure my numbers don't include drug-induced psychosis, just primary psychosis.

The base rate seems important to take into account here, though per Jessica, "Obviously, for every case of poor mental health that 'blows up' and is noted, there are many cases that aren't." (But I'd guess that's true for the base-rate stats too?)

Reply
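The base-rate arithmetic being debated here can be sketched in a few lines. This is only a rough illustration using the thread's own assumed figures (1-3% lifetime prevalence, onset mostly ages 15-35, a cluster of ~500 people observed for ~5 years); the result is quite sensitive to the assumed cluster size and to whether everyone counted is actually inside the onset window.

```python
# Rough base-rate sketch; all inputs are the thread's assumed figures,
# not verified statistics.
lifetime_prevalence = 0.02   # midpoint of the quoted 1-3% range
onset_window_years = 20      # quoted onset range, ages 15-35
cluster_size = 500           # "conservatively 500 people"
observation_years = 5

# If lifetime risk were spread roughly evenly over the onset window,
# the annual hazard for someone in that age range would be about:
annual_hazard = lifetime_prevalence / onset_window_years  # 0.001/year

expected_cases = cluster_size * observation_years * annual_hazard
print(round(expected_cases, 1))  # → 2.5
```

Under these particular assumptions the expected count comes out near the observed count, which shows how much the "shockingly low" vs. "high" conclusion turns on the inputs rather than on the arithmetic.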
[-]LGS4y350

I'm a complete outsider looking in here, so here's an outsider's perspective (from someone in CS academia, currently in my early 30s).

I've never heard or seen anyone, in real life, ever have psychosis. I know of 0 cases. Yeah, I know that people don't share such things, but I've heard of no rumors either.

By contrast, depression/anxiety seems common (especially among grad students) and I know of a couple of suicides. There was even a murder! But never psychosis; without the internet I wouldn't even know it's a real thing.

I don't know what the official base rate is, but saying "4 cases is low" while referring to the group of people I'm familiar with (smart STEM types) is, from my point of view, absurd.

The rate you quote is high. There may be good explanations for this: maybe rationalists are more open about their psychosis when they get it. Maybe they are more gossipy so each case of psychosis becomes widely known. Maybe the community is easier to enter for people with pre-existing psychotic tendencies. Maybe it's all the drugs some rationalists use.

But pretending the reported rate of psychosis is low seems counterproductive to me.

Reply
[-]JenniferRM4y480

I lived in a student housing cooperative for 3 years during my undergrad experience. These were non-rationalists. I lived with 14 people, then 35, then 35 (somewhat overlapping) people.

In these 3 years I saw 3 people go through a period of psychosis.

Once it was because of whippets, basically, and updated me very very strongly away from nitrous oxide being safe (it potentiates response to itself, so there's a positive feedback loop, and positive feedback loops in biology are intrinsically scary). Another time it was because the young man was almost too autistic to function in social environments and then feared that he'd insulted a woman and would be cast out of polite society for "that action and also for overreacting to the repercussions of the action". The last person was a mixture of marijuana and having his Christianity fall apart after being away from the social environment of his upbringing.

A striking thing about psychosis is that up close it really seems more like a biological problem rather than a philosophic one, whereas I had always theorized naively that there would be something philosophically interesting about it, with opportunities to learn or teach in a way that conn... (read more)

Reply
6TekhneMakre4y
Thanks for this account. Feels like there's more to the story here. Two of the cases you gave do sound like they had some mental thing (Christianity, social fear) that precipitated the psychosis, even if the psychosis itself was non-mental.
[-]mingyuan4y320

I agree with other commenters that you are just less likely to see psychosis even if it's there, both because it's not ongoing in the way that depression and anxiety are, and because people are less likely to discuss it. I was only one step away from Jessica in the social graph in October of 2017 and never had any inkling that she'd had a psychotic episode until just now. I also wasn't aware that Zack Davis had ever had a psychotic episode, despite having met him several times and having read his blog a bit. I also lived with Olivia during the time that she was apparently inspiring psychosis in others. 

In fact, the only psychotic episodes I've known about are ones that had news stories written about them, which suggests to me that you are probably underestimating the extent to which people keep quiet about the psychotic episodes of themselves and those close to them. It seems in quite poor taste to gossip about, akin to gossiping about friends' suicide attempts (which I also assume happen much more often than I hear about — I think one generally only hears about the ones that succeed or that are publicized to spread awareness).

Just for thoroughness, here are the psychotic epis... (read more)

Reply
[-]LGS4y180

I feel like people keep telling me that psychosis around me should be higher than what I hear about, which is irrelevant to my point: my point is that the frequency with which I hear about psychosis in the rationalist community is like an order of magnitude higher than the frequency with which I hear about it elsewhere.

It doesn't matter whether people hide psychosis among my social group; the observation to explain is why people don't hide psychosis in the rationalist community to the same extent.

For example, you mention 2 separate example of Bay Area rationalists making the news for psychosis. I know of no people in my academic community who have made the news for psychosis. Assuming equal background rates, what is left to explain is why rationalists are more likely to make the news when they get psychosis.

Another example: there have now been 1-2 people who have admitted to psychosis in blog posts intended as public callouts. I know of no people in my academic community who have written public callout blog posts in which they say they've had psychosis. Is there an explanation for why rationalists who've had psychosis are more likely to write public callout blog posts?

Anyway, this discussion feels kind of moot now that I've read Scott Alexander's update to his comment. He says that several people (who knew each other) all had psychosis around the same time in 2017. No reasonable person can think this is merely baseline; some kind of social contagion is surely involved (probably just people sharing drugs or drug recommendations).

Reply
6Alex Vermillion4y
I think part of it is that this isn't related to your social network, but your news habits and how your news sources cover your social network. You probably don't read newspapers that are as certain to write about your neighbor having any kind of "psychosis", but you read forums that tell you about Rationalists doing the same.
0Puxi Deek4y
Their leaving out the exact details of what went on with their groups makes the whole discussion sketchy. Maybe they just want to keep the conversation to themselves. If that's the case, why are they posting on LW?
[-]romeostevensit4y170

Sampling error. Psychosis is not an ongoing thing, yielding many fewer chances to observe it than months- or years-long depression or anxiety. Psychosis often manifests when people are already isolated due to worsening mental health, whereas depression and anxiety can be exacerbated by exactly the situations in which you would observe them, i.e. socializing. Nor would people volunteer their experience, due to much greater stigma.

Reply
6LGS4y
I am not comparing "number of psychosis among my friends" to "number of depression episodes among my friends". I am comparing "number of psychosis among my friends" to "number of psychosis among rationalists". Any sampling errors should apply equally to the rationalists (or if not, that demands an explanation). The observation is that there's a lot more reported psychosis among rationalists than reported psychosis among (say) CS grad students. I don't have an explanation (and maybe there's an innocuous one), but I don't think people should be denying this fact.
[-]TekhneMakre4y190

A hypothesis is that rationalists are a larger gossip community, so that e.g. you might hear about psychosis from 4 years ago in people you're nth-degree socially connected with, where maybe most other communities aren't like that?

Reply
[-]LGS4y100

Certainly possible! I mentioned this hypothesis upthread.

I wonder if there are ways to test it. For instance, do non-Bay-Arean rationalists also have a high rate of reported psychosis? I think not (not sure though), though perhaps most of the gossip centers on the Bay Area.

Are Bay Area rationalists also high in reported levels of other gossip-mediated things? I'm trying to name some, but most sexual ones are bad examples because of the polyamory confounder. How about: are Bay rationalists high in reported rates of plastic surgery? How about abortion? These seem like somewhat embarrassing things that you'd normally not find out about, but that people like to gossip about.

Or maybe people don't care to gossip about these things on the internet, because they are less interesting than psychosis.

Reply
[-]Freyja4y100

I’m someone with a family history of psychosis and I spend quite a lot of time researching it—treatments, crisis response, cultural responses to it. There are roughly the same number of incidences of psychosis in my immediate to extended family as are described in this post in the extended rationalist community. Major predictive factors include stress, family history and use of marijuana (and, to a lesser extent, other psychedelics). I don’t have studies to back this up, but I have an instinct based on my own experience that openness-to-experience and risk-of-psychosis are correlated in family risk factors. So given the drugs, stress and genetic openness, I’d expect generic Bay Area smart people to have a fairly high risk of psychosis compared to, say, people in more conservative areas already.

Reply
4TekhneMakre4y
(Sort of; you did say "more gossipy -> more widely known", but I wanted to specifically add the word "larger", the point being that a small + extra-gossipy community would have a higher than usual report rate, and so would a large + extra-gossipy (+ memory-ful) community; but the larger one would have more raw numbers, so you'd get a wrong estimate of the proportional rate if you estimated the size of the relevant reference class using intuitions based on small gossip communities. And maybe even a less gossipy but larger network would still have this effect; like, I *never* hear gossip about people in communities I'm not a part of, even if I talk to some people from those communities, so there's more structure than just the rate of gossip. It's more a question of how large the "gossip-percolation connected component" is.)
4jessicata4y
See PhoenixFriend's comment, there were multiple cases I didn't know about, so a lot of people's thoughts about this post are recapitulating sampling bias from my own knowledge (which is from my own social network, e.g. oversampling trans people and people talking with Michael). This confirms that people are avoiding volunteering the information that they had a psychotic break.
[-]Duncan Sabien (Inactive)4y370

PhoenixFriend alleges multiple cases you didn't know about, but so far no one else has affirmed that those cases existed or were closely connected with CFAR/MIRI.

I think it's entirely possible that those cases did exist and will be affirmed, but at the moment my state is "betting on skeptical."

Reply
[-]jessicata4y*310

This is a good point regarding the broader community. I do think that, given that at least 2 cases were former MIRI employees, there might be a higher rate in that subgroup.

EDIT: It's also relevant that a lot of these cases happened in the same few years. 4 of the 5 cases of psychiatric hospitalization or jail time I know about happened in 2017, the other happened sometime 2017-2019. I think these people were in the 15-35 age range, which spans 20 years.

Reply
4Gunnar_Zarncke4y
See also studies about base-rate here: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=pHaq26ZrznpC7D5f4 
2Linch4y
? 😭😤😂🤔🤔🤔
[-]nostalgebraist4y1560

First, thank you for writing this.

Second, I want to jot down a thought I've had for a while now, and which came to mind when I read both this and Zoe's Leverage post.

To me, it looks like there is a recurring phenomenon in the rationalist/EA world where people...

  • ...become convinced that the future is in their hands: that the fate of the entire long-term future ("the future light-cone") depends on the success of their work, and the work of a small circle of like-minded collaborators
  • ...become convinced that (for some reason) only they, and their small circle, can do this work (or can do it correctly, or morally, etc.) -- that in spite of the work's vast importance, in spite of the existence of billions of humans and surely at least thousands with comparable or superior talent for this type of work, it is correct/necessary for the work to be done by this tiny group
  • ...become less concerned with the epistemic side of rationality -- "how do I know I'm right? how do I become more right than I already am?" -- and more concerned with gaining control and influence, so that the long-term future may be shaped by their own (already-obviously-correct) views
  • ...spend more effort on self-experimenta
... (read more)
Reply
[-]Davis_Kingsley4y*680

I worked for CFAR full-time from 2014 until mid to late 2016, and have worked for CFAR part-time or as a frequent contractor ever since. My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present weren't really the main thing. I do think CFAR has not made as much research progress as I would like, but I think the reasoning for that is much more mundane and less esoteric than the pattern you describe here.

The fact of the matter is that for almost all the time I've been involved with CFAR, there just plain hasn't been a research team. Much of CFAR's focus has been on running workshops and other programs rather than on dedicated work towards extending the art; while there have occasionally been people allocated to research, in practice even these would often end up getting involved in workshop preparation and the like.

To put things another way, I would say it's much less "the full-time researchers are off unproductively experimenting on their own brains in secret" and more "there are no full-time researchers". To the best of my knowledge CFAR has not ever had what I would consider a systematic research and development program -- ... (read more)

Reply
[-]cousin_it4y500

Maybe offtopic, but the "trying too hard to try" part rings very true to me. Been on both sides of it.

The tricky thing about work, I'm realizing more and more, is that you should just work. That's the whole secret. If instead you start thinking how difficult the work is, or how important to the world, or how you need some self-improvement before you can do the work effectively, these thoughts will slow you down and surprisingly often they'll be also completely wrong. It always turns out later that your best work wasn't the one that took the most effort, or felt the most important at the time; you were just having a nose-down busy period, doing a bunch of things, and only the passage of time made clear which of them mattered.

Reply
2[comment deleted]4y
[-]hg004y*220

Does anyone have thoughts about avoiding failure modes of this sort?

Especially in the "least convenient possible world" where some of the bullet points are actually true -- like, if we're disseminating principles for wannabe AI Manhattan Projects, and we're optimizing the principles for the possibility that one of the wannabe AI Manhattan Projects is the real deal, what principles should we disseminate?


Most of my ideas are around "staying grounded" -- spend significant time hanging out with "normies" who don't buy into your worldview, maintain your sense of humor, fully unplug from work at least one day per week, have hobbies outside of work (perhaps optimizing explicitly for escapism in the form of computer games, TV shows, etc.) Possibly live somewhere other than the Bay Area, someplace with fewer alternative lifestyles and a stronger sense of community. (I think Oxford has been compared favorably to Berkeley with regard to presence of homeless people, at least.)

But I'm just guessing, and I encourage others to share their thoughts. Especially people who've observed/experienced mental health crises firsthand -- how could they have been prevented?

EDIT: I'm also curious how to ... (read more)

Reply
[-]romeostevensit4y530

IMO, a large number of mental health professionals simply aren't a good fit for high-intelligence people having philosophical crises. People know this and intuitively avoid the large hassle and expense of sorting through many bad matches. Finding solid people to refer to, who are not otherwise associated with the community in any way, would be helpful.

Reply
[-]Rob Bensinger4y240

I know someone who may be able to help with finding good mental health professionals for those situations; anyone who's reading this is welcome to PM me for contact info.

Reply
[-]ozziegooen4y210

There's an "EA Mental Health Navigator" now to help people connect to the right care.
https://eamentalhealth.wixsite.com/navigator

I don't know how good it is yet. I just emailed them last week, and we set up an appointment for this upcoming Wednesday. I might report back later, as things progress.

Reply
1Zian4y
Unfortunately, by participating in this community (LW/etc.), we've disqualified ourselves from asking Scott to be our doctor (should I call him "Dr. Alexander" when talking about him-as-a-medical-professional while using his alias when he's not in a clinical environment?). I concur with your comment about having trouble finding a good doctor for people like us. p(find a good doctor) is already low and difficult given the small n (also known as the doctor shortage). If you combine p(doctor works well with people like us), the result may rapidly approach epsilon. It seems that the best advice is to make n bigger by seeking care in a place with a large per capita population of the doctors you need. For example, by combining https://nccd.cdc.gov/CKD/detail.aspx?Qnum=Q600 with the US Census ACS 2013 population estimates (https://data.census.gov/cedsci/table?t=Counts,%20Estimates,%20and%20Projections%3APopulation%20Total&g=0100000US%240400000&y=2013&tid=ACSDT1Y2013.B01003&hidePreview=true&tp=true), we see that the following states had >=0.9 primary care doctors per 1,000 people:
* District of Columbia (1.4)
* Vermont (1.1)
* Massachusetts (1.0)
* Maryland (0.9)
* Minnesota (0.9)
* Rhode Island (0.9)
* New York (0.9)
* Connecticut (0.9)
[-]abiggerhammer4y250

Does anyone have thoughts about avoiding failure modes of this sort?

Meredith from Status451 here. I've been through a few psychotic episodes of my own, often with paranoid features, for reasons wholly unrelated to anything being discussed at the object-level here; they're unpleasant enough, both while they're going on and while cleaning up the mess afterward, that I have strong incentives to figure out how to avoid these kinds of failure modes! The patterns I've noticed are, of course, only from my own experience, but maybe relating them will be helpful.

  • Instrumental scrupulousness is a fantastic tool. By "instrumental scrupulousness" I simply mean pointing my scrupulousness at trying to make sure I'm not doing something I can't undo. More or less what you describe in your edit, honestly. As for how much is too much, you absolutely don't want to paralyse yourself into inaction through constantly second-guessing yourself. Real artists ship, after all!
  • Living someplace with good mental health care has been super crucial for me. In my case that's Belgium. I've only had to commit myself once, but it saved my life and was, bizarrely, one of the most autonomy-respecting experiences I've ev
... (read more)
Reply
[-]ChristianKl4y180

I do think that encouraging people to stay in contact with their family and work to have good relationships with them is very useful. Family can provide a form of grounding that small talk with normies while going dancing or pursuing other hobbies doesn't provide.

When deciding whether a personal development group is culty, I think a good test is to ask whether the group's work leads to the average member having better or worse relationships with their parents.

Reply
9Avi4y
I agree, and think it's important to 'stay grounded' in the 'normal world' if you're involved in any sort of intense organization or endeavor. You've made some great suggestions. I would also suggest that having a spouse who preferably isn't too involved (or involved at all), and maybe even some kids, is another commonality among people who find it easier to avoid going too far down these rabbit holes. Also, having a family is positive in countless other ways, and is what I consider part of the 'good life' for most people.
[-]TekhneMakre4y180
It would be both surprising news, and immensely bad news, to learn that only a tiny group of people could (or should) work on such a problem -- that would mean applying vastly less parallel "compute" to the problem, relative to what is theoretically available, and that when the problem is forbiddingly difficult to begin with.  

I have substantial probability on an even worse state: there's *multiple* people or groups of people, *each* of which is *separately* necessary for AGI to go well. Like, metaphorically, your liver, heart, and brain would each be justified in having a "rarity narrative". In other words, yes, the parallel compute is necessary--there's lots of data and ideas and thinking that has to happen--but there's a continuum of how fungible the compute is relative to the problems that need to be solved, and there's plenty of stuff at the "not very fungible but very important" end. Blood is fungible (though you definitely need it), but you can't just lose a heart valve, or your hippocampus, and be fine.

Reply
[-]nostalgebraist4y210

I didn't mention it in the comment, but having a larger pool of researchers is not only useful for doing "ordinary" work in parallel -- it also increases the rate at which your research community discovers and accumulates outlier-level, irreplaceable genius figures of the Euler/Gauss kind.

If there are some such figures already in the community, great, but there are presumably others yet to be discovered.  That their impact is currently potential, not actual, does not make its sacrifice any less damaging.

Reply
5TekhneMakre4y
Yep. (And I'm happy this overall discussion is happening, partly because, assuming rarity narratives are part of what leads to all this destructive psychic stuff as you described, then if a research community wants to work with people about whom rarity narratives would actually be somewhat *true*, the research community has as an important subgoal to figure out how to have true rarity narratives in a non-harmful way.)
[-]Gunnar_Zarncke4y150

Most of these bullet points seem to apply to some degree to every new and risky endeavor ever started. How risky things are is often unclear at the start. Such groups are built from committed people. Small groups develop their own dynamics. Fast growth leads to social growing pains. Lack of success leads to a lot of additional difficulties. Also: evaporative cooling. And if (partial) success happens, even more growth leads to a needed management layer, etc. And later: hindsight bias.

Reply
7Elizabeth4y
Without commenting on the object level, I am really happy to see someone lay this out in terms of patterns that apply to a greater or lesser extent, with correlations but not in lockstep.
-18TAG4y
[-]orthonormal4y*1510

Thank you for writing this, Jessica. First, you've had some miserable experiences in the last several years, and regardless of everything else, those times sound terrifying and awful. You have my deep sympathy.

Regardless of my seeing a large distinction between the Leverage situation and MIRI/CFAR, I agree with Jessica that this is a good time to revisit the safety of various orgs in the rationality/EA space.

I almost perfectly overlapped with Jessica at MIRI from March 2015 to June 2017. (Yes, this uniquely identifies me. Don't use my actual name here anyway, please.) So I think I can speak to a great deal of this.

I'll run down a summary of the specifics first (or at least, the specifics I know enough about to speak meaningfully), and then at the end discuss what I see overall.

Claim: People in and adjacent to MIRI/CFAR manifest major mental health problems, significantly more often than the background rate.

I think this is true; I believe I know two of the first cases to which Jessica refers; and I'm probably not plugged-in enough socially to know the others. And then there's the Ziz catastrophe.

Claim: Eliezer and Nate updated sharply toward shorter timelines, other MIRI researchers... (read more)

Reply
[-]Gunnar_Zarncke4y480

Claim: People in and adjacent to MIRI/CFAR manifest major mental health problems, significantly more often than the background rate.

I think this is true

My main complaint about this and the Leverage post is the lack of base-rate data. How many people develop mental health problems in a) normal companies, b) startups, c) small non-profits, d) cults/sects? So far, all I have seen are two cases. And in the startups I have worked at, I would also have been able to find mental health cases that could be tied to the company narrative. Humans being human narratives get woven. And the internet being the internet, some will get blown out of proportion. That doesn't diminish the personal experience at all. I am updating only slightly on CFAR or MIRI. And basically not at all on "things look better from the outside than from the inside."

Reply
[-]habryka4y840

In particular, I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed depression or anxiety (link). Given the kind of undirected, often low-paid work that many have been doing for the last decade, I think that's the right reference class to draw from, and my current guess is we are roughly at that same level, or slightly below it (which is a crazy high number, and I think should give us a lot of pause).

Reply
[-]Linch4y490

I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed [emphasis mine] depression or anxiety (link)

I'm confused about how you got to this conclusion, and think it is most likely false. Neither your link, nor the linked study, nor the linked meta-analysis in the linked study of your link says this. Instead, the abstract of the linked^3 meta-analysis says:

Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D. students, the pooled estimate of the proportion of students with depression was 0.24 (95% confidence interval [CI], 0.18-0.31; I2 = 98.75%). In a meta-analysis of the nine studies reporting the prevalence of clinically significant symptoms of anxiety across 15,626 students, the estimated proportion of students with anxiety was 0.17 (95% CI, 0.12-0.23; I2 = 98.05%).

Further, the discussion section of the linked^3 study emphasizes:

While validated screening instruments tend to over-identify cases of depression (relative to structured clinical interviews) by approximately a factor of two67,68, our findings nonetheless point to a major public health problem among Ph

... (read more)
Reply
7Gunnar_Zarncke4y
Note that the pooled prevalence is 24% (CI 18-31). But it differs a lot across studies, symptoms, and locations. In the individual studies, the range really is from zero to 50% (or rather to 38% if you exclude a study with only 6 participants). I think a suitable reference class would be the University of California study, which has 3,190 participants and a prevalence of 38%.
[-]Linch4y110

Sorry, am I misunderstanding something? I think taking "clinically significant symptoms", specific to the UC system, as a given is wrong because it did not directly address either of my two criticisms:

1. Clinically significant symptoms =/= clinically diagnosed even in worlds where there is a 1:1 relationship between clinically significant symptoms and would have been clinically diagnosed, as many people do not get diagnosed

2. Clinically significant symptoms do not have a 1:1 relationship with would have been clinically diagnosed.

Reply
4Gunnar_Zarncke4y
Well, I agree that the actual prevalence you have in mind would be roughly half of 38%, i.e. ~20%. That is still much higher than the 12% you arrived at. And either value is so high that it is little surprise that some people had severe episodes within a 5-year frame.
4habryka4y
The UC Berkeley study was the one that I had cached in my mind as generating this number. I will reread it later today to make sure that it's right, but it sure seems like the most relevant reference class, given the same physical location.
7Gunnar_Zarncke4y
I had a look at the situation in Germany and it doesn't look much better. 17% of students are diagnosed with at least one psychical disorder. This is based on the health records of all students insured by one of the largest public health insurers in Germany (about ten percent of the population): https://www.barmer.de/blob/144368/08f7b513fdb6f06703c6e9765ee9375f/data/dl-barmer-arztreport-2018.pdf 
6habryka4y
I feel like the paragraph you cited just seems like the straightforward explanation of where my belief comes from? 24% of Ph.D. students have depression, 17% have anxiety, resulting in something like 30%-40% having one or the other. I did not remember the section about the screening instruments over-identifying cases of depression/anxiety by approximately a factor of two, which definitely cuts down my number, and I should have adjusted it in my above comment. I do think that factor of ~2 maybe makes me think that we are doing a bit worse than grad students, though I am not super sure.
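As a quick sanity check of this arithmetic (assuming, unrealistically, that the two conditions are independent; real comorbidity pushes the union below the independence estimate, toward the lower bound):

```python
# Bounds on P(depression OR anxiety) from the quoted prevalences.
p_dep, p_anx = 0.24, 0.17
p_union_independent = p_dep + p_anx - p_dep * p_anx  # inclusion-exclusion
p_union_upper = p_dep + p_anx                        # no overlap at all
p_union_lower = max(p_dep, p_anx)                    # full overlap
print(round(p_union_independent, 3))  # 0.369
print(round(p_union_upper, 2))        # 0.41
print(round(p_union_lower, 2))        # 0.24
```

So the 30-40% figure corresponds to the upper end of this range, before applying the factor-of-~2 correction for over-identification by screening instruments.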
[-]Linch4y230

Sorry, maybe this is too nitpicky, but clinically significant symptoms =/= clinically diagnosed, even in worlds where the clinically significant symptoms are severe enough to be diagnosed as such.

If you instead said "in population studies, 30-40% of graduate students have anxiety or depression severe enough to be clinically diagnosed as such were they to seek diagnosis", then I think this would be a normal misreading from not jumping through enough links.

Put another way, if someone in mid-2020 told me that they had symptomatic covid and had been formally diagnosed with covid, I would expect that they had worse symptoms than someone who said they had covid symptoms and later tested positive for covid antibodies. This is because jumping through the hoops to get a clinical diagnosis is nontrivial Bayesian evidence of severity, and not just certainty, under most circumstances, and especially when testing is limited and/or gatekept (which was true in many parts of the world for covid in 2020, and is usually true in the US for mental health).

Reply
[-]habryka4y100

Ah, sorry, yes. Me being unclear on that was also bad. The phrasing you give is the one I intended to convey, though I sure didn't do it.

Reply
3Linch4y
Thanks, appreciate the update!
[-]orthonormal4y470

Additionally, as a canary statement: I was also never asked to sign an NDA.

Reply
[-]Vaniver4y200

I think CFAR would be better off if Anna delegated hiring to someone else.

I think Pete did (most of?) the hiring as soon as he became ED, so I think this has been the state of CFAR for a while (while I think Anna has also been able to hire people she wanted to hire).

Reply
[-]PeteMichaud4y100

It's always been a somewhat group-involved process, but yes, I was primarily responsible for hiring from roughly 2016 through the end of 2017; then it would have been Tim. But again, it's a small org, and the whole group was always involved to some degree.

Reply
[-]Eli Tyre4y100

Without denying that it is a small org and staff usually have some input over hiring, that input is usually informal.

My understanding is that in the period when Anna was ED, there was an explicit all-staff discussion when they were considering a hire (after the person had done a trial?). In the Pete era, I'm sure Pete asked for staff members' opinions, and if (for instance) I sent him an email with my thoughts on a potential hire, he would take that info into account, but there was no institutional group meeting.

Reply
[-]Vanessa Kosoy4y140

if one believed somebody else were just as capable of causing AI to be Friendly, clearly one should join their project instead of starting one's own.

Nitpicking: there are reasons to have multiple projects; for example, it's convenient to be in the same geographic location, but not everyone can relocate to any given place.

Reply
4orthonormal4y
Sure - and MIRI/FHI are a decent complement to each other, the latter providing a respectable academic face for weird ideas. Generally, though, it's far more productive to have ten top researchers in the same org than to have five orgs, each with two top researchers and a couple of others to round them out. Geography is a secondary concern to that.
4Vanessa Kosoy4y
A "secondary concern" in the sense that we should work remotely? Or in the sense that everyone should relocate? Because the latter is unrealistic: people have families, friends, communities; not everyone can uproot themselves.
[-]orthonormal4y100

A secondary concern in that it's better to have one org that has some people in different locations, but everyone communicating heavily, than to have two separate organizations.

Reply
[-]Davidmanheim4y120

I think this is much more complex than you're assuming. As a sketch of why: costs of communication scale poorly, and the benefits of being small and coordinating centrally often beat the costs imposed by needing to run everything as one organization. (This is why people advise startups to outsource non-central work.)

Reply
4Vanessa Kosoy4y
This might be the right approach, but notice that no existing AI risk org does that. They all require physical presence.
4novalinium4y
Anthropic does not require consistent physical presence.
1Vanessa Kosoy4y
AFAICT, Anthropic is not an existential AI safety org per se, they're just doing a very particular type of research which might help with existential safety. But also, why do you think they don't require physical presence?
7novalinium4y
If you're asking why I believe that they don't require presence: I've been interviewing with them, and that's my understanding from talking with them. The first line of copy on their website sounds pretty much like a safety org to me.
7Vanessa Kosoy4y
Are you talking about "you can work from home and come to the office occasionally", or "you can live on a different continent"? I found no mention of existential risk on their web page. They seem to be a commercial company aiming at short-to-mid-term applications. I doubt they have any intention of doing e.g. purely theoretical research, especially if it has no applications to modern systems. So, what they do can still be meritorious and relevant to reducing existential risk. But the context of this discussion is: can we replace all AI safety orgs with just one org? And Anthropic is too specialized to serve such a role.
5Vaniver4y
I believe Anthropic doesn't expect its employees to be in the office every day, but I think this is more pandemic-related than it is a deliberate organizational design choice; my guess is that most Anthropic employees will be in the office a year from now.
[-]Eli Tyre4y*1190

[Edit: I want to note that this is represents only a fraction of my overall feelings and views on this whole thing.]

I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases.

I feel some annoyance at this sentence. I appreciate the stated goal of just trying to understand what happened in the different situations, without blaming or trying to evaluate which is worse.

But then the post repeatedly (in every section!) makes reference to Zoe's post, comparing her experience at Leverage to your (and others') experience at MIRI/CFAR, taking specific elements from her account and drawing parallels to your own. This is the main structure of the post!

Some more or less randomly chosen examples (ctrl-f "Leverage" or "Zoe" for lots more):

Zoe begins by listing a number of trauma symptoms she experienced.  I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.

...

Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past event

... (read more)
Reply
[-]Eli Tyre4y*960

This feels especially salient because a number of the specific criticisms, in my opinion, don't hold up to scrutiny, but this is obscured by the comparison to Leverage.

Like for any cultural characteristic X, there will be healthy and unhealthy versions. For instance, there are clearly good healthy versions of "having a culture of self improvement and debugging", and also versions that are harmful. 

For each point, Zoe contends that (at least some parts of) Leverage had a destructive version, and you point out that there was a similar thing at MIRI/CFAR. And for many (but not all) of those points, 1) I agree that there was a similar dynamic at MIRI/CFAR, and also 2) I think that the MIRI/CFAR version was much less harmful than what Zoe describes.

For instance,

Zoe is making the claim that (at least some parts of) Leverage had an unhealthy and destructive culture of debugging. You, Jessica, make the claim that CFAR had a similar culture of debugging, and that this is similarly bad. My current informed impression is that CFAR's self improvement culture both had some toxic elements and is/was also an order of magnitude better than what Zoe describes.

Assuming that for a moment that my ... (read more)

Reply
[-]Eli Tyre4y*470

Ok. After thinking further and talking about it with others, I've changed my mind about the opinion that I expressed in this comment, for two reasons.

1) I think there is some pressure to scapegoat Leverage, by which I mean specifically: "write off Leverage as reprehensible, treat it as 'an org that we all know is bad', and move on, while feeling good about ourselves for not being bad the way that they were".

Pointing out some ways that MIRI or CFAR are similar to Leverage disrupts that process. Anyone who both wants to scapegoat Leverage and also likes MIRI has to contend with some amount of cognitive dissonance. (A person might productively resolve this cognitive dissonance by recognizing what I contend are real disanalogies between the two cases, but they do at least have to contend with it.)

If you mostly want to scapegoat, this is annoying, but I think we should be making it harder, not easier, to scapegoat in this way.

2) My current personal opinion is that the worst things that happened at MIRI or CFAR are not in the same league as what was described as happening in (at least some parts of) Leverage in Zoe's post, both in terms of the deliberateness of the bad dynami... (read more)

Reply
[-]Hazard4y140

I'm not sure what writing this comment felt like for you, but from my view it seems like you've noticed a lot of the dynamics about scapegoating and info-suppression fields that Ben and Jessica have blogged about in the past (and occasionally pointed out in the course of these comments, though less clearly). I'm going to highlight a few things.

I do think that Jessica writing this post will predictably have reputational externalities that I don't like and I think are unjustified. 

Broadly, I think that onlookers not paying much attention would have concluded from Zoe's post that Leverage is a cult that should be excluded from polite society, and, hearing of both Zoe's and Jessica's posts, are likely to conclude that Leverage and MIRI are similarly bad cults.

I totally agree with this. I also think that to the degree to which an "onlooker not paying much attention" concludes this is the degree to which they are habituated to engaging with discussion of wrongdoing as scapegoating games. This seems to be very common (though incredibly damaging) behavior. Scapegoating works on the associative/impressionistic logic of "looks", and Jessica's post certainly makes CFAR/MIRI "look" bad. This... (read more)

Reply
[-]jessicata4y360

I appreciate this comment, especially that you noticed the giant upfront paragraph that's relevant to the discussion :)

One note on reputational risk: I think I took reasonable efforts to reduce it, by emailing a draft to people including Anna Salamon beforehand. Anna Salamon added Matt Graves (Vaniver) to the thread, and they both said they'd be happy with me posting after editing (Matt Graves had a couple specific criticisms of the post). I only posted this on LW, not on my blog or Medium. I didn't promote it on Twitter except to retweet someone who was already tweeting about it. I don't think such reputation risk reduction on my part was morally obligatory (it would be really problematic to require people complaining about X organization to get approval from someone working at that organization), just possibly helpful anyway.

Spending more than this amount of effort managing reputation risks would seriously risk important information not getting published at all, and too little of that info being published would doom the overall ambitious world-saving project by denying it relevant knowledge about itself. I'm not saying I acted optimally, just, I don't see the people complaining about this making a better tradeoff in their own actions or advising specific policies that would improve the tradeoff.

Reply
[-]Eli Tyre4y*230

Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about "HEY, DON'T USE THIS TO SCAPEGOAT"

I think that's literally true, but also the way you wrote this sentence implies that that is unusual or uncommon.

I think that's backwards. If a person was intentionally and deliberately motivated to scapegoat some other person or group, it is an effective rhetorical move to say "I'm not trying to punish them, I just want to talk freely about some harms."

By pretending that you're not attacking the target, you protect yourself somewhat from counter attack. Now you can cause reputational damage, and if people try to punish you for doing that, you can retreat to the Motte of "but I was just trying to talk about what's going on. I specifically said not to punish any one!"

and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.

This also seems too strong to me. I expect that many movement EAs will read Zoe's post and think "well, that's enough information for me to never have anything to do with Geoff or Leverage." This isn't because they're not interested in justice; it's because they don't have the time or the interest to investigate every allegation, so they're using some rough heuristics and policies such as "if something looks sufficiently like a dangerous cult, don't even bother giving it the benefit of the doubt."

Reply
[-]Hazard4y220

When I was drafting my comment, the original version of the text you first quoted was, "Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about 'HEY DON'T USE THIS TO SCAPEGOAT' (which people are totally capable of ignoring)", guess I should have left that in there. I don't think it's uncommon to ignore such disclaimers, I do think it actively opposes behaviors and discourse norms I wish to see in the world.

I agree that putting a "I'm not trying to blame anyone" disclaimer can be a pragmatic rhetorical move for someone attempting to scapegoat. There's an alternate timeline version of Jessica that wrote this post as a well crafted, well defended rhetorical attack, where the literal statements in the post all clearly say "don't fucking scapegoat anyone, you fools" but all the associative and impressionistic "dark implications" (Vaniver's language) say "scapegoat CFAR/MIRI!" I want to draw your attention to the fact that for a potential dark implication to do anything, you need people who can pick up that signal. For it to be an effective rhetorical move, you need a critical mass of people who are well practiced in ignoring literal speech, who understand... (read more)

Reply
2Eli Tyre2y
I retracted this comment because, reading all of my comments here a few years later, I feel much more compelled by my original take than by this addition. I think the addition points out real dynamics, but those dynamics don't take precedence over the dynamics that I expressed in the first place. Those seem higher priority to me.
[-]Vladimir_Nesov4y110

This works as a general warning against awareness of hypotheses that are close to but distinct from the prevailing belief. The goal should be to make this feasible, not to become proficient in noticing the warning signs and keeping away from this.

I think the feeling that this kind of argument is fair is a kind of motivated cognition that's motivated by credence. That is, if a cognitive move (argument, narrative, hypothesis) puts forward something false, there is a temptation to decry it for reasons that would prove too much, that would apply to good cognitive moves just as well if considered in their context, which credence-motivated cognition won't be doing.

Reply
[-]Vanessa Kosoy4y*1130

Full disclosure: I am a MIRI Research Associate. This means that I receive funding from MIRI, but I am not a MIRI employee and I am not privy to its internal operation or secrets.

First of all, I am really sorry you had these horrible experiences.

A few thoughts:

Thought 1: I am not convinced the analogy between Leverage and MIRI/CFAR holds up to scrutiny. I think that Geoff Anders is most likely a bad actor, whereas MIRI/CFAR leadership is probably acting in good faith. There seems to be significantly more evidence of bad faith in Zoe's account than in Jessica's account, and the conclusion is reinforced by adding evidence from other accounts. In addition, MIRI definitely produced some valuable public research whereas I doubt the same can be said of Leverage, although I haven't been following Leverage so I am not confident about the latter (ofc it's in principle possible for a deeply unhealthy organization to produce some good outputs, and good outputs certainly don't excuse abuse of personnel, but I do think good outputs provide some evidence against such abuse).

It is important not to commit the fallacy of gray: it would risk both judging MIRI/CFAR too harshly and judging Leverage in... (read more)

Reply
[-]Dojan4y150

Plus a million points for "IMO it's a reason for less secrecy"!

If you put a lid on something you might contain it in the short term, but only at the cost of increasing the pressure: And pressure wants out, and the higher the pressure the more explosive it will be when it inevitably does come out. 

I have heard too many accounts like this, in person and anecdotally, on the web and off, for me to currently be interested in working with, or even getting too closely involved with, any of the organizations in question. The only way to change this for me is for them to believably cultivate a healthy, transparent and supportive environment.

This made me go back and read "Every Cause wants to be a Cult" (Eliezer, 2007), which includes quotes like this one:
"Here I just want to point out that the worthiness of the Cause does not mean you can spend any less effort in resisting the cult attractor. And that if you can point to current battle lines, it does not mean you confess your Noble Cause unworthy. You might think that if the question were, “Cultish, yes or no?” that you were obliged to answer, “No,” or else betray your beloved Cause."

Reply
[-]ChristianKl4y141

Thought 2: From my experience, AI alignment is a domain of research that intrinsically comes with mental health hazards. First, the possibility of impending doom and the heavy sense of responsibility are sources of stress. Second, research inquiries often enough lead to "weird" metaphysical questions that risk overturning the (justified or unjustified) assumptions we implicitly hold to maintain a sense of safety in life. I think it might be the closest thing in real life to the Lovecraftian notion of "things that are best not to know because they will drive you mad". Third, the sort of people drawn to the area and/or having the necessary talents seem to often also come with mental health issues (I am including myself in this group).

That sounds like MIRI should have a counsellor on its staff.

Reply
[-]philip_b4y240

That would make them more vulnerable to claims that they use organizational mind control on their employees, and at the same time make it more likely that they would actually use it.

Reply
[-]ChristianKl4y190

You would likely hire someone who's traditionally trained, credentialed, and has work experience, instead of doing a bunch of your own psych experiments; likely someone from a tradition like Gestalt therapy that focuses on being nonmanipulative.

Reply
[-]benjamin.j.campbell4y200

There's an easier solution that doesn't run the risk of being or appearing manipulative. You can contract external and independent counsellors and make them available to your staff anonymously. I don't know if there's anything comparable in the US, but in Australia they're referred to as Employee Assistance Programs (EAPs). Nothing you discuss with the counsellor can be disclosed to your workplace, although in rare circumstances there may be mandatory reporting to the police (e.g. if abuse of, or ongoing risk to, a minor is involved).

This also goes a long way toward creating a place where employees can talk about things they're worried will seem crazy in work contexts.

Reply
[-]ChristianKl4y150

Solutions like that might work, but it's worth noting that just having an average therapist likely won't be enough.

If you actually care about a level of security that protects secrets against intelligence agencies, operational security of the office of the therapist is a concern. 

Governments that have security clearances don't want their employees to talk with therapists who don't have the secuirty clearances about classified information.

Talking nonjudgmentally with someone who has reasonable fears that humanity won't survive the next ten years because of fast AI timelines is not easy.

Reply
5jessicata4y
As far as I can tell, normal corporate management is much worse than Leverage. The kind of people from that world will, sometimes when prompted in private conversations, say things like:

  • Standard practice is to treat negotiations with other parties as zero-sum games.
  • "If you look around the table and can't tell who the sucker is, it's you" is a description of a common, relevant social dynamic in corporate meetings.
  • They have PTSD symptoms from working in corporate management, and are very threat-sensitive in general.
  • They learned from experience to treat social reality in general as fake, everything as an act.
  • They learned to accept that "there's no such thing as not being lost", like they've lost the ability to self-locate in a global map (I've experienced losing this to a significant extent).
  • Successful organizations get to be where they are by committing crimes, so copying standard practices from them is copying practices for committing crimes.

This is, to a large extent, them admitting to being bad actors, them and others having been made so by their social context. (This puts the possibility of "Geoff Anders being a bad actor" into perspective.)

MIRI is, despite the problems noted in the post, as far as I can tell the most high-integrity organization doing AI safety research. FHI contributes some, but overall lower-quality, research; Paul Christiano does some relevant research; OpenAI's original mission was actively harmful, and it hasn't done much relevant safety research as far as I can tell. MIRI's public output in the few years since I left has been low, which seems like a bad sign for its future performance, but what it has done so far has been quite a large portion of the relevant research. I'm not particularly worried about scandals sinking the overall non-MIRI AI safety world's reputation, given the degree to which it is of mixed value.
[-]nostalgebraist4y1240

As far as I can tell, normal corporate management is much worse than Leverage

Your original post drew a comparison between MIRI and Leverage, the latter of which has just been singled out for intense criticism.

If I take the quoted sentence literally, you're saying that "MIRI was like Leverage" is a gentler critique than "MIRI is like your regular job"?

If the intended message was "my job was bad, although less bad than the jobs of many people reading this, and instead only about as bad as Leverage Research," why release this criticism on the heels of a post condemning Leverage as an abusive cult?  If you believe the normally-employed among LessWrong readers are being abused by sub-Leverage hellcults, all the time, that seems like quite the buried lede!

Sorry for the intense tone, it's just ... this sentence, if taken seriously, reframes the entire post for me in a big, weird, bad way.

Reply
9jessicata4y
I thought I was pretty clear, at the end of the post, that I wasn't sad that I worked at MIRI instead of Google or academia. I'm glad I left when I did, though.

The conversations I'm mentioning with corporate management types were surprising to me, as were the contents of Moral Mazes and Venkatesh Rao's writing. So "like a regular job" doesn't really communicate the magnitude of the harms to someone who doesn't know how bad normal corporate management is.

It's hard for me to have strong opinions given that I haven't worked in corporate management, though. Maybe a lot of places are pretty okay. I've talked a lot with someone who got pretty high in Google's management hierarchy, who seems really traumatized (and says she is) and who has a lot of physiological problems, which seem overall worse than mine. I wouldn't trade places with her, mental health-wise.

MIRI wouldn't make sense as a project if most regular jobs were fine; people who were really ok wouldn't have reason to build unfriendly AI.

I discussed with some friends the benefits of working at Leverage vs. MIRI vs. the US Marines, and we agreed that Leverage and MIRI were probably overall less problematic, but the fact that the US Marines signal that they're going to dominate/abuse people is an important advantage relative to the alternatives, since it sets expectations more realistically.
[-]Eli Tyre4y810

MIRI wouldn't make sense as a project if most regular jobs were fine; people who were really ok wouldn't have reason to build unfriendly AI.

I just want to note that this is a contentious claim. 

There is a competing story, and one much more commonly held among people who work for or support MIRI, that the world is heading towards an unaligned intelligence explosion due to the combination of a coordination problem and very normal motivated reasoning about the danger posed by lucrative and prestigious projects.

One could make the claim that "healthy" people (whatever that means) wouldn't exhibit those behaviors, i.e., that they would be able to coordinate and avoid rationalizing. But that's a non-standard view.

I would prefer that you specifically flag it as a non-standard view, and then either make the argument for that view over the more common one, or highlight that you're not going into detail on the argument and that you don't expect others to accept the claim.

As it is, it feels a little like this is being slipped in as if it is a commonly accepted premise.  

Reply
[-]jessicata4y160

I agree this is a non-standard view.

Reply
-2Dr_Manhattan4y
Yes, I would! Any pointers?  (to avoid miscommunication I'm reading this to say that people are more likely to build UFAI because of traumatizing environment vs. normal reasons Eli mentioned)
[-]Vaniver4y400

Note that there's an important distinction between "corporate management" and "corporate employment"--the thing where you say "yeesh, I'm glad I'm not a manager at Google" is substantially different from the thing where you say "yeesh, I'm glad I'm not a programmer at Google", and the audience here has many more programmers than managers.

[And also Vanessa's experience matches my impressions, tho I've spent less time in industry.]

[EDIT: I also thought it was clear that you meant this more as a "this is what MIRI was like" than "MIRI was unusually bad", but I also think this means you're open to nostalgebraist's objection, that you're ordering things pretty differently from how people might naively order them.]

Reply
[-]iceman4y310

My experience was that if you were T-5 (Senior), you had some overlap with PM and management games, and at T-6 (Staff), you were often in them. I could not handle the politics to get to T-7. Programmers below T-5 are expected to earn promotions or to leave.

Google's a big company, so it might have been different elsewhere internally. My time at Google certainly traumatized me, but probably not to the point of anything in this or the Leverage thread.

Reply
[-]jefftk4y*280

Programmers below T-5 are expected to earn promotions or to leave.

This changed something like five years ago [edit: August 2017], so that people at level four (one level above new grad) no longer needed to get promoted to stay long term.

Reply
9RobertM4y
I think maybe a bit of the confusion here is nostalgebraist reading "corporate management" to mean something like "a regular job in industry", whereas you're pointing at "middle- or upper-management in sufficiently large or maze-like organizations"? Because those seem very different to me, and I could imagine the second being much worse for people's mental health than the first.

Separately, I'm confused about the claim that "people who were really ok wouldn't have reason to build unfriendly AI"; it sounds like you don't agree with the idea that UFAI is the default outcome of building AGI without a specific effort to make it friendly? (This is probably a distraction from this thread's subject, but I'd be interested to read your thoughts on that if you've written them up somewhere.)
[-]jessicata4y150

I think maybe a bit of the confusion here is nostalgebraist reading “corporate management” to mean something like “a regular job in industry”, whereas you’re pointing at “middle- or upper-management in sufficiently large or maze-like organizations”?

Yes, that seems likely. I did some internships at Google as a software engineer and they didn't seem better than working at MIRI on average, although they had less intense psychological effects, as things didn't break out into fractal betrayal during the time I was there.

Separately I’m confused about the claim that “people who were really ok wouldn’t have reason to build unfriendly AI”

People might think they "have to be productive", which points at increasing automation detached from human value, which points towards UFAI. Alternatively, they might think there isn't a need to maximize productivity, and that they can do things that would benefit their own values, which wouldn't include UFAI. (I acknowledge there could be coordination problems where selfish behavior leads to cutting corners, but I don't think that's the main driver of existential-risk failure modes.)

Reply
[-]Vanessa Kosoy4y730

I worked for 16 years in the industry, including management positions, including (briefly) having my own startup. I talked to many, many people who worked in many companies, including people who had their own startups and some with successful exits.

The industry is certainly not a rose garden. I encountered people who were selfish, unscrupulous, megalomaniac or just foolish. I've seen lies, manipulation, intrigue and plain incompetence. But, I also encountered people who were honest, idealistic, hardworking and talented. I've seen teams trying their best to build something actually useful for some corner of the world. And, it's pretty hard to avoid reality checks when you need to deliver a real product for real customers (although some companies do manage to just get more and more investments without delivering anything until the eventual crash).

I honestly think most of them are not nearly as bad as Leverage.

Reply
[-]PhoenixFriend4y*1080

[Deleted]

Reply
[-]Duncan Sabien (Inactive)4y*1290

Trying to do a cooperative, substantive reply.  Seems like openness and straightforwardness are the best way here.

I found the above to be a mix of surprising and believable.  I was at CFAR full-time from Oct 2015 to Oct 2018, and in charge of the mainline workshops specifically for about the last two of those three years.

At least four people

This surprises me.  I don't know what the bar for "worked in some capacity with the CFAR/MIRI team" is.  For instance, while at CFAR, I had very little attention on the comings-and-goings at MIRI, a much larger organization, and also CFAR had a habit of using five or ten volunteers at a time for workshops, month in and month out.  So this could be intended to convey something like "out of the 500 people closest to both orgs."  If it's meant to imply "four people who would have worked for more than 20 hours directly with Duncan during his three years at CFAR," then I am completely at a loss; I can't think of any such person who I am aware had a psychotic break.

Psychedelic use was common among the leadership

This also surprises me.  I do not recall ever either directly encountering or hearing open discussions of p... (read more)

Reply
[-]TekhneMakre4y110
Like, I want to agree wholeheartedly with the poster's distaste for the described situation, separate from my ability to evaluate whether it took place.

As a general dynamic, no idea if it was happening here but just to have as a hypothesis, sometimes people selectively follow rules of behavior around people that they expect will seriously disapprove of the behavior. This can be well-intentioned, e.g. simply coming from not wanting to harm people by doing things around them that they don't like, but could have the unfortunate effect of producing selected reporting: you don't complain about something if you're fine with it or if you don't see it, so the only reports we get are from people who changed their mind (or have some reason to complain about something they don't actually think is bad). (Also flagging that this is a sort of paranoid hypothesis; IDK how the world is on this dimension, but the Litany of Gendlin seems appropriate. Also it's by nature harder to test, and therefore prone to the problems that untestable hypotheses have.)

Reply
[-]Duncan Sabien (Inactive)4y*600

This literally happened with Brent; my current model is that I was (EDIT: quite possibly unconsciously/reflexively/non-deliberately) cultivated as a shield by Brent, in that he much-more-consistently-than-one-would-expect-by-random-chance happened to never grossly misbehave in my sight, and other people, assuming I knew lots of things I didn't, never just told me about gross misbehaviors that they had witnessed firsthand.

Reply
2TekhneMakre4y
Damn.
9TekhneMakre4y
The two stories here fit consistently in a world where Duncan feels less social pressure than others including Phoenix, so that Duncan observes people seeming to act freely but Molochianly, and they experience network-effect social pressure (which looks Molochian, but is maybe best thought of as a separate sort of thing).
[-]Eli Tyre4y*1040

I worked for CFAR from 2016 to 2020, and am still somewhat involved.

This description does not reflect my personal experience at all. 

And speaking from my view of the organization more generally (not just my direct personal experience): Several bullet points seem flatly false to me. Many of the bullet points have some grain of truth to them, in the sense that they refer to or touch on real things that happened at the org, but then depart wildly from my understanding of events, or (according to me) mischaracterize / distort things severely.

I could go through and respond in more detail, point by point, if that is really necessary, but I would prefer not to do that, since it seems like a lot of exhausting work.

As a sort of free sample / downpayment: 

  • At least four people who did not listen to Michael's pitch about societal corruption and worked in some capacity with the CFAR/MIRI team had psychotic episodes.

I don't know who this is referring to. To my knowledge 0 people who are or have been staff at CFAR had a psychotic episode either during or after working at CFAR.

  • Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutiona
... (read more)
Reply
[-]Duncan Sabien (Inactive)4y200

I endorse Eli's commentary.

Reply
[-]AnnaSalamon4y*900

Thank you for adding your detailed take/observations.

My own take on some of the details of CFAR that’re discussed in your comment:

Debugging sessions with Anna and with other members of the leadership was nigh unavoidable and asymmetric, meaning that while the leadership could avoid getting debugged it was almost impossible to do so as a rank-and-file member. Sometimes Anna described her process as "implanting an engine of desperation" within the people she was debugging deeply. This obviously had lots of ill psychological effects on the people involved, but some of them did seem to find a deeper kind of motivation.

I think there were serious problems here, though our estimates of the frequencies might differ. To describe the overall situation in detail:

  • I often got debugging help from other members of CFAR, but, as noted in the quote, it was voluntary. I picked when and about what and did not feel pressure to do so.
  • I can think of at least three people at CFAR who had a lot of debugging sort of forced on them (visibly expected as part of their job set-up or of check-in meetings or similar; they didn’t make clear complaints but that is still “sort of forced”), in ways that were
... (read more)
Reply
[-]AnnaSalamon4y940

Related to my reply to PhoenixFriend (in the parent comment), but hopping meta from it:

I have a question for whoever out there thinks they know how the etiquette of this kind of conversation should go. I had a first draft of my reply to PhoenixFriend, where I … basically tried to err on the side of being welcoming, looking for and affirming the elements of truth I could hear in what PhoenixFriend had written, and sort of emphasizing those elements more than my also-real disagreements. I ran it by a CFAR colleague at my colleague’s request, who said something like “look, I think your reply is pretty misleading; you should be louder and clearer about the ways your best guess about what happened differed from what’s described in PhoenixFriend’s comment. Especially since I and others at CFAR have our names on the organization too, so if you phrase things in ways that’ll cause strangers who’re skim-reading to guess that things at CFAR were worse than they were, you’ll inaccurately and unjustly mess with other peoples’ reputations too.” (Paraphrased.)

So then I went back and made my comments more disagreeable and full of details about where my and PhoenixFriend’s models differ. (Thoug... (read more)

Reply
6TekhneMakre4y
This sounds like an extreme and surprising statement. I wrote out some clarifying questions like "what do you mean by privacy here", but maybe it'd be better to just say: I think it strikes me funny because it sounds sort of like a PR statement. And it sounds like a statement that could set up a sort of "iterations of the Matrix"-like effect. Where, you say "ok now I want to clear out all the miasma, for real", and then you and your collaborators do a pretty good job at that; but also, something's been lost or never gained, namely the logical common knowledge that there's probably-ongoing, probably difficult to see dynamics that give rise to the miasma of {ungrounded shared narrative, information cascades, collective blindspots, deferrals, circular deferrals, misplaced/miscalibrated trust, etc. ??}. In other words, since these things happened in a context where you and your collaborators were already using reflection, introspection, reasoning, communication, etc., we learn that the ongoing accumulation of miasma is a more permanent state of affairs, and this should be common knowledge. Common knowledge would for example help with people being able to bring up information about these dynamics, and expect their information to be put to good use. (I notice an analogy between iterations of the Matrix and economic boom-bust cycles.) These statements also seem to imply a framing that potentially has the (presumably unintentional) effect of subtly undermining the common knowledge of ongoing miasma-or-whatever. Like, it sort of directs attention to the content but not the generator, or something; like, one could go through all the "stuff" and then one would be done.
6AnnaSalamon4y
Well, maybe I phrased it poorly; I don't think what I'm doing is extreme; "much" is doing a bunch of work in my "I am not much trying to..." sentence. I mean, there's plenty I don't want to share, like a normal person. I have confidential info of other people's that I'm committed to not sharing, and plenty of my own stuff that I am private about for whatever reason. But in terms of rough structural properties of my mind, or most of my beliefs, I'm not much trying for privacy. Like when I imagine being in a context where a bunch of circling is happening or something (circling allows silence/ignoring questions/etc.; still, people sometimes complain that facial expressions leak through and they don't know how to avoid it), I'm not personally like "I need my privacy though." And I've updated some toward sharing more compared to what I used to do.
3TekhneMakre4y
Ok, thanks for clarifying. (To reiterate my later point, since it sounds like you're considering the "narrative pyramid schemes" hypothesis: I think there is not common knowledge that narrative pyramid schemes happen, and that common knowledge might help people continuously and across contexts share more information, especially information that is pulling against the pyramid schemes, by giving them more of a true expectation that they'll be heard by a something-maximizing person rather than a narrative-executer).
5Duncan Sabien (Inactive)4y
I have concrete thoughts about the specific etiquette of such conversations (they're not off the cuff; I've been thinking more-or-less continuously about this sort of thing for about eight years now). However, I'm going to hold off for a bit because: a) Like Anna, I was a part of the dynamics surrounding PhoenixFriend's experience, and so I don't want to seize the reins b) I've also had a hard time coordinating with Anna on conversational norms and practices, both while at CFAR and recently ... so I sort of want to not-pretend-I-don't-have-models-and-opinions-here (I do) but also do something like "wait several days and let other people propose things first" or "wait until directly asked, having made it clear that I have thoughts if people want them" or something.
7Beckeck4y
link to the essay if/when you write it? 
[-]Duncan Sabien (Inactive)4y230

I endorse Anna's commentary.

Reply
[-]Viliam4y120

Goal-Factoring was first called “use fungibility”, a technique I taught within a class called “microeconomics 1” at the CFAR 2012 minicamps prior to Geoff doing any teaching.

As a participant of Rationality Minicamp in 2012, I confirm this. Actually, found the old textbook, look here!

Reply
[-]AnnaSalamon4y160

Okay, so, that old textbook does not look like a picture of goal-factoring, at least not on that page. But I typed "goal-factoring" into my Google Drive and pulled up these old notes that used the word while designing classes for the 2012 minicamps. A rabbithole, but one I enjoyed, so maybe others will.

Reply
[-]Davis_Kingsley4y660

I worked for CFAR full-time from 2014 until mid-to-late 2016 and have continued working as a part-time employee or frequent contractor since. I'm sorry this was your experience. That said, it really does not mesh that much with what I've experienced and some of it is almost the opposite of the impressions that I got. Some brief examples:

  • My experience was that CFAR if anything should have used its techniques internally much more. Double crux for instance felt like it should have been used internally far more than it actually was -- one thing that vexed me about CFAR was a sense that there were persistent unresolved major strategic disagreements between staff members that the organization did not seem to prioritize resolving, where I think double crux would have helped.

    (I'm not talking about personal disagreements but rather things like "should X set of classes be in the workshop or not?")
  • Similarly, goal factoring didn't see much internal use (I again think it should have been used more!) and Leverage-style "charting" strikes me as really a very different thing from the way CFAR used this sort of stuff.
  • There was generally little internal "debugging" at all, which contrary to the prev
... (read more)
Reply
[-]Adam Scholl4y*570

I've worked at CFAR for most of the last 5 years, and this comment strikes me as so wildly incorrect and misleading that I have trouble believing it was in fact written by a current CFAR employee. Would you be willing to verify your identity with some mutually-trusted 3rd party, who can confirm your report here? Ben Pace has offered to do this for people in the past.

Reply
[-]jessicata4y*200

I don't know if you trust me, but I confirmed privately that this person is a past or present CFAR employee.

Reply
[-]Adam Scholl4y250

Sure, but they led with "I'm a CFAR employee," which suggests they are a CFAR employee. Is this true?

Reply
[-]Unreal4y240

It sounds like they meant they used to work at CFAR, not that they currently do. 

Also given the very small number of people who work at CFAR currently, it would be very hard for this person to retain anonymity with that qualifier so... 

I think it's safe to assume they were a past employee... but they should probably update their comment to make that clearer because I was also perplexed by their specific phrasing. 

Reply
[-]steven04614y510

It sounds like they meant they used to work at CFAR, not that they currently do.

The interpretation of "I'm a CFAR employee commenting anonymously to avoid retribution" as "I'm not a CFAR employee, but used to be one" seems to me to be sufficiently strained and non-obvious that we should infer from the commenter's choice not to use clearer language that they should be treated as having deliberately intended for readers to believe that they're a current CFAR employee.

Reply
[-]Adam Scholl4y*240

I like the local discourse norm of erring on the side of assuming good faith, but like steven0461, in this case I have trouble believing this was misleading by accident. Given how obviously false, or at least seriously misleading, many of these claims are (as I think accurately described by Anna/Duncan/Eli), my lead hypothesis is that this post was written by a former staff member, who was posing as a current staff member to make the critique seem more damning/informed, who had some ax to grind and was willing to engage in deception to get it ground, or something like that...?

Reply
[-]PeterMcCluskey4y160

It seems misleading in a non-accidental way, but it seems fairly plausible that their main motive was to obscure their identity.

Reply
[-]Raemon4y150

FYI I just interpreted it to mean "former staff member" automatically. (This is biased by my belief that CFAR has very few current staff members so of course it was highly unlikely to be one, but I don't think it was an unreasonably weird reading)

Reply
8jessicata4y
PhoenixFriend edited the comment.
[-]jimrandomh4y260

Relatedly, the organization uses a technique called goal factoring during debugging which was in large part inspired by Geoff Anders' Connection Theory and was actually taught by Geoff at CFAR workshops at some point. This means that CFAR debugging in many ways resembles Leverage's debugging and the similarity in naming isn't just a coincidence of terms.

While it's true that there's some structural similarity between Goal Factoring and Connection Theory, and Geoff did teach Goal Factoring at some workshops (including one I attended), these techniques are more different than they are similar. In particular, goal factoring is taught as a solo technique for introspecting on what you want in a specific area, while Connection Theory is a therapy-like technique in which a facilitator tries to comprehensively catalog someone's values across multiple sessions going 10+ hours.

Reply
[-]Duncan Sabien (Inactive)4y130

Thanks for this reply, Jim; I winced a bit at my own "no resemblance whatsoever" and your comment is clearer and more accurate.

Reply
[-]Aella4y220

I don't have an object-level opinion formed on this yet, but want to +1 this as more of the kind of description I find interesting, and isn't subject to the same critiques I had with the original post.

Reply
[-]Scott Alexander4y180

Thanks for this.

I'm interested in figuring out more what's going on here - how do you feel about emailing me, hashing out the privacy issues, and, if we can get them hashed out, you telling me the four people you're thinking of who had psychotic episodes?

Reply
[-]Scott Alexander3y951

Update: I interviewed many of the people involved and feel like I understand the situation better.

My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history, or took recreational drugs at doses that would explain their psychotic episodes.

Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeable at the moment they become psychotic. But aside from one case where he recommended someone take a drug that made a bad situation slightly worse, and the general Berkeley rationalist scene that he (and I and everyone else here) is a part of having lots of crazy ideas that are psychologically stressful, I no longer think he is a major cause.

While interviewing the people involved, I did get some additional reasons to worry that he uses cult-y high-pressure recruitment tactics on people he wants things from, in ways that make me continue to be nervous about the effect he *could* have on people. But the original claim I made that I k... (read more)

Reply
[-]iceman3y450

I want to summarize what's happened from the point of view of a long time MIRI donor and supporter:

My primary takeaway from the original post was that MIRI/CFAR had cultish social dynamics, that this led to the spread of short AI timelines in excess of the evidence, and that voices such as Vassar's were marginalized (because listening to other arguments would cause them to "downvote Eliezer in his head"). The actually important parts of this whole story are a) the rationalistic health of these organizations, and b) the (possibly improper) memetic spread of the short-timelines narrative.

It has been months since the OP, but my recollection is that Jessica posted this memoir and got a ton of upvotes; then you posted your comment claiming that being around Vassar induced psychosis, the karma on Jessica's post dropped by half, and your comment that Vassar had magical psychosis-inducing powers is currently sitting at almost five and a half times the karma of the OP. At this point, things became mostly derailed into psychodrama about Vassar, drugs, whether transgender people have higher rates of psychosis, et cetera, instead of discussion about the health of these organizations and how short ... (read more)

Reply
[-]Ben Pace3y120

Thanks so much for talking to the folks involved and writing this note on your conclusions, I really appreciate that someone did this (who I trust to actually try to find out what happened and report their conclusions accurately).

Reply
1Richard_Kennaway3y
... This does not contradict "Michael making people psychotic". A bad therapist is not excused by the fact that his patients were already sick when they came to him. Disclaimer: I do not know any of the people involved and have had no personal dealings with any of them.
[-]Vladimir_Nesov4y*110

outsiders as "normies"

I've seen the term used a few times on LW. Despite the denotational usefulness, it's very hard to keep it from connotationally being a slur, not without something like there being an existing slur and the new term getting defined to be its denotational non-slur counterpart (how it actually sounds also doesn't help).

So it's a good principle to not give it power by using it (at least in public).

Reply
8Unreal4y
You contributing to this conversation seems good, PhoenixFriend. Thanks for saying your piece. 
2jessicata4y
I remember someone who lived in Berkeley in 2016-2017, who wasn't a CFAR employee but was definitely talking extensively with CFAR people (collaborating on rationality techniques/instruction?) and had gone to a CFAR workshop, telling me something along the lines of "CFAR can't legally recommend that people try LSD, but...". I don't remember what followed the "but"; I don't think the specific wording was even intended to be remembered (to preserve plausible deniability?), but it gave me the impression that CFAR people might have recommended it if it were legal to do so, as implied by the "but". This was before I was talking with Michael Vassar extensively. This is some amount of Bayesian evidence for the above.
[-]Adam Scholl4y*100

It's true some CFAR staff have used psychedelics, and I'm sure they've sometimes mentioned that in private conversation. But CFAR as an institution never advocated psychedelic use, and that wasn't just because it was illegal, it was because (and our mentorship and instructor trainings emphasize this) psychedelics often harm people.

Reply
3Unreal4y
I'd be interested in hearing from someone who was around CFAR in the first few years to double check that the same norm was in place. I wasn't around before 2015. 
2Benquo4y
I had significant involvement with CFAR 2014-2015 and this is consistent with my impression.
[-]Davis_Kingsley4y*270

What does "significant involvement" mean here? I worked for CFAR full-time during that period and to the best of my knowledge you did not work there -- I believe for some of that time you were dating someone who worked there, is that what you mean by significant involvement?

Reply
[-]Benquo4y100

I remember being a "guest instructor" at one workshop, and talking about curriculum design with Anna and Kenzi. I was also at a lot of official and unofficial CFAR retreats/workshops/etc. I don't think I participated in much of the normal/official CFAR process, though I did attend the "train the trainers workshop", and in this range of contexts saw some of how decisions were made, how workshops were run, how people related to each other at parties.

As I recall it, what I observed first-hand and was told second-hand at the time confirms bullets 2, 4, and 6 of the top-level comment. Many of the others are about how people felt, and are consistent with what people I knew reported at the time. Nothing in the top-level comment seems dissonant with what I observed.

It seems like there was a lot of fragmentation (which is why we mostly didn't interact). I felt bad about exercising (a small amount of) unaccountable influence at the time through these mechanisms, but I was confused about so much relative to the rate at which I was willing to ask questions that I didn't end up asking about the info-siloing. In hindsight it seems intended to keep the true nature of governance obscure and theref... (read more)

Reply
[-]Eli Tyre4y270

As I recall it, what I observed first-hand and was told second-hand at the time confirms bullets 2, 4, and 6 of the top-level comment.

I would like a lot more elaboration about this, if you can give it. 

Can you say more specifically what you observed?

Reply
[-]Davis_Kingsley4y260

Unfortunately I think the working relationship between Anna and Kenzi was exceptionally bad in some ways and I would definitely believe that someone who mostly observed that would assume the organization had some of these problems; however I think this was also a relatively unique situation within the organization.

(I suspect though am not certain that both Anna and Kenzi would affirm that indeed this was an especially bad dynamic.)

With respect to point 2, I do not believe there was major peer pressure at CFAR to use psychedelics and I have never used psychedelics myself. It's possible that there was major peer pressure on other people or it applied to me but I was oblivious to it or whatever, but I'd be surprised.

Psychedelic use was also one of a few things that were heavily discouraged (or maybe banned?) as conversation topics for staff at workshops -- like polyphasic sleep (another heavily discouraged topic), psychedelics were I believe viewed as potentially destabilizing and inappropriate to recommend to participants, plus there are legal issues involved. I personally consider recreational use of psychedelics to be immoral as well.

My comment initially said 2014-2016 but IIRC my involvement was much less after 2015 so I edited it.


Thanks for the clarification, I've edited mine too.

Reply
2Benquo4y
What do you see as the main sorts of interventions CFAR was organized around? I feel like this is a "different worlds" thing where I ought to be pretty curious what the whole scene looked like to you, what it seemed like people were up to, what the important activities were, & where progress was being made (or attempted).
[-]Davis_Kingsley4y*200

I think that CFAR, at least while I was there full-time from 2014 to sometime in 2016, was heavily focused on running workshops or other programs (like the alumni reunions or the MIRI Summer Fellows program). See for instance my comment here.

Most of what the organization was doing seemed to involve planning and executing workshops or other programs and teaching the existing curriculum. There were some developments and advancements to the curriculum, but they often came from the workshops or something around them (like followups) rather than a systematic development project. For example, Kenzi once took on the lion's share of workshop followups for a time, which led to her coming up with new curriculum based on her sense of what the followup participants were missing even after having attended the workshop.

(In the time before I joined there had been significantly more testing of curriculum etc. outside of workshops, but this seemed to have become less the thing by the time I was there.)

A lot of CFAR's internal focus was on improving operations capacity. There was at one time a narrative that the staff was currently unable to do some of the longer-term development because too much ti... (read more)

Reply
2[comment deleted]4y
[-]CronoDAS4y1070

One takeaway I got from this when combined with some other stuff I've read:

Don't do psychedelics. Seriously, they can fuck up your head pretty bad and people who take them and organizations that encourage taking them often end up drifting further and further away from normality and reasonableness until they end up in Cloudcuckooland.

Reply
[-]Eliezer Yudkowsky4y*1982

I'm about ready to propose a group norm against having any subgroups or leaders who tell other people they should take psychedelics.  Maybe they have individually motivated uses - though I get the impression that this is, at best, a high-variance bet with significantly negative expectation.  But the track record of "rationalist-adjacent" subgroups that push the practice internally and would-be leaders who suggest to other people that they do them seems just way too bad.

I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc.  I still think it's not our community business to try to socially prohibit things like that on an individual level by exiling individuals like that from parties, I don't think we have or should have that kind of power over individual behaviors that neither pick pockets nor break legs.  But I think that when there's anything like a subgroup or a leader with those properties we need to be ready to say, "Yeah, that's not a group in good standing with the rest of us, don't go there."  Th... (read more)

Reply
[-]Rob Bensinger4y1480

Copying over a related Oct. 13-17 conversation from Facebook:

(context: someone posted a dating ad in a rationalist space where they said they like tarot etc., and rationalists objected)

_____________________________________________

Marie La:  As a cultural side note, most of my woo knowledge (like how to read tarot) has come from the rationalist community, and I wouldn't have learned it otherwise

_____________________________________________

Eliezer Yudkowsky:  @Marie La   Any ideas how we can stop that?

(+1 from Rob B)

_____________________________________________

Marie La:  Idk, it's an introspective technique that works for some people. Doesn't particularly work for me. Sounds like the concern is bad optics / PR rather than efficacy

(+1 from Rob B)

_____________________________________________

Shaked Koplewitz:  @Marie La   optics implies that the concern is with the impression it makes on outsiders, my concern here is the effect on insiders (arguably this is optics too, but a non-central example)

_____________________________________________

Rob Bensinger:  If the concern is optics, either to insiders or outsiders, then it seems vastly weaker to me than i... (read more)

Reply
[-]ioannes4y230


Jim Babcock's stance here is the most sensible one I've seen in this thread:


My own impression is that the effect of LSD is not primarily a regression to the mean thing, but rather, that it temporarily enables some self-modification capabilities, which can be powerfully positive but which require a high degree of sanity and care to operate safely.

...

Meanwhile nearly everyone has been exposed to extremely unsubtle and substantially false anti-drug propaganda, which fails to survive contact with reality. So it's unfortunate but also unsurprising that the how-much-caution pendulum in their heads winds up swinging too far to the other side. The ideal messaging imo would leave most people feeling like planning an acid trip is more work than they personally will get around to, plus mild disdain towards impulsive usage and corner-cutting.

Reply
[-]Vaniver4y110

Somehow this reminds me of the time I did a Tarot reading for someone, whose only previous experience had been Brent Dill doing a Tarot reading, and they were... sort of shocked at the difference. (I prefer three card layouts with a simple context where both people think carefully about what each of the cards could mean; I've never seen his, but the impression I got was way more showmanship.)

Reply
[-]Gunnar_Zarncke4y13-2

If it works as a device to facilitate sub-conscious associations, then maybe an alternative should be designed that sheds the mystical baggage and comes with clear explanations of why and how it works. 

Reply
2jefftk4y
I'm generally very anti-woo, but I expect presenting it clearly and without baggage would make it stop working because the participant would be in a different mental state.
9Gunnar_Zarncke4y
Well, if that is true then that would be another avenue to research mental states. Something that is clearly needed. But what I really wanted to say: You shouldn't do it if you can't formulate hypotheses and do experiments for it.
[-]Viliam4y*520

Thank you for saying this!

I wonder where the line will be drawn with regards to the { meditation, Buddhism, post-rationality, David Chapman, etc. } cluster. On one hand, meditation -- when done without all the baggage, hypothetically -- seems like a useful tool. On the other hand, it simply invites all that baggage, because that is in the books, in the practicing communities, etc. Also, Christianity is an outgroup, but Buddhism is a fargroup, so people seem less averse to religious connotations; in my opinion, it's just a different flavor of the same poison. Buddhism is sometimes advertised as a kind of evidence-based philosophy, but then you read the books and they discuss the supernatural and describe the miracles done by Buddha. Plus the insights into your previous lives, into the ultimate nature of reality (my 200 Hz brain sees the quantum physics, yeah), etc.

Also, somewhat ironically...

Marcello and I developed a convention in our AI work: when we ran into something we didn’t understand, which was often, we would say “magic”—as in, “X magically does Y”—to remind ourselves that here was an unsolved problem, a gap in our understanding. It is far better to say “magic” than “compl

... (read more)
Reply
[-]Holly_Elmore4y*810

Western Buddhism tends to be more of a bag of wellness tricks than a religion, but it’s worth sharing that Buddhism proper is anti-life. It came out of a Hindu obsession with ending the cycle of reincarnation. Nirvana means “cessation.” The whole idea of meditation is to become tolerant of signals to action so you can let them pass without doing the things that replicate them or, ultimately, propagate any life-like process. Karma is described as a giant wheel that powers reincarnation and gains momentum whenever you act unconsciously. The goal is for the wheel to stop moving and the way is to unlearn your habit of kicking it. When the Buddha became enlightened under the Bodhi tree, it wasn’t actually complete enlightenment. He was “enlightened with residues”— he stopped making new karma but he was still burning off old karma. He achieved actual cessation when he died. To be straight up enlightened, you stop living. The whole project of enlightenment is to end life.

It’s a sinister and empty philosophy, IMO. A lot of the insights and tools are great but the thrust of (at least Theravada) Buddhism is my enemy.

Reply
[-]Rob Bensinger4y170

I agree this is pretty sinister and empty. Traditional samsara includes some pretty danged nice places (the heavens), not just things that have Earth-like quantities or qualities of flourishing; so rejecting all of that sounds very anti-life.

 Some complicating factors:

  • It's not clear (to put it lightly) what parinirvana (post-death nirvana / escape from samsara) entails. Some early Buddhists seem to have thought of it as more like oblivion/cessation; others seem to have thought of it as more like perfectly blissful experience.

(Obviously, this becomes more anti-life when you get rid of supernaturalism -- then the only alternative to 'samsara' is oblivion. But the modern Buddhist can retreat to various mottes about what 'nirvana' is, such as embracing living nirvana (sopadhishesa-nirvana) while rejecting parinirvana.)

  • The Buddhists have a weird psychological theory according to which living in samsara inherently sucks. Liking or enjoying things is really just another species of bad.

The latter view is still pretty anti-life, but notably, it's a psychological claim ('this is what it's really like to experience things'), not a normative claim that we should reject life a priori. If a Buddhist updates away from thinking everything is dukkha, they aren't necessarily required to reject life anymore -- the life-rejection was contingent on the psych theory.

Reply
[-]Kaj_Sotala4y190

There are also versions of the psychological theory in which dukkha is not associated with all motivation, just the craving-based system, which is in a sense "extra"; it's a layer on top of the primary motivation system, which would continue to operate even if all craving was eliminated. Under that model (which I think is the closest to being true), you could (in principle) just eliminate the unpleasant parts of human motivation, while keeping the ones that don't create suffering - and probably get humans who were far more alive as a result, since they would be far more willing to do even painful things if pain no longer caused them suffering. 

Pain would still be a disincentive in the same way that a reinforcement learner would generally choose to take actions that brought about positive rather than negative reward, but it would make it easier for people to voluntarily choose to experience a certain amount of pain in exchange for better achieving their values afterwards, for instance.

Reply
1MondSemmel4y
Related to this (?) is the notion that 'wanting' and 'liking' are separate systems. For instance, from a random paper: In this perspective, a philosophy can say that 'wanting' is psychologically unhealthy while 'liking' is fine. I'm not sure if this is what Buddhists actually believe, but it is how I've interpreted notions like "desire leads to suffering", "letting go", "ego death", etc.
3Kaj_Sotala4y
There's that, but I think it would also be misleading to say that (all) Buddhists consider desire/wanting to be bad! (Though to be clear, it does seem like some of them do.) I liked this article's take on the issue.
8Rafael Harth4y
I don't think this is true, at leas not insofar as it describes the original philosophy. You may be thinking about the first noble truth "The truth of Dukkha", but Dukkha is not correctly translated as suffering. A better translation is "unsatisfactoriness". For example, even positive sensations are Dukkha, according to the Buddha. I think the intention of the first noble truth is to say that worldly sensations, positive and negative, are inherently unsatisfactory. The Buddha has also said pretty explicitly that a great happiness can be achieved through the noble path, which seems to directly contradict the idea that life inherently sucks, and that suffering can be overcome. (However, there may be things he's said that support the quote; I'm definitely not claiming to have a full or even representative view.)
8Rob Bensinger4y
From https://www.lionsroar.com/forum-understanding-dukkha/: Followed by: I could buy that early Buddhists were using a word that basically meant 'suffering' or 'pain' metaphorically, but what's the argument that this wasn't the original word meaning at all? (I'm not a specialist on this topic, I'm just wary of 'rationalizing' tendencies for modern readers to try to retranslate concepts in ways that make them sound more obvious/intuitive/modern.) If you think great happiness can be achieved through the Noble Path and you should leave samsara anyway, that's an even more extreme anti-life position, because you're rejecting the best life has to offer. I do agree that Buddhism claims you can get tons of great conventional bliss-states on the road to nirvana (see also the potential to reincarnate in the various heavens); but then it rejects those too, modulo the complications I noted in my upthread comment.
3Rafael Harth4y
I 100% grant that you can find people, including Buddhist scholars, who will translate dukkha that way. I would generally trust Wikipedia to get a reasonable consensus on this, but in this case, it is also inconsistent, e.g. this quote from the article about Buddhism backs up what I just said, but from the article about dukkha: I guess I have a strong opinion on this much like someone could have a strong opinion on what the bible says about abortion even if there are scholars on both sides. My main point is that [the idea that there is a path to overcome suffering in this life] is * not * a western invention. The Buddha may have also talked about rebirth and karma and stuff, but he has made this much clear at several points in pretty direct language, and he even talked about lasting happiness that can be achieved through the noble path. (I know he e.g. endorsed the claim that this kind of happiness has "no drawbacks"). Bottom line, I think it requires a very tortured reading of his statements to reconcile this with the idea that life on earth is necessarily negative well-being. There's also the apparent contradiction in just the noble truths ("the truth of dukkha", "the origin of dukkha", "the end of dukkha", "the path to the end of dukkha") because (1) is usually phrased as "dukkha is an inherent part of the world", which would then contradict (3), unless you read (3) as only referring to the end via escaping the cycle of rebirth (which again I don't think can be reconciled with what the Buddha actually said). It's annoying, but you have to read dukkha as referring to different things if you want to make sense of this. Agreed. (And I would agree that this is more than enough reason not to defend original Buddhism as a philosophy without picking and choosing.)
3Holly_Elmore4y
It makes sense to me to use dukkha as "unsatisfactoriness" because it emphasizes that the issue is resisting the way things are or needing things to be different. 
4Rob Bensinger4y
I think it makes Buddhism higher-probability to translate dukkha that way. This on its own doesn't immediately make me confident that the original doctrines had that in mind. For that, I'd want to hear more from Pāli experts writing articles that discuss standard meanings for dukkha at the time, and asks questions like "If by 'dukkha' early Buddhists just meant 'not totally satisfactory', then why did they choose that word (apparently mainly used for physical pain...?) rather than some clearer term? Were there no clearer options available?"
4Kaj_Sotala4y
Note that Wikipedia gives the word's etymology as being something that actually does seem pretty analogous to 'not totally satisfactory'. As I heard one meditation teacher put it, the modern analogy to this would be if you had one of those shopping carts where one of the wheels is stuck and doesn't quite go the way you'd like it - doesn't exactly kill you or cause you enormous suffering, but it's not a totally satisfactory shopping cart experience, either. (Leigh Brasington also has a fun take.)
[-]Rob Bensinger4y*130

I find arguments by etymology almost maximally unconvincing here, unless dukkha was a neologism? Like, those arguments make me update away from your conclusion, because they seem so not-of-the-correct-type. Normally, word etymologies are a very poor guide to meaning compared to looking at usage -- what do other sources actually mean when they say "dukkha" in totally ordinary contexts?

There's a massive tradition across many cultures of making sophistical arguments about words' 'true' or 'real' meaning based on (real or imagined) etymologies. This is even dicier when the etymology is as vague/uninformative as this one -- there are many different ways you can spin 'bad axle hole' to give exactly opposite glosses of dukkha.

I still don't find this 100% convincing/exacting, but the following account at least doesn't raise immediate alarm bells for me:

According to Pali-English Dictionary, dukkha (Sk. duḥkha) means unpleasant, painful, causing misery.[4] [...]

The other meaning of the word dukkha, given in Venerable Nyanatiloka written Buddhist Dictionary, is “ill”. As the first of the Four Noble Truths and the second of the three characteristics of existence (tilakkhaṇa), the term

... (read more)
Reply
3Holly_Elmore4y
I'm willing to believe, based on the totality of the Buddha's message, that he meant dukkha as "resisting how things are/wanting them to be different," i.e. being unsatisfied with reality. Look at our own word "suffering" in English. Today it connotes anguish, but it also means "enduring" or "putting up with." A word like "unsatisfied" in English has a mild connotation, but we could also say something like "tormented by desire" to ramp up the intensity without fundamentally changing the meaning. 
2Slider4y
I think even in current english there is an idiom for pain. Ie ""It pains me that I don't have food" vs "I am hungry". One variant of the claims is that there is way to be food-poor that is positive "It delights me that I don't have food" or just "I don't have food".
2Holly_Elmore4y
I think it would pretty hard to translate words like “annoying,” “irritating,” etc to a very foreign audience without making reference to physical pain. It’s hard to infer connotations or intensity when looking at those older writings.
[-]romeostevensit4y110

The set of metaphors that have come to the west are dominated by the early transmission of Buddhism which occurred in the late 1800's, and was carried out by Sanskrit scholars translating from Sanskrit sources. The Buddha specifically warned people against translating his teachings into Sanskrit for pretty much the sorts of reasons being passed off as genuine Buddhism here.

Reply
1ioannes4y
Quora for the curious: Did the Buddha forbid the translation of his teachings into Sanskrit? If so, did he mention why? From my quick skim of those answers, it looks like he was more concerned about accessibility of the teachings rather than issues of interpretation.
8Kaj_Sotala4y
I'm willing to grant that there are certain interpretations of Buddhism that take this view, but object pretty strongly to depicting it as the idea of meditation. Especially since there are many different varieties of meditation, with varying degrees of (in)compatibility with this goal; something like loving-kindness or shi-ne meditation seems more appropriate for creating activity, for instance. In my view, there are so many varieties and interpretations of Buddhism that pointing to some of them having an anti-life view always seems like a weird sleight of hand to me. By saying that Buddhism originates as an anti-life practice, one can then imply that all of its practices also tend to lead towards that goal, without needing to establish that that's actually the case. After all, just because some of the people who developed such techniques wanted to create an anti-life practice doesn't mean that they actually succeeded in developing techniques that would be particularly well-suited for this goal. I agree that it's possible to use them for such a goal, especially if they're taught in the context of an ideology that frames the practice that way, but I don't think they're very effective for that goal even then.
[-]Holly_Elmore4y210

I think if rationalists are interested in Buddhism as part of their quest to find truth, they should know that it has, at the very least, deathist origins. 

Reply
2Kaj_Sotala4y
I agree that it's valuable to be aware of the life-denying aspects of the tradition, since those mindsets do affect some teachings of it and it's good to be able to notice them and filter them out rather than accidentally absorbing them. I do however object to characterizing "Buddhism proper" as anti-life, as it implies that any proper attempt to delve into or practice Buddhism will eventually just lead you into deathism.
4Unreal4y
This view is disputed and countered in the original texts. It is worth it to me to mention this, but I am not the right one to go into details. 
2Rob Bensinger4y
Some good (mainstream, scholarly) books on nirvana and historical Buddhism: * Steven Collins, Nirvana: Concept, Imagery, Narrative Excerpt (starting p. 69):
6Unreal4y
This section seems to say it well, highlighted bits in bold for easier reading.  There is nothing pro-"nonexistence" in Buddhism. There is nothing pro-"ending or annihilating life." These takes are explicitly rejected in the Pali canon.  It is very easy to misunderstand what Buddhism is saying, and the inferential gap is larger than I think most people imagine. The words / phrases do not have direct translations into common English. 
5Said Achmiz4y
When someone claims something to be “beyond designation” or “beyond categorization” or any such thing, it’s a sure bet that they’re trying to slip one by you; in fact, the given thing belongs to a category which, if you recognized that membership, would lead you to reject it—and rightly.
[-]Rob Bensinger4y160

I think this is not true in full generality -- I think meditation does give people insights that are hard to verbalize, and does make some common verbal distinctions feel less joint-carving, so it makes sense for a tradition of meditators to say a lot in favor of 'things that are hard to verbalize' and 'things that can't be neatly carved up in the normal intuitive ways'.

I do think that once you have those insights, there's a strong temptation to lapse into sophistry or doublethink to defend whatever silly thing you feel like defending that day -- if someone doubts your claim that the Buddha lives on like a god or ghost after death, you can say that the Buddha's existence-status after death transcends concepts and verbalization.

When in fact the honest thing to say if you believed in immaterial souls would be 'I don't know what happened to the Buddha when he died', and the honest thing to say if you're an educated modern person is 'the Buddha was totally annihilated when he died, the exact same as anyone else who dies.'

Reply
5Said Achmiz4y
What would the world look like if meditation only made people feel like they had insights that were hard to verbalize, without actually giving them any new insights? (But also, “thing X is beyond designation” and “some fact(s) about thing X are hard to verbalize” are not the same thing.)
[-]Kaj_Sotala4y190

If the world was one where meditation only made people feel like they had insights that were hard to verbalize, then I probably wouldn't have figured out ways to verbalize some of them (mostly due to having knowledge of neuroscience etc. stuff that most historical Buddhists haven't had).

Reply
3Holly_Elmore4y
I admire koan practice in Zen as an attempt to make sure people are reaching genuine insights without being able to fully capture them in explicit words. 
2Said Achmiz4y
Can you say more about this? I don’t think I quite follow.
2Holly_Elmore4y
Koans are “riddles” that are supposed to only be understandable by “insight,” a non-cognitive form of knowledge attained by entering “don’t know mind.” Meditating on koans “confuses the rational mind” so that it is easier to enter “don’t know mind.” Koan training consists of being given a koan by a master (the first one I ever received was “what is the meaning of [smacks hand into ground]?”), letting the koan confuse you and relaxing into that feeling, letting go of all the thoughts that try to explain, and then one day having the answer pop into your awareness (some schools have people concentrate on the koan, others say to just create the conditions for insight and it will come). If you explain your insight to a master and they think you’ve figured it out (they often say “used up”) that koan, they give you a new one that’s even further from everyday thinking. And so it continues until you’ve gone through enough of the hundreds of koans in that lineage. It’s a cool system because “getting” your koan is an objectively observable indicator of progress at meditation, which is otherwise quite difficult to assess.
5Said Achmiz4y
Ok, but how exactly does “make sure people are reaching genuine insights”? Are there canonical correct answers to koans? (But that would seem to violate the “without being able to fully capture them in explicit words” clause…) In other words, how do you know when you’ve correctly understood a koan? (When an answer pops into your awareness, how do you know it’s the right one?) And, what does it mean to correctly understand a koan? (What’s the difference between correctly understanding a koan and incorrectly understanding it?) Could you elaborate on this? I am confused by this point.
[-]Richard_Kennaway4y120

Are there canonical correct answers to koans?

"The Sound of One Hand: 281 Zen Koans with Answers"

Reply
1Holly_Elmore4y
Masters have an oral tradition of assessing the answers to koans and whether they reflect genuine insight. They use the answers people give to guide their future training. Having used up a few koans, I’d say the answers come to you pretty clearly. You get to a certain point in meditation and the koan suddenly makes sense in light of that.
2Said Achmiz4y
By what means do the masters assess whether the answers reflect “genuine insight”? Is there a way for a non-master to evaluate whether a given answer to a koan is correct, or to show that the ostensibly-correct answer is correct? (Analogously to P vs. NP—if the correct answer is difficult to determine, is it nonetheless straightforward to verify?) If the answer to the previous question is “no”, then how is one to know whether the ostensibly-correct answer is, in fact, actually correct?
3Holly_Elmore4y
It’s not really a question of factually correct. The koan is designed to make sense on a non-cognitive, non-rational level. My experience was that I would have a certain insight on my own when I was meditating and then I would realize that that’s what the koan was talking about. What makes a good koan is that you’re totally stumped when you first hear it, but when it clicks you know that’s the right answer. That’s why one English translation is “riddle.” Some riddles have correct answers according to the terms they lay out, but really what makes a riddle is the recognition of a lateral thinking move, even if it’s as simple as a pun. Koans are “riddles” that require don’t-know mind.
2Said Achmiz4y
What is the content of whatever “insight” or “sense” it is that’s gained when you “get the right answer” to a koan? I do not see what it could mean to say that one has gained such an insight… Some questions: 1. Does it ever happen that someone “gets” a koan—it “clicks” for them, and they “know” that the answer they’ve got is “the right answer”—but actually, their answer differs from the canonically “correct” answer? 2. Alternatively: does it ever happen that two different people both “get” a koan—it “clicks” for them both—but their answers differ? 3. Do Zen teachers/masters ever disagree on what the “right” answer to a koan is? If so—how do they resolve this disagreement? 4. Suppose I were to say to a Zen teacher: you say the answer to this koan is X, but I think it is actually Y. Please demonstrate to me that it is as you say, and not as I say. How might they do this?
4Rob Bensinger4y
* Paul J. Griffiths, On Being Buddha: The Classical Doctrine of Buddhahood Excerpt (starting p. 155):
[-]Rob Bensinger4y560

Regarding meditation, Kevin Fischer reported a surprising-to-me anecdote on FB yesterday:

I had one conversation with Soryu [the head of Monastic Academy / MAPLE] at a small party once. I mentioned that my feeling about meditation is that it’s really good for everyone when done for 15 minutes a day, and when done for much more than that forever, it’s much more complicated and sometimes harmful.

He straightforwardly agreed, and said he provides the environment for long term dedication to meditation because there is a market demand for that product. 🤷

[-]Matt Goldenberg4y110

He straightforwardly agreed, and said he provides the environment for long term dedication to meditation because there is a market demand for that product. 🤷

 

FWIW as a resident of MAPLE, my sense is Soryu believes something like:

"Smaller periods of meditation will help you relax/focus and probably have only a very small risk of harm. Larger/longer periods of meditation come with deeper risks of harm,  but are also probably necessary to achieve awakening, which is important for the good of the world." 

 

But I am a newer resident and could easily be misunderstanding here.

3ioannes4y
The correspondent's reply here is helpful color on how things can get more complicated (e.g. shifts in how you perceive the actions/intentions of yourself & others) and sometimes harmful (e.g. extended stays in Dark Night).
[-]wunan4y*340

On one hand, meditation -- when done without all the baggage, hypothetically -- seems like a useful tool. On the other hand, it simply invites all that baggage, because that is in the books, in the practicing communities, etc.

 

I think meditation should be treated similarly to psychedelics -- even for meditators who don't think of it in terms of anything supernatural, it can still have very large and unpredictable effects on the mind. The more extreme the style of meditation (e.g. silent retreats), the more likely this sort of thing is.

Any subgroups heavily using meditation seem likely to have the same problems as the ones Eliezer identified for psychedelics/woo/supernaturalism.

4Gunnar_Zarncke4y
I have pointed out the risks of meditation and meditation-like practices before. The last time was on the Shoulder Advisors which does seem to fall on the boundary. I have experience with meditation and have been to extended silent meditation retreats with only positive results. Nonetheless, bad trips are possible - esp. without a supportive teacher and/or community.  But I wouldn't make a norm against groups fostering meditation. Meditation depends on groups for support (though the same might be said about psychedelics). Meditation is also a known way to gain high levels of introspective awareness and to have many mental health benefits (many posts about that on LW I'm too lazy to find). The group norm about these things should be to require oversight by a Living Tradition of Knowledge in the relevant area (for meditation e.g. an established - maybe even Buddhist - meditation school).
2Kenny4y
Psychedelics, woo, and meditation are very separate stuff. They are often used in conjunction with each other due to popularity and the context some of these things are discussed along with each other. Buddhism has incorporated meditation into its woo while other religions have mostly focused on group based services in terms of talking about their woos. I like how some commenters have grouped psychedelics and meditation separate of the woo stuff, but it was a bit surprising to me to see Eliezer dismissing psychedelics along with woo in the same statements. He probably hasn't taken psychedelics before. Meditation is quite different as in it's more of a state of mind as opposed to an altered mentality. With psychedelics there is a clear distinction between when you are tripping and when you aren't tripping. With meditation, it's not so clear when you are meditating and when you aren't. Woo is just putting certain ideas into words, which has nothing to do with different mindset/mentalities.
2Laszlo_Treszkai4y
However, according to some, even meditation done properly can have negative effects, which would be similar to psychedelics but manifesting slower and through your own effort. Quoted from the book review:
0Kenny4y
I don't think I was advocating for either. I apologize if I came off as saying people should try psychedelics and meditation.
[-]Tomás B.4y260

Even in the case of Sam Harris, who seems relatively normal, he lost a decade of his life pursuing “enlightenment” through meditation - also notable is that this was spurred on by psychedelic use. Though I am sure he would not agree with the frame that it was a waste, I read his *Waking Up* as a bit of a horror story. For someone without his high IQ and indulgent parents, you could imagine more horrible ends.

I know of at least one person who was bright, had wild ambitious ideas, and now spends his time isolated from his family inwardly pursuing “enlightenment.” And this through the standard meditation + psychedelics combination. I find it hard to read this as anything other than wire-heading, and I think a good social norm would be one where we consider such behavior as about as virtuous as obsessive masturbation.

In general, for any drug that produces euphoria, especially spiritual euphoria, the user develops an almost romantic relationship with their drug, as the feelings they inspire are just as intense (and sometimes more so) as familial love.  One should at least be slightly suspicious of the benefits propounded by their users, who in many cases literally worship their drugs of choice. 

[-]Aella4y840

fwiw as a data point here, I spent some time inwardly pursuing "enlightenment" with heavy and frequent doses of psychedelics for a period of 10 months and consider this to be one of the best things I've ever done. I believe it raised my resting set point happiness, among other good things, and I am still deeply altered (7 years later).

I do not think this is a good idea for everyone and lots of people who try would end up worse off. But I strongly object to this being seen as virtuous as obsessive masturbation. Sure, it might not be your thing, but this frame seriously misses a huge amount of really important changes in my experience. And I get you might think I'm... brainwashed or something? by drugs? So I don't know what I could say that would convince you otherwise.

But I did have concrete things, like solving a pretty big section of childhood trauma (like; I had a burning feeling of rage in my chest before, and the burning feeling was gone afterwards), I had multiple other people comment on how different I was now (usually in regards to laughing easier and seeming more relaxed), I lost my anxiety around dying, my relationship to pain altered in such a way that I am significantly ... (read more)

[-]Duncan Sabien (Inactive)4y280

In my culture, it's easy to look at "what happens at the ends of the bell curves" and "where's the middle of the bell curve" and "how tight vs. spread out is the bell curve (i.e. how different are the ends from the middle)" and "are there multiple peaks in the bell curves" and all of that, separately.

Like, +1 for the above, and I join the above in giving a reminder that rounding things off to "thing bad" or "thing good" is not just not required, it's actively unhelpful.

Policies often have to have a clear answer, such as the "blanket ban" policy that Eliezer is considering proposing.  But the black-or-white threshold of a policy should not be confused with the complicated thing underneath being evaluated.

[-]Tomás B.4y250

And I get you might think I'm... brainwashed or something? by drugs?

I'm not sure what you find implausible about that. Drugs do not literally propagandize the user, but many can hijack the reward system, and in the case of psychedelics they seem to alter beliefs in reliable ways. Psychedelics are also taken in a memetic context with many crystallized notions about what the psychedelic experience is, what enlightenment is, and that enlightenment itself is a mysterious but worthy pursuit.

The classic joke about psychedelics is they provide the feelings associated with profound insights without the actual profound insights. To the extent this is true, I feel this is pretty dangerous territory for a rationalist to tread.  

In your own case, unless I am misremembering, I believe on your blog you discuss LSD permanently lowering your mathematical abilities and degrading your memory. This seems really, really bad to me…

Maybe this one is less concrete, but some part of me feels really deeply at peace, always, like it knows everything is going to be ok and I didn't have that before.

I’m glad your anxiety is gone, but I don't think everything is going to be alright by default. I would not like to modify myself to think that. It seems clearly untrue. 

Perhaps the masturbation line was going too far.  But the gloss of virtue that “seeking enlightenment” has strikes me as undeserved. 

[-]Aella4y200

Also fwiw, I took psychedelics in a relatively memetic-free environment. I'd been homeschooled and not exposed to hippie/drug culture, and especially not significant discussion around enlightenment. I consider this to be one of the reasons my experience was so successful; I didn't have it in relationship to those memes, and did not view myself as pursuing enlightenment (I know I said I was inwardly pursuing enlightenment in my above comment, but I was mostly riffing off your phrasing; in some sense I think it was true but it wasn't a conscious thing.)

LSD did not permanently lower my mathematical abilities, and if I suggested that I probably misspoke? I suspect it damaged my memory, though; my memory is worse now than before I took LSD. 

And sorry; by 'everything being ok' I didn't mean that I literally think that situation will end up being the ones I want; I mean that I know I will be okay with whatever happens. Very related to my endurance of pain going up by quite a lot, and my anxiety of death disappearing.

Separately, I do think that a lot of the memes around psychedelics are... incomplete? It's hard to find a good word. Naive? Something around the difference between the aesthetic of a thing and the thing itself? And in that I might agree with you somewhere that "seeking enlightenment" isn't... virtuous or whatever.

5ChristianKl4y
7 years is a long time, and most people's memory gets worse as they age. Was it also significantly worse directly after the 10 months of you being on that quest than before those 10 months?
4Tomás B.4y
Thanks. Corrected; I probably conflated the two. But my feelings toward that change are the same, so the line otherwise remains unchanged. I should probably organize my opinions/feelings on this topic and write an effortpost or something rather than hash it out in the comments.
1Thoth Hermes2y
This is an interesting class of opinions; I wonder if believing the following: is at all correlated with also having this belief: "Everything is not going to be alright by default" is sort of a vague belief to have, so is it worth having? I don't think this is necessarily either an anomalous belief nor a common-place belief. Admittedly, I have a hard time figuring out how I would modify myself to have this belief. I guess I am not that way by nature, but others can be. It would be interesting to find out what accounts for that difference. Ultimately, if it's more of an axiomatic belief, it would require a lot of argument about what kinds of other beliefs it leads to that are more beneficial for one to use over their lifetimes.  About the profound insights, the way to check to see if they are actually profound is: 1. Can it be articulated? 2. Can you explain it in further detail from subsequent experiences? 3. Does it remain with you even once the psychedelics or the "elevated" experience has worn off? From personal experience, there are insights you can have which satisfy all three. I think lessened anxiety (which will be accompanied with reasons, though too long for this comment) is one of them. 
[-]Rafael Harth4y170

Even in the case of Sam Harris, who seems relatively normal, he lost a decade of his life pursuing “enlightenment” through meditation

What kind of a cost-benefit analysis is this?

If you start from the assumption that something isn't useful, of course spending time on that thing is a waste. As far as I can see, this is the totality of your argument. You can do this for just about anyone, e.g.:

Even in the case of Scott Garrabrant who seems relatively normal, he lost a decade of his life pursuing "AI alignment" through the use of mathematics.

I happen to think that Scott did amazing work at Miri, but objectively speaking, it is significantly harder to justify his time spent doing research at Miri than that of Sam Harris pursuing enlightenment in India. Sam has released the Waking Up app, which is effectively a small company making a ton of money, donating 10% of its income to the most effective charities (arguably that alone is more than enough to pay for one decade of Sam's time) and has thousands of people reporting enormous psychological benefits. I'm one of them; in terms of productivity alone, I'd say my time working has increased by at least 20% and has gotten at least 10% ... (read more)

5steven04614y
I wonder what the rationalist community would be like if, instead of having been forced to shape itself around risks of future superintelligent AI in the Bay Area, it had been artificial computing superhardware in Taiwan, or artificial superfracking in North Dakota, or artificial shipping supercontainers in Singapore, or something. (Hypothetically, let's say the risks and opportunities of these technologies were equally great and equally technically and philosophically complex as those of AI in our universe.)
[-]Elizabeth4y*490

No greater sign that Eliezer isn't leading a cult than that my first reaction to this was "pfft, good luck", even when I misread it as "we should shame individuals for doing these things Elizabeth finds valuable" and not the more reasonable "leaders pushing this are suspect"

[-]ioannes4y320

Big +1.

Really important to disambiguate the two:

"People shouldn't do psychedelics" is highly debatable and has to argue against a lot of research demonstrating their efficacy for improving mental wellness and treating psychiatric disorders.

"Leaders & subgroups shouldn't push psychedelics on their followers" seems straightforwardly correct.

[-]ChristianKl4y110

I haven't taken any psychedelics myself. I have the impression that best practice with LSD is not to take it alone but to have someone skillful as a trip sitter. I imagine having a fellow rationalist as a trip sitter is much better than having some new-agey person with sketchy epistemics.

[-]Vaniver4y280

I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc.

Hmm. I can't tell if the second half is supposed to be pointing at my position on Tarot, or the thing that's pretending to be my position but is actually confused?

Like, I think the hitrate for 'woo' is pretty low, and so I spend less time dredging there than I do other places which are more promising, but also I am not ashamed of the things that I've noticed that do seem like hits there. Like, I haven't delivered on my IOU to explain 'authenticity' yet, but I think Circling is actually a step above practices that look superficially similar in a way we could understand rigorously, even if Circling is in a reference class that is quite high in woo, and many Circlers like the flavor of woo.

That said, I could also see an argument that's like "look, we really have to implement rules like this at a very simple level or they will get bent to hell, and it's higher EV to not let in woo."

[-]Holly_Elmore4y170

Would it be acceptable to regard practices like self-reflective tarot and circling and other woo-adjacent stuff as art rather than an attempt at rationality? I think it is a danger sign when people are claiming those highly introspective and personal activities as part of their aspiring to rationality. Can we just do art and personal emotional and creative discovery and not claim that it’s directly part of the rationalist project?

[-]Vaniver4y170

I mean, I also do things that I would consider 'art' that I think are distinct from rationality. But, like, just like I wouldn't really consider 'meditation' an art project instead of 'inner work' or 'learning how to think' or w/e, I wouldn't really consider Circling an art project instead of those things.

[-]Holly_Elmore4y120

I would consider meditation and circling to have the same relationship to “discovering the truth” as art. The insights can be real and profound but are less rigorous and much more personal.

5Benquo4y
I think we need more than two categories here. We can't allocate credit for input, only output. People can learn things by carefully observing stuff, but we shouldn't get to mint social capital as rationalists for hours meditating any more than Darwin's reputation should depend directly on time spent thinking about tortoises. Discerning investors might profit by recognizing leading indicators of high productivity, but that only works if incentives are aligned, which means, eventually, objective tests. In hindsight it seems very unfortunate that MIRI was not mostly funded by relevant-to-its-expertise prediction markets. Good art seems like it should make people sexy, not credible.
[-]ChristianKl4y271

Instead of declaring group norms, I think it would be worth it to have posts that actually lay out the case in a convincing manner. In general there are plenty of contrarian rationalists for whom "it's a group norm" is not enough reason to not do something. Declaring a norm against drugs might just make them more secretive about it, which is bad.

Trying to solve issues about people doing the wrong things with group norms instead of with deep arguments doesn't seem to be the rationalist way.

6Gunnar_Zarncke4y
Can you propose a norm that avoids the pitfalls?
[-]ChristianKl4y130

Have the important conversations about why you shouldn't take drugs / engage in woo openly on LessWrong, instead of having them only privately where they don't reach many people. Then confront people who suggest something in that direction with those posts.

-1TekhneMakre4y
+1
[-]Unreal4y160

I feel tempted to mostly agree with Eliezer here... 

Umm To relay a trad Buddhist perspective, you're not (traditionally) supposed to make a full-blown attempt for 'enlightenment' or 'insight' until you've spent a fairly extensive time working on personal ethics & discipline. I think an unnamed additional step is to establish your basic needs, like good community, health, food, shelter, etc. It's also recommended that you avoid drugs, alcohol, and even sex. 

There's also an important sense I get from trad Buddhism, which is: If you hold a nihilistic view, things will go sideways. A subtle example of nihilism is the sense that "It doesn't matter what I do or think because it's relatively inconsequential in the scheme of things, so whatever." or a deeper hidden sense of "It doesn't really matter if everyone dies." or "I feel it might be better if I just stopped existing?" or "I can think whatever I want inside my own head, including extensive montages of murder and rape, because it doesn't really affect anything." 

These views seem not uncommon among modern people, and subtler forms seem very common. Afaict from reading biographies, modern people have more trouble wit... (read more)

[-]Duncan Sabien (Inactive)4y160

There are some potential details that might swing one way or the other (Vaniver's comment points at some), but as-written above, and to the best of my ability to predict what such a proposal would actually look like once Eliezer had put effort into it:

I expect I would wholeheartedly and publicly endorse it, and be a signatory/adopter.

2Chris_Leong4y
I guess I'd suggest thinking about targets carefully. A lot of people are going to experiment with psychedelics anyway and it's safer for people to do so within a group, assuming the group is actually trustworthy and not attempting to brainwash people.
1ToasterLightning5mo
Did you ever end up doing this? I think this is a good idea.
[-]Kaj_Sotala4y670

OTOH a significant amount of (seemingly sane) people credit psychedelics for important personal insights and mental health/trauma healing. Psychedelics seem to be showing enough promise for that for the psychiatric establishment to be getting interested in them again [1, 2] despite them having been stigmatized for decades, and AFAIK the existing medical literature generally finds them to be low-risk [3, 4].

[-]ioannes4y130

It's interesting that a lot of the discussion about psychedelics here is arguing from intuitions and personal experience, rather than from the trial results that have been coming out. 

I do think that psychedelic experiences vary a lot from person-to-person and trip-to-trip, and that psychedelics aren't for everyone. (This variability probably isn't fully captured by the trial results because study participants are carefully screened for lots of factors that may be contraindicated.)

[-]Avi4y*120

Psilocybin-based psychedelics are indeed considered low-risk both in terms of addiction and overdose. This chart sums things up nicely, and is a good thing to 'pin on your mental fridge':

https://upload.wikimedia.org/wikipedia/commons/thumb/a/a5/Drug_danger_and_dependence.svg/1920px-Drug_danger_and_dependence.svg.png

You want to stay as close as possible to the bottom left corner of that graph!

[-][anonymous]4y170

This graph shows death and addiction potential, but it doesn't say anything about sanity.

3Avi4y
Correct - but they are low-risk for those factors (addiction and/or overdose).
-7[anonymous]4y
[-]iceman4y270

I want to second this. I worked for an organization where one of the key support people took psychedelics and just...broke from reality. This was both a personal crisis for him and an organizational crisis for the company, which had to deal with the sudden departure of a bus-factor-1 employee.

I suspect that psychedelic damage happens more often than we think because there's a whole lobby which buys the expand-your-mind narrative.

[-]jessicata4y180

I don't regret having used psychedelics, though I understand why people might take what I've written as a reason not to try psychedelics.

[-]CronoDAS4y110

The most horrific case I know of LSD being involved in a group's downward spiral from weird and kinda messed up to completely disconnected from reality and really fucking scary is the Manson family, but that's far from a typical example. But if you do want to be a cult leader, LSD does seem to do something that makes the job a lot easier.

[-]rationalistthrowaway4y1010

(Note: I feel nervous posting this under my own name, in part because my Dad is considering transitioning at the moment and I worry he'd read it as implying some hurtful thing I don't mean, but I do want to declare the conflict of interest that I work at CFAR or MIRI).

The large majority of folks described in the OP as experiencing psychosis are transgender. Given the extremely high base rate of mental illness in this demographic, my guess is this is more explanatorily relevant than the fact that they interacted with rationalist institutions or memes. 

I do think the memes around here can be unusually destabilizing. I have personally experienced significant psychological distress thinking about s-risk scenarios, for example, and it feels easy to imagine how this distress could have morphed into something serious if I'd started with worse mental health. 

But if we're exploring analogies between what happened at Leverage and these rationalist social circles, it strikes me as relevant to ask why each of these folks were experiencing poor mental health. My impression from reading Zoe's writeup is that she thinks her poor mental health resulted from memes/policies/conversations t... (read more)

[-]Benquo4y380

As I understand it you're saying:

At Leverage people were mainly harmed by people threatening them, whether intentionally or not. By contrast, in the MIRICFAR social cluster, people were mainly harmed by plausible upsetting ideas. (Implausible ideas that aren't also threats couldn't harm someone because there's no perceived incentive to believe them.)

An example of a threat is Roko's Basilisk. An example of an upsetting plausible idea was the idea in early 2020 that there was going to be a huge pandemic soon. Serious attempts were made to suppress the former meme and promote the latter.

If someone threatens me I am likely to become upset. If someone informs me about something bad, I am also likely to become upset. Psychotic breaks are often a way of getting upset about one's prior situation. People who transition genders are also usually responding to something in their prior situation that they were upset about.

Sometimes people get upset in productive ways. When Justin Shovelain called me to tell me that there was going to be a giant pandemic, I called up some friends and talked through self-quarantine thresholds, resulting in this blog post. Later, some friends and I did some other... (read more)

[-]jessicata4y150

The large majority of folks described in the OP as experiencing psychosis are transgender.

That would be, arguably, 3 of the 4 cases of psychosis I knew about (if Zack Davis is included as transgender) and not the case of jail time I knew about. So 60% total. [EDIT: See PhoneixFriend's comment, there were 4 cases who weren't talking with Michael and who probably also weren't trans (although that's unknown); obviously my own knowledge is limited to my own social circle and people including me weren't accounting for this in statistical inference]

My impression from reading Zoe’s writeup is that she thinks her poor mental health resulted from memes/policies/conversations that were at best accidentally mindfucky, and often intentionally abusive and manipulative.

In contrast, my impression of what happened in these rationalist social circles is more like “friends or colleagues earnestly introduced people (who happened to be drawn from a population with unusually high rates of mental illness) to upsetting plausible ideas.”

These don't seem like mutually exclusive categories? Like, "upsetting plausible ideas" would be "memes" and "conversations" that could include things like AI p... (read more)

[-]Viliam4y270

exiting would increase social isolation (increasing social dependence on a small number of people), which is a known risk factor

If exiting makes you socially isolated, it means that (before exiting) all/most of your contacts were within the group.

That suggests that the safest way to exit is to gradually start meeting new people outside the group, start spending more time with them and less time with other group member, until the majority of your social life happens outside the group, which is when you should quit.

Cults typically try to prevent you from doing this, to keep the exit costly and dangerous. One method is to monitor you and your communications all the time. (For example, Jehovah's Witnesses are always out there in pairs, because they have a sacred duty to snitch on each other.) Another way is to keep you at the group compound where you simply can't meet non-members. Yet another way is to establish a duty to regularly confess what you did and who you talked to, and to chastise you for spending time with unbelievers. Another method is simply to keep you so busy all day long that you have no time left to interact with strangers.

To revert this -- a healthy group will provide y... (read more)

[-]Vaniver4y*300

Were you criticized for socializing with people outside MIRI/CFAR, especially with "rival groups"?

As a datapoint, while working at MIRI I started dating someone working at OpenAI, and never felt any pressure from MIRI people to drop the relationship (and he was welcomed at the MIRI events that we did, and so on), despite Eliezer's tweets discussed here representing a pretty widespread belief at MIRI. (He wasn't one of the founders, and I think people at MIRI saw a clear difference between "founding OpenAI" and "working at OpenAI given that it was founded", so idk if they would agree with the frame that OpenAI was a 'rival group'.)

8jessicata4y
This is what I did, it was just still a pretty small social group, and getting it and "quitting" were part of the same process. I think it was other subgroups at Leverage, at least primarily. So "mental objects" would be a consideration in favor of making friends outside of the group. Unless one is worried about spreading mental objects to outsiders. Most of this is answered in the post, e.g. I made it clear that the over-scheduling issue was not a problem for me at MIRI, which is an important difference. I was certainly spending a lot of time outside of work doing psychological work, and I noted friendships including one with a housemate formed around a shared interest in such work (Zoe notes that a lot of things on her schedule were internal psychological work). There wasn't active prevention of talking to people outside the community but it's common for it to happen anyway which is influenced by soft social pressure (e.g. looking down on people as "normies"). Zoe also is saying a lot of the pressure at Leverage was soft/nonexplicit, e.g. "being looked down on" for taking normal weekends. I do remember Nate Soares who was executive director at the time telling me that "work-life balance is overrated/not really necessary" and if I'd been more sensitive to this I might have spent a lot more time on work. (I'm not even sure he's "wrong" in that the way "normal people" do this has a lot of problems and integrating different domains of life can help sometimes, it still could have been taken as encouragement in the direction of working on weekends etc.)
5Linch4y
Just want to register that this comment seemed overly aggressive to me on a first read, even though I probably have many sympathies in your direction (that Leverage is importantly disanalogous to MIRI/CFAR)
8jessicata4y
The following recent Twitter thread by Eliezer is interesting in the context of the discussion of whether "upsetting but plausible ideas" are coming from central or non-central community actors, and Eliezer's description of Michael Vassar as "causing psychotic breaks": (in reply to "My model of Eliezer is not so different from his constantly screaming, silently to himself, at all times, pausing only to scream non-silently to others, so he doesn't have to predictably update in the future.":) A few takeaways from this: 1. Obviously, Eliezer is saying that there is a plausible but extremely upsetting idea that could be learned by studying neural networks sufficiently competently. [EDIT: Maybe I'm wrong that this is indicating neural nets being powerful and is just indicating them being unreliable for mission-critical applications? Both interpretations seem plausible...] 2. This statement, itself, is plausible and upsetting, though presumably less upsetting than if one actually knew the thing that could be learned about neural networks. 3. Someone who was "constantly screaming" would be considered, by those around them, to be having a psychotic break (or an even worse mental health problem), and be almost certain to be psychiatrically incarcerated. 4. Eliezer is, to all appearances, trying to convey these upsetting ideas on Twitter. 5. It follows that, to the extent that Eliezer is not "causing psychotic breaks", it's only because he's insufficiently capable of causing people to believe "upsetting but plausible ideas" that he thinks are true, i.e. because he's failing (or perhaps not-really-trying, only pretending to try) to actually convey them.
[-]TurnTrout4y380

This does not seem like the obvious reading of the thread to me.

Obviously, Eliezer is saying that there is a plausible but extremely upsetting idea that could be learned by studying neural networks sufficiently competently.

I think Eliezer is saying that if you understood on a gut level how messy deep networks are, you'd realize how doomed prosaic alignment is. And that would be horrible news. And that might make you scream, although perhaps not constantly. 

After all, Eliezer is known to use... dashes... of colorful imagery. Do you really think he is literally constantly screaming silently to himself? No? Then he was probably also being hyperbolic about how he truly thinks a person would respond to understanding a deep network in great detail.

That's why I feel that your interpretation is grasping really hard at straws. This is a standard "we're doomed by inadequate AI alignment" thread from Eliezer.

Reply
[-]jessicata4y120

Even though it's an exaggeration, Eliezer is, with this exaggeration, trying to indicate an extremely high level of fear, off the charts compared with what people are normally used to, as a result of really taking in the information. Such a level of fear is not clearly lower than the level of fear experienced by the psychotic people in question, who experienced e.g. serious sleep loss due to fear.

Reply
[-]dxu4y400

I strong-upvoted both of Jessica's comments in this thread despite disagreeing with her interpretation in the strongest possible terms. I did so because I think it is important to note that, for every "common-sense" interpretation of a community leader's words, there will be some small minority who interpret it in some other (possibly more damaging) way. While I think (importantly) this does not imply it is the community leader's responsibility to manage their words in such a way that no misinterpretation is possible (which I think is simply completely unfeasible), I am nonetheless in favor of people sharing their (non-standard) interpretations, given the variation in potential responses.

As Eliezer once said (I'm paraphrasing from memory here, so the following may not be word-for-word accurate, but I am >95% confident I'm not misremembering the thrust of what he said), "The question I have to ask myself is, will this drive more than 5% of my readers insane?"

EDIT: I have located the text of the original comment. I note (with some vindication) that once again, it seems that Eliezer was sensitive to this concern way ahead of when it actually became a thing.

Reply
8Viliam4y
Hm, I thought that the upsetting thing is how neural networks work in general. Like the ones that can correctly classify pictures with 99% probability... and then you slightly adjust a few pixels in such a way that a human sees no difference, but the neural network suddenly makes a completely absurd claim with high certainty. And, if you are using neural networks to solve important problems, and become aware of this, then you realize that despite them doing a great job in 99% of situations and a random stupid thing in the remaining 1%, there is actually no limit to how insanely wrong they can get, and that it can happen in circumstances that would seem perfectly harmless to you. That the underlying logic is just... inhuman.

(To make an analogy, imagine that you hire a human to translate from French to English. The human is pretty good but not perfect, which means that he gets 99% right. In the remaining 1% he either translates the word incorrectly or says that he doesn't know. These two options are the only results you expect. -- Now instead of a human, you hire a robot. He also translates 99% correctly and 1% incorrectly or with no output. But in addition to this, if you give him a specifically designed input, he will say a complete absurdity. Like, he would translate "UN CHAT" as "A CAT", but when you strategically add a few dots and make it "ỤN ĊHAṬ", he will suddenly insist that it means "CENTRUM FOR APPLIED RATIONALITY" and will assign a 99.9999999% certainty to this translation. Note that this is not its usual reaction to dots; the input papers usually contain some impurities or random dots, and the algorithm has always successfully ignored them... until now. -- The answer is not just wrong, but absurdly wrong; it happened in a situation where you felt quite sure nothing wrong could happen, and the robot didn't even feel uncertain.)

So, I think that you got this part wrong (and that putting "obviously" in front of it makes this weirdly ironic in given context
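[Editor's note: the adversarial-example phenomenon described above can be sketched in a few lines. This is an illustrative toy, not anything from the thread: a deterministic linear scorer with a sigmoid output, attacked with the fast-gradient-sign method. Real image classifiers are far more complex, but the mechanism is the same: a per-coordinate change too small to notice, summed over many dimensions, flips a confident prediction.]

```python
import numpy as np

# Toy model: a fixed linear "network" over D inputs with a sigmoid output.
D = 1000
w = np.where(np.arange(D) % 2 == 0, 1.0, -1.0)  # fixed "trained" weights
x = 0.003 * w                                   # an input the model likes

def predict(v):
    """P(class = 1) under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ v)))

p_before = predict(x)  # logit = w @ x = 3, so P ~ 0.95: confident "yes"

# FGSM step: nudge each coordinate by a tiny epsilon against the gradient
# sign. The gradient of the score w @ v with respect to v is just w.
epsilon = 0.01
x_adv = x - epsilon * np.sign(w)

p_after = predict(x_adv)  # logit = 3 - epsilon * D = -7, so P ~ 0.001

print(f"before: {p_before:.3f}  after: {p_after:.4f}")
```

Each coordinate moved by only 0.01, but because the perturbation is aligned with the gradient in every one of the 1000 dimensions, the logit swings by 10 and a ~95% "yes" becomes a ~99.9% "no".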
6Vaniver4y
I think it's important that the errors are not random; I think you mean something more like "they make large opaque errors."
4jessicata4y
Given what else Eliezer has said, it's reasonable to infer that the screaming is due to the possibility of everyone dying because neural-network-based AIs are powerful but unalignable, not merely that your AI application might fail unexpectedly. It's really strange to think the idea isn't upsetting when Eliezer says understanding it would cause "constant screaming", even granting that's an exaggeration. Maybe ask someone who doesn't read LW regularly whether Eliezer is saying that the idea you could get by knowing how neural nets work is upsetting; I think they would agree with me.
[-]localdeity4y210

He specified "mission-critical".  An AI's ability to take over other machines in the network, take over the internet, manufacture grey goo, etc. (choose your favorite doomsday scenario), is not really related to how mission-critical its original task was.  (In fact, someone's AI to choose the best photo filters to match the current mood on Instagram to maximize "likes" seems both more likely to have arbitrary network access and less likely to have careful oversight than a self-driving car AI.)  Therefore I do think his comment was about the likelihood of failure in the critical task, and not about alignment.

I think he meant something like this:  The neural net, used e.g. to recognize cars on the road, makes most of its deductions based on accidental correlations and shortcuts in the training data—things like "it was sunny in all the pictures of trucks", or "if it recognizes the exact shape and orientation of the car's mirror, then it knows which model of car it is, and deduces the rest of the car's shape and position from that, rather than by observing the rest of the car".  (Actually they'd be lower-level and less human-legible than this.  It's like s... (read more)

Reply
4jessicata4y
Ok, I see how this is plausible. I do think that the reply to Zvi adds some context where Zvi is basically saying "Eliezer is always screaming, taking pauses to scream at others", and the thing Eliezer is usually expressing fear about is AI killing everyone. I see how it could go either way though.
[-]Duncan Sabien (Inactive)4y*920

One thing that has been bothering me a lot is that it seems like it's really likely that people don't realize just how distinct CFAR and MIRI are.

I've worked at each org for about three years total.

Some things which make it reasonable to lump them together and use the label "CFAR/MIRI":

  • They both descend from what was at one time a single organization.
  • They had side-by-side office spaces for many years, including a shared lunch table in the middle where people from both orgs would hang out and chat.
  • There are a lot of people common to both orgs (e.g. Anna does work for both orgs, I moved from CFAR to MIRI).
  • CFAR ran many explicit programs on MIRI's behalf (e.g. MSFP, or less directly but still pretty clearly AIRCS).
  • Most MIRI staff have been to a CFAR workshop.  Most MIRI staff have participated in at least one debugging session with a CFAR staff member (this was a service CFAR explicitly offered for a while).
  • Both orgs are explicitly concerned with navigating existential risk from unaligned artificial intelligence.
  • If MIRI "needed help," CFAR would be there.  If CFAR "needed help," MIRI would be there.  They are explicitly friendly, allied orgs.
  • The "most CFARish" MIRI empl
... (read more)
Reply
[-]AnnaSalamon4y220

I agree with all of the above. And yet a third thing, which Jessica also discusses in the OP, is the community near MIRI and/or CFAR, whose ideology has been somewhat shaped by the two organizations.

There are some good things to be gained from lumping things together (larger datasets on which to attempt inference) and some things that are confusing.

Reply
[-]Alex Vermillion4y150

I know you're busy with all this and other things, but how is this statement

One thing that has been bothering me a lot is that it seems like it’s really likely that people don’t realize just how distinct CFAR and MIRI are.

[...]

I agree with all of the above

compatible with this statement?

As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong.

Actually, that was true for the last few years (with an ambiguous in-between time during covid), but it is not true now.

This thread is agreeing the orgs are completely different, but elsewhere you agreed that CFAR functions as a funnel into MIRI. I ask this out of personal interest in CFAR and MIRI going forwards and because I'm currently much more confused about how the two work than I was a week ago.

Reply
[-]Duncan Sabien (Inactive)4y140

In the 2015 - 2018 era, CFAR mostly did not serve as a funnel into MIRI, in terms of total effort, programs, the curriculum of those programs, etc. But also:

  • CFAR ran some specific programs intended to funnel promising people toward MIRI, such as MSFP
  • CFAR "kept its eyes out" during its regular programs for people who looked promising and might be interested in getting more involved with MIRI or MIRI-adjacent work

Toward the 2018 - 2020 era, some CFAR staff incubated the AIRCS program, which was a lot like CFAR workshops except geared toward bridging between the AI risk community and various computer scientist bubbles, with a strong eye toward finding people who might work on MIRI projects.  AIRCS started as a more-or-less independent project that occasionally borrowed CFAR logistical support, but over time CFAR decided to contribute more explicit effort to it, until it eventually became (afaik) straightforwardly one of the two or three most important "things going on at CFAR," according to CFAR.

Staff who were there at the time (this was as I was phasing out) might correct this summary, but I believe it's right in its essentials.

In the last two years, CFAR hasn't done much outward-facing work at all, due to COVID, and so has neither been a MIRI funnel nor definitively not a MIRI funnel.

Reply
[-]AnnaSalamon4y*100

In the last two years, CFAR hasn't done much outward-facing work at all, due to COVID, and so has neither been a MIRI funnel nor definitively not a MIRI funnel.

Yes, but I would predict that we won't be the same sort of MIRI funnel going forward. This is because MIRI used to have specific research programs that it needed to hire for, and it was sponsoring AIRCS (covering direct expenses plus loaning us some researchers to help run the thing) in order to recruit for that; those research programs have been discontinued, and so AIRCS won't be so much of a thing anymore.

This has been the main part of why no AIRCS post vaccines, not just COVID.

I, and I would guess some others at CFAR, am interested in running AIRCS-like programs going forward, especially if there are groups that want to help us pay the direct expenses for those programs and/or researchers that want to collaborate with us on such programs. (Message me if you're reading this and in one of those categories.) But it'll be less MIRI-specific this time, since there isn't that recruiting angle.

Also, more broadly, CFAR has adopted different structures for organizing ourselves internally, and we are bigger now into "i... (read more)

Reply
1Alex Vermillion4y
Ah, so I should take the first statement as being strictly NOW, like 2021? That clears things up a lot, thanks!
4Duncan Sabien (Inactive)4y
I think Anna was saying "it is true that in the 2018 - 2020 era, CFAR was about 60% a hiring ground and only 40% something else, but that is not true currently."
3Alex Vermillion4y
If this is the case, I do understand now, but I think the comment claiming that it's not true at the literal current moment of October 2021 is misleading (though probably not intentionally so). I think it is important to the CFAR-aligned folks that CFAR is not "bad" in the way noted in that comment, but to everyone else, the important thing is whether or not that criticism is true. It was my initial ignorance of the fact that we were looking at the same fact from different angles that led to the confusion. (Also, I'm not continuing this out of a desire to show that "I'm right" or something, but just to explain why I cared, since I now understand the mistake and can explain it. I'm happy to flesh it out more if this wasn't very clear.)
3Duncan Sabien (Inactive)4y
TBC it easily may also be that CFAR made strategic shifts during COVID that make the statement true in a non-trivial way; I simply wouldn't know that fact and so can't speak to it.
[-]Scott Garrabrant4y180

Mostly agree. I especially agree about the organizational structure being very different.

I would not have said ""The median CFAR employee and the median MIRI employee interact frequently." is not even close to true", but it depends on the operationalization of frequently. But according to my operationalization, the lunch table alone makes it close to true.

I would also not have said "I think that a CFAR staff retreat is extremely unlike a MIRI research retreat." (e.g. we have attempted to Circle at a research retreat more than once.) (I haven't actually been to a CFAR staff retreat, but I have been to some things that I imagine are somewhat close, like workshops where a majority of attendees are CFAR staff). 

Reply
1Duncan Sabien (Inactive)4y
I think "we've attempted to circle at a research retreat more than once" is only a little stronger evidence of overlap than "we also ate food at our retreat." Fair point about the lunch table, although my sense is that a strict majority of MIRI employees were almost never at the lunch table, and for the first two years of my time at CFAR we didn't share a lunch table.
[-]Linch4y110

If you pick a randomly selected academic or hobby conference, I will be much more surprised that they had circling than if they had food.

Reply
1Duncan Sabien (Inactive)4y
Yeah.  I am more pointing at "the very fact that Scott seems to think that 'trying to circle more than once' is sufficient to posit substantial resemblance between MIRI research retreats and CFAR staff retreats is strong evidence that Scott has no idea what the space of CFAR staff retreats is like."
4Linch4y
To clarify, are you saying that CFAR staff retreats don't involve circling?
5AnnaSalamon4y
CFAR staff retreats often involve circling. Our last one, a couple weeks ago, had this, though as an optional evening thing that some but not most took part in.
4Duncan Sabien (Inactive)4y
I'm saying they involved circling often while I was there but that fact was something like 3-15% of their "character" (and probably closer to 3% imo) and so learning that some other thing also involves circling tells you very little about the overall resemblance of the two things.
6Scott Garrabrant4y
Surprised by the circling comment, but it doesn't seem worth going deep on a nitpick.
7Eli Tyre4y
All this sounds broadly correct to me, modulo some nitpicks that are on the whole smaller than Scott's objections (for a sense of scale).
[-]AnnaSalamon4y*840

FWIW, the above matches my own experiences/observations/hearsay at and near MIRI and CFAR, and seems to me personally like a sensible and correct way to put it together into a parsable narrative. The OP speaks for me. (Clarifying at a CFAR colleague's request that here and elsewhere, I'm speaking just for myself and not for CFAR or anyone else.)

(I of course still want other conflicting details and narratives that folks may have; my personal 'oh wow this puts a lot of pieces together in a parsable form that yields basically correct predictions' level is high here, but insofar as I'm encouraging anything because I'm in a position where my words are loud invitations, I want to encourage folks to share all the details/stories/reactions pointing in all the directions.) I also have a few factual nitpicks that I may get around to commenting, but they don’t subtract from my overall agreement.

I appreciate the extent to which you (Jessicata) manage to make the whole thing parsable and sensible to me and some of my imagined readers. I tried a couple times to write up some bits of experience/thoughts, but had trouble managing to say many different things A without seeming to also negate other true things A’, A’’, etc., maybe partly because I’m triggered about a lot of this / haven’t figured out how to mesh different parts of what I’m seeing with some overall common sense, and also because I kept anticipating the same in many readers.

Reply
[-]Eli Tyre4y*850

The OP speaks for me.

Anna, I feel frustrated that you wrote this. Unless I have severely misunderstood you, this seems extremely misleading.

For context, before this post was published Anna and I discussed the comparison between MIRI/CFAR and Leverage. 

At that time, you, Anna, posited a high level dynamic involving "narrative pyramid schemes" accelerating, and then going bankrupt, at about the same time. I agreed that this seemed like it might have something to it, but emphasized that, despite some high level similarities, what happened at MIRI/CFAR was meaningfully different from, and much much less harmful than, what Zoe described in her post.

We then went through a specific operationalization of one of the specific claimed parallels (specifically the frequency and oppressiveness of superior-to-subordinate debugging), and you agreed that the CFAR case was, quantitatively, an order of magnitude better than what Zoe describes. We talked more generally about some of the other parallels, and you generally agreed that the specific harms were much greater in the Leverage case.

(And just now, I talked with another CFAR staff member who reported that the two of you went poi... (read more)

Reply
[-]AnnaSalamon4y730

I think that you believe, as I do, that there were some high-level structural similarities between the dynamics at MIRI/CFAR and at Leverage, and also what happened at Leverage was an order of magnitude worse than what happened at MIRI/CFAR.

Leverage_2018-2019 sounds considerably worse than Leverage_2013-2016.

My current guess is that if you took a random secular American to be your judge, or a random LWer, and you let them watch the life of a randomly chosen member of the Leverage psychology team from 2018-2019 (which I’m told is the worst part) and also of a randomly chosen staff member at either MIRI or CFAR, they would be at least 10x more horrified by the experience of the one in the Leverage psychology team.

I somehow don’t know how to say in my own person “was an order of magnitude worse”, but I can say the above. The reason I don’t know how to say “was an order of magnitude worse” is because it honestly looks to me (as to Jessica in the OP) like many places are pretty bad for many people, in the sense of degrading their souls via deceptions, manipulations, and other ethical violations. I’m not sure if this view of mine will sound over-the-top/dismissable or we-all-already... (read more)

Reply
[-]hg004y320

These claims seem rather extreme and unsupported to me:

  • "Lots of upper middle class adults hardly know how to have conversations..."

  • "the average workplace [is] more than 1/10th as damaging to most employees’ basic human capacities, compared to Leverage_2018-2019."

I suggest if you write a toplevel post, you search for evidence for/against them.

Elaborating a bit on my reasons for skepticism:

  • It seems like for the past 10+ years, you've been mostly interacting with people in CFAR-adjacent contexts. I'm not sure what your source of knowledge is on "average" upper middle class adults/workplaces. My personal experience is that normal people are comfortable having non-superficial conversations if you convince them you aren't weird first, and normal workplaces are pretty much fine. (I might be over-selecting on smaller companies where people have a sense of humor.)

    • A specific concrete piece of evidence: Joe Rogan has one of the world's most popular podcasts, and the episodes I've heard very much seem to me like they're "hitting new unpredictable thoughts". Rogan is notorious for talking to guests about DMT, for instance.
  • The two observations seem a bit inconsistent, if you'll

... (read more)
Reply
[-]Unreal4y820

RE: "Lots of upper middle class adults hardly know how to have conversations..."

I will let Anna speak for herself, but I have evidence of my own to bring... maybe not directly about the thing she's saying but nearby things. 

  • I have noticed friends who jumped up to upper middle class status due to suddenly coming into a lot of wealth (prob from crypto stuff). I noticed that their conversations got worse (from my POV). 
    • In particular: They were more self-preoccupied. They discussed more banal things. They spent a lot of time optimizing things that mostly seemed trivial to me (like what to have for dinner). When I brought up more worldly topics of conversation, someone expressed a kind of "wow I haven't thought about the world in such a long time, it'd be nice to think about the world more." Their tone was a tad wistful and they looked at me like they could learn something from me, but also they weren't going to try very hard and we both knew it. I felt like they were in a wealth/class bubble that insulated them from many of the world's problems and suffering. It seemed like they'd lost touch with their real questions and deep inner longings. I don't think this was as true of
... (read more)
Reply
[-]Unreal4y590

Oh yeah they also spent a lot of time trying to have the right or correct opinions. So they would certainly talk about 'the world' but mostly for the sake of having "right opinions" about it. Not so that they could necessarily, like, have insights into it or feel connected to what was happening. It was a game with not very high or real stakes for them. They tended to rehash the SAME arguments over and over with each other. 

Reply
[-]Viliam4y130

This all sounds super fascinating to me, but perhaps a new post would be better for this.

My current best guess is that some people are "intrinsically" interested in the world, and for others the interest is only "instrumental". The intrinsically interested are learning things about the real world because it is fascinating and because it is real. The instrumentally interested are only learning about things they assume might be necessary for satisfying their material needs. Throwing lots of money at them will remove chains from the former, but will turn off the engine for the latter.

For me another shocking thing about people in tech is how few of them are actually interested in the tech. Again, this seems to be the intrinsic/instrumental distinction. The former group studies Haskell or design patterns or whatever. The latter group is only interested in things that can currently increase their salary, and even there they are mostly looking for shortcuts. Twenty years ago, programmers were considered nerdy. These days, programmers who care about e.g. clean code are considered too nerdy by most programmers.

I also don't like the way it insulates people from noticing how much death, sufferi

... (read more)
Reply
6ESRogs4y
Bit of a nitpick, but FYI I think you're using "worldly" here in almost the opposite of the way it's usually used. It seems like you mean "weighty" or "philosophical" or something to do with the big questions in life. Whereas traditionally, the term means: On that definition I'd say it was your friends who wanted to talk about worldly stuff, while you wanted to push the conversation in a non-worldly direction! (As I understand, the meaning originally comes from contrasting "the world" and the church.)
4Unreal4y
Oh, hmmmmm. Sorry for lack of clarity. I don't remember exactly what the topic I brought up was. I just know it wasn't very 'local'. Could have been philosophical / deep. Could have been geopolitical / global / big picture. 
9Douglas_Knight4y
A couple books suggesting that white collar workplaces are more traumatic than blue collar ones are Moral Mazes (cited by Jessica) and Bullshit Jobs.
[-]cousin_it4y220

I used to think the ability to have deep conversations is an indicator of how "alive" a person is, but now I think that view is wrong. It's better to look at what the person has done and is doing. Surprisingly there's little correlation: I often come across people who are very measured in conversation, but turn out to have amazing skills and do amazing things.

Reply
2iceman4y
Assuming that language is about coordination instead of object level world modeling, why should we be surprised that there's little correlation between these two very different things?
3TekhneMakre4y
Because object level world modeling is vastly easier and more unconstrained when you can draw on the sight of other minds, so a live world-modeler who can't talk to people has something going wrong (whether in them or in the environment).
[-]Adam Scholl4y*720

I also feel really frustrated that you wrote this, Anna. I think there are a number of obvious and significant disanalogies between the situations at Leverage versus MIRI/CFAR. There's a lot to say here, but a few examples which seem especially salient:

  • To the best of my knowledge, the leadership of neither MIRI nor CFAR has ever slept with any subordinates, much less many of them.
  • While I think staff at MIRI and CFAR do engage in motivated reasoning sometimes wrt PR, neither org engaged in anything close to the level of obsessive, anti-epistemic reputational control alleged in Zoe's post. MIRI and CFAR staff were not required to sign NDAs agreeing they wouldn't talk badly about the org—in fact, at least in my experience with CFAR, staff much more commonly share criticism of the org than praise. CFAR staff were regularly encouraged to share their ideas at workshops and on LessWrong, to get public feedback. And when we did mess up, we tried quite hard to publicly and accurately describe our wrongdoing—e.g., Anna and I spent low-hundreds of hours investigating/thinking through the Brent affair, and tried so hard to avoid accidentally doing anti-epistemic reputational control (this was
... (read more)
Reply
[-]AnnaSalamon4y460

Yeah, sorry. I agree that my comment “the OP speaks for me” is leading a lot of people to false views that I should correct. It’s somehow tricky because there’s a different thing I worry will be obscured by my doing this, but I’ll do it anyhow as is correct and try to come back for that different thing later.

To the best of my knowledge, the leadership of neither MIRI nor CFAR has ever slept with a subordinate, much less many of them.

Agreed.

While I think staff at CFAR and MIRI probably engaged in motivated reasoning sometimes wrt PR, neither org engaged in anything close to the level of obsessive, anti-epistemic reputational control alleged in Zoe's post. CFAR and MIRI staff were certainly not required to sign NDAs agreeing they wouldn't talk badly about the org—in fact, in my experience CFAR staff much more commonly share criticism of the org than praise.  CFAR staff were regularly encouraged to share their ideas at workshops and on LessWrong, to get public feedback. And when we did mess up, we tried extremely hard to publicly and accurately describe our wrongdoing—e.g., Anna and I personally spent hundreds of hours investigating/thinking about the Brent affair, and tr

... (read more)
Reply
3Viliam4y
Perhaps this is an opportunity to create an internal document on "unhealthy behaviors" that would list the screwups and the lessons learned, and read it together regularly, like a safety training? (Analogously to how organizations that get their computers hacked or documents stolen describe how it happened as part of their safety training.) Perhaps with anonymous feedback on whether someone has a concern that MIRI or CFAR is slipping into some bad pattern again.

Also, it might be useful to hire an external psychologist who would at regular intervals have a discussion with MIRI/CFAR employees, and to provide this document to the psychologist, so they know what risks to focus on. (Furthermore, I think the psychologist should not be a rationalist, to provide a better outside view.) For starters, someone could create the first version of the document by extracting information from this debate.

EDIT: Oops, on second reading of your comment, it seems like you already have something like this. Uhm, maybe a good opportunity to update/extend the document?

* As a completely separate topic, it would be nice to have a table with the following columns: "Safety concern", "What happened in MIRI/CFAR", "What happened in Leverage (as far as we know)", "Similarities", "Differences". But this is much less important, in the long term.
[-]Duncan Sabien (Inactive)4y120

I endorse Adam's commentary, though I did not feel the frustration Eli and Adam report, possibly because I know Anna well enough that I reflexively did the caveating in my own brain rather than modeling the audience.

Reply
7Benquo4y
This issue doesn't seem particularly important to me, but the comparison you're making is a good example of a more general problem I want to talk about. My impression is that the decision structure of CFAR was much less legible and transparent than that of Leverage, so that it would be harder to determine who might be treated as subordinate to whom in what context. In addition, my impression from the years I was around is that Leverage didn't preside over as much of an external scene: Leverage followers had formalized roles as members of the organization, while CFAR had a "community," many of whom were workshop alumni. Am I missing something here? The communication I read from CFAR seemed like it was trying to reveal as little as it could get away with, gradually saying more (and taking a harsher stance towards Brent) in response to public pressure, not like it was trying to help me, a reader, understand what had happened.
[-]ESRogs4y310

Anna and I personally spent hundreds of hours investigating/thinking about the Brent affair... our writeup about it...

 

Am I missing something here? The communication I read from CFAR seemed like it was trying to reveal as little as it could get away with...

FWIW, I think you and Adam are talking about two different pieces of communication. I think you are thinking of the communication leading up to the big community-wide discussion that happened in Sept 2018, while Adam is thinking specifically of CFAR's follow-up communication months after that — in particular this post. (It would have been in between those two times when Adam and Anna did all that thinking that he was talking about.)

Reply
6Adam Scholl4y
Yeah, this was the post I meant.
[-]Adam Scholl4y*240

I agree manager/staff relations have often been less clear at CFAR than is typical. But I'm skeptical that's relevant here, since as far as I know there aren't really even borderline examples of this happening. The closest example to something like this I can think of is that staff occasionally invite their partners to attend or volunteer at workshops, which I think does pose some risk of fucky power dynamics, albeit dramatically less risk imo than would be posed by "the clear leader of an organization, who's revered by staff as a world-historically important philosopher upon whose actions the fate of the world rests, and who has unilateral power to fire any of them, sleeps with many employees."

Am I missing something here? The communication I read from CFAR seemed like it was trying to reveal as little as it could get away with, gradually saying more (and taking a harsher stance towards Brent) in response to public pressure, not like it was trying to help me, a reader, understand what had happened.

As lead author on the Brent post, I felt bummed reading this. I tried really hard to avoid letting my care for/interest in CFAR affect my descriptions of what happened, or my choices abou... (read more)

Reply
7Thrasymachus4y
I think CFAR ultimately succeeded in providing a candid and good faith account of what went wrong, but the time it took to get there (i.e. 6 months between this and the initial update/apology) invites adverse inferences like those in the grandparent. A lot of the information ultimately disclosed in March would definitely have been known to CFAR in September, such as Brent's prior involvement as a volunteer/contractor for CFAR, his relationships/friendships with current staff, and the events at ESPR. The initial responses remained coy on these points, and seemed apt to give the misleading impression that CFAR's mistakes were (relatively) much milder than they in fact were. I (among many) contacted CFAR leadership to urge them to provide a more candid and complete account when I discovered some of this further information independently.

I also think, similar to how it would have been reasonable to doubt 'utmost corporate candour' back then given the initial partial disclosure, it's reasonable to doubt CFAR has addressed the shortcomings revealed, given the lack of concrete follow-up. I also approached CFAR leadership when CFAR's 2019 Progress Report and Future Plans initially made no mention of what happened with Brent, nor what CFAR intended to improve in response to it. What was added in is not greatly reassuring: a cynic would note this is 'marking your own homework', but cynicism is unnecessary to recommend more self-scepticism.

I don't doubt the Brent situation indeed inspired a lot of soul searching and substantial, sincere efforts to improve. What is more doubtful (especially given the rest of the morass of comments) is whether these efforts actually worked. Although there is little prospect of satisfying me, more transparency over what exactly has changed - and perhaps third-party oversight and review - may better reassure others.
[-]Puxi Deek4y100

It would help if they actually listed and gave examples of exactly what kind of mental manipulation they were doing to people, other than telling them to take drugs. These comments seem to dance around the exact details of what happened and only talk about the group dynamics between people that resulted from these mysterious actions/events.

Reply
[-]AnnaSalamon4y640

To be clear, a lot of what I find so relaxing about Jessica’s post is that my experience reading it is of seeing someone who is successfully noticing a bunch of details in a way that, relative to what I’m trying to track, leaves room for lots of different things to get sorted out separately.

I just got an email that led me to sort of triggeredly worry that folks will take my publicly agreeing with the OP to mean that I e.g. think MIRI is bad in general. I don’t think that; I really like MIRI and have huge respect and appreciation for a lot of the people there; I also like many things about the CFAR experiment and love basically all of the people who worked there; I think there’s a lot to value across this whole space.

I like the detailed specific points that are made in the OP (with some specific disagreements; though also with corroborating detail I can add in various places); I think this whole “how do we make sense of what happens when people get together into groups? and what happened exactly in the different groups?” question is an unusually good time to lean on detail-tracking and reading comprehension.

Reply
4LoganStrohl4y
[I deleted a comment in this thread because I realized it belonged in a different thread. Just being clumsy, sry.]
[-]philip_b4y*350

To my understanding, since the time when the events described in the OP took place, MIRI and CFAR have been very close and getting closer and closer. As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong. Since you're one of the leaders of CFAR, that makes you one of the leading people behind all those things the OP is critical of.

The OP even writes that she thought and thinks CFAR was corrupt in 2017:

Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz. (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; ...)

Here she mentions Ziz also thinking that CFAR was corrupt, and I remember that in her blog, Ziz described you as being at the center of said corruption.

So, how is all this compatible with you agreeing with the OP?

Reply
[-]AnnaSalamon4y510

Since you're one of the leaders of CFAR, that makes you one of the leading people behind all those things the OP is critical of.

Yes.

So, how is all this compatible with you agreeing with the OP?

Basically because I came to see I’d been doing it wrong.

Happy to try to navigate follow-up questions if anyone has any.

Reply
[-]TurnTrout4y130

Happy to try to navigate follow-up questions if anyone has any.

PhoenixFriend wrote:

Psychedelic use was common among the leadership of CFAR and spread through imitation, if not actual institutional encouragement, to the rank-and-file.

Is this true?

Reply
[-]AnnaSalamon4y*270

Basically no. Can't say a plain "no," but can say "basically no." I'm not willing to give details on this one. I'm somehow fretting on this one, asking if "basically no" is true from all vantage points (it isn't, but it's true from most), looking for a phrase similar to that but slightly weaker, considering e.g. "mostly no", but something stronger is true. I think this'll be the last thing I say in this thread about this topic.

Reply
3epistemic meristem4y
What does "corrupt" mean in this context?  What are some examples of noncorrupt employers?
[-]AnnaSalamon4y1110

A CFAR board member asked me to clarify what I meant about “corrupt”, also, in addition to this question.

So, um. Some legitimately true facts the board member asked me to share, to reduce confusion on these points:

  • There hasn’t been any embezzlement. No one has taken CFAR’s money and used it to buy themselves personal goods.
  • I think if you took non-profits that were CFAR’s size + duration (or larger and longer-lasting), in the US, and ranked them by “how corrupt is this non-profit according to observers who people think of as reasonable, and who got to watch everything by video and see all the details”, CFAR would on my best guess be ranked in the “less corrupt” half rather than in the “more corrupt” half.

This board member pointed out that if I call somebody “tall” people might legitimately think I mean they are taller than most people, and if I agree with an OP that says CFAR was “corrupt” they might think I’m agreeing that CFAR was “more corrupt” than most similarly sized and durationed non-profits, or something.

The thing I actually think here is not that. It’s more that I think CFAR’s actions were far from the kind of straight-forward, sincere attempt to increase rationali... (read more)

Reply
[-]Duncan Sabien (Inactive)4y190

I have strong-upvoted this comment, which is not a sentence I think people usually ought to leave as its own reply, but which seems relevant given my relationship to Anna and CFAR and so forth.

Reply
[-]AnnaSalamon4y330

As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong.

Actually, that was true for the last few years (with an ambiguous in-between time during covid), but it is not true now. Partly because MIRI abandoned the research direction we’d most been trying to help them recruit for. CFAR will be choosing its own paths going forward more.

Reply
4[comment deleted]4y
[-]Aryeh Englander4y780

I see that many people are commenting how it's crazy to try to keep things secret between coworkers, or to not allow people to even mention certain projects, or that this kind of secrecy is psychologically damaging, or the like.

Now, I imagine this is heavily dependent on exactly how it's implemented, and I have no idea how it's implemented at MIRI. But just as a relevant data point - this kind of secrecy is totally par for the course for anybody who works for certain government and especially military-related organizations or contractors. You need extensive background checks to get a security clearance, and even then you can't mention anything classified to someone else unless they have a valid need to know, you're in a secure classified area that meets a lot of very detailed guidelines, etc. Even within small groups, there are certain projects that you simply are not allowed to discuss with other group members, since they do not necessarily have a valid need to know. If you're not sure whether something is classified, you should be talking to someone higher up who does know. There are projects that you cannot even admit that they exist, and there are even words that you cannot men... (read more)

Reply
[-]jessicata4y110

Some secrecy between coworkers could be reasonable. Including secrecy about what secret projects exist (e.g. "we're combining AI techniques X and Y and applying them to application Z first as a test").

What seemed off is that the only information concealed by the policy in question (that researchers shouldn't ask each other what they're working on) is who is and isn't recently working on a secret project. That isn't remotely enough information to derive AI insights to any significant degree. Doing detective work on "who started saying they had secrets at the same time" to derive AI insights is a worse use of time than just reading more AI papers.

The policy in question is strictly dominated by an alternative policy, of revealing that you are working on a secret project but not which one. When I see a policy that is this clearly suboptimal for the stated goal, I have to infer alternative motives, such as maintaining domination of people by isolating them from each other. (Such a motive could be memetic/collective, partially constituted by people copying each other, rather than serving anyone's individual interest, although personal motives are relevant too)

Mainstream organizations... (read more)

Reply
[-]Alex Vermillion4y120

There are a few parts in here that seem fishy enough to me to try to red flag them.

Mainstream organizations being secretive at the level MIRI was isn’t a particularly strong argument. As we learned with COVID, many mainstream organizations are opposing their stated mission.

This is fair as a detraction to the sorta appeal to authority it is in reply to, but is also not a very good proof that secrecy is a bad idea. To boil it down smaller, the argument went "Secrecy works well for many existing organizations" and you replied "Many existing organizations did a bad job during Covid". Strictly speaking, doing a bad job during Covid means that not everything is going well, but this is still a pretty weird and weak argument.

This whole paragraph:

Zack Davis points out that controlling people into acting against their interests is a common function of mainstream policies (this is especially obvious in the military). Such control is especially counterproductive for FAI research, where a large part of the problem is to make AI act on human values rather than false approximations of them. Revealing actual human value requires freedom to act according to revealed preferences, not just pre-

... (read more)
Reply
8jessicata4y
There's a big difference between "optimizing poorly" and "pessimizing", i.e. making the problem worse in ways that require some amount of cleverness. Mainstream institutions handling COVID was a case of pessimizing, not just optimizing poorly, e.g. banning tests, telling people masks don't work, and seizing mask shipments.

I don't think you're mis-stating the argument here; it really is a thing I'm arguing that institutions that make people act against their values can't build FAI. As an example, you could imagine an institution that optimized for some utility function U that was designed by committee. That U wouldn't be the human utility function (unless the design-by-committee process is a reliable value loader), so forcing everyone to optimize U means you aren't optimizing the human utility function; it has the same issues as a paperclip maximizer.

What if you try setting U = "get FAI"? Too bad, "FAI" is a Lisp token; for it to have semantics it has to connect with human value somehow, i.e. someone actually wanting a thing and being assisted in getting it. Maybe you can have a research org where some people are slaves and some aren't, but for this to work you'd need a legible distinction between the two classes, so you don't get confused into thinking you're optimizing the slave's utility function by enslaving them.
6Alex Vermillion4y
With a bit more meat, I can see what you're referring to better. I still don't agree I think, but I can see why you would build that belief much better than I could before. I appreciate the clarification, thank you.
[-]Duncan Sabien (Inactive)4y100

You have by far more information than me about what it's like on the ground as a MIRI researcher.

But one thing missing so far is that my sense was that a lot of researchers preferred the described level of secretiveness as a simplifying move?

e.g. "It seems like I could say more without violating any norms, but I have a hard time tracking where the norms are and it's easier for me to just be quiet as a general principle.  I'm going to just be quiet as a general principle rather than being the-maximum-cooperative-amount-of-open, which would be a burden on me to track with the level of conscientiousness I would want to apply."

Reply
[-]jessicata4y110

The policy described was mandated, it wasn't just on a voluntary basis. Anyway, I don't really trust something optimizing this badly to have a non-negligible shot at FAI, so the point is kind of moot.

Reply
[-]So8res4y750

First and foremost: Jessica, I'm sad you had a bad late/post-MIRI experience. I found your contributions to MIRI valuable (Quantilizers and Reflective Solomonoff Induction spring to mind as some cool stuff), and I personally wish you well.

A bit of meta before I say anything else: I'm leery of busting in here with critical commentary, and thereby causing people to think they can't air dirty laundry without their former employer busting in with critical commentary. I'm going to say a thing or two anyway, in the name of honest communication. I'm open to suggestions for alternative ways to handle this tradeoff.

Now, some quick notes: I think Jessica is truthfully reporting her experiences as she recalls them. I endorse orthonormal's comment as more-or-less matching my own recollections. That said, in a few of Jessica's specific claims, I believe I recognize the conversations she's referring to, and I feel misunderstood and/or misconstrued. I don't want to go through old conversations blow-by-blow, but for a sense of the flavor, I note that in this comment Jessica seems to me to misconstrue some of Eliezer's tweets in a way that feels similar to me. Also, as one example from the text, lo... (read more)

Reply
[-]jessicata4y620

Thanks, I appreciate you saying that you're sorry my experience was bad towards the end (I notice it actually makes me feel better about the situation), that you're aware of how criticizing people the wrong way can discourage speech and are correcting for that, and that you're still concerned enough about misconstruals to correct them where you see fit. I've edited the relevant section of the OP to link to this comment. I'm glad I had a chance to work with you even if things got really confusing towards the end.

Reply
[-]hg004y120

I'm not sure I agree with Jessica's interpretation of Eliezer's tweets, but I do think they illustrate an important point about MIRI: MIRI can't seem to decide if it's an advocacy org or a research org.

"if you actually knew how deep neural networks were solving your important mission-critical problems, you'd never stop screaming" is frankly evidence-free hyperbole, of the same sort activist groups use (e.g. "taxation is theft"). People like Chris Olah have studied how neural nets solve problems a lot, and I've never heard of them screaming about what they discovered.

Suppose there was a libertarian advocacy group with a bombastic leader who liked to go tweeting things like "if you realized how bad taxation is for the economy, you'd never stop screaming". After a few years of advocacy, the group decides they want to switch to being a think tank. Suppose they hire some unusually honest economists, who study taxation and notice things in the data that kinda suggest taxation might actually be good for the economy sometimes. Imagine you're one of those economists and you're gonna ask your boss about looking into this more. You might have second thoughts like: Will my boss scream at m... (read more)

Reply
[-]Scott Garrabrant4y840

MIRI can't seem to decide if it's an advocacy org or a research org.

MIRI is a research org. It is not an advocacy org. It is not even close. You can tell by the fact that it basically hasn't said anything for the last 4 years. Eliezer's personal twitter account does not make MIRI an advocacy org.

(I recognize this isn't addressing your actual point. I just found the frame frustrating.)

Reply
[-]Aella4y110

as a tiny, mostly-uninformed data point, i read "if you realized how bad taxation is for the economy, you'd never stop screaming" to have a very diff vibe from Eliezer's tweet, cause he didn't use the word bad. I know it's a small diff but it hits diff. Something in his tweet was amusing because it felt like it was pointing to a presumably neutral thing and making it scary? whereas saying the same thing about a clearly moralistic point seems like it's doing a different thing. 

Again - a very minor point here, just wanted to throw it in.

Reply
9jessicata4y
With regard to the specific misconstruals:

* I don't think OP asserted that this specific plan was fixed; it was an example of a back-chaining plan, but I see how "a world-saving plan" could imply that it was this specific plan, which it wasn't.
* I didn't specify which small group was taking over the world; I didn't mean to imply that it had to be MIRI specifically. Maybe the comparison with Leverage led to that seeming like it was implied?
* I still don't understand how I'm misconstruing Eliezer's tweets. It seems very clear to me that he's saying something about how neural nets work would be very upsetting if learned about, and I don't see what else he could be saying.
[-]Connor_Flexman4y270

Regarding Eliezer's tweets, I think the issue is that he is joking about the "never stop screaming". He is using humor to point at a true fact, that it's really unfortunate how unreliable neural nets are, but he's not actually saying that if you study neural nets until you understand them then you will have a psychotic break and never stop screaming.

Reply
[-]gallabytes4y740

There's this general problem of Rationalists splitting into factions and subcults with minor doctrinal differences, each composed of relatively elite members of The Community, each with a narrative of how they're the real rationalists and the rest are just posers and/or parasites. And, they're kinda right. Many of the rest are posers; we have a mop problem.

There’s just one problem. All of these groups are wrong. They are in fact only slightly more special than their rival groups think they are. In fact, the criticisms each group makes of the epistemics and practices of other groups are mostly on-point.

Once people have formed a political splinter group, almost anything they write will start to contain a subtle attempt to slip in the doctrine they're trying to push. With sufficient skill, you can make it hard to pin down where the frame is getting shoved in.

I have at one point or another been personally involved with a quite large fraction of the rationalist subcults. This has made the thread hard to read - I keep feeling a tug of motivation to jump into the fray, to take a position in the jostling for credibility or whatever it is being fought over here, which is then marred by the ... (read more)

Reply
5Benquo4y
Same. I don't think I can exit a faction by declaration without joining another, but I want many of the consequences of this. I think I get to move towards this outcome by engaging nonfactional protocols more, not by creating political distance between me & some particular faction.
5lex4y
Without disagreeing with any specific logical statement you have made, I call bullshit on this. You have quoted a short segment such that technically what you're saying is not false, but you're drawing a broader equivalence & request for social credit around "not wanting to be in factions" which is not valid in context of the fact that you are blatantly participating in a faction and doing factional protocols. People are usually on board with the idea of it being better to just talk rather than do politics, and I acknowledge & appreciate the sense in which you want to want to not do politics, but there is a game here which you are playing in and I wish you would own up to that.
[-]jessicata4y200

If Ben says: "I desire X, and I could get that by doing less faction stuff", that implies that he is doing faction stuff. But you're taking it as implying that he isn't.

The only way I could understand your criticism is as making a revealed-preference critique, where Ben is expressing a preference for doing non-faction stuff but is still doing faction stuff. That doesn't seem like a strong critique, though, since doing less faction stuff is somewhat difficult, and noticing the problem is the first step to fixing it.

Reply
1Benquo4y
Seems like you agree with what I actually said, and are claiming to find some implied posture objectionable, but aren't willing to criticize me explicitly enough for me or anyone else to learn from. ¯\_(ツ)_/¯
[-]temporary_visitor_account4y*700

I want to provide an outside view that people might find helpful. This is based on my experience as a high school teacher (6 months total experience), a professor at an R1 university (eight years total experience), and someone who has mentored extraordinarily bright early-career scientists (15 years experience).

It’s very clear to me that the rationalist community is acting as a de facto school and system of interconnected mentorship opportunities. In some cases (CFAR, e.g.) this is explicit.

Academia also does this. It has ~1000 years of experience, dating from the founding of the University of Cambridge, and has learned a few things in that time.

An important discovery is that there are serious responsibilities that come with attending on "young" minds ("young" in quotes; generically the first quarter of life, depending on era; that's <15 up to, today, around <30). These minds are considered inherently vulnerable and in need of protection from manipulation, boundary violations, etc. It's been discovered that making this a blanket and non-negotiable rule has significant positive epistemic and moral effects that haven't been replicated with alternatives.

Even before academic institu... (read more)

Reply
[-]gwillen4y700

Upvoted for thoughtful dissent and outside perspective.

I ... have some complicated mixed feelings here. LW has a very substantial contingent of "gifted kids", who spent a decent chunk of their (...I suppose I should say "our") lives being frustrated that the world would not take them seriously due to age. Groups like that are never going to tolerate norms saying that young age is a reason to talk down to someone. And guidelines for protecting younger people from older people, to the extent that they involve disapproval or prevention of apparently-consensual choices by younger people, are going to be tricky that way. Any concern that "young minds are not allowed to waive" will be (rightly) seen as condescending, especially if you extend "young" to age 30. This does not really become less true if the concern is accurate.

This is extra-true here, because the "rationalist community" is not a single organization with a hierarchy, or indeed (I claim) even really a single community. So you can't make enforceable global rules of conduct, and it's very hard to kick someone out entirely (although I would say it's effectively been done a couple of times.)

You might be relieved to learn that, at... (read more)

Reply
[-]River4y520

I find this position rather disturbing, especially coming from someone working at a university. I have spent the last sixish years working mostly with high school students, occasionally with university students, as a tutor and classroom teacher. I can think of many high school students who are more ready to make adult decisions than many adults I know, whose vulnerability comes primarily from the inferior status our society assigns them, rather than any inherent characteristic of youth. 

As a legal matter (and I believe the law is correct here), your implication that someone acts in loco parentis with respect to college students is simply not correct (with the possible exception of the rare genius kid who attends college at an unusually young age). College students are full adults, both legally and morally, and should be treated as such. College graduates even more so. You have no right to impose a special concern on adults just because they are 18-30.

I think one of the particular strengths of the rationalist/EA community is that we are generally pretty good at treating young adults as full adults, and taking them and their ideas seriously. 

Reply
[-]Sniffnoy4y260

I want to more or less second what River said. Mostly I wouldn't have bothered replying to this... but your line of "today around <30" struck me as particularly wrong.

So, first of all, as River already noted, your claim about "in loco parentis" isn't accurate. People 18 or over are legally adults; yes, there used to be a notion of "in loco parentis" applied to college students, but that hasn't been current law since about the 60s.

But also, under 30? Like, you're talking about grad students? That is not my experience at all. Undergrads are still treated as kids to a substantial extent, yes, even if they're legally adults and there's no longer any such thing as "in loco parentis". But in my experience grad students are, absolutely, treated as adults, nor have I heard of things being otherwise. Perhaps this varies by field (I'm in math) or location or something, I don't know, but I at least have never heard of that before.

Reply
[-]Linch4y*120

Thanks for the outside perspective. If you're willing to go into more detail, I'm interested in a more detailed account from you on both what academia's safeguards are and (per gwillen's comment) where do you think academia's safeguards fall short and how that can be fixed. 

This is decision-relevant to me as I work in a research organization outside of academia (though not working on AI risk specifically), and I would like us to both be more productive than typical in academia and have better safeguards against abuse.

If it helps, we have about 15 researchers now, we're entirely remote, and we hire typically from people who just finished their PhDs or have roughly equivalent research experience, although research interns/fellows are noticeably younger (maybe right after undergrad is the median). 

Reply
[-]temporary_visitor_account4y*320

Sure. I'm really glad to hear. This is not my community, but you did explicitly ask.

This is just off the top of my head, and I don't mean it to be a final complete and correct list. It's just to give you a sense of some things I've encountered, and to help you and your org think about how to empower people and help them flourish. Academia uses a lot of these to avoid the geek-MOP-sociopath cycle.

I'm assuming your institution wants to follow an academic model, including teaching, mentorship, hierarchical student-teacher relationships, etc.

An open question is when you have a duty of care. My rule of thumb is (1) when you or the org is explicitly saying "I'm your teacher", "I'm your mentor"; (2) when you feel a power imbalance with someone because this relationship has arisen implicitly; (3) when someone is soliciting this role from you, whether you want it or not.

If you're a business making money, that's quite different, just say "we're going to use your body and mind to make money" and you've probably gotten your informed consent. :)

* Detection

1. Abuse is non-Gaussian. A small number of people may experience a great deal, while the majority see nothing wrong. That means that occasion... (read more)

Reply
[-]philh4y420

Somebody in the comments said that many of the people reporting abuse are trans, and “trans people suffer from mental illness more”, so maybe they’re just crazy and everything was actually pretty OK.

Hopefully this reasoning looks as crazy to you as it does to me; in the 1970s people would have said the same about gay people, but now we realize that a lot of that was due to homophobia (etc), and a lot of it was due to the fact that gay people, being marginalized, made soft targets for manipulation, blackmail, etc.

So, I think this is not a fair reading of the comment in question. Not a million miles away from, but far enough that I wanted to point it out.

But also, you seem to be saying something like: "consider that maybe trans people's rates of mental illness are downstream of them being trans and society being transphobic, not that their transness is downstream of mental illness".

And, okay, but...

Consider a hypothetical trans support forum. If rationalistthrowaway is right, you'd expect the members of that forum to have higher than average rates of mental illness, possibly leading to high profile events like psychotic breaks and suicides. And it sounds like you don't disagree wi... (read more)

Reply
8Linch4y
Thanks so much for the response! I really appreciate it. I think we have more of a standard manager-managee hierarchical relationship, with the normal corporate guardrails plus a few more. We also have explicit lines of reporting for abuse or other potential issues to people outside of the organization, to minimize potential coverups. Here are my general thoughts:

I'm kind of confused. Surely organizations by default have a power dynamic over employees, and managers over reports, and abusing this is bad? Maybe I'm confused and you mean a stronger thing by "duty of care".

1. Seems straightforwardly true to me, though I think you're maybe underestimating correlates of direct harm. (E.g. I expect in many of the cases cited, there are things like megalomania, insufficient humility, insufficient willingness to listen to contrary evidence, caring more about charismatic personalities than object-level arguments, etc.)
2. Speaking as someone in the subset of "women and minorities", I'd be pretty concerned about any form of special treatment or affordances given because "women and minorities" are at higher risk, aside from really obvious ones like being moderately more careful about male supervisor/female supervisee.
   1. In particular, this creates bad dynamics/incentive structures, like making it less likely to provide honest/critical feedback to "marginalized" groups, which is one of the things I was warned against in management training.
3. This seems correct. Also you want multiple trusted points of contact outside the organization, which I think both academia and rationality are failing at.
   1. EA organizations often have Julia Wise, but she's stretched too thin and thus has (arguably) made significant mistakes as a result, as pointed out in a different thread.
4. This seems right to me. I think "common sense" should be dereferenced a little for people coming from different cultures, but the company culture of the AngloAmerican elite seems not-crazy as a starting
5temporary_visitor_account4y
This seems like the beginning of a very good discussion, but:

1. I want to be clear that I'm not a member of the LW community, and I don't want to take up space here.
2. There are complex and interesting ideas in play on both sides that are hard to communicate in a back-and-forth, and are perhaps better saved for a structured long-form presentation.

To that end, I'll suggest that, if you like, we chat offline. I'm in NYC, for example, and you're welcome to get in touch via PM.
9Linch4y
To be clear, my own organization is a nonprofit. We are not interested in making money, nor in doing other things of low moral value. I currently think emulating the culture of normal companies is a better starting template than academia or other research nonprofits (many of whom have strong positions that they want to believe, and research that oh-so-interestingly happens to justify their pre-existing beliefs), though of course different cultures have different poisons that are more or less salient to different people. But yeah, let's take this offline.
4ChristianKl4y
That seems doubtful to me. Relative to victimization-survey numbers, reported rape figures suggest women are much more willing than men to report being raped. A woman who reports sexual harassment from a male mentor has it radically easier than a man who reports sexual harassment from a female mentor. (This does not diminish the fact that it's worth listening to reports from women, but the mental model behind believing that it's easy for men to report is wrong.)
[-]Unreal4y670

Attempt to get shared models on "Variations in Responses":

Quote from another comment by Mr. Davis Kingsley:

My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present weren't really the main thing.

I bid: 

This counts as counter-evidence, but it's unfortunately not very strong counter-evidence. Or at least it's weaker than one might naively believe. 

Why?

It is true of many groups that even while most of a group's activities or even the main point of a group's activities might be wholesome, above board, above water, beneficial, etc., it is possible that this is still secretly enabling the abuse of a silent or hidden minority. The minority that, in the end, is going to be easiest to dismiss, ridicule, or downplay. 

It might even be only ONE person who takes all the abuse. 

I think this dynamic is so fucked that most people don't want to admit that it's a real thing. How can a community or group that is mostly wholesome and good and happy be hiding atrocious skeletons in their closet? (Not that this is true of CFAR or MIRI, I'm not making that claim. I do get a 'vibe' from Zoe's post that it's what Leverage 1.0 migh... (read more)

Reply
[-]Viliam4y870

Please allow me to point out one difference between the Rationalist community and Leverage that is so obvious and huge that many people possibly have missed it.

The Rationalist community has a website called LessWrong, where people critical of the community can publicly voice their complaints and discuss them. For example, you can write an article accusing their key organizations of being abusive, and it will get upvoted and displayed on the front page, so that everyone can add their part of the story. The worst thing the high-status members of the community will do to you is publicly post their disagreement in a comment. In turn, you can disagree with them; and you will probably get upvoted, too.

Leverage Research makes you sign an NDA, preventing you from talking about your experience there. Most Leverage ex-members are in fact afraid to discuss their experience. Leverage even tries (unsuccessfully) to suppress the discussion of Leverage on LessWrong.

Considering this, do you find it credible that the dynamics of both groups is actually very similar? Because that seems to be the narrative of the post we are discussing here -- the very post that got upvoted and is displayed publicly ... (read more)

Reply
[-]Unreal4y370

Considering this, do you find it credible that the dynamics of both groups is actually very similar?

I'm a little unsure where this is coming from. I never explicitly made this comparison. 

That said, I was at a CFAR staff reunion recently where one of the talks was on 'narrative control' and we were certainly interested in the question about institutions and how they seem to employ mechanisms for (subtly or not) keeping people from looking at certain things or promoting particular thoughts or ideas. (I am not the biggest fan of the framing, because it feels like it has the 'poison'—a thing I've described in other comments.)

I'd like to be able to learn about these and other such mechanisms, and this is an inquiry I'm personally interested in. 

I do strongly object against making this kind of false equivalence.

I mostly trust that you, myself, and most readers can discern the differences that you're worried about conflating. But if you genuinely believe that a false equivalence might rise to prominence in our collective sense-making, I'm open to the possibility. If you check your expectations, do you expect that people will get confused about the gap between the Leverage situa... (read more)

Reply
[-]Viliam4y310

The conflation between Leverage and CFAR is made by the article. Most explicitly here...

Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).

...and generally, the article goes like "Zoe said that X happens in Leverage. A kinda similar thing happens in MIRI/CFAR, too." The entire article (except for the intro) is structured as a point-by-point comparison with Zoe's article.

Most commenters don't buy it. But I imagine (perhaps incorrectly) that if a person unfamiliar with MIRI/CFAR and the rationalist community in general were to read the article, their impression would be that the two are pretty similar. This is why I consider it quite important to explain, very clearly, that they are not. This debate is public... and I expect it to be quote-mined (by RationalWiki and consequently Wikipedia).

I hope it is fine for me to try to investigate the nature of these group dynamics. 

Sure, go ahead!

I will put forth that a silent minority has existed at CFAR, in the past, and that their experience was difficult and pretty traumatic for them. And I have strong reasons to believe they're still 'not over it'.

I w... (read more)

Reply
6Unreal4y
I seem less concerned about this than you do. I don't see the consequences of this being particularly bad, in expectation. It seems you believe it is important, and I hear that.  I'm frustrated by the way you are engaging in this... there's a strangely blithe tone, and I am reading it as somewhat mean?  If you want to engage in a curious, non-judgy, and open conversation about the way this conversation is playing out, I could be up for that (in a different medium, maybe email or text or a phone call or something). Continuing on the object level like this is not working for me. You can DM me if you want... but obviously fine to ignore this also. If I know you IRL, it is a little more important to me, but if I don't know you, then I'm fine with whatever happens. Well wishes. 
[-]Vladimir_Nesov4y280

This comment mostly makes good points in their own right, but I feel it's highly misleading to imply that those points are at all relevant to what Unreal's comment discussed. A policy doesn't need to be crucial to be good. A working doesn't need to be worse than terrible to get attention to its remaining flaws. Inaccuracy of a bug report should provoke a search for its better form, not nullify its salience.

Reply
[-]Vaniver4y*810

On the other side of it, why do people seem TOO DETERMINED to turn him into a scapegoat? Most of you don't sound like you really know him at all.

A blogger I read sometimes talks about his experience with lung cancer (decades ago), where people would ask his wife "so, he smoked, right?" and his wife would say "nope" and then they would look unsettled. He attributed it to something like "people want to feel like all health issues are deserved, and so their being good / in control will protect them." A world where people sometimes get lung cancer without having pressed the "give me lung cancer" button is scarier than the world where the only way to get it is by pressing the button.

I think there's something here where people are projecting all of the potential harm onto Michael, in a way that's sort of fair from a 'driving their actions' perspective (if they're worried about the effects of talking to him, maybe they shouldn't talk to him), but which really isn't owning the degree to which the effects they're worried about are caused by their instability or the them-Michael dynamic.

[A thing Anna and I discussed recently is, roughly, the tension between "telling the truth" and "not destabilizing the current regime"; I think it's easy to see there as being a core disagreement about whether or not it's better to see the way in which the organizations surrounding you are ___, and Michael is being thought of as some sort of pole for the "tell the truth, even if everything falls apart" principle.]

Reply
[-]Unreal4y160

+1 to your example and esp "isn't owning the degree to which the effects they're worried about are caused by their instability or the them-Michael dynamic." 

I also want to leave open the hypothesis that this thing isn't a one-sided dynamic, and Michael and/or his group is unintentionally contributing to it. Whereas the lung cancer example seems almost entirely one-sided. 

Reply
7Unreal4y
Sorry if my tone about "something slippery" was way too confronting. I have simultaneously a lot of compassion and a lot of faith in people's ability to 'handle difficult truths' or something like that. But that nuanced tone is hard to get across on the internet.  If you feel negatively impacted by my comment here, you are welcome to challenge me or confront me about it here or elsewhere. 
[-]lwanon4y620

I don't live in the Bay anymore and haven't been on LessWrong for a while, but was informed of this thread by a friend.

I have only one thing to say, and will not be commenting any further due to an NDA.

Stay away from Geoff Anders and whatever nth iteration of "Leverage" he's on now.

Reply
[-]Freyja4y120

You might not be able to say this, but I’m wondering whether it’s one of the NDAs Zoe references Geoff pressuring people to sign at the end of Leverage 1.0 in 2019.

Reply
[-]Unreal4y580

(This is not a direct response to PhoenixFriend's comment but I am inspired because of that comment, and I recommend reading theirs first.) 

Note: CFAR recently had a staff reunion that I was present for. I made updates, including going from "Anna is avoidant, afraid, and tries to control more than she ought" to "Anna is in the process of updating, seeking feedback, and has reaffirmed honesty as a guiding principle." Given this, I feel personally relaxed about CFAR being in good hands for now; otherwise, maybe I'd be more agitated about CFAR. 

I'm not interested in questions of CFAR's virtue or lack thereof or fighting over its reputation. So I'm just gonna talk about general group dynamics with CFAR as an example, and people can join on this segment of the convo if they want. 

I don't think CFAR is a cult, and things did not seem comparably bad to Leverage. This is almost a meaningless sentence? But let's get it out of the way? 

RE: Class distinctions within CFAR

So... my sense of the CFAR culture, even though it was indeed a small group of 12-ish people, was that there was a social hierarchy. Because as monkeys, of course, we would fall into such a pattern. 

I ... (read more)

Reply
[-]Duncan Sabien (Inactive)4y410

I endorse Unreal's commentary.

I more and more feel like it was a mistake to turn down my invitation to the recent staff reunion/speaking-for-the-dead, but I continue to feel like I could not, at the time, have convinced myself, by telling myself only true things, that it was safe for me to be there or that I was in fact welcome.

I re-mention this here because it accords with and marginally confirms:

going from "Anna is avoidant, afraid, and tries to control more than she ought" to "Anna is in the process of updating, seeking feedback, and has reaffirmed honesty as a guiding principle."

Like, "Duncan felt unsafe because of the former, and is now regretting his non-attendance because of signals and bits of information which are evidence of the latter."

Reply
[-]AnnaSalamon4y570

Here is a thread for detail disagreements, including nitpicks and including larger things, that aren’t necessarily meant to connect up with any particular claim about what overall narratives are accurate. (Or maybe the whole comment section is that, because this is LessWrong? Not sure.)

I’m starting this because local validity semantics are important, and because it’s easier to get details right if I (and probably others) can consider those details without having to pre-compute what they imply about overall narratives.

For me personally, part of the issue is that though I disagree with a couple of the OPs details, I also have some other details that support the larger narrative which are not included in the OP, probably because I have many experiences in the MIRI/CFAR/adjacent communities space that Jessicata doesn’t know and couldn’t include. And I keep expecting that if I post details without these kinds of conceptualizing statements, people will use this to make false inferences about my guesses about higher-order-bits of what happened.

Reply
[-]habryka4y1590

The post explicitly calls for thinking about how this situation is similar to what is happening/happened at Leverage, and I think that's a good thing to do. I do think that I do have specific evidence that makes me think that what happened at Leverage seemed pretty different from my experiences with CFAR/MIRI.

Like, I've talked to a lot of people about stuff that happened at Leverage in the last few days, and I do think that overall, the level of secrecy and paranoia about information leaks at Leverage seemed drastically higher than anywhere else in the community that I've seen, and I feel like the post is trying to draw some parallel here that fails to land for me (though it's also plausible it is pointing out a higher level of information control than I thought was present at MIRI/CFAR).

I have also had my disagreements with MIRI being more secretive, and think it comes with a high cost that I think has been underestimated by at least some of the leadership, but I haven't heard of people being "quarantined from their friends" because they attracted some "set of demons/bad objects that might infect others when they come into contact with them", which feels to me like a different lev... (read more)

Reply
[-]ChristianKl4y150

When it comes to agreements preventing disclosure of information, often there's no agreement to keep the existence of the agreement itself secret. If you don't think you can ethically (and given other risks) share the content that's protected by certain agreements, it would be worthwhile to share more about the agreements and with whom you have them. This might also be accompanied by a request to those parties to agree to lift the agreement. It's worthwhile to know who thinks they need to be protected by secrecy agreements.

Reply
[-]Unreal4y140

It has taken me about three days to mentally update more fully on this point. It seems worth highlighting now, using quotes from Oli's post: 

  • I've talked to a lot of people about stuff that happened at Leverage in the last few days, and I do think that overall, the level of secrecy and paranoia about information leaks at Leverage seemed drastically higher than anywhere else in the community that I've seen
  • I think the number of people who have been hurt by various things Leverage has done is really vastly larger than the number of people who have spoken out so far, in a ratio that I think is very different from what I believe is true about the rest of the community.

I am beginning to suspect that, even in the total privacy of their own minds, there are people who went through something at Leverage who can't have certain thoughts, out of fear. 

I believe it is not my place (or anyone's?) to force open a locked door, especially locked mental doors. 

Zoe's post may have initially given me the wrong impression—that other ex-Leverage people would also be able to articulate their experiences clearly and express their fears in a reasonable and open way. I guess I'm updating away ... (read more)

Reply
[-]Freyja4y450

I really don’t know about the experience of a lot of the other ex-Leveragers, but the time it took her to post it, the number and kind of allies she felt she needed before posting it, and the hedging qualifications within the post itself detailing her fears of retribution, plus just how many people’s initial responses to the post were to applaud her courage, might give you a sense that Zoe’s post was unusually, extremely difficult to make public, and that others might not have that same willingness yet (she even mentions it at the bottom, and presumably she knows more about how other ex-Leveragers feel than we do).

Reply
[-]LoganStrohl4y1460

I, um, don't have anything coherent to say yet. Just a heads up. I also don't really know where this comment should go.

But also I don't really expect to end up with anything coherent to say, and it is quite often the case that when I have something to say, people find it worthwhile to hear my incoherence anyway, because it contains things that underlay their own confused thoughts, and after hearing it they are able to un-confuse some of those thoughts and start making sense themselves. Or something. And I do have something incoherent to say. So here we go.

I think there's something wrong with the OP. I don't know what it is, yet. I'm hoping someone else might be able to work it out, or to see whatever it is that's causing me to say "something wrong" and then correctly identify it as whatever it actually is (possibly not "wrong" at all).

On the one hand, I feel familiarity in parts of your comment, Anna, about "matches my own experiences/observations/hearsay at and near MIRI and CFAR". Yet when you say "sensible", I feel, "no, the opposite of that".

Even though I can pick out several specific places where Jessicata talked about concrete events (e.g. "I believed that I was intrinsically... (read more)

Reply
[-]Vladimir_Nesov4y*370

This matches my impression in a certain sense. Specifically, the density of gears in the post (elements that would reliably hold arguments together, confer local validity, or pin them to reality) is low. It's a work of philosophy, not investigative journalism. So there is a lot of slack in shifting the narrative in any direction, which is dangerous for forming beliefs (as opposed to setting up new hypotheses), especially if done in a voice that is not your own. The narrative of the post is coherent and compelling, it's a good jumping-off point for developing it into beliefs and contingency plans, but the post itself can't be directly coerced into those things, and this epistemic status is not clearly associated with it.

Reply
9jessicata4y
How do you think Zoe's post, or mainstream journalism about the rationalist community (e.g. Cade Metz's article, perhaps there are other better ones I don't know about) compare on this metric? Are there any examples of particularly good writeups about the community and its history you know about?
7Vladimir_Nesov4y
I'm not saying that the post isn't good (I did say it's coherent and compelling), and I'm not at this moment aware of something better on its topic (though my ability to remain aware of such things is low, so that doesn't mean much). I'm saying specifically that gear density is low, so it's less suitable for belief formation than hypothesis setup. This is relevant as a more technical formulation of what I'm guessing LoganStrohl is gesturing at. I think investigative journalism is often terrible, as is philosophy, but the concepts are meaningful in characterizing types of content with respect to gear density, including high quality content.
8jessicata4y
I am intending this more as contribution of relevant information and initial models than firm conclusions; conclusions are easier to reach the more different relevant information and models are shared by different people, so I suppose I don't have a strong disagreement here.
4Vladimir_Nesov4y
Sure, and this is clear to me as a practitioner of the yoga of taking in everything only as a hypothesis/narrative, mining it for gears, and separately checking what beliefs happen to crystallize out of this, if any. But for someone who doesn't always make this distinction, not having a clear indication of the status of the source material needlessly increases epistemic hygiene risks, so it's a good norm to make epistemic status of content more legible. My guess is that LoganStrohl's impression is partly of violation of this norm (which I'm not even sure clearly happened), shared by a surprising number of upvoters.
9jessicata4y
Do you predict Logan's comment would have been much different if I had written "[epistemic status: contents of memory banks, arranged in a parseable semicoherent narrative sequence, which contains initial models that seem to compress the experiences in a Solomonoff sense better than alternative explanations, but which aren't intended to be final conclusions, given that only a small subset of the data has been revealed and better models are likely to be discovered in the future]"? I think this is to some degree implied by the title which starts with "My experience..." so I don't think this would have made a large difference, although I can't be sure about Logan's counterfactual comment.
5Vladimir_Nesov4y
I'm not sure, but the hypothesis I'm chasing in this thread, intended as a plausible steelman of Logan's comment, thinks so. One alternative that is also plausible to me is motivated cognition that would decry undesirable source material for low gear density, and that one predicts little change in response to more legible epistemic status.
4jessicata4y
I expect the alternative hypothesis to be true given the difference between the responses to this post and Zoe's post.
1Alex Vermillion4y
If you are genuinely asking, I think cutting that down into something slightly less clinical sounding (because it sounds sarcastic when formalized) would probably take a little steam out of that type of opposition, yes.
[-]Benquo4y*130

This reads like you feel compelled to avoid parsing the content of the OP, and instead intend to treat the criticisms it makes as a Lovecraftian horror the mind mustn't engage with. Attempts to interpret this sort of illegible intent-to-reject as though it were well-intentioned criticism end up looking like:

I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. I was catatonic for multiple days, afraid that by moving I would cause harm to those around me.

Very helpful to have a crisp example of this in text.

ETA: I blanked out the first few times I read Jessica's post on anti-normativity, but interpreted that accurately as my own intent to reject the information rather than projecting my rejection onto the post itself, treated that as a serious problem I wanted to address, and was able to parse it after several more attempts.

Reply
[-]Duncan Sabien (Inactive)4y170

I understood the first sentence of your comment to be something like "one of my hypotheses about Logan's reaction is that Logan has some internal mental pressure to not-parse or not-understand the content of what Jessica is trying to convey."

That makes sense to me as a hypothesis, if I've understood you, though I'd be curious for some guesses as to why someone might have such an internal mental pressure, and what it would be trying to accomplish or protect.

I didn't follow the rest of the comment, mostly due to various words like "this" and "it" having ambiguous referents.  Would you be willing to try everything after "attempts" again, using 3x as many words?

Reply
[-]Benquo4y23-1

Summary:

Logan reports a refusal to parse the content of the OP. Logan locates a problem nonspecifically in the OP, not in Logan's specific reaction to it. This implies a belief that it would be bad to receive information from Jessica.

Logan reports a refusal to parse the content of the OP

But then, "the people most mentally concerned" happens, and I'm like, Which people were most mentally concerned? What does it mean to be mentally concerned? How could the author tell that those people were mentally concerned? Then we have "with strange social metaphysics", and I want to know "what is social metaphysics?", "what is it for social metaphysics to be strange or not strange?" and "what is it to be mentally concerned with strange social metaphysics"? Next is "were marginalized". How were they marginalized? What caused the author to believe that they were marginalized? What is it for someone to be marginalized?

Most of this isn't even slightly ambiguous, and Jessica explains most of the things being asked about, with examples, in the body of the post.

Logan locates a nonspecific problem in the OP, not in Logan's response to it.

I just, also have this feeling like something... isn't just wrong h

... (read more)
Reply
[-]Viliam4y1400

I also don't know what "social metaphysics" means.

I get the mood of the story. If you look at specific accusations, here is what I found, maybe I overlooked something:

there were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis.  There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events including a relatively exclusive AI-focused one.

There are even cases of suicide in the Berkeley rationality community [...] associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption

a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years.

MIRI became very secretive about research.  Many researchers were working on secret projects, and I learned almost nothing about these.  I and other researchers were told not

... (read more)
Reply
[-]Eli Tyre4y190

This comment was very helpful. Thank you.

Reply
[-]Duncan Sabien (Inactive)4y100

Thanks for the expansion!  Mulling.

Reply
-1farp4y
Thanks for this articulate and vulnerable writeup. I do think we might all agree that the experience you are describing seems like a very good description of what somebody in a cult would go through while facing information that would trigger disillusionment. I am not asserting you are in a cult; maybe I should use more delicate language, but in context I would like to point out this (to me) obvious parallel.
[-]habryka4y1380

I feel like one really major component that is missing from the story above, in particular a number of the psychotic breaks, is to mention Michael Vassar and a bunch of the people he tends to hang out with. I don't have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them were in pretty close proximity to having spent a bunch of time (and in some of the cases after taking psychedelic drugs) with Michael. 

I think this is important because Michael has I think a very large psychological effect on people, and also has some bad tendencies to severely outgroup people who are not part of his very local social group, and also some history of attacking outsiders who behave in ways he doesn't like very viciously, including making quite a lot of very concrete threats (things like "I hope you will be guillotined, and the social justice community will find you and track you down and destroy your life, after I do everything I can to send them onto you"). I personally have found those threats to very drastically increase the stress I experience from inter... (read more)

Reply
[-]jessicata4y230

I don’t have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them were in pretty close proximity to having spent a bunch of time (and in some of the cases after taking psychedelic drugs) with Michael.

Of the 4 hospitalizations and 1 case of jail time I know about, 3 of those hospitalized (including me) were talking significantly with Michael, and the others weren't afaik (and neither were the 2 suicidal people), though obviously I couldn't know about all conversations that were happening. Michael wasn't talking much with Leverage people at the time.

I hadn't heard of the statement about guillotines, that seems pretty intense.

I talked with someone recently who hadn't been in the Berkeley scene specifically but who had heard that Michael was "mind-controlling" people into joining a cult, and decided to meet him in person, at which point he concluded that Michael was actually doing some of the unique interventions that could bring people out of cults, which often involves causing them to notice things they're looking away from. It's common for t... (read more)

Reply
[-]habryka4y800

IIRC the one case of jail time also had a substantial interaction with Michael relatively shortly before the psychotic break occurred. Though someone else might have better info here and should correct me if I am wrong. I don't know of any 4th case, so I believe you that they didn't have much to do with Michael. This makes the current record 4/5 to me, which sure seems pretty high.

Michael wasn't talking much with Leverage people at the time.

I did not intend to indicate Michael had any effect on Leverage people, or to say that all or even a majority of the difficult psychological problems that people had in the community are downstream of Michael. I do think he had a large effect on some of the dynamics you are talking about in the OP, and I think any picture of what happened/is happening seems very incomplete without him and the associated social cluster.

I think the part about Michael helping people notice that they are in some kind of bad environment seems plausible to me, though doesn't have most of my probability mass (~15%), and most of my probability mass (~60%) is indeed that Michael mostly just leverages the same mechanisms for building a pretty abusive and cult-like ingroup... (read more)

Reply
[-]Andrew Rettek4y370

IIRC the one case of jail time also had a substantial interaction with Michael relatively shortly before the psychotic break occurred

I was pretty involved in that case after the arrest and for several months after and spoke to MV about it, and AFAICT that person and Michael Vassar only met maybe once casually. I think he did spend a lot of time with others in MV's clique though. 

Reply
9habryka4y
Ah, yeah, my model is that the person had spent a lot of time with MV's clique, though I wasn't super confident they had talked to Michael in particular. Not sure whether I would still count this as being an effect of Michael's actions; seems murkier than I made it out to be in my comment.
[-]jessicata4y150

I think one of the ways of disambiguating here is to talk to people outside your social bubble, e.g. people who live in different places, people with different politics, people in different subcultures or on different websites (e.g. Twitter or Reddit), people you run into in different contexts, people who have had experience in different mainstream institutions (e.g. different academic departments, startups, mainstream corporations). Presumably, the more of a culty bubble you're in, the more prediction error this will generate, and the harder it will be to establish communication protocols across the gap. This establishes a point of comparison between people in bubble A vs B.

I spent a long part of the 2020 quarantine period with Michael and some friends of his (and friends of theirs) who were previously in a non-bay-area cult, which exposed me to a lot of new perspectives I didn't know about (not just theirs, but also those of some prison reform advocates and religious people), and made Michael seem less extremal or insular in comparison, since I wasn't just comparing him to the bubble of people who I already knew about.

Reply
5habryka4y
Hmm, I've tried to read this comment for something like 5 minutes, but I can't really figure out its logical structure. Let me give it a try in a more written format:

Presumably this is referring to distinguishing the hypothesis that Michael is kind of causing a bunch of cult-like problems from the hypothesis that he is helping people see problems that are actually present.

I don't understand this part. Why would there be a monotonic relationship here? I agree with the bubble part, and while I expect there to be a vague correlation, it doesn't feel like it measures anything like the core of what's going on. I wouldn't measure the cultishness of an economics department based on how good they are at talking to improv students. It might still be good for them to get better at talking to improv students, but failure to do so doesn't feel like particularly strong evidence to me (compared to other dimensions, like the degree to which they feel alienated from the rest of the world, or have psychotic breaks, or feel under a lot of social pressure to not speak out, or many other things that seem similarly straightforward to measure but feel like they get more at the core of the thing).

But also, I don't understand how I am supposed to disambiguate things here. Like, maybe the hypothesis here is that by doing this myself I could understand how insular my own environment is? I do think that seems like a reasonable point of evidence, though I also think my experiences have been very different from people at MIRI or CFAR. I also generally don't have a hard time establishing communication protocols across these kinds of gaps, as far as I can tell.

This is interesting, and definitely some evidence, and I appreciate you mentioning it.
[-]jessicata4y100

If you think the anecdote I shared is evidence, it seems like you agree with my theory to some extent? Or maybe you have a different theory for how it's relevant?

E.g. say you're an econ student, and there's this one person in the econ department who seems to have all these weird opinions about social behavior and think body language is unusually important. Then you go talk to some drama students and find that they have opinions that are even more extreme in the same direction. It seems like the update you should make is that you're in a more insular social context than the person with opinions on social behavior, who originally seemed to you to be in a small bubble that wasn't taking in a lot of relevant information.

(basically, a lot of what I'm asserting constitutes "being in a cult" is living in a simulation of an artificially small, closed world)

Reply
6habryka4y
The update was more straightforward, based on "I looked at some things that are definitely cults, what Michael does seems less extremal and insular in comparison, therefore it seems less likely for Michael to run into the same problems". I don't think that update required agreeing with your theory to any substantial degree.

I do think your paragraph still clarified things a bit for me, though with my current understanding, presumably the group to compare yourself against are less cults, and more just like, average people who are somewhat further out on some interesting dimension. And if you notice that average people seem really crazy and cult-like to you, then I do think this is something to pay attention to (though like, average people are also really crazy on lots of topics, like schooling and death and economics and various COVID related things that I feel pretty confident in, and so I don't think this is some kind of knockdown argument, though I do think having arrived at truths that large fractions of the population don't believe definitely increases the risks from insularity).
6jessicata4y
I definitely don't want to imply that agreement with the majority is a metric, rather the ability to have a discussion at all, to be able to see part of the world they're seeing and take that information into account in your own view (which might be called "interpretive labor" or "active listening").
3habryka4y
Agree. I do think the two are often kind of entwined (like, I am not capable of holding arbitrarily many maps of the world in my mind at the same time, so when I arrive at some unconventional belief that has broad consequences, the new models based on that belief will often replace more conventional models of the domain, and I will have to spend time regenerating the more conventional models and beliefs in conversation with someone who doesn't hold the unconventional belief, which does frequently make the conversation kind of harder, and which I still don't think is evidence of something going terribly wrong).
6jessicata4y
Oh, something that might not have been clear is that talking with other people Michael knows made it clear that Michael was less insular than MIRI/CFAR people (who would have been less able to talk with such a diverse group of people, afaict), not just that he was less insular than people in cults.
7Ben Pace4y
Do you know if the 3 people who were talking significantly with Michael did LSD at the time or with him? Erm... feel free to keep plausible deniability. Taking LSD seems to me like a pretty worthwhile thing to do in lots of contexts, and I'm willing to put a substantial amount of resources toward defending against legal attacks (or supporting you in the face of them) that are caused by you replying openly here. (I don't know if that's plausible; I've not thought about it much, so I mentioned it anyway.)
9jessicata4y
I had taken a psychedelic previously with Michael; one other person probably had; the other probably hadn't; I'm quite unsure of the latter two judgments. I'm not going to disambiguate about specific drugs.
3Chris_Leong4y
What kinds of things was he attacking people for?
[-]habryka4y140

I am not fully sure. I have heard him say very similar things to the above directed at Holden (and have heard reports of the things I put in quotes above). I think in that case the thing he is attacking him for is leveraging people's desire to be a morally good person in a way that they don't endorse (and plays into various guilt narratives), to get them to give him money, and to get them to dedicate their lives to Effective Altruism, and via that technique, preventing a substantial fraction of the world's top talent from dedicating themselves to actually important problems, and also causing them various forms of psychological harm.

Reply
3ChristianKl4y
Do you have an idea of when those things were directed at Holden?
0Gunnar_Zarncke4y
UPDATE: I mostly retract this comment. It was clarified that the threat was made in a mostly public context, which changes the frame for me significantly.

I think it is problematic to post a presumably very private communication (the threat) to such a broad audience. Even when it is correctly attributed, it lacks all the context of the situation it was uttered in. It lacks any amends that may or may not have been made, and exposes many people to the dynamics of the narrative resulting from the posting here. I'm not saying you shouldn't post it. I don't know the context and what you know either. But I think you should take ownership of the consequences of citing it and of how it might escalate from here (a norm proposed by Scott Adams a while ago).
[-]habryka4y120

I don't think the context in which I heard about this communication was very private. There was a period where Michael seemed to try to get people to attack GiveWell and Holden quite loudly, and the above was part of the things I heard from that time. The above did not strike me as a statement intended to be very private, and also my model of Michael has norms that encourage sharing this kind of thing, even if it happens in private communication.

Reply
5Gunnar_Zarncke4y
Thank you for the clarification. I think it is valuable to include this context in your comment. I will adjust my comment accordingly.
3Gunnar_Zarncke4y
Can somebody give me some hints as to which norms this could be downvoted under?
4[anonymous]4y
I didn't downvote, but I almost did because it seems like it's hard enough to reveal that kind of thing without also having to worry about social disapproval.
[-]AnnaSalamon4y290

I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes". This part of the plan was the same.

Re: “this part of the plan was the same”: IMO, some at CFAR were interested in helping some subset of people become Elon Musk, but this is different from the idea that everyone is supposed to become Musk and that that is the plan. IME there was usually mostly (though not invariably, which I expect led to problems; and for all I know “usually” may also have been the case in various parts and years of Leverage) acceptance for folks who did not wish to try to change themselves much.

Reply
[-]Eli Tyre4y160

Yeah, I very strongly don't endorse this as a description of CFAR's activities or of CFAR's goals, and I'm pretty surprised to hear that someone at CFAR said something like this (unless it was Val, in which case I'm less surprised). 

Most of my probability mass is on the CFAR instructor taking "become Elon Musk" to be a sort of generic, hyperbolic term for "become very capable."

Reply
[-]jessicata4y110