Unreal


Tears Must Flow

Some of the reactions to this post baffle and astound me. I don't know what to say. These reactions feel totally lacking in empathy and compassion. And also like they don't understand the grieving process. 

It seems like a reasonable response to cry with grief over the suffering and murder of beings, no?

If someone started crying about this in front of me, my response would be one of sympathy. It just seems like such a reasonable thing to be upset about! I've cried about the Holocaust, about factory farming, about civilizational inadequacy, about nuking Japan, etc. These things are tragic. Grief doesn't necessarily wait for the family dinner to be over or for people to first be comfortable with your crying. 

It seems like a stupid suggestion to avoid all situations where one might encounter a factory-farmed animal. What are you advocating? Avoiding friends and family whenever you expect to be upset, because 'being upset' is 'bad'? You are confused. "Bad feelings" are not inherently "bad". This is a weird map-territory confusion. 

The tactic of 'avoid things that make you feel bad' is a terrible one. Equating 'feeling bad' with 'something bad happening' is also terrible. It is entirely possible to be very happy deep down while one is grieving. Grieving is a healthy, good, and normal human reaction. Even at Thanksgiving, over a dead turkey. 

Unwitting cult leaders

I was actually thinking of writing about a concept I have called 'cult brain' but this post covers the basic idea. I'm glad! Nice work, Kaj. 

One thing that I want to see more of in this post is treating cultists as agentic. I think for some reason we tend to treat them as non-agentic and helpless, and this frustrates me to no end. Potential cultists have the ability to do something about their own minds and how they use them. 

Leaders should notice their own reactions and behaviors in response to cultists, but I think it becomes codependent the moment a leader takes responsibility for a cultist's behavior or thought process. 

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

Claim: Many things are happening on a below-consciousness level that 'matter' to a person. And if you disrupted those things without changing a person's subjective experience of them (or did it without their notice), this should still count as harm. 

This idea that 'harm' and the level of that harm is mostly a matter of the subjective experience of that harm goes against my model of trauma and suffering. 

Trauma is stored in the body whether we are conscious of it or not. And in fact I think many people are not conscious of their traumas. I'd still call it 'harm' regardless of their conscious awareness. 

I have friends who were circumcised before they could form memories. They don't remember it. Through healing work, or through other signs of trauma, they came to realize that this early surgery was likely traumatic. I think Eliezer is sort of saying that this only counts as harm to the degree that it consciously affects them later or something? I disagree with this take, and I think it goes against moral intuition. (If one sees a baby screaming in pain, the impulse is to relieve their 'pain' even if they might not be having a conscious experience of it.) 

If I take a "non-sentient" chicken and cut off its wings, and I watch it as it helplessly tries to fly repeatedly, but is unable to, this strikes me as a form of harm to the chicken and its values even if the chicken is not having a subjective experience of its condition. 

Also, from my investigations, much suffering does not reach the level of awareness. When a person investigates very closely and zooms in on experiences (such as through meditation), suffering is 'found' to be 'occurring' at a level of granularity and detail that was not previously accessible. But becoming aware of this suffering does not increase the amount of suffering that was occurring; you just become aware of the amount that was already there. It's an "oh" moment. And this can actually help relieve the suffering, by becoming aware of it. 

This suggests that maybe beings who lack the ability of awareness and observation to see their own condition actually are suffering more. This accords with my own journey in relieving personal suffering. More awareness was generally helpful. Whereas as a child, I was more 'braindead' in some way. Not very 'conscious'.

One could make similar inquiries into 'dissociation'. If a person is regularly dissociated and doesn't feel things very intensely, does it make it more okay to hurt them? 

Also my model of pain is that pain != suffering, which might be relevant here. Not sure. 

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

Why is that? You're still tying moral relevance to a subjective experience? 

I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness

I had another complaint about that tweet, which... you do not seem to have, but I want to bring up anyway. 

Why do we assume that 'consciousness' or 'sentience' implies 'morally relevant'? And that a lack of consciousness (if we could prove that) would also imply 'not morally relevant'? 

It seems bad to me to torture chickens even if it turns out they aren't self-aware. But lots of people seem to take this as a major crux for them. 

If I torture to death a permanently brain-damaged comatose person whom no one will miss, is that 'fine'? 

I am angry about this assumption; it seems too convenient. 

Zoe Curzi's Experience with Leverage Research

Anna asked a relevant follow-up question. She said something like: I expect picketing to be [a more balanced response] because it's a public action. What about [non-public] (hidden) acts of retaliation? 

I saw some of his reaction to this before my internet cut out again. (I think he could have used a hug in that moment... or maybe just me, maybe I could use a hug right now.) 😣

From the little glimpses I got (pretty much only during the first hour Q&A section), I got this sense (this is my own feelings and intuitions speaking):

  • I did not sense him being 'in cooperate mode' on the object level, but he seemed to be 'picking cooperate' on a meta level. He was trying to act according to good principles. E.g. by doing the video at all, and the way he tried to answer Qs by saying only true things. He tried not to come from a defensive place.
  • He seemed to keep to his own 'side of the street'. Did not try to make claims about others, did not really offer models of others, did not speculate. I think he may have also been doing the same thing with the people in the chat? (I dunno tho, I didn't see 90%.) Seems 'cleaner' to do it this way and avoids a lot of potential issues (like saying something that's someone else's to say). But meh, it's also too bad we didn't get to see his models about the people. 
Zoe Curzi's Experience with Leverage Research

Well, the video is lost. But my friend Ben Pace (do you know him? he is great) was kind enough to take notes on what he said specifically in response to my question. 

My question was something like: "Why do you think some people are afraid of retaliation from you? Have you made any threats? Have you ever retaliated against a Leverage associate?" This is not the exact wording but close enough. I used the words "spiteful, retaliatory, or punishing" so he repeats that in his answer. 

I also explicitly told him he didn't have to answer any of these questions; I wasn't demanding that he answer them. 

I am pasting Geoff's response below. 

  • Great questions.
  • Um.
  • Off the top of my head I don’t recall spiteful retaliatory or punishing actions. Um. I do think that I… There’s gotta be some other category of actions taken in anger where… I can think of angry remarks that I’ve made, absolutely. I can think of some actions that don’t pertain to Leverage associates that after thinking about for a while I realized there was something I was pretty angry about. In general I try to be really constructive, there’s definitely, let’s see, so… There’s definitely a mode that, it’s like, I like to think of all sorts of different possibilities of things to do, for example this was for EAG a while back, we were going to go and table at EAG and see if there’s anyone who is good to hire, we received word from CEA that we weren’t allowed to table there, super mad about that because we created the EA Summit series and handed it to CEA, being disinvited from the thing we started, I was really mad “let’s go picket and set up in front of EAG and tell people about this”, y’know a number of people responded to that suggestion really negatively, and… Maybe the thing I want to say is I think there’s something like appropriate levels of response, and the thing I really want to do is to have appropriate levels of response. It’s super hard to never get mad or never be insulted but the thing I try super hard to do is to get to the point where there’s calibrated response. So maybe y’know there’s something in there… I have been in fact really surprised when people talked about extreme retaliation, I’m like “What.” (!).
  • There’s definitely a line of thought I’ve seen around projects where they deal with important things, where people are like “The project is so important we must do anything to protect it” which I don’t agree with, y’know, I shut down Leverage because I talked to someone who was suffering too much and I was like “No” and then that was that.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

A lot of the comments in response to Connor's point are turning this into a single axis with 'mainstream norms' on one end and 'weird/DIY norms' on the other and playing tug-of-war, but I actually think the thing is way more nuanced than that framing suggests. 

Proposal: 

  • Investigate the phenomenon of totalization. Where does it come from, what motivates it, what kinds of people fall into it... To what extent is it coming from external vs internal pressure? Are there 'good' kinds of totalizing and 'bad' kinds? 
  • Among people who totalize, what kinds of vulnerabilities do they experience as a result? Do they get exploited more by bad actors? Do they make common sense mistakes? Etc.

I am willing to bet there is a 'good' kind of totalizing and a 'bad' kind. And I think my comment about elitism was one of the bad kinds. And I think it's not that hard to tell which is which? I think it's hard to tell 'from the inside' but I... think I could tell from the outside with enough observation and asking them questions? 

A very basic hypothesis is: To the extent that a totalizing impulse is coming from addiction (underspecified term here, I don't want to unpack rn), it is not healthy. To the extent that a totalizing impulse is coming from an open-hearted, non-clingy, soulful conviction, it is healthy. 

I would test that hypothesis, if it were my project. Others may have different hypotheses. 

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Hmm, I want to point out I did not say anything about what I expected would have helped me or helped 'protect' me. I don't see anything on that in my comment... 

I also don't think it'd be good for me to be saved from my problems...? But maybe I'm misunderstanding what you meant. 

I definitely like Connor's post. My "hear hear" was a kind of friendly encouragement for him speaking to something that felt real. I like the totalization concept. Was a good comment imo. 

I do not particularly endorse his proposal... It seems like a non-starter. A better proposal might be to run some workshops or something that try to investigate this 'totalization' phenomenon in the community and what's going on with it. That sounds fun! I'd totally be into doing this. Prob can't though. 

Zoe Curzi's Experience with Leverage Research

Unfortunately for me, there is apparently no video recording available on Twitch for this stream? (There are two short clips, but not the full broadcast.) 

If anyone has a link to it, please include it here. That'd be great! 
