tcheasdfjkl

Frame Control

I want to express strong appreciation for the post's including not just indicators that frame control is occurring but also indicators that it is NOT occurring, and for its attempt to mitigate the likelihood that this concept will be misused in the future. I also appreciate that the comment section is full of people absorbing the concept while also working to set bounds on it and make it safer. I appreciate the epistemic environment that gives rise to this kind of caution.

Frame Control

One fairly central reaction I had to this post is not so much about the specific phenomenon of frame control as about the general observation that the aspects of an abusive situation that are worst to experience are often NOT the same as the aspects that are most clearly bad and easiest to convey objectively to another person.

This seems true; I have heard multiple people with objectively horrifying stories of abuse report that they don't actually care much about the objectively awful parts their friends are horrified by, and are instead really fucked up by stuff that's much harder to convey. (Probably in some cases that's the same general phenomenon described in this post, and in other cases it's some other interpersonal fuckery.)

I have also heard people report experiencing a situation as abusive while NOT having any clear-cut, objectively awful behavior to point to. It makes perfect sense that this would happen in some cases - because the abuser is savvy enough about what people will object to that they avoid those things, or because the abuser is genuinely trying to be good by following the ethical rules they know but is not managing to also be good in less legible ways, or for some other reason.

...It is also my experience that when humans make not-fully-objective reports about the beliefs/behaviors/words of other humans they disagree with and/or have some kind of adversarial relationship with, it is extremely common for such subjective accounts to be distorted in some way. For this reason, when I hear about an accusation of wrongdoing, I usually try to zero in on the objective claims being made, because (assuming I basically trust that the reporter is intending to be truthful) those are much less likely to be distorted or interpreted through a lens I think is unreasonable.

But this means that it's very hard for me to tell, as an outsider, when illegible wrongdoing has occurred. (I was going to say "illegible harm" but actually accusations of interpersonal wrongdoing are much stronger evidence of harm than of wrongdoing per se; I only need a very basic level of trust in someone's honesty to conclude they were harmed by a situation they're describing as abusive.) Indeed this feels kind of epistemically hopeless to ever evaluate from the outside?
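
(A toy Bayes calculation, with every number invented, just to show the asymmetry I mean between evidence-of-harm and evidence-of-wrongdoing:)

```python
# Toy Bayes calculation with invented numbers, illustrating why the
# same honest report can be strong evidence of harm but much weaker
# evidence of wrongdoing per se.

def posterior(prior: float, p_report_if_true: float, p_report_if_false: float) -> float:
    """P(hypothesis | report) via Bayes' rule."""
    p_report = prior * p_report_if_true + (1 - prior) * p_report_if_false
    return prior * p_report_if_true / p_report

# "They were harmed": an honest reporter rarely describes a situation
# as abusive unless it actually hurt them, so the likelihood ratio is large.
print(posterior(0.10, 0.90, 0.01))  # ~0.91

# "The accused did wrong": the same honest report passes through a
# subjective, possibly distorted lens, so the likelihood ratio is much smaller.
print(posterior(0.10, 0.90, 0.30))  # ~0.25
```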

I don't really know what to do with this thought but it felt important to note.

Lies, Damn Lies, and Fabricated Options

I think the examples are good, but I wish there were more examples that aren't highly controversial in some way, either politically or interpersonally. The "parental control" example is the one that least pinged my "eek, drama here" sense - certainly there are many who would disagree with your point there, but it doesn't feel like a locally live issue.

Lies, Damn Lies, and Fabricated Options

#EXAMPLE: when I want to get five things done and I kind of know I only have time or energy for three of them, but all the things are Very High Priority and unacceptable to leave unfinished, I sometimes find myself making plans for accomplishing all of them that are, if I'm honest with myself, pretty unrealistic. Here the fake option is "get all the things done by trying hard and believing in myself", and what actually happens is that some random subset of the things will not get done when people expect them to, and I'll feel bad about myself, and people will judge me for not doing what I said I'd do. Whereas the actually existing options involve prioritizing: deciding in advance which things I'll do and which I'll delay, based on those things' actual urgency, and letting affected people know in advance.

or similarly: sometimes I want to finish a task tonight, and also make it to a thing tomorrow morning, and get some more stuff done tomorrow, and it's kind of late, and I think "well, I can stay up late and finish it; I won't get quite enough sleep tonight but I can catch up later" - but often that's not a real option, because if I'm sleep-deprived enough I might be really bad at doing things tomorrow, or just sleep through my alarm.

in both of these cases a complication is that... the fake option is sometimes real? it's usually a bad gamble, but sometimes I do manage to do all five very important big tasks on my list, and sometimes I do manage to have a productive day on five hours of sleep. this feeds the wishful thinking, I think.
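
(For concreteness, here's a toy expected-value comparison of the fake option vs. the realistic one - every number below is invented purely for illustration; the point is just the structure of the gamble:)

```python
# Toy expected-value comparison between the fake option ("just do all
# five by trying hard") and the real option ("commit to three, warn
# people about the rest"). All probabilities and payoffs are made up.

def expected_value(p_success: float, win: float, loss: float) -> float:
    """Expected payoff of a gamble that succeeds with probability p_success."""
    return p_success * win + (1 - p_success) * loss

# Fake option: occasionally I really do finish all five tasks (the +5),
# but usually a random subset slips, expectations get violated, and I
# feel bad (the -2).
heroic = expected_value(p_success=0.15, win=5.0, loss=-2.0)

# Real option: three tasks get done almost surely, and telling people
# in advance about the other two costs far less than surprising them.
planned = expected_value(p_success=0.95, win=3.0, loss=2.0)

print(f"attempt all five: EV = {heroic:+.2f}")   # -0.95
print(f"commit to three:  EV = {planned:+.2f}")  # +2.95
```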

(I've gotten better at forecasting my own success rates at this kind of thing over time and making choices accordingly, and at communicating with people in ways that avoid overpromising, though I do still have this kind of issue sometimes.)

Book review: Lost Connections by Johann Hari

I really like this review, thanks! I appreciate the clearly structured summaries, the distillation of what's useful, and the careful, very reasonable critiques of the questionable parts.

Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions)

I like this one: https://www.theferrett.com/2015/06/22/i-never-said-no/

Also, a cool feature of his posts is that they all come with notes like this at the top:

(NOTE: Based on time elapsed since the posting of this entry, the BS-o-meter calculates this is 8.442% likely to be something that Ferrett now regrets.)
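
(I have no idea how the actual BS-o-meter works; presumably it's just some increasing function of post age. A toy sketch of that kind of calculation, with completely made-up shape and constants:)

```python
from datetime import datetime, timezone

# Hypothetical reconstruction of a "BS-o-meter": the probability that
# a post is now regretted grows with time since posting and saturates
# below 100%. The functional form and all constants are invented; the
# only thing taken from the note itself is "older post => more likely
# regretted".

def regret_probability(posted: datetime, now: datetime,
                       ceiling: float = 0.95,
                       halflife_years: float = 5.0) -> float:
    years = (now - posted).total_seconds() / (365.25 * 24 * 3600)
    return ceiling * (1.0 - 0.5 ** (years / halflife_years))

posted = datetime(2015, 6, 22, tzinfo=timezone.utc)
now = datetime(2021, 11, 1, tzinfo=timezone.utc)
print(f"{100 * regret_probability(posted, now):.3f}% likely to be regretted")
```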

Common knowledge about Leverage Research 1.0

> This might be easier to see when you consider how, from an outside perspective, many behaviors of the Rationality community that are, in fact, fine might seem cultish. Consider, for example, the numerous group houses, hero-worship of Eliezer, the tendency among Rationalists to hang out only with other Rationalists, the literal take over the world plan (AI), the prevalence of unusual psychological techniques (e.g., rationality training, circling), and the large number of other unusual cultural practices that are common in this community. To the outside world, these are cult-like behaviors. They do not seem cultish to Rationalists because the Rationality community is a well-liked ingroup and not a distrusted outgroup.

I think there's actually been a whole lot of discourse and thought about Are Rationalists A Cult, focusing on some of this same stuff? I think the most reasonable and true answers to this are generally along the lines of "the word 'cult' bundles together some weird but neutral stuff and some legitimately concerning stuff and some actually horrifying stuff, and rationalists-as-a-whole do some of the weird neutral stuff and occasionally (possibly more often than population baseline but not actually that often) veer into the legitimately concerning stuff and do not really do the actually horrifying stuff". This post, as I read it, is making the case that Leverage veered far more strongly into the "legitimately concerning" region of cult-adjacent space, and perhaps made contact with "actually horrifying"-space.

Notably, some of your examples are actually bad imo? "Hero-worship of Eliezer" is bad, and also, happily, not really much of a thing in at least the parts of ratspace I hang out in; "the tendency of rationalists to hang out with only other rationalists" is I think also not great, and taken to an extreme would be a pretty worrying sign, but in fact most rationalists I know do maintain social ties (including close ones) outside this group.

Unusual rationalist psychological techniques span a pretty wide range, and I have sometimes heard descriptions of such techniques/practices/dynamics and been wary or alarmed, and talked to other rationalists who had similar reactions (which I say not to invoke the authority of an invisible crowd that agrees with me but to note that rationalists do sometimes have negative "immune" responses to practices invented by other rationalists even if they're not associated with a specific disliked subgroup). Sort of similarly re: "take over the world plan", I do not really know enough about any specific person or group's AI-related aspirations to say how fair a summary that is, but... I think the fairer a summary it is, the more potentially worrying that is?

Which is to say, I do think that there are pretty neutral aspects of rationalist community (the group houses, the weird ingroup jargon, the enthusiasm for making everything a ritual) that may trip people's "this makes me think of cults" flag but are not actually worrying, but I don't think this means that rationalists should turn off their, uh, cult-detectors? Central-examples-of-cults do actually cause harm, and we do actually want to avoid those failure modes.

The Coordination Frontier: Sequence Intro

Tentatively excited to read the rest of the sequence, though I think I would have gotten more out of this if I knew more about what your motivating examples of rationalists failing to coordinate look like. It would be interesting to hear some examples, if any aren't too private/fraught to share.
