pjeby

Software developer and mindhacking instructor. Interested in intelligent feedback (especially of the empirical testing variety) on my new (temporarily free) ebook, A Minute To Unlimit You.

Comments

Frame Control

That's odd. When I googled "frame control" (prior to my comment) the first result was about programming, the second was this post, and the third was a 14-point article in which most of the illustrative examples were about ways of responding to social bullying, dominance displays, or manipulation of various sorts. That is, frame control as a reaction to social maneuvering by others.

That's also fairly consistent with things I've previously read, which establish the very first rule of frame control as not letting others trick, trap, or threaten you out of your intended frame for an interaction. And while some works do treat frame control as a zero-sum game, the core message of most things I've read has been about internal frame defense and non-zero-sum games.

For example, one book (literally entitled "Frame Control") notes many times that "basing the strength of your frame on the weakness of others is not a good strategy" and provides quite a lot of exercises that are aimed at changing one's internal beliefs and interpretation of situations, with frequent examples roughly of the form, "don't try to argue, fight, trick, persuade, etc. people - instead just accept what people say and hold to your opinion, instead of being emotionally dependent on others agreeing with you".

The type of "frame control" described in this post seems rather the opposite of that!

Frame Control

If I punch you and say "I am only doing this for your own good; someone needs to punish your sins to make you stronger; you will thank me later", that is frame control.

If I punch you and five minutes later say "no, I have never punched you; what made you make this horrible accusation", that is gaslighting.

They sound the same to me. In both cases, the intent is to undermine the target's perception of events in a way that supports continuing exploitation -- i.e. gaslighting.

So perhaps "gaslighting" is a special case of "frame control", but the main difference seems to be whether unambiguous sensory perceptions are denied (as oppposed to e.g. denying motivation).

Frame control is a general term that actually mostly refers to refusing to allow other people's frames to be treated as common knowledge. You need frame control in order to gaslight, but frame control is also a defense against gaslighting, in the sense that one is vulnerable to gaslighting to the extent one is unable to control one's own frame in response to provocative or manipulative communication.

Frame Control

Pretty much. The relevance for NLP is that if you're trying to help someone out of, say, a self-defeating mindset or victim state, then you need to be able to (at minimum) control your own frame so as not to get pulled into whatever role the person's problems try to assign you (e.g. rescuer or persecutor).

The main thing I dislike about this post's framing of frame control is that the original meaning of "frame control" is maintaining your own frame -- i.e. the antidote to the abusive and manipulative behaviors described in this post. Not allowing yourself to be sucked in or trapped by the frames that other people attempt to establish, intentionally or not.

Shoulder Advisors 101

The hard part of making something like this work is that if your parents were messed up enough that you need to do this, your concept of an "ideal" parent is probably pretty broken, though perhaps in subtle ways. To get this kind of thing right, I had to realize a lot of counterintuitive things about parenting that aren't well understood in popular culture.

(Also, if you do get it right, then a lot of the time you can just use memory reconsolidation on the events where things didn't work out the right way, and then you don't need the shoulder advising on that topic any more, because the new response is embedded in the schema for responding to situations like that.)

Shoulder Advisors 101

Consider that GPT doesn't have any of that fancy stuff and yet can generate dialogues of semi-consistent characters. Shoulder advisors can be slightly-fancier text bots just by adding audio tone and facial expressions to what is being prompted and predicted.
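
To make the "text bot" framing concrete, here's a minimal sketch using the Hugging Face transformers library with a small GPT-2 model (the advisor's name, persona, and dialogue are purely illustrative, not anything from the post):

```python
from transformers import pipeline

# Any causal language model will do; gpt2 is just small enough to demo.
generator = pipeline("text-generation", model="gpt2")

# The "character" is nothing but a persona description plus a dialogue
# prefix; consistency comes from the model predicting in-character text.
persona = (
    "The following is a conversation with Ada, a blunt but kind "
    "engineering mentor who answers in one or two sentences.\n"
)
dialogue = "Me: I want to quit this project.\nAda:"

result = generator(
    persona + dialogue,
    max_new_tokens=40,
    do_sample=True,
    temperature=0.8,
)
print(result[0]["generated_text"])
```

A shoulder advisor, on this view, is the same loop run in imagination, with the output rendered as a remembered voice and face instead of text.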

Notes on Shame

a term that has many, conflicting definitions in popular use

Can you point to a modern popular use of your definition? As far as I'm aware, the current popular (late 20th/21st century) usage is much closer to my definition than the one you're using. I've also not seen any dictionary definitions that reference one's own standards (vs. implied social standards such as "impropriety" or "foolishness").

It just seems to me that referencing one's own standards is a very odd carve-out in the definition, as is calling it merely "unpleasant" (vs. dictionary terms saying things like "painful" and "humiliating").

Something that is merely unpleasant and references only one's own standards sounds much more like the emotion of "regret" (wishing you'd done something different) than the emotion of shame (a sense of public disregard and low worth).

Your usage seems to me like saying that "rage is a virtue because to rage is to act against things that are unjust", while ignoring the fact that the popular understanding of the word "rage" is more like "anger to the point of irrational, destructive or counterproductive action". You can redefine the term in an excessively narrow way, but it doesn't help anybody understand what you're getting at.

Notice, too, that if you simply called it regret, much of the article would be dissolved: you wouldn't need to address toxic shame or virtue signaling, since these aren't terribly relevant to regret. The article could be considerably shorter, which suggests that choosing a better term would be of empirical benefit. I also can't help but notice that all of the other top-level comments are about this terminology confusion, and would have been obviated by choosing regret or another term for a less problematic emotion.

Notes on Shame

I’m going to use “shame” to mean an unpleasant sense that one has failed to live up to one’s own standards in some way.

That's regret. Shame is an unpleasant sense that one is not worthy of good treatment from others, based on an external socially-defined standard.

This would probably be why you think shame would be a useful thing or a virtue. Regret is, shame isn't.

Consequentialism may cost you

Can't you assume LCPW as hypothetical?

The question isn't "can't I", but "why should I?" The LCPW is a tool for strengthening an argument against something; it's not something that requires a person to accept or answer arbitrary hypotheticals.

As noted at the end of the article, the recommendation is to distinguish between rejecting the entire argument and accepting the argument contingent on an inconvenient fact. In this particular case, I categorically reject the argument that trolley problems should be answered in a utilitarian way, because I am not a utilitarian.

Consequentialism may cost you

Doesn't "sufficiently close relation" also apply with some strength to any being of the same species? Consider a species A is splitting into two subspecies A1 and A2. This could be due to members of A1 preferring to save other members of A1. Once A2 dies, A1 retains the trait of wanting to save other members of A1.

Only after the gene is already essentially universal in the general population. When a gene with altruistic inclinations first appears, it will only increase its propagation by favoring others with the same gene. Otherwise, self-sacrifice will more likely extinguish the gene than spread it.
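
The textbook formalization of this point, for anyone who wants it (Hamilton's rule, not something spelled out in the exchange above), says a gene for altruistic behavior is favored by selection only when

rB > C

where r is the relatedness of the beneficiary (roughly, the probability they carry the same gene), B is the reproductive benefit to them, and C is the cost to the altruist. For strangers r is approximately 0, so no cost is worth paying -- which is the "extinguished" case above.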

I would be interested in knowing the Least Convenient World stipulations, and what this phrase means.

See The Least Convenient Possible World for where the term was introduced.

Precedents and perverse incentives can be ruled out by assuming none exist, right? Assume in the hypothetical that nobody will ever get to know what choice you made after you made it.

But answering the question means that somebody will know: whoever is asking the question and anyone present to hear the answer. And since it's a hypothetical, the most relevant incentives and consequences are those for the social situation.

I didn't get how a hypothetical with two clear choices could be a false dichotomy. Assume that refusing to choose results in something far worse than either choice.

Far worse for whom? In what way? If you're taking a utilitarian position of the greatest good for the greatest number, then the choice is obvious. But consequentialism isn't utilitarianism: you can choose what's best for you, personally, and what's best for me depends heavily on the details.

I agree, but in my mind that seems a lot like: their feelings and values are wired deontologically, their rational brain (incorrectly) thinks they are consequentialists, and they're finding justifications for their thoughts. Unless of course they find a really good justification. (And even if they did find one, I'd be suspicious of whether the justification came after the feeling or action... or before.)

But that's you projecting your own experience onto somebody else, aka the Typical Mind Fallacy.

My experience of being asked a utilitarian hypothetical is, "what am I going to get out of answering this stupid hypothetical?" And mostly the answer is, "nothing good". So I'm going to attack the premise right away. It's got zero to do with killing or not killing: my answer to the generalized question of "is it ever a good thing to kill somebody to save somebody else" is sure, of course, and that can be true even at a 1:1 trade of lives.

Hell, it can be a good thing to kill somebody even if it's not saving any lives. The more important ethical question in my mind is consent, because it's a hell of a lot harder to construct a justification to kill somebody without their consent, and my priors suggest that any situation that seems to be generating such a justification is more likely to be an illusion or false dichotomy that needs more time spent on figuring out what's actually going on.

And even then, that's not the same as saying that I would personally ever consent to killing someone, whatever the justification. But that's not because I have a deontological rule saying "never do that"; it's because I'm reasonably certain that no real good can ever come of it absent some personal benefit, like saving my own life or that of my spouse. For example, if the two people I'm saving are myself and my wife, and the person being killed is somebody attacking us, then I'm much less likely to have an issue with using lethal force.

Based on a glance at the paper you referenced, though, I'm going to say that the authors incorrectly conflated consequentialism and utilitarianism. You can be a consequentialist without being a utilitarian, and even there I'm not 100% sure you can't have a consistent utilitarian position based on utility as seen by you, as opposed to an impartial interpretation of utility.

At the very least, what the paper is specifically saying is that people don't like impartial beneficence. That is, we want to be friends with people who will treat their friends better than everybody else. This is natural and also pretty darn obvious... and has zero to do with consequentialism as discussed on LW, where consequentialism refers to an individual agent's utility function, and it's perfectly valid for an individual's utility function to privilege friends and family.

Consequentialism may cost you

re: second quote, I mean that evolution selects for those traits that ensure collective survival

It really, really doesn't. It selects for the proliferation of genes that proliferate, which is very, very different.

A trait where "one person is willing to kill 10 others to ensure their own survival" will be less selected for compared to one where "one person is willing to die to save someone else".

No, it selects for "one person is willing to die to save someone who is a sufficiently close relation, especially of the next generation". If there were no correlation between the trait and relatedness, the trait would be extinguished.

(And being willing to kill 10 others isn't selected against either, so long as the others are strangers or rivals for resources, mates, etc.)

Selection works on relative frequency of genes, not on groups or individuals. To the extent that we have any sort of group feeling or behaviors at all, this is due to commonality of genes. A gene won't be universal in a population unless it provides its carriers with some sort of advantage over non-carriers. If there's no individual advantage (or at least gene-specific advantage), it won't become universal.
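
A toy simulation makes the gene's-eye view concrete (my illustration, not the commenter's; the payoff numbers are arbitrary). An "altruism" gene pays cost C to hand benefit B to someone else. When help goes to random individuals, carriers pay the cost while everyone shares the benefit, so the gene declines; when help goes only to fellow carriers, it spreads:

```python
import random

def step(pop, kin_directed, B=0.5, C=0.2):
    """One generation: altruists pay C to give B to a recipient,
    then the population reproduces in proportion to fitness."""
    n = len(pop)
    fitness = [1.0] * n
    altruists = [i for i, a in enumerate(pop) if a]
    for i in altruists:
        fitness[i] -= C  # the altruist pays the cost
        pool = altruists if kin_directed else list(range(n))
        recipients = [j for j in pool if j != i]
        if recipients:
            fitness[random.choice(recipients)] += B  # recipient gains
    weights = [max(f, 0.0) for f in fitness]
    # fitness-proportional reproduction of the two types
    return random.choices(pop, weights=weights, k=n)

for kin in (False, True):
    pop = [random.random() < 0.5 for _ in range(200)]  # 50% carriers
    for _ in range(100):
        pop = step(pop, kin_directed=kin)
    label = "help carriers only" if kin else "help anyone"
    print(f"{label}: carrier frequency = {sum(pop) / len(pop):.2f}")
```

Typically the "help anyone" condition collapses toward zero carriers while "help carriers only" drifts to fixation -- which is the relative-gene-frequency point above.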

Suppose your friend asks you, purely as a hypothetical, whether you would murder someone to save two others. Simply answering this question by indicating you're willing to murder reduces trust with your friend.

This sounds less like "consequentialism reduces trust" than "willingness to murder reduces trust" or perhaps "utilitarianism reduces trust".

Now maybe LessWrong-style consequentialism requires you to lie to your friend; that hasn't been studied.

I would expect a LW-style consequentialist to reject such a simple framework as "kill one person to save two" unless an awful lot of Least Convenient World stipulations ruled out the alternatives, and/or to prefer letting two people die in the short run rather than establishing certain horrible precedents or perverse incentives in the long run, or to reject the whole thing as a false dichotomy, etc.

Really, I find it hard to imagine a rational consequentialist simply taking the scenario at face value and agreeing to straight-up murder even in a fairly hypothetical discussion.
