pjeby

Software developer and mindhacking instructor. Interested in intelligent feedback (especially of the empirical testing variety) on my new (temporarily free) ebook, A Minute To Unlimit You.

pjeby's Comments

The Curse Of The Counterfactual

I'm not sure I understand your question. In order to think that there's a problem with how much love he's providing, you have to have a counterfactual in which he's supposed to be providing more. For the amount of love to be insufficient, there has to be something to compare it to. If you aren't (implicitly) comparing, then there is nothing to draw it to your attention in the first place.

In other words, you wouldn't keep saying "I guess he didn't", because if you're not comparing, then there's not an issue any more -- it's just history, not an unresolved problem.

It sounds to me like the experience you're talking about is incomplete grief: a situation where someone is accepting (at least intellectually) that they aren't going to get the love they want in the future, but has yet to accept that they didn't get it in the past. Because as long as they think they should have gotten it, the grieving is still incomplete.

As for deadening, letting go of things generally makes us more alive, not less, because we stop obsessing over the things we can't change, and move on to enjoying what we have (or can actually get). But before one actually lets go of something, the idea of letting go feels like it would be a loss.

As I suggested in the article, our brains treat unacknowledged losses like they are still assets on our inner books of utility. So the idea of writing off a loss feels like it is a loss. But once the write-off is actually done, then it no longer feels like a loss, because it's now the status quo, and therefore it doesn't keep coming back to conscious attention the way a perceived threat of loss does.

Implementing an Idea-Management System

That just means you've not seen that many wikis. ;-) For example, the ConnectedText personal wiki software includes backlinks, date-specific pages, and graph visualization of link structures, much like Roam. It also has the ability to include pages in others, and some of Roam's other features could likely be emulated using CT's scripting and templating systems, though it'd be a pain.

I actually own a copy of an older version of CT but stopped using it many years ago because it's not terribly interoperable with anything else.

Skill and leverage

This presupposes that you know what the difficulty level is for the person in question. It also ignores a ton of stuff that can get between "easy thing" and "actual doing", like what their priorities, interests, and abilities are.

Let's say Bob has a really important project he needs to work on. He's stuck and obsessed with it. Meanwhile, his room goes uncleaned and his dishwasher unloaded. He's not accomplishing anything, but he's not doing those simple things because he's pouring energy into something else.

Now let's consider Alice. Alice is a blind paraplegic computer programmer, who runs rings around her peers when it comes to coding. Programming for her is super easy, barely an inconvenience. But cleaning up the room or loading the dishwasher are not exactly her strengths.

And then there's Carl. He spends hours playing video games at insanely high difficulty levels that nobody else can match. But putting away dishes is boring, and doesn't get him that sweet sweet cred... or endorsement deals and advertising revenue. He'll do it tomorrow, for sure. Maybe. Or maybe his mom will.

None of these people's rooms are getting cleaned or dishwashers loaded, but that fact by itself tells you very little about what any of them can accomplish. (After all, Bob could easily be a successful best-selling author who lets his place go to hell when he gets stuck in the middle of a book project.)

The Curse Of The Counterfactual

Hey Kaj. I was actually looking for feedback in email, but this is good too. :) (I'll update the article to clarify that point.) Thanks for the info about your friend's experience: the answer to their question is that the act of visualizing requires them to access implicit information from memory via direct (if remembered) experience, vs. simply verbalizing cached facts. It is structurally similar to scanning one's memory for past experiences, looking for something that matches a pattern of feeling or behavior. I'm only using the term "felt sense" because there's no sense (no pun intended) in creating yet another name for something that is already described in other places. (Also, some people actually do access the information kinesthetically, i.e., by feeling their way through the recalled day.)

As to your transcript, I see you transitioned from the Quick Questions right to the Work, which is a good move in the event one objects to one's desires. But I think perhaps you've missed something (two somethings, actually) about how the Work works.

So, when you got to "what happens when you believe that thought?", you took the response you got as an objection from a part (mixing IFS in), rather than simply taking the response at face value. In other words: "What happens when I believe this thought? I feel like the reins are pulling me to my death." You actually got the answer to your question! When you believe the thought that it's impossible to do anything meaningful because you'll get pulled, the consequence is just that: feeling like you're being pulled to death.

The next question, "who would I be without that thought?" would then be helpful in targeting the specific belief, because objections to letting go of the belief directly imply the state of the world (or yourself) that your beliefs predict would result from you not believing it.

This might've avoided a lot of the going in circles you did from this point on in the transcript, and led you directly to the target schema with less... well, thrashing between ideas, for lack of a better word.

The reason I've moved towards using the Work as a prime investigative tool is that it lets you walk the belief network really fast compared to other methods. Getting your brain to object to getting rid of a belief forces it to reveal what the next belief up the branch is with far less wasted movement.

And as you can see, starting from a place where you already have a concrete objection (e.g. using a tool like the Quick Questions), you can move really rapidly to the real "meat" of an issue.

That being said, the Quick Questions are designed to solve logistical problems, more than emotional ones -- aside from the emotional issue of focusing on the problems instead of on solutions. A Minute To Unlimit You is just a mental jujitsu move to disengage your brain's planning system from "There's a Problem" mode and put it back into "Seeking Solutions" mode.

Of course, that's only one module of your brain's motivation system, as the ebook mentions. There are four other modules (like the two that handle punishment and virtue-signalling) that can be involved in a motivation problem, but it's usually easiest to begin with the Quick Questions to rule out a module 1 mismatch first, even if the problems being predicted turn out to be coming from one of the other modules.

The Curse Of The Counterfactual

Hi Ben, thanks for commenting.

What I'd first like to say is that negative reinforcement and punishment are actually two different things. What you're describing as "punishment" is actually just negative feedback: i.e. noticing that something you're doing isn't working. But punishment is something we do to raise someone's costs for bad action. This does not necessarily result in any reinforcement for the subject of the punishment.

In "Ingvar's" case, for example, he constantly punished himself for surfing the internet, but this was actually positively reinforcing for the behavior of self-punishment itself, and did nothing to discourage the internet surfing behavior!

Even within the technical context of behaviorist learning, "punish" and "negatively reinforce" are two different things... and punishment does not do what you seem to be thinking it does.

Technically, what happens when you punish an animal or person is that you end up positively reinforcing whatever works quickest to stop the punishment. Punishment, in and of itself, does not actually alter behavior. The only thing it trains you (or any other animal) to do is to avoid the punishment.

And when you are applying social punishment of the type described in this article, the thing that stops it is (e.g. in Sara's case) ideation. The thing that turns off self-punishment is imagining a future in which you are a better person and the bad thing can't happen any more. So, in a behaviorist reinforcement sense, by punishing yourself in this fashion you are training yourself to imagine better futures, because that's the fastest way to stop the pain.

IOW, properly understood, the only functional use of punishment is to raise the costs of bad behavior. But in a self-applied case, raising your own costs is not a functional thing to do, especially when you factor in the moral licensing for being virtuously self-punishing, and the fact that you're effectively training yourself to imagine things being better, instead of actually doing anything to make them better.

So in that sense, I will say, no, it's not the case that punishing yourself (using either the social or behaviorist definition) is a useful strategy for anything other than convincing others not to punish you (worse) for the same thing. That is the one way in which punishing yourself is actually useful, and it's often how we learned to do it. (That is, to punish ourselves for the same things our parents punished us for, to lessen their desire to punish us.)

That being said, we probably have different definitions of what "punishment" actually consists of. In this post, I mean it in the sense of "attacking reputation to raise the target's costs", not "negative feedback to shape behavior", which is something else altogether.

People routinely confuse these two things, because our moral bias tells us that we must not let wrongs go unpunished. So we distort what behaviorism actually says about learning into "reward and punishment", when in fact neither reward nor punishment is a reliable reinforcement strategy! (For one thing, rewards and punishments are usually too far away in time from the actual behavior to have any meaningful effect, though that's not the only difference.)

The mindset of reinforcing actual behavior and the mindset of rewarding and punishing what we think should be done are very, very different in practice, but our brains are biased towards confusing the two.

As for Sara, I think perhaps you are overgeneralizing from Carlos's example. I have different examples in the article because there are many different ways for "punishing based on counterfactuals" to manifest. What I did not cover in Sara's case (or Ingvar's for that matter) is that the surface-level "shoulds" being discussed were not the root issue. As I mention later in the article, one begins with whatever one is aware of, but working on these initial "should" statements then leads us deeper into the belief network.

For example, Ingvar believed he should have been working, and should have been able to finish in a certain amount of time. But the solution to this problem was not "grieve for not having worked"! It was discovering that the real issue was believing he was a bad person unless he was working. Removing that belief stopped him from generating counterfactuals about how he should have been working, which then led to him thinking of ways to actually get the work done.

IOW, it's the deactivation of the punishment system that's relevant here, because its activation blocked him from thinking about the actual process of work and the trade-offs involved, due to the "sacredness" of punishing himself for being a lazy evildoer who wasn't working.

In the same way, Sara's root issue isn't that she's punishing herself for her failed actions, it's that she believes she needs to prove herself... or else she's not a capable person. It's that underlying belief which motivates the generation of the counterfactuals in the first place.

The full chain of events (for Sara and Ingvar) looks something like this:

  • Step 1: Learn that a personal quality or behavior is subject to punishment by others (e.g. badness, incompetence)
  • Step 2: Try to avoid feeling bad by creating an ideal of some kind (e.g. punish one's self for evil, seek recognition to prove competence) that will counteract this and avoid future punishment
  • Step 3: Encounter situations in life that remind one's self of the quality learned about in step 1
  • Step 4: Generate counterfactuals based on the ideal to stop the punishment (Sara) or punish one's self for failing to make the ideal happen (Ingvar)

Here's the thing: the only part of this cycle that you can meaningfully change is the learning from step 1, because otherwise, every time the person encounters a reminder in the world, the punishment will be remembered and will sustain the motivation for avoidance. Without this punishment cycle in effect, the person can actually think about what would be a good way to reach their goals. But with the cycle in effect, all the person can think about when it comes up is what's the fastest way to make the hurting stop!

I covered this more with the Ingvar example than the Sara one, but knowing how to do something doesn't help in this cycle, because it produces the "yeah, but..." response. From inside of this cycle, practical advice literally seems irrelevant or off-topic, or at best misguided. People inside the loop say things like, "yeah, but it's not that simple" or "you just don't understand", when you try to give them practical advice.

ISTM that you have overgeneralized from Carlos's example that this process is all about grief. But even in Sara's case, it's important to understand that she cannot actually accept or act on negative feedback without first acknowledging what actually happened. If there's a semantic stop sign in her brain that pops up every time she tries to consider ways to behave (because in order to do that she has to think about what she actually did or might do), then she can't really think about how to act differently, only ruminate about how she ought to have done something else.

So when we say "we should have done X" or "I should do Y", we are not actually saying the full truth. What we are doing is denying the underlying reality that we did not do X, and we don't want to do Y.

Sara actually knew, going into the conference, that she tended to be stubborn, and specifically thought ahead of time that she should not be. The problem is that "I should not do X" is an argument with reality: you know full well ahead of time that you probably will do X, but see this as wrong (in a moral sense, rather than a functional one). This motivates you to deflect the perception (and associated punishment) by asserting that you should do the right thing. (Like Ingvar asserting he should get the work done in an afternoon.)

I hope that the above explanation clarifies better what this article is driving at. The issue is that anytime we start thinking about what we or other people "ought" to do -- as a moral judgment -- we immediately "taboo tradeoffs" and disengage from practical reasoning. We're no longer in a state of mind where feedback from what actually happened is even being taken into account, let alone learned from.

Finally, as for your comments on relationships, I'm just going to say that most of what you said has no real bearing on Carlos's actual situation, which I will not comment further on as it would reduce his anonymity. But I do want to address this point:

I read the section on Carlos, and it seems like the explicit content was that you should always give up on relationships when they're making you angry, and while there's a deep truth to that with long-term relationships, I don't think it should be the standard solution. The standard solution to being angry at someone is to follow through and make sure the cause is resolved, such that your anger reaches its natural conclusion. This is true even when it's built up for a while. Often there's something important that's been left unsaid, and needs communicating.

So, this is an overgeneralization, again, because nothing in this post recommends any object-level behaviors. What the post discusses is the fact that, when you are counterfactualizing with moral judgment attached, you cannot reason properly. Your brain hijacks your reasoning in the service of your moral judgment, so you have literally no idea what actually should be done on the object level of the situation.

The solution to this problem, then, is to disable the hijacker so you can get back in the cockpit of the plane and figure out where you want to fly. In Ingvar's case, he immediately began seeing other ways he could behave that would get to his goals better, and I had no need to advise him on the object level. The issue was that with his moral judgment system active, he literally could not even consider those options seriously, because they weren't "punish someone" or "make the pain go away NOW".

With regard to relationships, as with everything else this article talks about, the solution is to begin with whatever the actual ground truth of the situation is. If you are insisting that the other person in a relationship "should" be doing something, and that the only solution is to express anger in their direction, then you will miss the clue that sometimes, being angry at people doesn't change them... but positive reinforcement might.

(But of course, when we're thinking morally rather than strategically, we think it's wrong to use positive reinforcement, because the other person doesn't "deserve" it. They should just do the right thing without being rewarded, and they should be punished for not doing the right thing. So saith the moral judgment brain, so shall it be!)

Another problem is where you say, "make sure the cause is resolved, such that your anger reaches its natural conclusion". The thing is, our anger's "natural conclusion" is when somebody has suffered enough. (Notice, for example, how, when somebody accedes to angry demands but does not appear remorseful, the demander often just gets angrier. If it were about resolving the actual issue, this would not make sense.) And suffering enough doesn't always correspond with an actual solution, either: note how often people end up stuck in abusive relationships because the abuser is really good at appearing remorseful!

So, following anger to its "natural conclusion" can easily lead you astray, compared to clearing your head and acting strategically. It can be almost impossible to enact, say, "tough love", when you are stuck in your own moralizing about how someone ought to behave, both because you can't think it through, and because it's hard to do the "love" part while your brain is urging you to make someone suffer for their sins.

Anyway, in summary: if you are arguing object-level recommendations from this article, you've confused your inferences with my statements. The only advice this post actually gives is to disengage your moral judgment if you want to be able to actually solve your problems, instead of just ruminating about them or punishing yourself for them. (And I guess, to avoid recursively making a "should" out of this idea, since that's just doing more of the problem!)

[Edit to add: I have added a new section to the article, called The Disclaimer, to clarify that none of the stories contain, nor are intended to imply, any object-level advice for the depicted situations, and that rather, the article's focus is on the problem of moral judgment impairing our ability to reason about the truth, and even perceive what it is in the first place.]

Is there a definitive intro to punishing non-punishers?

Before thinking of how to present this idea, I would study carefully whether it's true.

I'm probably referring to the idea in a much narrower context, specifically our inclination to express outrage (or even just mild disapproval) as a form of low-cost, low-risk social punishment, and for that inclination to apply just as well to people who appear insufficiently disapproving or outraged.

The targets of this inclination may vary culturally, and it might be an artifact or side-effect of the hardware, but I'd be surprised if there were societies where nothing was ever a subject that people disapproved of other people not being disapproving of. Disapproving of the same things is a big part of what draws societies together in the first place, so failing to disapprove of the common enemy seems like something that automatically makes you "probably the enemy".

(But my reasons and evidence for thinking this way will probably be clearer in the actual article, as it's about patterns of motivated reasoning that seem to reliably pop up in certain circumstances... but then again my examples are not terribly diverse, culturally speaking.)

Is there a definitive intro to punishing non-punishers?

Yeah, those are the things I found, but none of them are the thing I remember, which was something that explained how punishment is costly (risky) for the punisher due to free-riding by non-punishers, so we evolved the desire to punish non-punishers in order to ensure nobody gets away with free-riding. None of these articles cover that, which is a surprise to me since I had to have read that idea somewhere, and it feels to me like something that's part of the rationalsphere zeitgeist, yet I can't seem to place where I actually read it.

Anyway, I'm thinking what I'll do for now is link to this question from my article, so people can see all the collected answers here. ;-)

On Internal Family Systems and multi-agent minds: a reply to PJ Eby

See, now this comment would have made a great article. ;-) I think it says more clearly what you mean than the article you actually wrote, and makes a much better case for your position.

On Internal Family Systems and multi-agent minds: a reply to PJ Eby

Thank you for the consideration, and I appreciate the edits. This was just an unfortunate confluence of events and I'm not holding any grudges.

I have to admit that one of my faults is a healthy dose of the illusion of transparency. I tend to assume that other people can reach the same conclusions I have when they have access to the same information my conclusions are based on... even though there's a distinction between say, reading UtEB and grokking what it means about "legacy" approaches to therapy.

So some of the things you said in this article seemed to me like excessively belaboring points I thought were already made quite explicitly in the text of UtEB, so I interpreted it as you trying to argue in favor of IFS, not that you were just now realizing how IFS fit within UtEB's model.

The fact you posted an article about UtEB before made me assume that you understood it at least as well as I did (since in effect, you introduced me to it!), so I didn't see why you would only be now discovering those points... especially since I thought they'd been covered by our previous discussion and your restatement of my position.

Regarding Core Transformation, I'm glad you found it useful. Back when I mentioned it, it was one of the better techniques available to me, despite its tendency to sometimes get bogged down in "is that really a part or am I imagining things", or in parts getting into circular arguments about things. But I later found that there were simpler ways to address the same things, because what CT calls "core states" are also accessible by simply not activating the parts of the brain that shut those states off (e.g. by telling us we don't deserve love).

So if, for example, we don't see ourselves as worthless, then experiencing ourselves as "being" or love or okayness is a natural, automatic consequence. Thus I ended up pursuing methods that let us switch off the negatives and deal directly with what CT and IFS represent as objecting parts, since these objections are the constraint on us accessing CT's "core states" or IFS's self-leadership and self-compassion.

In effect, you could think of the approaches I've been pursuing since then as shortcutting the process of CT by jumping as directly as possible to our objections to experiencing ourselves as lovable, okay, etc., and working backwards from there.

To put it in the context of your changes using Transforming The Self: the shadow qualities (or "negative qualities" as TTS calls them) are the things I've targeted first, since around 2012 or so.

That's because practical experience had shown by then that almost anything I tried to change in myself or others using other methods would often come back within a few weeks, unless said negative qualities were somehow addressed. So, strategically, going hunting for them first makes things a lot more efficient, as you then don't have to worry about all the tactical-level behaviors and beliefs being regenerated from the persistent, strategic-level, negative self-image.

Interestingly, now that you've mentioned TTS (indirectly, by linking to your posts referencing it), it reminds me that TTS actually includes something rather like a reconsolidation-oriented approach to quality changes. It might be interesting now to go back and re-read it with our newer knowledge of reconsolidation in mind, to see if I can either improve on his technique, or use something from it to improve on mine.

On Internal Family Systems and multi-agent minds: a reply to PJ Eby

I'm sorry you had a jarring experience being named in the OP.

Thank you. It is at least good to know that it was not his decision to put this on the front page, though the number of times I'm named still makes it feel a bit like a call-out, especially since he refers to "pjebyan" practices as if they were what we discussed, rather than the material from UtEB that he himself previously posted.

A lot of what we talked about in the original comments was actually what UtEB describes as reconsolidation, not what I do, because I specifically did not want to get into that here.

Rather, my direct discussion with Kaj was strictly focused on the reductionism issue with parts-oriented models, and the difference between deliberate reconsolidation (ala UtEB) and accidental reconsolidation (ala IFS). It was never supposed to be a referendum on my approach to working with clients or comparing my approach with IFS, outside of me mentioning some reasons why I don't like to use parts-oriented approaches (like IFS or any of its many predecessors), and how my experiences relate to what's said in UtEB.

Indeed, the only reason I felt safe to discuss what I did in that previous thread was because I could use UtEB as an example of a reconsolidation-oriented approach other than mine, because I did not wish to create the impression of using LW as a pulpit from which to preach my own gospel. The unexpected combination of "suddenly frontpage" and "naming names/ascribing positions" was quite unpleasant, as it made it feel like I was being shoved into a frame of doing exactly that, in direct opposition to my attempts to keep the previous discussion focused on general schools of thought (e.g. behaviorism vs. "parts", deliberate vs. accidental reconsolidation, etc.) rather than on "my way is better than yours".

After all, as the guidelines say, "aim to explain, not persuade".

(That being said, I can also see how the frame shift probably seems way more visible and salient to me than it does to anybody else, and on a re-read of the article, even I can see that the parts that got me upset are really very tiny in comparison to the whole. It's also pretty understandable in retrospect why Kaj could easily have thought I was arguing for a model of my own, rather than speaking generically, without him having any intention to distort my views or attribute his own views to me... even as it's also understandable why the situation inclined me to give more weight to the reverse hypothesis.)
