[Cross-posted from Grand Unified Crazy. Relevant to Kaj's summary of the book.]

In children’s stories, the good guys always win, the hero vanquishes the villain, and everyone lives happily ever after. Real life tends to be somewhat messier than this.

The world of therapy presented by Unlocking the Emotional Brain reads somewhat like a children’s story. Loosely, it presents a model of the brain where your problems are mostly caused by incorrect emotional beliefs (bad guys). The solution to your problems is to develop or discover a correct emotional belief (good guy) that contradicts your incorrect beliefs, then force your brain to recognize the contradiction at an emotional level. This causes your brain to automatically resolve the conflict and destroy the incorrect belief, so you can live happily ever after.

Real life tends to be somewhat messier than this.

After about a month of miscellaneous experimentation on myself based on this book, my experiences match the basic model presented, where many psychological problems are caused by incorrect emotional beliefs (I don’t think this part is particularly controversial in psychological circles). It also seems to be true that if I force my brain to recognize a contradiction between two emotionally relevant beliefs, it will resolve the conflict and destroy one of them. Of course, as in real life where the good guy doesn’t always win, it seems that when I do this my brain doesn’t always destroy the right belief.

I have had several experiences now where I have identified an emotional belief which analytically I believe to be false or harmful. Per UtEB I have identified or created a different experience or belief that contradicts it, and smashed them together in my mind. A reasonable percentage of the time, the false belief emerges stronger than before, and I find myself twisting the previous “good” belief into some horrific experience to conform with the existing false belief.

In hindsight this shouldn’t be particularly surprising. Whatever part of your brain is used to resolve conflicting emotional beliefs and experiences, it doesn’t have special access to reality. All it has to work with are the two conflicting pieces and any other related beliefs you might have. It’s going to pick the wrong one with some regularity. As such, my recommendation for people trying this process themselves (either as individuals or as therapists) is to try to ensure that the “good” belief is noticeably stronger and more immediate than the false one before you focus on the contradiction. If this doesn’t work and you end up in a bad way, I’ve had a bit of luck “quarantining” the newly corrupted belief to prevent it from spreading to further beliefs, at least until I can come up with an even stronger correct belief to fight it with.

20 comments

It's weird that one always wins for you, and that you view them as fighting.

Often when I do this sort of work, it's like I'm dialoguing and waiting for them to come to consensus. I wonder if there's a particular stance you're holding when you do this work that is leading to a winner-take-all dynamic.

[anonymous]

I would put that down partly to my personality (I do tend to view things as fairly black-and-white, and sometimes struggle with nuance), and partly to the framing of the book. The book frames the entire approach as a more decisive therapy, versus the very gradual approaches used in a lot of traditional therapy, which don't cure the underlying root cause.

I've read the book as well, and didn't get that from the book at all; it specifically warns you against having a stance where one framing "wins"... although the ideas of "pro-symptom" and "anti-symptom" positions do indeed carry an implicit judgement about wanting one side to "win".

One question I do have: given that you're distressed about one side winning, don't you have more reconsolidation to do? This time, with that feeling of distress. For me, as long as there's some part of me that believes a belief to be "incorrect", I don't feel fully integrated.

If some conscious activation of the process of consolidating is itself causing "one idea to win... sometimes the wrong one", then trying consolidation on "the feelings about the management of consolidation and its results" seems like it could "meta-consolidate" into a coherently "bad" result.

...

It could be that the author of the original post was only emitting their "help, I'm turning evil from a standard technique in psychotherapy that I might or might not be using slightly wrongly!" post in a brief period of temporary pro-social self-awareness.

If we are beyond the reach of god, then there's nothing in math or physics that would make a spin-glass process implemented in belief-holding metamaterials always have to point "up" at the end of the annealing process. It could point down, at the end, instead.

This is part of why I'm somewhat opposed to hasty emotional consolidation (which seems to me like it rhymes with the logical fallacy of hasty generalization).

If some conscious activation of the process of consolidating is itself causing "one idea to win... sometimes the wrong one", then trying consolidation on "the feelings about the management of consolidation and its results" seems like it could "meta-consolidate" into a coherently "bad" result.


Can you give an example of how this would happen? Do you have examples of it? I think the only way that the process of consolidating can cause one idea to win in the way described is through suppression of a higher-level concern. At some point, as you keep going meta, there's nowhere left to suppress it.

You might greatly enjoy Feeding Your Demons by Tsultrim Allione.

Someone I know has reported something similar. She had both negative and positive beliefs about another person, and felt that the negative beliefs were wrong. After trying to do reconsolidation, she found that the negative beliefs only got stronger. Not only was this an unwanted result, but the strengthened beliefs didn't feel more true, and the whole thing felt really distressing. She did eventually get it fixed, and is still using the technique, but is now more cautious about it.

Personally I haven't had this kind of an issue: I find that if I'm in a stance where I have already decided that a certain belief is wrong and am trying to force my brain to update on that, the update process just won't go through, or produces a brief appearance of going through but doesn't really change anything. This seems fortunate, since it forces me to switch to more of a mode of exploration: is this belief false, or might it in fact be true? (Note that UtEB also explicitly cautions against trying to explicitly argue against or disprove a belief.)

If you go through a belief update process and it feels like the wrong belief got confirmed, the fact that you feel like the wrong belief won means that there's still some other belief in your brain disagreeing with that winner. In those kinds of situations, if I am approaching this from a stance of open exploration, I can then ask "okay, so I did this update but some part of my mind still seems to disagree with the end result; what's the evidence behind that disagreement, and can I integrate that"?

In my experience, if I find myself really strongly insisting that a belief must be false and disproven, then that may actually be because a part of my mind thinks that it would be really bad for the belief to be true. Maybe it would be really unpleasant to believe in existential risk being a serious issue, and then I get blended with the part that really doesn't want it to be true. Then I try to prove x-risk concerns false, which repeatedly fails because the issue isn't them being false, the issue is me not wanting to believe them true. mr-hire has a good piece of advice relating to this:

For every belief schema you're working with, there are (at least) two belief schemas at play. There's the side that believes a particular thing, and then there's a side that wants you to question the belief in that thing. As a general rule, you should always start with the side that's more cognitively fused.
As an example, I was working with someone who was having issues going to bed on time, and wanted to change that. Before we started looking at the schema of "I should avoid ruminating by staying up late," we first examined the schema of "I should get more sleep."
By starting with the schema that you're more cognitively fused with, you avoid confirmation bias and end up with more accurate beliefs at the end.

Note also that it may be the case that you really want some belief to be false, and it is in fact false. But the above bit is good advice even in that situation: even if the belief is false, you are less likely to be able to update it if your mind is stuck on wanting to disprove it, because you need to experience it as genuinely true in order to make progress. As I've mentioned:

Something that has been useful to me recently has been remembering that, according to memory reconsolidation principles, experiencing an incorrect emotional belief as true is actually necessary for revising it. Then, when I get an impulse to push the wrong-feeling belief out of my mind, I instead take the objecting part or otherwise look for counterevidence and let the counterbelief feel simultaneously true as well. That has caused rapid updates the way Unlocking the Emotional Brain describes.
I think that basically the same kind of thing (don't push any part out of your mind without giving it a say) has already been suggested in IDC, IFS, etc.; but in those, I've felt like the framing has been more along the lines of "consider that the irrational-seeming belief may still have an important point", which has felt hard to apply in cases where I feel very strongly that one of the beliefs is actually just false. Thinking in terms of "even if this belief is false, letting myself experience it as true allows it to be revised" has been useful for those situations.

All of that said, I do agree that there is always the risk of more extensive integration actually leading to incorrect beliefs. In expectation, learning more about the world is going to make you smarter, but there's always the chance of buying into a crazy theory that makes you dumber and integrating your beliefs to be more consistent with it - or even buying into a correct theory that makes you dumber. But of course, if you don't try to learn or integrate your models more, you're not going to have very good models either.

If you go through a belief update process and it feels like the wrong belief got confirmed, the fact that you feel like the wrong belief won means that there's still some other belief in your brain disagreeing with that winner. In those kinds of situations, if I am approaching this from a stance of open exploration, I can then ask "okay, so I did this update but some part of my mind still seems to disagree with the end result; what's the evidence behind that disagreement, and can I integrate that"?

I sometimes find that memories and the beliefs about the world that they power are "stacked" several layers deep. It's rare to find a memory directly connected to a mistaken ground belief, and it's more normal that 2, 3, 4, or even 5 memories are all interacting through twists and turns to produce whatever knotted and confused sense of the world I have.

Thanks for sharing! ++ for "I tried the thing, this is how it went" post

That is really interesting! Sorry to hear that this promising technique backfired. Do you mind sharing any specific examples of what the clash was, what you did, and what "false belief" got stronger, twisting the previously "good" belief?

I would also be curious to hear of more specific examples (though of course they might be too personal to share).

[anonymous]

Yeah, too personal to share on the public internet, sorry. To kind of translate it into the case of Richard as described in the book - imagine if Richard also happened to have some minor doubts around his ability to read body language. They don't come up in discussion with his therapist since they're much less significant and apparently unrelated. Then, Richard notices his colleague speaking up confidently and being apparently received well. Instead of deciding that his original belief (being confident is bad) was false, Richard's brain resolves the conflict by deciding that the colleague was in fact also hated by everyone, and he just *really* can't read body language. So now Richard has his existing problem (being confident is bad) and also deeply believes he is incapable of reading any body language.

[I should be clear, the situation became rather more dire in translation; my case wasn't that life-impacting, and after noticing this I was able to reverse the process with additional work.]

Haven't even gotten to the rest of this yet, but already want to say I think this initial summary is incorrect. I'll have to do some re-reading to discern the extent to which I think that's a mis-reading on your part, a mis-characterization on Kaj's part, or the result of ambiguity on the original author's part, but regardless: the summary at the start of this post, quoted below, seems to me to be completely the opposite of the actual basis of Coherence Therapy (speaking as someone who has read UtEB and several other Coherence Therapy books, and done half a dozen sessions with a Coherence Therapist).

it presents a model of the brain where your problems are mostly caused by incorrect emotional beliefs (bad guys). The solution to your problems is to develop or discover a correct emotional belief (good guy) that contradicts your incorrect beliefs, then force your brain to recognize the contradiction at an emotional level. This causes your brain to automatically resolve the conflict and destroy the incorrect belief, so you can live happily ever after.

How I would characterize it is that the problems are caused by partially correct but incomplete beliefs, which are not bad guys but good guys within their own limited frame. The solution to your problems is to find data in your own experience that is compartmentalized away from the part of your cognition holding the incomplete belief, and bring it into contact with that part, so that your system as a whole can assess all of the data and synthesize it into a new belief that is more complete (although one that could surely still be even more complete).

The strategy you're describing is ultimately isomorphic to the kind of strategy that coherence therapists call counteractive, which is the opposite of how emotional reconsolidation actually works. This process can't be forced, in the same way you can't force someone to agree with you no matter how much evidence you shove in their face.

I see now how this could happen, and evidently it happened to you.

It has not happened to me, even though I have used it quite aggressively, e.g. to instil objectively false but useful beliefs.

I am trying to work out what is different... I did this as part of the IFS (Internal Family Systems) process, as a more powerful way to resolve exiles that are hard to fix.

I suspect maybe the difference is that in IFS they make a huge deal about honoring the 'parts', including exiles. In your terms these would be the unhelpful beliefs. Ideally, you need to fully accept that they are there for a reason and have good intentions. In IFS it is a common rookie mistake to try to shove 'bad' parts (in IFS terms) away prematurely and tell them to stop doing or believing that thing right away. If you do this, they will often resist vehemently, in open or covert ways. Once you do get to know them, appreciate them, and acknowledge their good intentions, they are then often very willing to form the intention to change, and in that case they will not resist.

So my suggestion would be to try to get to know the 'false' belief better and to acknowledge why it is there, the good it did, the good intention behind it - and with associated beliefs - there can be quite a complex structure of chained beliefs and practices. Only then do you ask it: are you happy with the current set-up? Would you like to change anything? Ask if you really do want to change the belief, in every bone of your body. Usually at this point it is pretty easy to change, and you are done.

If the 'exile' *wants* to change but cannot, then the UtEB techniques can be very useful. I will give one example.

As a very young student I had a vicious and sadistic teacher. Apart from her beatings, she employed psychological terror tactics seemingly designed to maximize our terror, helplessness, and humiliation. I had frequent flashbacks, which I see as a form of hyper-vigilance whose intention was to keep me safe. I tried all the usual techniques for resolving my flashbacks: we are here now, she is dead, I have adult resources that can protect you, I can hold you, etc. These helped a bit but not entirely.

So when everything else did not succeed entirely I tried the "nuclear option" - rewriting history. I implanted a belief that the very first time she exhibited her toxic behavior a group of parents stormed into the classroom, beat her up, threw her out of the school, and warned her never to set foot in a school again, which she never did (in the rewritten history). We reverted back to our previous teacher who was lovely. This worked, even though - at some level - I know it is false. I think it worked because all the parts of me were united in resolving this issue and there was no internal conflict apart from the ongoing feelings of fear and anxiety being too strong.

So again, I think you may have had some residual internal conflict about changing the belief, and this may be why you did not succeed at times. I hope this helps.

Two notes:

1. People may confuse what I did with a revenge fantasy. I don't think revenge fantasies are very often useful. This is different because the bad thing, in the rewritten history, did not happen. There is nothing to avenge.

2. Assuming my post makes sense to you, it may illustrate why the seemingly preposterous IFS model can be quite useful - it gives you a powerful language and structure for dealing with all these internal complexities.

I suspect maybe the difference is that in IFS they make a huge deal about honoring the 'parts', including exiles. In your terms these would be the unhelpful beliefs. Ideally, you need to fully accept that they are there for a reason and have good intentions. In IFS it is a common rookie mistake to try to shove 'bad' parts (in IFS terms) away prematurely and tell them to stop doing or believing that thing right away. If you do this, they will often resist vehemently, in open or covert ways. Once you do get to know them, appreciate them, and acknowledge their good intentions, they are then often very willing to form the intention to change, and in that case they will not resist.
So my suggestion would be to try to get to know the 'false' belief better and to acknowledge why it is there, the good it did, the good intention behind it - and with associated beliefs - there can be quite a complex structure of chained beliefs and practices. Only then do you ask it: are you happy with the current set-up? Would you like to change anything? Ask if you really do want to change the belief, in every bone of your body. Usually at this point it is pretty easy to change, and you are done.

This agrees with my experience.

So when everything else did not succeed entirely I tried the "nuclear option" - rewriting history. I implanted a belief that the very first time she exhibited her toxic behavior a group of parents stormed into the classroom, beat her up, threw her out of the school, and warned her never to set foot in a school again, which she never did (in the rewritten history). We reverted back to our previous teacher who was lovely. This worked, even though - at some level - I know it is false.

My model of "rewriting history" is that it still requires something that your mind believes could in principle have happened, and is a way of integrating those true beliefs in the form of an experience which an emotional part can believe in. Part of what's going on in such a memory is a fear that if this were to happen again, there would be no way to escape the situation, and you would be totally helpless. A parental intervention is something that could in principle have happened even back in that situation (even if its imagined concrete manifestation was a bit over-the-top), so once the "stuck" part of the mind has updated on it being at least possible to get out of the situation, it can relax a little and allow other relevant information to be updated.

I think that the feeling of total helplessness, in particular, is a big factor in trauma memories - the therapeutic literature seems to argue it, and it also makes sense in theoretical terms. If you get into a state where absolutely nothing you do can make any difference to the negative situation that you are in, that could get almost unboundedly bad. From that perspective, it's not surprising that upon detecting the potential for such a situation, some parts of the brain would become obsessed with bringing up the possibility of that situation, until a way had been found to avoid it.

There was a related excerpt in Unlocking the Emotional Brain, which talked about imaginary re-enactment: taking some bad situation that you were in, and imagining how you yourself could have gone about it differently. This would update your belief that there was nothing that you could have done in that situation, and make the memory less traumatic. But it notes that the therapist should ensure that an alternate way of acting would in fact have been possible:

As understood in terms of the therapeutic reconsolidation process, enacting the natural, self-protective response is de-traumatizing because the experience of the empowered response creates new knowings that disconfirm and dissolve the model and the feeling of being powerless that had formed in the original traumatic learning experience.
It is important to note that the re-enactment technique is appropriate only if the original situation in fact gave the client early signs of danger or trouble, so that in re-enacting, the client can respond sooner and more assertively and self-protectively, and in that way can experience the ability to avoid harm. An example of a trauma that is inappropriate for re-enactment is the experience of a bomb exploding. In that case there is no way to respond more self-protectively, so re-enactment would only be re-traumatizing. In such cases, different techniques of traumatic memory transformation are needed.

Given that quote, it is interesting that "someone else could have come in and helped me" re-enactment sometimes works. It is not entirely clear to me why and when it does (it doesn't always seem to help me), but one belief that people sometimes internalize from e.g. being terrorized by an authority figure is that they are worthless and deserved the terrorizing. If you know that parents generally would not accept this kind of a behavior from a teacher, then that belief contains the generalized belief of "no child deserves this kind of a treatment"; which, when applied to the original memory, may be a way of turning that abstract belief concrete and removing the belief that nobody would care about that happening to your childhood self. (Just wildly speculating here.)

[anonymous]
So my suggestion would be to try to get to know the 'false' belief better and to acknowledge why it is there, the good it did, the good intention behind it - and with associated beliefs

Yes, I suspect you are right. This is something I had never done with my problems before I read UtEB, and I have found it one of the more useful insights (it would have been worth reading the book just for this, even without the rest of the model), but it is definitely not a habit yet. It's entirely possible that I hadn't properly completed this step before I tried to acknowledge the juxtaposition.

I think there is a simpler explanation. All our thoughts are driven by needs. When our need for safety goes unmet in a particularly overwhelming fashion, the experience gets stuck in our minds as trauma--meaning we are in a constant state of anticipating that it might happen again. All our thoughts, feelings, and beliefs then follow from that fear and from trying to protect ourselves from that outcome.

What you imagined was a scenario where your needs for protection/safety were completely met--you were protected and the trauma never occurred. As the emotional part of our brain sees imagination and memory as the same, this resolved the trauma.

Being internally aligned may have made it easier to concentrate on doing that, but I think it's the strength of your imagining and your felt sense of your need for safety being met that changed the way your brain stores the memory (moving it from the amygdala/danger center to the hippocampus/long-term storage).

In other words, your body was stuck in a constant state of needing to experience safety, and your imagination made that experience of safety happen (by imagining thoroughly enough for it to *feel* real), which met that need to experience safety.

As the emotional part of our brain sees imagination and memory as the same, this resolved the trauma

I think you are talking about something downstream from the problem OP reported. What you said explains why changing the memory would help. But I think it is not relevant to the question of whether you *can* change the memory.

If there are parts of you that think that holding on to the memory, and to whatever partial solutions you came up with at the time, is important, you will have trouble changing that, no matter what the benefits would be after the fact.

And of course, given the traumatic nature of such memories, holding onto them and to the solutions you found does tend to seem very important. Books and reports of therapy are full of examples of this kind of thing.

Lots of other folks have commented on things like conflict framing, but I haven't seen anyone mention this, so I'll throw it in there.

In the past decade or so that I've been teaching people how to use memory reconsolidation processes (and using them myself), the number one obstacle that comes up (after identifying a suitable memory to use) is something I call "false belief change", and it may have some overlap with what you're describing here.

A false belief change is what happens when we imagine the events in a memory being different, without first imagining our assumptions being different. It comes about from trying to script an alternative experience instead of either reinterpreting the experience or presupposing an alternative causal model and then simulating the result of that model.

And scripting usually reinforces the existing belief instead of changing it.

A common example: somebody is trying to change a belief they got from something they were yelled at over as a kid, or from being treated in some obviously neglectful or abusive way, but in a frame where this was "for their own good". Typically, someone's first attempt at reconsolidation is to imagine the parents acting lovingly and supportively. Good idea, right?

Nope. Invariably, this first attempt makes things worse, because on careful reflection, what they're usually imagining is the parents' behavior being changed on the outside, but leaving their internal model of the parent alone... which results in an interpretation like the imagined parent thinking, "I need to treat this evil/incompetent/idiot child in a loving and supporting way so they can stop being such an annoying little shit."

As you can probably imagine, this does nothing to improve the person's self-esteem. ;-)

Re-premising and Reinterpretation

The trick to making a reconsolidation of this type work is that you either have to reinterpret the situation, or re-premise it.

An example of re-premising might be to imagine the parent if they didn't think you were annoying, evil, incompetent, whatever, and then see what your mental simulation generates as how they would have acted with this new premise.

An example of re-interpretation might be to realize that the parent was acting selfishly to get rid of something that was annoying/upsetting to them rather than acting in your best interest, or that they were mistaken about the best way to motivate you, or that in fact they didn't really give a shit about you at all and there is no particular reason for you to give their opinion any weight to begin with. (I sometimes call this the Bigger Asshole Theory.) As with re-premising, the idea is to allow your mental simulation to play out with the new premise, but in this case you are playing it out with the premise that your attitude is different, rather than theirs.

In neither case is "scripting" the reconsolidation target a good idea. It can be helpful to have examples of helpful behaviors or unflappable attitudes or compassion or whatever, but this is more as a vehicle for finding one's way back to the modeled internal schema that people with those behaviors have, which can then be used to premise a new mental simulation to be run "forwards".

When done this way, the necessary contradiction that plays out comes as a natural result of simulating different inner models of people, and naturally leads to an actual change. It also forms a natural checkpoint for iteration: if you can't get a simulation to play out by itself and feel natural, then you know you haven't yet gotten a solid enough model of the schema you're trying to change to!

" Of course, as in real life where the good guy doesn’t always win, it seems that when I do this my brain doesn’t always destroy the right belief."

I wonder if this might not be related to the Curse of the Counterfactual post.

Perhaps the two could be brought together for better results?

To clarify a bit, I was thinking about the observation that the "punish" behavior mentioned in that thread can act as positive reinforcement of bad behaviors, rather than working toward resolving or reducing them.