I've recently updated that noticing is a key rationality skill -- not just noticing confusion, but noticing your cognition more generally. This allows you to figure out at a very granular level why you're not reaching your goals, and then intervene to change those reasons.

For example:

At one point I found myself procrastinating on ordering the catering for an event. Noticing the disconnect between my high-level goal ("make a good event") and my concrete actions ("spend time on FB") triggered me to try to notice what was up in my mind (this is a particular trigger-response pattern I've trained myself to use). I found that I didn't want to make the call because, the last time I called them, they couldn't hear what I was saying and were kind of rude about it. I didn't want my phone to be bad or my accent to be inaudible, and so I didn't want to call them again. I then proceeded to borrow a friend's phone, and called them without problem.

Another example, this time with a concrete cognitive rather than practical intervention:

I noticed myself being unhappier than I wanted to be. So when the unhappiness clashed with the higher-level desire for happiness, it triggered a noticing process, and I realised my mind was running an algorithm like: "notice happy thought --> remember Hamming problem or that timelines might be short --> feel bad". This sounds ridiculously unhelpful when written out, but is in fact what was going on. So I started training myself to hold on to the happiness in the first part of the chain without automatically falling into the second.

Here's a worry with this: if part of my cognition is consciously accessible and interpretable, and part of it is not, will extensive noticing-and-intervening cause motivated cognition to become less noticeable?

It will, by selection effects, since I'll change the more noticeable parts. But this feels more definitionally true than actually worrying.

It also might, by negative reinforcement, if my mind learns that when subagents make their desires known they'll tend to be overruled or modified. (To prevent this, and as a safer policy in my current epistemic state, I make sure to sometimes deliberately not intervene on things I've noticed.) But this shouldn't happen if I genuinely listen to subagents and take their preferences into account, or if the subagent theory doesn't fit in the first place (which seems more plausible in the second example above).

Is there some other reason to believe that improved ability to notice your cognition will cause rationalisation, motivated cognition, thought patterns highly valued by certain subagents, etc. to become less noticeable?

A perception I've had, since learning focusing and noticing and related introspective skills, is that my thought patterns sometimes have particular flavors that I associate with being motivated.

A couple such flavors are:

  • a hint of righteousness – My thoughts are oriented in a stance similar to anger. I notice my intent to prove another person wrong and win. My muscles are tensed. I find it particularly hard to detach myself from this one, but relatively easy to say things like "hmm, so I notice I'm pretty motivated right now but I still think I'm right. So, um, epistemic status: here are some motivated arguments for X."
  • a sinking feeling – A pit in my stomach. A part of me realizes this isn't a good idea, but it would be super inconvenient if that were true. My internal monologue generates sentence fragments like "it's not that important." This one is for some reason easier to detach from – as soon as I recognize what's going on I'm like "ugh, fine. Okay. I will rewrite my solstice speech at the last minute because smallpox maybe isn't as old as I thought, even though it's really annoying."

One interesting subskill is noticing that certain states of rationalization and motivation seem to come with certain physiological cues. (E.g., for me it's tensed forearm muscles for righteousness, slightly differently tensed forearm muscles for anxiety, and a pit in my stomach for "oh man, this isn't going to work, is it?")

And once I've identified that, I also gain the ability to notice those physiological cues before I've figured out the exact nature of my motivation, and I can go "huh, tensed forearm muscles. Am I feeling righteous and overly excited about my frame? Hmm. Maybe," and then examine that (and sometimes it's "oh, no, this is anxious forearm tension, not righteous forearm tension").

This is all to say that my personal experience has been "introspection being helpful for noticing motivation and rationalization". But I can imagine an alternate me who learned the skills differently and ended up using them to generate stories that made me stupider.

Short answer: sort of, but I don't think it's net negative?

I recently wrote a post, about how I've ignored emotions, that feels relevant here.

I've had periods where increased introspection ability led to overconfidence about the absence of rationalizations, though I don't know if I can say that they became "less noticeable". Put another way, I'm not sure I would have been any more likely to discover these rationalizations if I'd never increased my introspection capabilities.

Note: In my case, the introspection I got better at is what I'd call "detective introspection". Think of looking at your own behavior as clues to what some other person was thinking (in contrast to focusing-esque introspection which is more about listening for something inside you to speak up).

Noticing is the fundamental skill and habit without which everything else is in vain. Noticing when you're going wrong is the only chance you have to put things right. Noticing when you're going right is the only chance to appreciate that you're going right.

It will by selection effects, since the more noticeable parts I'll change. But this feels more definitionally true than actually worrying.

I could see this going both ways - they become harder to notice*, or, as you get better at noticing, you become more able to pick up on more complicated lines of reasoning/rationalization.**

*1) There's advice out there for making sure that noticing things doesn't make them harder to notice - the gist seems to be "Be happy if you notice things, even if they're not good, because in order to solve problems you need to see them." (There are probably more in-depth explorations of this, directly or indirectly, in how reinforcement/learning works in people.)

2) Something more complicated could*** happen involving subconscious selection, e.g. if you subconsciously notice that a pattern is leading to bad results/conflict, it gets promoted to consciousness.

**Your examples seemed to be about things other than rationalization, like:

This sounds ridiculously unhelpful when written out, but is in fact what was going on.

because rationalization is supposed to sound good.

***Speculation on my part.