
TL;DR: The core concept is this:


Your brain already has the ability to update its cognitive strategies (this is called "meta-cognitive reinforcement learning"). However, the usual mechanism works with unnecessary levels of indirection, as in:

  • Cognitive strategy -> Thought -> Action -> Reward or punishment
    • You get rewarded or punished for what you do (as measured by your brain's chemical responses). Good thoughts are more likely to be followed by good actions. Good cognitive strategies are more likely to generate good thoughts. On average, your brain will slowly update its cognitive strategies in the right direction.
  • Cognitive strategy -> Thought -> Reward or punishment
    • You have learned to be happy or unhappy about having certain ideas, even when you don't yet know how they apply to the real world. Now your brain gets rewarded or punished for thoughts, and on average good thoughts are more likely to be generated by good cognitive strategies. Your brain can update cognitive strategies faster, according to heuristics about what makes ideas "good".

However, by carefully looking at the "deltas" between conscious thoughts, we can get rid of the last remaining level of indirection (this is the key insight of this whole page!):

  • Cognitive strategy -> Reward or punishment
    • You have learned to perceive your cognitive strategies as they happen, and developed some heuristics that tell you whether they are good or bad. Now your brain can update cognitive strategies immediately, and do it regardless of the topic of your thoughts.
    • Even when you generate a useless idea from another useless idea, you can still track whether the cognitive strategy behind it was sound, and learn from the experience.
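The credit-assignment idea above can be illustrated with a toy simulation. This is my own hypothetical sketch, not anything from the post: each extra level of indirection (strategy -> thought -> action) is modeled as a noisy hop that blurs the reward signal, so rewarding the strategy directly (zero hops) gives the cleanest, fastest updates. The strategy qualities, noise level, and learning rate are all made-up illustrative numbers.

```python
import random

random.seed(0)

# Two hypothetical cognitive strategies; strategy 1 is genuinely better.
STRATEGY_QUALITY = [0.2, 0.8]  # made-up ground-truth values

def observed_reward(strategy, indirection_levels, noise=0.6):
    """Reward seen after the signal passes through noisy intermediate steps.

    Each hop (strategy -> thought, thought -> action) adds zero-mean noise,
    standing in for how loosely a thought or action reflects the strategy
    that produced it.
    """
    signal = STRATEGY_QUALITY[strategy]
    for _ in range(indirection_levels):
        signal += random.uniform(-noise, noise)
    return signal

def learn(indirection_levels, trials=2000, lr=0.05):
    """Simple running-average value estimates for each strategy."""
    estimates = [0.0, 0.0]
    for _ in range(trials):
        s = random.randrange(2)  # try both strategies uniformly
        r = observed_reward(s, indirection_levels)
        estimates[s] += lr * (r - estimates[s])
    return estimates

# With indirection_levels=2 (reward only after thought and action), the
# estimates are noisy; with indirection_levels=0 (reward the strategy
# itself, as the quoted passage proposes), they converge cleanly to the
# true qualities.
```

The ordering of the two strategies is eventually learned either way; the point of the sketch is only that the shorter credit path has far less variance per update, which is the claimed benefit of collapsing the indirection.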


(It doesn't look like it's possible to quote bullet points, especially not nested bullet points, and I didn't want to remove more than one layer of bullets because I thought they made the whole thing more clear.)

The rest of the linked post is mostly about how to actually go about implementing this. (And, I feel like that probably deserves a book and regular practice, rather than just a short blog post. So, if you want to notice and learn better cognitive strategies, reading the full thing is well worth the time investment.)

Thanks, this is excellent. I was planning a post that was similar to this, but mine was less well thought out and (presumably) much less tested. Looking forward to trying this framework.

I have found it useful to think about broad classes of cognitive strategies to help figure out what it is I am 'already trying to do' in a situation, and then offer it suggestions/improvements.

Example: search strategies and prioritization strategies. (These map pretty well onto the default and task networks, or open and closed modes.)

Is rewarding strategies instead of object level outputs the same as TDT?

I'd also guess that this would be fairly difficult for those who haven't already done some secondary attention training.

I like this, it makes sense to me theoretically. I think I would be more likely to try it if there were some kind of shameless sales pitch attached to it. For example, I wouldn't be meditating regularly if there wasn't a concrete story about the kind of positive changes that meditation leads to. What kinds of objective positive changes does this practice lead to? Examples?

The very top of the post lists several bullet points of "the good" that would happen to you if you had this skill. Is that what you were asking for? Or were you asking for a personal life example: "I used to do [thing], but I gained this skill and now I do [better thing]"? If the latter, then he has a stories tab for his emotional processing post, and I assume he'll eventually have a stories tab for this post as soon as someone sends him a personal story.

I admit I was looking for something more like a narrative or anecdote. It would really help my brain decide that this is actually something worth getting excited enough about to try.
