Schelling place for comments is here on LessWrong.
I wish there was an example here. I think the algorithm you're pointing to is something like:
Is that roughly what you're trying to describe? Am I emphasizing the proper parts?
I'll note that one thing I love about step #3 is that it's asymmetric toward true beliefs. Other belief-change techniques I know, like the Lefkoe belief process or reframing, instead ask you to imagine how your beliefs could be wrong, which is very effective for getting rid of them but says nothing about their validity.
Nope. That's just a process this thing is calling into. See this for more info on the context for this technique. (And the primary use case for this is where neither of the beliefs/aliefs is wrong, and you end up grokking that they are separate instead.)
Ahh I see, so the important thing I was missing is something like "This is about disentangling social reality from predictive reality"?
I'd go a step further and say "this is about disentangling how to make useful predictions about social reality from how to make useful predictions about non-social reality."