I'm having trouble understanding the process I should use when considering how new models might apply to old data, like memories. This comes up primarily when reasoning about qualitative models, like those that come out of developmental psychology, business, or military strategy. These models can be either normative or descriptive, but the big trait they all seem to share is that they were conceptualized with reference to the inside view more than the outside view: they were based on either memories or intuition, so they either carry a lot of implicit internal structure or a lot of bullshit. Re-framing my own experiences to find out whether these models are useful thus relies on System One more than System Two. Unfortunately, now we're in the realm of bias.
My concrete examples of models that I am evaluating are: (a) attempting to digest the information in the "Principles" document (as discussed here) and working out which situations it might apply to; (b) learning Alfred Adler's "individual psychology" from The Rawness, which also expands on the ideas; and (c) the mighty OODA loop.
When I brought up the OODA loop at a meetup with the Vancouver Rationalists, I ended up making some mistakes about the "theories" from which it was derived, which added the idea of "clout" to my mental toolkit. But it also makes me wary that my instinctive approach to learning qualitative models like this might have other weaknesses.
I asked at another meetup, "What is the best way to internalize advice from books?" and someone suggested thinking about concrete past situations where the idea might have been useful.
As a strategy for evaluating the truth of a model, I can see this backfiring. Because System One is involved in both structuring and evaluating the model, hindsight bias is likely to be an issue, or some form of the Forer effect. I could then make erroneous judgements about how well the model predicts outcomes, and use it in ineffective ways (ironically, the author of The Rawness brings this up). In most cases I believe this is better than nothing, but I don't think it's good enough either. It does seem possible to just hold the conceptual points in mind and wait for relevance, but the reason we reflect is to be primed to see certain patterns when they come up again, so that doesn't seem like enough either.
As a way of evaluating a model's usefulness, I can see this going two ways. On one hand, many long-standing problems persist because of mental ruts, and benefit from re-framing the issue in light of new information. When I read books I often notice linkages between statements the book makes and goals I have, or situations I want to make sense of (similar to Josh Kaufman and his use of the McDowell's Reading Grid). On the other hand, this experience has little to do with the model being correct.
Here are three questions I have, although more will likely come up:
- What are the most common mistakes people make when figuring out whether a qualitative model applies to their experiences?
- How can they be worked around, removed, or compensated for?
- Can we make statements about when "informal" models (i.e. models not specified in a formal language, or not mappable to mathematical descriptions other than structures like semantic webs) are generally useful to have, and when they generally fail?