I have a vague idea about two things that I'd like to turn into proper terms / concepts / mental models, so that I can communicate them better to others. I don't know whether there are already common terms for these. The first is a standard case of "a thing I have encountered a few times and would like to have a word for". The second is the meta issue of "not having a word for something while being sure that there is, or should be, a name for it".

#1 Judging the viability of rules based on cherry-picked implications

This is something I have observed a few times recently and wish I had a proper way of naming / explaining, one that is more compressed and easier to understand than just listing a bunch of examples.

A bunch of examples:

  1. There are quite a few Covid-related regulations in my country that (expectedly) sometimes lead to weird edge cases. E.g. families A, B, and C are not allowed to meet all at once, but it's fine if A meets B, A meets C, and B meets C separately, all on the same day. Looking only at this scenario, it seems evident that meeting all at once would be no worse than meeting one after another on the same day, so it's easy to conclude that the underlying rule must be stupid.
  2. I have occasionally heard of cases where a university student didn't get financial support from the state because the parents earned just barely too much (financial support is only granted up to a certain threshold of parental income). It's very easy to point to these cases and say: "See, if the parents had just earned 20€ less per month, the family would have been better off! Isn't that crazy? What a stupid rule!" (The toy sketch after this list makes the cliff concrete.)
  3. Another case, though I'm not sure whether it really points to the same mental model or is actually an entirely separate issue: car manufacturers often hide features behind software locks; i.e. the car is capable of a whole bunch of things, but you need to pay good money to unlock them. An extreme case is some Tesla cars that came in a short-range and a long-range version, both using the identical battery, with the short-range ones being purely software-restricted. Here too, it's easy to look at the situation from the perspective of a consumer who bought the short-range version and conclude: "They should just unlock the long-range version for me; it's perverse to keep it locked artificially."
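
To make the benefit cliff in example 2 concrete, here is a toy sketch in Python. The threshold, the grant amount, and the household_total helper are all invented for illustration; real support schemes are more involved:

```python
# Toy sketch of a hard eligibility cliff (all numbers invented).
# Below the threshold the student gets a flat grant; above it, nothing.

THRESHOLD = 3000  # hypothetical parental income cutoff, EUR per month
GRANT = 300       # hypothetical monthly grant, EUR

def household_total(parental_income: float) -> float:
    """Parental income plus the grant, if still eligible."""
    grant = GRANT if parental_income <= THRESHOLD else 0
    return parental_income + grant

print(household_total(2990))  # 3290 EUR: just under the cutoff
print(household_total(3010))  # 3010 EUR: 20 EUR more earned, 280 EUR less overall
```

A tapered phase-out, where the grant shrinks gradually above the threshold, would remove the cliff, but only at the cost of a more complicated rule; that trade-off is exactly the kind of consideration discussed below.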

In all these cases, looking at the observed outcome in isolation can lead to the conclusion that the underlying rule is suboptimal and should be changed so that this particular scenario doesn't play out the way it did. However, in each situation there may actually be good reasons for things to be the way they are, such as:

  • making regulations easy to understand, easy to enforce, or just generally less bureaucratic may be more important than preventing select cases that don't make a lot of sense in isolation
  • in the Tesla case, what they did was apparently cheaper for Tesla than producing two entirely different vehicles with different battery packs. Just giving the long-range version to every customer for the price of the short-range version might not have worked out for Tesla financially in the long run, and had they actually built two separate models, the cars would have ended up more expensive for all customers. So the alleged alternative wasn't a realistic one that could have worked as a general rule, even if in that one particular consumer's case it wouldn't have made a difference to Tesla. (A minimal sketch of how such a software lock works follows this list.)
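
For what it's worth, the locking mechanism itself is easy to picture. Below is a minimal, hypothetical sketch of a software lock on identical hardware; the class, field names, and capacities are all made up and have nothing to do with Tesla's actual implementation:

```python
from dataclasses import dataclass

# Minimal sketch of a software-locked hardware feature (names and numbers invented).
@dataclass
class Car:
    battery_kwh: float = 75.0          # identical physical battery in every unit
    long_range_unlocked: bool = False  # paid software entitlement

    def usable_kwh(self) -> float:
        # The lock caps usable capacity in software; the hardware is unchanged.
        return self.battery_kwh if self.long_range_unlocked else 60.0

car = Car()
print(car.usable_kwh())   # 60.0: short-range version, same battery
car.long_range_unlocked = True
print(car.usable_kwh())   # 75.0: the "upgrade" is just a flipped flag
```

The point of the sketch is that the "upgrade" costs the manufacturer essentially nothing per unit, which is precisely what makes the lock feel perverse in isolation, even though it is also what makes a single production line economical.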

All these cases seem related to the concept of short-sightedly generalizing from cherry-picked examples: one strange implication of a rule/pattern/behavior is taken as evidence that the rule is bad and/or easy to fix, when in reality that may not be the case.

But I'd be really happy to have a proper term that describes the thing more concisely, so I would appreciate being pointed to terms, Wikipedia articles, blog posts, or just general trains of thought in this direction.

#2 The meta issue of often lacking a mental model for some observed pattern

The second issue is more on the meta level. I find myself in this situation quite often: seeing (or suspecting) some kind of vague, not yet fully understood pattern in the world and wanting an effective way of discussing it with people, or at least a nice label to put on it so I can integrate it into my own thinking more effectively.

So in a way, "lacking a mental model" is a meta-pattern that I keep running into. Maybe "lacking a mental model" already is the mental model that describes this situation sufficiently; in that case, what I'm lacking is a proper strategy for dealing with the situation when it comes up, rather than a better term. Right now I'm writing a LessWrong post in order to find answers. Once in the past I asked in a LessWrong Telegram group about a similar case and got a helpful answer. It would be even better to have a good way to search the internet for mental models / terms / concepts that match the pattern I have in mind. But if I lack the word to properly describe the thing, and my best way of describing it is to list a bunch of examples, then googling isn't very viable.

For this second part, I'd be interested not only in how others think about this particular issue, but also in how you approach resolving it when it happens. Additionally, good sources for extensive lists of useful mental models would be helpful; examples I'm aware of are conceptually.org and the list from Farnam Street. There also seems to be a relevant subreddit.

1 comment:

For #1, you could call them something like "usefully bad policies."

For #2, it sounds like the first step of the scientific method. My only other suggestion would be to find other interested minds (like on LessWrong) to discuss the pattern with and to collaborate on forming a new mental model and label. Someone else may already be predisposed to thinking with a mental model that predicts or explains the phenomenon, so they could help you get to a clear conceptual anchor more quickly.