This is another essay about naming things, dichotomies, and where subtle mix-ups can lead to errors. More specifically, I’d like to draw your attention to situations where a very real conceptual commonality is present between several problems, but this commonality doesn’t actually provide much insight into a unified solution for the aforementioned problems.

Concretely, consider time-inconsistent preferences, the well-documented phenomenon where we relent to in-the-moment urges, often for a temptation we will later regret. For example, a student might put off studying until the last moment, choosing instead to read a riveting novel. Or a partygoer might drink far more than they can handle, knowing they’ll soon regret it.
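Behavioral economics often formalizes this pattern with quasi-hyperbolic ("beta-delta") discounting, where immediate rewards get full weight and all delayed rewards take an extra penalty. Here is a minimal sketch (the payoff numbers and parameter values are illustrative assumptions, not anything from a real study) showing how the same preferences can flip as a temptation gets closer:

```python
def present_value(reward, delay, beta=0.5, delta=0.9):
    """Quasi-hyperbolic (beta-delta) discounting: an immediate reward
    is valued at face value, while any delayed reward is discounted
    by beta * delta**delay."""
    if delay == 0:
        return reward
    return beta * (delta ** delay) * reward

# A week out, studying (payoff 10, eight days away) beats the
# novel (payoff 6, seven days away)...
assert present_value(10, 8) > present_value(6, 7)   # ~2.15 > ~1.43

# ...but on the day itself, the immediate temptation wins.
assert present_value(6, 0) > present_value(10, 1)   # 6 > 4.5
```

The preference reversal falls out of the `beta` penalty applying to everything except "right now", which is one standard way of modeling why the in-the-moment choice contradicts the earlier plan.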

In both of these cases, there is indeed something we can abstract from the nature of each situation—a person does X, soon regrets it, and wishes they had done Y instead. My claim here is that “time-inconsistent preferences” form a descriptive classification: they can help us see the larger shape of what’s going on, but they don’t tell us how to solve the general problem.

Or, more specifically, I claim that in these situations where you’ve got a descriptive classification, it’s actually the specific details (and not the ability to recognize that you’re engaging in a general phenomenon) which provide the most leverage towards solving your problem.

In the above two examples, it might be that our struggling student needs to reexamine their priorities. Perhaps the regret is misplaced, and doing poorly on the upcoming test isn’t actually that big of a deal. Or perhaps our student could rearrange their schedule and study with a friend to shave off some of the aversion.

Notice that this ends up looking quite different from what our overzealous partygoer might want to do. Our partygoer may want to consider the circumstances that brought them to said party in the first place; it might be the case that avoiding certain triggers lets them sidestep potential binge opportunities entirely.

The point is, there’s a mental misstep that can happen where simply being able to identify the generalized principle at work gives the false impression that you also know how to solve the problem. But the two are very much independent: in these cases, the generalized principle is a descriptive classification, not one that focuses on actions or implementation.

It might seem like I’m splitting hairs here—one could read the above as the sort of argument where I claim that, well, technically everything is implementation-specific because at some point, you’ll always need to get specific. After all, you can’t directly act on advice to “remove triggering environmental cues” (unless you’ve got a weird set of billiard firearm fauna).

And I’m not trying to win by technicalities or definitions. Having a sense of the general shape of the problem you’re facing can be helpful. Knowing what sorts of general solutions tend to work can provide a template you can fill in with your environment-specific details. It can be a good springboard for brainstorming ways forward.

EX: “Okay, I know that habit formation works best with a strong, related sensory cue. What’s something related to flossing that I could use…?”

But I still think that this sort of conceptual muddling can be pernicious, especially in the related case where two tasks seem similar despite having vastly different effects. So let’s pivot to a slightly different situation: one where your brain, faced with an apparent conceptual similarity between two tasks, assumes they’re also similar in action, and then defaults to the easier one.

For example, Hunting for Practicality is about how, even when we try to internalize advice, it often gets cached in our brains in a way that’s akin to declarative semantic memory—we end up representing how the concepts are linked to each other and perhaps what properties they hold.

But, really, the information we should be trying to internalize should be procedural in nature—we care about how the advice can actually affect our actions in practice.

The default is to represent the information as a concept map; that’s a rather simple translation of the information presented to you that doesn’t require much additional effort. Trying to actively consider how your future actions will change as a result of heeding the advice in question is more involved.

And in the absence of other factors, the less effortful option wins out.

As a second related example, In Defense of the Obvious is about how merely receiving advice can set off our brain’s dismissal signals too quickly when it sounds like something we’ve heard a million times before. Yet the advice is often still valuable even when it pattern-matches to “boring” or “obvious”. Overriding the immediate dismissal response and acting on the advice anyway is often the better response.

Once again, verifying whether or not you’ve heard such advice before is the easier task; noting the dismissal, filing it away, and then looking inward to see if you’re actually doing said obvious advice is the more effortful one.

As a third example, consider a student trying to study math. They have a couple of choices: they could read through the textbook and trace through the examples, making sure they can follow each step. Or they could cover up the example problem and try to work through it themselves.

Reading through the textbook can provide the illusion of “exercising your math muscles”, but it’s the actual act of trying to solve problems which improves your ability to solve problems. And of course, doing the real math is what’s harder in terms of time and effort involved.

The issue with all three of these examples seems to be about the mental labels we use. Both the ineffective option and the active option can fall under the same category (e.g. “studying math”), despite their major differences.

(The overall pattern here seems to be that of the Recognizing vs Generating distinction: roughly speaking, it’s much more difficult to put the pieces together at first than to merely verify that the pieces fit.)
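The same asymmetry shows up in computing: checking a proposed answer is often vastly cheaper than finding one. A toy sketch using subset-sum (the function names and numbers here are my own, purely illustrative; the verifier assumes distinct elements):

```python
from itertools import combinations

def verify(nums, subset, target):
    """Recognizing: checking a proposed answer is one pass and a sum
    (assumes nums has distinct elements)."""
    return set(subset) <= set(nums) and sum(subset) == target

def generate(nums, target):
    """Generating: finding an answer means searching the space of
    subsets, which grows exponentially with len(nums)."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
solution = generate(nums, 9)        # finds [4, 5]
assert verify(nums, solution, 9)    # checking it is trivial
```

Verifying runs in linear time while generating brute-forces an exponential space, which is one concrete way the "fit the pieces" vs "check the fit" gap can be made precise.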

One view is that this sort of behavior is an example of self-signaling. After all, it’s much easier to feel productive than to actually be productive. I also think that, on some level, your brain thinks that you really can get all of the same benefits by doing the easier thing. In this case, your brain is hopeful and also wrong.

There are two takeaways here:

One is the importance of putting in deliberate, mindful effort, a thesis I hope to expound on in a later post.

The second one is perhaps a more familiar variant on the (now) well-worn phrase “the map is not the territory”. In this case, your ontology is not the reality. Inferences you make based on similarities may not transfer to other inferences of a similar type.

(Aka “Similarity-based connections are not themselves connected by similarities!”)
