In "Things I Learned Writing The Lockdown Post", Scott Alexander describes a really tricky issue that arises when trying to quantify the effects of a policy:

This question was too multi-dimensional. As in, you could calculate everything right according to some model, and then someone else could say "but actually none of that matters, the real issue is X", and you would have a hard time proving it wasn't.

A long time ago, I remember being asked whether banning marijuana was good or bad. I spent a long time figuring out the side effects of marijuana, how addictive it was, how many people got pain relief from it, how many people were harmed by the War on Drugs, etc - and it turned out all of this was completely overwhelmed by the effects of deaths from intoxicated driving. If even a few people drove on marijuana and crashed and killed people, that erased all its gains; if even a few people used marijuana instead of alcohol and then drove while less intoxicated than they would have been otherwise, that erased all its losses. This was - "annoying" is exactly the right word - because what I (and everyone else) wanted was a story about how dangerous and addictive marijuana was vs. how many people were helped by its medical benefits, and none of that turned out to matter at all compared to some question about stoned driving vs. substituting-for-drunk-driving, which nobody started out caring about.

It might actually be even worse than that, because there was some hard-to-quantify chance that marijuana decreased IQ, and you could make an argument that if there was a 5% chance it decreased IQ by let's say 2 points across the 50% of the population who smoked pot the most, and you took studies about IQ vs. job success, criminality, etc, really seriously, then lowering the national IQ 1 point might have been more important than anything else. But this would be super-annoying, because the studies showing that it decreased IQ were weak (and you would have to rely on a sort of Pascal-type reasoning) and people reading something on the costs/benefits of marijuana definitely don't want to read something mildly politically incorrect trying to convince them that IQ is super important. And if there are twenty things like this, then all the actually interesting stuff people care about is less important than figuring out which of the twenty 5%-chance-it-matters things actually matters, and it's really tempting to just write it off or put it in a "Future Research Needed" section, but that could be the difference between your analysis being right vs. completely wrong and harmful.
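[To make the arithmetic in that scenario explicit, as an editorial unpacking of the numbers Scott states: a 2-point drop among the 50% of the population who smoke the most is a 1-point drop in the national average,

$$2 \text{ points} \times 0.5 = 1 \text{ point},$$

and the Pascal-type move is weighting that by the 5% chance that the weak studies are right, for an expected national drop of $0.05 \times 1 = 0.05$ points.]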

The same was true here. How do we quantify the effect of Long COVID? Who knows? Given the giant pile of bodies, maybe we just round COVID off to the number of deaths it causes, and ignore this mysterious syndrome where we've only barely begun the work of proving it exists? But under certain assumptions, the total suffering caused by Long COVID is worse than the suffering caused by the acute disease, including all the deaths!

There is more, but this covers the phenomenon I’m curious about. Let me try to describe the problem in general terms:

Important policies have so many effects that it is nearly impossible to keep track of them all. In addition, some effects tend to dwarf all others, so it is critical to catch every last one. (Perhaps they follow a Paretian distribution?) It follows that any quantitative analysis of policy effects tends to be seriously flawed.
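A toy simulation makes the Paretian intuition concrete. This is a minimal sketch, assuming (purely for illustration) that effect magnitudes follow a heavy-tailed Pareto distribution; the shape parameter and the count of twenty effects are arbitrary choices, not estimates of anything:

```python
import numpy as np

rng = np.random.default_rng(0)

# Twenty effect magnitudes from a classic Pareto distribution with
# shape alpha = 1.1 (the closer alpha is to 1, the heavier the tail).
# numpy's pareto() draws from the Lomax form, so we add 1 to shift it.
effects = rng.pareto(1.1, size=20) + 1

total = effects.sum()
largest = effects.max()

print(f"share of total from the single largest effect: {largest / total:.0%}")
print(f"estimate that misses the largest effect: {(total - largest) / total:.0%} of the truth")
```

Under a tail this heavy, the single largest effect routinely accounts for a large share of the total, so an analysis that happens to miss it is not just imprecise but wrong in exactly the way the quote describes.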

Do we already have a term for this problem? It reminds me of moral cluelessness as well as known and unknown unknowns, but none of those seems to fit the bill exactly.

4 Answers

johnswentworth · 90

Important policies have so many effects that it is nearly impossible to keep track of them all. In addition, some effects tend to dwarf all others, so it is critical to catch every last one. (Perhaps they follow a Paretian distribution?) It follows that any quantitative analysis of policy effects tends to be seriously flawed.

I don't think this is the right way to frame the problem.

It is true that even unimportant policies have so many effects that it is de facto impossible to calculate them all. And it is true that one or a few effects tend to dwarf all others. But that does not mean that it's critical to catch every last one. The effects which dwarf all others will typically be easier to notice, in some sense, precisely because they are big, dramatic, important effects. But "big/important effect" is not necessarily the same as "salient effect", so in order for this to work in practice, we have to go in looking for the big/important effects with open eyes rather than just asking the already-salient questions.

For instance, in the pot/IQ example, we can come at the problem from either "end":

  • What things tend to be really important to humans, in the aggregate, and how does pot potentially impact those? Things like IQ, long-term health, monetary policy, technological development, countries coming out of poverty, etc, are "big things" in terms of what humans care about, so we should ask if pot potentially has predictable nontrivial effects on any of them.
  • On what things does pot have very large impact, and how much do we care? Pot probably has a big impact on things like recreational activity or how often people are sober. So, how do those things impact the things we care about most?

If people think about the problem in a principled way like this, then I expect they'll come up with hypotheses like the pot-IQ thing. There just aren't that many things which are highly important to humans in the aggregate, or that many things on which any given variable has a large expected effect. (Note the use of "expected effect" - variables may have lots of large effects via the butterfly effect, but that's relevant to decision-making only insofar as we can predict the effects.)

The trick is that we have to think about the problem in a principled way from the start, not just get caught up in whatever questions other people have already brought to our attention.
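Read as a procedure, this two-ended search runs over a small, enumerable space, and can even be caricatured in code. The sketch below is a toy formalization; every list entry, weight, and link strength is an illustrative placeholder, not a real estimate:

```python
# Toy formalization of the two-ended search: cross "things humans care
# about in aggregate" with "things the policy strongly moves", keep the
# pairs with a plausible causal link, and review the top-scoring ones.

# End 1: aggregate human concerns, with made-up importance weights.
concerns = {"IQ": 0.9, "long-term health": 0.8, "traffic deaths": 0.7}

# End 2: variables pot plausibly moves a lot, with made-up impact scores.
policy_impacts = {"sobriety rate": 0.9, "recreation": 0.6}

# Hypothesized causal links (variable -> concern) with made-up strengths;
# pairs not listed are assumed to have no predictable expected effect.
links = {
    ("sobriety rate", "IQ"): 0.3,
    ("sobriety rate", "traffic deaths"): 0.8,
    ("recreation", "long-term health"): 0.2,
}

# Score each hypothesis and sort, so attention goes to the biggest
# expected effects first rather than to the already-salient questions.
hypotheses = sorted(
    ((policy_impacts[var] * concerns[concern] * strength, var, concern)
     for (var, concern), strength in links.items()),
    reverse=True,
)

for score, var, concern in hypotheses:
    print(f"{score:.2f}  {var} -> {concern}")
```

The point of the caricature is just that the candidate space is small enough to enumerate deliberately; the hard part is estimating the link strengths, which is exactly where pot-IQ-style surprises live.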

johnswentworth makes the great point that "some effects tend to dwarf all others, so it is critical to catch every last one" assumes that we can't identify the big effects early. If people look around with open eyes, they can usually pick out the most relevant effects first.

What yhoiseth’s framing gets right is that big effects are sometimes not salient, even for people with open eyes. And especially when effects are hard to directly observe or estimate with certainty because they’re indirect in nature (like substitution effects), not only are...

Richard_Kennaway · 30

It seems similar to what Andrew Gelman has called the piranha problem (two links there). Also related is Gelman's kangaroo.

Kenny · 10

Arnold Kling has written a bit about 'causal density', which seems pretty relevant.

Tao Lin · 10

We could dub this "Long Tail Externalities" - the idea that most of the impact comes from a few indirect effects, and sometimes the more indirect the effect, the bigger it is - for instance, most policies might impact the future mainly through AI safety.

1 comment

Related to the curse of dimensionality.