Epistemic status: more confident in the conclusion than in any particular model.
Suppose we have a fire insurance company with $2.5M in monthly revenue, and claims following a power law:
- Most months, claims are low
- Once every ~1 year, they see around $5M in claims
- Once every ~10 years, they see around $25M in claims
- In general, every ~N^1.5 months, they see around N million dollars of claims, for large N.
Ignoring bankruptcy laws and other breakdowns of the model, what’s the expected profit of this company in a random month?
Correct answer: negative infinity. This is a classic black swan scenario: there’s a well-defined distribution of always-finite events with infinite expected value. Sooner or later this company is going to go broke.
(Math, for those who partake: when 1 < α < 2, the integral ∫₁^∞ x^(−α) dx is finite but the integral ∫₁^∞ x · x^(−α) dx is infinite. So, if events have probability density proportional to x^(−α), then they'll have a well-defined probability distribution, they'll always be finite, but they'll have infinite expectation. In the fire example, a size-N event every ~N^1.5 months means density proportional to x^(−3/2), i.e. α = 3/2.)
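To make the divergence concrete, here's a minimal sketch. It assumes the density is normalized to p(x) = 0.5·x^(−3/2) on [1, ∞) (claim sizes in millions of dollars, the α = 3/2 case from the setup); under that assumption the mean truncated at a cap M works out to √M − 1, which grows without bound, and inverse-CDF sampling shows the same instability empirically.

```python
import random

# Assumed model: claim sizes (in $M) with density p(x) = 0.5 * x**-1.5
# on [1, infinity) -- the alpha = 3/2 case from the setup, normalized.

def truncated_mean(cap):
    # E[X; X <= cap] = integral from 1 to cap of x * 0.5 * x**-1.5 dx
    #                = sqrt(cap) - 1
    return cap ** 0.5 - 1

for cap in (10**2, 10**4, 10**6):
    print(cap, truncated_mean(cap))  # grows without bound as cap grows

# Empirical check via inverse-CDF sampling: F(x) = 1 - x**-0.5, so X = U**-2.
random.seed(0)

def sample_mean(n):
    return sum((1 - random.random()) ** -2 for _ in range(n)) / n

for n in (10**3, 10**5):
    print(n, sample_mean(n))  # unstable: dominated by rare huge draws
```

The sample mean never settles down, no matter how many months of data you collect — each new record-sized draw yanks it upward again.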
Let’s change the scenario a bit - let’s make our insurance company more proactive. They want to go out and prevent those big fires which eat into their profits. Whenever there’s a big fire, they find some way to prevent that sort of fire in the future. Now, different sizes of fire tend to result from different kinds of problems; preventing the once-a-year fires doesn’t really stop the once-a-decade or once-a-century fires. But within a few years, the stream of roughly-once-a-year $5M claims has all but disappeared. Within a few decades, the stream of roughly-once-a-decade $25M claims has all but disappeared.
Iterative improvement - fixing problems as they come up - has eliminated 95% of the fires. Now what’s the expected profit of this company in a random month?
Correct answer: still negative infinity. The black swans were always the problem, and they haven’t been handled at all. If anything, the problem is worse, because now the company has eliminated most of the “warning bells” - the more-frequent fires which are big but not disastrous.
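One way to see why removing the frequent fires doesn't help (a sketch under the same assumed normalized density, p(x) = 0.5·x^(−3/2) on [1, ∞)): zero out every claim below some threshold k — say the prevented fires up to $25M — and the expectation contributed by claims up to a cap M is √M − √k, which still diverges as M grows. Removing the head of the distribution barely changes the total.

```python
def tail_mean(k, cap):
    # Expected claims ($M) from events with size in [k, cap], under the
    # assumed density p(x) = 0.5 * x**-1.5:
    #   integral from k to cap of x * 0.5 * x**-1.5 dx = sqrt(cap) - sqrt(k)
    return cap ** 0.5 - k ** 0.5

# All fires below $25M prevented (k = 25): the expectation still diverges.
for cap in (10**2, 10**4, 10**6):
    print(cap, tail_mean(25, cap))
```

Up to a $10^6M cap, preventing everything below $25M saves a constant √25 − 1 = $4M of expected claims while the remaining tail contributes $995M and counting — the head was never where the expectation lived.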
Moral of the story: when the value is in the long tail, iterative improvement - i.e. fixing problems as they come up - cannot unlock the bulk of the value. Long tails cannot be handled by iterative feedback and improvement.
What this really looks like
The fire insurance example seems a bit unrealistic. In the real world, things like bankruptcy laws and government responses to extreme disasters would kick in, and our fire insurance company wouldn’t actually have a valuation of negative infinity dollars.
Nonetheless, I do think the basic idea holds up: long tails cannot be handled by iterative improvement. A fire insurance company might dodge that problem by passing responsibility for the longest part of the tail to the government, but there are plenty of value-in-the-tail scenarios where that won’t work.
Value of the Long Tail opens with the example of a self-driving car project. If the car is safe 99% of the time - i.e. a human driver only needs to intervene to prevent an accident on one trip in a hundred - then this car will generate very little of the value of a full self-driving car.
What happens when we apply iterative improvement to this sort of project? We drive the car around a bunch, and every time a problem comes up, we fix it. Soon we’ve picked the low-hanging fruit, and the remaining problems are all fairly complicated; failures arise from interactions between half a dozen subsystems. If we have 100 subsystems, and any given six can interact in novel ways to generate problems, then that’s (100 choose 6) ≈ 1.2 billion qualitatively distinct places where problems can occur. If 30% of our problems look like this, then we can spend millions of man-hours iteratively fixing problems as they come up and never even make a dent - and that’s not even counting how much time the cars must be driven around to discover each problem!
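The combinatorics here are easy to check with Python's standard library (the 100-subsystem, six-way-interaction numbers are the illustrative figures assumed in the text):

```python
from math import comb

# Unordered choices of 6 interacting subsystems out of 100:
print(comb(100, 6))   # 1,192,052,400 -- over a billion distinct combinations

# Counting ordered six-tuples with repetition instead gives the larger figure:
print(100 ** 6)       # 1,000,000,000,000 -- one trillion
```

Either way of counting, the number of qualitatively distinct failure sites dwarfs any feasible test-and-patch budget.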
The real world is high-dimensional. Rare problems are an issue in practice largely because high-dimensional spaces have exponentially many corners, and therefore exponentially many corner-cases. Each individual problem is rare and qualitatively different from other problems - patching one won’t fix the others. That means iterative feedback and improvement won’t ever push the rate of failures close to zero.
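As a toy illustration of "exponentially many corners": a d-dimensional box has 2^d corners, since each coordinate independently sits at one of two extremes.

```python
# Corners of a d-dimensional box: each of d coordinates is at one of two
# extremes, so a box in d dimensions has 2**d corners.
for d in (3, 10, 100):
    print(d, 2 ** d)
```

By d = 100 the count exceeds 10^30 — visiting even a vanishing fraction of the corner-cases one at a time is hopeless.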