Contains spoilers for the worldbuilding of Vernor Vinge's "Zones of Thought" universe.

Based on Eliezer's vision of the present from 1900.


In the Zones of Thought universe, there is a cycle of civilization: civilizations rise from stone-age technology, gradually accumulating more technology, until they reach the peak of technological possibility. At that point, the only way they can improve society is by over-optimizing it for typical cases, removing slack. Once society has removed its slack, it's just a matter of time until unforeseen events force the system slightly outside of its safe parameters. This sets off a chain reaction: like dominoes falling, the failure of one subsystem causes the failure of another and another. This catastrophe either kills everyone on the planet, or sets things so far back that society has to start from scratch.

Vernor Vinge was writing before Nassim Taleb, but if not for that, this could well be interpreted as a reference to Taleb's ideas. Taleb mocks the big players on the stock market for betting on the typical case, and taking huge losses when "black swans" (unexpected/unanticipatable events) occur. (Taleb makes money on the stock market by taking the opposite side of these bets, betting on the unknown unknowns.)

Taleb ridicules Bayesians for their tendency to rely on oversimplified models which assume the future will look like the past. Instead he favors Popperian epistemology and ergodicity economics.

Indeed, naive Bayesians do run the risk of over-optimizing, eliminating slack, and assuming too readily that the future will be like the past. On a whole-society level, it makes sense that this kind of thinking could eventually lead to catastrophe (and plausibly already has, in the form of the 2008 housing crash).

However, human nature and modern experience lead me to think that the opposite failure mode might be more common.

Taleb advises us to design "antifragile" systems which, like him, bet on the atypical and get stronger through failure. This means designing systems with lots of slack, modularity, redundancy, and multiple layers (think of a laptop, which has a hard chassis to protect and support the vital electronics, often a moderately hard plastic protective case on top of that, and then a soft outer case for transport). It means responding to black swans by building new systems which mitigate, or (even better) take advantage of, the new phenomena.

But when I look around at society (at least, through my Bayesian-biased lens), I see it doing too much of that:

  • The over-cautious FDA seemingly kills a lot more people on average (compared to a less-cautious alternative) in the name of avoiding risks of severe unanticipated drug side-effects. And people are largely comforted by this. A typical healthy individual would prefer (at least in the short term) to be very sure that the few drugs they need are safe, as opposed to having a wider selection of drugs.
  • In response to the 9/11 attacks, the government spent huge amounts of money on the TSA and other forms of security. It's possible that this has been a huge waste of money. (The TSA spends about $5.3 billion a year on airline security. It's difficult to put a price on 9/11, but quick googling says total insurance payouts were around $40 billion. So, very roughly, the utilitarian question is whether the TSA prevents a 9/11-scale attack every 8 years; a rough version of this arithmetic is sketched after this list.) On the other hand, many people are probably glad for the TSA even if the utilitarian calculation doesn't work out.
  • Requiring a license or more education may be an attempt to avoid the more extreme negative outcomes; for example, I don't know the political motivations which led to requiring licenses for taxi drivers or hairdressers, but I imagine vivid horror stories were required to get people sufficiently motivated.
  • Regulation has a tendency to respond to extreme events like these, attempting to make those outcomes impossible while ignoring how much value is sacrificed in typical outcomes. Since people don't really think in numbers, the actual frequency of extreme events probably isn't weighed very heavily.
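For concreteness, here is the crude break-even arithmetic the TSA bullet gestures at, taking the $40 billion insurance figure as a stand-in for the cost of one attack (a rough proxy, as the comments below point out):

$$
\frac{\text{cost of one 9/11-scale attack}}{\text{annual TSA budget}} \approx \frac{\$40\text{ billion}}{\$5.3\text{ billion/year}} \approx 7.5\text{ years}
$$

On this accounting, the TSA breaks even only if it prevents roughly one such attack every 7-8 years; anything rarer and the money would, naively, have been better spent elsewhere.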

Keep in mind that it's perfectly consistent for there to be lots of examples of both kinds of failure (lots of naive utilitarianism which ignores unknown unknowns, and lots of overly risk-averse non-utilitarianism). A society might even die from a combination of both problems at once. I'm not really claiming that I'd adjust society's bias in a specific direction; I'd rather have an improved ability to avoid both failure modes.

But just as Vernor Vinge painted a picture of slow death by over-optimization and lack of slack, we can imagine a society choking itself to death with too many risk-averse regulations. It's harder to reduce the number of laws and regulations than it is to increase them. Extreme events, or fear of extreme events, create political will for more precautions. This creates a system which punishes action.

One way I sometimes think of civilization is as a system of guardrails. No one puts up a guardrail if no one has gotten hurt. But if someone has gotten hurt, society is quick to set up rails (in the form of social norms, or laws, or physical guardrails, or other such things). So you can imagine the physical and social/legal/economic landscape slowly being tiled with guardrails which keep everything within safe regions.

This, of course, has many positive effects. But the landscape can also become overly choked with railing (and society's budget can be strained by the cost of rail maintenance).

15 comments

In the Zones of Thought universe, there is a cycle of civilization: civilizations rise from stone-age technology, gradually accumulating more technology, until they reach the peak of technological possibility. At that point, the only way they can improve society is by over-optimizing it for typical cases, removing slack. Once society has removed its slack, it's just a matter of time until unforeseen events force the system slightly outside of its safe parameters. This sets off a chain reaction: like dominoes falling, the failure of one subsystem causes the failure of another and another. This catastrophe either kills everyone on the planet, or sets things so far back that society has to start from scratch.

Reminds me of this:

In 1988, Joseph Tainter wrote a chilling book called The Collapse of Complex Societies. Tainter looked at several societies that gradually arrived at a level of remarkable sophistication then suddenly collapsed: the Romans, the Lowlands Maya, the inhabitants of Chaco canyon. Every one of those groups had rich traditions, complex social structures, advanced technology, but despite their sophistication, they collapsed, impoverishing and scattering their citizens and leaving little but future archeological sites as evidence of previous greatness. Tainter asked himself whether there was some explanation common to these sudden dissolutions.

The answer he arrived at was that they hadn’t collapsed despite their cultural sophistication, they’d collapsed because of it. Subject to violent compression, Tainter’s story goes like this: a group of people, through a combination of social organization and environmental luck, finds itself with a surplus of resources. Managing this surplus makes society more complex—agriculture rewards mathematical skill, granaries require new forms of construction, and so on.

Early on, the marginal value of this complexity is positive—each additional bit of complexity more than pays for itself in improved output—but over time, the law of diminishing returns reduces the marginal value, until it disappears completely. At this point, any additional complexity is pure cost.

Tainter’s thesis is that when society’s elite members add one layer of bureaucracy or demand one tribute too many, they end up extracting all the value from their environment it is possible to extract and then some.

The ‘and then some’ is what causes the trouble. Complex societies collapse because, when some stress comes, those societies have become too inflexible to respond. In retrospect, this can seem mystifying. Why didn’t these societies just re-tool in less complex ways? The answer Tainter gives is the simplest one: When societies fail to respond to reduced circumstances through orderly downsizing, it isn’t because they don’t want to, it’s because they can’t.

In such systems, there is no way to make things a little bit simpler – the whole edifice becomes a huge, interlocking system not readily amenable to change. Tainter doesn’t regard the sudden decoherence of these societies as either a tragedy or a mistake—”[U]nder a situation of declining marginal returns collapse may be the most appropriate response”, to use his pitiless phrase. Furthermore, even when moderate adjustments could be made, they tend to be resisted, because any simplification discomfits elites.

When the value of complexity turns negative, a society plagued by an inability to react remains as complex as ever, right up to the moment where it becomes suddenly and dramatically simpler, which is to say right up to the moment of collapse. Collapse is simply the last remaining method of simplification.

The ‘and then some’ is what causes the trouble. Complex societies collapse because, when some stress comes, those societies have become too inflexible to respond. In retrospect, this can seem mystifying. Why didn’t these societies just re-tool in less complex ways? The answer Tainter gives is the simplest one: When societies fail to respond to reduced circumstances through orderly downsizing, it isn’t because they don’t want to, it’s because they can’t.

In such systems, there is no way to make things a little bit simpler – the whole edifice becomes a huge, interlocking system not readily amenable to change.

I can't help but think of The Dictator's Handbook and how all of the societies examined were probably very far from democratic. The over-extraction of resources was probably relatively aligned with the interests of the ruling class. "The king neither hates you nor loves you, but your life is made up of value which can be extracted to keep the nobles from stabbing him in the back for another few years"

If you are a smart person I suggest working in domains where the regulators have not yet shut down progress. In many domains, if you want to make progress most of your obstacles are going to be other humans. It is refreshingly easy to make progress if you only have to face the ice.

That's ambitious without an ambition. Switching domains stops your progress in the original domain completely, so it doesn't make it easier to make progress, unless the domain doesn't matter and only fungible "progress" does.

"Progress" can be a terminal goal, and many people might be much happier if they treated it as such. I love the fact that there are fields I can work in that are both practical and unregulated, but if I had to choose between e.g. medical researcher and video-game pro, I'm close to certain I'd be happier as the latter. I know many people which basically ruined their lives by choosing the wrong answer and going into dead-end fields that superficially seem open to progress (or to non-political work).

Furthermore, fields bleed into each other. Machine learning might well not be the optimal paradigm in which to treat <gestures towards everything interesting going on in the world>, but it's the one that works for cultural reasons, and it will likely converge to some of the same good ideas that would have come about had other professions been less political.

Also, to some extent, the problem is one of "culture", not regulation. At the end of the day, someone could always have sold a COVID vaccine as a supplement, but who'd have bought it? Anyone is free to do their own research into anything, but who'll take them seriously?... etc

At the end of the day, someone could always have sold a COVID vaccine as a supplement, but who'd have bought it?

No, the FDA would very likely have shut them down. You can't simply evade the FDA by saying that you are selling a supplement. 

It only assumes there are a lot of domains in which you would be happy to make progress. In addition, success is at least somewhat fungible across domains. And it is much easier to cut red tape once you already have resources and a track record (possibly in a different domain).

Don't start out in a red-tape domain unless you are ready to fight off the people trying to slow you down. This requires a lot of money, connections, and lawyers and you still might lose. Put your talents to work in an easier domain, at least to start.

I think you're far too charitable toward the FDA, TSA et al. I submit that they simultaneously reduce slack, increase fragility ... and reduce efficiency. They are best modeled as parasites, sapping resources from the host (the general public) for their own benefit.

[edit] Something I've been toying with is thinking of societal collapse as a scale-free distribution [1], from micro-collapse (a divorce, perhaps?) to Roman Empire events. In this model, a civilization-wide collapse doesn't have a "cause" per se, it's just a property of the system.

[1] https://en.wikipedia.org/wiki/Self-organized_criticality

Oh, and for the FDA/TSA/etc, apologies for not being up to writing a more scathing review. This has to do with issues mentioned here. I welcome your more critical analysis if you'd want to write it. I sort of wrote the minimal thing I could. I do want to stand by the statement that it isn't highly fine-tuned utilitarian optimization by any stretch (i.e., it's not at all like Vernor Vinge's vision of over-optimization), and also that many relevant people actively prefer for the TSA to exist (e.g., politicians and people on the street will often give defenses like "it helps avert these extreme risks").

I'm not sure we disagree on anything. I don't mean to imply that the TSA increases slack... Perhaps I over-emphasized the opposite-ness of my two scenarios. They are not opposite in every way.

Death by No Slack:

Utilitarian, puritan, detail-oriented (context-insensitive) thinking dominates everything. Goodhart's curse is everywhere. Performance measurements. Tests. All procedures are evidence-based. Empirical history has a tendency to win the argument over careful reasoning (including in conversations about catastrophic risks, which means risks are often underestimated). But we have longstanding institutions which have become highly optimized. Things have been in economic equilibrium for a long time. Everything is finely tuned. Things are close to their Malthusian limit. People wouldn't know how to deal with change.

Death by Red Tape:

Don't swim 30 minutes after eating. Don't fall asleep in a room with a fan going. The ultimate nanny state, regulating away anything which has been associated with danger in the past, unless it's perceived as necessary for society to function, in which case it comes with paperwork and oversight. Climbing trees is illegal unless you have a license. If you follow all the rules you have a very high chance of dying peacefully in your bed. There's a general sense that "follow all the rules" means "don't do anything weird". Any genuinely new idea gets the reaction "that sounds illegal". Or "the insurance/bank/etc won't like it".

Hm, these scenarios actually have a lot of similarities...

I like the analogy between social collapse and sand-pile collapse this implies :) [actually, rice-pile collapse] But can you say anything more about the model, eg, how you would make a simple computer simulation illustrating the dynamics? Based on your divorce example, it seems like the model is roughly "people sometimes form groups" (as opposed to "sometimes a new grain is dropped" in the rice-pile analogy). But how can we model the formation of very large groups, such as nations?
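For what it's worth, a minimal sketch of the standard Bak-Tang-Wiesenfeld sandpile (the usual toy model behind the self-organized-criticality analogy) might look like the following. It only captures the "drop a grain, count the avalanche" dynamic, not group formation, so treat it as an illustration of the baseline model rather than of the society-level version being asked about; the grid size, number of drops, and the drop_and_relax helper are arbitrary, illustrative choices.

```python
import numpy as np

# Minimal Bak-Tang-Wiesenfeld sandpile: grains are dropped one at a time onto a
# grid; any cell holding 4 or more grains topples, sending one grain to each
# neighbour (grains toppling off the edge leave the system). Avalanche sizes end
# up heavy-tailed: most drops do nothing, a few trigger system-spanning cascades.

def drop_and_relax(grid, x, y):
    """Add one grain at (x, y), relax the pile, and return the avalanche size."""
    grid[x, y] += 1
    avalanche = 0
    unstable = [(x, y)]
    while unstable:
        i, j = unstable.pop()
        if grid[i, j] < 4:
            continue  # this cell may already have been relaxed via another path
        grid[i, j] -= 4
        avalanche += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                grid[ni, nj] += 1
                unstable.append((ni, nj))
    return avalanche

rng = np.random.default_rng(0)
grid = np.zeros((50, 50), dtype=int)
sizes = [drop_and_relax(grid, *rng.integers(0, 50, size=2)) for _ in range(20000)]
print("largest avalanche:", max(sizes), " mean size:", sum(sizes) / len(sizes))
```

The characteristic output is that most drops cause no avalanche at all while a few cascade across much of the grid, which is the "collapse without a single cause" property being pointed at above.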

While I agree with the general point, I think this part:

(The TSA spends about $5.3 billion a year on airline security. It's difficult to put a price on 9/11, but quick googling says total insurance payouts were around $40 billion. So, very roughly, the utilitarian question is whether the TSA prevents a 9/11-scale attack every 8 years.)

is rather poorly phrased in a way that detracts from the overall message. I don't think this accurately captures either the costs of the TSA (many of which come in the form of time lost waiting in security lines or of poorly-defined costs to privacy) or the costs of 9/11 (even if we accept that the insurance payouts adequately capture the cost of lives lost, injuries, disruption, etc., there are lots of extra... let's call them 'geopolitical costs').

I agree -- I would want to account for the stress of going through TSA screening, a better accounting of lives lost, etc. Unfortunately, a better analysis would have taken up a lot more space and perhaps been a post in itself. But I'm curious to see your analysis if you think you can produce one.

My internal reasoning is more like "look, any way you run the numbers, so long as you're reasonably fair, it ain't gonna work out", but that's not exactly an argument, just a strongly held intuition.

At least in the TSA case, you have to consider that your adversary may be optimizing against you. Maybe you don't get very many attempted attacks precisely because you're putting effort into security. This isn't to say the current level of effort is optimal, just that you need more than a simple cost-benefit calculation if your intervention decisions feed back into the event distribution.

Agreed. Something something infrabayes.

But, 

  • my current estimate is that if the government were doing adversarial reasoning, they would have allocated the money much differently; they probably heavily over-reacted to the very specific attack that was observed
  • even under adversarial reasoning, a strategy isn't much good if it costs more than what it prevents

It's practically impossible to objectively assess the cost-effectiveness of the TSA, since we have no idea what the alternate universe with no TSA looks like. But we can subjectively assess.

While I fully agree with your general point that you have to compare your costs not with the current situation but with the counterfactual where you would not have incurred those costs, in the TSA case I wonder whether more regulation might also increase the chance of attacks by signaling that you care about attacks, and therefore that attacks are an effective way to hurt you.