Contains spoilers for the worldbuilding of Vernor Vinge's "Zones of Thought" universe.
Based on Eliezer's vision of the present from 1900.
In the Zones of Thought universe, there is a cycle of civilization: civilizations rise from stone-age technology, gradually accumulating more technology, until they reach the peak of technological possibility. At that point, the only way they can improve society is by over-optimizing it for typical cases, removing slack. Once society has removed its slack, it's just a matter of time until unforeseen events force the system slightly outside of its safe parameters. This sets off a chain reaction: like dominoes falling, the failure of one subsystem causes the failure of another and another. This catastrophe either kills everyone on the planet, or sets things so far back that society has to start from scratch.
Vernor Vinge was writing before Nassim Taleb; were it not for that, this could well be interpreted as a reference to Taleb's ideas. Taleb mocks the big players on the stock market for betting on the typical case and taking huge losses when "black swans" (unexpected, unanticipatable events) occur. (Taleb makes money on the stock market by taking the opposite side of these bets, betting on the unknown unknowns.)
Taleb ridicules Bayesians for their tendency to rely on oversimplified models which assume the future will look like the past. Instead he favors Popperian epistemology and ergodicity economics.
Indeed, naive Bayesians do run the risk of over-optimizing, eliminating slack, and overly assuming that the future will be like the past. On a whole-society level, it makes sense that this kind of thinking could eventually lead to catastrophe (and plausibly already has, in the form of the 2008 housing crash).
However, human nature and modern experience lead me to think that the opposite failure mode might be more common.
Taleb advises us to design "antifragile" systems which, like him, bet on the atypical and get stronger through failure. This means designing systems with lots of slack, modularity, redundancy, and multiple layers (think of a laptop, which has a hard chassis to protect and support the vital electronics, & then often has a moderately hard plastic protective case, and then is transported in a soft outer case of some kind). It means responding to black swans by building new systems which mitigate, or (even better) take advantage of, the new phenomena.
But when I look around at society (at least, through my Bayesian-biased lens) I see it doing too much of that.
- The over-cautious FDA seemingly kills a lot more people on average (compared to a less-cautious alternative) in the name of avoiding risks of severe unanticipated drug side-effects. And people are largely comforted by this. A typical healthy individual would prefer (at least in the short term) to be very sure that the few drugs they need are safe, as opposed to having a wider selection of drugs.
- In response to the 9/11 attacks, the government spent huge amounts of money on the TSA and other forms of security. It's possible that this has been a huge waste of money. (The TSA spends about $5.3 billion on airline security annually. It's difficult to put a price on 9/11, but quick googling says that total insurance payouts were $40 billion. So very roughly, the utilitarian question is whether the TSA stops a 9/11-scale attack every eight years.) On the other hand, many people are probably glad for the TSA even if the utilitarian calculation doesn't work out.
- Requiring a license or more education may be an attempt to avoid the more extreme negative outcomes; for example, I don't know the political motivations which led to requiring licenses for taxi drivers or hairdressers, but I imagine vivid horror stories were required to get people sufficiently motivated.
- Regulation has a tendency to respond to extreme events like this, attempting to make those outcomes impossible while ignoring how much value is being sacrificed in typical outcomes. Since people don't really think in numbers, the actual frequency of extreme events is probably not considered very heavily.
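The rough TSA arithmetic in the list above can be sketched as a quick back-of-the-envelope check. The figures are this post's own rough estimates (annual budget, insurance payouts as a stand-in for attack cost), not a serious accounting:

```python
# Back-of-the-envelope break-even check, using the post's rough figures.
tsa_annual_budget = 5.3e9   # dollars per year spent on airline security
attack_cost = 40e9          # rough insurance payouts for 9/11, in dollars

# Years of TSA spending that equal the cost of one 9/11-scale attack:
break_even_years = attack_cost / tsa_annual_budget
print(round(break_even_years, 1))  # prints 7.5
```

As the later comments note, both numbers are contestable (lost time in security lines, privacy costs, geopolitical costs of an attack), so this only frames the question rather than answering it.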
Keep in mind that it's perfectly consistent for there to be lots of examples of both kinds of failure (lots of naive utilitarianism which ignores unknown unknowns, and lots of overly risk-averse non-utilitarianism). A society might even die from a combination of both problems at once. I'm not really claiming that I'd adjust society's bias in a specific direction; I'd rather have an improved ability to avoid both failure modes.
But just as Vernor Vinge painted a picture of slow death by over-optimization and lack of slack, we can imagine a society choking itself to death with too many risk-averse regulations. It's harder to reduce the number of laws and regulations than it is to increase them. Extreme events, or fear of extreme events, create political will for more precautions. This creates a system which punishes action.
One way I sometimes think of civilization is as a system of guardrails. No one puts up a guardrail if no one has gotten hurt. But if someone has gotten hurt, society is quick to set up rails (in the form of social norms, or laws, or physical guardrails, or other such things). So you can imagine the physical and social/legal/economic landscape slowly being tiled with guardrails which keep everything within safe regions.
This, of course, has many positive effects. But the landscape can also become overly choked with railing (and society's budget can be strained by the cost of rail maintenance).
Reminds me of this:
I can't help but think of The Dictator's Handbook and how all of the societies examined were probably very far from democratic. The over-extraction of resources was probably relatively aligned with the interests of the ruling class. "The king neither hates you nor loves you, but your life is made up of value which can be extracted to keep the nobles from stabbing him in the back for another few years."
If you are a smart person I suggest working in domains where the regulators have not yet shut down progress. In many domains, if you want to make progress most of your obstacles are going to be other humans. It is refreshingly easy to make progress if you only have to face the ice.
That's ambitious without an ambition. Switching domains stops your progress in the original domain completely, so it doesn't make it easier to make progress, unless the domain doesn't matter and only fungible "progress" does.
"Progress" can be a terminal goal, and many people might be much happier if they treated it as such. I love the fact that there are fields I can work in that are both practical and unregulated, but if I had to choose between e.g. medical researcher and video-game pro, I'm close to certain I'd be happier as the latter. I know many people who basically ruined their lives by choosing the wrong answer and going into dead-end fields that superficially seem open to progress (or to non-political work).
Furthermore, fields bleed into each other. Machine learning might well not be the optimal paradigm in which to treat <gestures towards everything interesting going on in the world>, but it's the one that works for cultural reasons, and it will likely converge to some of the same good ideas that would have come about had other professions been less political.
Also, to some extent, the problem is one of "culture", not regulation. At the end of the day, someone could have always sold a COVID vaccine as a supplement, but who'd have bought it? Anyone is free to do their own research into anything, but who'll take them seriously?... etc.
No, the FDA would very likely have shut them down. You can't simply evade the FDA by saying that you are selling a supplement.
It only assumes there are a lot of domains in which you would be happy to make progress. In addition, success is at least somewhat fungible across domains. And it is much easier to cut red tape once you already have resources and a track record (possibly in a different domain).
Don't start out in a red-tape domain unless you are ready to fight off the people trying to slow you down. This requires a lot of money, connections, and lawyers and you still might lose. Put your talents to work in an easier domain, at least to start.
I think you're far too charitable toward the FDA, TSA et al. I submit that they simultaneously reduce slack, increase fragility ... and reduce efficiency. They are best modeled as parasites, sapping resources from the host (the general public) for their own benefit.
Something I've been toying with is thinking of societal collapse as following a scale-free distribution, from micro-collapses (a divorce, perhaps?) to Roman Empire events. In this model, a civilization-wide collapse doesn't have a "cause" per se; it's just a property of the system.
Oh, and for the FDA/TSA/etc, apologies for not being up to writing a more scathing review. This has to do with issues mentioned here. I welcome your more critical analysis if you'd want to write it. I sort of wrote the minimal thing I could. I do want to stand by the statement that it isn't highly fine-tuned utilitarian optimization by any stretch (IE it's not at all like Vernor Vinge's vision of over-optimization), and also, that many relevant people actively prefer for the TSA to exist (eg politicians and people on the street will often give defenses like "it helps avert these extreme risks").
I'm not sure we disagree on anything. I don't mean to imply that the TSA increases slack... Perhaps I over-emphasized the opposite-ness of my two scenarios. They are not opposite in every way.
Death by No Slack:
Utilitarian, puritan, detail-oriented (context-insensitive) thinking dominates everything. Goodhart's curse is everywhere. Performance measurements. Tests. All procedures are evidence-based. Empirical history has a tendency to win the argument over careful reasoning (including in conversations about catastrophic risks, which means risks are often under-estimated). But we have longstanding institutions which have become highly optimized. Things have been in economic equilibrium for a long time. Everything is finely tuned. Things are close to their Malthusian limit. People wouldn't know how to deal with change.
Death by Red Tape:
Don't swim 30 minutes after eating. Don't fall asleep in a room with a fan going. The ultimate nanny state, regulating away anything which has been associated with danger in the past, unless it's perceived as necessary for society to function, in which case it comes with paperwork and oversight. Climbing trees is illegal unless you have a license. If you follow all the rules you have a very high chance of dying peacefully in your bed. There's a general sense that "follow all the rules" means "don't do anything weird". Any genuinely new idea gets the reaction "that sounds illegal". Or "the insurance/bank/etc won't like it".
Hm, these scenarios actually have a lot of similarities...
I like the analogy between social collapse and sand-pile collapse that this implies :) [actually, rice-pile collapse] But can you say anything more about the model, eg, how you would make a simple computer simulation illustrating the dynamics? Based on your divorce example, it seems like the model is roughly "people sometimes form groups" (as opposed to "sometimes a new grain is dropped" in the rice-pile analogy). But how can we model the formation of very large groups, such as nations?
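For the sand-pile half of the analogy, at least, there is a standard toy model: the Bak-Tang-Wiesenfeld sandpile. The sketch below uses the textbook version (grid size, toppling threshold of 4, grains lost off the edge are all conventional choices, not anything from this discussion); it illustrates "collapses at every scale without a distinct cause", though it says nothing about how to model group formation:

```python
import random

# Minimal Bak-Tang-Wiesenfeld sandpile: drop grains on a grid one at a
# time; any cell holding 4+ grains topples, sending one grain to each of
# its four neighbors (grains falling off the edge are lost). Avalanche
# sizes end up heavy-tailed: most drops do nothing, but occasionally a
# single grain triggers a system-wide cascade, with no special "cause".
N = 20
grid = [[0] * N for _ in range(N)]

def topple(grid):
    """Relax the grid after a drop; return the avalanche size (# topplings)."""
    size = 0
    unstable = [(r, c) for r in range(N) for c in range(N) if grid[r][c] >= 4]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:      # may have been relaxed already
            continue
        grid[r][c] -= 4
        size += 1
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < N and 0 <= nc < N:
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
    return size

random.seed(0)
avalanches = []
for _ in range(20000):
    r, c = random.randrange(N), random.randrange(N)
    grid[r][c] += 1
    avalanches.append(topple(grid))

# Discard the burn-in before the pile self-organizes to criticality.
tail = avalanches[10000:]
print("largest avalanche:", max(tail))
print("fraction of drops causing no topple:", sum(s == 0 for s in tail) / len(tail))
```

Adapting this to the social analogy would mean replacing "drop a grain" with something like "people sometimes form or join groups", which is exactly the part the comment above is asking about.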
While I agree with the general point, I think this part:
is rather poorly phrased in a way that detracts from the overall message. I don't think this accurately captures either the costs of the TSA (many of which come in the form of lost time waiting in security or of poorly-defined costs to privacy) or the costs of 9/11 (even if we accept that the insurance payouts adequately capture the cost of lives lost, injuries, disruption, etc. there are lots of extra...let's call them 'geopolitical costs').
I agree -- I would want to account for stress from going through TSA screening, a better accounting of life lost, etc. Unfortunately, a better analysis would have taken up a lot more space and perhaps been a post in itself. But I'm curious to see your analysis if you think you can produce one.
My internal reasoning is more like "look, any way you run the numbers, so long as you're reasonably fair, it aint gunna work out" but that's not exactly an argument, just a strongly held intuition.
At least in the TSA case, you have to consider that your adversary may be optimizing against you. Maybe you don't get very many attempted attacks precisely because you're putting effort into security. This isn't to say the current level of effort is optimal, just that you need more than a simple cost-benefit calculation if your intervention decisions feed back into the event distribution.
Agreed. Something something infrabayes.
It's practically impossible to objectively assess the cost-effectiveness of the TSA, since we have no idea what the alternate universe with no TSA looks like. But we can subjectively assess.
While I fully agree with your general point that you have to compare your costs not with the current situation but with the counterfactual where you would not have incurred those costs, in the TSA case I wonder whether more regulation might also increase the chance of attacks, by signaling that you care about attacks and therefore that attacks are effective at hurting you.