Command and Control (a 2013 book by Schlosser about nuclear weapon accidents) has some good examples of this. Some examples from my review:
Technical safety is contingent and probably matters: technical measures like one-point safety (which was almost measured incorrectly) or strong link/weak link mechanisms probably prevented some accidental nuclear explosions (which might have triggered a nuclear war). They required non-trivial technical insights, and quiet heroes to push them through.
(this is talking about nuclear weapons, in case someone doesn't know the context and doesn't want to click the links)
One of the major examples of invisible guardrails that keep our world functioning is disease control. Bubonic plague isn't extinct; it's being kept in check by plague-prevention centers all over the world, from Russia and China to Peru and the Americas.
There was also an infamous case in Victorian England, where smallpox vaccination became so effective that people decided vaccination was no longer necessary. The result was an epidemic, and it took decades to rebuild the population-level shield.
The way the public stops regarding certain illnesses as serious after significant improvements in medical treatment can be seen with rabies. The sentiment that rabies can't be that bad, since people don't die from it much anymore, is popular enough that some people view vaccination as unnecessary.
Going from the medical field back to the technical, one can recall the CrowdStrike debacle: a system failure that briefly affected everything from international flights to medical facilities. It was, rightfully, called out as a catastrophic mistake and criminal negligence by the company. Yet it was fixed in under 24 hours by the no doubt tireless work of a significant number of people, which relegated it to a blink-and-you-miss-it event.
It might not be a truly grand-scale problem, and yet it's a brilliant example of how the point of failure is highly visible and widely discussed, while the work necessary to fix it is invisible and forgotten.
Because of that, people who prevent bad outcomes often get treated as though they’ve done nothing, or even as though they were dramatic for worrying. Which is a pretty fucked up reward structure when you think about it.
This is more about the personal side for someone who does the work, versus how society should act towards them:
"The master does nothing, yet leaves nothing undone." (Tao Te Ching)
One interpretation: No one knows you saved the world or everyone thinks they did it themselves.
You have a right to perform your prescribed duties, but you are not entitled to the fruits of your actions. Never consider yourself to be the cause of the results of your activities, nor be attached to inaction. (Bhagavad Gita: Chapter 2, Verse 47)
When people trying to save the world are working for recognition or results, they quickly start Goodharting.
PS: Love the post title.
This also applies to all decision making by all elected political leaders, and is a big part of why we usually can't seem to act on an issue until it becomes a crisis, often more than once. Most people don't have the knowledge, interest, habits, or willingness to grapple with hypothetical harms deeply enough to properly evaluate them, so leaders who try to do so get punished for wasting resources on things that get deemed not real, or else punished for choosing an ineffective strategy if the harms happen anyway.
if you do that kind of work, yap about it.
Say out loud what the risk is, tell us what you did about it. Tell us what would have happened if it weren't for you.
One of the challenges with ongoing litigation is it's hard to comment about why one does things. Y'all are smart though, here are some quotes from which one can draw an inference:
"With it's new “Home Equity Investment” (“HEI”), Hometap joins a long line of predatory financiers attempting to strip home equity away from vulnerable consumers and into its own pockets."
"the Hometap HEI is riskier, underwritten with less diligence, and vastly more expensive to consumers than pre[-2008]-recession subprime mortgage loans"
"Hometap reviews no documentation of income, assets, or employment, and does not base its underwriting decisions on any of these factors."
"Hometap engages in much of the same conduct that was rightly prohibited after the subprime mortgage crisis in order to prevent a future foreclosure crisis and ensure the solvency and soundness of the economic system. In failing to either extend financing only to those homeowners who have a demonstrated ability to repay, as with forward mortgages, or to comply with the strict limits on reverse mortgages outlined herein, Hometap flouts or circumvents these critical regulations"
All quotes are allegations yet to be proved.
Yours is much better written, but this reminds me of a post I wrote a few months ago: We live in the luckiest timeline
In most cases, I think you'll never be sure if you actually did anything. IT and cybersecurity come to mind: did it matter that some clever intervention got an important server patched a week quicker than it would've been otherwise? Or identifying and fixing a rare scenario that could cause backups to be missed?
How can we know that these examples are real?
Roman Malov links to a Hank Green video with much more mundane examples, like rumble strips on highways. The falling automobile death rate is evidence that car interventions are doing something right, if not proof of a particular example like rumble strips. But how do I know that the Y2K problem was not overblown? If a few systems had had big disasters, I could estimate how much all the other systems had accomplished by avoiding them. But since no one had disasters, I have to consider the possibility that the problem was overblown and the effort expended on the fix wasteful.
That one’s on me for the phrasing.
What I meant to point at was more like the "ozone hole wasn't real" kind of reaction, where by saying "overblown" people imply that the problem was completely made up. I'm not trying to make a point about whether the response scale matched the risk, and I don't really have the expertise to judge.
Asking how exactly the counterfactual world would have looked is absolutely reasonable, and honestly it's a much harder question than the one I was trying to talk about. My focus was only that the full scale of possible consequences is counterfactual to us and therefore invisible.
Btw, I believe that in principle it can be estimated. Major industries currently have risk assessment systems in place, and nothing stops us from using them to analyze past near-misses. And regarding Y2K specifically: we actually do have examples of software failures cascading through infrastructure -- I was thinking specifically about the 2024 CrowdStrike incident while writing this.
I wasn't out of school back then, but I can imagine the board meeting went something like this. Boss: "It's going to cost 10mm dollars to fix this Y2K bug? Can you verify our systems crash by running a simulation?" Engineer: "You mean manually changing the computer time and seeing if our code still works? If so, we already did that, and our code failed." Boss: "Thanks! You're approved."
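For anyone who hasn't seen what such a simulation would actually trip over: a minimal toy sketch of the classic two-digit-year bug (hypothetical code, not from any real system). Systems that stored only the last two digits of the year and assumed the century was 1900 would see dates jump backwards at the rollover, breaking anything built on date arithmetic:

```python
from datetime import date

def parse_two_digit_year(yy: str) -> int:
    # Pre-Y2K storage convention: keep only two digits,
    # hard-code the century as 1900.
    return 1900 + int(yy)

def days_between(start_yy: str, end_yy: str) -> int:
    # Date arithmetic on two-digit years, as many legacy
    # billing/scheduling systems effectively did.
    start = date(parse_two_digit_year(start_yy), 1, 1)
    end = date(parse_two_digit_year(end_yy), 1, 1)
    return (end - start).days

# A record opened in '99 and checked in '00 appears to have
# run backwards by 99 years: "00" parses as 1900, so interest
# accrual, expiry checks, and schedules all break at rollover.
print(days_between("99", "00"))  # negative: 1900 precedes 1999
```

Advancing the clock past midnight on 1999-12-31, as the engineer describes, is exactly the test that surfaces this: every duration computed across the boundary comes out wildly wrong.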
Part of the reason I wrote this is the hope that the comments will actually yield a list of similar and better texts. I know they're out there somewhere.
Nothing groundbreaking, just something people forget constantly, and I’m writing it down so I don’t have to re-explain it from scratch.
The world does not just “keep working.” It keeps getting saved.
Y2K was a real problem. Computers really were set up in a way that could have broken our infrastructure, including banking, medical supply chains, etc. It didn’t turn into a disaster because people spent many human lifetimes of working hours fixing it. The collapse did not happen, yes, but it’s not a reason to think less of the people who warned about it — on the contrary. Nothing dramatic happened because they made sure it wouldn’t.
When someone looks back at this and says the problem was “overblown,” they’re doing something weird. They’re looking at a thing that was prevented and concluding it was never real.
Someone on Twitter once asked where the problem of the ozone hole had gone (in bad faith, implying that it — and many other climate problems — never really existed). Hank Green explained it beautifully: you don't hear about it anymore because it's being solved. Scientists explained the problem to everyone and found ways to counter it, countries cooperated, companies changed how they produce things. Thousands of people work for it, and they are winning.
Discussion has died down as we began to feel relatively safe. Now we can pretend that it was never serious.
You see this with AI too, already. There are people who are sure that the alignment problem is exaggerated because chatbots already care about people enough and do not give out bomb recipes. As if that were not a man-made miracle. Somehow people infer that the problem was inconsequential, not that we responded properly this one time.
Humans are wired to notice events, not non-events. People observe the post-intervention world and treat it as the baseline. Prevention is invisible.
Because of that, people who prevent bad outcomes often get treated as though they’ve done nothing, or even as though they were dramatic for worrying. Which is a pretty fucked up reward structure when you think about it.
If you work in safety (of... anything), you'll be told many times that your job is unimportant. Some people find it comforting to think that if someone succeeded, then there was never a real problem to begin with. Some are consciously tilting at windmills and assume everyone else must be too. And most people just don’t think about catastrophes, you know, unless.
It’s also psychologically harder to respect routine prevention than cinematic heroics. People love the last-minute save, and they are not taught to clap for scheduled maintenance or tedious work.
But it’s still just wrong.
Most of civilization runs on maintenance and prevention. The world is being saved constantly. The world is actively being saved from something right now. You are held by myriads of careful hands! Rejoice!
Anyway, here are a few of my takeaways:
And this one feels awkward, because I’m asking you to stop being so humble — if you yourself do that kind of work, yap about it.
Say out loud what the risk is, tell us what you did about it. Tell us what would have happened if it weren't for you.
If you don’t, people may eventually assume the danger was imaginary. If enough people assume that, they might stop funding, supporting, or doing the quiet work that keeps the floor from collapsing.
P.S. Please, let me know if someone wrote a similar thing better.
P.P.S. I was irritated by the NOTHING EVER HAPPENS meme again. I also thought about the "myriads of careful hands" metaphor and liked it.