Your analysis of early action + no disaster overlooks the fact that early action can prevent the disaster. But you never see the things that were prevented... because they were prevented. Early action only seems useful when it merely mitigates a disaster — that is, when it is not all that successful. Valiant failure is valorised over competent success.
Y2K is a case in point. There actually was a problem, and it was fixed.
Thanks — I agree that early action can genuinely prevent disasters, and Y2K may well be a case where large-scale remediation averted serious failures. That’s an important distinction, and I’m not trying to deny it.
Instead, I am deliberately overlooking prevention for the most part (though I can make that clearer), because the level I’m focusing on in this note is one step down from that system view: what things look like to a reasonably informed non-expert in advance, under uncertainty, before the outcome is known. For the purposes of this text, prevention would not change my conclusion. In 1998–1999, it wasn’t obvious to most people outside the remediation teams whether the Y2K fixes were sufficient or even well coordinated. Expert assessments diverged, public information was mixed, and there was no way for a layperson to “test” the fix ahead of time. Some people responded to that murky situation by preparing early.
Afterwards, when the rollover produced no visible breakdowns, it became easy to reframe Y2K as a non-event or a clean mitigation success. But foresight and hindsight operate on different information. From the point of view of a typical person in 1999, you couldn’t know whether early preparation would turn out to be prudent or would later look unnecessary — that only becomes clear after the fact. A similar pattern shows up in nuclear brinkmanship: diplomats may succeed in preventing escalation, but families deciding whether to leave Washington or New York during a crisis have to act under incomplete information. They cannot rely on knowing in advance that prevention efforts will succeed.
In that sense, I actually think your point strengthens the mechanism I’m interested in. If someone now looks back at Y2K and sees it as a mitigation success (“the system handled it”), then their lived lesson is still “I waited and it was fine; professionals took care of it.” For many others who barely tracked the details and just remember that nothing bad seemed to happen where they lived, the felt lesson is similar: “I waited and it was fine.” In either case, doing nothing personally seemed to work, regardless of what beliefs, if any, they had about why there was no disaster, and that is exactly the kind of training signal I’m worried about for future timing decisions.
So I fully agree there can be real, competent prevention at the system level. My claim is about what these episodes teach individuals making timing choices under uncertainty. I’ll make that foresight–hindsight and system–individual distinction clearer in the Y2K section so readers don’t bounce off in the way you describe. Thanks for flagging it; this comment helps me see where the draft was under-explained. And none of my examples are completely clean-cut: the Gunnison example is actual system-level prevention, though at a "near-individual" level. I think this is generally the case when trying to split actual, messy and complex parts of the world into delineated classes.
Side note: As I discuss in the note, one complication for future decisions is that institutional early-warning capacity may be weakening in some areas, while emerging technologies (especially in bio and AI) could create faster, harder-to-mitigate risks. So even if Y2K was ultimately a case where system-level remediation succeeded, that doesn’t guarantee the same dynamic will hold for future threats. But that’s a separate point from the hindsight/foresight issue you raised here.
Only skimmed, but I think you need to include COST of early action times the probability of false-alarm in the calculation.
The high number of false positives silently trains us to wait
For me, the high number of false positives loudly and correctly trains me to wait. Bayes for the win - every false alarm is evidence that my signal is noisy. As a lot of economists say, "the optimal error rate is not 0".
You’re absolutely right that, in principle, you want to think about both: how costly early action is and how often it turns out to be a false alarm. In a fully explicit model, you’d compare “how much harm do I avert if this really is bad news?” to “how often am I going to spend those costs for nothing?”
This note is deliberately staying one level up from that, and just looking at the training data people’s guts get. In everyday life, most of us accumulate a lot of “big scary thing that turned out fine” and “I waited and it was fine” stories, and very few vivid “I waited and that was obviously a huge mistake” stories.
In a world where some rare events can permanently uproot you or kill you, it can actually be fine – even optimal – to tolerate a lot of false alarms. My worry is that our intuitions don’t just learn “signals are noisy”; they slide into “waiting is usually safe”, which can push people’s personal thresholds higher than they’d endorse if they were doing the full cost–benefit tradeoff explicitly.
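To make that explicit comparison concrete, here is a minimal sketch with entirely made-up numbers (none of these figures come from the note; they only illustrate the shape of the calculation):

```python
# Minimal sketch of the explicit cost-benefit comparison mentioned above.
# All numbers are invented for illustration; they are not estimates.

def expected_net_benefit(p_disaster, harm_averted, cost_of_acting):
    """Expected net benefit of acting early rather than waiting.

    p_disaster     : probability the warning turns into a real disaster
    harm_averted   : harm avoided by acting early, if the disaster happens
    cost_of_acting : cost paid up front, whether or not the disaster happens
    """
    return p_disaster * harm_averted - cost_of_acting

# Even with a 95% false-alarm rate, early action can come out ahead
# when the averted harm dwarfs the cost of acting.
print(expected_net_benefit(p_disaster=0.05,
                           harm_averted=1_000_000,
                           cost_of_acting=5_000))  # 45000.0
```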
Note for future work:
Look at roles or institutions with explicit early-action triggers — for example nuclear early-warning / launch-on-warning systems, where early action is pre-approved and procedurally mediated because delay is irrecoverable.
Not making a claim — just flagging this in case follow-on pieces explore how early-action systems are actually set up in practice.
Prologue
Note: The Ukraine example is not making any claim about what someone ought to do in wartime (e.g. stay, fight, flee, help others, etc.). Those questions are outside the scope of this note.
Instead, I use the Ukraine case only to illustrate a simple structural point: when danger increases rapidly, the timing of action can sharply change an individual’s risk profile, whatever their values or duties happen to be.
In the days before the Russian full-scale invasion of Ukraine in 2022, a healthy Ukrainian man of draft age already faced some chance of death[1] or permanent injury (e.g. from a car accident or cancer), but that risk was diffuse and long-term. Once martial law hit, young men were barred from leaving the country by their own government and de facto forced to fight, exposing them to significant risk on the battlefield. This sharply increased their risk of dying.
Very rough back-of-the-envelope calculations[2] using public casualty estimates suggest that, for a draft-eligible man who stayed in Ukraine, the chance of being killed or permanently disabled over the next few years may have ended up in the low single-digit percentage range. The point of this example is that early action mattered: once the state closed the borders (acting to maximise national defence at the expense of individual welfare), the risk of a young, healthy man dying jumped significantly, perhaps doubling or more, and the easiest way to avoid that jump was to have left early, before the borders closed.
Scope, confidence and intended audience
What this text does
What this text does not do[4]
Future work:
More work should be done to:
Confidence statement/strength of claims
AND
[(Large military buildup near the border AND >3 independent signs of pre-strike logistics AND a major diplomatic breakdown) OR visible mobilization of nuclear-delivery systems]
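Purely as an illustration of the structure (the signal names below are hypothetical placeholders, not definitions from this note), such a composite trigger can be read as a simple boolean expression:

```python
# Illustrative sketch only: encodes the composite trigger above as a boolean
# check. The input signals are hypothetical placeholders; a real trigger would
# need careful, pre-agreed definitions of each signal.

def trigger_fires(large_buildup: bool,
                  pre_strike_logistics_signs: int,
                  diplomatic_breakdown: bool,
                  nuclear_delivery_mobilisation: bool) -> bool:
    conventional_path = (large_buildup
                         and pre_strike_logistics_signs > 3
                         and diplomatic_breakdown)
    return conventional_path or nuclear_delivery_mobilisation

# Example: buildup and diplomatic breakdown observed, but only 2 independent
# logistics signs and no visible nuclear mobilisation -> trigger does not fire.
print(trigger_fires(True, 2, True, False))  # False
```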
Example selection / research process
For this note I did a shallow search (considered 50-100 historical cases[5]) for historically vivid, reasonably well-documented cases that roughly map onto each quadrant (late/no disaster, early/no disaster, late/disaster, early/disaster). I relied on a mix of primary reporting, historical reconstructions, and a small number of academic papers. I did not try to identify the globally “cleanest” possible examples or to fully adjudicate causal debates about each case. That’s why I treat the examples as intuition-building rather than strong evidence about specific thresholds.
Who this text is for
People who are already interested in going beyond baseline government preparedness.
The tradeoff and the 4 possible outcomes
In many crises, people face a choice between acting early and acting late, and each option has different costs. Early action is often socially awkward or materially costly, and it often turns out to have been unnecessary if the threat never materialises. Late action feels normal until suddenly it isn’t; when a disaster does unfold, late movers face the steepest costs, sometimes losing their health, homes, or lives. This creates a timing tradeoff that shows up across many types of risk.
The four quadrants in this early/late framework are:
Over a lifetime, most of our salient experiences are of the first type – ‘I waited and it was fine’ – and relatively few of the others. That skewed training signal is a big part of why our guts bias us toward waiting.
One Hypothetical Scenario, Four Outcomes: Fast-Moving Wildfire With Ambiguous Early Signals
You cancel work, pull kids out of school, reorganise a custody hand-off, pack valuables, and pay for a hotel. A wind shift keeps the fire away. The financial and logistical hit is real, and people around you quietly signal that you overreacted.
Same sacrifices, but you leave while the roads are still clear. By the time the fire breaks containment, traffic is already backing up; you’re out early, breathing clean air and choosing where to go next rather than scrambling.
You stay because blowing up your day, childcare, and commitments feels disproportionate. A bit of smoke rolls in, you shut the windows, and nothing else happens. Your decision feels validated.
You delay for the same reasons. But this time the fire moves faster than forecast. Evacuation orders come when smoke is already thick, cars crawl through clogged routes, and some sections of road are hot enough to soften tyres. Your options narrow dramatically — even though the warning signs looked almost identical to the day when nothing happened.
Real disasters rarely fall neatly into one of the above four boxes. For example, the disaster/no-disaster threshold is muddy: a small wildfire can still ruin vegetation and scenery, and there are degrees of destruction where it is hard to draw the line between disaster and no disaster. The point of the framework isn’t to perfectly classify every individual outcome, but to highlight a structural pattern in how timing, uncertainty, and losses interact.
If you combine a lifetime of “I waited and it was fine” with vivid stories of early actors who look foolish in hindsight, you get a gut-level bias toward acting late — even when the signals are screaming. Joplin is what that looks like. The rest of this note walks through the four quadrants in turn, ending with a hopeful example of early action that actually mattered.
The examples I use are “regular” catastrophes with reasonably well-understood dynamics and data. They’re probably not the main contributors to overall risk to an individual going forward; my preliminary analysis is that rare tail events — large nuclear exchanges, globally catastrophic pandemics, and interactions with advanced AI — dominate that picture. My best guess is that the same timing structure appears, and often more sharply, in those tail scenarios: rare, high-impact threats where early warning is noisy, expert views diverge, and institutional mitigation (if it happens) is largely invisible to individuals. I use more mundane cases here because they’re tractable and emotionally legible, and because they can still give a decent first-pass intuition for that timing problem.
The importance of individual timing decisions may also grow if institutional early-warning capacity erodes: for example, if democratic institutions, public-health agencies, and international early-warning systems weaken.
How “nothing happened” experiences can skew us toward waiting
Key take-away: A high number of ‘nothing happened’ experiences silently trains us to wait: we experience ‘I waited and it was fine’ thousands of times, and almost never viscerally experience the opposite.
The signals of a catastrophe are there, but people mostly wait — and nothing happens to them. This seems to be the most common outcome following early signals of a potential catastrophe. It is business as usual. But it is setting us up to fail in a real emergency. This is the core asymmetry this text is about: our everyday experience overwhelmingly reinforces “wait and it’ll probably be fine,” while the cases where early action mattered are rarer and less vivid.
In 2009, some officials explicitly compared early H1N1 numbers to 1918. For most people in rich countries, that translated to a few alarming headlines, no major change in behaviour, and a pandemic that felt mild enough to file under “overblown scare.” Similar patterns have repeated with SARS, MERS, and Ebola for people outside the affected regions: serious experts were worried; the median person read about it, did nothing, and watched the story fade from the news.
Similarly, there have been repeated moments when nuclear war looked — at least from some expert perspectives — like a live possibility: the Cuban Missile Crisis, later increased risks of nuclear detonation (e.g. Ukraine invasion, or the Kargil War). Similar things could be said about overdue major earthquakes. Again, each time, most people didn’t move, didn’t build a shelter, didn’t overhaul their lives. So far, for almost all of them, that “do nothing” choice has worked out.
At a smaller scale, we get the same reinforcement loop. We ignore that nagging “should I back up my data, move some savings, or see a doctor about this?” feeling, and most of the time nothing obviously bad happens. The world rarely labels these as “near misses”; it just stamps them “nothing” and moves on.
Over a lifetime, this creates a very lopsided training signal: thousands of “I waited and it was fine” experiences, and far fewer vivid “I acted early and was glad” or “I waited and deeply regretted it” examples. The issue is that if you design your preparedness thresholds using only your gut, your gut has been learning from a heavily biased sample. This would be further exacerbated if, indeed, the threats of tomorrow look different from those of the past.
Side note: a high false positive rate is probably inevitable if you want early action in rare, fast-moving crises. I say more about that in a footnote[6].
Takeaways from “act late + no disaster” experiences
The embarrassment of preparing for Y2K makes the bias against early action worse
Key take-away: When early action precedes a non-event (regardless of whether it was competently mitigated or we just got lucky), the people who acted pay real costs and often feel foolish. That experience biases everyone further against early action next time.
In the late 1990s, governments and companies scrambled to fix the “Year 2000 problem” (Y2K) — two-digit year fields that might make systems misread 2000 as 1900 and fail. Contemporary estimates put worldwide remediation spending in the hundreds of billions of dollars, and the issue was widely discussed as a potential threat to power grids, banking, telecoms, and other critical systems.
When the clocks rolled over to 1 January 2000, those fears did not show up as obvious, widespread collapse. There were documented glitches — misdated receipts, some ticketing and monitoring failures, issues in a few nuclear plant and satellite systems — but major infrastructure continued to operate, and retrospective evaluations describe “few major errors” and no systemic breakdown. From the outside, it looked to many people as if “nothing happened.”
Even before that, however, a noticeable minority of individuals had treated Y2K as a personal disaster signal and acted well ahead of any visible local failure. A national survey reported by Wired in early 1999 found that although nearly everyone had heard about Y2K, about one in five Americans (21%) said they had considered stockpiling food and water, and 16% planned to buy a generator or wood stove. Coverage at the time, as well as later summaries, notes that some people also bought backup generators, firearms, and extra cash in case of disruptions.
Long-form reporting makes the costs to early actors very concrete. One Wired feature follows Scott Olmsted, a software developer who established a desert retreat with a mobile home and freshwater well, and began building up long-life food stores. He planned to add solar panels and security measures. Taken together, this implied substantial out-of-pocket costs on top of his normal living expenses. Socially, he also paid a price: the reporter notes that “most of the non-geeks closest to Scott think he’s a little nuts,” while more hardcore survivalists criticised his setup as naïvely insufficient and too close to Los Angeles. He describes talking to friends and relatives and “getting nowhere” — too alarmed for his normal social circle, not alarmed enough for the even more extreme fringe.
Not all early actors moved to the desert. The same feature describes Paloma O’Riley, a Y2K project manager who turned down a contract extension in London, returned to the United States, and founded “The Cassandra Project,” a grassroots Y2K preparedness group. She spent much of her time organising local meetings, lobbying state officials, and building a network of community preparedness groups, while her family stockpiled roughly a six-month food supply. For her, in addition to food storage, the main costs were time, foregone income, and political capital invested in a catastrophe that, from the outside, never visibly arrived.
When Y2K finally passed with only minor disruptions, official narratives tended to emphasise successful institutional remediation, and in public memory, Y2K came to be seen as an overblown scare — a big build-up to ‘nothing.’[7] For individuals like Olmsted, O’Riley, and the fraction of the public who had stocked supplies, bought generators, or shifted cash and investments, the visible outcome was simpler: they had paid real material and social costs in a world where, to everyone around them, “nothing serious” seemed to happen.
One complication is that Y2K may actually be a case where the early action of companies fixing software glitches prevented a disaster. Technologists argue that the underlying software issue was real and was fixed by large-scale remediation, which is why the rollover was uneventful. From a high-level, systemic point of view, that looks like it could have been successful prevention. From the perspective I care about here, though, what tends to lodge in memory is simpler: people prepared, the clocks rolled over, and nothing obviously bad happened. That phenomenology ("someone acted early and it later looked unnecessary") feeds into future intuitions regardless.
Takeaways from Y2K early individual action
When desensitisation meets a real disaster (Joplin tornado – 2011, USA)
Key take-away: The biases of the above two sections, when pushing people to act late in an actual disaster, can have tragic consequences.
Following the sections above on why people become desensitised by the flood of false positives, as well as by the updates from “failed preppers”, this section looks at how such desensitisation can contribute to fatal delays when, in the minority of cases, the warning signs do turn into an actual disaster:
At 1:30pm on May 22nd, 2011, a tornado watch was issued for southwestern Missouri, including the city of Joplin. The watch was a routine, opt-in alert that many residents either didn’t receive or didn’t treat as significant. Tornado watches were common in the region, and most people continued their normal Sunday activities.
About four hours later, sirens sounded loudly across the city. Some residents moved to interior rooms, but many waited for clearer confirmation. Nationally, roughly three out of four tornado warnings don’t result in a tornado striking the warned area, and Joplin residents were used to frequent false alarms. Moreover, many people didn’t distinguish between a “watch” and a “warning”, and the most dangerous part of the storm was hidden behind a curtain of rain. From that vantage point, the situation might not have felt obviously threatening, so many people hesitated.
Seventeen minutes after the sirens, the tornado touched down. It intensified rapidly, becoming one of the deadliest in U.S. history. By the time it dissipated, it had killed around 160 people and injured more than 1,000. For anyone who delayed even briefly, the window for safe action closed almost immediately.
Takeaways from Joplin tornado
Acting early when a disaster unfolds can dramatically reduce harm (Gunnison influenza response – 1918, USA)
Key take-away: While the above three sections showed why people become desensitised, and how tragic that desensitisation can be in an actual disaster, this section paints a picture of hope. It shows that acting early is possible, and that it can avoid large costs when disaster actually unfolds.
A note on the role of authorities in this Gunnison example: I have tried to choose scenarios showing the dynamics for an individual. However, individual action is a fuzzy concept: a family is not an individual, nor is a group of friends. With Gunnison County having ~8000 residents, we might assume the town itself had ~2000 inhabitants. Compared to the United States as a whole, this is perhaps more akin to a neighborhood taking action than to a government. As such, and because the main point is the structural features rather than the number of people, I believe this example is relevant.
By early October 1918, major U.S. cities were being overwhelmed by the influenza pandemic. In Philadelphia, hospitals ran out of beds, emergency facilities filled within a day, and the city recorded 759 influenza deaths in a single day — more than its average weekly death toll from all causes. Reports from Philadelphia and other cities illustrated how quickly local healthcare systems could be overwhelmed once the virus gained a foothold, especially in places with far fewer resources than large coastal cities.
While influenza was already spreading rapidly across Colorado, Gunnison itself still had almost no influenza cases. Local newspapers ran headlines like “Spanish Flu Close By” and “Flu Epidemic Rages Everywhere But Here,” noting thousands of cases and hundreds of deaths elsewhere in the state while Gunnison remained mostly untouched.
Gunnison was a small, relatively isolated mountain town, plausibly similar to many of the other Colorado communities with very limited medical resources and few doctors. Contemporary overviews note that the 1918 flu “hit small towns hard, many with few doctors and medical resources,” and that Gunnison was unusual in avoiding this fate by imposing an extended quarantine. Under the direction of the county physician and local officials, the town took advantage of its small population, low density, and limited transport links (source, p.72) and, despite some tension among city, county, and state officials, seems to have benefited from enough cooperation among local public agencies to implement and maintain the measures.
Historical reconstructions of so-called “escape communities” (including Gunnison) describe them as monitoring the spread of influenza elsewhere and implementing “protective sequestration” while they still had little or no local transmission. Several measures were implemented: schools and churches were closed, parties and public gatherings were banned, and barricades were erected on the main highways. Train passengers who stepped off in Gunnison were quarantined for several days, and violators were fined or jailed.
Takeaways from Gunnison’s early response
Putting the 4 quadrants together
Taken together, these four cases show why our intuitions about acting early might not be neutral. Most of what we personally live through, and most of what we hear about, looks like “I waited and it was fine,” occasionally punctuated by stories of people who acted early and later looked foolish. Direct, vivid experiences of “I waited and deeply regretted it” or “I acted early and was glad I did” are much rarer. Over time, that asymmetry quietly trains us to treat “wait and see” as the safe, reasonable default. My aim in this note is only to make that skew visible. The more speculative follow-on question of how to design preparedness setups and early-action thresholds that counteract it is work for later pieces.
Very rough baseline mortality anchor (not Ukraine-specific): To give a concrete scale for “ordinary” mortality, suppose we have a stylised population where about 30% of men die between ages 15 and 60, and the rest survive to at least 60. That corresponds to a survival probability over 45 years of 0.70. If we (unrealistically) assume a constant annual mortality rate 𝑟 over that period, we have:
(1 − r)^45 = 0.70, so r ≈ 1 − 0.70^(1/45) ≈ 0.8%
Over a 3-year period, the cumulative probability of death is then:
1 − (1 − 0.008)^3 ≈ 2.4%
This is a crude “ballpark” figure: it mixes healthy and unhealthy adults, ignores age variation, and only counts death, not permanent disability. A healthy 30-year-old’s 3-year death risk would be lower; their combined risk of “death or life-altering injury” over a lifetime would be higher. I use this only as an order-of-magnitude anchor (“a few percent over several years”), not as a precise estimate for pre-invasion Ukrainian men.
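For transparency, the arithmetic above can be reproduced in a few lines under the same stylised assumptions (this only restates the calculation; it adds no new data):

```python
# Reproduces the baseline mortality anchor above under the same stylised
# assumptions: 30% of men die between ages 15 and 60, constant annual rate.

survival_45y = 0.70                        # P(survive from age 15 to 60)
r = 1 - survival_45y ** (1 / 45)           # implied constant annual mortality rate
p_death_3y = 1 - (1 - r) ** 3              # cumulative probability over 3 years

print(f"annual rate r ~ {r:.2%}")                        # ~0.79%
print(f"3-year death probability ~ {p_death_3y:.1%}")    # ~2.4%
```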
For illustration, take mid-range public estimates of Ukrainian military casualties, e.g. on the order of 60,000–100,000 killed and perhaps a similar magnitude of permanently disabling injuries as of late 2024. If we (very crudely) divide ~150,000–200,000 “death or life-altering injury” outcomes by a denominator of a few million draft-eligible men (say 4–8 million, depending on where you draw age and fitness boundaries), we get something like a 2–5% risk for a randomly selected draft-eligible man over the relevant period. This ignores civilian casualties, regional variation, selective mobilisation practices, and many other complications; it’s meant only as an order-of-magnitude illustration that the personal risk conditional on staying was not tiny. A more careful analysis could easily move this number around by a factor of ~2× in either direction.
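The casualty-based illustration can be reproduced the same way, using only the rough public ranges quoted above (these are not my own estimates):

```python
# Order-of-magnitude illustration using the rough ranges quoted above.

outcomes_low, outcomes_high = 150_000, 200_000      # deaths + life-altering injuries
eligible_low, eligible_high = 4_000_000, 8_000_000  # draft-eligible men (rough bounds)

risk_low = outcomes_low / eligible_high     # most optimistic combination
risk_high = outcomes_high / eligible_low    # most pessimistic combination

print(f"~{risk_low:.1%} to ~{risk_high:.1%} over the period")  # ~1.9% to ~5.0%
```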
As with other categories in this piece, I am not strictly looking at single people. In most instances I am actually referring to something more like households, or perhaps even groups of friends or neighborhoods. Especially with the Gunnison example, I widened my definition by a couple of orders of magnitude to a group of ~2000 individuals. The basic point is that groups of small but perhaps quite arbitrary sizes might choose to act differently, and perhaps earlier, than government recommendations. This ties in with this piece’s focus on going beyond baseline government preparedness.
Each of the topics I am not covering is an area I have worked on and already explored to some extent, and I hope several of them will become their own follow-on pieces. So despite having gathered evidence and performed analysis, I’m deliberately not covering them here, because this first text is narrowly focused on making the timing tradeoff intuitive before adding more complexity and exploring solutions in later pieces.
The selection process was not very formalised, but I iterated a few times to find cases with easily accessible evidence online. I therefore think I did not cherry-pick too much: cases were discarded for lack of accessible documentation, not because I found that early action was impossible in them. I started with the Camp Fire, but determined there was not enough easily accessible documentation on people acting late and early (although I think this actually happened). I also felt the timeline was a bit uncertain: I could not pinpoint when someone might reasonably have picked up, e.g. via social media monitoring, that a fire had started upwind. I then looked at the Maui wildfire, but again it was hard to quickly find out what information would have been available to an early observer. I then had GPT generate a list of candidate scenarios, using this prompt:
“Could you please just search extensively and generate a ranked list of candidates? So we need something that preppers can related to (maybe festival stampede in India is not so easy for US readers), something where it got real bad, like many died. Then you need to look carefully at the timeline, was early warning documetably (like we can reference something) possible? Keep in mind this is for the piece on the tradeoff, explaining the act late - suffer badly quadrant. what questions do you have before starting? Maybe continue until you have 10 really good examples?”
That is when I identified Joplin. I will not go through in detail how Y2K and Gunnison were selected, but it was a similar search process: picking one example, understanding it, maybe discarding it, and looking at the next until I found one with verifiable characteristics (note that I think early action was probably possible in the fires above; I just could not easily find evidence of this online).
It might be worth pointing out that a high false-positive rate is likely reasonable. One main point of this text is that in the lead-up to a disaster, the signals are weak, which means that acting early requires making decisions under uncertainty. If one pushes the threshold for action until one is certain, as is illustrated in the following example, it is often too late. The tradeoff between desensitisation and sufficiently early action is extensively discussed in academic and government circles. It is an unfortunate fact of the world and of human psychology. Governments even set thresholds high enough that they expect some deaths from alarms coming too late: from a utilitarian view, they are minimising total deaths across two failure modes, people who are desensitised by frequent false alarms and therefore act too late, and people who do not receive warnings early enough to act in time. These are dark calculations with real lives on the line.
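To make that calculation concrete, here is a toy sketch; both curves are invented for illustration and bear no relation to any real agency’s model:

```python
# Toy illustration of the alert-threshold tradeoff described in this footnote.
# Both curves are invented; real agencies use empirical models, not these.

def expected_deaths(threshold):
    # A low threshold warns early but causes many false alarms, so more people
    # become desensitised and ignore real warnings; a high threshold avoids
    # false alarms but delivers warnings too late for some people to act.
    deaths_from_desensitisation = 100 * (1 - threshold)
    deaths_from_late_warnings = 100 * threshold ** 2
    return deaths_from_desensitisation + deaths_from_late_warnings

best = min((t / 100 for t in range(101)), key=expected_deaths)
print(best, expected_deaths(best))  # 0.5 75.0 -- the optimum sits between the extremes
```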
Some technologists argue that Y2K was a genuine near-miss, prevented by large-scale remediation. The cultural memory, however, tends to frame it as an overreaction rather than a narrowly avoided catastrophe.