In most crises, people face a timing decision under uncertainty. You choose whether to act early or to wait, and only later does the world reveal whether the threat was real. Crossing these two dimensions (early/late × disaster/no disaster) gives four simple categories: a conceptual tool for understanding the act-early/act-late tradeoff.
In the days before the Russian full-scale invasion of Ukraine in 2022, a healthy Ukrainian man of draft age already faced some chance[1] of death or permanent injury (e.g. a car accident or cancer), but that risk was diffuse and long-term. Once martial law hit, young men were barred from leaving the country by their own government and de facto forced to fight, exposing them to significant risk on the battlefield and sharply increasing their chance of dying.
Very rough back-of-the-envelope calculations[2] using public casualty estimates suggest that, for a draft-eligible man who stayed in Ukraine, the chance of being killed or permanently disabled over the next few years may have ended up in the low single-digit percentage range. The point of this example is that early action mattered: once the state closed the borders, prioritising national defence over individual welfare, the risk of a young, healthy man dying jumped significantly, perhaps doubling or more, and the easiest way to avoid that jump was to have left early, before the borders closed.
This piece is written for people who are already interested in going beyond baseline government preparedness.
In many crises, people face a choice between acting early and acting late that goes beyond simply following government recommendations, and each option has different costs. Early action often feels socially awkward or materially costly, and may turn out to be unnecessary if the threat never materialises. Late action feels normal until suddenly it isn’t: when a disaster does unfold, late movers face the steepest costs, sometimes losing their health, homes, or lives. This creates a timing tradeoff that shows up across many types of risk.
To be explicit, the four quadrants in this early/late framework are: act late + no disaster (the most common experience), act early + no disaster (the Y2K preppers below), act late + disaster (the Joplin tornado), and act early + disaster (Gunnison in 1918).
Real disasters rarely fall neatly into one of these boxes. Someone living just outside a tornado’s path might “act early” and see nothing, while someone 50 km away sees the storm veer toward their house after they’ve already left. The point of the framework isn’t to perfectly classify every individual outcome, but to highlight a structural pattern in how timing, uncertainty, and losses interact.
If you combine a lifetime of “I waited and it was fine” with vivid stories of early actors who look foolish in hindsight, you get a gut-level bias toward acting late, even when the signals are screaming. The Joplin tornado, covered below, is what that looks like. The rest of this note walks through the four quadrants in turn, ending with a hopeful example of early action that actually mattered.
The examples I use are “regular” catastrophes with reasonably well-understood dynamics and data. They’re probably not the main contributors to overall existential risk; my preliminary analysis indicates that rare tail events — large nuclear exchanges, globally catastrophic pandemics, and interactions with advanced AI — dominate that picture. I still focus on more mundane cases here because they’re tractable, emotionally legible, and because the same timing structure likely appears—often more sharply—in those tail scenarios.
The importance of individual timing decisions may also grow if institutional early-warning capacity erodes: for example, if democratic institutions, public-health agencies, and international early-warning systems weaken.
Key take-away: The high number of false positives silently trains us to wait: we experience ‘I waited and it was fine’ thousands of times, and almost never viscerally experience the opposite.
The signals of a coming catastrophe are there, but people mostly wait, and nothing happens to them. This seems to be the most common outcome following early signals of a potential catastrophe: business as usual. But it is quietly setting us up to fail in a real emergency.
In 2009, some officials explicitly compared early H1N1 numbers to 1918. For most people in rich countries, that translated to a few alarming headlines, no major change in behaviour, and a pandemic that felt mild enough to file under “overblown scare.” Similar patterns have repeated with SARS, MERS, and Ebola for people outside the affected regions: serious experts were worried; the median person read about it, did nothing, and watched the story fade from the news.
Similarly, there have been repeated moments when nuclear war looked, at least from some expert perspectives, like a live possibility: the Cuban Missile Crisis, and later spikes in risk such as the Kargil War and the 2022 invasion of Ukraine. Similar things could be said about overdue major earthquakes. Again, each time, most people didn’t move, didn’t build a shelter, didn’t overhaul their lives. So far, for almost all of them, that “do nothing” choice has worked out.
At a smaller scale, we get the same reinforcement loop. We ignore that nagging “should I back up my data, move some savings, or see a doctor about this?” feeling, and most of the time nothing obviously bad happens. The world rarely labels these as “near misses”; it just stamps them “nothing” and moves on.
Over a lifetime, this creates a very lopsided training signal: thousands of “I waited and it was fine” experiences, and far fewer vivid “I acted early and was glad” or “I waited and deeply regretted it” examples. The issue is that if you design your preparedness thresholds using only your gut, your gut has been learning from a heavily biased sample. This would be further exacerbated if, indeed, the threats of tomorrow look different from those of the past.
Side note: a high false positive rate is probably inevitable if you want early action in rare, fast-moving crises. I say more about that in a footnote[4].
Takeaways from “act late + no disaster” experiences
Key take-away: When early action precedes a non-event, the people who acted pay real costs and often feel foolish. That experience biases everyone further against early action next time.
In the late 1990s, governments and companies scrambled to fix the “Year 2000 problem” (Y2K) — two-digit year fields that might make systems misread 2000 as 1900 and fail. Contemporary estimates put worldwide remediation spending in the hundreds of billions of dollars, and the issue was widely discussed as a potential threat to power grids, banking, telecoms, and other critical systems.
When the clocks rolled over to 1 January 2000, those fears did not show up as obvious, widespread collapse. There were documented glitches — misdated receipts, some ticketing and monitoring failures, issues in a few nuclear plant and satellite systems — but major infrastructure continued to operate, and retrospective evaluations describe “few major errors” and no systemic breakdown. From the outside, it looked to many people as if “nothing happened.”
Even before that, however, a noticeable minority of individuals had treated Y2K as a personal disaster signal and acted well ahead of any visible local failure. A national survey reported by Wired in early 1999 found that although nearly everyone had heard about Y2K, about one in five Americans (21%) said they had considered stockpiling food and water, and 16% planned to buy a generator or wood stove. Coverage at the time, as well as later summaries, notes that some people also bought backup generators, firearms, and extra cash in case of disruptions.
Long-form reporting makes the costs to early actors very concrete. One Wired feature follows Scott Olmsted, a software developer who established a desert retreat with a mobile home and freshwater well, and began building up long-life food stores. He planned to add solar panels and security measures. Taken together, this implied substantial out-of-pocket costs on top of his normal living expenses. Socially, he also paid a price: the reporter notes that “most of the non-geeks closest to Scott think he’s a little nuts,” while more hardcore survivalists criticised his setup as naïvely insufficient and too close to Los Angeles. He describes talking to friends and relatives and “getting nowhere” — too alarmed for his normal social circle, not alarmed enough for the even more extreme fringe.
Not all early actors moved to the desert. The same feature describes Paloma O’Riley, a Y2K project manager who turned down a contract extension in London, returned to the United States, and founded “The Cassandra Project,” a grassroots Y2K preparedness group. She spent much of her time organising local meetings, lobbying state officials, and building a network of community preparedness groups, while her family stockpiled roughly a six-month food supply. For her, in addition to food storage, the main costs were time, foregone income, and political capital invested in a catastrophe that, from the outside, never visibly arrived.
When Y2K finally passed with only minor disruptions, official narratives tended to emphasise successful institutional remediation, and in public memory, Y2K came to be seen as an overblown scare — a big build-up to ‘nothing.’[5] For individuals like Olmsted, O’Riley, and the fraction of the public who had stocked supplies, bought generators, or shifted cash and investments, the visible outcome was simpler: they had paid real material and social costs in a world where, to everyone around them, “nothing serious” seemed to happen.
Takeaways from Y2K early individual action
Key take-away: The biases described in the two sections above can have tragic consequences when they push people to act late in an actual disaster.
The sections above described how the flood of false positives desensitizes people; this section investigates how that desensitization leads to death when, in a minority of cases, the warning signs turn into an actual disaster:
At 1:30pm on Sunday, May 22nd, 2011, a tornado watch was issued for southwestern Missouri, including the city of Joplin. The watch was a routine, opt-in alert that many residents either didn’t receive or didn’t treat as significant. Tornado watches were common in the region, and most people continued their normal Sunday activities.
About four hours later, tornado sirens sounded across the city. Some residents moved to interior rooms, but many waited for clearer confirmation. Nationally, roughly three out of four tornado warnings don’t result in a tornado striking the warned area, and Joplin residents were used to frequent false alarms. Moreover, many people didn’t distinguish between a “watch” and a “warning,” and the most dangerous part of the storm was hidden behind a curtain of rain. From that vantage point, the situation might not have felt obviously threatening, so many people hesitated.
Seventeen minutes after the sirens, the tornado touched down. It intensified rapidly, becoming one of the deadliest in U.S. history. By the time it dissipated, it had killed around 160 people and injured more than 1,000. For anyone who delayed even briefly, the window for safe action closed almost immediately.
Takeaways from Joplin tornado
Key take-away: The sections above showed why people become desensitized and how tragic that desensitization can be in an actual disaster; this section paints a picture of hope. It shows that acting early is possible, and that it can avoid large costs when disaster actually unfolds.
A note on the role of authorities in this Gunnison example: I have tried to choose scenarios that show the dynamics for an individual. However, individual action is a fuzzy concept: a family is not an individual, nor is a group of friends. Gunnison County had roughly 8,000 residents, so we might assume the town itself had around 2,000 inhabitants. Compared to the United States as a whole, this is perhaps more akin to a neighborhood taking action than to a government. For that reason, and because the main point is the structural features rather than the number of people involved, I believe the example is relevant.
By early October 1918, major U.S. cities were being overwhelmed by the influenza pandemic. In Philadelphia, hospitals ran out of beds, emergency facilities filled within a day, and the city recorded 759 influenza deaths in a single day — more than its average weekly death toll from all causes. Reports like these illustrated how quickly local healthcare systems could be overwhelmed once the virus gained a foothold; the danger was even greater for places with far fewer resources than the large coastal cities.
While influenza was already spreading rapidly across Colorado, Gunnison itself still had almost no influenza cases. Local newspapers ran headlines like “Spanish Flu Close By” and “Flu Epidemic Rages Everywhere But Here,” noting thousands of cases and hundreds of deaths elsewhere in the state while Gunnison remained mostly untouched.
Gunnison was a small, relatively isolated mountain town, plausibly similar to many other Colorado communities with very limited medical resources and few doctors. Contemporary overviews note that the 1918 flu “hit small towns hard, many with few doctors and medical resources,” and that Gunnison was unusual in avoiding this fate by imposing an extended quarantine. Under the direction of the county physician and local officials, the town took advantage of its small population, low density, and limited transport links (source, p.72) and, despite some tension among city, county, and state officials, seems to have benefited from enough cooperation among local public agencies to implement and maintain the measures.
Historical reconstructions of so-called “escape communities” (including Gunnison) describe them as monitoring the spread of influenza elsewhere and implementing “protective sequestration” while they still had little or no local transmission. Several measures were implemented: schools and churches were closed, parties and public gatherings were banned, and barricades were erected on the main highways. Train passengers who stepped off in Gunnison were quarantined for several days, and violators were fined or jailed.
Takeaways from Gunnison’s early response
Very rough baseline mortality anchor (not Ukraine-specific): To give a concrete scale for “ordinary” mortality, suppose we have a stylised population where about 30% of men die between ages 15 and 60, and the rest survive to at least 60. That corresponds to a survival probability over those 45 years of 0.70. If we (unrealistically) assume a constant annual mortality rate r over that period, we have (1 − r)^45 = 0.70, which gives r = 1 − 0.70^(1/45) ≈ 0.8% per year. ↩︎
For illustration, take mid-range public estimates of Ukrainian military casualties, e.g. on the order of 60,000–100,000 killed and perhaps a similar magnitude of permanently disabling injuries as of late 2024. If we (very crudely) divide ~150,000–200,000 “death or life-altering injury” outcomes by a denominator of a few million draft-eligible men (say 4–8 million, depending on where you draw age and fitness boundaries), we get something like a 2–5% risk for a randomly selected draft-eligible man over the relevant period. This ignores civilian casualties, regional variation, selective mobilisation practices, and many other complications; it’s meant only as an order-of-magnitude illustration that the personal risk conditional on staying was not tiny. A more careful analysis could easily move this number around by a factor of ~2× in either direction. ↩︎
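For readers who want the arithmetic in footnotes [1] and [2] spelled out, here is a minimal Python sketch. All inputs are the rough figures quoted in those footnotes; the ~3-year horizon used to annualise the wartime figure is my own added assumption, so the output is an order-of-magnitude illustration, nothing more.

```python
# Rough order-of-magnitude arithmetic for footnotes [1] and [2].
# All inputs are the approximate figures quoted in the footnotes above.

# Footnote [1]: stylised baseline anchor.
# If 30% of men die between ages 15 and 60, survival over those 45 years is 0.70.
# Assuming a constant annual mortality rate r: (1 - r) ** 45 = 0.70.
survival_45y = 0.70
baseline_annual_risk = 1 - survival_45y ** (1 / 45)
print(f"Stylised baseline annual risk: {baseline_annual_risk:.2%}")  # ~0.79% per year

# Footnote [2]: crude cumulative risk for a draft-eligible man who stayed.
# ~150,000-200,000 "death or life-altering injury" outcomes divided by
# ~4-8 million draft-eligible men (larger denominator paired with smaller numerator).
risk_low = 150_000 / 8_000_000
risk_high = 200_000 / 4_000_000
print(f"Crude cumulative wartime risk: {risk_low:.1%} to {risk_high:.1%}")  # ~1.9% to 5.0%

# Added assumption (mine, not from the footnotes): spread that cumulative risk
# over roughly 3 years of war to get an annual figure comparable to the baseline.
years = 3
annual_low = 1 - (1 - risk_low) ** (1 / years)
annual_high = 1 - (1 - risk_high) ** (1 / years)
print(f"Implied extra annual risk: {annual_low:.1%} to {annual_high:.1%}")  # ~0.6% to 1.7%
```

Adding that extra ~0.6–1.7% per year on top of the ~0.8% stylised baseline roughly doubles to triples the annual risk, which is consistent with the “perhaps doubling or more” language in the main text, with all the caveats already listed in these footnotes. ↩︎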
Each of the topics I am not covering here is an area I have already worked on and explored to some extent, and I hope several of them will become their own follow-on pieces. Despite having gathered evidence and done some analysis on them, I’m deliberately leaving them out because this first text is narrowly focused on making the timing tradeoff intuitive before adding more complexity and exploring solutions in later pieces. ↩︎
It is worth pointing out that a high false positive rate is probably unavoidable if you want timely warnings. One main point of this text is that in the lead-up to a disaster the signals are weak, so acting early means making decisions under uncertainty. If you push the threshold for action up until you are certain, as the following example illustrates, it is often too late. The tradeoff between desensitization and sufficiently early warning is extensively discussed in academic and government circles; it is an unfortunate fact of the world and of human psychology. Some governments even set alarm thresholds high enough that they expect some deaths from warnings arriving too late: from a utilitarian view, they are minimizing total deaths across both failure modes, people desensitized by frequent false alarms who then act too late, and people who do not get the information early enough to act at all. These are dark calculations with real lives on the line. ↩︎
Some technologists argue that Y2K was a genuine near-miss, prevented by large-scale remediation. The cultural memory, however, tends to frame it as an overreaction rather than a narrowly avoided catastrophe. ↩︎