This is partly a test run of how we'd all feel and react during a genuine existential risk. Metaculus currently has it as a 19% chance of spreading to billions of people, a disaster that would certainly result in many millions of deaths, probably tens of millions. Not even a catastrophic risk, of course, but this is what it feels like to be facing down a 1/5 chance of a major global disaster in the next year. It is an opportunity to understand on a gut level that this is possible: yes, real things exist which can do this to the world, and it does happen.
It's worth thinking that specific thought now because this particular epistemic situation, a 1/5 chance of a major catastrophe in the next year, will probably arise again over the coming decades. I can easily imagine staring down a similar probability of dangerously fast AGI takeoff, or a nuclear war, a few months in advance.
Well, now a few months have gone by and much has changed. The natural question to ask is: what general lessons have we learned, compared to that ‘particular epistemic situation’, now that we’re in a substantially different one? What does humanity’s response to the coronavirus pandemic so far imply about how we might fare against genuine X-risks?
At a first pass, the answer to that question seems obvious - not very well. The response of most usually well-functioning governments (I’m thinking mainly of Western Europe here) has been slow, held back by an unwillingness to commit all resources to a strategy and accept its trade-offs, and sluggish to respond to changing evidence. Advance preparation was even worse. This post gives a good summary of some of those more obvious lessons for X-risks, focussing specifically on slow AI takeoff.
As to what we ultimately blame for this slowness - Scott Alexander and Toby Ord gave as good an account as anyone (before the pandemic) in blaming a failure to understand expected value and the availability heuristic.
However, many of us predicted in advance the dynamics that would lead countries to put forward a slow and incoherent response to the coronavirus. What I want to explore now is what has changed epistemically since I wrote that comment: what has happened since that surprised many of us who have internalised the truth of civilisational inadequacy? I am looking for generalised lessons we can take from this pandemic, rather than specific things we have learnt about it in the last few months. I believe there is one such lesson, a surprising one, which I’d like to convince you of.
Underweighting Strong Reactions
My claim is that in late February/early March, many of us did overlook or underweight the possibility that many countries would eventually react strongly to coronavirus - with measures like lockdowns that successfully drove R under 1 for extended periods, or with individual action that holds R near 1 in the absence of any real government intervention. This meant we placed too much weight on coronavirus going uncontained, and were surprised when in many countries it did not.
Whether the strong reaction is fully or only partially effective remains to be seen, but the fact that this reaction occurred was surprising to many of us, relative to what we believed at the start of all this - I know that it surprised me.
I will first present the examples of predictions, some from people on LessWrong or adjacent groups and some from government scientists, which all either foretold worse outcomes than we have seen so far (more feeble results from interventions, lower compliance, or interventions never being implemented at all), or predicted bad outcomes that are not yet ruled out but now look much less likely than they did.
I will then put forward an explanation for these mistakes - something I named (in April) the ‘Morituri Nolumus Mori’ ('We who are about to die don't want to') effect, in reference to the Discworld novel The Last Hero: that most governments and individuals have a consistent, short-term aversion to danger which is stronger than many of us suspected, though not sustainable in the absence of an imminent threat. If I am correct that many of us (and also many scientists and policymakers) missed the importance of the MNM effect, it should increase our confidence that, in situations where there is some warning, fairly basic features of our psychology and institutions do get in the way of the very worst outcomes. However, the MNM effect is limited and will not help in any situation where advance planning, or responding to anything beyond immediate incentives, is required.
I consider the MNM effect to be mostly compatible with Zvi’s ‘Governments Most Places Are Lying Liars With No Ability To Plan or Physically Reason.’ (I do think that claim is too America-centric, and 'no ability to plan/reason' is hyperbole if applied to Europe or even the UK, let alone e.g. Taiwan.) The MNM effect is what we credit, rather than clever planning or reasoning, for why things aren’t as bad as they could be - the differences between e.g. America and Germany are due to having any level of planning at all, not better planning.
Things (especially in the US) are sufficiently bad right now that it is difficult to remember that many of us put significant weight on things already being worse than they currently are - but as I will show that was the case.
Some people’s initial predictions were that R would not be driven substantially below 1 for any extended period, anywhere, except with a Wuhan style lockdown. Robin Hanson seemingly claimed this on March 19: ‘So even if you see China policy as a success, you shouldn’t have high hopes if your government merely copies a few surface features of China policy.‘ In that article Hanson was clearly referring to ‘most governments’ that aren’t China as being unlikely to suppress without adopting a deep mimicry of China’s policy - including welding people into their flats and forcible case isolation. Yet, two months later there are many countries, from New Zealand to Germany, which have simply copied some but not all features of the Chinese policy while achieving initial suppression.
More recently, Hanson updated to speaking more specifically about the USA: (in response to a graphic showing several examples of successful suppression in Europe and Asia) ‘Yes, you know that other nations have at times won wars. Even so, you must decide if to choose peace or war.’ Going from ‘most western countries’ to ‘America’ counts as an optimistic update.
But mitigation measures (which Hanson calls ‘peace’) have also worked out less disastrously than our worst fears suggested because of stronger-than-expected individual action. See e.g. this article about Sweden:
Ultimately, Sweden shows that some of the worst fears about uncontrolled spread may have been overblown, because people will act themselves to stop it. But, equally, it shows that criticisms of lockdowns tend to ignore that the real counterfactual would not be business as usual, nor a rapid attainment of herd immunity, but a slow, brutal, and uncontrolled spread of the disease throughout the population, killing many people. Judging from serological data and deaths so far, it is the speed of deaths that people who warned in favour of lockdowns got wrong, not the scale.
This remark about Sweden is applicable more generally - the worst case scenario for almost every country seems to be R around 1.5 at this point - see this map from Epidemic Forecasting. True explosive spread is very rare across the world, but was being discussed as a real possibility in early March even in Europe. Again, the response is not good enough to outright reverse the unfolding disaster, but it is still strong enough to arrest explosive spread.
Focussing on the UK, which had a badly delayed response and a highly imperfect lockdown, we can see that even there R was driven substantially below 1 and hospital admissions with Covid-19 (which are the most reliable short-term proxy for infection throughout the overall pandemic) are at 13% of their peak. London did not exceed its ICU capacity despite predictions that it would from government modellers.
Another way of getting at this disconnect is to just look at the numbers and see if we still expect the same number of people to die. Wei Dai initially (1st March) predicted 190-760 million people would eventually die from coronavirus with 50% of the world infected. The more recent top-rated comment by Orthonormal points out that current evidence points against that. Good Judgment rates the probability that more than 80 million will die as 1%. A recent paper by Imperial College suggested that the Europe-wide lockdowns have so far saved 3 million lives, without accounting for the fact that deaths in an unmitigated scenario would have been higher due to a lack of intensive care beds. Regardless of what happens next, would we have predicted that in early March?
These mistakes have not been limited to the LessWrong community - one of the reasons for the aforementioned delay before the UK called the lockdown was that UK behavioural scientists advising the government were near certain that stringent lockdown measures would not be obeyed to the necessary degree and lockdowns in the rest of Europe were instead implemented ‘more for solidarity reasons’. In the end it turned out that compliance was instead ‘higher than expected’. The attitude in most of Europe in early March was that full lockdowns were completely infeasible. Then they were implemented.
Another way of getting at this observation is to note the people who have publicly recorded their surprise or shift in belief as these events have unfolded. I have written several comments with earlier versions of this claim, starting two months ago. Wei Dai notably updated in the direction of thinking coronavirus would reach a smaller fraction of the population, after reading this prescient blogpost:
The interventions of enforced social distancing and contact tracing are expensive and inevitably entail a curtailment of personal freedom. However, they are achievable by any sufficiently motivated population. An increase in transmission *will* eventually lead to containment measures being ramped up, because every modern population will take draconian measures rather than allowing a health care meltdown. In this sense COVID-19 infections are not and will probably never be a full-fledged pandemic, with unrestricted infection throughout the world. It is unlikely to be allowed to ever get to high numbers again in China for example. It will always instead be a series of local epidemics.
In a recent podcast, Rob Wiblin and Tara Kirk Sell were discussing what they had recently changed their minds about. They picked out the same thing:
Robert Wiblin: Has the response affected your views on what policies are necessary or should be prioritized for next time?
Tara Kirk Sell: The fact that “Stay-at-home orders” are actually possible in the US and seem to work… I had not really had a lot of faith in that before and I feel like I’ve been surprised. But I don’t want “Stay-at-home orders” to be the way we deal with pandemics in the future. Like great, it worked, but I don’t want to do this again.
Or this from Zvi:
5. Fewer than 3 million US coronavirus deaths: 90%
I held. Again, we saw very good news early, so to get to 3 million now we’d need full system collapse to happen quickly. It’s definitely still possible, but I’m guessing we’re now more like 95% to avoid this than 90%.
Lastly, we have the news from the current hardest-hit places, like Manhattan, which have already hit partial herd immunity and show every sign of being able to contain coronavirus going forward even with imperfect measures.
The Morituri Nolumus Mori effect
Many of these facts (in particular the reason that 100 million plus dead is effectively ruled out) have multiple explanations. For one, the earliest data on coronavirus implied the hospitalization rate was 10-20% for all age groups, and we now know it is substantially lower (see this tweet by an author of the Imperial College paper, which estimated a hospitalization rate of 4.4%). This means that if hospitals were entirely unable to cope with the number of patients, the IFR would be in the range of 2%, not the 20% initially implied.
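The arithmetic behind that revision can be sketched as follows. The untreated fatality fractions here are illustrative assumptions of mine, not figures from the paper; they simply show how the revised hospitalization rate moves the worst-case IFR from the ~20% range to the ~2% range.

```python
# Back-of-envelope for the worst-case IFR when hospitals are overwhelmed.
# Illustrative assumption: worst-case IFR ~= hospitalization rate times the
# fraction of hospital-needing patients who would die without care.

def worst_case_ifr(hospitalization_rate, untreated_fatality):
    """IFR if nobody who needs hospital care can get it."""
    return hospitalization_rate * untreated_fatality

# Early data: 10-20% hospitalization across age groups; if essentially all
# untreated severe cases die, worst-case IFR is in the 10-20% range.
early_worst = worst_case_ifr(0.20, 1.0)

# Revised data: 4.4% hospitalization; if roughly half of untreated severe
# cases die (assumed), worst-case IFR lands around 2%.
revised_worst = worst_case_ifr(0.044, 0.5)
```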
However, the rest of our information about the characteristics of the virus in early March - the estimates of R0 and ‘standard’ IFR - was fairly close to the mark. Our predictions were working off reasonable data about the virus. Any prediction made then about the number of people who would be infected isn’t affected by this hospitalization rate confounder, nor is any prediction about what measures would be implemented. So there must be some other reason for these mistakes - and a common thread among nearly all the inaccurate pessimistic predictions was that they underestimated the forcefulness, though not the level of forethought or planning, behind mitigation or suppression measures. As it is written,
"Brains don't work that way. They don't suddenly supercharge when the stakes go up - or when they do, it's within hard limits. I couldn't calculate the thousandth digit of pi if someone's life depended on it."
The Morituri Nolumus Mori effect, as a reminder, is the thesis that governments and individuals have a consistent, short-term reaction to danger which is stronger than many of us suspected, though not sustainable in the absence of an imminent threat. This effect is just such a hard limit - it can’t do very much except work as a stronger than expected brake. And something like it has been proposed as an explanation, not just by me two months ago but by Will MacAskill and Toby Ord, for why we have already avoided the worst disasters. Here’s Toby’s recent interview:
Learning the right lessons will involve not just identifying and patching our vulnerabilities, but pointing towards strengths we didn’t know we had. The unprecedented measures governments have taken in response to the pandemic, and the public support for doing so, should make us more confident that when the stakes are high we can take decisive action to protect ourselves and our most vulnerable. And when faced with truly global problems, we are able to come together as individuals and nations, in ways we might not have thought possible. This isn’t about being self-congratulatory, or ignoring our mistakes, but in seeing the glimmers of hope in this hardship.
Will MacAskill made reference to the MNM effect in a pre-coronavirus interview, explaining why he puts the probability of X-risks relatively low.
Second then, is just thinking in terms of the rational choice of the main actors. So what’s the willingness to pay from the perspective of the United States to reduce a single percentage point of human extinction whereby that just means the United States has three hundred million people. How much do they want to not die? So assume the United States don’t care about the future. They don’t care about people in other countries at all. Well, it’s still many trillions of dollars is the willingness to pay just to reduce one percentage point of existential risk. And so you’ve got to think that something’s gone wildly wrong, where people are making such incredibly irrational decisions.
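The arithmetic in that quote is easy to reproduce. The value-of-statistical-life figure below is my own assumption (roughly the US regulatory figure), not something MacAskill states:

```python
# Willingness-to-pay to cut extinction risk by one percentage point, counting
# only the present US population and ignoring the future and other countries.
us_population = 300e6
value_of_statistical_life = 10e6    # USD per life, assumed (~US regulatory figure)
risk_reduction = 0.01               # one percentage point

willingness_to_pay = us_population * value_of_statistical_life * risk_reduction
# Comes out to $30 trillion - "many trillions of dollars", as the quote says.
```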
Bill Gates also referred to this effect.
I also think that the MNM effect is the main reason why both Metaculus and superforecasters consistently predicted that deaths would stay below 10 million, implying a very slow burn, neither suppression nor full herd immunity, right across most of the world.
The Control System
Is there a possibility where R0 is exactly 1? Seems unlikely – one is a pretty specific number. On the other hand, it’s been weirdly close to one in the US, and worldwide, for the past month or two. You could imagine an unfortunate control system, where every time the case count goes down, people stop worrying and go out and have fun, and every time the case count goes up, people freak out and stay indoors, and overall the new case count always hovers at the same rate. I’ve never heard of this happening, but this is a novel situation.
One more speculative consequence of the MNM effect is that a reactive, strong push against uncontrolled pandemic spread is a good explanation for why Rt tends to approach 1 in countries without a coordinated government response, like the United States, and why the more coordinated the response, the further below 1 Rt can be pushed. A priori, we might expect that there is some ‘minimal default level’ of response that decreases Rt from its R0 of 3-4 to some much lower value - but why should that level sit at around 1? It’s not a coincidence, as Zvi points out.
Whenever something lands almost exactly on the only inflection point, in this case R0 of one where the rate of cases neither increases nor decreases, the right reaction is suspicion.
In this case, the explanation is that a control system is in play. People are paying tons of attention to when things are ‘getting better’ or ‘getting worse’ and adjusting behaviour, both legally required actions and voluntary actions.
The MNM effect is apparently so predictable that, with short-ish term feedback, it can form a control system. The other end of this control system is all the usual cognitive and institutional biases that prevent us from taking these events seriously and actually planning for them.
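A toy simulation shows how such a control system can pin R near 1 without anyone aiming for that number. All parameters here are invented for illustration, not fitted to any real data:

```python
# Toy behavioural control system: caution rises with case counts and
# suppresses transmission; when cases fall, caution relaxes again.
R0 = 3.0           # reproduction number with no behaviour change (illustrative)
sensitivity = 3.0  # how strongly caution responds to case counts (illustrative)
cases = 100.0      # new cases this period
r_history = []

for week in range(52):
    # Perceived danger grows with case counts (saturating response in [0, 1)).
    caution = 1 - 1 / (1 + sensitivity * cases / 10_000)
    r_eff = R0 * (1 - caution)
    cases *= r_eff
    r_history.append(r_eff)

# Nobody in the model targets R = 1, yet after a transient the feedback loop
# settles there: growth raises caution, which cuts transmission; decline
# relaxes caution, which restores it.
```

The fixed point is stable for any R0 above 1, which is why the exact parameter choices don't matter much: whatever the starting conditions, the loop drifts toward the case level at which R_eff = 1.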
It is possible this is the first time such a control system has formed to mitigate a widespread disaster. Disasters of this size are rare throughout history. Add to this the fact that such control systems can only form when the threat unfolds and changes over several months, giving people time to veer between incaution and caution. Meanwhile, the short-term feedback which governments and people can access about the progress of the epidemic is relatively new - better data collection and mass media make modern populations much more sensitive to the current level of threat than those throughout history. Remembering that no one knows exactly where or when the Spanish Flu began highlights that good real-time monitoring of a pandemic is an extremely new thing.
In our current situation of equilibrium created by a control system, the remaining uncertainties are: can we do better than the equilibrium position (a sociological and political question)? And how bad is the equilibrium position (mainly a matter of the disease dynamics)? It seems to me that the equilibrium probably ends in partial herd immunity (nowhere near 75% 'full herd immunity', because of MNM). This involves healthcare systems struggling to cope to some extent along the way. The US is essentially bound for equilibrium - but what that entails is not clear. I could imagine the equilibrium holding Rt near 1 even in the absence of any government foresight or planning, but it doesn’t seem very likely, as some commenters pointed out. More likely it ends with partial herd immunity.
However, there is still a push away from this equilibrium in Europe (e.g. attempts to use national-level tracing and testing programs). This push is not that strong and depends on individuals sticking to social distancing rules. European lockdowns brought Rt down to between 0.6 and 0.8, noticeably below 1, indicating that they beat the equilibrium to some degree for a while. Rt got down to 0.4 in Wuhan, suggesting great success in beating the equilibrium.
That is the other lesson - any level of government foresight or planning adds on to the already existing MNM effect - witness how foot traffic levels dramatically declined before lockdowns were instituted, or even where they were never instituted, right across the world. The effects are additive. So if the default holds Rt near 1, then a few extra actions by a government able to look some degree into the future can make all the difference.
I consider that the number of predictions that have already been falsified or rendered unlikely is sufficient to establish that the MNM effect exists, or is stronger than many of us thought early on (I don’t imagine there were many people who would have denied the MNM effect exists at all, i.e. expected us to just walk willingly to our deaths). ‘Dumb reopening', as is happening in the US as a successor to lockdowns that have pushed R to almost exactly 1, is consistent with what I have claimed - that our reliable and predictable short-term reactivity (governmental and individual) and desire to not die, the Morituri Nolumus Mori effect, serves as a brake against the very worst outcomes. What next?
Conceivably, the control system could keep running, and R could stay near 1 perpetually even with no effective planning or well-enforced lockdowns, or there could be a slow grind as the virus spreads up to a partial herd immunity threshold - either way, the MNM effect is there, screening off some outcomes that looked likely in early March, such as a single sharp peak. Similarly, the MNM effect gives a helping hand to attempts at real strategy. Some governments that are competent in the face of massive threats but slow to react (such as Germany) did better than expected because of the caution of citizens who started restricting their movements before lockdown and who now aren’t taking full advantage of reopened public spaces.
From the perspective of predicting future X-risks, the overall outcome of this pandemic is less interesting than the fact that there has been a consistent, unanticipated push from reactive actions against the spread of the virus. Then there is a further, also relevant issue of whether countries can beat the equilibrium (of R being held at near 1 or just above 1) and do better than the MNM effect mandates. So far, Europe spent a while beating equilibrium (with R during lockdown at 0.6-0.8) and China drove R down even further.
The first remaining uncertainty is: can a specific country, or the world as a whole, do better than this equilibrium position? We do have some pertinent evidence to answer this in the form of the superforecaster predictions and, though it is confounded by the next uncertainty, from disease modelling. The insights of disease modelling should shed light on the question: how bad is this equilibrium position? If we knew this we would have a better sense of what the reasonable worst case scenario is for coronavirus, but that is not important from an X-risk perspective.
This makes it clear what kinds of evidence are worth looking out for. We should look at the performance of areas of the world where there is little advance planning, but nevertheless the people are informed about the level of day-to-day danger and leaders don’t actively oppose individual efforts at safety. Parts of the United States fit the bill. Seeing the eventual outcomes in these areas, when compared to some initial predictions about just how bad things could get, will give us an idea of the extra help provided by the MNM effect. Then, with that as our baseline, we can see how many countries do better to judge the further help provided by planning or an actual strategy.
Implications for X-risks
The most basic lesson that should be learned from this disaster is, of course, that for the moment we are inadequate - unable to coordinate as long as there is any uncertainty about what to do, and unable to meaningfully plan in advance for plausible near-term threats like pandemics. We should of course remember that not enough focus is put on long-term risks, and that our institutions are flawed in dealing with them.
Covid-19 shows that there can still be a strong reaction once it is clear there is disaster coming. We have some idea already just how strong this reaction is. We have less idea how effective it will end up being. In February and March, we often observed a kind of pluralistic ignorance, where even experts raising the alarm did so in a way that was muted and seemingly aimed at ‘not causing panic’.
Robert Wiblin: I think part of what was going on was perhaps people wanted to promote this idea of “Don’t panic” because they were worried that the public would panic and they felt that the way to do that was really to talk down the risk a lot and then it kind of got a bit out of control, but I’m not sure how big the risk of… It seems like what’s ended up happening is much worse than the public panicking in January. Or maybe I just haven’t seen what happens when the public really panics. I guess people panicked later and it wasn’t that bad.
Suppose this dynamic applies in a future disaster. We might expect to see a sudden phase change from indifference to panic, despite the fact that trouble was already looming and no new information had appeared.
If there is enough forewarning before the disaster occurs that a phase shift in attitudes can take place, we will react hard. Suppose the R0 of Coronavirus had been 1.5-2, and the rest of our response had been otherwise the same - suppression measures taken in the US and elsewhere would have worked perfectly even though we were sleepwalking towards disaster as recently as three weeks before. The only reason this didn’t happen is because of contingent facts about this particular virus. On the other hand, there are magnitudes of disaster which the MNM effect is clearly inadequate for - suppose the R0 had been 8.
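The counterfactual can be made concrete. A measure that cuts transmission by some fraction turns R0 into R0 × (1 − reduction), and the classical herd immunity threshold is 1 − 1/R0. The 70% reduction below is an illustrative figure of mine, not an estimate of any real lockdown's strength:

```python
# Same intervention, different viruses: a transmission cut of `reduction`
# gives R_eff = R0 * (1 - reduction); spread is controlled when R_eff < 1.

def r_eff(R0, reduction=0.7):
    return R0 * (1 - reduction)

def herd_immunity_threshold(R0):
    return 1 - 1 / R0

# R0 ~ 1.75: the same measures crush the epidemic (R_eff well under 1).
# R0 ~ 3.5 (roughly this pandemic): borderline - control needs sustained effort.
# R0 ~ 8: the same measures fail outright (R_eff still far above 1), and herd
# immunity would require infecting nearly 90% of the population.
results = {R0: (r_eff(R0), herd_immunity_threshold(R0)) for R0 in (1.75, 3.5, 8.0)}
```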
Perhaps the MNM effect is stronger for a disaster, like a pandemic, for which there is some degree of historical memory and evolved emotions and intuitions around things like purity and disgust, which can take over and influence our risk-mitigation behaviour. Maybe technological disasters that don’t have the same deep evolutionary roots, like nuclear war, or X-risks like unaligned AGI that have literally never happened before, would not evoke the same strong, consistent reaction because the threat is even less comprehensible.
Nevertheless, one could imagine a slow AI takeoff scenario with a lot of the same characteristics as coronavirus, where the MNM effect steps in at the last moment:
It takes place over a couple of years. Every day there are slight increases in some relevant warning sign. A group of safety people raise the alarm but are mostly ignored. There are smaller-scale disasters in the run-up, but people don’t learn their lesson (analogous to SARS-1 and MERS). Major news organisations and governments announce there is nothing to worry about (analogous to initial statements about masks and travel bans). Then there is a sudden change in attitudes for no obvious reason. At some point everyone freaks out - bans and restrictions on AI development, right before the crisis hits. Or, possibly, right when it is already too late.
The lesson to be learned is that there may be a phase shift in the level of danger posed by certain X-risks - if the amount of advance warning or the speed of the unfolding disaster is above some minimal threshold, even if that threshold would seem like far too little time to do anything given our previous inadequacy, then there is still a chance for the MNM effect to take over and avert the worst outcome. In other words, AI takeoff with a small amount of forewarning might go a lot better than a scenario where there is no forewarning, even if past performance suggests we would do nothing useful with that forewarning.
More speculatively, I think we can see the MNM effect’s influence in other settings where we have consistently avoided the very worst outcomes despite systematic inadequacy - Anders Sandberg referenced something like it when he was discussing the probability of nuclear war. There have been many near misses when nuclear war could have started, implying that we can’t simply have been lucky over and over. Instead, there has been a skew towards interventions that halt disaster at the last moment, rather than before the last moment:
Robert Wiblin: So just to be clear, you’re saying there’s a lot of near misses, but that hasn’t updated you very much in favor of thinking that the risk is very high. That’s the reverse of what I expected.
Anders Sandberg: Yeah.
Robert Wiblin: Explain the reasoning there.
Anders Sandberg: So imagine a world that has a lot of nuclear warheads, so if there is a nuclear war, it’s guaranteed to wipe out humanity, and then you compare that to a world where there are a few warheads, so if there’s a nuclear war, the risk is relatively small. Now in the first dangerous world, you would have a very strong deflection. Even getting close to the state of nuclear war would be strongly disfavored because most histories close to nuclear war end up with no observers left at all.
In the second one, you get the much weaker effect, and now over time you can plot when the near misses happen and the number of nuclear warheads, and you actually see that they don’t behave as strongly as you would think. If there was a very strong anthropic effect you would expect very few near misses during the height of the Cold War, and in fact you see roughly the opposite. So this is weirdly reassuring. In some sense the Petrov incident implies that we are slightly safer about nuclear war.
On the other hand, the MNM effect requires leaders and individuals to have access to information about the state of the world right now (i.e. how dangerous things are at the moment). Even in countries with reasonably free flow of information this is not a given. If you accept Eliezer Yudkowsky’s thesis that clickbait has impaired our ability to understand a persistent, objective external world, then you might be more pessimistic about the MNM effect going forward. Perhaps for this reason, we should expect countries with higher social trust, and therefore more ability for individuals to agree on a consensus reality and understand the level of danger posed, to perform better. Japan and the countries in Northern Europe like Denmark and Sweden come to mind, and all of them have performed better than the mitigation measures employed by their governments would suggest.
The principle that I’ve called the Morituri Nolumus Mori effect is defined in terms of the map, not the territory - a place where our predictions diverged from reality in an easily and consistently describable way - that the short-term reaction from many governments and individuals was stronger than we expected, whilst advance planning and reasoning was as weak as we expected. The MNM effect may also be a feature of the territory. It may already have a name in the field of social psychology, or several names. It may be a contingent artefact of lots of local facts about only our coronavirus response, though I don’t think that’s plausible for the reasons given above. Either way, I believe that it was an important missing piece, probably the biggest missing piece, in our early predictions, and needs to be considered further if we want to refine our analysis of X-risks going forward. One of the few upsides to this catastrophe is that it has provided us with a small-scale test run of some dynamics that might play out during a genuine catastrophic or existential risk, and we should be sure to exploit that for all it's worth.