SDM

Philosophy and Physics background; just finished my MSc in AI at Edinburgh University. Interested in metaethics, anthropics and technical AI Safety.

SDM's Comments

Taking Initial Viral Load Seriously

How much is the data we're currently working from influenced by high/low viral load effects? This table from Imperial College seems to contain the hospitalisation risk estimates by age that everyone has converged on: https://mobile.twitter.com/anderssandberg/status/1239923496916058112.

The data is based on adjusted results from Wuhan, which would suggest... what? I would think that under lockdown conditions you would get more in-home infections. Perhaps we are working with estimates of hospitalisation risk that already account for a large fraction of cases being high viral dose.

If there's a really large difference between high and low viral dose risk, but only half the exposures in Wuhan were high dose (as in the OP's example), then as a rough approximation you should multiply those risks by 2 if you've been exposed to a high dose.
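To spell out that rough approximation (a minimal sketch; the 50/50 split and the 'negligible low-dose risk' extreme are assumptions, and the 4% figure is just a placeholder, not a number from the Imperial College table):

```python
# Toy adjustment of a population-average hospitalisation risk for viral dose.
# Assumptions (mine, not from the Imperial College table): half of the Wuhan
# exposures were high-dose, and low-dose risk is negligible by comparison.

average_risk = 0.04       # hypothetical published hospitalisation risk for some age band
high_dose_share = 0.5     # assumed fraction of exposures that were high-dose
low_dose_risk = 0.0       # extreme case: low-dose exposure carries negligible risk

# average_risk = high_dose_share * high_dose_risk + (1 - high_dose_share) * low_dose_risk
high_dose_risk = (average_risk - (1 - high_dose_share) * low_dose_risk) / high_dose_share

print(f"High-dose risk ~ {high_dose_risk:.0%}")  # ~8%, i.e. the published figure times 2
```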

Second, one of Rob Wiblin's sources suggested that the dominant effect might not be at-home vs. outside exposure but a virtuous or vicious circle - severe illness releases more virus and is more likely to provoke severe illness in the same household, while mild illness provokes mild illness: https://m.facebook.com/story.php?story_fbid=887350766835&id=204401235&anchor_composer=false#

That story could fit some of the data we've seen, especially for doctors and care homes, but it would imply that groups of young, healthy people have much less to fear, as they mostly expose each other to mild or asymptomatic illness and don't make each other much sicker.

Hanson argued that viral load before and after lockdown was the main factor behind differing fatality rates between countries, and I agree with the OP that this probably isn't the case. As additional evidence, the death rates for under-50s seem to be more consistent between countries than those for over-50s: https://ourworldindata.org/uploads/2020/03/COVID-CFR-by-age-768x595.png. That's harder to fit with the viral load story, unless we assume older people are more sensitive to differences in viral load.

March Coronavirus Open Thread

Perhaps the numbers work out better when you include cocooning of populations that disproportionately make use of hospital resources.

March Coronavirus Open Thread

From the blog post:

If most people who need it do not have access to ventilators, which is inevitable if even a percent of the population are infected at any one time, then on the order of 4% of infected individuals will die.

I have heard '5-15%', '20%' and '12%' for hospitalization/'no-treatment fatality' rates, with the newer estimates tending to be lower. The initial figure from China was a blood-curdling 20%, as you said, while a current projection based on evidence from genuinely overwhelmed healthcare systems is a merely very bad 3-5%. That is a reduction by a larger factor than most of the corrections to the CFR that account for undocumented cases - perhaps indicating there are more undocumented cases than those corrections imply?

Also, of relevance to the UK's strategy (cocooning older people from infection), how does this break down by age? This poster has estimated that being young, male and without pre-existing conditions carries about 1/4 the risk of hospitalization (assuming a 50/50 chance that the intersection of age-30 and no pre-existing condition has a much lower risk than either alone). That means that if older and vulnerable people can be 'cocooned', the actual rate of hospitalization can be slashed again by a factor of 4 to something bearable - around 1%, if you take 4% as the baseline.
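To make that arithmetic explicit (a minimal sketch, treating the 4% baseline and the 1/4 relative risk as assumed inputs rather than established figures):

```python
# Rough sketch of the cocooning arithmetic. The 4% baseline and the 1/4 relative
# risk for young, healthy people are the assumed inputs discussed above.

baseline_hospitalisation = 0.04      # assumed overall hospitalisation rate
relative_risk_young_healthy = 0.25   # assumed: young, male, no pre-existing condition

# If the older and vulnerable groups are successfully cocooned, most infections
# happen in the low-risk group, so the effective hospitalisation rate is roughly:
effective_rate = baseline_hospitalisation * relative_risk_young_healthy
print(f"Effective hospitalisation rate ~ {effective_rate:.1%}")  # ~1%
```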

(Note that the corrections in this paper for delay to death and underreporting skew the death rates even more strongly towards older patients, with the fatality rate among 20-29 year olds barely changing after adjustment but the fatality rates among the over-60s doubling.)

That means you could surf a wave of a few hundred thousand people having the virus at a time and still provide adequate ICU space. With some expansion in capacity, that could be even higher.

March Coronavirus Open Thread
If it's infeasible to literally stamp it out everywhere (which I've heard), then you basically want to either delay long enough to have a vaccine

South Korea, Singapore, Italy

or have people get sick at the largest rate that the healthcare system can handle.

The UK.

We're running an interesting experiment to see which approach works. One potential benefit is that the world will be able to observe which of the two strategies is viable and, at least in theory, switch between them. In practice, switching from 'suppress/contain' to 'flatten the curve' seems a lot more feasible than trying to suppress after not having taken tough measures early, which is what the UK will have to do if its strategy lets cases grow out of control. South Korea could still use curve-flattening as a backup plan.

However, for the reason given in the blog post, suppression will be a viable backup even if switching from curve-flattening to suppression is intrinsically harder than the other way round.

The interventions of enforced social distancing and contact tracing are expensive and inevitably entail a curtailment of personal freedom. However, they are achievable by any sufficiently motivated population. An increase in transmission *will* eventually lead to containment measures being ramped up, because every modern population will take draconian measures rather than allow a healthcare meltdown. In this sense COVID-19 infections are not, and will probably never be, a full-fledged pandemic with unrestricted infection throughout the world. It is unlikely to be allowed to ever reach high numbers again in China, for example. It will always instead be a series of local epidemics.
Estimated risk of death by coronavirus for a healthy 30 year old male ~ 1/190

Assuming that he read your comment and the comments of people on his FB saying similar things, I think Rob is confident that aggressive testing and social distancing measures (which have already arrested the spread in at least 3 countries!), along with expansion of capacity (already happening w.r.t. masks!), will ensure that we get sort-of-adequate access to healthcare even if things are somewhat overwhelmed, as in Wuhan - so doubling or 5x-ing their mortality rate is a better guide to what is likely to happen than guesstimating based on no treatment.

Estimated risk of death by coronavirus for a healthy 30 year old male ~ 1/190
Why are you skeptical about social distancing? It's working in Hong Kong... When thousands are dying each day, there would be a lot of political will for drastic measures, right?

I think you're right about social distancing working, and if you live in a country that has the capacity to mount an effective response, I'd probably put p(treatment) = 0.7 or so. Remember we won't see any effect from what Italy has just done for at least a week because of incubation, and if it doesn't work they'll just keep escalating the isolation and quarantine, and we know that a high enough response works (China, South Korea).

Also, I think there's a better than even chance that the risk for someone who is both young and has no preexisting conditions is much lower than the risk for either group alone - since the absolute numbers of young people in a lot of those studies were low.

Also also, and maybe the OP took this into account, the corrections for delay to death and underreporting skew the death rates even more strongly towards older patients.

I wouldn't discount the possibility of a saving throw in the case of the virus approaching its natural attack rate - massive mobilization to provide at least basic medical care (oxygen) on a huge scale. The UK government has floated ideas that sound a lot like that (field hospitals outside cities), and there has already been a colossal expansion in the production of protective gear in China. So I would put p(Treatment | infection) at 0.2 or so if you live in the UK or somewhere similar.
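To show how much the bottom line depends on that treatment probability, here's a purely illustrative calculation - none of these numbers are the OP's, they're placeholders chosen only to show the sensitivity:

```python
# Illustrative sketch of how p(treatment | infection) feeds into an overall
# risk-of-death estimate like the OP's. Every number here is a placeholder
# assumption, not a figure from the OP's model.

p_infection = 0.3            # assumed chance of catching the virus at all
p_severe = 0.02              # assumed chance a healthy 30-year-old needs critical care
fatality_if_treated = 0.2    # assumed fatality rate for severe cases with ICU care
fatality_if_untreated = 0.8  # assumed fatality rate for severe cases without it

for p_treatment in (0.2, 0.7):   # the two scenarios discussed above
    p_death = p_infection * p_severe * (
        p_treatment * fatality_if_treated
        + (1 - p_treatment) * fatality_if_untreated
    )
    print(f"p(treatment) = {p_treatment}: p(death) ~ 1/{round(1 / p_death)}")
```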

Finally, and possibly for the above reasons, Rob Wiblin estimated a probability for the same outcome around 1/10th as high as the OP's, here and again here.

What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world?

In other words, a CFR of 1-6%, with the lowest value overlapping with estimates being put out by governments right now.

EDIT: I just read Scott's new post on the subject and he's confused by that 10-20% figure as well.

So the graph above implies that every demographic has approximately equal hospitalization rates, which other sources suggest are 15% to 20%.
This is a weird pattern – why are so many young people getting hospitalized if almost none of them die? Either the medical system is serving these people really well (ie they would die if they didn’t go the hospital, but everyone does make it to the hospital, and the hospital saves everyone who goes there), they are being hospitalized unnecessarily (ie they would live even if they didn’t go the hospital, but they do anyway), or it’s statistical shenanigans (eg most statistics are collected at the hospital, so it looks like everybody goes to the hospital).
Are these an overestimate? Maybe most cases never come to the government’s attention? There’s some evidence for this.
Seeing the Smoke

This is partly a test run of how we'd all feel and react during a genuine existential risk. Metaculus currently has it at a 19% chance of spreading to billions of people - a disaster that would certainly result in many millions of deaths, probably tens of millions. Not even a catastrophic risk, of course, but this is what it feels like to be facing down a 1/5 chance of a major global disaster in the next year. It is an opportunity to understand on a gut level that this is possible - yes, real things exist which can do this to the world, and it does happen.

It's worth thinking that specific thought now because this particular epistemic situation, a 1/5 chance of a major catastrophe in the next year, will probably arise again over the coming decades. I can easily imagine staring down a similar probability of dangerously fast AGI takeoff, or a nuclear war, a few months in advance.

Response to Oren Etzioni's "How to know if artificial intelligence is about to destroy civilization"

The sad/good thing is that this article represents progress. I recall that in *Human Compatible*, Stuart Russell said there was a joint declaration from some ML researchers that AGI is completely impossible, and it's clear from this article that Oren is at least thinking about it as a real possibility that isn't hundreds of years away. Automatically forming learning problems sounds a lot like automatically discovering actions, which is something Stuart Russell also mentioned in a list of necessary breakthroughs to reach AGI, so maybe there's some widespread agreement about what is still missing.

That aside, even by some of Oren's own metrics we've made quite substantial progress. He mentions the Winograd schemas as a good test of when we're approaching human-like language understanding and common sense, but what he may not know is that GPT-2 actually bridged a significant fraction of the gap in Winograd schema performance between the best existing language models and humans - from 63% to 71%, with humans at about 92% accuracy according to DeepMind. That's a good object lesson in how the speed of progress can surprise you.
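For concreteness, the fraction of the model-human gap that closed, taking those accuracy figures at face value:

```python
# Fraction of the Winograd schema gap between prior language models and humans
# that GPT-2 closed, using the accuracy figures quoted above.
prior_sota, gpt2, human = 0.63, 0.71, 0.92
gap_closed = (gpt2 - prior_sota) / (human - prior_sota)
print(f"Gap closed ~ {gap_closed:.0%}")  # roughly a quarter of the remaining gap
```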

Will AI undergo discontinuous progress?

This is something I mentioned in the last section - if there is a significant lead time (on the order of years), then it is still totally possible for a superintelligence to appear out of nowhere and surprise everyone, even given the continuous progress model. The difference is that with discontinuous progress that outcome is essentially guaranteed, so discontinuities are informative because they give us good evidence about what takeoff speeds are possible.

Like you say, if there are no strong discontinuities we might expect lots of companies to start working hard on AIs with capability enhancement/recursive improvement. But the first AI with anything like those abilities will be the one made the quickest, so it likely isn't very good at self-improvement and gets poor returns on optimization, and the next one that comes out is a little better. (I didn't discuss the notion of Recalcitrance in Bostrom's work, but we could model this setup as each new self-improving AI design having a shallower and shallower Recalcitrance curve.) That makes progress continuous even with rapid capability gain. Again, if that's not going to happen it will be either because one project goes quiet while it gets a few steps ahead of the competition, or because there is a threshold below which improvements 'fizzle out' and don't generate returns, but adding one extra component takes you over that threshold and returns on investment explode - which takes you back to the conceptual question of whether intelligence has such a threshold built in. A toy sketch of the Recalcitrance setup follows below.
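Here is a toy model of that 'shallower recalcitrance curve' picture, using Bostrom's rate-of-improvement = optimization power / recalcitrance framing; the functional forms and constants are purely illustrative assumptions, not anything from Bostrom:

```python
# Toy model: rate of improvement = optimization power / recalcitrance (Bostrom's framing).
# Each successive project starts slightly more capable and faces a shallower
# recalcitrance curve, so gains speed up without any discontinuous jump.
# All functional forms and constants are illustrative assumptions.

def simulate(initial_capability, recalcitrance_slope, steps=100, dt=0.1):
    """Crudely integrate dI/dt = I / R(I), where recalcitrance R grows with capability I."""
    capability = initial_capability
    for _ in range(steps):
        recalcitrance = 1.0 + recalcitrance_slope * capability
        capability += dt * capability / recalcitrance
    return capability

for project, (start, slope) in enumerate([(1.0, 2.0), (1.2, 1.0), (1.5, 0.5)], start=1):
    print(f"Project {project}: capability {start} -> {simulate(start, slope):.2f} after 100 steps")
```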
