To the best of my knowledge there are four inaccurate but not-completely-moronic reasons for sticking with a 2-dose vaccination plan. Just to be clear: none of these arguments convincingly suggest that 2-dose will be a better method to combat the pandemic.
Anyway, the case for 1-dose is overwhelming. I just wanted to point out how otherwise intelligent people might get this question so incredibly wrong, seeing as I've run into shades of all four of these arguments in the past.
Oh, it’s so much worse than that. What happens when the central planner combines threats to those who don’t distribute all the vaccine doses they get, with other threats to those who let someone ‘jump the line’? Care to solve for the equilibrium?
You conclude that vaccination facilities will reduce their orders so they are guaranteed to be able to distribute all. I think in practice it is much easier to cook the books and/or destroy vaccines as necessary.
More pressingly, this is the first mention I've run into of the potential seriousness of the South African variant. But (perhaps for the first time since February) it really seems to be the case that "more data is needed before we can make an informed judgment on this"?
There has been previous discussion about this on LessWrong. In particular, this is precisely the focus of Why the tails come apart, if I'm not mistaken.
If I remember correctly that very post caused a brief investigation into an alleged negative correlation between chess ability and IQ, conditioning on very high chess ability (top 50 or something). Unfortunately I don't remember the conclusion.
Edit: and now I see Mo Nastri already pointed this out. Oops.
Your point on alternative hypotheses is well taken, I only mentioned the superspreader one since that was considered the main possibility for strong relative growth of one variant over another without increased infectiousness. Could you expand on the likelihood of any of these being true/link to discussion on them?
I also thought this, but was told this was not the case (though without sources). If you are right then the scaling assumption is probably close to accurate. I briefly looked for more information on this but found it too complicated to judge: for example, papers summarizing contact-tracing results in order to determine the relative importance of superspreader events have selection effects I can't untangle - in particular, the ones I saw were limited to confirmed cases, or sometimes even to confirmed cases with a known source.
EDIT: if I check microCOVID, for example, they state that the chance of catching it during a 1-hour dinner with another person who has been confirmed to have COVID is probably between 0.2% and 20%. (The relevant event risks for group spread, as opposed to personal risk evaluations, are conditional on at least one person present having COVID.) So is this interval a small chance or a large chance? I wouldn't be surprised if ~10% is sufficiently high that the linearity assumption becomes questionable, and a 1-hour dinner is far from the riskiest event people are participating in.
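To make the linearity worry concrete, here is a small sketch (my own illustrative numbers, not microCOVID's model): if each contagious contact independently transmits with probability p, the event risk is 1 - (1 - p)^n, and multiplying p by 1.7 multiplies the event risk by noticeably less than 1.7 once the baseline risk is no longer small.

```python
def event_risk(p, n):
    """Probability of at least one transmission from n independent contagious contacts."""
    return 1 - (1 - p) ** n

# Compare old vs new per-contact risk at a 10-person event.
# At p = 0.001 the ratio stays ~1.7; at p = 0.2 it drops towards 1.1.
for p in [0.001, 0.05, 0.2]:
    old = event_risk(p, 10)
    new = event_risk(1.7 * p, 10)
    print(f"p={p}: old {old:.3f}, new {new:.3f}, ratio {new / old:.2f}")
```

So for already-risky events the "70% more transmissible" figure cannot translate into 70% more event risk.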
I agree that this means particular interactions would have a larger risk increase than the 70% cited (again, or whatever average you believe in).
In the 24-minute video in Zvi's weekly summary, Vincent Racaniello makes the same point (along with many other good points), with the important additional fact that he is an expert (as far as I can tell). The problem is that this leaves us in the market for an alternative explanation of the UK data: both the absolute increase in cases and the relative growth of this particular variant as a fraction of all sequenced COVID samples. There are multiple possible but unlikely explanations, such as superspreaders, 'mild' superspreaders combined with a 'mild' increase in infectiousness, or even downright inflated numbers due to mistakes or political motives. To me all of these sound implausible, but if the biological prior on a mutation causing such extreme differences is sufficiently low, they might still be likely a posteriori explanations.
I commented something similar on Zvi's summary, but I don't know how to link to comments on posts. It has a few more links motivating the above.
I had a long discussion on this very topic, and wanted to share my thoughts somewhere. So why not here.
Disclaimer: I am not an expert on any of this.
The scaling assumption (if the new strain has an R of 1.7 when the old one has an R of 1, then we need countermeasures pulling the old one down to 0.6 to get the new one to 0.6 * 1.7 ≈ 1) is almost certainly too pessimistic an estimate, but I have no clue by how much. A lot of high-risk events (going to a concert, partying with 10+ people in a closed room for an entire night, having a multi-hour Christmas dinner with the entire family) will become less than linearly more risky. I interpreted the "70%" (after some initial confusion) as an increase in risk per event or per unit time of exposure. But if you are sharing the same air with possibly contagious people for a long period of time, your risk is already at the saturated end of the geometric distribution, and it simply can't go above 100%. So high-risk events will likely stay high-risk events.
At the same time, I expect a lot of medium and low risk events to become almost proportionally more risky. This includes events like having one or two people over for dinner while keeping the room properly ventilated, going to supermarkets, going to the office and using public transport. Something that has been bugging me is that the increase in R-value has been deduced from the actual increased rate at which it spreads, so it is simply not possible that every activity has less than 70% (or whatever number you believe in) increased risk, since that is apparently the population average under the UK lockdown level 2 conditions. So some of this nonlinearity has already been factored in, making it very difficult to say what stronger lockdowns would mean.
In conclusion, I think it is possible that even if the new variant is 70% more transmissible, lockdown conditions that would have pushed the old strain down to only 0.7 or 0.8 might be sufficient to contain this new strain - and of course if the new strain is less transmissible than this, we have even more leeway. At the same time I have absolutely no clue how to get a reliable estimate of the "old R needed".
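The arithmetic above can be sketched in a few lines. The 1.7 multiplier is the example figure used in the discussion, and the "partial saturation" discount is a made-up illustration, not an estimate:

```python
multiplier = 1.7  # assumed transmissibility advantage of the new variant

# Linear scaling: R_new = multiplier * R_old, so holding the new variant
# at R <= 1 requires pushing the old one down to 1/multiplier.
old_r_needed_linear = 1 / multiplier
print(f"linear scaling: old R must be <= {old_r_needed_linear:.2f}")

# If high-risk events are already saturated, the effective population-level
# advantage under lockdown is smaller. Suppose (purely for illustration)
# only half of the advantage survives:
effective_multiplier = 1 + 0.5 * (multiplier - 1)  # 1.35
print(f"partial saturation: old R must be <= {1 / effective_multiplier:.2f}")
```

Under the naive linear reading the old strain must be pushed below roughly 0.59; if saturation eats half the advantage, below roughly 0.74 - which is exactly why the "old R needed" is so hard to pin down.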
My father sent me this video (24 min) that makes the case for all of this being mostly a nothingburger. Or, to be more precise, he says he has only low confidence instead of moderate confidence that the new strain is substantially more infectious, which therefore means don’t be concerned. Which is odd, since even low confidence in something this impactful should be a big deal! It points to the whole ‘nothing’s real until it is proven or at least until it is the default outcome’ philosophy that many people effectively use.
I think this is a great video that explains a lot of things clearly. I'm not a biologist/epidemiologist/etc., and I found it very helpful. In particular the strong prior that "a handful of mutations typically does not lead to massive changes in reproduction rate" is a valuable insight that makes a lot of sense.
That being said, the main arguments against this new variant being a large risk seem to be:
However, personally I think the strongest case for the increased transmissibility of this new variant comes not from indirect evidence as presented above, but from the direct observation of exponential growth in the relative number of cases over multiple weeks/months. See for example the ECDC threat assessment brief or the PHE technical briefing. These seem to strongly imply that, while remaining agnostic about the mechanism, this new variant is spreading very rapidly. So all things considered the linked video makes me update only very weakly towards a lower probability of this new variant being massively transmissible - a good explanation for the growth shown in both reports is still missing if it is not inherently more transmissible.
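The standard way to read "steady relative growth" is as a roughly constant weekly gain in the log-odds of the variant's share, which translates into an R ratio between the variants. A sketch with made-up weekly fractions (shaped like the reported pattern, NOT the actual PHE/ECDC data), assuming a ~5-day generation time:

```python
import math

# Illustrative weekly shares of the new variant among sequenced samples.
weekly_share = [0.02, 0.05, 0.12, 0.25, 0.45, 0.65]

# A constant selective advantage appears as a constant slope in log-odds,
# log(f / (1 - f)), even though the raw fraction f saturates towards 1.
log_odds = [math.log(f / (1 - f)) for f in weekly_share]
slopes = [b - a for a, b in zip(log_odds, log_odds[1:])]
print("weekly log-odds gains:", [round(x, 2) for x in slopes])

# A log-odds gain of s per week with a ~5-day generation time corresponds
# roughly to an R ratio of exp(s * 5/7) between new and old variants.
s = sum(slopes) / len(slopes)
print(f"implied R(new)/R(old) ~ {math.exp(s * 5 / 7):.2f}")
```

The point is that the roughly linear log-odds trend is what pins down a transmissibility advantage while staying agnostic about the mechanism.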
Good point, I'm likely misinterpreting nextstrain website then.
I can answer this one, or more specifically PHE can. The tl;dr of this technical briefing is that the new strain tests positive on two assays (N, ORF1ab) and negative on a third (S), and that up to some noise it is currently the only strain to do so. So the number of PCR tests that are both S-negative and COVID-positive is a good indication of the spread of the new strain, without the need for genome sequencing. This document makes the argument precise, and then produces a painful graph on page 8 showing the 'S dropout' proportion at the Milton Keynes Lighthouse lab (Buckinghamshire). By mid-December it shows a proportion of over 60%.
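As I understand the proxy rule from the briefing: a sample positive on the N and ORF1ab targets but negative on the S target is flagged as likely new-variant. A toy sketch of that classification (the function and field names are my own, not from the briefing):

```python
def classify(n_pos, orf1ab_pos, s_pos):
    """Classify one PCR result using the S-gene dropout heuristic."""
    if not (n_pos and orf1ab_pos):
        return "not confirmed positive"
    return "likely new variant (S dropout)" if not s_pos else "other strain"

samples = [
    dict(n_pos=True, orf1ab_pos=True, s_pos=False),   # S dropout
    dict(n_pos=True, orf1ab_pos=True, s_pos=True),    # ordinary positive
    dict(n_pos=False, orf1ab_pos=False, s_pos=False), # negative
]
results = [classify(**s) for s in samples]
dropouts = sum(r.startswith("likely") for r in results)
positives = sum(r != "not confirmed positive" for r in results)
print(f"S-dropout share among positives: {dropouts}/{positives}")
```

The graph in the briefing is essentially this ratio tracked over time at one lab.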
This has led me to update towards the new variant being as aggressive as previously feared, because unlike genome sequencing, PCR test data does not lag several weeks behind. Combined with the fact that genome sequencing is done sporadically at best (if I understand correctly, nextstrain data shows the UK has sequenced 85 samples since September, with neighbouring countries showing similar numbers), I think it may already be more widely spread/beyond containment in a lot of European countries. Edit: Oskar Mathiasen gives a different source with incompatible numbers, so I am no longer confident in this point.
I also share shminux's fear that this more aggressive strain may be difficult to contain with just the measures we have taken so far.