I've been forecasting a high probability that almost all of the low case-count growth in Africa and Southeast Asia is due to limited testing.
I'm more concerned about increased rates of central nervous system impacts and cytokine storms, both of which are rare in typical COVID cases, but seem closely related to high fatality rates in the minority of cases where they occur.
It's unclear to me that you wouldn't end up with a worse clinical course in this case - perhaps you wouldn't, but I'm not sure why you'd assume it's safer.
Unfortunately, 1bn doses is likely no more than a quarter of the world's need - less if COVID is stopped in more places.
See the image here for a best estimate of the course of infection. (It matches a number of other analyses, but unfortunately doesn't have a good representation of uncertainty.)
They kept them there for long enough that this seems unlikely.
Interesting - I'd ask Robin Hanson if that fits with his variolation suggestion.
That's not quite right. I can't get to that book right now, but IIRC the measles and mumps components of MMR are also produced in chicken eggs, as are herpesviruses and poxviruses, while cell lines and other media can be used to grow other viruses - but the rest of the facilities are still similar, and can be repurposed.
But I agree that we do need new platform technologies.
This seems related to my speculations about multi-agent alignment. In short, for embedded agents, keeping the complexity of building models of other decision processes tractable either requires a reflexively consistent view of their reactions to modeling my reactions to their reactions, etc. - or it requires simplification that clearly precludes ideal Bayesian agents. I made the argument much less formally, and haven't followed the math in the post above (I hope to have time to go through it more slowly at some point.)
To lay it out here, the basic argument in the paper is that even assuming complete algorithmic transparency, in any reasonably rich action space, even games as simple as poker become completely intractable to solve. Each agent needs to simulate a huge space of possibilities for the decisions of all other agents in order to estimate the probability that each agent is in each potential position. For instance, what is the probability that they are holding a hand much better than mine and betting this way, versus that they are bluffing, versus that they have a roughly comparable-strength hand and are trying to gauge my reaction, etc.? But evaluating this requires evaluating the probability that they assign to my reacting in a given way in each condition, and so on. The regress may not be infinite, because the space of states is finite, as is the computation time, but even in such a simple world it grows too quickly to allow fully Bayesian agents within the computational capacity of, say, the physical universe.
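To make the growth rate concrete, here's a minimal sketch (my own illustration, not from the paper) of the cost of a naive depth-limited "I model you modeling me..." regress. The names `n_actions`, `depth`, and `recursive_model_cost` are hypothetical; the assumption is simply that evaluating each of my candidate actions requires simulating the opponent's full decision one level shallower.

```python
def recursive_model_cost(n_actions: int, depth: int) -> int:
    """Number of decision evaluations for a depth-limited mutual-modeling
    regress, assuming a fixed action space of size n_actions per level."""
    if depth == 0:
        return 1  # base case: evaluate the position directly
    # For each of my candidate actions, simulate the opponent's whole
    # decision process at one level shallower.
    return n_actions * recursive_model_cost(n_actions, depth - 1)

# Even a toy game blows up: with 10 actions per turn, 20 levels of
# mutual modeling already requires 10**20 evaluations.
print(recursive_model_cost(10, 20))  # → 100000000000000000000
```

The cost is just n_actions**depth, so any regress deep enough to approximate a fully Bayesian response outruns available computation almost immediately; this is the sense in which the finite regress is still intractable.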
(This is still showing as a comment, not an answer.)