That's terrible news! It means that on top of the meager coronavirus there's another unidentified disease overcrowding the hospitals, causing respirator shortages all over the world, and threatening to kill millions of people!
> The idea of “flattening the curve” is the worst, as it assumes a large number of infections AND a large number of virus generation AND high selective pressure
Flattening _per se_ doesn't affect the evolution of the virus much. The virus doesn't evolve on a time grid, but on an event grid, where an event is a transmission from one person to another. As long as it spreads the same number of times, it has the same number of opportunities to evolve.
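A toy sketch of the event-grid point (the per-transmission mutation probability `p_mut` is an arbitrary made-up number, just for illustration):

```python
import random

def mutations_after(transmissions, p_mut=0.01, seed=0):
    """Count mutation events along a chain of person-to-person
    transmissions. Each transmission is one opportunity to mutate,
    independent of how much calendar time the chain takes."""
    rng = random.Random(seed)
    return sum(rng.random() < p_mut for _ in range(transmissions))

# The same number of transmissions gives the same mutation
# opportunities, whether they happen over one month or over six:
unflattened = mutations_after(10_000, seed=1)  # compressed in time
flattened = mutations_after(10_000, seed=1)    # stretched out in time
assert unflattened == flattened
```

Flattening changes when the transmissions happen, not how many there are, so the mutation count is untouched.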
"Overreacting to underestimates" - great way of putting it!
Fewer waiting lines?
If you're trying to be homo economicus and maximize your expected utility, probably it's not worth it. But if you're not, you can still do it! We did (blood and tissue).
I don't see how it would explain double descent over training time. That would imply that gradient descent on neural nets first has to memorize noise in one particular way, and then further training "fixes" the weights to memorize the noise in a different way that generalizes better.
For example, the (random, meaningless) weights used to memorize noise can get spread across more degrees of freedom, so that on the test their sum will be closer to 0.
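One way to picture that last point (a toy linear sketch, not a claim about the actual mechanism): two weight vectors that memorize the same noise "label" on a training input, one concentrated in a single weight and one spread across all of them. On fresh test inputs, the output variance scales with the squared norm of the weights, so the spread solution's noise contribution is closer to 0.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t = 1000, 1.0  # number of weights, noise "label" to memorize

# Both vectors output exactly t on the all-ones training input:
concentrated = np.zeros(d)
concentrated[0] = t           # one big weight
spread = np.full(d, t / d)    # the same value spread over d weights

x_train = np.ones(d)
assert np.isclose(concentrated @ x_train, t)
assert np.isclose(spread @ x_train, t)

# On independent random test inputs, output variance is ||w||^2,
# so spreading the memorized value over more degrees of freedom
# shrinks its effect on test predictions by a factor of d:
x_test = rng.standard_normal((10_000, d))
print(np.var(x_test @ concentrated))  # about t^2 = 1
print(np.var(x_test @ spread))        # about t^2 / d = 0.001
```

The training outputs are identical, so only continued training pressure (e.g. toward smaller norms) would move the weights from the first solution to the second.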
The 5nm in "5nm scale" no longer means "things are literally 5nm in size". Rather, it's become a fancy way of saying something like "200x the linear transistor density of an old 1-micron scale chip". The gates are still larger than 5nm, it's just that things are now getting put on their side to make more room ( https://en.wikipedia.org/wiki/FinFET ). Some chip measures sure are slowing down, but Moore's law (referring to the number of transistors per chip and nothing else) still isn't one of them despite claims of impending doom due to "quantum effects" originally dating back to (IIRC) the eighties.
I know some people who (at least used to) maintain a group pool of cash to fund the preservation of whoever died first (at which point the pool would need to be refilled). So if you're unlucky enough to be the first of N people to die, you only pay 1/N of the full price; if you're lucky (last to die), you eventually pay about ln N + 0.6 times the price, but at least you get more time to earn the money. Not sure how it was all structured legally. Of course, if you're really pressed for time, it may be hard to convince other people to join such an arrangement.
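Where the ln N + 0.6 comes from, under one assumption about how the refills work (each refill is split equally among the surviving members, with the last survivor eventually paying for their own preservation alone): the last survivor is in every split, so their total is the harmonic number H_N ≈ ln N + 0.577.

```python
import math

def total_paid_by_last_survivor(n):
    # k-th preservation is split among the k people still alive at the
    # time (counting down from n to 1); the last survivor pays a share
    # of every one: 1/n + 1/(n-1) + ... + 1/2 + 1/1 = H_n.
    return sum(1 / k for k in range(1, n + 1))

n = 10
exact = total_paid_by_last_survivor(n)  # harmonic number H_10 ~ 2.93
approx = math.log(n) + 0.577            # ln n + Euler-Mascheroni ~ 2.88
print(exact, approx)
```

So for a 10-person pool the last survivor pays roughly 3x the sticker price in total, versus 0.1x for the first to die.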
Fundraisers have helped in the past: https://alcor.org/Library/html/casesummary2643.html - although that one fell quite short of the sticker price, and ultimately Alcor had to foot most of the bill anyway.
I'm pretty sure (epistemic status: Good Judgment Project Superforecaster) the "AI" in the name is pure buzz and the underlying aggregation algorithm is something very simple. If you want to set up some quick group predictions for free, there's https://tinycast.cultivatelabs.com/ which has a transparent and battle-tested aggregation mechanism (LMSR prediction markets) and doesn't use catchy buzzwords to market itself. For other styles of aggregation there's "the original" Good Judgment Inc, a spinoff from GJP which actually ran an aggregation algorithm contest in parallel with the forecaster contest (somehow no "AI" buzz either). They are running a public competition at https://www.gjopen.com/ where anyone can forecast and get scored, but if you want to ask your own questions that's a bit more expensive than Swarm. Unfortunately there doesn't seem to be a good survey-style group forecasting platform out in the open. But that's fine, TinyCast is adequate as long as you read their LMSR algorithm intro.
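For reference, the standard LMSR mechanism is simple enough to fit in a few lines. This is the textbook cost function and price rule, not a claim about TinyCast's exact implementation; the liquidity parameter `b` is arbitrary here.

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)),
    where q_i is the number of outstanding shares on outcome i."""
    m = max(qi / b for qi in q)  # shift for numerical stability
    return b * (m + math.log(sum(math.exp(qi / b - m) for qi in q)))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices = softmax(q / b). They sum to 1 and read
    directly as the market's probability estimate for each outcome."""
    m = max(qi / b for qi in q)
    exps = [math.exp(qi / b - m) for qi in q]
    s = sum(exps)
    return [e / s for e in exps]

def trade_cost(q, outcome, shares, b=100.0):
    """What a trader pays to buy `shares` of `outcome`: C(q') - C(q)."""
    q_new = list(q)
    q_new[outcome] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

q = [0.0, 0.0]               # fresh binary market
print(lmsr_prices(q))        # [0.5, 0.5]
paid = trade_cost(q, 0, 50)  # buying outcome 0 pushes its price up
q[0] += 50
print(lmsr_prices(q))        # roughly [0.62, 0.38]
```

The appeal is transparency: every trade's cost and every price is a closed-form function of the outstanding shares, with nothing proprietary to trust.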