I'm a quantitative biologist with a PhD in evolutionary theory, currently working on microbiome and metagenomics data analysis and methods development. https://mikemc.cc @mikemc423 mike_mclaren
Just a note that there still seems to be disagreement and a lot of uncertainty about aerosol transmission https://www.nature.com/articles/d41586-020-00974-w
Your Update suggests you've realized this by now, but your confusion seems to stem from not understanding the difference between _droplets_ (large particles that fall to the ground within seconds) and _aerosols_ (small particles that can float for tens of minutes). The reason the emphasis is on staying 6 feet away and being careful about touching contaminated surfaces, rather than on ventilation, is that SARS-CoV2 is thought to be expelled mostly as droplets and not as aerosols. The purported contradiction disappears in this light.
Droplets are larger particles that fall to the ground within seconds, but can be expelled up to ~6 ft / 2 meters by coughing and sneezing. Droplets can also be expelled by talking. Droplets containing the virus can land directly in the face of another person, hence the recommendation to stay 6 ft away. They can also land on surfaces, whereupon these surfaces become "fomites" that can pass the virus to other people via touching.
Aerosols are smaller particles that can remain in the air for longer periods and potentially be moved around by ventilation systems. I think aerosolized virus can in principle be expelled by an infected person, but based on the reporting and scientist interviews I've heard (admittedly mostly on This Week in Virology), my understanding is that experts think SARS-CoV2 is mostly being expelled, and hence spread, as droplets and not aerosols. Aerosolization might matter more in some situations, such as certain types of high-energy toilets acting on fecally shed virus, and perhaps especially when patients are intubated in hospitals to go on ventilators. Aerosolized virus might also turn out to be more common, and to play a larger role in SARS-CoV2 transmission, than experts currently think, so it's perhaps still something to be aware of as an individual, and it no doubt warrants more research.
Regarding how long aerosols remain in the air: I am not familiar with the retracted article you mentioned, but the NYT reported that the authors of the now-famous aerosol and surface stability study said that aerosols of SARS-CoV2 stayed in the air for half an hour. The paper itself doesn't contain this half-hour figure, and the authors needed to use a rotating drum to keep the virus floating for 3 hours. My understanding is that the half-hour floating time has nothing to do with SARS-CoV2 itself and is just a consequence of physics and particle size. Which is still a long time. But the question is whether significant aerosolized virus is being produced by infected people in normal circumstances. In the above paper, the authors used a nebulizer to aerosolize live virus; it didn't happen naturally.
Note: I have not reviewed the scientific evidence that the CDC and other experts have used to draw the conclusion that droplets and contaminated surfaces are more important than aerosols for SARS-CoV2 transmission.
This ^... Another way to spot-check the "100000 cases" estimate without knowing the Wuhan numbers is to consider that it would imply roughly 1e5 / (2^4) = 6250 cases 3 weeks ago (the typical delay between infection and death, assuming a 6-day doubling time), which corresponds to 31-125 deaths by today for a case fatality rate in the interval [0.005, 0.02]. That would be for Ohio alone. As of March 13, the US CDC is reporting only 36 deaths for the country as a whole (source; though reported as 47 deaths here), and Ohio is currently reporting 0 deaths (source). This isn't a definitive argument against there being 100000 cases in Ohio, but it does suggest that the estimate wasn't based on current understanding of the virus and its spread.
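The spot-check arithmetic can be written out explicitly; all inputs are the rough assumptions stated above (the 2^4 corresponds to four 6-day doublings over the infection-to-death delay):

```python
# Back-of-envelope check of the "100000 cases in Ohio" claim,
# using only the rough assumptions from the comment above.
cases_now = 1e5                  # claimed current number of cases
doublings = 4                    # ~4 six-day doublings over the infection-to-death delay
cfr_low, cfr_high = 0.005, 0.02  # assumed case fatality rate range

cases_then = cases_now / 2**doublings  # 6250 cases when today's deaths were infected
deaths_low = cfr_low * cases_then      # ~31
deaths_high = cfr_high * cases_then    # ~125
print(f"Implied Ohio deaths by today: {deaths_low:.0f}-{deaths_high:.0f}")
```

The implied 31-125 deaths for Ohio alone is what gets compared against the ~36-47 deaths then reported nationally.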
Update: On March 13 Trevor Bedford also tweeted a rough estimate of 10K-40K cases nationally.
Comments on hospital capacity models from other threads in this post:
Other models / estimates:
This preprint from Marc Lipsitch and colleagues is relevant,
Li R, Rivers C, Tan Q, Murray MB, Toner E, Lipsitch M. 2020. The Demand for Inpatient and ICU Beds for COVID-19 in the US: Lessons From Chinese Cities. https://dash.harvard.edu/handle/1/42599304
See their Figure 1, where they plot the hospitalization rate during the Wuhan epidemic against US hospital bed capacity to give an idea of how quickly the US would be overloaded in a "Wuhan-like outbreak". They consider ICU beds (2.8 per 10000 adults), empty ICU beds (31.8% of all ICU beds), and what they call "US inpatient beds in community hospitals" (29.7 per 10000 adults). The sum of ICU and community beds comes out to ~780000 based on an adult US population of 240 million, which isn't too far off from your 924107 number.
Two things to keep in mind for working through your question about the implications of 10^6 (concurrent) cases (I see these are reiterating points Mark already made): On the one hand, most symptomatic cases will not need hospitalization. On the other hand, most hospital beds are occupied (~70% of ICU beds, which roughly agrees with Mark's 66% estimate for overall beds), so the number of available beds is much less than the total number of staffed beds.
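The capacity arithmetic from the two comments above can be reconstructed directly; the per-10000 rates and the empty-bed fraction are the figures quoted from the preprint, and 240 million adults is the assumption used above:

```python
# Rough reconstruction of the bed-capacity arithmetic, using the
# per-10000-adult rates and occupancy figures quoted above.
adults = 240e6              # assumed adult US population
icu_per_10k = 2.8           # ICU beds per 10000 adults
community_per_10k = 29.7    # community-hospital inpatient beds per 10000 adults
frac_icu_empty = 0.318      # fraction of ICU beds that are empty

icu_beds = icu_per_10k / 10000 * adults              # 67200 ICU beds
community_beds = community_per_10k / 10000 * adults  # 712800 inpatient beds
total_beds = icu_beds + community_beds               # ~780000 total staffed beds

# Only the empty beds are actually available for new patients:
empty_icu_beds = frac_icu_empty * icu_beds           # ~21370 free ICU beds
print(f"{total_beds:,.0f} total beds; {empty_icu_beds:,.0f} empty ICU beds")
```

The gap between ~780000 staffed beds and ~21000 actually-free ICU beds is the point being made about occupancy: total capacity numbers overstate what's available for a surge.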
I've heard it suggested that today's declared national state of emergency and associated funding may enable things like FEMA building field hospitals to extend hospital bed capacity.
Edit: see also this blog post by Eric Toner, one of the authors, about the above preprint, http://www.centerforhealthsecurity.org/cbn/2020/cbnreport-03132020.html
For those who are interested, the class that Uri Alon teaches to accompany this textbook is on YouTube.
Nissen et al 2016 ("Publication bias and the canonization of false facts") give a simple model for how publication bias in academic research can have a similar effect to the "information cascades" described in the OP. An experiment testing a false scientific claim will usually yield a negative result, but will sometimes (falsely) appear to confirm it. Positive results supporting a claim may be more likely to be published than negative results against it. The authors' model assumes that the credence of the scientific community in the claim is determined by the number of published positive and negative results, and that new studies will be done to repeatedly test the claim until the credence becomes sufficiently close to 0 or 1. The publication bias favoring positive results can overpower the odds against getting a positive result in any given experimental replication, and lead to false claims becoming canonized as fact with non-negligible probability.
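A minimal simulation in this spirit (not the authors' exact parameterization; the false-positive rate, power, publication probabilities, and belief thresholds below are illustrative choices):

```python
import math
import random

def simulate_canonization(p_pub_neg, alpha=0.05, power=0.8,
                          prior=0.5, upper=0.99, lower=0.01,
                          n_runs=10_000, seed=0):
    """Toy model in the spirit of Nissen et al 2016, for a FALSE claim.

    Each experiment on the false claim comes out positive with
    probability `alpha` (the false-positive rate). Positive results are
    always published; negative results are published with probability
    `p_pub_neg`. The community naively updates its belief from the
    published record as if there were no publication bias, and testing
    stops once belief crosses `upper` (canonized as fact) or `lower`
    (rejected). Returns the fraction of runs ending in canonization.
    """
    rng = random.Random(seed)
    lr_pos = math.log(power / alpha)              # log-LR of a published positive
    lr_neg = math.log((1 - power) / (1 - alpha))  # log-LR of a published negative
    canonized = 0
    for _ in range(n_runs):
        log_odds = math.log(prior / (1 - prior))
        belief = prior
        while lower < belief < upper:
            if rng.random() < alpha:              # false positive result
                log_odds += lr_pos                # positives always published
            elif rng.random() < p_pub_neg:        # negative result, maybe published
                log_odds += lr_neg
            belief = 1 / (1 + math.exp(-log_odds))
        if belief >= upper:
            canonized += 1
    return canonized / n_runs

# Stronger publication bias (lower p_pub_neg) makes false canonization
# much more likely:
print(simulate_canonization(p_pub_neg=1.0))  # no bias: rare
print(simulate_canonization(p_pub_neg=0.1))  # strong bias: common
```

With no publication bias, evidence against the false claim accumulates quickly and the claim is almost always rejected; suppressing most negative results nearly cancels the expected evidence per experiment, so a lucky run of false positives canonizes the claim surprisingly often.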
The mechanism here differs in a sense from the "information cascade" examples in the OP and on the Wikipedia page in that the false claim is being repeatedly tested with new experiments. However, I think it could be seen as fundamentally the same as the citation bias example of Greenberg 2009 in the OP, if we think of the scientific community rather than an individual scientist as being the actor. In the Greenberg 2009 example, the problem is that individual scientists tend only to cite positive findings; in the Nissen et al model, the scientific community tends to only publish positive findings. (Of course, this second problem feeds into the first.)