A whole paper, huh.
I am contesting the whole Extremely Online LessWrong Way™ of engaging with the world, whereby people post a lot and pontificate rather than spending all day reading the actual literature, or doing actual work.
"Unless you’d put someone vulnerable at risk, why are you letting another day of your life go by not living it to its fullest?"
As soon as you start advocating behavior changes based on associational evidence you leave the path of wisdom.
You sure seem to have a lot of opinions about statisticians being conservative in making claims, without having bothered to read up on the relevant history and why that conservatism might have developed in the field.
You can read Halpern's stuff if you want an axiomatization of something like the responses to the do-operator.
Or you can try to understand the relationship between do() and counterfactual random variables, and formulate causality as a missing data problem (whereby a full data distribution on counterfactuals and an observed data distribution on factuals are related via a coarsening process).
How is this different from just a regular imperative programming language with imperative assignment?
Causal models are just programs (with random inputs, and certain other restrictions if you want to be able to represent them as DAGs). The do() operator is just imperative assignment.
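To make the "do() is just imperative assignment" point concrete, here is a minimal sketch of a toy structural causal model written as an ordinary program (the variable names and the linear equation Y = 2X + noise are my own illustration, not from any particular paper):

```python
import random

def scm(do_x=None):
    """A toy structural causal model as a program with random inputs.

    Each line below is a structural equation; do(X = x) is implemented
    by simply overwriting X's assignment, ignoring its usual mechanism.
    """
    u_x = random.gauss(0, 1)            # exogenous noise for X
    u_y = random.gauss(0, 1)            # exogenous noise for Y
    x = u_x if do_x is None else do_x   # do(X = x) replaces this assignment
    y = 2 * x + u_y                     # Y's structural equation
    return x, y

random.seed(0)
# Observational samples: X varies with its own noise.
obs = [scm() for _ in range(10000)]
# Interventional samples: do(X = 1) fixes X regardless of u_x.
interv = [scm(do_x=1.0) for _ in range(10000)]
```

Under do(X = 1) every sample has X pinned to 1 and the average of Y lands near 2, exactly what the mutilated-graph semantics of do() prescribes; the "mutilation" is nothing more than replacing one assignment statement.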
Here are directions: https://www.instructables.com/id/The-Pandemic-Ventilator/
I think the sort of people I want to see this site will know what to do with the information on it.
Medical information on covid-19: https://emcrit.org/ibcc/covid19/
https://panvent.blogspot.com/ <- Spread this to your biomedical engineering friends, or any hobbyist who can build things. We need to ramp up ventilator capacity, now. Even if these are only 80% as good as a high-tech ventilator, if they are cheap to make they will save lives.
There's a long history of designing and building devices like these for Third World places that need them. We will need them soon, here and everywhere.
Some references to LessWrong, and to value alignment work there.
Anyone going to the AAAI ethics/safety conference?