I see three problems with your examples:
The news report you link to on the CDC study of BMI and Covid is highly misleading. The CDC report (https://www.cdc.gov/mmwr/volumes/70/wr/mm7010e4.htm) says
"the analyses in this report describe a J-shaped association between BMI and severe COVID-19, with the lowest risk at BMIs near the threshold between healthy weight and overweight in most instances".
The CNBC news report falsely states that "The agency found the risk for hospitalizations, ICU admissions and deaths was lowest among individuals with BMIs under 25". They go on to say, "It doesn’t take a lot of extra pounds to be considered overweight or obese. A 5-foot-10-inch man at 175 pounds and 5-foot-4-inch woman at 146 pounds would both be considered overweight with BMIs of just over 25", without mentioning that this BMI of 25 is actually optimal for avoiding hospitalization for all but the youngest (18-39) age group, according to the CDC study (see Figure 2).
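The "just over 25" figures CNBC quotes can be checked with the standard imperial BMI formula (703 × weight in pounds / height in inches squared); a quick sketch:

```python
# Check the BMI figures quoted by CNBC, using the standard
# imperial formula: BMI = 703 * weight_lb / height_in**2.

def bmi(weight_lb, height_in):
    """Body mass index from pounds and inches."""
    return 703 * weight_lb / height_in ** 2

# 5-foot-10-inch (70 in) man at 175 lb, 5-foot-4-inch (64 in) woman at 146 lb:
print(round(bmi(175, 70), 1))  # 25.1
print(round(bmi(146, 64), 1))  # 25.1
```

So both hypothetical people do land just over 25 — i.e., almost exactly at the BMI the CDC study found to be lowest-risk for most age groups.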
It's not clear what happened at Dodger Stadium. You say that "Far right anti-vaccine protesters blocked access to the mass vaccination site at Dodger Stadium, forcing it to shut down for a day." But if you read this account:
and try to read between the lines, it's not clear. They say that "The Los Angeles Fire Department shut the entrance to the vaccination center at Dodger Stadium about 2 p.m. as a precaution". That doesn't actually sound like the protesters prevented people from entering. It sounds more like they waved signs and shouted, and the authorities thought that they might become violent, so they shut things down (for an hour, not the whole day).
I think some of these examples are not real examples of "inadequate equilibria". They are instead situations with real switching costs, or where there are conflicting beliefs or interests.
To illustrate, the author's example of moving from proprietary to open-source journals seems like a real example to me. But their example of using Bayesian rather than frequentist statistical methods does not seem like a real example.
Note that I'm a Bayesian (though not a rabid one), and that I've taught introductory statistics. There isn't some easy way to just switch to Bayesian methods. First, students need to understand the scientific literature, and that includes the past scientific literature. So for a considerable period of time, students will need to understand frequentist statistics. This is a legacy compatibility problem, not a coordination problem. Second, there are many scientists who don't know Bayesian statistics. This is a retraining problem (which is not cost-free), not a coordination problem. Third, not everyone agrees that Bayesian methods are better. This is a persuasion problem, not a coordination problem. Fourth, Bayesian methods aren't always better - there really are problems where the correct Bayesian approach is much, much more difficult to carry out than a simple frequentist approach that usually gives mostly-correct results. So it really is necessary for at least some people to understand frequentist statistics, though it would be good if the emphasis eventually shifts in a Bayesian direction. There may be some coordination aspects to the current mess, but to a considerable extent the mess reflects real issues with real costs and benefits.
It would be great if the mouse results turn out to apply to humans as well, but I have my doubts. These doubts are based on what I thought were pretty conventional biological assumptions, but that nevertheless don't seem to be addressed in the anti-aging discussions I've seen.
The basic problem is that there's a good reason mice don't live long. Even if they didn't age, the environment in which they live means they are very likely to die in a few years from starvation or predation. So genes that keep them from aging won't be selected for, for one or both of two reasons: (1) The selective advantage of not aging, when you're likely to die young anyway, isn't enough to overcome random mutations that undo the anti-aging genes. (2) The advantage of not aging comes at some (possibly rather small) cost in terms of increased likelihood of death from predation or starvation, or decreased fecundity early in life. (For instance, it might have an energy/food cost, or might come with decreased physical performance, such as in running speed.)
Humans live in a different environment, in which slower aging is more advantageous. And indeed humans age much slower than mice, presumably because we have genes that enable various anti-aging strategies that mice lack.
So, when a drug is found to slow aging in mice, the first question in my mind would be, "is this drug enabling a mechanism that is already present in humans?"
And the default answer to this question would seem to be "yes". If there's some simple biochemical way of slowing aging, why don't humans already have this, given that slower aging in humans would give a significant selective advantage? (Even (especially?) in pre-civilizational societies, significant numbers of people die of old age rather than from violence or starvation.)
On this reasoning, one would expect that a successful anti-aging program would have to involve something complicated, not easily produced by evolution. Something like, for example, nanobots inspecting cells for damaged DNA (comparing against a consensus sequence derived from a large number of the person's cells), and killing cells that are too damaged. Or at least, if there is some relatively simple intervention that helps, one would expect it to be sufficiently subtle that it doesn't show up in mice (but only after decades of life, when selective pressure for it in humans is comparatively small).
I think you should share it with one other person. Almost any other person, as long as you have reason to think they are knowledgeable enough to understand it, and that they will take you seriously enough to listen to the idea. Since you're talking about a (small) chance of billions of deaths, they don't have to be ethical paragons - a very high fraction of people will not want that to happen, even if they're otherwise rather awful people.
This person will then be in a position to give you more useful advice, starting with whether or not your worries are at all rational. (If they're not, and are instead a sign of illness, then I guess it would be good if the person was not too unkind...)
Thanks! Very interesting.
Another possibility is better disease survival due to increased vitamin D levels from sunshine (or due to some other physiological effect of sunlight).
The effect seems rather large for this to be the explanation, but it sure would be great if a bit of sunlight is all that's needed!
Indeed. And one can come up with other lists that stem from somewhat different moral intuitions, like:
So one can see that there are plenty of things currently supported by large numbers of people that are plausibly in the "worse than Hitler" category, without even getting into possible future denigration of the colour green.
And of course the opposite of all the above might also be plausibly condemned by some future society:
There's really no substitute for making your own moral judgements. The idea that the future will always be more moral than the past seems quite false. Even if there is some slow, long-term moral progress (which I think may be the case), there are obviously significant regressions over the time scale of decades and centuries. Going by what you think (correctly) will be the moral views 20 years in the future would not have been a good thing in the Germany of 1920.
From the abstract: "The incidence of new illness compatible with Covid-19 did not differ significantly between participants receiving hydroxychloroquine (49 of 414 [11.8%]) and those receiving placebo (58 of 407 [14.3%])"
So the treatment group did have a lower incidence of illness than the control group, but the difference wasn't statistically significant. However, only 107 patients in total became ill. This is a rather small sample, so the results by no means rule out a clinically important benefit of HCQ. Even just taking the observed proportions as a best estimate, there's a 17% relative reduction in illness in the treatment group, which doesn't seem negligible, and the actual benefit could plausibly be considerably larger. (Of course, given the small sample size, it's also plausible that the real effect is in the other direction.)
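To put rough numbers on "plausibly considerably larger": here's a back-of-the-envelope relative-risk calculation from the abstract's counts, with a standard Wald 95% interval on the log relative risk (the interval is my own calculation, not something reported in the paper):

```python
from math import sqrt, log, exp

# Illness counts from the trial abstract.
a, n1 = 49, 414   # ill / total, hydroxychloroquine group
b, n2 = 58, 407   # ill / total, placebo group

# Relative risk and a Wald 95% CI computed on the log scale.
rr = (a / n1) / (b / n2)
se = sqrt(1/a - 1/n1 + 1/b - 1/n2)
lo = exp(log(rr) - 1.96 * se)
hi = exp(log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # RR = 0.83, 95% CI (0.58, 1.18)
```

An interval running from roughly 0.58 to 1.18 is compatible with anything from a ~40% reduction in illness to a modest increase, which is just the point: the trial was too small to settle the question.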