I've talked to some people who locked down pretty hard pretty early; I'm not confident in my understanding but this is what I currently believe.
I think characterizing the initial response as over-the-top, as opposed to sensible in the face of uncertainty, is somewhat the product of hindsight bias. In the early days of the pandemic, nobody knew how bad it was going to be. It was not implausible that the official case fatality rate for healthy young people was a massive underestimate.
I don't think our community is "hyper-altruistic" in the Strangers Drowning sense, but we do put a lot of emphasis on being the kinds of people who are smart enough not to pick up pennies in front of steamrollers, and on not trusting the pronouncements of officials who aren't incentivized to do sane cost-benefit analyses. And we apply that to altruism as much as anything else. So when a few people started coordinating an organized response, and used a mixture of self-preservation-y and moralize-y language to try to motivate people out of their secure-civilization-induced complacency, the community listened.
This doesn't explain why everyone didn't ease up on restrictions once the epistemic Wild West of February and March gave way to the new normal later in the year. That seems more like a genuine failure on our part. I think I prefer Raemon's explanation from this subthread: the concentrated attention that was required to make the initial response work turned out to be a limited resource, and it had been exhausted. By the time it replenished, there was no longer a Schelling event to coordinate around, and the problems no longer seemed so urgent to the people doing the coordinating.
Docker is not a security boundary.
Eh, if you read the raw results most are pretty innocuous.
Not at the scale that would be required to power the entire grid that way. At least, not yet. This is of course just one study (h/t Vox via Robert Wiblin), but it provides at least a rough picture of the scale of the problem.
I feel obligated to link to my house's Petrov Day "Bad/X-risk Future" candle.
Cross-posting from Facebook:
Any policy goal that is obviously part of BLM's platform, or that you can convince me is, counts. Police reform is the obvious one but I'm open to other possibilities.
It's fine for "heretics" to make suggestions, at least here on LW where they're somewhat less likely to attract unwanted attention. Efficacy is the thing I'm interested in, with the understanding that the results are ultimately to be judged according to the BLM moral framework, not the EA/utilitarian one.
Small/limited returns are okay if they're the best that can be done. Time preference is moderately high (because that matches my assessment of the BLM moral framework) but still limited.
Suggestions from non-Americans are fine.
It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes.
I don't have a strong opinion about the CFAR case in particular, but in general, I think this impression is pretty much what happens by default in organizations, even when the people running them are smart and competent and well-meaning and want to earn the community's trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it well you have to allocate a lot of skill points to it, which means allocating them away from the organization's core competencies. I've reached the point where I no longer find even gross failures of this kind surprising.
(I think you already appreciate this but it seemed worth saying explicitly in public anyway.)
The organizer wound up posting their own event: https://www.lesswrong.com/events/ndqcNdvDRkqZSYGj6/ssc-meetups-everywhere-1
This looks like a duplicate.
Nit: I think this game is more standardly referred to in the literature as the "traveler's dilemma" (Google seems to return no relevant hits for "almost free lunches" apart from this post).