I'd map the spectrum of hyperlink usage styles between two extremes: Wikipedia on one end and everything2 on the other.
I have been pleasantly surprised to find that much writing in "Rationalist" internet spaces leans strongly toward the latter. I think it shows both a certain faith in the cleverness of one's readers and an abdication of any perceived responsibility to prioritize lack of ambiguity for all possible readers over higher accuracy and subtlety for the target audience.
And "stop trying to make me do chores for you so that I can put that time toward the things I want instead" isn't in that same goal category?
If a superintelligence could persuade anyone to let it out of the box, why would it stop there? Why wouldn't it persuade everyone to stop asking it for immortality and eternal happiness and whatnot, and instead just make us want to keep doing what we were doing?
In that case, would it want us to remember that it had ever existed?
How do we know that hasn't happened already?
I agree that driving is more concrete, and thus slightly easier to find real numbers about.
The difference in likelihood between immortality-and-resurrection ASI vs immortality-without-resurrection ASI seems to me to be smaller than the difference in likelihood between "ASI is possible" and "ASI as we imagine it is impossible for some reason we haven't discovered yet". (Taking "ASI as we imagine it" to mean a superintelligence that both can and wants to make us immortal, the "is impossible" might be as simple as its deciding that there's some watertight ethical case against immortality which we just weren't smart enough to figure out.)
I think that guesstimating an actual likelihood that an ASI which could offer immortality couldn't offer resurrection is a worthwhile exercise in reasoning about the limits of the hypothetical ASI. That structure would in turn support reasoning about the likelihood that an ASI might never exist, or that it might exist and decide that giving us eternal happiness or immortality or whatever is actually not a good idea.
I hold the impression that in car crashes, injuries are vastly more common than deaths. When I seek actual statistics on this, I'm surprised by how little reporting of non-fatal injury statistics is readily available online compared to fatal injury statistics.
Wikipedia claims that "In 2010, there were an estimated 5,419,000 crashes, 30,296 deadly, killing 32,999, and injuring 2,239,000.", but the citation leads to this page, which doesn't appear to offer injury statistics. So I'm not sure where they actually got their figure of 2.2 million injuries per 33 thousand deaths.
injuryfacts.nsc.org claims that in 2019, there were 39,107 deaths in motor-vehicle crashes, and 4.5 million "medically consulted injuries".
So if we trust either of those sources (which both claim to be derivations of NHTSA data), the rates are somewhere in the millions of injuries per tens of thousands of deaths kind of ballpark.
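Spelling that ballpark out as a quick ratio (using only the figures quoted above; the code is just scratch arithmetic, not a claim about either source's methodology):

```python
# Injury-to-death ratios implied by the two sources above.

# Wikipedia's 2010 figures (attributed to NHTSA data)
wiki_injuries, wiki_deaths = 2_239_000, 32_999
# injuryfacts.nsc.org's 2019 figures
nsc_injuries, nsc_deaths = 4_500_000, 39_107

wiki_ratio = wiki_injuries / wiki_deaths  # roughly 68 injuries per death
nsc_ratio = nsc_injuries / nsc_deaths     # roughly 115 injuries per death

print(round(wiki_ratio), round(nsc_ratio))
```

So the two sources put the rate somewhere around 70 to 115 injuries per death, depending on year and on what counts as an injury.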
The other question here is whether injuries are permanent. I expect that any injury worth recording will have at least some long-term effect on the injured person's quality of life. This is based primarily on my having had a too-minor-to-report injury in a collision over a decade ago which still causes occasional pain that I wouldn't otherwise experience, and also on conversations with others about the lasting effects of various "fully recovered" injuries. In trying to find any data about outcomes classified as disability, I see this and this making claims about injury-related disability statistics, but sadly neither appears to cite any actual studies.
So, if we assume that most injuries severe enough to report cause some long-term change to quality of life, I would indeed call getting injured "many, many times more likely than dying" when it comes to modern car accidents.
Of course, all that the more reputable of these sources show is that injuries are probably something to take seriously. For those who'd value a year of life with occasional or constant pain significantly below a year without, injuries are a more relevant concern than for those who'd value the years similarly.
That's a fascinating observation! When I introspect the same process (in my case, it might be "ask how this person's diabetic cat is doing"), I find that nothing in the model itself is shaped like a specific reminder to ask about the cat. The way I end up asking is that when there's a lull in the conversation, I scan the model for recent and important things that I'd expect the person might want to talk about, and that scan brings up the cat. My own generalizations, in turn, likely leave gaps which yours would cover, just as the opposite seems to be happening here.
Ah, that's fair. I figure sometimes people remember good jokes/memes, but if the retractions aren't quite there, they wouldn't be worth noting. Thank you for the link!
Splitting the expected outcome of a risk two ways, "life or death", often leads to unsatisfying reasoning about risk taking. Splitting it three ways, "health, life incapacitated and in agony, death", sometimes yields more satisfying explanations.
Covid illustrates this risk split especially well: while the expected risk of death from infection in a young and otherwise healthy person is relatively low, the risk of unknown and potentially extreme long-term health complications from infection is relatively high.
The incapacitation/agony possibility makes these calculations extremely subjective: some people might value a year of their own life spent bed-bound or in horrible pain as highly as a year in good health, while others might not. When calculating for others, things tend to go badly wrong if we assume that their relative values for incapacitation/agony versus health match our own (in either direction -- opponents to life-saving treatments for the disabled and opponents to right-to-die laws are both prone to this projection), but when calculating for ourselves we can more safely use our own values.
The NIH NLM errata policy says "Journals may retract or withdraw articles based on information from their authors, academic or institutional sponsor, editor or publisher, because of pervasive error or unsubstantiated or irreproducible data." NEJM's retraction list above the fold seems mainly to be "oops, used wrong facts". Science Magazine claimed in 2018 that "The number of articles retracted by journals had increased 10-fold during the previous 10 years. Fraud accounted for some 60% of those retractions".
Bearing in mind that I haven't yet cultivated the skill of assessing journals' credibility, and that I chose these examples because they looked promising early in search results, it does seem that retraction may not map to "change of mind" beyond "change of mind about whether the situation in which the science was attempted was capable of emitting valid results".
"comment on comment" sounds like a delightful part of the internet! Are there any particularly memorable examples that you'd recommend someone new to them start with to get a feel for the genre, regardless of what field they happen to be in?