See also my EA Forum account at https://forum.effectivealtruism.org/users/dirk and my tumblr at https://d-i-r-k-s-t-r-i-d-e-r.tumblr.com/.
But she said the same things in her original comment as in that reply, just with less detail. Nikola did reply with that, presumably because Nikola believes we're all doomed, but Nina did say in her original comment that she thinks Nikola is way overconfident about us all being doomed.
You didn't say "you didn't say your probability was <1%", you said "You should've said this in your original comment. You obviously have a very different idea of AI development and x-risk than this guy, or even most people on lesswrong." However, the fact that she has a very different perspective on AI risk than the OP or most people on lesswrong was evident because she stated as much in the original comment. (It was also clear that she didn't think superintelligence would be built within 20 years, because she said that, and that she didn't think superintelligence was likely to kill everyone, because she said that too.)
She disclosed that she disagreed both about superintelligence appearing within our lifetimes and about x-risk being high. If you missed that paragraph, that's fine, but it's not her error.
She did say this in her original comment. And it's not really similar to denying the Black Death, because the Black Death, crucially, existed.
If my parents had known in advance that I would die at ten years old, I would still prefer them to have created me.
There are actually quite a few more, though most of them feature her being isekaied elsewhere; https://glowfic.com/characters/12823?view=posts should show you ~all of them.
When I scroll down a little bit, the text in the feed preview for the 2025 Prediction Thread gets wider, so it clips off the edge of the background:
Asserting nociception as fact when that's the very thing in question is poor argumentative behavior.
Does your model account for Models Don't "Get Reward"? If so, how?
I interrogated Claude further (full conversation), and it claims that it's using a system which attaches citations to its statements, rather than inserting them manually. This seems corroborated by the fact that when I asked it to try writing them manually, all the citations went to places which directly related to the paragraphs they were attached to. Consequently, I think this is not Sonnet 4.5 hallucinating, but rather an issue with some intervening layer.
I tried that prompt myself and it didn't replicate (either time); until the OP provides a link, I think we should be skeptical of this one.