On the contrary, I would expect the amor fati people to get normal prophecies, like, "you will have a grilled cheese sandwich for breakfast tomorrow," "you will marry Samantha from next door and have three kids together," or "you will get a B+ on the Chemistry quiz next week," while the horrible contrived destinies come to those who would take roads far out of their way to avoid them.
I can think of several prominent predictions in the present of similar magnitude.
The difference you're talking about might be simply due to you discounting these as insane (or maybe just disingenuous) while hailing analogous predictions in the past as wise/prescient.
“Death gives life meaning.”
A fun thing you can do is to say this line after events like natural disasters or mass murders. I'm hopeful that if it catches on as an ironic meme, people will come to realize that the line, and the deathist sentiment that originally spawned it unironically, ought to be seen as no less obscene in any other context.
equipment they’re automating would be capable of producing viruses (saying that this equipment is a normal thing to have in a bio lab
This seems to fall into the same genre as "that word processor can be used to produce disinformation," "that image editor can be used to produce 'CSAM'," and "the pocket calculator is capable of displaying the number 5318008."
you can engage with journalists while holding to rationalist principles to only say true things.
Suppose there was a relatively simple computer program, say a kind of social media bot, that, when you input a statement, posts the opposite of that statement. Would you argue that, as long as you only type true statements yourself, using this program doesn't constitute lying?
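To make the thought experiment concrete, here is a minimal sketch of such a bot. The function name and the crude string-wrapping negation are my own illustrative assumptions; the point is only that the operator types true statements while the program emits their negations.

```python
def contrarian_bot(statement: str) -> str:
    """Post the negation of whatever statement is typed in.

    A toy stand-in for the hypothetical bot: it wraps the input in
    an explicit negation rather than doing any real language parsing.
    """
    return f"It is not the case that {statement.rstrip('.')}."

# The operator only ever types true statements...
post = contrarian_bot("the Earth orbits the Sun")
print(post)  # ...but what gets posted is their negation.
```

The operator's keystrokes are all true; the published output is systematically false, which is the intuition the question is probing.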
This line of reasoning leads to Richelieu's six lines ("give me six lines written by the hand of the most honest man, and I will find something in them to hang him"): when everyone is guilty of something, you can punish anyone at any time for any reason. Process crimes make for a much more plausible pretext to go after a target than any "intrinsically bad" thing.
tout their “holistic” approach to recognizing creativity and intellectual promise
This doesn't mean what you think it means. It's code for racial discrimination.
I edited the screenshot of this Twitter thread.
If you'd put in a link to a deleted tweet, I'd probably have believed it.
The primary application of "safety research" is improving refusal calibration, which, at least from a retail client's perspective, is exactly like a capability improvement: it makes no difference to me whether the model can't satisfy my request or can but won't. It's easy to demonstrate differences in this regard (simply show one model refusing a request that another fulfills), so I disagree that this would cause clients to be "dissuaded from AI in general."