I've always disliked the idea of people who assess risk differently being called "evil." There's a big difference between "I want to kill everyone, so I'm creating technology X" and "I disagree with the risk assessment of this technology and want to make the world a better place, so I'm creating it."
I think the LessWrong community has become too radicalized, considering itself so epistemologically right that it calls people evil who simply disagree with it about p(doom).
UPD: The problem with this radicalization of the community is...
I don't understand the argument about the trillion expected lives. Why is it assumed that after the singularity we will have trillions more human children? Why is this not a mind upload and the end of reproduction in its current form? And the people alive at the time of the singularity will simply exist in virtual worlds with AI agents?
I find it strange to ask an ASI to emulate a human so that you can raise them in your virtual world. And, say, 8 billion improved uploaded human minds may not strive to emulate as many variations of human brains as possible in order to fill the universe with them.
Cryonics is expensive, unpopular, and unavailable in most countries of the world. This is also a situation where young and rich people in first-world countries buy themselves a reduced probability of their own death, while the poor and the old are guaranteed to be deprived of that chance.
I don't understand what state you think other technologies are in from the moment Agent-3 appeared until everything got out of control. What about Agent-3 helping with WBE? Or with increasing the intelligence of adults to solve alignment?
Why, in worlds where we already have powerful agents that haven't yet gotten out of control, do we see them working on self-improvement but no game-changing progress in other areas?
Because you might be wrong, or people might genuinely not understand that you're right? Many people believe their lives are endangered by all sorts of things: COVID vaccines, GMOs, climate change, etc. They also sincerely believe that they are right, and they can consider scientists evil for endangering their lives and the lives of their families. But the problem is that when they start equating anyone who disagrees with them with an evil that deliberately wants to destroy them, it radicalizes them and promotes enmity. Is it a rational approach when every person who is confident that technology N can kill him starts openly hating the scientists who develop it?