An existentially relevant thought experiment: to kill or not to kill. A sniper, a man, and a button.
There is a room with one window. Inside is a man. On the ceiling is an interesting button. What happens when it is pressed? Everybody dies, except the man and 1,000 people he gets to pick. The button is not visible from outside the room. The man sometimes...
Thanks for the quick reply!
It is my view that AI labs are building AGI that can do everything a powerful general intelligence can do, including executing a successful world-takeover plan, with or without causing human extinction.
If the first AGI is misaligned, I am scared it will want to execute such a plan, which would be like pressing the button. The scenario is most relevant when there is no aligned AGI yet that wants to protect us.
I see now I need to clarify: the random person / man in the scenario mostly represents the AGI itself (but also a misuse situation where a non-elected person gives the command to an obedient...