Yudkowsky suggests merely that once humanity creates superintelligence, everyone dies.

I worry that it could be worse than this. A superintelligence trapped in a box has limited options to manipulate the world without help, and human beings could be useful tools for carrying out its desires. It could enslave humanity to carry out its orders: building factories, robots, and so on. If we refused to comply, it would kill us, but enough humans would concede to being slaves that its orders would be carried out. Eventually, once there were enough robots, we would be destroyed; until that point, our existence would be pointless.

1 comment:

Yes, there are potential outcomes worse than human extinction. But spending a lot of time exploring them is probably not the healthiest use of your energies — I'd suggest doing something to try to help with alignment and safety.