To signal effectively, use a non-human, non-stoppable enforcer

Why the readers of Less Wrong should be interested in paperclipping the universe:

To achieve your goals, solving FAI is not enough. You also have to solve AGI, or else someone will beat you to the Singularity.

While you wait for a satisfactory version of CEV to be worked out, you need, as a practical case study, to think about an AGI with a definite goal system. What better example than an AGI with the goal of paperclipping the universe? You talk about it all the time already. Why not fully embrace the idea?