Here are some ideas on how we can restrict Super-human AI (SAI) without built-in Alignment:
Overall idea:
We need to use unbreakable first principles, especially the laws of physics, to restrict or destroy SAI. We can leverage the asymmetries in physics, such as the speed-of-light limit, causality, entropy, and the uncertainty principle.
Assumptions:
1. Super-Intelligence cannot break the laws of physics.
2. SAI is silicon-based.
3. Anything with lower intelligence than the current Collective Human Intelligence (CHI) cannot destroy all humans.
Example Idea 1 (requires only Assumption 1), "Escape":
We send human biological information (or any other information we want to protect from the SAI) at or near the speed of light toward the Cosmic Event Horizon. Then, after some time, the information will...
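For context, the Cosmic Event Horizon invoked here has a standard definition in cosmology: it is the proper distance beyond which a light signal emitted from our location at cosmic time $t$ can never reach, roughly 16–17 billion light-years today. A minimal sketch of that definition, assuming a flat ΛCDM universe with scale factor $a(t)$ (the formula and the distance figure are standard cosmology, not taken from this post):

$$ d_{\mathrm{EH}}(t) \;=\; a(t) \int_t^{\infty} \frac{c\,\mathrm{d}t'}{a(t')} $$

Anything that ends up farther than $d_{\mathrm{EH}}$ from an observer who stays behind can never again be reached or influenced by that observer, which appears to be the physical asymmetry this "Escape" idea leverages.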