Controlling AGI Risk
A theory of AGI safety based on constraints and affordances. I have a proto-idea about what's missing from much of the public discussion and action on AI safety. I'm hoping that by sharing it here, the hive-mind can come together and turn it into something useful. Effective control of AI risk requires...
Mar 15, 2024