Theories of Impact for Causality in AI Safety