AI safety without goal-directed behavior