Why Instrumental Goals are not a big AI Safety Problem