Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)