LLM AGI may reason about its goals and discover misalignments by default