Essential LLM Assumes We're Conscious—Outside Reasoner AGI Won't