Worries about latent reasoning in LLMs