An information-theoretic study of lying in LLMs