Self-interpretability: LLMs can describe complex internal processes that drive their decisions