Investigating causal understanding in LLMs