LLMs may capture key components of human agency