How LLMs are and are not myopic