Reducing Agents: When abstractions break