Do alignment concerns extend to powerful non-AI agents?