Is instrumental convergence a thing for virtue-driven agents?
A key step in the classic argument for AI doom is instrumental convergence: the idea that agents with many different goals will end up pursuing the same few subgoals, which include things like "gain as much power as possible". If it weren't for instrumental convergence, you might think that only...