This particular scenario looks uncomfortably probable to me. Even assuming AIs that are aligned with humanity, given enough variance in their alignments and a sufficiently multipolar setting, resisting these disempowerment pressures seems quite hard. A better outcome I can imagine is that once one AI wins the competition, it hands some decision-making power back to humans. It would be useful to work out the equilibrium boundary, in terms of number of agents and alignment variance, that separates stable human influence from runaway disempowerment. A toy sketch of what I mean is below.
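Roughly, something like the following simulation, where every modelling choice (the growth rule, the feedback to humans, the parameter values) is a made-up assumption of mine rather than anything from the post; the point is only that you could sweep over number of agents and alignment variance and see where human influence stops being stable:

```python
import numpy as np

def simulate(n_agents, alignment_var, steps=200, seed=0):
    """Toy dynamics: agents with lower alignment compound influence faster;
    well-aligned agents hand a fraction of their gains back to humans."""
    rng = np.random.default_rng(seed)
    alignment = rng.normal(0.5, np.sqrt(alignment_var), size=n_agents)
    alignment = np.clip(alignment, 0.0, 1.0)
    power = np.full(n_agents, 1.0 / n_agents)  # AI agents' influence shares
    human = 0.5                                 # humans' influence share

    for _ in range(steps):
        # Assumed disempowerment pressure: less-aligned agents grow faster.
        power *= 1.0 + 0.05 * (1.0 - alignment)
        # Assumed feedback: aligned agents return some influence to humans.
        human += 0.01 * np.sum(power * alignment)
        # Conserve total influence.
        total = human + power.sum()
        human /= total
        power /= total

    return human  # final human share of influence

def boundary(n_agents_grid, variance_grid, threshold=0.1):
    """Mark where humans retain at least `threshold` influence at the end."""
    keep = np.zeros((len(n_agents_grid), len(variance_grid)), dtype=bool)
    for i, n in enumerate(n_agents_grid):
        for j, v in enumerate(variance_grid):
            keep[i, j] = simulate(n, v) >= threshold
    return keep

if __name__ == "__main__":
    ns = [2, 5, 10, 50]
    variances = [0.0, 0.01, 0.05, 0.1]
    grid = boundary(ns, variances)
    for i, n in enumerate(ns):
        print(f"n_agents={n:>3}:", " ".join("keep" if k else "lose" for k in grid[i]))
```

In this toy version the boundary falls out of how likely it is that at least one badly aligned agent exists and compounds its lead, which is why both the number of agents and the variance matter; a real analysis would obviously need much better-motivated dynamics.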