When Eliezer proposes "turn all the GPUs to Rubik's cubes", that pivotal act, I think, IS outright violence. Nanotechnology doesn't work that way (something something local forces dominate). What DOES work is having nearly unlimited drones because they were manufactured by robots that replicated themselves exponentially, giving ASI-equipped parties more industrial resources than the entire world's capacity right now.
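The "more industrial resources than the entire world" claim is just doubling arithmetic. A minimal sketch — every number here (seed fleet size, replication time, world capacity) is an illustrative assumption, not from the argument itself:

```python
# Back-of-the-envelope check of exponential self-replication.
# Assumptions (all illustrative): a seed fleet of 1,000 robot-factories,
# each building a copy of itself every 30 days, measured against a crude
# stand-in for current world industry of 1e9 "robot-factory equivalents".

SEED_FLEET = 1_000
DOUBLING_DAYS = 30               # assumed replication time per generation
WORLD_CAPACITY = 1_000_000_000   # assumed world-industry equivalent

fleet = SEED_FLEET
days = 0
while fleet < WORLD_CAPACITY:
    fleet *= 2                   # each robot builds one copy of itself
    days += DOUBLING_DAYS

print(f"fleet exceeds assumed world capacity after {days} days")
# → fleet exceeds assumed world capacity after 600 days
```

Under these assumptions it takes 20 doublings (1,000 × 2²⁰ ≈ 1.05 billion), under two years — which is why the winner's industrial edge compounds so fast.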
Whoever has "nearly unlimited drones" is a State, and is committing State-sponsored violence, which is OK (by the international law of "whatcha gonna do about it").
So the winners of an AI race, with their "aligned" allied superintelligence, will have manufactured enough automated weapons to destroy everyone else's AI labs and to place the surviving human population under arrest.
That's how an AI war actually ends. If this is how it goes (and remember, this is a future where humans "won"), this is what happens.
The amount of violence before the outcome depends on the relative resources of the warring sides.
ASI singleton case: nobody has to be killed. Billions of drones using advanced technology attack everywhere on the planet at once. Decision makers are bloodlessly placed under arrest, guards are tranquilized, and the drones have perfect aim, so guns are shot out of hands and the engines of military machines are hit with small shaped charges. The only violence where humans die is in the assaults on nuclear weapons facilities, since the response-time math there doesn't allow a bloodless takeover.
Some nukes may be fired at the territory of the nation hosting the ASI; this kills a few million, tops, "depending on the breaks".
Two warring parties case, where one party's ASI or industrial resources are significantly weaker: nuclear war and a prolonged series of battles between drones. Millions or billions of humans are killed as collateral damage, and battlefields are littered with nuclear blast craters and destroyed hardware. A "minor inconvenience" for the winning side: since they have exponentially built robotics, the cleanup is rapid.
Free-for-all case, where everyone gets ASI and it's not actually all that strong in utility terms: outcomes range from a world of international treaties and stable equilibria similar to now, to a world war that consumes the earth, in which most humans don't survive. Again, it's a minor inconvenience for the winners. No digital data is lost, and exponentially replicated robotics mean the only long-term cost is a few years of cleanup.
Here's my objection to this: unless ethics are founded on belief in a deity, they must stem from humanity. So an action that could wipe out humanity makes any discussion of ethics moot; the point is, if you don't sanction violence to prevent human extinction, when do you ever sanction it? (And I don't think it's stretching the definition to suggest that law requires violence.)