An artificial superintelligence (ASI) emerging in a world in which war is still normalised may constitute a catastrophic existential risk, either because the ASI itself goes to war to establish global supremacy (internal risk), or because a single nation-state employs an ASI to wage war for global supremacy (external risk). We now live in a world where few states formally declare war; the last major declaration of a state of war was Georgia's in 2008, during the Russo-Georgian War. This is because Article 2 of the 1945 United Nations Charter requires UN member states to “refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state”, while allowing for “military measures by UN Security Council resolutions” and the “exercise of self-defense”. Under this theoretical ideal, wars are not declared; instead, ‘international armed conflicts’ occur. Interstate wars, both ‘hot’ and ‘cold’, nevertheless persist: the Syrian Civil War, for instance, involves an interstate proxy war, and the Korean War has never formally ended. Furthermore, a ‘New Cold War’ between the AI superpowers, the United States and China, looms. A future interstate war directed or enabled by an ASI could escalate into ‘total war’, including nuclear war, and may therefore be considered ‘high risk’. One risk reduction strategy would be optimising peace through a Universal Global Peace Treaty (UGPT), which could contribute to ending existing wars and preventing future ones through conforming instrumentalism. While this strategy cannot address non-state actors, it could influence state actors, including those developing ASIs, or the ASI itself, should it assume agency.
An opportunity to optimise peace as a risk reduction strategy is emerging: the UGPT could be leveraged off the announcement of a ‘burning plasma’ fusion reaction, expected circa 2025 to 2035, much as the 1946 Baruch Plan attempted to leverage fission following its use in atomic war.