This paper is now up (with the annex mentioned in the paper) as a preprint at https://osf.io/preprints/socarxiv/4268q. It can be cited as: Draper, John. 2020. “Optimising Peace Through a Universal Global Peace Treaty to Constrain Risk of War from a Militarised Artificial Superintelligence.” SocArXiv. April 15. doi:10.31235/osf.io/4268q. Comments are welcome...
One well-known solution to the Fermi Paradox is John Ball's 1973 Zoo Hypothesis (https://en.wikipedia.org/wiki/Zoo_hypothesis), i.e., the hypothesis that alien life intentionally avoids communication with Earth, one of its main interpretations being that it does so to allow for natural evolution and sociocultural development, similarly...
"Developments in AI cannot be separated from the emerging strategic competition with China and developments in the broader geopolitical landscape. We are concerned that America’s role as the world’s leading innovator is threatened. We are concerned that strategic competitors and non-state actors will employ AI to threaten Americans, our allies,...
International leaders are throwing their weight behind a proposal for a global ceasefire as the number of coronavirus cases worldwide passes the 2 million mark. President Macron of France said yesterday that President Trump, President Xi, and Boris Johnson had all confirmed to him that they backed the plea for...
Optimising Peace through a Universal Global Peace Treaty to Constrain Risk of War from a Militarised Artificial Superintelligence

John Draper

Abstract

An artificial superintelligence (ASI) emerging in a world where war is still normalised may constitute a catastrophic existential risk, either because the ASI might be deliberately employed by a single nation-state to wage war for global domination or because the ASI goes to war on its own behalf to establish global domination; these risks are not mutually exclusive, in that the first can transition to the second. We presently live in a world in which few states actually declare war on, or even go to war with, each other. This is because Article 2 of the 1945 United Nations Charter states that UN member states should “refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state”, while allowing for “military measures by UN Security Council resolutions” and the “exercise of self-defense”. In this theoretical ideal, wars are not declared; instead, 'international armed conflicts' occur. However, costly interstate conflicts, both ‘hot’ and ‘cold’, still exist, for instance the Kashmir Conflict and the Korean War. Furthermore, a ‘New Cold War’ between AI superpowers (the United States and China) looms. An ASI-directed or ASI-enabled future interstate war could trigger ‘total war’, including nuclear war, and is therefore ‘high risk’. One risk reduction strategy would be optimising peace through a Universal Global Peace Treaty (UGPT), which could contribute towards the ending of existing