This is the final piece of a blog post series that explores how AI prediction services affect the risks of war. It is based on my 10-week summer research project at Stanford Existential Risk Institute. Here's a summary of my project and my epistemic status.
In previous posts, I surveyed different prediction technologies and analyzed their implications for the risk of war. In this concluding post, I describe three possible “world order” scenarios that seem internally consistent. They differ in how prediction capability is distributed across and within nation-states.
Three Possible Worlds
Unipolar Order
In this world, one nation-state government gets the best AI prediction services. With a deeply integrated military-industrial complex, it achieves both intelligence and technological dominance. The two reinforce each other: control over the best information technology enables worldwide surveillance, and this intelligence access gives the government the best overall chance of keeping that control - it runs the most profitable businesses, preempts threats and peer competitors, and finds technical and managerial solutions the fastest. The world is largely peaceful most of the time. World politics runs on the unipole’s ideology and rules, more effectively than it does today. Other governments mostly accommodate the unipole’s interests but occasionally find room to restrain it. Only an extremely small number of highly determined subscribers to marginalized, “extremist” ideologies are interested in violence. Most things are dealt with behind the scenes, and the truth is tactfully disguised. The public image of “what is happening” can depart dramatically from reality.
Competitive Regional Orders
In this world, the few most powerful nation-state governments form an oligopoly over the best AI prediction services, since they enjoy large amounts of closed-source data and access to consumer data across different technology corporations. They compete intensely for citizens and their data as key strategic resources. They are likely to become more ideological, for ideology serves as a good strategy for retaining citizens and data. Competitive dynamics among them change little, although prediction services make international policy coordination - especially on topics of clear mutual interest, urgency, and expert consensus, such as climate change - easier and cheaper. Given the economies of scale of prediction services and concerns over data security, nation-states establish regional data networks to exploit the economic benefits of the prediction economy. They also share military intelligence. Great powers exert their influence by constituting these networks, setting standards in line with their interests, and sometimes threatening to exclude small powers that refuse to comply. Such bounded orders have high fixed costs and tend to reproduce themselves. Within-border transparency and trust co-exist with outside-border suspicion and secrecy. Ordinary citizens use personalized prediction services, for example to predict their friends’ availability this week or the political risk in their vacation destinations. Some are free but less accurate, while better ones cost a fortune and are more commonly used in professional settings or by dedicated individuals. These services, however, have not much changed the incentives for irrational political beliefs. Grand narratives of partisanship, ideology, and nationalism prevail.
Atomic Mass Politics
In this world, prediction capability is more widely diffused. This might happen because the performance gap between open-source and closed-source prediction services turns out to be insignificant, or because secure ways of sharing and transmitting data outpace methods of disruption and manipulation. The incentive structure for political engagement gradually changes: prediction services enable citizens in democratic societies to predict the distributional impacts of domestic and foreign policy, and even to predict their own preferences based on their values and interests. There is less room for partisan and ideological quarrels, and more for convenient political coalitions around specific policy issues; policymaking becomes more rational and self-interested. Stable, long-term alliances and international regimes decline relative to open-membership, flexible international contracts. Under such dynamics, nation-states, especially small powers, may have incentives to race for transparency, the new prerequisite for profitable trade, investment, and other forms of cooperation. In authoritarian regimes, a closer alignment of electorates and political leaders similarly follows, albeit more restricted in scope because the electorate size required for political control is smaller. Overall, international trade increases and nationalistic sentiments decline, as nation-states become less accurate approximations of how individuals’ interests aggregate compared to alternatives such as corporations, issue advocacy groups, and smaller-scale geographical communities. Nation-states remain the last resort of coercive force, mostly targeted at malicious actors and groups excluded from mainstream politics. However, the near-term adjustment to diffused prediction services may bring political turmoil to democracies, especially those with heterogeneous populations, giving authoritarian regimes a competitive advantage.
Such democracies either create new institutions to aggregate interests effectively, or turn to secession or devolution. Over time, the political institutions of nation-states come to resemble each other more than they do today.
How prediction technologies interact with international politics is messy. I have not found strong evidence that AI prediction services would dramatically change how nation-states deal with each other. However, there seem to be many plausible pathways through which AI prediction services can gradually make cooperation cheaper and war relatively more costly. There could be an undersupply of AI tools designed for nation-state cooperation, and of new institutions that use those tools.
A final note on this project’s limitations: I have focused on governments as the primary war-making actors and left out scenarios where only tech companies get to use the best prediction capabilities. My general impression is that tech oligarchies are hard to read; what they want to achieve varies a great deal (e.g., whether they would merge, fight, or take control of public sectors). I thus expect those scenarios to look very different from one another, and I do not have a good model of them. It may be worth exploring this further, possibly drawing on the industrial organization literature.
I would like to thank the Stanford Existential Risk Institute (SERI) for sponsoring this research project. I am most deeply indebted to my two mentors, Dr. Miles Brundage and Professor Robin Hanson. Their exceptionally patient and insightful guidance has been invaluable to a less-than-junior researcher struggling to navigate the complexity of the problem and to see her own limitations. I also had the pleasure of learning from Professor David J. Hand, Professor Vincent Conitzer, Professor Peter A. Gloor, and Professor Jeffrey Friedman, and I am grateful for their generosity in taking part in research interviews. Finally, I would like to thank Daniel Kokotajlo, Tamay Besiroglu, Nuño Sempere, Julia Chen, Kwan Yee Ng, and participants at the Center on Long-Term Risk’s research meeting and the SERI Symposium, among many other readers of an early draft, for their thoughtful comments and questions. All views expressed are my own.