This is the second post in a blog series exploring how AI prediction services affect the risks of war. It is based on my 10-week summer research project at the Stanford Existential Risk Institute. See my first post for a summary of the project and my epistemic status.
In the first post, I surveyed different prediction technologies, their potential relevance to nation-state governments, and their likely development trajectories. This post draws on the international relations literature to examine what implications follow for the risk of war. The next post will describe three possible “world order” scenarios after prediction capability takes off.
Risks of War
I choose to focus on the rationalist...
Thanks for the comment, and I enjoyed reading the article! I basically agree with what you said, and I admit that I only touched briefly on this important "multi-level interests problem" within the "domestic audience" section. I think it would depend a lot on (1) how diffused those war-relevant prediction services are and (2) the distribution of societal trust in them (e.g. whether they become politicized). Both factors would be country- and context-specific, and I did not come up with useful ways to disentangle them further at a general level.