This post highlights a few key excerpts from our full impact report. You can read the full report at https://controlai.com/impact-report-2025. ControlAI is a non-profit organization working to avert the extinction risks posed by superintelligence. We help hundreds of thousands of people understand these risks and meet hundreds of lawmakers to...
In this paper, we make recommendations for how middle powers can band together through a binding international agreement to prevent the development of ASI, without assuming initial cooperation by superpowers. You can read the paper here: asi-prevention.com. In our previous work, Modelling the Geopolitics of AI,...
We model how rapid AI progress may reshape geopolitics in the absence of international coordination to prevent dangerous AI development. We focus on predicting which strategies superpowers and middle powers would pursue, and which outcomes would result from them. You can read our paper here: ai-scenarios.com. Attempts to...
Expert opinions about future AI development span a wide range, from predictions that we will reach ASI soon and that humanity will then go extinct, to predictions that AI progress will soon plateau, leaving weaker AI that presents far more mundane risks and benefits. However, non-experts often encounter only a single...
"If China can't get millions of chips, we'll (at least temporarily) live in a unipolar world, where only the US and its allies have these models. It's unclear whether the unipolar world will last, but there's at least the possibility that, because AI systems can eventually help make even smarter...
We (Connor Leahy, Gabriel Alfour, Chris Scammell, Andrea Miotti, Adam Shimi) have just published The Compendium, which brings together in a single place the most important arguments that drive our models of the AGI race, and of what we need to do to avoid catastrophe. We felt that something like this...
We have published A Narrow Path: our best attempt to lay out a comprehensive plan for dealing with AI extinction risk. We propose concrete conditions that must be satisfied to address AI extinction risk, and offer policies that enforce these conditions. A Narrow Path answers the following: assuming extinction risk...