For several years now, I’ve been fielding calls from people who want to help make AI go well for humanity. Sometimes these folks are based outside the U.S., and often they ask me: With most of the labs concentrated in a few places (the U.S., China, and DeepMind in the U.K.), what are the rest of us to do?
I won’t deny that it’s harder. But frankly, all of us in the field are facing an uphill battle right now, and we need all the help we can get. Below, I’ll share my most common recommendations for folks who live outside the major AI hotbeds: public technical governance research and national policy work (yes, really).
Public technical governance research
There seems to be low-hanging fruit in the space of technical governance, which aims to answer the tough technical questions that effective governance of AI research will require. Work on these questions can be done almost anywhere, and the answers can be useful in a wide variety of governance regimes.
(In fact, MIRI’s technical governance team is running a remote-optional fellowship in 2026 for this very purpose! Check out the link for examples of open research questions.)
National policy work
At a recent conference, I ran an AI wargame in which three teams played the United States, China, and the European Union, respectively. Predictably, the U.S. and China spent most of the game sabotaging each other’s stability and AI research, while the E.U. issued a bunch of policies that the labs more or less ignored. As the endgame loomed, however, the situation changed. The E.U. announced a treaty halting the race to superintelligence; a treaty which, despite their previous differences, both the U.S. and China teams were convinced to sign.
Wargames are not reality, but this was nevertheless an encouraging experience; I had been pretty sure that particular simulated world was thoroughly doomed.
A cynic might claim that the official position of (for example) Brazil on global AI development doesn’t make much difference to the world at large. Truth be told, the cynic has a point. Brazil can’t regulate the U.S. AI industry. But the national policy of many countries still matters for shaping the global environment, and good work can be done in this field.
With that in mind, here are some examples of national policies that I think might help:
A country becomes the first to publicly endorse an international treaty banning development of superintelligence.
A group of countries signs a preliminary agreement to that effect.
A country adopts a functional legal regime permitting widespread use of AIs for mundane purposes while banning frontier development. (The costs of such a regime are much lower in places that don’t have AI labs yet!)
A country funds public technical governance research.
I think this is an underexplored area. Those of us based in the U.S. are attempting to tug U.S. policy against a veritable tide of lobbyists with tens of millions of dollars in funding, whose preferred approach to regulation is “nah”. It needs doing, but there may be opportunities elsewhere to swim against a less powerful current.
Also, as we saw with France’s disappointingly reckless vibe shift at the 2025 Paris AI Action Summit, a bad choice of national policy can do a lot of damage to global governance. Yes, that shift in focus may have been partly downstream of U.S. policy decisions (see Exhibit A: Vance’s remarks at that same summit), but it’s still up to the host country to set the tone of such an event.
Contrast this with the Japanese prime minister’s 2024 promise to lead development of an international AI framework, following the previous year’s establishment of the Hiroshima AI Process at a Japan-hosted G7 summit.
Talk may be cheap, but it is also a prelude to things actually happening.
The more world leaders can be induced to understand — and publicly acknowledge — that superintelligence can and will actually, literally kill everyone, the better our chances of getting a global governance regime in time.