I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread; how to help via full-time work; and how major AI companies can help.
What about major governments¹ - what can they be doing today to help?
I think governments could play crucial roles in the future. For example, see my discussion of standards and monitoring.
However, I’m honestly nervous about most possible ways that governments could get involved in AI development and regulation today.
In a previous piece, I talked about two contrasting frames for how to make the best of the most important century:
The caution frame. This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and “lock in” their values.
Ideally, everyone with the potential to build powerful enough AI would be able to pour energy into building something safe (not misaligned), and to carefully plan out (and negotiate with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:
The “competition” frame. This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.
This means it could matter enormously "who leads the way on transformative AI" - which country or countries, which people or organizations.
Some people feel that we can make confident statements today about which specific countries, and/or which people and organizations, we should hope lead the way on transformative AI. These people might advocate for actions like:
Tension between the two frames. People who take the "caution" frame and people who take the "competition" frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.
For example, people in the "competition" frame often favor moving forward as fast as possible on developing more powerful AI systems; for people in the "caution" frame, haste is one of the main things to avoid. People in the "competition" frame often favor adversarial foreign relations, while people in the "caution" frame often want foreign relations to be more cooperative.
That said, this dichotomy is a simplification. Many people - including myself - resonate with both frames. But I have a general fear that the “competition” frame is going to be overrated by default for a number of reasons, as I discuss here.
Because of these concerns, I don’t have a ton of tangible suggestions for governments as of now. But here are a few.
My first suggestion is to avoid premature actions, such as ramping up research on how to make AI systems more capable.
My next suggestion is to build up the right sort of personnel and expertise for challenging future decisions.
Another suggestion is to generally avoid putting terrible people in power. Voters can help with this!
My top non-”meta” suggestion for a given government is to invest in intelligence on the state of AI capabilities in other countries. If other countries are getting close to deploying dangerous AI systems, this could be essential to know; if they aren’t, that could be essential to know as well, in order to avoid premature and paranoid racing to deploy powerful AI.
A few other things that seem worth doing and relatively low-downside:
I’m centrally thinking of the US, but other governments with lots of geopolitical sway and/or major AI projects in their jurisdiction could have similar impacts. ↩
When discussing recommendations for companies, I imagine companies that are already dedicated to AI, and I imagine individuals at those companies who can have a large impact on the decisions they make.
By contrast, when discussing recommendations for governments, a lot of what I’m thinking is: “Attempts to promote productive actions on AI will raise the profile of AI relative to other issues the government could be focused on; furthermore, it’s much harder for even a very influential individual to predict how their actions will affect what a government ultimately does, compared to a company.” ↩