Recently I was talking to someone about Pause AI's activities in the UK. They asked something like
"Even if Kier Starmer (UK Prime Minister) decided to pause, what would that do?"
And I responded:
"We would have the resources of an entire nation state pointed towards our cause."
Canada (all this applies to the UK too) has a lot more resources to throw around than MIRI, Lighthaven, OpenPhil, and Pause/Control/Stop AI all put together. Canada is a founding NATO member. If Canada's government and military intelligence leaders were "Doomer-pilled" on AI, they could do a lot more than we have done.
And then you would do what with those resources? If the pause is just a proxy-goal for whatever those resources would be used for, why not make that thing the explicit goal instead of the pause?
Fortunately, the internet allows people to talk to people from other countries. There are plenty of online discussions at places like Reddit where you can argue with a global audience.
Unless you have special connections to influence Canadian politics, I would expect that using your energy to affect global public opinion is more promising.
Thanks for your response. I was hoping for some way to make a difference through in-person interactions for a couple of reasons:
But you're right, online dialogue matters and shouldn't be dismissed.
I don't have any special political connections. So it might seem delusional for me to be aiming for very ambitious political goals like the ones proposed in the post. I figure though that "AI is going to kill us all, but here's something we can do about it that could really make a difference" is a message that can inspire one to take action, whereas "AI is going to kill us all, and there's not much we can do about it" is not.
I think Canada still has a pretty outsized impact on AI, especially considering how many researchers and companies came out of places like the University of Toronto or UWaterloo.
Additionally, I think that if we at least had an example of what good AI policy looks like, implemented successfully in one country, it could serve as proof to other countries that good AI regulation is possible and useful. In that regard, I think lobbying for well-thought-out AI regulation policies at the national level is really important, and it's not necessary to focus on the international level to get traction.
I recently worked at the Canadian Department of National Defence's newly established AI Centre, and I was surprised that one of the policy team's tasks was an AI safety report, in addition to other generic governance and responsible AI work. Admittedly, the AI safety report mostly focused on cybersecurity and CBRN risks, but some loss-of-control issues were mentioned as well. It indicated to me that it's possible to advance AI safety by working within a department, pushing projects, or talking to director-level people.
Unfortunately, I think most active capability researchers—and especially the top ones—think they are doing good already and wouldn't want to do something else.
Yes, but it's not all about the way things are right now. It's about the way things could be, and how we can get there. I think we can agree that, even though capability researchers are not doing good, they do care about doing something good, or at least something that can be rationalized as "good" and perceived as good by others. This means that if the culture shifts so that those activities are no longer seen as good, and the rationalizations are seen for what they are, they may well change their activities. Or at least the next generation of researchers, who haven't yet locked into a particular worldview and career path, may not continue those activities.
Michael Kratsios has said recently, “We totally reject all efforts by international bodies to assert centralized control and global governance of AI.” What if the US government doesn't budge on this commitment? This is a plan B: shift the culture among academics so that frontier capabilities research in the private sector is widely frowned upon and the best people want to avoid the well-earned stigma associated with it. Sublimate the competition for capabilities into a competition for righteousness.
The AI economic / arms race is an existential threat to humanity, because it incentivizes rushing to develop more capable AI systems while cutting corners on safety. Stopping it would likely require a treaty that restricts the development of AI to a safe pace. The most important parties to this treaty would be the US and China, since they are at the frontier of AI development and are the most deeply invested in winning the race.
I would like to do what I can to end the AI arms race, but I live in Canada. For residents of third nations like me, I still think it's important to lobby our representatives to prioritize AI safety and to educate our peers about the risks of AI. But ending the AI arms race mainly comes down to what the US and China decide to do, and it's not so clear how my actions can influence that. I therefore ask this question to solicit ideas on what those of us in third nations, particularly Canada, can do about the AI arms race. I'll propose a couple of ideas first to hopefully spark some discussion.
One way to approach this problem is to ask the related question: what could an international coalition do that would slow down or stop the AI arms race, even if the US and China were not signatories? If we had a good answer to this question, then AI safety movements in third nations could advocate for the formation of such a coalition. It would give us a strategy that doesn't critically hinge on the participation of any one particular nation. Here are two ideas about what this coalition could do:
Idea 1: The coalition could require member states to agree to a preemptive ban on the use of AI models that are more powerful than some threshold. This threshold could be set just slightly beyond the limit of frontier models at the time the ban is passed, so it wouldn't restrict the use of any existing AI models. However, this ban would discourage investment into larger models, because applications built on larger models wouldn't have a market in any of the coalition member states.
Idea 2: The coalition could create something like the "GUARD" institution proposed in A Narrow Path. GUARD would "pool resources, expertise, and knowledge in a grand-scale collaborative AI safety research effort, with the primary goal of minimizing catastrophic and extinction AI risk," and would be governed by an "International AI Safety Commission" to ensure that safety is prioritized. Once GUARD is established, we could appeal to top researchers to do responsible safety research there instead of irresponsibly contributing to the AI arms race. Rather than competing for economic and military superiority, we would be competing for the moral high ground and prestige, and using that to divert talent from a dangerous AI arms race. This won't stop the arms race entirely, of course: there will always be mediocre researchers who will take whatever work they can find. But the top researchers, who can work wherever they want, would usually, I think, prefer to do good over evil if they can.
Please let me know your thoughts on these ideas, or any ideas you might have about what third nations can do to help stop the AI arms race.