[ Question ]

What can Canadians do to help end the AI arms race?

by Tom938
5th Oct 2025
2 min read

3 Answers

J Bostock

Oct 06, 2025


Recently I was talking to someone about Pause AI's activities in the UK. They asked something like

"Even if Keir Starmer (UK Prime Minister) decided to pause, what would that do?"

And I responded:

"We would have the resources of an entire nation state pointed towards our cause."

Canada (and all of this applies to the UK too) has far more resources to throw around than MIRI, Lighthaven, OpenPhil, and Pause/Control/Stop AI all put together. Canada is a founding NATO member. If Canada's government and military-intelligence leaders were "Doomer-pilled" on AI, they could do a lot more than we have done so far.

Tom938

And then you would do what with those resources? If the pause is just a proxy-goal for whatever those resources would be used for, why not make that thing the explicit goal instead of the pause?


ChristianKl

Oct 06, 2025


Fortunately, the internet allows people to talk to people from other countries. There are plenty of online discussions at places like Reddit where you can argue with a global audience.

Unless you have special connections that let you influence Canadian politics, I would expect that using your energy to affect global public opinion is more promising.

Tom938

Thanks for your response. I was hoping for some way to make a difference through in-person interactions for a couple of reasons:

  • The people engaging in these discussions in places like Reddit are probably already familiar with the topic to some degree. But in person, I could reach people who don't already visit the corners of the internet where these topics are commonly discussed.
  • Online dialogue often doesn't give you much feedback to work with. If I post something that gets no response or a negative response, I can only speculate as to why. But in person, as long as I have someone's attention, they probably aren't going to stare at me with a blank poker face and walk away silently. I'll get some sort of reaction, which I can use to tailor my message and how it is presented.

But you're right, online dialogue matters and shouldn't be dismissed.

I don't have any special political connections, so it might seem delusional for me to aim at very ambitious political goals like the ones proposed in the post. I figure, though, that "AI is going to kill us all, but here's something we can do about it that could really make a difference" is a message that can inspire people to take action, whereas "AI is going to kill us all, and there's not much we can do about it" is not.


Perhaps

Oct 06, 2025


I think Canada still has a pretty outsized impact on AI, especially considering how many researchers and companies came out of places like the University of Toronto or UWaterloo.

Additionally, if we had even one example of good AI policy implemented successfully in a country, it could serve as proof to other countries that good AI regulation is possible and useful. In that regard, I think lobbying for well-thought-out AI regulation at the national level is really important, and it isn't necessary to focus on the international level to get traction.

I recently worked at the Canadian Department of National Defence's newly established AI Centre, and I was surprised that one of the policy team's tasks was an AI safety report, in addition to other generic governance and responsible-AI work. Admittedly, the report mostly focused on cybersecurity and CBRN risks, but some loss-of-control issues were mentioned as well. It indicated to me that integrating AI safety by working inside a department, pushing projects, or talking to director-level people is possible.

2 Comments
Adele Lopez

Unfortunately, I think most active capability researchers—and especially the top ones—think they are doing good already and wouldn't want to do something else.

Tom938

Yes, but it's not all about the way things are right now. It's about the way things could be, and how we get there. I think we can agree that even though capability researchers are not doing good, they do care about doing something good, or at least something that can be rationalized as "good" and perceived as good by others. Which means that if the culture shifts so that those activities are no longer seen as good, and the rationalizations are seen for what they are, they may well change their activities. Or at least the next generation of researchers, who haven't yet locked into a particular worldview and career path, may not continue them.

Michael Kratsios has said recently, “We totally reject all efforts by international bodies to assert centralized control and global governance of AI.” What if the US government doesn't budge on this commitment? This is a plan B: shift the culture among academics so that frontier capabilities research in the private sector is widely frowned upon and the best people want to avoid the well-earned stigma associated with it. Sublimate the competition for capabilities into a competition for righteousness.

The original question:
The AI economic / arms race is an existential threat to humanity, because it incentivizes rushing to develop more capable AI systems while cutting corners on safety. Stopping it would likely require a treaty that restricts the development of AI to a safe pace. The most important parties to this treaty would be the US and China, since they are at the frontier of AI development and are the most deeply invested in winning the race.

I would like to do what I can to end the AI arms race, but I live in Canada. For residents of third nations like me, I still think it's important to lobby our representatives to prioritize AI safety and to educate our peers about the risks of AI. But ending the AI arms race mainly comes down to what the US and China decide to do, and it's not clear how my actions can influence that. I'm therefore asking this question to solicit ideas on what those of us in third nations, particularly Canada, can do about the AI arms race. I'll propose a couple of ideas first, to hopefully spark some discussion.

One way to approach this problem is to ask the related question: what could an international coalition do that would slow down or stop the AI arms race, even if the US and China were not signatories? If we had a good answer to this question, then AI safety movements in third nations could advocate for the formation of such a coalition. It would give us a strategy that doesn't critically hinge on the participation of any one particular nation. Here are two ideas about what this coalition could do:

Idea 1: The coalition could require member states to agree to a preemptive ban on the use of AI models that are more powerful than some threshold. This threshold could be set just slightly beyond the limit of frontier models at the time the ban is passed, so it wouldn't restrict the use of any existing AI models. However, this ban would discourage investment into larger models, because applications built on larger models wouldn't have a market in any of the coalition member states.

Idea 2: The coalition could create something like the "GUARD" institution proposed in A Narrow Path. GUARD would "pool resources, expertise, and knowledge in a grand-scale collaborative AI safety research effort, with the primary goal of minimizing catastrophic and extinction AI risk," and would be governed by an "International AI Safety Commission" to ensure that safety is prioritized. Once this was established, we could appeal to top researchers to do responsible safety research at GUARD instead of irresponsibly contributing to the AI arms race. Rather than competing for economic and military superiority, we would be competing for the moral high ground and prestige, and using that to divert talent from a dangerous AI arms race. This won't stop the arms race, of course: there will always be mediocre researchers who will take whatever work they can find. But the top researchers, who can work wherever they want, would, I think, usually prefer to do good over evil if they can.

Please let me know your thoughts on these ideas, or any ideas you might have about what third nations can do to help stop the AI arms race.