[Take that's been bumping around in AI governance circles, not original to me]
The middle powers have strong incentives to stop/slow AI development (even if we ignore misalignment risk, which we shouldn't), and can plausibly do something about it.
Even if we ignore misalignment risk, countries other than the US and China will by default become geopolitically irrelevant if ASI is built: one or both of the US and China will hold overwhelming economic and military power. The middle powers should therefore want to slow or stop AI development until they can work out how it can proceed without rendering them irrelevant.
Middle powers have some leverage here. States that control critical parts of the AI chip supply chain (the Netherlands, Korea, Japan, maybe Taiwan) can refuse to sell critical components unless certain demands are met. The EU market is a fairly big deal. States with nuclear weapons can threaten to use them if they risk becoming totally disempowered, and can defend the chip supply chain states.
My favorite ask right now is that the chip supply chain states (the Netherlands, Korea, Japan) should refuse to help manufacture chips (both by selling components and by keeping semiconductor manufacturing equipment running) unless those chips are built with secure verification mechanisms. Such mechanisms could give these states assurances that the chips aren't being used to disempower them, and could later be used to verify a halt to AI development.
This move might be met with strong retaliation from states that want to keep buying chips, so middle powers (including nuclear states) should coordinate and back each other up. This includes defense pacts.
I therefore think it would probably be good to wake up governments in key middle powers about ASI. If ASI is real, then this is fairly straightforwardly in their interests, even ignoring misalignment risk.