I believe the existential risk from misaligned AI is extremely high because the AI arms race is a race to the bottom, and attempts at regulation will likely fail given the incentive not to fall behind competitors. Misaligned AI could also come from small unsupervised groups, other countries, or even a large company whose R&D project goes wrong. Alignment requires everyone to agree that a particular methodology is the best path forward, which is unlikely.
So why is there so much focus on alignment and regulation, and so little on building credible defenses? Defensive AIs designed to monitor and counteract misaligned AI in digital space, along with greater investment in cybersecurity and infrastructure hardening, may be a more valuable prospect for investment. I think someone developing a misaligned AI is not a matter of if, but when. Better to prepare than to hope everyone will agree on a common set of alignment protocols.