In addition to being misleading, this just makes AI one more (small) facet of security. But security is broadly underinvested in, and there is limited government pushback. Moreover, there is already a security community which prioritizes other issues and thinks differently. So this would place AI in the wrong metaphorical box.
While I'm not a fan of the proposed solution, I do want to note that it's good that people are beginning to look at the problem.
One line of reasoning is as follows:
Exactly this. The rest, those little irregularities, didn't matter at the time, because we didn't know what we didn't know.
It is an entirely separate problem.
First, do no harm.
One in a hundred likely won't be enough if the organization doing the boxing is sufficiently security-conscious. (And if it isn't, there will probably be other issues.)
China is currently an effective peer competitor of the US, among other things. 2010 is a rough estimate of when that condition began to obtain.
I think people here are uncomfortable advocating for political solutions, either because of their views of politics or because of their comfort level with it.
You don't have to believe that alignment is impossible to conclude that you should advocate for a political/governmental solution. All you have to believe is that the probability of x-risk from AGI is reasonably high and the probability of alignment working to prevent it is not. That seems to describe the beliefs of most people on LessWrong.
I suspect you will not accept this answer, but by many practical definitions the United States had control over the world from 1991 until around 2010.
It suggests putting more weight on a plan to get AI research globally banned. I am skeptical that this will work (though if burning all GPUs would count as a pivotal act, the chances of success are significantly higher), but it seems very unlikely that there is a technical solution either.
In addition, at least some purported technical solutions to AI risk seem to meaningfully increase the risk to humanity. If someone is creating an AGI to exercise sufficient control over the world to execute a pivotal act, that enormously raises the stakes of being first, which incentivizes cutting corners. It also makes it more likely that the AGI will destroy humanity, and quicker to do so.