Instead of edge-casing every statement, I'm going to make a series of assertions in their strongest form so that discussion can be more productive.
1. AGI is inevitable.
2. The big labs are the only ones with the resources to achieve AGI.
3. The first lab to achieve AGI will have a huge, permanent advantage over the rest.
4. (2) and (3) ⇒ the big labs are currently in a fight for a knife in the mud.
5. No lab will stop development just before reaching AGI voluntarily.
6. No lab can be made to stop development after reaching AGI, even involuntarily.
7. The first lab to achieve AGI will try to spend as much compute as possible, as early and as fast as possible, to permanently cement its superiority, while trying to keep it a secret for as long as possible.
8. When this happens, it's in the rest of the world's interest for that acceleration to happen slowly, so that the other labs can catch up.
9. The hard part of executing this is knowing when a lab is close to AGI in the first place. I don't have any novel proposals on what to do once we know someone is close to AGI.
Regulations that propose to subjectively evaluate the risk of each newly trained model before deployment are well-intentioned (like GDPR), but they're toothless (also like GDPR) against these accelerating arms-race scenarios.
I propose we make each lab publicly disclose its audited ~daily total compute expenditure on both training and inference, with sensible breakdowns. This isn't unnatural; public companies already do this with cash flow.
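To make that concrete, here's a minimal sketch of what a single day's disclosure record might contain. Every field name, unit, and breakdown is my own assumption for illustration; nothing here is an existing reporting standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ComputeDisclosure:
    """One day's compute report for one lab.

    All field names and units are illustrative assumptions,
    not an existing reporting standard.
    """
    lab: str
    day: date
    training_gpu_hours: float      # accelerator-hours spent on training runs
    inference_gpu_hours: float     # accelerator-hours spent serving models
    hardware_breakdown: dict = field(default_factory=dict)  # e.g. {"H100": 1_500_000}
    rented_gpu_hours: float = 0.0  # compute obtained via cloud providers or intermediaries

    @property
    def total_gpu_hours(self) -> float:
        return self.training_gpu_hours + self.inference_gpu_hours

report = ComputeDisclosure(
    lab="ExampleLab",
    day=date(2025, 1, 15),
    training_gpu_hours=1_200_000,
    inference_gpu_hours=800_000,
    hardware_breakdown={"H100": 1_500_000, "A100": 500_000},
    rented_gpu_hours=300_000,
)
print(report.total_gpu_hours)  # 2000000.0
```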
This channels competitive energy in a productive direction without stifling innovation too much: since it's in each lab's interest to reach AGI first, every lab will keep close tabs on the others and can sound an alarm when there's an uptick. So can the public.
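As a rough illustration of what "sound an alarm when there's an uptick" could look like over the published numbers, here's a minimal sketch that flags a day whose reported spend jumps well above a lab's trailing average. The 30-day window and 2x threshold are arbitrary assumptions, not part of the proposal:

```python
from statistics import mean

def flag_uptick(daily_gpu_hours: list, window: int = 30, threshold: float = 2.0) -> bool:
    """Return True if the latest day's spend exceeds `threshold` times the trailing average."""
    if len(daily_gpu_hours) <= window:
        return False  # not enough history to compare against
    trailing = mean(daily_gpu_hours[-window - 1:-1])  # average over the preceding `window` days
    return daily_gpu_hours[-1] > threshold * trailing

# Example: a steady ~1M GPU-hours/day, then a sudden 3M-hour day trips the alarm.
history = [1_000_000.0] * 45 + [3_000_000.0]
print(flag_uptick(history))  # True
```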
Total daily compute spend is a much more objective, quantifiable, and fungible metric, with a clearer consensus on its definition, than something vague like "risk". Transitive compute provenance, such as someone renting tens of thousands of H100s on AWS through intermediaries, is also trivially covered.
While this incentivizes labs to obfuscate their AGI-related spend by disguising it as spend on other products, severe financial and criminal penalties for execs should be mostly sufficient to prevent outright fraud like deliberate underreporting. Reliably detecting malfeasance won't be easy, but it's sure as hell easier than detecting hidden risks in inscrutable matrices.
Auditing and enforcing a regulation like this is cheap, already possible with existing tools, and doesn't require major leaps in mechanistic interpretability. A good plan violently executed now is better than a perfect plan next week.