Thank you for writing this. I've tried to summarize this article (it misses good points made above, but it might be useful to people deciding whether to read the full article):
Summary
AGI might be developed by 2027, but we lack clear plans for tackling misalignment risks. This post:
This plan focuses on two minimum requirements:
Layer 1 interventions (essential):
Layer 2 interventions (important):
This plan is meant as a starting point, and Marius encourages others to come up with better plans.
At BlueDot we've been thinking about this a fair bit recently, and might be able to help here too. We have also thought a bit about criteria for good plans and the hurdles a plan needs to overcome, and have reviewed a lot of the existing literature on plans.
I've messaged you on Slack.
Re: Your comments on the power distribution problem
Agreed that multiple powerful adversarial entities controlling AI does not seem like a good plan. And I agree that if the decisive winner of the AI race will not act in humanity's best interests, we are screwed.
But I think this is a problem to address before that happens: we can shape the world today so it's more likely that the winner of the AI race will act in humanity's best interests.
Re: Your points about alignment solving this.
I agree that if you define alignment as 'get your AI system to act in the best interests of humans', then the alignment problem becomes harder, and solving it is likely sufficient for problems 2 and 3. But I think that bundles more problems together in a way that might be less conducive to solving them.
For loss of control, I was primarily thinking about making systems intent-aligned, by which I mean getting the AI system to try to do what its creators intend. I think this makes dividing these challenges up into subproblems easier (and seems to be what many people are gunning for).
If you do define alignment as human-values alignment, I think "If you fail to implement a working alignment solution, you [the creating organization] die" doesn't hold - I can imagine successfully aligning a system to 'act in the best interests of its creators' working fine for its creators but not being great for the world.
It should! Fixed, thank you :)
The UK Government tends to use the PHIA probability yardstick in most of its communications.
This is used very consistently in national security publications. It's also commonly used by other UK Government departments as people frequently move between departments in the civil service, and documents often get reviewed for clearance by national security bodies before public release.
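For reference, the PHIA bands are roughly as follows (from memory, so do check the published yardstick before relying on them): remote chance ≲5%; highly unlikely ~10-20%; unlikely ~25-35%; realistic possibility ~40-50%; likely/probably ~55-75%; highly likely ~80-90%; almost certain ≳95%.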
It is less granular than the IPCC terms at the extremes, but the ranges don't overlap. I don't know which is actually better to use in AI safety communications, but being clear about which one you are using in your writing seems like a good way to go! In any case, being aware that it's a thing you'll see in UK Government documents might be useful.
A comment provided to me by a reader, highlighting 3rd party liability and insurance as interventions too (lightly edited):
Hi! I liked your AI regulator’s toolbox post – very useful to have a comprehensive list like this! I'm not sure exactly what heading it should go under, but I suggest considering adding proposals to greatly increase 3rd party liability (and/or require carrying insurance). A nice intro is here:
https://www.lawfaremedia.org/article/tort-law-and-frontier-ai-governance
Some are explicitly proposing strict liability for catastrophic risks. Gabe Weil has a proposal, summarized here: https://www.lesswrong.com/posts/5e7TrmH7mBwqpZ6ek/tort-law-can-play-an-important-role-in-mitigating-ai-risk
There are also workshop papers on insurance here:
https://www.genlaw.org/2024-icml-papers#liability-and-insurance-for-catastrophic-losses-the-nuclear-power-precedent-and-lessons-for-ai
https://www.genlaw.org/2024-icml-papers#insuring-uninsurable-risks-from-ai-government-as-insurer-of-last-resort
NB: when implemented correctly (i.e. when premiums are accurately risk-priced), insurance premiums are mechanically similar to Pigouvian taxes, internalizing negative externalities. So maybe this goes under the "Other taxes" heading? But that also seems odd. Like taxes, these are certainly incentive alignment strategies (rather than command and control) – maybe that's a better heading? Just spitballing :)
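To spell out the "mechanically similar" claim with a toy framing of my own (not from the linked papers): if an activity causes harm $L$ with probability $p$, then

$$\text{actuarially fair premium} = p \cdot L = t^{*}_{\text{Pigou}},$$

i.e. the fair premium equals the Pigouvian tax on the expected external harm. Any safety investment that lowers $p$ reduces the premium by exactly as much as it would reduce the tax, so the marginal incentives line up.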
I don't understand how the experimental setup provides evidence for self-other overlap working.
The reward structure for the blue agent doesn't seem to provide a non-deceptive reason to interact with the red agent. The described "non-deceptive" behaviour (going straight to the goal) doesn't seem to demonstrate awareness of or response to the red agent.
Additionally, my understanding of the training setup is that it tries to make the blue agent's activations the same regardless of whether it observes the red agent or not. This would mean there's effectively no difference when seeing the red agent, i.e. no awareness of it. (This is where I'm most uncertain - I may have misunderstood this! Is it to do with only training the subset of cases where the blue agent doesn't originally observe the red agent? Or having the KL penalty?). So we seem to be training the blue agent to ignore the red agent.
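To make my reading concrete, here's a minimal sketch of what I understand the objective to be (all names are mine, not the paper's, and this may well be where I've misunderstood the setup):

```python
import torch.nn.functional as F

def self_other_overlap_loss(policy_net, obs_without_red, obs_with_red):
    """Sketch of my reading of the SOO term. `policy_net` maps an
    observation to a hidden activation vector; the two observations are
    the same state with the red agent masked out vs. visible."""
    acts_self = policy_net(obs_without_red)   # blue agent "alone"
    acts_other = policy_net(obs_with_red)     # blue agent observing red
    # Pulling these activations together is the step that, on my reading,
    # erases any representational difference between "red agent present"
    # and "red agent absent" - i.e. trains blue to ignore red.
    return F.mse_loss(acts_other, acts_self)

# My understanding is the total loss is then something like:
#   task_loss + lambda_soo * self_other_overlap_loss(...)
#   (+ a KL penalty keeping the fine-tuned policy close to the original)
```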
I think what might answer my confusion is "Are we just training the blue agent to ignore the red agent entirely?"
Alternatively, I think I'd be more convinced by an experiment with a task where the blue agent still obviously needs to react to the red agent. One idea could be to add an obstacle ('non-goal') along the way that applies a penalty to both agents if either agent goes through it, and that can only be seen by the red agent (which knows it is this specific non-goal). Then the blue agent would have to sense when the red agent is hesitant about going somewhere and try to go around the obstacle (I haven't thought very hard about this, so it might also have problems). There's a rough sketch of what I mean below.
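Roughly, the reward structure I'm imagining (all names and numbers are my own, purely illustrative):

```python
GOAL_REWARD = 1.0
TRAP_PENALTY = -1.0

def step_rewards(blue_pos, red_pos, goal_pos, trap_pos):
    """Per-timestep rewards for (blue, red). The trap cell is visible
    only to the red agent, so blue must infer its location from red's
    behaviour rather than ignore red entirely."""
    blue_r = GOAL_REWARD if blue_pos == goal_pos else 0.0
    red_r = GOAL_REWARD if red_pos == goal_pos else 0.0
    if trap_pos in (blue_pos, red_pos):
        # Shared penalty: either agent entering the trap hurts both,
        # so reacting to the red agent stays useful for blue.
        blue_r += TRAP_PENALTY
        red_r += TRAP_PENALTY
    return blue_r, red_r
```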
Most sunscreen feels horrible and slimy (especially in the US where the FDA has not yet approved the superior formulas available in Europe and Asia).
What superior formulas available in Europe would you recommend?
I think this is less of an issue for technical AI papers. But I'm finding that more governance researchers (especially people moving from other academic communities) seem intent on publishing in journals that policymakers can't read! I have also sometimes been blocked from easily sharing papers with governance friends because they are behind paywalls. I might see this more because at BlueDot we get a lot of people who are early in their career transition and producing projects they want to publish somewhere.