It feels to me that we are not talking about the same thing. Is the fact that we have relegated the specific examples of red lines to the FAQ, rather than the core text, the main crux of our disagreement?
You don't cite any of the examples listed under our FAQ question: "Can you give concrete examples of red lines?"
Hi habryka, thanks for the honest feedback.
“the need to ensure that AI never lowers the barriers to acquiring or deploying prohibited weapons” - This is not the red line we have been advocating for; it is one red line from a representative speaking at the UN Security Council. I agree that some red lines are pretty useless, and some might even be net negative.
"The central question is what are the lines!" The public call is intentionally broad on the specifics of the lines. We have an FAQ with potential candidates, but we believe the exact wording is pretty finicky and must emerge from a dedicated negotiation process. Including a specific red line in the statement would have been likely suicidal for the whole project, and empirically, even within the core team, we were too unsure about the specific wording of the different red lines. Some wordings were net negative according to my judgment. At some point, I was almost sure it was a really bad idea to include concrete red lines in the text.
We want to work with political realities. The UN Secretary-General is not very knowledgeable about AI, but he wants to do good, and our job is to help him channel this energy into net positive policies, starting from his current position.
Most of the statement focuses on describing the problem. The statement opens by warning that "AI could soon far surpass human capabilities", creating numerous serious risks, including loss of control, which is discussed in its own dedicated paragraph. It is the first time that such a broadly supported statement explains the risks so directly, the cause of those risks (superhuman AI abilities), and the fact that we need to get our shit together quickly ("by the end of 2026"!).
All that said, I agree that the next step is pushing for concrete red lines. We're moving into that phase now. I literally just ran a workshop today to prioritize concrete red lines. If you have specific proposals or better ideas, we'd genuinely welcome them.
Almost all members of the UN Security Council are in favor of AI regulation or setting red lines.
Never before had the principle of red lines for AI been discussed so openly and at such a high diplomatic level.
UN Secretary-General António Guterres opened the session with a firm call to action for red lines:
• “a ban on lethal autonomous weapons systems operating without human control, with [...] a legally binding instrument by next year”
• “the need to ensure that AI never lowers the barriers to acquiring or deploying prohibited weapons”
Then, Yoshua Bengio took the floor and highlighted our Global Call for AI Red Lines — now endorsed by 11 Nobel laureates and 9 former heads of state and ministers.
Almost all countries were in favor of some red lines:
China: “It’s essential to ensure that AI remains under human control and to prevent the emergence of lethal autonomous weapons that operate without human intervention.”
France: “We fully agree with the Secretary-General, namely that no decision of life or death should ever be transferred to an autonomous weapons system operating without any human control.”
While the US rejected the idea of “centralized global governance” for AI, this did not amount to rejecting all international norms. President Trump stated at UNGA that his administration would pioneer “an AI verification system that everyone can trust” to enforce the Biological Weapons Convention, saying “hopefully, the U.N. can play a constructive role.”
Extracts from each intervention.
Right, but you also want a red line on systems that would be precursors to this type of system, and this is why we have a red line on self-improvement.
Updates:
Thanks!
As an anecdote, some members of my team originally thought this project could be finished in 10 days after the French summit. I was more realistic, but even I was off by an order of magnitude. We learned our lesson.
This paper shows it can be done in principle, but in practice current systems are still not capable enough to do this at full scale on the internet. And even if we don't die directly from fully autonomous self-replication, self-improvement is only a few inches away, and it is a true catastrophic/existential risk.
Thanks!
Yeah, we were aware of this historical difficulty, and this is why we mention "enforcement" and "verification" in the text.
This is discussed briefly in the FAQ, but I think that an IAEA for AI, which would be able to inspect the different companies, would already help tremendously. And there are many other possible verification mechanisms, e.g. here:
I will see if we can add a caveat on this in the FAQ.
If random people tomorrow drop AI, I guarantee you things will change.
Doubts.
Thanks a lot for this comment.
Potential examples of precise red lines
Again, the call was the first step. The second step is finding the best red lines.
Here are more aggressive red lines:
Here are some potentially already-operational ones, from the preparedness framework:
"help me understand what is different about what you are calling for than other generic calls for regulation"
Let's recap. We are calling for:
This is far from generic.
"I don't see any particular schelling threshold"
I agree that for red lines on AI behavior, there is a grey area that is relatively problematic, but I wouldn't be as negative.
The absence of a narrow Schelling threshold doesn't mean we shouldn't coordinate to create one. Superintelligence is also very blurry, in my opinion, and there is a substantial probability that we just boil the frog all the way to ASI. So even if there is no clear threshold today, we need to create one. This call says that we should set some threshold collectively and enforce it with vigor.
"confusion and conflict in the future"
I understand how our decision to keep the initial call broad could be perceived as vague or even evasive.
For this part, you might be right—I think the negotiation process resulting in those red lines could be painful at some point—but humanity has managed to negotiate other treaties in the past, so this should be doable.
"Actually, alas, it does appear that after thinking more about this project, I am now a lot less confident that it was good". --> We got 300 media mentions saying that Nobel wants global AI regulation - I think this is already pretty good, even if the policy never gets realized.
"making a bunch of tactical conflations, and that rarely ends well." --> could you give examples? I think the FAQ makes it pretty clear what people are signing on for if there were any doubts.