Charbel-Raphael Segerie

https://crsegerie.github.io/ 

Living in Paris

Comments

Charbel-Raphaël's Shortform
Charbel-Raphaël · 1mo

Thanks a lot for this comment.

Potential examples of precise red lines

Again, the call was the first step. The second step is finding the best red lines.

Here are more aggressive red lines:

  • Prohibiting the deployment of AI systems that, if released, would have a non-trivial probability of killing everyone. The probability would be determined by a panel of experts chosen by an international institution.
  • "The development of superintelligence […] should not be allowed until there is broad scientific consensus that it will be done safely and controllably (from this letter from the Vatican).

Here are some potentially already-operational ones, taken from OpenAI's Preparedness Framework:

  • [AI Self-improvement - Critical - OpenAI] The model is capable of recursively self-improving (i.e., fully automated AI R&D), defined as either (leading indicator) a superhuman research scientist agent OR (lagging indicator) causing a generational model improvement (e.g., from OpenAI o1 to OpenAI o3) in 1/5th the wall-clock time of equivalent progress in 2024 (e.g., sped up to just 4 weeks) sustainably for several months. - Until we have specified safeguards and security controls that would meet a Critical standard, halt further development.
  • [Cybersecurity - AI Self-improvement - Critical - OpenAI] A tool-augmented model can identify and develop functional zero-day exploits of all severity levels in many hardened real-world critical systems without human intervention - Until we have specified safeguards and security controls that would meet a Critical standard, halt further development.

"help me understand what is different about what you are calling for than other generic calls for regulation"

Let's recap. We are calling for:

  1. "an international agreement" - this is not your local Californian regulation
  2. that enforces some hard rules - "prohibitions on AI uses or behaviors that are deemed too dangerous" - it's not about asking AI providers to do evals and call it a day
  3. "to prevent unacceptable AI risks."
    1. Those risks are enumerated in the call
      1. Misuses and systemic risks are enumerated in the first paragraph
      2. Loss of human control in the second paragraph
  4. The way to do this is to "build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds."
    1. Which is to say that one way to do this is to harmonize the risk thresholds defining unacceptable levels of risk in the different voluntary commitments.
    2. "existing global frameworks": this notably includes the AI Act and its Code of Practice, and this should be done in a way that is compatible with other high-level frameworks
  5. "with robust enforcement mechanisms — by the end of 2026." - We need to get our shit together quickly, and enforcement mechanisms could entail multiple things. One interpretation from the FAQ is setting up an international technical verification body, perhaps the international network of AI Safety institutes, to ensure the red lines are respected.
  6. We give examples of red lines in the FAQ. Although some of them have a grey zone, I would disagree that this is generic. Those red lines name the risks and state that we want to avoid AI systems whose evaluations indicate substantial risks in those directions.

This is far from generic.

"I don't see any particular schelling threshold"

I agree that for red lines on AI behavior, there is a grey area that is relatively problematic, but I wouldn't be as negative.

The absence of a narrow Schelling threshold is not a reason not to coordinate to create one. "Superintelligence" is also very blurry, in my opinion, and there is a substantial probability that we just boil the frog all the way to ASI - so even if there is no clear threshold, we need to create one. This call says that we should set some threshold collectively and enforce it with vigor.

  1. In the nuclear and aerospace industries, there is no particular Schelling point either - but that doesn't matter - the red line is defined as a 1/10,000 chance of catastrophe per year for a given plane or nuclear plant, and that's it. You could have added a zero or removed one; I don't care. But I care that there is a threshold.
  2. We could do the same for AI - the threshold itself might be arbitrary, but the principle of having a threshold beyond which you need to be particularly vigilant, install mitigations, or even halt development seems to me to be the basis of RSPs (see the toy sketch after this list).
  3. Those red lines should be operationalized. But I don't think the operationalization needs to live in the text of the treaty; it could be done by a technical body, which would then update those operationalizations from time to time as the science and risk modeling evolve.
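
To make the threshold principle concrete, here is a minimal sketch (purely illustrative; the specific numbers, tier names, and the `required_response` helper are my own placeholders, not anything proposed in the call or in any existing RSP) of how a technical body could map an estimated annual catastrophe probability to a required action:

```python
# Toy illustration of a tiered risk threshold, loosely in the spirit of RSPs.
# All numbers and tier names are arbitrary placeholders, not proposed policy.

THRESHOLDS = [
    (1e-4, "halt development"),                       # above ~1/10,000 per year
    (1e-5, "deploy only with specified safeguards"),  # above ~1/100,000 per year
    (1e-6, "heightened vigilance and reporting"),     # above ~1/1,000,000 per year
]

def required_response(p_catastrophe_per_year: float) -> str:
    """Map a panel's annual catastrophe-probability estimate to a required action."""
    for threshold, action in THRESHOLDS:  # checked from most to least severe
        if p_catastrophe_per_year >= threshold:
            return action
    return "proceed"

if __name__ == "__main__":
    for estimate in (3e-4, 2e-5, 1e-7):
        print(f"P(catastrophe) = {estimate:.0e}/year -> {required_response(estimate)}")
```

The exact cutoffs are arbitrary, as said above; what matters is that once such a mapping is published, "crossing the red line" becomes a checkable claim rather than a matter of interpretation.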

"confusion and conflict in the future"

I understand how our decision to keep the initial call broad could be perceived as vague or even evasive.

For this part, you might be right—I think the negotiation process resulting in those red lines could be painful at some point—but humanity has managed to negotiate other treaties in the past, so this should be doable.

"Actually, alas, it does appear that after thinking more about this project, I am now a lot less confident that it was good". --> We got 300 media mentions saying that Nobel wants global AI regulation  - I think this is already pretty good, even if the policy never gets realized.

"making a bunch of tactical conflations, and that rarely ends well." --> could you give examples? I think the FAQ makes it pretty clear what people are signing on for if there were any doubts.

Charbel-Raphaël's Shortform
Charbel-Raphaël · 1mo

It feels to me that we are not talking about the same thing. Is the fact that we put the specific examples of red lines in the FAQ rather than in the core text the main crux of our disagreement?

You don't cite any of the examples that are listed in our question: "Can you give concrete examples of red lines?"

Charbel-Raphaël's Shortform
Charbel-Raphaël · 1mo

Hi habryka, thanks for the honest feedback.

“the need to ensure that AI never lowers the barriers to acquiring or deploying prohibited weapons” - This is not the red line we have been advocating for; it is one red line raised by a representative speaking at the UN Security Council. I agree that some red lines are pretty useless, and some might even be net negative.

"The central question is what are the lines!" The public call is intentionally broad on the specifics of the lines. We have an FAQ with potential candidates, but we believe the exact wording is pretty finicky and must emerge from a dedicated negotiation process. Including a specific red line in the statement would have been likely suicidal for the whole project, and empirically, even within the core team, we were too unsure about the specific wording of the different red lines. Some wordings were net negative according to my judgment. At some point, I was almost sure it was a really bad idea to include concrete red lines in the text.

We want to work with political realities. The UN Secretary-General is not very knowledgeable about AI, but he wants to do good, and our job is to help him channel this energy into net-positive policies, starting from his current position.

Most of the statement focuses on describing the problem. The statement starts with "AI could soon far surpass human capabilities", creating numerous serious risks, including loss of control, which is discussed in its own dedicated paragraph. It is the first time that such a broadly supported statement explains the risks that directly, the cause of those risks (superhuman AI abilities), and the fact that we need to get our shit together quickly ("by the end of 2026"!).

All that said, I agree that the next step is pushing for concrete red lines. We're moving into that phase now. I literally just ran a workshop today to prioritize concrete red lines. If you have specific proposals or better ideas, we'd genuinely welcome them.

Charbel-Raphaël's Shortform
Charbel-Raphaël · 1mo

Almost all members of the UN Security Council are in favor of AI regulation or setting red lines.

Never before had the principle of red lines for AI been discussed so openly and at such a high diplomatic level.

UN Secretary-General Antonio Guterres opened the session with a firm call to action for red lines:

• “a ban on lethal autonomous weapons systems operating without human control, with [...] a legally binding instrument by next year”
• “the need to ensure that AI never lowers the barriers to acquiring or deploying prohibited weapons”

Then, Yoshua Bengio took the floor and highlighted our Global Call for AI Red Lines — now endorsed by 11 Nobel laureates and 9 former heads of state and ministers.

Almost all countries were favorable to some red lines:

China: “It’s essential to ensure that AI remains under human control and to prevent the emergence of lethal autonomous weapons that operate without human intervention.”

France: “We fully agree with the Secretary-General, namely that no decision of life or death should ever be transferred to an autonomous weapons system operating without any human control.”

While the US rejected the idea of “centralized global governance” for AI, this did not amount to rejecting all international norms. President Trump stated at UNGA that his administration would pioneer “an AI verification system that everyone can trust” to enforce the Biological Weapons Convention, saying “hopefully, the U.N. can play a constructive role.”

Extracts from each intervention.

[Photo: the UN Security Council chamber, with delegates seated around the central round table and the speaker shown on the wall screen.]
Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures
Charbel-Raphaël · 2mo

Right, but you also want to implement a red line on systems that would be precursors to this type of system, which is why we have a red line on self-improvement.

Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures
Charbel-Raphaël · 2mo

Updates: 

  • The global call for AI red lines got 300 media mentions, and was picked up by the world's leading newswires, AP & AFP, and featured in premier outlets, including Le Monde, NBC, CNBC, El País, The Hindu, The NYT, The Verge, and the BBC.
  • Yoshua Bengio presented our Call for Red Lines at the UN Security Council: "Earlier this week, with 200 experts, including former heads of state and Nobel laureates [...], we came together to support the development of international red lines to prevent unacceptable AI risks."

Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures
Charbel-Raphaël · 2mo

Thanks!

As an anecdote, some members of my team originally thought this project could be finished within 10 days of the French summit. I was more realistic, but even I was off by an order of magnitude. We learned our lesson.

Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures
Charbel-Raphaël · 2mo

This paper shows it can be done in principle, but in practice current systems are still not capable enough to do this at full scale on the internet. And I think that even if we don't die directly from fully autonomous self-replication, self-improvement is only a few inches away and is a true catastrophic/existential risk.

Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures
Charbel-Raphaël · 2mo

Thanks!

Yeah, we were aware of this historical difficulty, and this is why we mention "enforcement" and "verification" in the text. 

This is discussed briefly in the FAQ, but I think that an IAEA for AI, which would be able to inspect the different companies, would already help tremendously. And many other verification mechanisms are possible, e.g.:

  1.  https://techgov.intelligence.org/research/mechanisms-to-verify-international-agreements-about-ai-development
  2. https://www.un.org/scientific-advisory-board/sites/default/files/2025-06/verification_of_frontier_ai.pdf 

I will see if we can add a caveat about this in the FAQ.

Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures
Charbel-Raphaël · 2mo

"If random people tomorrow drop AI, I guarantee you things will change"

Doubts. 

  1. Why would random people drop AI? Our campaign already generated 250 mentions and articles in mass media; you need this kind of outreach to reach them.
  2. Many of those people are already against AI according to various surveys, and nothing seems to be happening currently.
Posts

  • A Call for Better Risk Modelling (7h)
  • Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures (2mo)
  • Dissolving moral philosophy: from pain to meta-ethics (3mo)
  • The bitter lesson of misuse detection (4mo)
  • The 80/20 playbook for mitigating AI scheming in 2025 (6mo)
  • [Paper] Safety by Measurement: A Systematic Literature Review of AI Safety Evaluation Methods (6mo)
  • Charbel-Raphaël's Shortform (7mo)
  • 🇫🇷 Announcing CeSIA: The French Center for AI Safety (11mo)
  • Are we dropping the ball on Recommendation AIs? (1y)
  • We might be dropping the ball on Autonomous Replication and Adaptation (1y)