Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures

by Charbel-Raphaël
22nd Sep 2025
2 min read

This is a linkpost for https://red-lines.ai/

Today, the Global Call for AI Red Lines was released and presented at the UN General Assembly. It was developed by CeSIA (the French Center for AI Safety), The Future Society, and the Center for Human-Compatible AI.

The call has been signed by a historic coalition of 200+ prominent figures, including former heads of state, ministers, diplomats, Nobel laureates, AI pioneers, industry experts, human rights advocates, political leaders, and other influential thinkers, as well as 70+ organizations.

Signatories include:

  • 10 Nobel laureates in economics, physics, chemistry, and peace
  • Former heads of state or government: Mary Robinson (Ireland), Enrico Letta (Italy)
  • Former UN representatives: Csaba Kőrösi, 77th President of the UN General Assembly
  • Leaders and employees at AI companies: Wojciech Zaremba (OpenAI co-founder), Jason Clinton (Anthropic CISO), Ian Goodfellow (Principal Scientist at DeepMind)
  • Top signatories of the CAIS Statement on AI Risk: Geoffrey Hinton, Yoshua Bengio, Dawn Song, Ya-Qin Zhang

The full text of the call reads:

AI holds immense potential to advance human wellbeing, yet its current trajectory presents unprecedented dangers. AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations. 

Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world. Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years. 

Governments must act decisively before the window for meaningful intervention closes. An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks. These red lines should build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds. 

We urge governments to reach an international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026. 

 

At the 2024 AI Seoul Summit, companies pledged to "Set out thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable", but nothing today prevents Meta or xAI from setting those thresholds too high, or from not setting them at all. Without common rules, the race is a race to the bottom, and safety-conscious actors will be disadvantaged.

Red lines have already begun to be operationalized in AI companies' safety and security frameworks. For example, for AI models above a critical level of cyber-offense capability, OpenAI states: "Until we have specified safeguards and security controls standards that would meet a critical standard, halt further development." These definitions of critical capabilities that require robust mitigations now need to be harmonized and strengthened across companies.

 

On the website, you will find an FAQ:

  • What are red lines in the context of AI?
  • Why are international AI red lines important?
  • What are some examples of possible red lines?
  • Are international AI red lines even possible?
  • Are we starting from scratch?
  • Who would enforce these red lines?
  • Why 2026?
  • What should be the next steps?

Our aim with this call is to move beyond industry self-regulation and to reach an international agreement on AI red lines by the end of 2026, in order to prevent the most severe AI risks.

You can access the website here: https://red-lines.ai
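
Comments

MichaelDickens:

This seems like a good thing to advocate for. I'm disappointed that they don't make any mention of extinction risk, but I think establishing red lines would be a step in the right direction.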