Today, the Global Call for AI Red Lines was released and presented at the UN General Assembly. It was developed by CeSIA (the French Center for AI Safety), The Future Society, and the Center for Human-Compatible AI.
This call has been signed by a historic coalition of 200+ prominent figures, including former heads of state, ministers, diplomats, Nobel laureates, AI pioneers, industry experts, human rights advocates, political leaders, and other influential thinkers, as well as 70+ organizations.
Signatories include:
The full text of the call reads:
AI holds immense potential to advance human wellbeing, yet its current trajectory presents unprecedented dangers. AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals, including children, national and international security concerns, mass unemployment, and systematic human rights violations.
Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world. Many experts, including those at the forefront of development, warn that if left unchecked, it will become increasingly difficult to exert meaningful human control in the coming years.
Governments must act decisively before the window for meaningful intervention closes. An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks. These red lines should build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds.
We urge governments to reach an international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026.
At the 2024 AI Seoul Summit, companies pledged to “Set out thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable”, yet nothing today prevents companies such as Meta or xAI from setting those thresholds too high, or from not setting them at all. Without common rules, the race becomes a race to the bottom, and safety-conscious actors are put at a disadvantage.
Red lines are already being operationalized in AI companies’ safety and security frameworks. For example, for AI models above a critical level of cyber-offense capability, OpenAI states: “Until we have specified safeguards and security controls standards that would meet a critical standard, halt further development.” These definitions of critical capabilities requiring robust mitigations now need to be harmonized and strengthened across companies.
On the website, you will find an FAQ:
Our aim with this call is to move away from industry self-regulation and reach an international agreement on red lines for artificial intelligence by the end of 2026 to prevent the most severe AI risks.
You can access the website here: https://red-lines.ai