The Centre pour la Sécurité de l'IA[1] (CeSIA, pronounced "seez-ya") or French Center for AI Safety is a new Paris-based organization dedicated to fostering a culture of AI safety at the French and European levels. Our mission is to reduce AI risks through education and information about these risks and their potential solutions.
Our activities span four main areas: AI governance, technical R&D, academic field-building, and public outreach.
Our team consists of 8 employees, 3 freelancers, and numerous volunteers.
By pursuing high-impact opportunities across multiple domains, we create synergistic effects between our activities. Our technical work informs our policy recommendations with practical expertise. Through education, we both identify and upskill talented individuals. Operating at the intersection of technical research and governance builds our credibility while advancing our goal of informing policy. This integrated approach allows us to provide policymakers with concrete, implementable technical guidance.
The future of general-purpose AI technology is uncertain, with a wide range of trajectories appearing possible even in the near future, including both very positive and very negative outcomes. But nothing about the future of AI is inevitable. It will be the decisions of societies and governments that will determine the future of AI.
– introduction of the International Scientific Report on the Safety of Advanced AI: Interim Report.
Our mission: Fostering a culture of AI safety by educating and informing about AI risks and solutions.
AI safety is a socio-technical problem that requires a socio-technical solution. According to the International Scientific Report on the Safety of Advanced AI, one of the most critical factors shaping the future will be society’s response to AI development. Unfortunately, many developers and policymakers, and most of the general public, remain largely unaware of AI's potentially catastrophic risks.
We believe AI safety needs a social approach to be tackled properly. Technical AI safety alone is not enough: even solutions found through technical research will likely be bottlenecked by policymakers' and the public's understanding of the issue.
We want to focus on these bottlenecks and maximize our counterfactual impact, targeting actions that would likely not be taken otherwise and maintaining clear theories of change for each project. We will write further articles detailing our methodology for reasoning through such counterfactuals and evaluating our projects quantitatively; because the future is uncertain, we mix different approaches so that we do not get trapped in the failure modes of any single one.
We think that, at its root, AI safety is a coordination problem; awareness and engagement on multiple fronts will therefore be crucial catalysts for the adoption of best practices and effective regulation.
As we are located in France, our immediate focus is on what we can achieve within France and Europe. Both are becoming increasingly important for AI safety, particularly as France prepares to host the next International AI Action Summit in February 2025, following the UK’s Bletchley summit in November 2023 and South Korea’s follow-up summit in May 2024.
France has positioned itself as a significant AI hub in Europe, hosting major research centers and offices of leading AI companies including Meta, Google DeepMind, Hugging Face, Mistral AI, and recently OpenAI. This concentration of AI development makes French AI policy particularly influential, as seen during the EU AI Act negotiations, where France expressed significant reservations about the legislation.
The country's strong advocacy for open-source AI development raises concerns about distributed misuse of future AIs, reduced control mechanisms, and potential risks from autonomous replication. As one of the organizations involved in drafting the EU AI Act's Code of Practice, we're working to ensure robust safety considerations for frontier AI systems are integrated into these guidelines. You can read more on France’s context here.
After the summit, it’s possible we will expand our focus internationally.
Our current activities revolve around four pillars:
AI governance
Rationale: Engaging directly with key institutions and policymakers helps them better understand the risks, raising awareness and informing decision-making.
Technical R&D
Rationale: We think R&D can contribute to safety culture by creating new narratives that we can draw on when discussing AI risks.
Academic field-building
Rationale: Educating the next generation of AI researchers is important. By offering university courses, bootcamps, and teaching materials, we can attract and upskill top talent.
Public outreach
Rationale: By educating the general public and, in particular, the scientific community, we aim to promote a robust culture of AI safety within public institutions, academia, and industry. Public awareness is the foundation of societal engagement, which in turn influences institutional priorities and decision-making processes. Policymakers are not isolated from public opinion; they are often swayed by societal narratives and the concerns of their families, colleagues, and communities.
Our starting members include:
Board, in alphabetical order:
Advisors:
We also collaborate with Alexandre Variengien (co-founder of CeSIA), Diego Dorn (main contributor to our BELLS project and head teacher at several ML4Good bootcamps), and Nia Gardner (Executive Director of the new ML4Good organization), as well as Jérémy Andréoletti, Shaïman Thürler, Mathias Vigouroux, Pierina Camarena, Hippolyte Pigeat, Pierre Chaminade, Martin Kunev, Emma Gouné, Inès Belhadj, Capucine Marteau, Blanche Freudenreich, Amaury Lorin, Jeanne Salle, Léo Dana, Lucie Philippon, Nicolas Guillard, Michelle Nie, Simon Coumes, and Sofiane Mazière.
We have received several generous grants from the following funders (as well as multiple smaller grants not listed here), whom we cannot thank enough:
These grants fell into two primary categories: unrestricted funds for CeSIA’s general operations, and targeted support for specific project initiatives.
We created the organization in March 2024. Most of the work described above was carried out in the last six months, with the exception of our longer-standing field-building activities, such as the ML4Good bootcamps.
That’s a lot of different types of activities! We may try to narrow down our focus in the future if that makes sense.
The next six months will be pretty intense, between the AI Action Summit, the drafting of the AI Act's Code of Practice, and the completion of ongoing projects such as the textbook.
We invite researchers, educators, and policymakers to collaborate with us as we navigate this important phase. Feel free to reach out at contact@securite-ia.fr.
Here is our Substack if you want to follow us. You can also meet us in person in January-February at the Turing Dialogues on AI Governance, our series of round tables and an official side event of the AI Action Summit.
Thanks to the numerous people who have advised us, contributed, volunteered, and beneficially influenced our work.
“Safety” usually translates to “Sécurité” in French, and “Security” translates to “Sûreté”.
and in the European Union?
OECD: Organisation for Economic Co-operation and Development.
CNIL: National Commission on Informatics and Liberty
ANSSI: National Cybersecurity Agency of France
LNE: National Laboratory of Metrology and Testing
For example, during ML4Good Brazil, we created or reactivated four different AI safety field-building groups in Latin America: Argentina, Colombia, AI Safety Brazil and ImpactRio, and ImpactUSP (University of São Paulo). Participants consistently rate the bootcamp above 9/10 when asked whether they would recommend it.