TL;DR: A Distributed Approach to European AI Safety Coordination
Disclaimer: I wrote this post a couple of months ago and didn't get to publish it then, so relative time references should be read in that context. Although some of my opinions have slightly changed since writing it, I think there is still a lot of good stuff in here, so I am putting it out now.
Over the past months, I have spent some time reflecting on the state of the AI safety ecosystem in Europe. This has led me to develop a field-building strategy to improve collaboration on AI safety in Europe, including a path to increase public awareness of risks from AGI. In this post, I share my insights and lay out a distributed approach towards European AI safety coordination.
I also plan to execute these ideas from Zürich over the course of this year and am optimistic about the project. However, there is clearly uncertainty about how exactly individual parts of this strategy will work out.
Part of the strategy presented in this post builds on what Severin T. Seehrich calls "Information Infrastructure" in his post on AGI safety field building projects.
I present a strategy for establishing a strong international network of AI safety field-building organizations in Europe. While AI safety field-building organizations may well be EA-associated, I explicitly do not mean a field consisting only of EA orgs.
The strategy decomposes into three parts:
1. Local groups,
2. National organizations,
3. An international network.
For national organizations (2.), I present case studies of different paths. These include:
1. Founding a government-funded institute (e.g., the UK AISI),
2. Starting a national organization directly with a few volunteers (e.g., AI Safety Bulgaria),
3. Growing a local group into a national organization (e.g., EA Switzerland).
I particularly highlight the third approach, which turns the decomposition into the three levels above (local to international) into a scaling exercise: you start local, scale to national, and then to international networks. Ideally, this path is walked while keeping the final goal of international coordination in mind and collaborating with other initiatives early on.
Let's dive into more detail on why I think the ecosystem needs this kind of international network, and on the pros and cons of the different elements of my strategy for getting there.
Since the development of GPT-3 in 2020, the topics of AI and AI safety have rapidly gained popularity. Many new labs and organizations working on alignment research or on building stronger policies and ecosystems have been founded since then. However, outside the two main hubs around Berkeley and London, the ecosystem of people focused on AI safety research remains relatively scattered. When I say relatively scattered, I mean relative in two ways:
I would like to see for AI safety what the Centre for Effective Altruism (CEA) is for EA, or what the Good Food Institute (GFI) is for alternative proteins. These two organizations were founded in 2011 and 2016, respectively, while most major AI safety organizations were founded around 2020 or later. That head start could well explain why those fields are better connected. Nonetheless, AI safety seems no less important in EA conversation cycles than alternative proteins. I believe a central node of coordination within the field could strengthen collaboration and ease access to the field for people outside the major hubs.
As of now, the growing community focused on AGI risks has found another infrastructure to coordinate its endeavours (in addition to LessWrong, of course): a lot of the discussion on AI is currently happening at EA groups and conferences. This has both advantages and disadvantages. On the one hand, this dual use of existing infrastructure seems to enable rapid development of the AI safety field, with conferences, offices, and networks already in place to facilitate discussion, and it enables strong coordination between EA and AI safety. On the other hand, an EAG conference cannot be advertised to people outside the community as an AI safety conference the way a dedicated AI safety conference could. Additionally, when EA groups are swamped with AI safety topics, their cause neutrality can appear lost. One could argue that AI safety is simply so much more important than any other cause right now that promoting only it is just cause neutrality in disguise. But I don't find that argument convincing, and people with a better personal fit for other causes may be pushed out of the community over the long run. In particular, this argument ignores the fact that when it comes to existential threats, we have to avoid not just the most likely one, but all of them. In my opinion, it would be good if people distributed their attention across these different risks roughly in proportion to how likely they consider each one.
Long story short, I imagine a stronger separation of the EA and AI safety communities to be beneficial for several reasons:
- Dedicated AI safety conferences and events could be advertised to people outside the EA community.
- EA groups would retain their cause neutrality, making it less likely that people with a better fit for other causes get pushed out.
- Attention could be spread across different existential risks more in proportion to their likelihood.
I also don't see any big disadvantages to that separation yet. I expect both communities would still naturally be very well connected: EA could redirect people to the AI safety community as a cause area after they have learned the EA basics, and people who come in through AI safety groups could be redirected to EA to learn about more fundamental principles of doing good. Perhaps the only disadvantage is that more infrastructure costs more money. In the end, the bottleneck seems to be growing this infrastructure, and doing so quickly. I will dive into my perspective on how to do that in the next section.
Well! Let's imagine we want to build a strong, united AI safety ecosystem. In order not to make it too complicated, let's focus on Europe for now. There are several reasons why I chose Europe for this purpose. For one, I have spent most of my life living in Europe and know the ecosystem there much better than that of any other part of the world. Beyond that, Europe has a much more diverse political ecosystem than the US or China, has strong EA movements compared to most other parts of the world, and has many good research institutes. The EU AI Act also promises a decent political environment for AI safety. The other part of the world where the AI safety movement and EA are strongest, namely the US, already has a national safety institute and several hubs (if you count Berkeley and Washington) spread across the country. In Europe, however, the only significant hub is around London in the UK, and the UK has just made travelling and working there a lot harder for the rest of Europe. So how can collaboration on AI safety in continental Europe be strengthened?
I present a distributed approach, tailored for fast and stable expansion of the infrastructure. To that end, I group the different initiatives into three levels of locality:
1. Local groups (centered around a single city),
2. National organizations,
3. An international network.
The degree of coordination between these levels can vary strongly from country to country, but I will go into more detail on that in the next subsections.
It all starts with having stronger local groups. My argument is that humans are social creatures, and to keep them engaged with a field over longer time periods, you need interactions between individuals. Talking to others is often easiest when you can meet them in person and share a language. Local groups centered around one city are the best way to facilitate regular in-person events at that location, and members benefit from continuous easy access (short travel times) to those events if they live nearby.
Local groups will most likely be run by volunteers, perhaps students or people drawn from local EA groups. This makes widespread establishment easy, but local groups can often benefit from the support of a more structured national organization. Moreover, after engaging with a local group for a while, members might get eager to contribute to the field of AI safety themselves. This is exactly what we want. However, local groups alone might lack the resources and network to connect their members to the people who could boost their careers, and who might in fact be looking for motivated individuals. Solving that issue requires better coordination.
The designs of national organizations can vary a lot, and there are different paths to grow them. In the following, I provide case studies to demonstrate different trajectories.
The UK AI Security Institute (AISI), formerly the AI Safety Institute, was founded through a significant investment of £100m by the UK government[1]. This is one example of how national safety organizations can grow very quickly through substantial government funding. It comes with the caveat of being dependent on politics, yet, albeit sometimes through less direct mechanisms, that is basically true for any initiative. Philanthropic funding likewise creates asymmetric ties to the funders.
For anyone well-connected with their national government or the relevant department, securing government funding for a serious AI safety organization can be a high-impact way to strengthen the AI safety infrastructure. Given the high levels of bureaucracy in most European democracies, however, I expect this to take quite some time. So, unless one is already some way into that process, it might not be the right path, depending on AI timelines.
To grow a national AI safety organization, you don't always need a starting budget of £100m. An organization can also be started by volunteers who set up a website and social media channels and begin by organizing a few remote courses. One example is AI Safety Bulgaria, which was started in 2024 by a few volunteers organizing courses at the national level. It is now (as of mid-2025) starting to strengthen its in-person network in Sofia, after first establishing the national organization. Their chairman, Aleksander Angelov, whom I recently met at ML4Good Italy, shared that they are now trying to expand from there.
This approach is closest to what Severin T. Seehrich calls starting an umbrella AGI safety non-profit organization. He recommends building MVPs and reducing administrative and infrastructure overhead as much as possible. This is generally good advice, and the approach presented here of breaking the big problem into smaller ones is in line with the idea of building MVPs. However, I believe that to create an international network, national organizations have to expand beyond the MVP once they are established.
Another way to connect people in the AI safety field is the role Severin T. Seehrich calls an AGI Safety Coordinator: someone who knows what everyone is doing in AGI safety, and who collects, organizes, and publishes resources. While that description does not specify whether this happens on a national or international level, I think it would be useful to have people with local connections and expertise map their ecosystem at the national level and eventually combine all those maps into a bigger overview. Creating such resources is beyond the scope of this blog post. Nonetheless, motivated individuals could start with something like that for their country, build a track record of connecting people, and expand from there.
Growing a local group into a national organization is perhaps the path that best fits the distributed approach of this post. The idea is simple: break down the final goal into relevant substeps and start with the smallest one. This also serves as an immediate test of the concept. If you run a local group but nobody is interested in it, you either chose the wrong city, the idea is bad, or you are doing something wrong. In either of the latter two cases, you probably don't have a great foundation for running something even bigger, like a national organization. There are, of course, edge cases: for example, there might be just enough interested people at the national level without significant interest in any individual city. And if the choice of city is the issue, that could be a good indication to headquarter the national organization in another city. In any case, the local group is a relatively cheap test of the general idea; if it works out well, one can expand to the national level from there.
There are many examples of national orgs that were created from local groups. One of them is Effective Altruism Switzerland (EACH), which originally grew out of EA Geneva and now has office space in Zürich.
Note that this list of strategies for establishing national organizations is incomplete. There are other very successful organizations, like EffiSciences, which has branched out into CeSIA, arguably a very strong nationwide AI safety organization in France. Their development does not neatly fit any of the above categories, but can be described as somewhere in between the latter two: a group of students from the four ENS universities came together and immediately started operating at the national level, but with a strong focus on the locations of the four universities. (The ENS are a particular type of higher-education institution in France.)
Independent of how these organizations came to be, I would say national organizations are the key element of the field-building chain. They are not only the middle level that connects the other two, but can serve many purposes. Every country in Europe has its own unique political and economic landscape, and the national organization could be the central point of contact and coordination for anyone who wants to connect politics with research, lab with lab, company with university research group, and more. Once they succeed at establishing a national network, they could also acquire far more resources and professionalism than I would expect of local groups. They could therefore also be best placed to facilitate outreach in the national language or to organize introductory courses effectively. An exception to that last point are hubs, where local groups or platforms can be as influential as national organizations. In fact, I would argue that strong hubs are probably a very good starting point for a national organization.
One last point to consider is what happens if there are already several local groups, but no national organization. I'd say this is actually great! These groups only have to coordinate with each other, and you already have a prototype for a national organization.
After this detailed discussion of national organizations, we now get to our actual goal: creating strong international coordination. The goals of an organization acting at this level would include coordinating efforts between different national organizations and local groups, gathering all the relevant resources in one central place, potentially distributing funding, and, most importantly, growing the field where it is most needed.
Once we have strong national organizations in many major European countries, I honestly expect them to coordinate with each other by default. People who are established enough in the field to run a national organization eventually meet each other at conferences and can start exchanging resources there. There are also already platforms, such as aisafety.com, that collect relevant resources; what the field is missing are good mechanisms for spreading resources down to the local level and for mapping local ecosystems. Strong national organizations could handle that and would be a good point of contact for anyone looking for people in a specific region of Europe.
The question, then, is what happens if there is no national organization in an area of interest. This is where the international network comes in, and distributing funding and growing the field go hand in hand in answering it. I believe part of the reason CEA works is that it can distribute funding, which allows it to seed groups where they are most effective. Also relevant, of course, are its expertise in supporting smaller groups and its visibility to individuals looking for help in growing their local field. I believe the AI safety field could strongly benefit from a similar dynamic.
GFI's Alt Protein Project is yet another example of very successful field building, in this case with a rather low budget of $1,000 in basic funding per local group per year. Another notable thing is that GFI also seeds local groups in countries without national alternative protein organizations. This could boost visibility at lower effort and could be particularly efficient for smaller countries. I wrote earlier that I think national organizations are key; I still believe that, but I would like to see local groups even in countries where there is no national organization.
I have now discussed the role of the international organization in much detail. But how do I think we can create such a thing? I guess the preceding sections already gave it away: I think international networks are best grown from strong national organizations after those have started collaborating with each other. Perhaps the international network could also be branched out from several national networks at once, creating a new institution. Given how funding mechanisms work, I imagine it would be easier if the international network initially ran under the name of one individual national organization; that way, the project profits from that organization's credibility. A grant application signed by several organizations could, however, also work.
I believe this distributed approach of starting with smaller projects and scaling up step by step is a promising path to an international AI safety organization. There are other approaches, like that of the European Network for AI Safety (ENAIS), which is trying to connect the field. While they are doing good work and I love that this exists, what I imagine as an international network is something much bigger. The most likely reason ENAIS has not become that something much bigger is that it is not as well funded as I imagine the international network would be, and that national organizations aren't yet strong enough.
I therefore encourage everyone reading this post who is looking for a project to start small in building the AI safety field, but dream big!
[1] https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute