
On December 8, EU policymakers announced an agreement on the AI Act. This post aims to briefly explain the context and implications for the governance of global catastrophic risks from advanced AI. My portfolio on Open Philanthropy’s AI Governance and Policy Team includes EU matters (among other jurisdictions), but I am not an expert on EU policy or politics and could be getting some things in this post wrong, so please feel free to correct it or add more context or opinions in the comments!

If you have useful skills, networks, or other resources that you might like to direct toward an impactful implementation of the AI Act, you can indicate your interest in doing so via this short Google form.

Context

The AI Act was first introduced in April 2021, and for the last ~8 months, it has been in the “trilogue” stage. The EU Commission, which is roughly analogous to the executive branch (White House or 10 Downing Street), drafted the bill; then, the European Parliament (sort of like the U.S. House of Representatives, with seats assigned to each country by a population-based formula that favors smaller countries) and the Council of the EU (sort of like the pre-17th-Amendment U.S. Senate, with each country's government represented and most decisions made under a complicated qualified-majority voting system)[1] each submitted proposed revisions; then, representatives from each body negotiated to land on a final version (analogous to conference committees in the U.S. Congress).

In my understanding, AI policy folks who are worried about catastrophic risk were hoping that the Act would include regulations on all sufficiently capable GPAI (general-purpose AI) systems, with no exemptions for open-source models (at least for the most important regulations from a safety perspective), and ideally additional restrictions on “very capable foundation models” (those above a certain compute threshold), an idea floated by some negotiators in October. On substance, my sense is that the main hope was for the legislation to give the newly formed AI Office substantial leeway to require things like threat assessments/dangerous-capability evaluations and cybersecurity measures, with a lot of the details to be figured out later by that Office and by standard-setting bodies like CEN-CENELEC’s JTC-21.

GPAI regulations appeared in danger of being excluded after Mistral, Aleph Alpha, and the national governments of France, Germany, and Italy objected to what they perceived as regulatory overreach and threatened to derail the Act in November. There was also some reporting that the Act would totally exempt open-source models from regulation.

What’s in it?

Sabrina Küspert, an AI policy expert working at the EU Commission, summarized the results on some of these questions in a thread on X:

  • The agreement does indeed include regulations on “general-purpose AI,” or GPAI. 
  • There does appear to be a version of the “very capable foundation models” idea in the form of “GPAI models with systemic risks,” a designation based on capabilities and “reach” (which I think means how widely deployed they are). 
  • It looks like GPAI models are presumed to have these capabilities if they’re trained on more than 10^25 FLOP, one order of magnitude below the October 30 Biden executive order’s cutoff for reporting requirements (a threshold that would probably include GPT-4 and maybe Gemini, but no other current models as far as I know); see the sketch after this list.
  • Küspert also says “no exemptions,” which I interpret to mean “no exemptions to the systemic-risk rules for open-source systems.” Other reporting suggests there are wide exemptions for open-source models, but the requirements kick back in if the models pose systemic risks. However, Yann LeCun is celebrating based on this part of a Washington Post article: "The legislation ultimately included restrictions for foundation models but gave broad exemptions to “open-source models,” which are developed using code that’s freely available for developers to alter for their own products and tools. The move could benefit open-source AI companies in Europe that lobbied against the law, including France’s Mistral and Germany’s Aleph Alpha, as well as Meta, which released the open-source model LLaMA." So it’s currently unclear to me where the Act lands on this question, and I think a close review by someone with legal or deep EU policy expertise would help clarify this.
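
To make the order-of-magnitude comparison concrete, here’s a minimal sketch of the compute-based presumption as I understand it. Everything in it is my own illustrative assumption (the function names, the exact comparison, and the rough figure sometimes estimated for GPT-4’s training compute), and the Act’s actual designation also turns on capabilities and reach, not just compute:

```python
# Illustrative only: the compute thresholds discussed above, not the Act's or
# the executive order's actual legal tests.
EU_SYSTEMIC_RISK_FLOP = 1e25   # AI Act: presumption of "systemic risk"
US_EO_REPORTING_FLOP = 1e26    # Oct 30 executive order: reporting requirement

def presumed_systemic_risk(training_flop: float) -> bool:
    """Presume systemic-risk status from training compute alone (the real
    designation also considers capabilities and reach)."""
    return training_flop > EU_SYSTEMIC_RISK_FLOP

def triggers_us_reporting(training_flop: float) -> bool:
    return training_flop > US_EO_REPORTING_FLOP

# Public estimates put GPT-4's training run somewhere around 2e25 FLOP, which
# would clear the EU threshold but not the US reporting threshold.
example_flop = 2e25
print(presumed_systemic_risk(example_flop))  # True
print(triggers_us_reporting(example_flop))   # False
```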

The Commission’s blog post says: “For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation and adversarial testing. These new obligations will be operationalised through codes of practices developed by industry, the scientific community, civil society and other stakeholders together with the Commission.” (I’m guessing this means JTC-21 and similar, but if people with more European context can better read the tea leaves, let me know.)

Parliament’s announcement notes that GPAI systems and models will “have to adhere to transparency requirements” including “technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.” I think these transparency requirements are the main opportunity to develop strong requirements for evaluations.

Enforcement will be up to both national regulators and the new European AI Office, which, as the Commission post notes, will be “the first body globally that enforces binding rules on AI and is therefore expected to become an international reference point.” Companies that fail to comply with these rules face fines up to “35 million euro or 7% of global revenue,” whichever is higher. (Not sure whether this would mean 7% of e.g. Alphabet’s global revenue or DeepMind’s).
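
To illustrate why the choice of entity matters so much, here’s a rough sketch of the fine ceiling as reported; the revenue figures below are made-up round numbers, not actual financials for any company:

```python
def max_fine_eur(global_revenue_eur: float) -> float:
    """Reported ceiling on fines: the higher of EUR 35 million or 7% of global
    revenue (illustrative; the Act's final text and enforcement practice govern)."""
    return max(35_000_000, 0.07 * global_revenue_eur)

# Made-up round numbers to show how much the choice of entity matters:
parent_revenue_eur = 250e9      # a large parent company, ~EUR 250 billion
subsidiary_revenue_eur = 2e9    # an AI subsidiary, ~EUR 2 billion

print(f"{max_fine_eur(parent_revenue_eur):,.0f}")      # 17,500,000,000
print(f"{max_fine_eur(subsidiary_revenue_eur):,.0f}")  # 140,000,000
```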

The Act also does what some people have called the obvious thing of requiring that AI-generated content be labeled as such in a machine-readable format, with fines for noncompliance. (Seems easy to do for video/audio, much harder for text, but at least requiring that AI chatbots notify users that they’re AI systems rather than humans would be a first step.)
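
As a rough illustration of what “machine-readable” labeling could mean in practice (as far as I know the Act doesn’t prescribe a specific format, and this sketch isn’t based on any particular standard, though provenance-metadata efforts like C2PA exist):

```python
import json
from datetime import datetime, timezone

def make_ai_content_label(generator: str, content_type: str) -> str:
    """Build a minimal machine-readable provenance tag for AI-generated content.
    Purely illustrative; not a format required or defined by the AI Act."""
    label = {
        "ai_generated": True,
        "generator": generator,        # e.g. the model or service that produced it
        "content_type": content_type,  # "image", "audio", "video", "text"
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label)

# A tag like this could be embedded in image/audio/video file metadata; as noted
# above, doing anything comparable for plain text is much harder.
print(make_ai_content_label("example-image-model", "image"))
```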

This post focuses on the most relevant parts of the Act to frontier models and catastrophic risk, but most of the Act is focused on the application layer. It bans the use of AI for:

  • “biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent their free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).”

The Act will start being enforced at the end of “a transitional period,” which the NYT says will be 12-24 months. In the meantime, the Commission is launching the cleverly titled “AI Pact,” which seeks voluntary commitments to start implementing the Act’s requirements before the legal deadline. EU Commission president Ursula von der Leyen says “around 100 companies have already expressed their interest to join” the Pact.

How big of a deal is this?

A few takeaways for me so far:

  • Despite the frontier AI companies being American and British, the EU has what I’d describe as a moderate amount of leverage over AI companies by virtue of being a large market (~17% of global GDP in dollar terms). 
    • If the regulations they impose aren’t terribly costly and make enough sense, companies will likely comply with them. If they’re poorly executed or super costly, companies may fight them in court (as they did with the GDPR) or pull out of the EU market. So, European regulators will have a not-tiny budget of cost imposition which, spent wisely, could buy a decent amount of safety (though they of course can’t unilaterally pause AI development).
    • This effect will be especially important insofar as the EU’s regulations apply to training, rather than applications: AI companies might release EU-compliant versions of chatbots within the EU while deploying non-EU-compliant versions elsewhere, but due to the costs of training foundation models, they’re unlikely to train entirely separate models.
  • The EU and this GovAI paper (summarized here) are both very fond of the “de jure” Brussels Effect, where other jurisdictions imitate EU regulation. (They contrast this with the “de facto” Brussels Effect, which includes the kind of direct effects in the previous bullet.) So far, I haven’t seen many signs of the US or UK imitating the EU, but it’s possible that China’s approach will be informed by the EU. Other countries with less leverage over frontier AI might also be influenced, but this is less of a big deal.
  • There’s also an effect that Markus Anderljung pointed out to me, which is that even with no imitation by policymakers, regulators themselves might be influenced by the object-level outputs of the AI Office: if Europe rules that a particular AI system was not sufficiently evaluated or secured before release, some regulators might defer, as many countries’ pharma regulators apparently defer to the FDA.
  • The EU adopting pretty strong regulations even after industry and their allies in government were seriously activated is a good sign for the politics of AI regulation in similar polities. (This is less true to the extent that even very powerful/risky/expensive-to-train models are exempt if their weights get released.)
  • The multilateral EU regulating like this is also a step towards international agreements; a robust international regime needs to include the US, China, EU, and UK at bare minimum, and the EU might be important connective tissue between a mutually distrustful China and US/UK.

Making the AI Act effective for catastrophic risk reduction

The Act appears to stake out a high-level approach to Europe’s AI policy, but will very likely task the AI Office, standard-setting organizations (SSOs) like JTC-21, and EU member states with fleshing out a lot of detail and implementing the policies. Depending on how the standardization and implementation phases play out over the next few years, the Act could wind up strongly incentivizing AI developers to act more safely, or it could wind up insufficiently detailed, captured by industry, bogged down in legal challenges, or so onerous that AI companies withdraw from the EU market and ignore the law.

To achieve outcomes more like the former, people who would like to reduce global catastrophic risks from future AI systems could consider doing the following:

  • Joining the SSOs. These bodies tend to include a mix of industry lobbyists and civil society representatives, and the civil society folks have a huge range of priorities, so very few people at these bodies are focused on frontier model safety, and you could make a big difference by joining.
  • Working for the European AI Office or member state implementation bodies. By default, I think these offices (like most tech regulators) will have difficulty recruiting technically knowledgeable people. (The bar for “technically knowledgeable” in government tends to be pretty low; if you’re familiar with lots of the material on the AI Safety Fundamentals governance syllabus I think you’re in decent shape.) Having a few more such people in these offices could make them more informed about the risks and governance opportunities, both specifically regarding models with “systemic risks” and in general.
  • Policy research that aims to inform catastrophic-risk-focused people in these groups or other European institutions. Think tanks and research institutions tend to sit on a spectrum from “policy/strategy development,” where they write reports about what policymakers should be aiming for, to advocacy, where they mostly take ideas developed by others and talk a lot with people in governments to turn them into policy outcomes. The people I’ve spoken with who are currently working closely with the EU (in the Commission, SSOs, or think tanks on the latter side of that spectrum) said that additional “upstream” (i.e., on the former side of the spectrum) policy work like that done by e.g. IAPS and GovAI would be really useful.
  • Generally, I think AI policy and governance folks should invest some time in understanding what’s going on in the EU (though some people might have strong comparative advantage reasons not to), and relatedly should probably encourage catastrophic-risk-focused Europeans to try to do useful work in the EU rather than coming to the US. At the very least, in my view, the picture has changed in an EU-favoring direction in the last year (despite lots of progress in US AI policy), and this should prompt a re-evaluation of the conventional wisdom (in my understanding) that the US has enough leverage over AI development that policy careers in DC are more impactful even for Europeans.

And once again: if you have useful skills, networks, or other resources that you might like to direct toward an impactful implementation of the AI Act, you can indicate your interest in doing so via this short Google form.

  1. ^

    Thanks to the commenter Sherrinford for correcting me on these.


Thanks for this overview, Trevor. I expect it'll be helpful. I also agree with your recommendations for people to consider working at standard-setting organizations and other relevant EU offices.

One perspective that I see missing from this post is what I'll call the advocacy/comms/politics perspective. Some examples of this with the EU AI Act:

  • Foundation models were going to be included in the EU AI Act, until France and Germany (with lobbying pressure from Mistral and Aleph Alpha) changed their position.
  • This initiated a political/comms battle between those who wanted to exclude foundation models (led by France and Germany) and those who wanted to keep it in (led by Spain).
  • This political fight rallied lots of notable figures, including folks like Gary Marcus and Max Tegmark, to publicly and privately fight to keep foundation models in the act.
  • There were open letters, op-eds, and certainly many private attempts at advocacy.
  • There were attempts to influence public opinion, pieces that accused key lobbyists of lying, and a lot of discourse on Twitter.

It's difficult to know the impact of any given public comms campaign, but it seems quite plausible to me that many readers would have more marginal impact by focusing on advocacy/comms than focusing on research/policy development.

More broadly, I worry that many segments of the AI governance/policy community might be neglecting to think seriously about what ambitious comms/advocacy could look like in the space.

I'll note that I might be particularly primed to bring this up now that you work for Open Philanthropy. I think many folks (rightfully) critique Open Phil for being too wary of advocacy, campaigns, lobbying, and other policymaker-focused activities. I'm guessing that Open Phil has played an important role in shaping both the financial and cultural incentives that (in my view) lead to an overinvestment in research and an underinvestment in policy/advocacy/comms.

(I'll acknowledge these critiques are pretty high-level and I don't claim that this comment provides compelling evidence for them. Also, you only recently joined Open Phil, so I'm of course not trying to suggest that you created this culture, though I guess now that you work there you might have some opportunities to change it).

I'll now briefly try to do a Very Hard thing which is like "put myself in Trevor's shoes and ask what I actually want him to do." One concrete recommendation I have is something like "try to spend at least 5 minutes thinking about ways in which you or others around you might be embedded in a culture that has blind spots to some of the comms/advocacy stuff." Another is "make a list of people you read actively or talked to when writing this post. Then ask if there were any other people/orgs you could've reached out to, particularly those that might focus more on comms+advocacy". (Also, to be clear, you might do both of these things and conclude "yea, actually I think my approach was very solid and I just had Good Reasons for writing the post the way I did.")

I'll stop here since this comment is getting long, but I'd be happy to chat further about this stuff. Thanks again for writing the post and kudos to OP for any of the work they supported/will support that ends up increasing P(good EU AI Act goes through & gets implemented). 

Thanks for these thoughts! I agree that advocacy and communications is an important part of the story here, and I'm glad for you to have added some detail on that with your comment. I’m also sympathetic to the claim that serious thought about “ambitious comms/advocacy” is especially neglected within the community, though I think it’s far from clear that the effort that went into the policy research that identified these solutions or work on the ground in Brussels should have been shifted at the margin to the kinds of public communications you mention.

I also think Open Phil’s strategy is pretty bullish on supporting comms and advocacy work, but it has taken us a while to acquire the staff capacity to gain context on those opportunities and begin funding them, and perhaps there are specific opportunities that you're more excited about than we are. 

For what it’s worth, I didn’t seek significant outside input while writing this post and think that's fine (given the alternative of writing it quickly, posting it here, disclaiming my non-expertise, and getting additional perspectives and context from commenters like yourself). However, I have spoken with about a dozen people working on AI policy in Europe over the last couple months (including one of the people whose public comms efforts are linked in your comment) and would love to chat with more people with experience doing policy/politics/comms work in the EU.

We could definitely use more help thinking about this stuff, and I encourage readers who are interested in contributing to OP’s thinking on advocacy and comms to do any of the following:

  • Write up these critiques (we do read the forums!); 
  • Join our team (our latest hiring round specifically mentioned US policy advocacy as a specialization we'd be excited about, but people with advocacy/politics/comms backgrounds more generally could also be very useful, and while the round is now closed, we may still review general applications); and/or 
  • Introduce yourself via the form mentioned in this post.

I appreciate the comment, though I think there's a lack of specificity that makes it hard to figure out where we agree/disagree (or more generally what you believe).

If you want to engage further, here are some things I'd be excited to hear from you:

  • What are a few specific comms/advocacy opportunities you're excited about//have funded?
  • What are a few specific comms/advocacy opportunities you view as net negative//have actively decided not to fund?
  • What are a few examples of hypothetical comms/advocacy opportunities you've been excited about?
  • What do you think about, e.g., Max Tegmark/FLI, Andrea Miotti/Control AI, The Future Society, the Center for AI Policy, Holly Elmore, PauseAI, and other specific individuals or groups that are engaging in AI comms or advocacy? 

I think if you (and others at OP) are interested in receiving more critiques or overall feedback on your approach, one thing that would be helpful is writing up your current models/reasoning on comms/advocacy topics.

In the absence of this, people simply notice that OP doesn't seem to be funding some of the main existing examples of comms/advocacy efforts, but they don't really know why, and they don't really know what kinds of comms/advocacy efforts you'd be excited about.

tlevin

(An extra-heavy “personal capacity” disclaimer for the following opinions.) Yeah, I hear you that OP doesn’t have as much public writing about our thinking here as would be ideal for this purpose, though I think the increasingly adversarial environment we’re finding ourselves in limits how transparent we can be without undermining our partners’ work (as we’ve written about previously).

The set of comms/advocacy efforts that I’m personally excited about is definitely larger than the set of comms/advocacy efforts that I think OP should fund, since 1) that’s a higher bar, and 2) sometimes OP isn’t the right funder for a specific project. That being said:

  • So far, OP has funded AI policy advocacy efforts by the Institute for Progress and Sam Hammond. I personally don’t have a very detailed sense of how these efforts have been going, but the theory of impact for these was that both grantees have strong track records in communicating policy ideas to key audiences and a solid understanding of the technical and governance problems that policy needs to solve.
  • I’m excited about the EU efforts of FLI and The Future Society. In the EU context, it seems like these orgs were complementary, where FLI was willing to take steps (including the pause letter) that sparked public conversation and gave policymakers context that made TFS’s policy conversations more productive (despite raising some controversy). I have much less context on their US work, but from what I know, I respect the policymaker outreach and convening work that they do and think they are net-positive.
  • I think CAIP is doing good work so far, though they have less of a track record. I value their thinking about the effectiveness of different policy options, and they seem to be learning and improving quickly.
  • I don’t know as much about Andrea and Control AI, but my main current takes about them are that their anti-RSP advocacy should have been heavier on “RSPs are insufficient,” which I agree with, instead of “RSPs are counterproductive safety-washing,” which I think could have disincentivized companies from the very positive move of developing an RSP (as you and I discussed privately a while ago). MAGIC is an interesting and important proposal and worth further developing (though as with many clever acronyms I kind of wish it had been named differently).
  • I’m not sure what to think about Holly’s work and PauseAI. I think the open source protest where they gave out copies of a GovAI paper to Meta employees seemed good – that seems like the kind of thing that could start really productive thinking within Meta. Broadly building awareness of AI’s catastrophic potential seems really good, largely for the reasons Holly describes here. Specifically calling for a pause is complicated, both in terms of the goodness of the types of policies that could be called a pause and in terms of the politics (i.e., the public seems pretty on board, but it might backfire specifically with the experts that policymakers will likely defer to, but also it might inspire productive discussion around narrower regulatory proposals?). I think this cluster of activists can sometimes overstate or simplify their claims, which I worry about.

Some broader thoughts about what kinds of advocacy would be useful or not useful:

  • The most important thing, imo, is that whatever advocacy you do, you do it well. This sounds obvious, but importantly differs from “find the most important/neglected/tractable kind of advocacy, and then do that as well as you personally can do it.” For example, I’d be really excited about people who have spent a long time in Congress-type DC world doing advocacy that looks like meeting with staffers; I’d be excited about people who might be really good at writing trying to start a successful blog and social media presence; I’d be excited about people with a strong track record in animal advocacy campaigns applying similar techniques to AI policy. Basically I think comparative advantage is really important, especially in cases where the risk of backfire/poisoning the well is high.
  • In all of these cases, I think it’s very important to make sure your claims are not just literally accurate but also don’t have misleading implications and are clear about your level of confidence and the strength of the evidence. I’m very, very nervous about getting short-term victories by making bad arguments. Even Congress, not known for its epistemic and scientific rigor, has gotten concerned that AI safety arguments aren’t as rigorous as they need to be (even though I take issue with most of the specific examples they provide).
  • Relatedly, I think some of the most useful “advocacy” looks a lot like research: if an idea is currently only legible to people who live and breathe AI alignment, writing it up in a clear and rigorous way, such that academics, policymakers, and the public can interact with it, critique it, and/or become advocates for it themselves is very valuable.
  • This is obviously not a novel take, but I think other things equal advocacy should try not to make enemies. It’s really valuable that the issue remain somewhat bipartisan and that we avoid further alienating the AI fairness and bias communities and the mainstream ML community. Unfortunately “other things equal” won’t always hold, and sometimes these come with steep tradeoffs, but I’d be excited about efforts to build these bridges, especially by people who come from/have spent lots of time in the community to which they’re bridge-building.

Companies that fail to comply with these rules face fines up to “35 million euro or 7% of global revenue,” whichever is higher.

What about noncompanies?

It bans the use of AI for:

This is a list of six entries. What happens when someone thinks of a seventh?

A minor point regarding the EU's institutions:

  • The European Parliament does not have "population-proportional membership from each country", but: "the seats are distributed according to "degressive proportionality", i.e., the larger the state, the more citizens are represented per MEP. As a result, Maltese and Luxembourgish voters have roughly 10x more influence per voter than citizens of the six largest countries." (https://en.wikipedia.org/wiki/European_Parliament)
  • The Council of the EU does not have "one vote per country", but its rules usually prescribe a more complicated majority rule and sometimes unanimity.

Thank you! Classic American mistake on my part to round these institutions to their closest US analogies.