EU AI Act passed Plenary vote, and X-risk was a main topic
(mildly rewritten version of my EA Forum post)

The EU AI Act was originally proposed in 2020 as a very "EU regulates stuff first" kind of legislation, aiming to make sure EU values are upheld (fairness, transparency, democracy, etc.). Several revisions (and some lobbying) later, GPAI (general-purpose AI) and foundation-model language was added, and it started looking a little more X-risk friendly. After some recent political uncertainty, it passed with a strong majority at the EU Plenary meeting.

I found it fascinating to watch the live session (from June 13th; the vote was on the 14th), where the Act was discussed by various EU parties. A few things that stood out to me:

* I was surprised that many EU country representatives mentioned the Open Letters and existential risk as a real concern, even though the EU AI Act was not originally intended to address it (though it now has GPAI/foundation-model bits added). Transparency and fairness took a back seat, to some extent.
* Real-time biometric monitoring was a big debate topic: whether or not to give law enforcement an exemption for national security. Currently it looks like it will not be allowed, other than post-incident with special approval. This may be a useful lever to keep in mind for policy work.

Others who watched the stream, feel free to mention insights in the comments.

> Linked here (relevant timestamp 12:39 - 14:33)

With the recent appointment of Ian Hogarth to the UK Foundation Model Taskforce, and US talks of regulation getting stronger, I think we are in for interesting times. But it also seems like AI X-risk is a lot more mainstream, which I did not expect to be able to say.
Great post! I don't have much concrete to add; I haven't kept up that much with the policy discourse in the past few months. Personally, I do feel like I became a bit complacent and conveniently forgot some of the warning signs that lit up back when o3 (?) got fairly scary bio-uplift results.
I guess the question now is what we do. The EU could in theory ban these models or request additional mitigations, but I'm not sure that would actually happen, as the CoP (despite being pretty good!) doesn't quite have enough teeth to do this cleanly.
Curious for ideas here - happy to relay things to my EU policy connections/EU AIO if anyone has concrete suggestions.