(mildly rewritten version of my EA forum post)

The EU AI Act was originally proposed in 2020 as a very "EU regulates stuff first" kind of legislation, trying to make sure EU values are upheld (fairness, transparency, democracy, etc). Several revisions (and some lobbying) later, GPAI (general purpose AI) and foundation model language was added, and it started looking a little more X-risk friendly. 

After some recent political uncertainty, it passed with a strong majority at the EU Plenary meeting. 

I found it fascinating to watch the live session (from June 13th; the vote was on the 14th), where the Act was discussed by various EU parties. A few things that stood out to me:

  • I was surprised that many EU country representatives mentioned the Open Letters and Existential Risk as a real concern, even though the EU AI Act was not originally intended to address it (though it now has GPAI/foundation model bits added). Transparency and Fairness took a back seat, to some extent. 
  • Real-time biometric monitoring was a big debate topic - whether or not to grant law enforcement an exemption for national security purposes. Currently it looks like it will not be allowed, other than post-incident with special approval. This may be a useful lever to keep in mind for policy work. 

Others who watched the stream, feel free to mention insights in the comments. 

Linked here (relevant timestamp 12:39 - 14:33)

With the recent appointment of Ian Hogarth to the UK Foundation Model taskforce, and US talks of regulation getting stronger, I think we are in for interesting times. But it also seems like AI X-risk is a lot more mainstream, which I did not expect to be able to say. 
