Ian Hogarth has just been announced as the Chair of the UK's AI Foundation Model Taskforce. He is the author of the FT article "We must slow down the race to God-like AI", and seems to take x-risks from AI seriously.

To quote his Twitter thread:

And to that end I put out a call to people across the world. If you are an AI specialist or safety researcher who wants to build out state capacity in AI safety and help shape the future of AI policy then get in touch:

We have £100m to spend on AI safety and the first global conference to prepare for. I want to hear from you and how you think you can help. The time is now and we need more people to step up and help.

The Google form to leave an expression of interest is here.

(I am in no way affiliated with Ian or the UK AI Foundation Model Taskforce.)


Thank you for taking the time to highlight this. I hope that some LessWrongers with suitable credentials will sign up and try to get a major government interested in x-risk.

I mean, I think the UK government is already interested in x-risk. The core question is whether they'll do things that help or hurt; I'm optimistic that the more LWers work there, the more likely they are to do helpful things.