Update on establishment of Cambridge’s Centre for Study of Existential Risk
Cambridge’s high-profile launch of the Centre for Study of Existential Risk last November received a lot of attention on LessWrong, and a number of people have been enquiring as to what’s happened since. This post gives a brief explanation and update of what’s been going on.

Motivated by a common concern over human activity-related risks to humanity, Lord Martin Rees, Professor Huw Price, and Jaan Tallinn founded the Centre for Study of Existential Risk last year. However, this announcement was made before the establishment of a physical research centre or the securing of long-term funding. The last nine months have been focused on turning an important idea into a reality.

Following the announcement in November, Professor Price contacted us at the Future of Humanity Institute regarding the possibility of collaboration on joint academic funding opportunities, the aim being both to raise funds for CSER’s research programmes and to support joint work by FHI and CSER researchers on anthropogenic existential risk. We submitted our first grant application in January to the European Research Council – an ambitious project to create “A New Science of Existential Risk” that, if successful, would provide enough funding for CSER’s first research programme – a sizeable programme that will run for five years. We’ve been successful in the first and second rounds, and we will hear the final round decision at the end of the year.

The application was also an opportunity for us to bring some additional leading academics onto the project – Sir Partha Dasgupta, Professor of Economics at Cambridge and an expert in social choice theory, sustainability and intergenerational ethics, is a co-PI (along with Huw Price, Martin Rees and Nick Bostrom). In addition, a number of prominent academics concerned about technology-related risk – including Stephen Hawking, David Spiegelhalter, George Church and David Chalmers – have joined our advisory board.

The FHI regards establishment of CSER
The text of the plan is here:
http://hk.ocmfa.gov.cn/eng/xjpzxzywshd/202507/t20250729_11679232.htm
It features a section on AI safety:
"Advancing the governance of AI safety. We need to conduct timely risk assessment of AI and propose targeted prevention and response measures to establish a widely recognized safety governance framework. We need to explore categorized and tiered management approaches, build a risk testing and evaluation system for AI, and promote the sharing of information as well as the development of emergency response of AI safety risks and threats. We need to improve data security and personal information protection standards, and strengthen the management of data security in processes such as the collection of training data and model generation. We need to... (read more)