As of an hour ago, I had not yet heard of the Centre for the Study of Existential Risk.

Luke announced it to Less Wrong when the University of Cambridge announced it to the world, back in April:

CSER at Cambridge University joins the others.

Good people involved so far, but the expected output depends hugely on who they pick to run the thing.

CSER is scheduled to launch next year.



Here is a small selection of CSER press coverage from the last two days:

Google News: All 119 news sources...



Here's an excerpt from one fairly typical story, appearing in a tech tabloid today:


Cambridge boffins fear 'Pandora's Unboxing' and RISE of the MACHINES

Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species.

A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk (CSER) to analyse the ultimate risks to the future of mankind - including bio- and nanotech, extreme climate change, nuclear war and artificial intelligence.

Apart from the frequent portrayal of evil - or just misguidedly deadly - AI in science fiction, actual real scientists have also theorised that super-intelligent machines could be a danger to the human race.

Jaan Tallinn, the former software engineer who was one of the founders of Skype, has campaigned for serious discussion of the ethical and safety aspects of artificial general intelligence (AGI).

Tallinn has said that he sometimes feels he is more likely to die from an AI accident than from cancer or heart disease, CSER co-founder and philosopher Huw Price said.

The source for these stories appears to be a press release from the University of Cambridge:

Humanity’s last invention and our uncertain future

In 1965, Irving John ‘Jack’ Good sat down and wrote a paper for New Scientist called Speculations concerning the first ultra-intelligent machine. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built. [...] 

Four quick observations:

1: That's a lot of Terminator II photos.
2: FHI at Oxford and the Singularity Institute do not often get this kind of attention.
3: CSER doesn't appear to have published anything yet.
4: The number of people who have heard the term "existential risk" must have doubled a few times today.



Let me introduce myself: I'm Sean and I work as project manager at FHI (finally got around to registering!). In posts here I won't be speaking on behalf of FHI unless I explicitly state so (although, like Stuart, I imagine I often will be). I'm not involved officially with CSER, but I'm in communication with them and hope to be keeping up to date with them over the coming months.

A few comments on your observations:

2) CSER have done a deliberate and well-orchestrated "media splash" campaign over the last week, but I believe they're finished with this now. They've got some big names involved and a good support structure in place in Cambridge, which helps.

3) My understanding is that CSER hasn't published anything yet because they don't exist yet in a practical sense - they've been founded but nobody's employed, and they're still gathering seed funding.

4) The Sunday Times article's a bit unfortunate and the general feeling at FHI is that we're not too impressed by the journalist's work, but please note that the more "controversial" statements are the journalist's own thoughts (it's not clear in all places if you skim the article like I did at first). CSER has some good people behind it, and at the time of writing the FHI plans to support it and collaborate with it where possible - we think it's a very positive development in the field of Xrisk. Even the term getting out there is a positive!

Welcome, and thanks for the comments.

Even the term getting out there is a positive!


If journalism demands that you stick to Hollywood references when communicating a concept, it wouldn't be so bad if journalists managed to understand and convey the distinction between:

  • The wholly implausible, worse than useless Terminator humanoid hunter-killer robot scenario.
  • The not completely far-fetched Skynet launches every nuke, humanity dies scenario.

I think it works as a hierarchy of increasingly complex models. Readers will stop at whichever rung they are comfortable with depending on their curiosity and background.

My real-life conversations on X-risk tend to go: Specialized AI → General AI → Friendly AI.

News stories in post: 16
Number with a picture from the Terminator movie series: 8/16
Number referencing Terminator in the text (some with text had no picture, and vice versa): 11/16

Popular but not as popular: HAL references.

News stories with no Terminator picture and no textual references to HAL or Arnold Schwarzenegger: 1 / 16, the New Scientist.

To be fair, the Guardian story references Terminator only in the headline. The body text is written by Lord Martin Rees and is a short but clear description of X-risk without any sci-fi references. It also focuses more on other X-risks; perhaps a difference of opinion amongst the founders?


("Lord Martin Rees is a British cosmologist and astrophysicist. He has been Astronomer Royal since 1995 and Master of Trinity College, Cambridge since 2004. He was President of the Royal Society between 2005 and 2010". For anyone like me who didn't know.)

Interesting; there is now a member of a national legislature who is publicly concerned about existential risk. I wonder if he's planning to try to use his political power to reduce x-risk. My guess: probably not. He appears to be rather a lot more interested in science than in politics, and I'm not sure to what extent the average member of the House of Lords even has political power.

Tallinn and Price are very concerned with AI-related Xrisk. Martin Rees currently considers biological risks his no. 1 concern (which is not to say he's unconcerned by AI); he's famously offered bets on a major (~1 million deaths) bio-related catastrophe occurring in the coming years.


NPR's Morning Edition had about 30 seconds on this topic today. They also included a voice clip from the Terminator: "Hasta la vista, baby."

I remember a post by Hanson (can't seem to find the exact url at the moment), where he said that academic big names are "risk averse," but if a long shot topic becomes hot/fashionable, the big names simply move in on the innovators' turf, and take over the topic.

If this results in existential risk reduction, let me be the first to raise a glass to xrisk's "fashionability".


This sounds like it reduces down to "I was into X-risk before it was cool."

There were some serious errors in the coverage of this story in The Sunday Times (UK).

Yudkowsky seemed to me simplistic in his understanding of moral norms. “You would not kill a baby,” he said to me, implying that was one norm that could easily be programmed into a machine.
“Some people do,” I pointed out, but he didn’t see the full significance. SS officers killed babies routinely because of an adjustment in the society from which they sprang in the form of Nazism. Machines would be much more radically adjusted away from human social norms, however we programmed them.

Wow. This particular mistake seems unlikely and even difficult to make in good faith, as opposed to, say, through outright dishonesty.

Update: I told Appleyard of his mistake, and he simply denied that his article has made a mistake on this matter.

Never mind, it seems they don't even try to be honest.

An article at CAM, the Cambridge alumni magazine. (H/T my wife, who gets it in hardcopy).

Nothing too new, but it is good to see the basic AI x-risk concepts laid out with a minimum of snarkiness in a publication aimed at a closed, elite audience. I think that more reasonable ideas about AI x-risk are gaining social status.