From the BBC:

[Hawking] told the BBC: "The development of full artificial intelligence could spell the end of the human race."

"It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

There is, however, no mention of Friendly AI or similar principles.

In my opinion, what is particularly notable is the coverage this story is getting in the mainstream media: at the time of writing, it is the most-read and most-shared news story on the BBC website.

It was also on the BBC TV main evening news today, and BBC News 24.

Edit: more from them here:

Thank you for posting this additional story. That one is particularly good to see because it mentions Bostrom, and talks about Friendliness in AI (though not by that name).

Just spitballing, but I would guess that this type of coverage is a net benefit due to the level of exposure and subsequent curiosity generated by "holy crap what is this thing that could spell doom for us all?" That is, it seems like singularitarian ideas need as much exposure as possible (any press is good press) and are a long way away from worrying about anti-AI picketers. Am I off here?

I think you're correct. Ideas like AGI are mostly unknown by the general public and anything that can make someone curious about that cluster of ideas is probably a good thing.

What's the causal pathway by which coverage like this improves things? If we want technical expertise or research funding, it seems like there are more targeted channels. This could be optimal if we want to make some kind of political move though. What else?

Hawking is right: artificial intelligence really can spell the end of the human race.

There is perhaps no better person to alert the mainstream to the possibilities and dangers of AI. His comments have no doubt encouraged many people to look into this area, and some of them may be capable of helping create Friendly AI in the future. In my opinion, Stephen Hawking believed making these comments was for the greater good of society, and I tend to agree with him.

This story was picked up by the ABC in Australia, on radio, free-to-air TV and online.

All of these high-status scientists speaking out about AGI existential risk seldom mention MIRI or use its terminology. I guess MIRI is still seen as too low-status.

There has certainly been increased general media coverage lately, and MIRI was mentioned in the Financial Times recently.

All of these high-status scientists speaking out about AGI existential risk seldom mention MIRI or use its terminology.

Perhaps they do, but the journalists or their editors edit it out?

It also mentions Elon Musk:

In the longer term, the technology entrepreneur Elon Musk has warned that AI is "our biggest existential threat".

To be fair though, this is in the article too:

But others are less pessimistic.

"I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world problems will be realised," said Rollo Carpenter, creator of Cleverbot.

"We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it," he says.

But he is betting that AI is going to be a positive force.

For a glimpse at how "ordinary people" react to such claims, go be horrified at the comments to the same article at /r/futurology.

/r/Futurology is horrible in general, but gets even worse when talking about AGI.

I think getting to a Friendly AI is very hard. I trust his assessment, and I think we have to be very careful with the development of AI.