http://www.theguardian.com/technology/2014/aug/30/saviours-universe-four-unlikely-men-save-world

The article is titled "The scientific A-Team saving the world from killer viruses, rogue AI and the paperclip apocalypse", and features interviews with Martin Rees, Huw Price, Jaan Tallinn and Partha Dasgupta. The author takes a rather positive tone towards the endeavours of CSER and MIRI, and mentions x-risks other than AI (a bioengineered pandemic, human interference with the climate via geoengineering, distributed manufacturing).

I find it interesting that the inferential distance from the layman to the concept of a paperclipping AI is much reduced by talking about paperclipping America rather than the entire universe, though the author admits to still struggling with the concept. Unusually for a journalist who starts off unfamiliar with these concepts, he writes in a tone that suggests he takes the ideas seriously, without the sort of "this is very far-fetched and thus I will not lower myself to seriously considering it" countersignalling usually seen in x-risk coverage. There is currently the usual degree of incredulity in the comments section, though.

For those unfamiliar with The Guardian: it is a British left-leaning newspaper with a heavy focus on social justice and other left-wing political issues.


Hi,

I'd be interested in LW's thoughts on this. I was quite involved in the piece, though I suggested to the journalist that it would be more appropriate to focus on the high-profile names involved. We've been lucky at FHI/Cambridge with a series of very sophisticated, tech-savvy journalists with whom the inferential distance has been very low (see e.g. Ross Andersen's Aeon/Atlantic pieces); this wasn't the case here, and although the journalist was conscientious and requested reading material beforehand, I found communicating these concepts more difficult than expected.

In my view the interview material turned out better than expected, given the clear inferential gap. I am less happy with the 'catastrophic scenarios' I was asked for. The text I sent (which I circulated to FHI/CSER members) was distinctly less sensational and contained a lot more qualifiers. E.g. for geoengineering I had: "Scientific consensus is against adopting it without in-depth study and broader societal involvement in the decisions made, but there may be very strong pressure to adopt once the impacts of climate change become more severe." And my pathogen-modification example did not go nearly as far. While qualifiers can seem like unnecessary padding to editors, removing them can really change the tone of a piece. Similarly, in a pre-emptive line to ward off sensationalism, I included: "I hope you can make it clear these are 'worst case possibilities that currently appear worthy of study' rather than 'high-likelihood events'. Each of these may only have e.g. a 1% likelihood of occurring. But in the same way an aeroplane passenger shouldn't accept a 1% possibility of a crash, society should not accept a 1% possibility of catastrophe. I see our role as (like airline safety analysts) figuring out which risks are plausible, and for those, working to reduce the 1% to 0.00001%." This was sort-of addressed, but not really.
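To make the orders of magnitude in that analogy concrete, here's a minimal back-of-the-envelope sketch. The 1% and 0.00001% figures come from the text above; the stake is a purely illustrative assumption on my part (roughly the present world population), not a number from the article:

```python
# Back-of-the-envelope expected-loss comparison for the "1% -> 0.00001%" point.
# The probabilities come from the text above; the stake is an illustrative
# assumption, not a figure from the article.

p_before = 0.01        # 1% likelihood of a given catastrophe
p_after = 0.00001      # target likelihood after risk-reduction work

stake = 7_000_000_000  # assumed stake: roughly everyone alive today

print(f"Expected loss before: {p_before * stake:,.0f} lives")  # 70,000,000
print(f"Expected loss after:  {p_after * stake:,.0f} lives")   # 70,000
print(f"Reduction factor:     {p_before / p_after:,.0f}x")     # 1,000x
```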

That said, the basic premises - that a virus could be modified for greater infectivity and released by a malicious actor, 'termination risk' for atmospheric aerosol geoengineering, future capabilities of additive manufacturing for more dangerous weapons - are intact.

Re: the 'paperclip maximiser'. I mentioned this briefly in conversation, after we'd struggled for a while with inferential gaps on AI (and why we couldn't just outsmart something smarter than us, etc.), presenting it as a 'toy example' used in research papers on AI goals, meant to encapsulate the idea that seemingly harmless or trivial but poorly-thought-through goals can result in unforeseen and catastrophic consequences when paired with the kind of advanced resource utilisation and problem-solving ability a future AI might have. I didn't expect it to be taken as a literal doomsday concern - it wasn't in the text I sent - and to my mind it looks very silly in there, possibly deliberately so. However, I feel that Huw and Jaan's explanations were very good and quite well presented.

We've been considering whether we should limit ourselves to media opportunities where we can write the material ourselves, or have the opportunity to view and edit the final material before publication. MIRI has significantly cut back on its media engagement, and this seems on the whole sensible (FHI is still doing a lot; some of it turns out very well, some not so well).

Lessons to take away: 1) This stuff can be really, really hard. 2) Getting used to very sophisticated, science/tech-savvy journalists and academics can leave you unprepared. 3) Things that are very reasonable with qualifiers can become very unreasonable if you remove the qualifiers - and editors often just see qualifiers as unnecessary verbosity (or want the piece to make stronger, more sensational claims).

Right now, I'm leaning fairly strongly towards 'ignore and let it quietly slip away' (the Guardian has a relatively small UK readership, so how much we 'push' this will probably make a difference), but I'd be interested in whether LW sees this as net positive or net negative on balance for public perception of existential risk. However, I'm open to updating. I asked a couple of friends unfamiliar with the area what their takeaway impression was, and it was more positive than I'd anticipated.

I'd call it a net positive. Along the axis from "accept all interviews, wind up in some spectacularly abysmal pieces of journalism" to "only allow journalism that you've viewed and edited" - the quantity-vs-quality tradeoff - I suspect the best place to be is the one where writers who already know what they're going to say are filtered out, and those who make an actual effort to understand and summarize your position (even if somewhat incompetently) are engaged.

I don't think the saying "any publicity is good publicity" is true, but "shoddy publicity pointing in the right direction" might be.

I wonder how feasible it is to figure out journalist quality by reading past articles... Maybe ask people who have been interviewed by the person in the past how it went?

Thanks. Re: your last line, quite a bit of this is possible: we've been building up a list of "safe hands" journalists at FHI for the last couple of years, and as a result, our publicity has improved while the variance in quality has decreased.

In this instance, we (CSER) were positively disposed towards the newspaper as a fairly progressive one with which some of our people had had a good set of previous interactions. I was further encouraged by the journalist's request for background reading material. I think there was just a bit of a mismatch: they sent a guy who was anti-technology in a "social media is destroying good society values" sort of way to talk to people who are concerned about catastrophic risks from technology (I can see how this might have made sense to an editor).

I've read a fair number of x-risk related news pieces, and this was by far the most positive and non-sensationalist coverage that I've seen by someone who was neither a scientist nor involved with x-risk organisations.

The previous two articles I'd seen on the topic were about 30% Terminator references. This article, while not necessarily a 100% accurate account, at least takes the topic seriously.

Thanks, that's reassuring. I've mainly been concerned about a) just how silly the paperclip thing looks in the context in which it's been put, and b) the tone, a bit - as one commenter on the article put it:

"I find the light tone of this piece - "Ha ha, those professors!" to be said with an amused shake of the head - most offensive. Mock all you like, but some of these dangers are real. I'm sure you'll be the first to squeal for the scientists to do something if one them came true. Price asks whether I have heard of the philosophical conundrum the Prisoner's Dilemma. I have not. Words fail me. Just what do you know then son? Once again, the Guardian sends a boy to do a man's job."

I wouldn't worry too much about the comments. Even Guardian readers don't hold the online commentariat of the Guardian in very high esteem, and it's reader opinion, not commenter opinion, that matters the most.

It seems like the most highly upvoted comments are pretty sane anyway!

I've worked in PR for the better part of 10 years, including sticky things like politics where context is everything, and you're right: editors love to pull out something that "looks" very dramatic to get attention, and the Guardian is notorious for this. However, I think the best thing to do is to fight fire with fire. Whatever media you do, you should respond to the serious pieces with blog posts of your own; clarifying things and making your side of the story known is just as important. I'm also a believer that you shouldn't leave your message in the hands of other people. I would then follow these stories up with strong videos/blog posts of your own that people can interact with on a variety of platforms. That would allow you to get your message out in your own way, so that when you do take that interview there is plenty to talk about. It's all about controlling the message.

I definitely thought it was one of the best pieces of x-risk journalism I've seen, insofar as it spreads good information in a considered tone of voice.

This piece is definitely good publicity rather than bad. It takes the ideas seriously despite deliberately emphasizing the extent to which the author is confused by them, and it is written in a tone appropriate for a mass audience.

You know how television shows will have a character who starts as "the new guy", to whom everything needs to be explained, serving as an audience insert? This author made himself the new guy.

I agree that there are better articles to direct people towards. I doubt this piece does much damage just by existing, though; it seems more likely that it's either net neutral or slightly net positive.

Also, from the article,

After a two-year gestation, the CSER gets properly up and running next month.

Congratulations.

Thank you! We appear to have been successful with our first foundation grant; however, the official award T&C letter comes next week, so we'll know then what we can do with it, and be able to say something more definitive. We're currently putting the final touches on our next grant application (requesting considerably more funds).

I think the sentence in question refers to a meeting on existential/extreme technological risk we will be holding in Berlin, in collaboration with the German Government, on the 19th of September. We hope to use this as an opportunity to forge collaborations in relevant areas of risk with European research networks and, with a bit of luck, to put existential risk mitigation a little higher on the European policy agenda. We'll be releasing a joint press release with the German Foreign Office as soon as we've got this grant out of the way!

Is the paper mentioned and apparently quoted in the Guardian article, which Dasgupta described as "somewhat informal," available anywhere?

Almost certainly; unfortunately that communication didn't involve me, so I don't know which paper it is! But I'll ask him when I next see him, and send you a link. http://www.econ.cam.ac.uk/people/crsid.html?crsid=pd10000&group=emeritus

Similarly, in a pre-emptive line to ward off sensationalism

A journalist has no incentive to avoid sensationalism.

Things that are very reasonable with qualifiers can become very unreasonable if you remove the qualifiers - and editors often just see qualifiers as unnecessary verbosity (or want the piece to make stronger, more sensational claims)

Editors want to write articles that the average person understands. It's their job to simplify. That still has a good chance of leaving the readers more informed than they were before reading the article.

Explaining things to the average person is hard.

It's not the kind of article that I would send to people who have a background in the field and who approach you. On the other hand, it's quite fine for the average person.

"A journalist doesn't have any interest not to engage in sensationalism."

Yes - lazy shorthand in my last LW post, apologies. I should have said something along the lines of "in order to clarify our concerns, and not give the journalist the honest impression that we thought these things all represented imminent doom, which might result in sensationalist coverage" - as in, sensationalism resulting from misunderstanding. If the journalist chooses deliberately to engage in sensationalism, that's a slightly different thing - and yes, it sells newspapers.

"Editors want to write articles that the average person understands. It's their job to simplify. That still has a good chance of leaving the readers more informed than they were before reading the article."

Yes. I merely get concerned when "scientists think we need to learn more about this, and recommend use of the precautionary principle before engaging" gets simplified to "scientists say 'don't do this'", as in that case it's not clear to me that readers come away with a better understanding of the issue. There's a lot of misunderstanding of science due to simplified reporting. Anders Sandberg and Avi Roy have a good article on this in health (as do others): http://theconversation.com/the-seven-deadly-sins-of-health-and-science-reporting-21130

"It's not the kind of article that I would sent people who have an background and who approach you. On the other hand it's quite fine for the average person."

Thanks, helpful.

There's a lot of misunderstanding of science due to simplified reporting. Anders Sandberg and Avi Roy have a good article on this in health

I don't think the article you linked demonstrates that reporting produces misunderstanding. You have to think about the alternative: how else does the average person form their beliefs? They might hear something from a friend. They might read the horoscope.

Even when the journalist actually writes "scientists think we need to learn more about this, and recommend use of the precautionary principle before engaging", many readers will simply read "scientists say 'don't do this'", or they will simply ignore it - especially when you focus on what they actually remember from reading the article.

This is the first media piece on x-risk reduction that I've read which makes not just the ideas but the work itself sound high-status. The article seems to have gravitated towards the cliché of the "tight-knit cabal of wealthy, sophisticated geniuses quietly saving the world from destruction", which it seems to me is probably one of the best ways that it could be spun. I'm certainly no PR person, though.