The Paper of Record had a good article today about existential AI risk, mentioning EY:

In the early 2000s, a young writer named Eliezer Yudkowsky began warning that A.I. could destroy humanity. His online posts spawned a community of believers. Called rationalists or effective altruists, this community became enormously influential in academia, government think tanks and the tech industry.

It even mentioned Clippy!

The worriers have often used a simple metaphor. If you ask a machine to create as many paper clips as possible, they say, it could get carried away and transform everything — including humanity — into paper clip factories.

Very happy to see this.

[-]Razied · 11mo

Researchers and industry leaders have warned that A.I. could pose an existential risk to humanity. But they’ve been light on the details.
...
The letter was the latest in a series of ominous warnings about A.I. that have been notably light on details.

Has Cade Metz bothered to read anything more on AI risk than the one-sentence statement in the safe.ai open letter? To my eye this article is full of sneering and dismissive insinuations about the real risk. It's as if the author is only writing this article in the most grudging way possible, because at this point the prestige of the people talking about AI risk has grown so large that he can't dismiss it quite so easily without losing status himself.

I think rationalists need to snap out of the "senpai noticed me" mode with respect to the NYT, and actually look at the pathetic level its AI articles operate on. Is quoting the oldest, most famous and most misunderstood meme of AI safety really the level you ought to expect from what is ostensibly the peak of journalism in the western world?

To add to this point, the author in question is infamous for doxxing Scott Alexander and writing a hit piece on the rationalist community.

https://slatestarcodex.com/2020/09/11/update-on-my-situation/

[-]trevor · 11mo

I don't think that this specific comment is a very productive way to go about things here. Journalists count as elites in democracies, and they can't publicly apologize when they make a mistake because that embarrasses the paper, so if they ever change their mind about something (especially something really big and important) then their only recourse is to write positive articles to try to make up for the negative article they originally wrote.

I'm not sure I agree with Razied on the whole "senpai noticed me" thing. I agree that it's important to wake up to that dynamic, which is silly; articles like these don't seem to have a track record of vastly increasing the number of alignment researchers, whereas mid-2010s publications like HPMOR and Superintelligence do (and those phenomena may have failed to replicate in the 2020s, with WWOTF and planecrash). But there are tons of factors at play here that even I'm not aware of, like people at EA university groups being able to show these articles to mathematicians unfamiliar with AI safety, or orgs citing them in publications, which is the kind of thing that determines the net value of these articles.

[This comment is no longer endorsed by its author]

Cade Metz already has multiple strikes against him when it comes to journalistic carelessness around the rationalist community and around AI risk. In addition to outing Scott, he blithely mischaracterized the situation between Geoff Hinton and Google.

It's harmful that the NYT still has him on this beat (though I'm sure his editors don't know/care that he's treating the topic as an anthropological curiosity rather than something worth taking seriously).

[-]Viliam · 11mo

Reading the tweets...

GH: In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.

Someone Else: I read and interpreted Cade Metz's NY Times article the way you meant: you left Google to speak more freely about AI and its potential dangers, and at no point did I feel you criticised Google. For what it's worth.

GH: Maybe I over-reacted. When I read it I thought it could easily be interpreted as implying that I left so that I could criticize Google and that is certainly not the case.

...it seems to me that this specific case is not strong evidence, if even GH thinks he might have overreacted.

(I don't want to defend NYT or Metz, and there is a lot of other evidence against them; I am just unimpressed by this one specific piece of evidence you chose to link.)

(I haven't read the article itself, only the tweets.)

[-]trevor · 11mo

Whoops! I only knew about him from the SSC situation a couple years ago, I had no idea that he was the one behind that NYT article; I guess some people never change (especially people who are living large, like journalists).

I still think it makes sense to give people opportunities to change their ways; if nothing else, so that decent researchers/interns could ghostwrite articles under Cade Metz's name, which is a common practice at major news outlets (journalist positions are plum jobs, so they tend to get occupied by incompetent status-maximizers, who reveal their disinterest in actual work as soon as they reach a level they feel satisfied with; and most of the work at news outlets is secretly done by interns, since there are tons of competent college students desperate for a tiny number of positions, and news outlet staff lack the will and ability to actually evaluate them for competence).

Also, treating things as "anthropological curiosities" is actually a major tactic for big news corps; it creates the sense that all things are beneath the news outlet itself. There's a surprisingly large proportion of middle-class people out there who buy into the myth of news outlets as the last bastion of truth. Reputation maximization is something that news outlets take very seriously, especially nowadays since they're all on such thin ice.

I agree that, given the dynamics, it's rare to get a great journalist on a technical subject (we're lucky to have Zeynep Tufekci on public health), but my opinion is that Metz has a negative Value Over Replacement Tech Journalist, that coverage of AI in the NYT would be significantly more accurate if he quit and was replaced by whomever the Times would poach.

I like Metz. I'd rather have EY, but that won't happen.

[-]Blueberry · 11mo

This is completely false, as well as irrelevant.

  • He did not "doxx" Scott. He was going to reveal Scott's full name in a news article about him without permission, which is not by any means doxxing; it's news reporting. News is important, and news has a right to reveal the full names of public figures.

  • This didn't happen, because Scott got the NYT to wait until he was ready before revealing it.

  • The article on rationalism isn't a "hit piece" even if it contains some things you don't like. I thought it was fair and balanced.

  • None of this is relevant, and it's silly to hold a grudge against a reporter over an article you didn't like from years ago, when what matters more is this current article about AI risk.

This did not seem to me to be sneering or dismissive of the risks. I think it was just showing a bit of Cade's ignorance in the area.

Otherwise it was pretty positive, and I expect people to come away thinking "wow! A bunch of respectable people have been saying catastrophic AI is a real risk!".

This exactly. Having the Grey Lady report about AI risk is a huge step forward and probably decreased the chance of us dying by at least a little.

I don't see it as sneering at all.

I'm not sure what you mean by "senpai noticed me" but I think it is absolutely critical, as AI becomes more familiar to hoi polloi, that prominent newspapers report on AI existential risk.

The fact that he even mentions EY as the one who started the whole thing warms my EY-fangirl heart - a lot of stuff on AI risk does not mention him.

I also have no idea what you mean about Clippy - how is it misunderstood? I think it's an excellent way to explain.

Would you prefer this?

https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test

I also have no idea what you mean about Clippy - how is it misunderstood? I think it's an excellent way to explain.

See https://www.lesswrong.com/tag/squiggle-maximizer-formerly-paperclip-maximizer

[-]Viliam · 11mo

Ignoring the author, the article seems mostly okay to me. There is a lot to nitpick, of course, but given my generally low opinion of journalism, this is exceptionally good.

Called rationalists or effective altruists, this community became enormously influential in academia, government think tanks and the tech industry.

This sounds like "rationalists" and "effective altruists" are synonyms. (It makes it difficult to decipher whether the author means that it's the rationalists or the EAs who are enormously influential in academia, etc.)