My Interview With Cade Metz on His Reporting About Lighthaven

by Zack_M_Davis
17th Aug 2025
6 min read
15 comments, sorted by top scoring
[-] ryan_b · 1mo

My days of not taking that person seriously sure are coming to a middle.

[-] Elizabeth · 1mo

Thank you for your service in recording and sharing this. 

[-] Zack_M_Davis · 1mo

(The original recording is available, but there's a lot of background noise.)

[-] AnthonyC · 1mo

Yeah, sounds like a typical reporter perspective - seemingly or genuinely not understanding that there's a difference between what they're thinking and what they're saying, or that small wording changes have big meaning implications for anyone actually trying to learn.

[-] orthonormal · 1mo

I believe that Cade knows perfectly well what everyone has been saying for years; he's being disingenuous because the object level doesn't matter to him, and the only important thing is ensuring that these weirdos don't get status. He's never once engaged on simulacrum level 1 with this community.

[-] Zack_M_Davis · 1mo

the only important thing is ensuring that these weirdos don't get status

Seems too self-centered to be the real explanation. (Most of the time, people who do things that hurt you aren't doing it because they hate you; it's because you're in the way.)

If you're a technology reporter whose job is to cover what rich and powerful people in Silicon Valley are up to, the fact that companies your readers have heard of (DeepMind and OpenAI and Anthropic) are causally downstream of this internet ideology that no one has heard of is itself an interesting story that the public deserves to hear about.

It is a legitimate and interesting story that the public deserves to hear about! The problem, from our perspective, is that he doesn't accept that the object level is a relevant part of the story. He's correct to notice the asymmetry in vibes between people at MATS trying to save the world and people at Meta trying to make money as being "a key part of the debate" as far as the psychology of the participants goes—and by writing a story about that observation, he's done his job as a technology reporter. Simulacrum 1 isn't in scope.

[-] localdeity · 1mo

I'm guessing you remember this?

People might think Matt is overstating this but I literally heard it from NYT reporters at the time. There was a top-down decision that tech could not be covered positively, even when there was a true, newsworthy and positive story. I'd never heard anything like it. https://x.com/mattyglesias/status/1588190763413868553

The original Matt Yglesias tweet has been deleted, but the Internet Archive has it:

I think a lot of people are totally ignorant of the background dynamic driving the drama around the checkmarks.

But what happened is that a few years ago the New York Times made a weird editorial decision with its tech coverage.

Instead of covering the industry with a business press lens or a consumer lens they started covering it with a very tough investigative lens — highly oppositional at all times and occasionally unfair.

Almost never curious about technology or in awe of progress and potential.

This was a very deliberate top-down decision.

They decided tech was a major power center that needed scrutiny and needed to be taken down a peg, and this style of coverage became very widespread and prominent in the industry.

I forget, have you asked Metz about this?

[-] Zack_M_Davis · 1mo

What makes you describe this as a "typical reporter perspective"? One would expect that people who write for a living are sensitive to the effects of word choices (such that, if they're nudging readers one way or the other, it's probably on purpose rather than by accident).

[-] AnthonyC · 1mo

That was, admittedly, a snarky overgeneralization on my part, sorry.

It may well be on purpose. However, I tend to think in many cases it's more likely a semi-conscious or unconscious-by-long-practice habit of writing what will get people to read and discuss, and not what will get them to understand and learn.

[-] Chris_Leong · 1mo

I almost agree-voted this — then read the comments below — and disagree-voted it instead.

[-] AnthonyC · 1mo

Fair enough.

[-] Martin Randall · 1mo

Phrases like "near-religious" and "leap of faith" have different meanings to rationalists, compared to NYT readership. Often rationalists have negative views about religion, and a phrase like "near-religious concerns about X" is taken to mean that the concerns about X are not based on evidence or reason. That's a common meaning. It's not the only one.

In the wider NYT readership, and in the world, there are lots of people with religion and/or faith. This pejorative meaning is not as central, and there is some space for other meanings. For example, a near-religious concern about X might mean:

  • X is eschatological. We might say, existential.
  • Fighting X is a commandment. We might say, morally demanding.
  • X is ineffable. We might say, a singularity.
  • X is extremely powerful. We might say, super-intelligent.
  • There is a community of people concerned about X that work together. We might say, like herding cats.

Meanwhile, another meaning of "near-religious" and "leap of faith" is that a belief is obviously and laughably false, like the dragon in my garage. Or it may mean that the person holding the belief is as unworthy of respect as a creationist AGI researcher. Outside rationalist culture these are not common meanings. Again, there are many people with religion and/or faith.

So then if you're a reporter, or a troll, or both, and you want to needle rationalists while appearing even-handed to outsiders, you go with the religious metaphor. This was an old joke when Yudkowsky wrote Is Humanism A Religion-Substitute? in 2008. I'm told that explaining a joke kills it, so perhaps this will help.

[-] Chris_Leong · 1mo

I think he's clearly had a narrative he wanted to spin and he's being very defensive here.

If I wanted to steelman his position, I would do so as follows (low-confidence and written fairly quickly):

  1. I expect he believes his framing and that he feels fairly confident in it because most of the people he respects also adopt this framing.
  2. In so far as his own personal views make it into the article, I expect he believes that he's engaging in a socially acceptable amount of editorializing. In fact, I expect he believes that editorializing the article in this way is more socially responsible than not, likely due to the role of journalism being something along the lines of "critiquing power".
  3. Further, whilst I expect he wouldn't universally endorse "being socially acceptable among journalists" as guaranteeing that something is moral, he'd likely defend it as a strongly reliable heuristic, such that it would take pretty strong arguments to justify departing from this.
  4. Whilst he likely endorses some degree of objectivity (in terms of getting facts correct), I expect that he also sees neutrality as overrated by old-school journalists. I expect he believes that it limits the ability of journalists to steer the world towards positive outcomes. That is, he treats neutrality more as a consideration that can be overridden than as a rule.
[-] bodry · 1mo

Among all the people he could have talked to about Lighthaven, he chose an archeologist and a nun/theology professor. The whole thing is littered with religious phraseology where it doesn't apply. That angle is pushed ridiculously hard. He's trying to identify rationalism as a cult, using criteria by which every group of people with a similar set of ideas/beliefs could be described as a cult.

[-] WalterL · 1mo

Cade Metz doxxed the slatestarcodex guy, right?  My conception going into this was 'this guy pretends to not understand stuff in order to hurt people and enrich himself' and this whole 'the science word sayers are really faith believers' take is pretty clearly more of that.

Invert it, if you like. Feels like, if Silicon Valley WAS a religious movement, Cade would describe it as cold-eyed, cynical pragmatism.

I wouldn't pay this scoundrel any more mind.  Any engagement you give him will be put to use to the detriment of whoever is currently on his plate.


On 12 August 2025, I sat down with New York Times reporter Cade Metz to discuss some criticisms of his 4 August 2025 article, "The Rise of Silicon Valley's Techno-Religion". The transcript below has been edited for clarity.


ZMD: In accordance with our meetings being on the record in both directions, I have some more questions for you.

I did not really have high expectations about the August 4th article on Lighthaven and the Secular Solstice. The article is actually a little bit worse than I expected, in that you seem to be pushing a "rationalism as religion" angle really hard in a way that seems inappropriately editorializing for a news article.

For example, you write, quote,

Whether they are right or wrong in their near-religious concerns about A.I., the tech industry is reckoning with their beliefs.

End quote. What is the word "near-religious" doing in that sentence? You could have cut the word and just said "their concerns about AI", and it would be a perfectly informative sentence.

CM: My job is to explain to people what is going on. These are laypeople. They don't necessarily have any experience with the tech industry or with your community or with what's going on here. You have to make a lot of decisions in order to do that, right? The job is to take information from lots and lots and lots of people who each bring something to the article, and then you consolidate that into a piece that tries to convey all that information. If you write an article about Google, Google is not necessarily going to agree with every word in the article.

ZMD: Right, I definitely understand that part. I'm definitely not demanding a puff piece about these people who I have also written critically about. But just in terms of like—

CM: But you and so many others in the community, who have been in the community for decades in some cases, years, use the same language. They use stronger language.

ZMD: Right, but so like—I'm just saying in terms of writing a news article, when you're trying to convey the who-what-when-where-why, "their concerns about AI", just with no adjective between "their" and "concerns", is much more straightforward.

CM: No, but people need to understand, okay, that the technology as it exists today, it is not clear how it would get to the point where it's going to, you know, destroy humanity, for instance. That is a belief that the average person doesn't understand. And when someone says that, they take it at face value, like ChatGPT is going to jump out of—

ZMD: That's not actually the argument, though.

CM: That's what I'm saying. People need to understand the argument on some level, and people are going to debate this for years and years, or however long it takes, but that's a leap of faith, right?

ZMD: Yeah, that was actually my second question here. I was a little bit disappointed by the article, but the audio commentary was kind of worse. You open the audio commentary with:

We have arrived at a moment when many in Silicon Valley are saying that artificial intelligence will soon match the powers of the human brain, even though we have no hard evidence that will happen. It's an argument based on faith.

End quote. And just, these people have written hundreds of thousands of words carefully arguing why they think powerful AI is possible and plausibly coming soon.

CM: That's an argument.

ZMD: Right.

CM: It's an argument.

ZMD: Right.

CM: We don't know how to get there.

ZMD: Right.

CM: We do not—we don't know—

ZMD: But do you understand the difference between "uncertain probabilistic argument" and "leap of faith"? Like these are different things.

CM: I didn't say that. People need to understand that we don't know how to get there. There are trend lines that people see. There are arguments that people make. But we don't know how to get there. And people are saying it's going to happen in a year or two, when they don't know how to get there. There's a gap.

ZMD: Yes.

CM: And boiling this down in straightforward language for people, that's my job.

ZMD: Yeah, so I think we agree that we don't know how to get there. There are these arguments, and, you know, you might disagree with those arguments, and that's fine. You might quote relevant experts who disagree, and that's fine. You might think these people are being dishonest or self-deluding, and that's fine. But to call it "an argument based on faith" is different from those three things. What is your response to that?

CM: I've given my response.

ZMD: It doesn't seem like a very ...

CM: We're just saying the same thing.

ZMD: Yeah, but like I feel like there should be some way to break the deadlock of, we're just saying the same thing, like some—right?

CM: I feel like there should be a way to break lots of deadlocks, right?

ZMD: Because, for example, the Model Evaluation and Threat Research, METR, which was spun out of Paul Christiano's Alignment Research Center, they've been measuring what tasks AI models can successfully complete in terms of how long it would take a human to complete the task, and they're finding that the task length is doubling every seven months, with the idea being that when you have AI that can do intellectual tasks that would take humans a day, a week, a month, that could pose a threat in terms of its ability to autonomously self-replicate. Again, you might disagree with the methodology, but that seven-month doubling time thing, which is one of the things people are looking at when they're writing these wild-sounding scenarios, that's empirical work on the existing technology.
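
(For concreteness, a minimal back-of-the-envelope sketch of the arithmetic behind a seven-month doubling time. The one-hour starting horizon and the chosen time points are illustrative assumptions, not METR's measured figures:)

    # Rough extrapolation of a task horizon that doubles every seven months.
    # Illustrative only: the 1-hour starting horizon is an assumption, not METR's data.
    def task_horizon_hours(months_elapsed, start_hours=1.0, doubling_months=7.0):
        """Equivalent human-time task length after `months_elapsed` months."""
        return start_hours * 2 ** (months_elapsed / doubling_months)

    for months in (0, 7, 14, 28, 42):
        print(f"after {months:2d} months: ~{task_horizon_hours(months):.0f} hour(s)")
    # 0 -> 1h, 7 -> 2h, 14 -> 4h, 28 -> 16h, 42 -> 64h
    # (about a week and a half of full-time human work, if the trend held).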

CM: The same lab has also released a study saying that these LLMs actually slow coders down.

ZMD: Yeah, I saw that, too.

CM: Again, trend lines, okay, sometimes they slow down. Sometimes they stop. And these trend lines are also, they're logarithmic, they're not exponential.

ZMD: That's an important point, yeah.

CM: We agree on all this stuff.

ZMD: We agree on all this—

CM: It's about disagreeing about the best way to convey this to a person.

ZMD: I feel like if you said the thing you just said in the article, that would have been great. But the thing you said in the audio commentary was "an argument based on faith", which is not what you just said to me. Those are different things.

Also, speaking of this religion angle, in your previous book, Genius Makers, you told the story of Turing Award–winning deep learning pioneer Geoffrey Hinton. More recently, in 2023, Hinton left his position at Google specifically in order to speak up about AI risks. There was actually a nice article about this in The New York Times, "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". You might have seen that one.

CM: Yes, I did. I broke that story.

ZMD: That was the joke.

CM: I'm with you.

ZMD: But so Hinton has said that his independent impression of the existential threat is more than 50%. Doesn't that undermine this narrative you're trying to build of AI risk as being a religious concern spread by rationalists? Hinton clearly isn't getting this from having read Harry Potter and the Methods of Rationality.

CM: People get ...

ZMD: So are you proposing a model where like—

CM: Go back and read that article. Listen to the Daily episode I did about Geoff. I push back on so much of what he says. I say in the articles, many of his students push back. They don't agree with it. So this is what needs to be conveyed to people, that many people working on the same technology, who are very smart, and very well educated, and very experienced, completely disagree on what is happening.

ZMD: Right. That part, I definitely agree that that part is part of the story.

CM: But there is also this community that has driven a lot of what is happening here, and people need to understand that.

ZMD: Yeah, but it just seems to me that the debate between people who think AI is a big risk and people who think AI is just another technology is symmetrical, in the sense that you have smart people on both sides who just disagree about what's happening. And if you're specifically pointing at one side and saying "this is an argument based on faith", then that's editorializing, in a way that's not what you should be doing as a reporter who's just trying to convey the debate to readers who don't know this is happening.

CM: I'm pointing out a key part of the debate. I'm pointing that out.