I would recommend taking your own recording (or asking the journalist to share theirs) and getting permission to publish. (I call it "on the record in both directions.") I do talk to Cade Metz a lot, and I don't think he's very good at his job, but I've been getting some cool meta-journalism out of it, so it's fine.
you can engage with journalists while holding to rationalist principles to only say true things.
Suppose there was a relatively simple computer program, say a kind of social media bot, that posts the opposite of whatever statement you input. Would you argue that, as long as you only type true statements yourself, using this program doesn't constitute lying?
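(A toy sketch of the kind of program I have in mind, just to make the hypothetical concrete; the details obviously don't matter:)

```python
# Purely illustrative: a bot that posts the negation of whatever statement you feed it.
def inverting_bot(statement: str) -> str:
    return f"It is not the case that {statement.rstrip('.')}."

print(inverting_bot("The model was trained safely."))
# -> It is not the case that The model was trained safely.
```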
But journalists don't invert source statements. That's against the rules and would be punished. (They do cherry-pick, which presents similar problems.)
To make the analogy stronger, what if it only inverts your statement 1 in 10 times? Then I think the answer is non-obvious and depends on your principles.
If you're deontological about it, I think you could make a case that your hands are not dirty for making the best of a bad system.
If you're consequentialist about it, I'm saying the 9/10 accurate renderings could outweigh the 1/10 inaccurate ones. And as Zack said, the 1/10 errors are rarely true inversions. That's why
even if you do get misquoted, it doesn't mean talking to the journalist was net-negative, even for that particular piece and even ex-post. As annoying as it is, it might be outweighed by the value of steering the article in positive ways.
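To put the consequentialist case in toy numbers (all invented, just to show the shape of the tradeoff):

```python
# Toy expected-value sketch with invented numbers: 9/10 of your statements land
# accurately, 1/10 get garbled, and a garbled quote is assumed to be three times
# as costly as an accurate one is valuable.
p_accurate = 0.9
value_per_accurate_quote = 1.0
cost_per_garbled_quote = 3.0

ev = p_accurate * value_per_accurate_quote - (1 - p_accurate) * cost_per_garbled_quote
print(round(ev, 2))  # 0.6: still positive, even with misquotes weighted 3x
```

Obviously the probabilities and the weights are doing all the work here; the point is just that occasional misquotes don't automatically flip the sign.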
Even if it were 1/10, it might be the most important 1/10. Something like that is in fact plausible: if someone were optimally trying to mostly look factual while pushing a political agenda, they would probably sort statements by ratio of [political benefit of lying] / [expected cost of being caught lying], pick a threshold, and lie whenever that ratio exceeds the threshold; and political benefit, as evaluated by this hypothetical journalist-hack, likely correlates with importance to the reader.
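Spelled out, the rule I'm imagining looks something like this (numbers invented; it's a model of the hypothetical hack, not a claim about any particular journalist):

```python
# Hypothetical hack's rule: lie about a statement when the ratio of
# (political benefit of lying) / (expected cost of being caught) clears a threshold.
# All values are invented for illustration.
statements = [
    {"claim": "background: what a transformer is", "benefit": 0.1, "catch_cost": 2.0},
    {"claim": "who is to blame for the incident",  "benefit": 3.0, "catch_cost": 1.0},
    {"claim": "direct quote from the source",      "benefit": 1.0, "catch_cost": 4.0},
]
THRESHOLD = 1.5

for s in statements:
    ratio = s["benefit"] / s["catch_cost"]
    action = "lie" if ratio > THRESHOLD else "report accurately"
    print(f"{s['claim']}: ratio={ratio:.2f} -> {action}")
```

The worry is that "benefit", as this hack evaluates it, tends to be highest exactly where the statement matters most to the reader.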
I agree it's possible and it's worth thinking through considerations like this. But I still don't think this is a good model of journalists' incentives.
In practice, "probability of being seen as inaccurate" is the term that dominates, which means inaccuracies tend to show up at the points in a news article that face the least scrutiny, e.g. the part of an AI article where the journalist rushes through what a transformer is. These are the parts that are often least important to readers, and least important to you as a source.
And then I would describe the motivation more as "career success" than "political benefit". As in getting a big scoop or writing a successful story, more than pushing a particular agenda. I think what journalists consider a successful story is kind of correlated with importance to the reader, barely correlated with what's impactful, and barely correlated with how frustrating it would be for you to be misquoted. Consider the ChatGPT suicide example: the journalist is focused on their big scoop, but probably cares much less about the paragraph I pulled out. Ditto for readers. But I think it's very valuable that it was included.
I'll have more on this in the epistemics post.
And then I would describe the motivation more as "career success" than "political benefit". As in getting a big scoop or writing a successful story, more than pushing a particular agenda.
This gets subtle. I can think of several cases where journalists sat on what would have been delicious scandals that should be good for a career, for what look like political reasons. That said, if one looks closer, it's plausible that, in each case, they reasoned (perhaps correctly) that publishing would not actually have been good for their career, because they would have faced backlash (for political/tribal reasons), and possibly their editors (if applicable) would have refused to allow it. I imagine there is partial but incomplete equivalence between this kind of "externally imposed political motivation" and "internalized political motivation", and it may be worth tracking the difference.
That's for omitting stories. For lying... On priors, that difference of external vs internal political motivation would be important: the latter would encourage a journalist to come up with new lies and use them, while the former would mostly just make them go along with lies that the rest of their tribe is already telling. I do see plenty of "going along with lies" and not much innovative mendacity; I'll note that the "lies" I refer to are usually "not technically false, but cherry-picked and/or misleadingly phrased, such that a normal person will hear it and predictably come away believing something false; and such that a journalist who felt a strong duty to tell the truth as best they could would not say it absent stronger external pressure". (See Zvi on bounded distrust.)
Re going along with lies: Yeah, I think the coverage of data center water usage has been an example of that at its worst :/
Re journalists sitting on scoops: I'm curious if you're able to share any examples? I don't doubt that it happens.
Sources' motivations for talking to journalists are a bit of a puzzle. On the one hand, it's helpful for journalists to work out what those motivations are, to keep sources invested in the relationship. On the other hand, sources behave in perplexing ways, for instance sharing information against their own interests, so it's often best to treat their psychology as unknowable.
Reflecting on sources' willingness to share compromising information, one mystified AI journalist told me last weekend, "no reasonable person would do this."
But to the extent I can divine their motivations, here are some reasons I think people talk to me at work:
Most of these are not particularly inspiring, but if you work in AI safety, I want to appeal to your theory of change. If your theory of change relies on getting companies, policymakers, or the public to do something about AI, the media can be very helpful to you. The media can inform those groups about the actions you would have them take and help steer them toward those decisions.
For example, news stories about GPT-4o and AI psychosis reach the public, policymakers, OpenAI investors, and OpenAI employees. Pressure from these groups can shape the company's incentives, for instance encouraging changes to OpenAI's safety practices.
More generally, talking to journalists can help raise the sanity waterline for the public conversation about AI risks.
If you are an employee at an AI lab and you could see yourself whistleblowing someday, I think it is extra valuable for you to feel comfortable talking to journalists. In my experience, safety-minded people sometimes use the possibility of being a whistleblower to license working at the labs. But in practice, whistleblowing is very difficult (a subject for a future post). If you do manage to overcome the many obstacles in your way and try to whistleblow, it will be much easier if you're not calling a journalist for the first time. Instead, get some low-stakes practice in now and establish a relationship with a journalist, so you have one fewer excuse if the time comes.
Maybe news articles offend your epistemic sensibilities because you've experienced Gell-Mann amnesia and have read too many sloppy articles. Unfortunately, I don't think we can afford to be so picky. If you don't talk to journalists, you cede the discourse to the least scrupulous sources. In this case, that's often corporate PR people at the labs, e/acc zealots, and David Sacks types. They are happy to plant misleading stories that make the safety community look bad. I think you can engage with journalists while holding to rationalist principles to only say true things.
It's pretty easy to steer articles. It often only takes one quote to connect an article on AI to existential risks, when counterfactually, the journalist wouldn't have realized the connection or had the authority to write it in their own voice. For example, take this recent CNN article on a ChatGPT suicide. Thanks to one anonymous ex-OpenAI employee, the article connected the suicide to the bigger safety picture:
One former OpenAI employee, who spoke with CNN on the condition of anonymity out of fear of retaliation, said “the race is incredibly intense,” explaining that the top AI companies are engaged in a constant tug-of-war for relevance. “I think they’re all rushing as fast as they can to get stuff out.”
It's that easy!
Overall, it sounds disingenuous to me when people in AI don't talk to journalists because they dislike the quality of AI journalism. You can change that!
If you appreciate initiatives like Tarbell that train journalists to better understand AI, you should really like talking to journalists yourself! Getting people who are already working in AI safety to talk to journalists is even more cost-effective and scalable. Plus, you will get to steer the discourse according to your specific threat models and will enjoy the fast feedback of seeing your views appear in print.
Here are some genres of safety-relevant stories that you might want to contribute to:
In practice, articles tend to cut across more than one of these categories. Op-eds also deserve an honorable mention: they don't require talking to journalists in the sense I'm writing about here, but some of the best articles on AI risks have been opinion pieces.
I'll briefly preempt a common objection: you're worried that journalists are going to misquote you or take you out of context.
First, I think that's rarer than you might expect, in part because you've probably over-indexed on the Cade Metz incident. Plus, journalists hate being wrong and try to get multiple sources, as I wrote in Read More News.
Second, you can seek out experienced beat reporters who will understand you, rather than junior ones.
Third and most importantly, even if you do get misquoted, it doesn't mean talking to the journalist was net-negative, even for that particular piece and even ex-post. As annoying as it is, it might be outweighed by the value of steering the article in positive ways.