New strategies for combating misinformation

A layperson-friendly view. Cross-posted from my personal blog, First Principles.

Fake news is on the rise. We know this from Facebook shares, WhatsApp forwards, Twitter trolls, and Potemkin news sites. We see it in elections across the world, novel coronavirus guidance, and nation-state posturing.

We’ve known about the issue for a while, and technology companies — in their role as the primary distributors — have taken action. This action has not stemmed the tide, and meanwhile the techniques of misinformation evolve and proliferate: bot armies and deepfakes are only the most recent innovations.

Why is it so difficult to define what fake news is? Why does calling out lies have little impact on those already deceived? And crucially, what can we do to restore trust and reason to public and social media?

What is fake news?

Fake news is deliberate, targeted misinformation. It’s not necessarily wholly false: perpetrators are as willing to utilize truths that fit their narrative as they are to concoct falsehoods to construct it. They’re necessarily indifferent to the truth, and attached solely to the outcome: the beliefs they wish to plant within the minds of their targets. From Harry Frankfurt’s On Bullshit:

[The bullshitter’s] eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.

Fake news, therefore, isn’t lying, but bullshit.

Why is fake news so hard to fight?

Fake news, being an extension of bullshit, inherits many of its traits:

It’s hard to refute

The claims within bullshit are many, nebulous, and often not even wrong. Fact-checking alone isn’t adequate to this task, and neither is reliance upon a trustworthy set of sources.

It’s normalized

As Frankfurt points out:

One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. […] The realms of advertising and of public relations, and the nowadays closely related realm of politics, are replete with instances of bullshit so unmitigated that they can serve among the most indisputable and classic paradigms of the concept.

It’s hard to regulate

If bullshit is hard to characterize, it’s harder to legally define. By virtue of either not even being wrong or outlandishly so, fake news can take advantage of freedom of speech protections for parody and satire. In any specific case, the perpetrators may be elusive, not within the same legal jurisdiction as the victims, or, if every content repost is counted, too many in number to sue.

What will it take?

Effectively countering misinformation requires a sea change in how journalistic media engages with fake news’ misleading narratives, and in the metrics by which content distributors value and incentivize activity on their platforms.

Preempt the narrative

Fact-checking is journalists’ prime weapon against fake news, and fact-checking tools have rightfully proliferated; content distributors even surface them alongside suspect material. But fact-checking alone is ineffective at changing minds, and at best it is a reactive, arduous activity that can verify only a tiny fraction of publicized claims.

Instead, content creators and journalistic media must track fake news with the aim of anticipating the intended post-truth narratives, and promote countervailing, fact-based narratives of their own. This narrative-busting connects with disparate audiences in the ways most meaningful to each, without forgoing journalistic neutrality. From UNESCO’s journalism handbook on fake news:

The core components of professional journalistic practice […] can be fulfilled in a range of journalistic styles and stories, each embodying different narratives that in turn are based on different values and varying perspectives of fairness, contextuality, relevant facts, etc.

Journalists must intimately understand their audience to honestly and confidently convey these narratives. Techniques of causal correction and moral reframing have been shown to be effective in conveying factual information. For example:

Saying "the senator denies he is resigning because of a bribery investigation" is not very effective, even with good evidence that that's the truth.
More effective would be something like: "the senator denies he is resigning because of a bribery investigation. Instead, he said, he is becoming the president of a university."

Journalistic neutrality has come to mean catering to a single — mostly moderately liberal — audience, at the expense of the touchpoints of understanding that once appealed to large swathes of the population. To counter misleading and polarizing narratives, alternatives grounded in reality must be translated into the value and belief systems of diverse peoples.

Incentivize deliberation

Sharing is easy and uniform across all content, but not all engagement is created equal. Technology companies must recognize that slowing down some kinds of engagement leads to higher-quality content and better shareholder value.

Like journalistic media, content distributors have relied on fact-checking, with Facebook, YouTube, and Twitter tagging suspected misinformation. This has been applied sparingly and with mixed results, and has even exacerbated the problem by implying that untagged content is verified to be true. Even for this flagged subset of content, sharing and cross-posting remain frictionless.

Blocking the sharing of any content outright is undesirable, and raises issues of censorship and free expression. However, technology companies can build features that incentivize users to reflect on problematic content before sharing it, improving the quality of the ensuing discussions.

When a user shares flagged content, the platform can prompt them to add an accompanying comment of a minimum length and complexity to encourage deliberation, or to answer a quick IMVAIN survey to crowdsource the content's reliability. These need not be mandatory: a softer disincentive is to indicate to subsequent viewers when the user declined to comment on or verify the post.
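To make the proposal concrete, here is a minimal sketch of what such a share-gating flow might look like. Everything in it is hypothetical: the `MIN_COMMENT_LENGTH` threshold, the `IMVAIN_QUESTIONS` wording, and the `share_flagged_content` function are invented for illustration, not any real platform's API.

```python
from typing import List, Optional

MIN_COMMENT_LENGTH = 80  # arbitrary threshold for a "deliberate" comment

# IMVAIN: Independent, Multiple, Verified, Authoritative/Informed, Named sources
IMVAIN_QUESTIONS = [
    "Is the source independent of the story's subjects?",
    "Do multiple sources corroborate the claim?",
    "Does the source verify the claim with evidence?",
    "Is the source authoritative and informed?",
    "Are the sources named?",
]

def share_flagged_content(comment: Optional[str],
                          survey_answers: Optional[List[bool]]) -> dict:
    """Annotate a share of flagged content; never block it outright."""
    deliberated = comment is not None and len(comment) >= MIN_COMMENT_LENGTH
    surveyed = (survey_answers is not None
                and len(survey_answers) == len(IMVAIN_QUESTIONS))
    return {
        "shared": True,  # sharing always succeeds: no censorship
        "comment": comment,
        "reliability_votes": survey_answers,  # crowdsourced IMVAIN responses
        # The soft disincentive shown to subsequent viewers:
        "badge": ("deliberated" if (deliberated or surveyed)
                  else "shared without comment or review"),
    }
```

Note that the sketch never sets `"shared"` to `False`: the lever is the badge shown to later viewers, not a block on sharing.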

Such measures can be differentially applied, and distributors have already demonstrated this ability by automatically flagging and prioritizing content for fact-checking. By treating content that is new, unverified, or suspected of being misleading uniformly across the spectra of politics and values, platforms can process larger volumes of content, improve content quality, and sidestep bias.


Fake news is bullshit: hard to pin down, refute, and regulate. Countering misinformation requires journalists to promote factual narratives by engaging overlooked audiences with causal and moral reframing, and content distributors to incentivize deliberation and crowdsource reliability, discourage uncritical reposting of suspect content, and develop engagement metrics that reward quality activity.


Comments

I think another distinction worth making here is whether the person "bullshitting"/"lying" even expects or intends to be believed. It's possible to have "not care whether the things he says describe reality correctly" and still be saying it because you expect people to take you seriously and believe you, and I'd still call that lying.

It's quite a different thing when that expectation is no longer there.

Thanks for the comment, jimmy! That's a good point, and I wonder if it applies to what we're seeing in some of the political misinformation today, where the objective isn't so much to be believed, but to bombard a person with so many conflicting views and narratives that they lose faith in the process and institutions altogether.

I think the conflicting narratives tend to come from different sides of the conflict, and that people generally want the institutions that they're part of (and which give them status) to remain high status. It just doesn't always work.

What I'm talking about is more like... okay, Chael Sonnen makes a great example here, both because he's great at it and because it makes for a non-political example. Chael Sonnen is a professional fighter who intentionally plays the role of the "heel". He'll say ridiculous things with a straight face, like telling the greatest fighter in the world that he "absolutely sucks", or telling a story that a couple of Brazilian fighters (the Nogueira brothers) mistook a bus for a horse and tried to feed it a carrot, and then sticking to that story.

When people try to "fact check" Chael Sonnen, it doesn't matter, because not only does he not care whether what he's saying is true, he's not even bound by any expectation of being believed. The bus/carrot story was his way of explaining that he didn't mean to offend any Brazilians: the only reason he said that offensive stuff online is that he was unaware that they had computers in Brazil. The whole point of being a heel is to provoke a response, and to do that all he has to do is have the tiniest sliver of potential truth there and not break character. The bus/carrot story wouldn't have worked if the fighters were from a country clearly more technologically advanced than his own, even though it's pretty darn far from "they actually think buses are horses, and it's plausible that Chael didn't know they have computers". If your attempt to call Chael out on his BS is to "fact check" whether he was even there to see a potential bus/horse confusion, or to point out that if anything they're more likely to mistake a bus for a llama, you're missing the entire point of the BS in the first place. The only way to respond is the way Big Nog actually did, which is to laugh it off as the ridiculous story it is.

The problem is that while you might be able to laugh off a silly story about mistaking a bus for a horse, people like Chael (if they're any good at what they do) will be able to find things you're sensitive about. You can't so easily "just laugh off" him saying that you absolutely suck, even if you're the best in the world, because he was a good enough fighter that he nearly won that first match. Bullshitters like Chael will find the things that are difficult for you to entertain as potentially true and make you go there. If there's any truth there, you'll have to admit to it or end up making yourself look like a fool.

This brings up the other type of non-truthtelling that commonly occurs, which is the counterpart to this. Actually expecting to be believed means opening yourself to the possibility of being wrong and demonstrating that you're not threatened by this. If I say it's raining outside and expect you to actually believe me, I have to be able to say "hey, I'll open the door and show you!", and I have to look like I'll be surprised if you don't believe me once you get outside. If I start saying "How DARE you insinuate that I might be lying about the rain!" and generally take the bait that BSers like Chael leave, I show that it's not that I want you to genuinely believe me so much as I want you to shut your mouth and not challenge my ideas. It's a 2+2=5 situation now, and that's a whole other thing to expect. In these cases there still isn't the pressure to conform to the truth that's needed if you expect to be believed, and your real constraint is how much power you have to pressure the other person into silence/conformity.

The biggest threat to truth, as I see it, is that when people get threatened by ideas they don't want to be true, they try to 2+2=5 at it. Sometimes they'll do the same thing even when the belief they're trying to enforce is actually the correct one, and it causes just as many problems, because you can't trust someone saying "Don't you DARE question it" even when they follow it up with "2+2=4", and unless you can do the math yourself you can't know what to believe.

To give a recent example, I found a document written by a virologist PhD about why the COVID pandemic is very unlikely to have come from a lab, and it was more thorough and covered more possibilities than anything else I had seen, which was really cool. The problem is that when I actually checked his sources, they didn't all say what he said they said. I sent him a message asking whether I was missing something in a particular reference, and his response was basically "Ah, yeah. It's not in that one, it's in another one from China that has been deleted and doesn't exist anymore", and he went on to cite the next part of his document as if there were nothing wrong with falsely implying that the sources one gives support the point one made, and as if the only reason I could even be asking about it was that I hadn't read the following paragraph about something else. When I pointed out that conspiracy-minded people are likely to latch on to any little reason not to trust him, and that in order to be persuasive to his target audience he should probably correct it and note the change, he did not respond and did not correct his document. And he wonders why we have conspiracy theories.

Bullshitters like Chael can sometimes lose (or fail to form) their grip on reality and let their untruths actually start to impact things in a negative way, and that's a problem. However, it's important to realize that the fuel that sustains these people is the over-reaching attempt to enforce "2+2=what I want you to say it does"; if you just do the math and laugh it off when he says with a straight face that 2+2=22, there's no more oppressive bullshit for him to eat and fuel his trolling.

In any specific case, the perpetrators may be ... too many in number to sue.

Then choose one, and make an example out of them.

If you admit to yourself that your goal is revenge, having too many potential targets gives you the advantage that you can optimize for impact. You can choose the ones you have the biggest chance to defeat. If you win money, you can spend it all on lawyers to attack more targets. With some targets, you can make a deal that if they publish a sincere apology, you will forgive them half the money they owe you. Then use the apology as evidence against the other targets.

If you get the reputation of going nuclear, journalists will think twice when writing about you in the future.

FWIW, I think this wouldn't work in an individually useful way. Say I get targeted out of the blue because someone edits a picture of my college graduation into a photo of me throwing a baby off of a cliff. I'm not going to be super worried about repeats, I'm just annoyed that this happened for the first time.

My revenge might make people less likely to do this to other people, but I've essentially already lost, and would be switching gears from "that guy who got messed up by a viral edited picture" to "that jerk who keeps suing people for years". I argue that, while I might[1] improve society as a whole by doing this, it won't noticeably improve my life.

Instead, I think I'd support your claim that your revenge would be a charitable act towards society.

(Once more, in case I was unclear: in game-theory terms, a "threat" which increases your score and lowers the score of other agents isn't really a threat, it's just something you should do. A true threat tends to be an action which lowers your score AND the score of other agents, and so is useful as a deterrent. Here, we're proposing establishing a rule, "when you hit me with falsehoods, I hit back with the truth", to discourage people from creating fake news about you. For almost anyone, "hitting back" is going to be costly, so it's important to remember we're supporting SOCIETY, not OURSELVES.)

  1. I don't really care about this claim here, so I literally mean "might". I'm going after a different part of the idea to which the truth value of this particular segment is irrelevant. ↩︎
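The distinction drawn above between a genuine deterrent and a dominant action can be written down with toy payoffs. The numbers below are invented purely for illustration:

```python
# Toy payoffs (your_score, attacker_score) for responses to fake news about you.
payoffs = {
    "ignore":   (-2,  3),   # you absorb the damage; the attacker profits
    "hit_back": (-5, -4),   # lawsuits and rebuttals cost you even more,
                            # but they punish the attacker too
}

you_ignore, attacker_ignore = payoffs["ignore"]
you_hit, attacker_hit = payoffs["hit_back"]

# "Hitting back" lowers BOTH scores relative to ignoring, so it is a genuine
# deterrent (a costly commitment), not a dominant action you'd take anyway.
is_genuine_threat = you_hit < you_ignore and attacker_hit < attacker_ignore
```

The deterrent only pays off socially, by making the "attack" row less attractive before it happens; carried out, it leaves the individual worse off than ignoring, which is the comment's point.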

On Bullshit seems to take this in a very different direction than the OP.

From the post:

They’re necessarily indifferent to the truth, and attached solely to the outcome: the beliefs they wish to plant within the minds of their targets.

From OB:

It is clear that what makes Fourth of July oration humbug is not fundamentally that the speaker regards his statements as false. Rather, just as Black's account suggests, the orator intends these statements to convey a certain impression of himself. He is not trying to deceive anyone concerning American history. What he cares about is what people think of him.


A use of bullshit that seems typical:


They’re necessarily indifferent to the truth, and attached solely to the outcome: the beliefs they wish to plant within the minds of their targets. 

How do we know whether this claim is false news? How would we go about checking?

If that claim were true, then you would find fake news providers willing to do things that reduce their traffic in return for convincing more people.

For a good chunk of problematic outlets, I doubt that's the case. There are many different actors with very different motivations. Supplement salesmen like Mercola or Alex Jones spread a lot of stories that are false, but they work very differently than no-name outlets and bot farms.

To actually tackle the subject in a fact-based way, it's necessary to separate the players and look at their agendas.

To me, "the important actors care more about the beliefs they are spreading than they care about making money" seems like problematic bullshit.

You might be interested in this. Drawing on Frankfurt, we present a framework to understand how bullshitting is different from lying, and how to stem and flush away bullshit in the workplace, and society.

It is currently free to download and share, here.
