[Cross-posted from Grand Unified Crazy.]

This post on the credibility of the CDC has sparked a great deal of discussion on the ethics of posts like it. Some people claim that the post itself is harmful, arguing that anything which reduces trust in the CDC will likely kill people as they ignore or reject important advice for dealing with SARS-CoV-2 and (in the long-run) other issues like vaccination. This argument has been met with two very different responses.

One response has been to argue that the CDC’s advice is so bad that reducing trust in it will actually have a net positive effect in the long run. This is ultimately an empirical question which somebody should probably address, but I do not have the skills or interest to attempt that.

The other response is much more interesting, arguing that appeals to consequences are generally bad, and that meta-level considerations mean we should generally speak the truth even if the immediate consequences are bad. I find this really interesting because it is ultimately about infohazards: those rare cases where there is a conflict between epistemic and instrumental rationality. Typically, we believe that having more truth (via epistemic rationality) is a positive trait that allows you to “win” more (thus aligning with instrumental rationality). But when more truth becomes harmful, which do we prefer: truth, or winning?

Some people will just decide to value truth more than winning as an axiom of their value system. But for most of us, I think this ultimately boils down to an empirical question of just how bad “not winning” will end up being. It’s easy to see that for sufficiently severe cases, natural selection takes over: any meme/person/thing that prefers truth over winning in those cases will die out, to be replaced by memes/people/things that choose to win. I personally will prefer winning in those cases. It’s also true that most of the time, truth actually helps you win in the long run. We should probably reject untrue claims even if they provide a small amount of extra short-term winning, since in the long run having an untrue belief is likely to prevent us from winning in ways we can’t predict.

Figuring out where the cut-over point lies between truth and winning seems non-trivial. Based on my examples above, we can derive two simple heuristics to start with:

  • Prefer truth over winning by default.
  • Prefer winning over truth if the cost of not winning is destruction of yourself or your community. (It’s interesting to note that this heuristic arguably already applies to SARS-CoV-2, at least for some people in at-risk demographics.)

What other heuristics do other people use for this question? How do they come out on the CDC post and SARS-CoV-2?

Comments (11)

I think the notion of "winning" and "truth" being opposed has been addressed by Doublethink (Choosing To Be Biased). As Eliezer puts it so well:

There is no second-order rationality. There is only a blind leap into what may or may not be a flaming lava pit. Once you know, it will be too late for blindness.

[anonymous]

To me that post is specifically about self-deception, not about deception of others. I fully agree that once you know a thing, it’s not worth trying to deceive yourself for the sake of increased winning. But it can still be worth trying to deceive others.

Yes, but you also have to take into account that by adopting this norm, others will attempt to deceive you when they have private information. In an iterated game, you effectively end up deceiving yourself indirectly.

[anonymous]

https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases (also by Eliezer) is closer to what I was actually trying to express.

[anonymous]

That seems both correct and desirable to me in certain scenarios? If somebody can help me win more effectively by deceiving me, I would prefer they do that. Especially when the consequences of “not winning” are severe, as in the case of a potentially deadly pandemic.

[This comment is no longer endorsed by its author]

It's a relief to know you aren't advocating self-deception, and you may want to adjust your phrasing in the post so as not to give that impression. "Epistemic rationality" means knowing the truth for yourself; being honest with others is a different virtue.

That said, I think telling the truth almost always does more good than harm, and my policy is to lie only in order to defend myself or others from violence. In this particular case, I don't see how the CDC post is going to hurt the average person, since its readers are not average people but members of the LW community.

There are people who need simple and clear advice in order to take effective action. It's important that those people trust the CDC. On the other hand, there is a contingent of intellectual people who are willing to read multiple sources and weigh the merits of individual pieces of advice, and who profit from having accurate information because they make decisions through analysis rather than blind trust.

Writing an easy-to-read post and not writing a post at all aren't the only options. It's also possible to write a post in a way that's less accessible to casual readers. Using highly arcane academic language is one way scientific journals can discuss dangerous ideas with less impact on public opinion while still serving the search for truth.

The failure of the linked post about the US healthcare institution, it seems to me, is that it uses language that's too accessible; it should have been written in a much more academic style.

I appreciate you raising this issue, Evan. In particular, the clarity of the trade-off between instrumental and epistemic rationality brings into focus a sense of discomfort I have felt about a lot of the recent activity on LW critical of the CDC.

I think it's especially important to keep our egos small and remember that expertise does not generalise.

expertise does not generalise.

To me it also brings home the difficulty of working out whether "experts" are really expert, or whether a given organization is optimized to deliver the benefits of expertise. Many times I have been seriously harmed by 'experts' who didn't know what they were doing.

One indication: the CDC has been subject to some trenchant criticism from medical people.

Another: their problems seem to lie not so much in highly technical issues as in basic organisational failures.

"Maybe it is better not to speak the truth" (pseudo-quote)

The long-term costs of past lying, or even of signaling that you would lie in certain circumstances, can be very severe. Consider the fact that people by now basically discount everything the CCP says to zero. Even when they tell the truth, they are unable to transmit that information, because we assume they are lying.

I reject the framing of truth vs winning. Instead, I propose that only winning matters. Truth has no terminal value.

It does, however, have enormous instrumental value. For this reason, I support the norm of always telling the truth even when the consequences appear to be net negative, with the reasoning that they probably aren't, at least not in expectation. This is so in part because truth feels extremely important to many of us, which means that having such a norm in place is highly beneficial.

The other response is much more interesting, arguing that appeals to consequences are generally bad, and that meta-level considerations mean we should generally speak the truth even if the immediate consequences are bad. I find this really interesting because it is ultimately about infohazards: those rare cases where there is a conflict between epistemic and instrumental rationality. Typically, we believe that having more truth (via epistemic rationality) is a positive trait that allows you to “win” more (thus aligning with instrumental rationality). But when more truth becomes harmful, which do we prefer: truth, or winning?

The keyword here is "immediate" [emphasis added], which you drop by the end. I agree with the first part of this paragraph but disagree with the final sentence. Instead, my question would have been, "but when more truth appears to become harmful, how do we balance the immediate consequences against the long-term, fuzzy, and uncertain but potentially enormous consequences of violating the truth norm?"

I read jimrandomh's comment as reasoning from this framework (rather than arguing that we should assign truth terminal value), but this might be confirmation bias.

[anonymous]

I also support the general norm of defaulting to truth. But I do believe there are cases where the negative consequences of truth become so severe and immediate that it is reasonable to abandon it in favour of winning. The bar for that should be very high, but not unreachable.