The phrase "no evidence to suggest" has been used as an excuse for inaction in the face of what is in fact ample evidence.

Health authorities' continued bumbling response to the coronavirus has really served to highlight this, starting right at the beginning:

And then continuing pretty steadily through the crisis:

(I couldn't quickly find an 'official authority' saying the above, but it must somewhat reflect the mindset of some of them, since otherwise the case for a single dose is pretty overwhelming.)

Of course, from a Bayesian perspective, this entire idea of "no evidence to suggest" is almost always meaningless, as pointed out in the full thread of the above tweet: there was plenty of evidence at the time for everything the WHO dismissed, as demonstrated by all the people on this very site who got it right.

However, not everyone thinks in terms of Bayesian statistics. Viewing the entire world as a probability distribution and acting accordingly is not for the average person. Instead, the way of deciding between the unsubstantiated and the real is via empirical science. Whilst not perfect, treating something as false until a carefully regulated study has been done is certainly far better than what we had in the past. It sounds at first like the WHO is making the correct decisions here: waiting until we have 'evidence' for something before acting on it, where evidence is not anecdotal data (under the empirical view of things) but double-blinded, placebo-controlled studies. How do we articulate what the WHO did wrong here, without using the word Bayesian?


One idea I had was to use another commonly used phrase: "no reason to suggest". Whilst they sound similar, I think the two phrases mean very different things to the average person.

To deal in extremes, consider these two statements:

1. There is no reason to suggest holding onto the tail of a plane as it takes off is dangerous.

2. There is no evidence to suggest holding onto the tail of a plane as it takes off is dangerous.

The first is obviously false. It takes about two seconds to think of reasons why it's a terribly stupid idea.

The second is less obviously false. By evidence some people mean a certain level of rigorously done study. That has presumably never taken place for this exact question.

In other cases the two are likely to agree. For example there is no reason or evidence to suggest that the vaccine can make you infertile.


So here's a simple trick: whenever you read a sentence containing the phrase "no evidence to suggest", try replacing it with the phrase "no reason to suggest".

If they both seem equally true, then that's fine. If the latter seems obviously false, then the sentence is likely misleading.

And if, as is usually the case, the modified sentence seems less true, but not obviously false, then the claim is probably not as strong as it makes out, but still may be somewhat valid.
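For the programmatically inclined, the trick above is literally a string substitution, so it can be sketched in a few lines of code (a toy illustration; the function name and regex are my own, not from the post):

```python
import re

# Case-insensitive pattern for the phrase the post warns about.
PHRASE = re.compile("no evidence to suggest", re.IGNORECASE)

def reason_variant(sentence):
    """Return the 'no reason to suggest' variant of a claim,
    or None if the claim doesn't contain the target phrase."""
    if not PHRASE.search(sentence):
        return None
    return PHRASE.sub("no reason to suggest", sentence)

claim = ("There is no evidence to suggest holding onto the tail "
         "of a plane as it takes off is dangerous.")
print(reason_variant(claim))
# -> There is no reason to suggest holding onto the tail of a plane
#    as it takes off is dangerous.
```

The real work, of course, is the human step afterwards: asking whether the substituted sentence still rings true.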


This is basically reinventing Bayesian statistics, but it doesn't require any thinking about probability, priors, or technical lingo. It's a simple heuristic for easily telling whether, in a particular case, a "no evidence" claim is informative. If there is strong reason to suggest something is true, then even lacking evidence, it's worth assuming it's probably true.
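The underlying Bayesian point can be made concrete with a toy update (all numbers here are illustrative assumptions, not claims from the post): if a rigorous study was unlikely to exist whether or not a claim is true, then observing "no study exists" has a likelihood ratio near 1 and barely moves a strong prior built from ordinary reasoning.

```python
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule for a binary hypothesis H given one observation."""
    numer = p_obs_given_h * prior
    denom = numer + p_obs_given_not_h * (1 - prior)
    return numer / denom

# Strong prior, from two seconds of thought, that hanging onto a
# plane's tail is dangerous (illustrative number).
prior = 0.99

# Observation: "no rigorous study exists". Such a study was unlikely
# to be run whether or not the activity is dangerous, so the two
# likelihoods are close and the ratio is near 1 (illustrative numbers).
p_no_study_if_dangerous = 0.95
p_no_study_if_safe = 0.90

print(round(posterior(prior, p_no_study_if_dangerous,
                      p_no_study_if_safe), 4))
# -> 0.9905
```

The "no evidence" observation leaves the belief essentially where the prior put it, which is the sense in which the phrase is uninformative.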

9 comments:

There's no evidence to suggest that this mental substitution will lead to better outcomes.

Relevant Sequences post: Scientific Evidence, Legal Evidence, Rational Evidence. I think this may be a byproduct of the leadership of governmental organizations having too many lawyers (and legal culture); the "only a published peer-reviewed study counts as evidence" thing seems like a natural result of hybridizing the legal notion of evidence (something admissible in court) with the scientific notion of evidence (something produced via the scientific method).

To pick a nit: science accepts (or should accept) evidence which is not produced "via the scientific method". Ordinary everyday experience, anecdotes from friends, historical records, intuitive arguments about "beauty" or "naturalness", inferences from causal models, and pure math all contribute to scientific knowledge. The reason for privileging scientific experiment over most of these is not that the others cannot be evidence, but that rigorous experiment allows us to more thoroughly rule out alternate explanations and establish causation.

You probably already know this, but it's worth reiterating given the context of the above.

A nit on the nit: anything that is in fact used to produce science becomes, by definition, part of the scientific method.

And within research, the distinction between something being at the "anecdotes from friends" level and the "meta-analyses from decades" level is a pretty significant one.

And if we ask someone to "believe science", that is often where their personal "anecdotes from friends" level clashes with the products of professional belief-formers.

The danger model, where science gets ignored, is when everyday experience dominates, which is suspected to produce bad epistemics. There is at least an idea hovering around that science is more reliable because it can get by without this kind of "dirty epistemics", and that science should strive to be as dirt-free as possible in practice.

How do we articulate what the WHO did wrong here, without using the word Bayesian?

The basic problem here is the WHO believing in Evidence-Based Medicine. In that paradigm, you don't use reasons that aren't proper evidence when making medical decisions.

I think it's a useful mental check of what you really mean. It can lead you astray (e.g. "there is no reason to suggest that vaccines cause autism" is not obviously false, not without proper research), but it certainly works in the cases you describe.

Another aspect, besides the degree of support, that such phrasing might refer to is the method of knowing. If one believed image boards and, based on such evidence, is pretty certain about something, that might represent a high degree of support, but trust in those sources of information and judgement is not widely shared. A health authority might feel pressure to act, and it can be argued that it should base its stances on things that have a societal basis and which it can stand behind. If you as an individual act on rumours, you take on the responsibility for the possibility that your actions are reckless and maladaptive. Is a health authority in a position to be reckless on behalf of the public? If you are going to shout fire in a crowded theatre, it is one thing to note whether there is a fire or not, but weighing trampling deaths vs fire-burn deaths (and, I guess, vs play restarts) is another line along which to make the decision.

If you see a spider and are mortally afraid, it might make sense to be empathetic about your being afraid but firm that death is not to be expected. That is, it is understandable to have the reaction, and the reaction is real, but there is a second line of logic that suggests another course of action.

A very important special case of this: There is no evidence that COVID vaccination reduces transmission.

I expect this will be directly useful in my discussions with relatives and friends who are not the LW type. I have trouble getting them to 'accept' Bayesian thinking first and then moving on to the topic at hand; bypassing it this way might give better results. Thank you for this post!