When people encounter other people they recognize, they exclaim "small world!"
I suspect that most people have 300 acquaintances or fewer. I probably have under 100. Still, sometimes I run into people I know and I'm tempted to say "small world".
But it's not actually a small world, is it? It's an unimaginably enormous world.
I mean that literally. You cannot imagine how big the world is.
You're not likely to meet a million people in your life. If you were to meet 100 strangers in 8 hours, you would have less than 5 minutes to spend with each person. If you met 100 strangers every day including weekends, with no vacation days, it would take over 27 years to meet a million people.
How many of those million people would you be able to remember after you've been meeting 100 of them every day for 27.4 years? A few hundred, maybe? A few thousand if you have an especially good memory? It seems to me that even after you've met a million people, your brain is already too small to properly comprehend the thing you just accomplished.
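The arithmetic above is easy to verify; here's a quick back-of-the-envelope check using the numbers from the text:

```python
# Back-of-the-envelope check of the "million people" arithmetic above.
strangers_per_day = 100
minutes_per_person = 8 * 60 / strangers_per_day      # meeting people 8 hours a day
days_to_meet_a_million = 1_000_000 / strangers_per_day
years = days_to_meet_a_million / 365.25              # including weekends, no vacations

print(minutes_per_person)   # 4.8 minutes per person
print(round(years, 1))      # about 27.4 years
```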
And a million people is nothing in this world. This world has over 7,000 million people. It's truly beyond imagination.
There was a time when the entire global anti-vax movement was centered around a single man who wrote a single paper citing the opinion of 12 parents that perhaps the combination MMR (measles, mumps, and rubella) vaccine caused a combination autism and bowel disease, or as the paper put it, "chronic enterocolitis in children that may be related to neuropsychiatric dysfunction." Among other anomalies, this man took unusual steps like holding a press conference about his N=12 study "Early Report", having a "publicist" answer his phone, and filing a patent for a measles vaccine months before publishing his paper.
At that time you could argue that we should Beware The Man of One Study. Science produces many studies, including many that suffer from a small sample size, and even some with large biases. Some studies are even fraudulent. Did you know that over 100,000 papers have been published on the topic of climate change? The point is, any reasonable person won't take a single study as proof (though it is still evidence).
Of course, it's not as if "Beware The Man of One Study" would have ever been an effective argument against an anti-vaxxer, even back then. Somehow, the original claim that "the combination MMR vaccine is related to a bowel disease and autism, and we should give kids 3 single vaccines instead" morphed into "the MMR vaccine causes autism", which turned into "vaccines cause autism". The man of one study "early report" became the global movement of zero studies. And the telephone game alone can't explain this transformation. In an actual telephone game, the last child in line will not insist that what they heard is obviously the real truth and that the rest of the class is engaged in a coverup, nor will the child suspect that maybe the conspiracy goes all the way up to the principal's office. So if somebody can explain why anyone bought into "all vaccines cause autism" in the first place, I'm all ears. (Post hoc ergo propter hoc, obviously, but what's hard to explain is extreme confidence based on basically no evidence.)
So, kudos to those skeptical of an idea supported by just one study or blog post.
It's not enough though.
If there is just one crank or quack with a degree in science or medicine for every hundred ordinary scientists, how many is that?
Very roughly, there are 11 million people with science degrees in the U.S. alone, and if 1 out of every hundred is a crank or quack, that would be 110,000 cranks and quacks with science degrees, including roughly 6,500 cranks and quacks with science PhDs in the U.S. alone. I don't have a good estimate of the prevalence of quackery or crankery, but even if it were only 0.1%, we'd still have 11,000 cranks and quacks with science degrees and 650 with science PhDs in the U.S. That's the nature of living in a Giant World.
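The counts follow directly from the assumed base rates. A minimal sketch (the 11 million degree holders and the implied 650,000 PhDs are the rough estimates from the text; the prevalence rates are pure assumptions):

```python
# Rough U.S. estimates from the text -- not precise figures.
science_degrees = 11_000_000
science_phds = 650_000  # implied by "6,500 cranks at 1%"

for rate in (0.01, 0.001):  # assumed crank/quack prevalence: 1% and 0.1%
    print(rate, int(science_degrees * rate), int(science_phds * rate))
# 1%   -> 110,000 with science degrees, 6,500 with PhDs
# 0.1% ->  11,000 with science degrees,   650 with PhDs
```

Even the pessimistically low 0.1% assumption leaves thousands of credentialed cranks to parade in front of a camera.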
This leads me to propose the Small World Fallacy: the feeling that if you see a long parade of scientists or doctors proposing the same ideas over and over, that idea must surely be correct.
It's the Chinese Robber Fallacy in reverse. The Chinese Robber Fallacy allows you to demonize a group by writing out a parade of negative facts about the group you want to demonize. Like demonizing Chinese people by talking about each and every robbery recorded in the world's largest country. Or if we wanted to demonize cardiologists, we'd dig up every accusation and conviction made against any cardiologist:
It takes a special sort of person to be a cardiologist. This is not always a good thing.
You may have read about one or another of the “cardiologist caught falsifying test results and performing dangerous unnecessary surgeries to make more money” stories, but you might not have realized just how common it really is. Maryland cardiologist performs over 500 dangerous unnecessary surgeries to make money. Unrelated Maryland cardiologist performs another 25 in a separate incident. California cardiologist does “several hundred” dangerous unnecessary surgeries and gets raided by the FBI. Philadelphia cardiologist, same. North Carolina cardiologist, same. 11 Kentucky cardiologists, same. Actually just a couple of miles from my own hospital, a Michigan cardiologist was found to have done $4 million worth of the same. Etc, etc, etc.
My point is not just about the number of cardiologists who perform dangerous unnecessary surgeries for a quick buck. It’s not even just about the cardiology insurance fraud, cardiology kickback schemes, or cardiology research data falsification conspiracies. That could all just be attributed to some distorted incentives in cardiology as a field. My point is that it takes a special sort of person to be a cardiologist.
Consider the sexual harassment. Head of Yale cardiology department fired for sexual harassment with “rampant bullying”. Stanford cardiologist charged with sexually harassing students. Baltimore cardiologist found guilty of sexual harassment. LA cardiologist fined $200,000 for groping med tech. Three different Pennsylvania cardiologists sexually harassing the same woman. Arizona cardiologist suspended on 19 (!) different counts of sexual abuse. One of the “world’s leading cardiologists” fired for sending pictures of his genitals to a female friend. New York cardiologist in trouble for refusing to pay his $135,000 bill at a strip club. Manhattan cardiologist taking naked pictures of patients, then using them to sexually abuse employees. New York cardiologist secretly installs spycam in office bathroom. Just to shake things up, a Florida cardiologist was falsely accused of sexual harassment as part of feud with another cardiologist.
And yeah, you can argue that if you put high-status men in an office with a lot of subordinates, sexual harassment will be depressingly common just as a result of the environment. But there’s also the Texas cardiologist who pled guilty to child molestation. The California cardiologist who killed a two-year-old kid. The author of one of the world’s top cardiology textbooks arrested on charges Wikipedia describes only as “related to child pornography and cocaine”.
Then it gets weird. Did you hear about the Australian cardiologist who is fighting against extradition to Uganda, where he is accused of "terrorism, aggravated robbery and murdering seven people"? What about the Long Island cardiologist who hired a hitman to kill a rival cardiologist, and who was also for some reason looking for "enough explosives to blow up a building"?
Like I said, it takes a special sort of person.
Of course, to prove that our reporting is fair and balanced, we also acknowledge that cardiologists sometimes help people. #NotAllCardiologists
Using this technique in reverse, we seek out the many cranks and quacks who agree with us (just so long as they have academic credentials), gather them all together on the same blog, TV channel or documentary, and sing praises to their credentials and their bravery for coming forward despite the risks to their career. As for any who disagree with us, we simply don't invite them. (Though if we do want the appearance of legitimacy, we could also invite a token voice from the other side. In that case we can talk over them, or edit out their key arguments, or try to goad them into anger so that we appear to be the reasonable ones, or invite an expert in a certain field (e.g. glaciology) and then counter him with arguments about related fields (e.g. ocean science) that the expert doesn't know much about. Or we can simply take advantage of the fact that most scientists are not stars of their college debate club, and face the scientist off against a quack with years of experience in debate and salesmanship.)
So that's the Small World Fallacy. Related to it is what I will call the Gish Fallacy, named after the Gish Gallop: a series of arguments delivered in rapid succession so that there are too many arguments for your debate opponent to address. The Gish Fallacy, then, is to believe that a long series of arguments constitutes good evidence that a belief is true. (Plus there's another small world fallacy, where e.g. 1,000 deaths is treated as a large number in a country of 330 million people, while inconveniently high numbers are stated as a percentage of the population instead. Probably this trick has another name.)
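The count-versus-percentage trick in that parenthetical is easy to see with numbers (a sketch; the 1,000 deaths and 330 million population are the figures from the text):

```python
# The same fact, framed two ways.
deaths = 1_000
population = 330_000_000

as_count = f"{deaths:,} deaths"  # sounds alarmingly large
as_share = f"{100 * deaths / population:.4f}% of the population"  # sounds negligible

print(as_count)  # 1,000 deaths
print(as_share)  # 0.0003% of the population
```

Whichever framing a source consistently chooses tells you which reaction it wants from you.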
By themselves, the Small World fallacy and the Gish Fallacy aren't very interesting, because they can be understood as reasonable consequences of how humans process information. Each new piece of information fits into either a mental model or (more often) a story/narrative, which any good Bayesian would recognize as evidence for the proposition(s) supported by that mental model or narrative.
In other words, it's more likely that you would hear people say "vaccines cause autism" in a world where vaccines do cause autism than in a world where they don't. It's also more likely that you would see a parade of doctors talking about the dangers of vaccines in a world where vaccines are dangerous than in a world where they aren't.
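That Bayesian point can be made concrete. Suppose (all numbers hypothetical) you'd hear such claims 90% of the time in a world where they're true, but also 30% of the time in a world where they're false, since cranks exist either way. The correct update is the likelihood ratio, and it's modest:

```python
# Hypothetical likelihoods -- for illustration only.
p_hear_given_true = 0.9
p_hear_given_false = 0.3   # you'd hear some of this even if the claim were false

prior_odds = 0.01 / 0.99   # start quite skeptical: 1% prior
likelihood_ratio = p_hear_given_true / p_hear_given_false  # = 3.0
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(round(posterior_prob, 3))  # ~0.029: genuine evidence, nowhere near certainty
```

Hearing the claim is evidence, but only a factor-of-3 nudge, not the overwhelming proof it feels like.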
So there's actually nothing wrong with believing those doctors and coming away thinking that "vaccines cause autism" or "spike protein is dangerous" or even "Covid vaccine could be worse than the disease". This is all fine! Believing this can be perfectly reasonable under circumstances in which you've accidentally received a biased stream of information.
It's just that...
We don't live in that world.
In our world you hear both "vaccines cause infertility" and "there's no evidence vaccines cause infertility" (autism is so 1998 — try to keep up), and then somehow you pick one of those statements and are completely confident that you picked correctly.
The problem comes when someone provides evidence that a particular vaccine could possibly cause infertility and you completely ignore it. (When I heard this, I didn't ignore it; I listened closely and remain open to evidence to this day. It's just that I need much more evidence than "one guy said this on a blog and then some other guys cited the blog.")
By the same token, the problem comes when someone provides evidence that the guy who said "the ovaries get the highest concentration" of vaccine LNPs was lying. At this point, is your response to refuse to acknowledge even a chance that he isn't trustworthy?
If so, you may be a proud member of at least half the population (including, no doubt, some LessWrong fans). I'm not talking about the minority of Americans who refuse Covid vaccines — I'm talking about the majority who ignore evidence, regardless of political stripe.
Whatever it is, it's a real problem that causes real conflict and real deaths. I would go so far as to say that lousy epistemic practice, on the whole, not only kills people, but is the root cause of most suffering and early death in the world.
Case in point: My uncle — and former legal guardian, a man who I grew up with for 8 years and who gave me my first real job — died last week after spending weeks on a ventilator following a Covid-19 infection and stroke. I will be attending the funeral tomorrow.
Like my own father, my uncle was unvaccinated.
Will his brother's death affect my father's views on vaccination? I doubt it. I predict he will blame the stroke and the hospital staff for refusing to give him drugs such as ivermectin (if they didn't give him ivermectin; I really have no idea). "Covid wasn't what killed him", he will say, "and vaccines are still dangerous".
My dad, you see, has been watching his very own Small World Fallacy, a "faith-based" TV channel called Daystar with its own dedicated anti-vax web site. It features a parade of opinions from people called "doctor", bringing far-left luminaries like Robert Kennedy Jr together with the Evangelical Right, plus gospel truths from the original anti-vaxxer Andrew Wakefield in the film "Vaxxed".
- After you filter out one side of a debate, the other side is still a very large group that can be used to create the Small World Fallacy: an impression of tremendous evidence based on the sheer number of proponents of a theory. It's often paired with the Gish Fallacy: an impression of tremendous evidence created by a large number of arguments.
- Therefore, to the extent that an information source filters out ideas/analyses based simply on what conclusion those ideas/analyses lead to, a large collection of supporters and arguments presented for a theory do not prove or disprove the theory, but should reduce your confidence in the trustworthiness of the source. Even if you like the source, it could be misleading you.
But all of this leaves us in a pickle. Without becoming experts ourselves, how are we supposed to tell which side of the debate is right?
- Even if the mainstream media were trustworthy, it lost most of its funding when the internet arrived. It not only competes with unpaid bloggers like myself, but faces a mentality that "information should be free".
- The CDC and FDA have said and done boneheaded things throughout the pandemic. When, how and why can we trust anything they say?
- Scientists and journalists are paid! Can we trust them anyway, or should we put our faith in bloggers who make wild accusations for free? Or maybe we should trust the private sector? "Greed is good", so any research they fund must be kosher?
The non-answer to this is "trust no one". But most people use "trust no one" as an excuse to believe whatever the hell they want.
Here are some practices I would advocate:
First, don't trust any source that consistently sides with one political party or one political ideology, because Politics is the Mind Killer.
Second, more generally, be suspicious of a source that filters out information according to whether it points toward The Desired Conclusion. Such sources aren't useless, but are certainly not to be trusted. Prefer to read sources without obvious biases. Spend time looking for a variety of opinions, and hang out with smart people who share your disdain for echo chambers.
Third, consider scientists (and other experts talking about their own field) to be generally more trustworthy than non-scientists (full disclosure: I'm not a scientist), and consider scientists as a group to be more trustworthy than any individual scientist.
I'm not saying you can trust any random scientist. And yes there is a replication crisis, and social science doesn't have a good reputation. But it seems like a great many people think that you can trust a non-scientist because they sound trustworthy, or speak with confidence, or tell a good story, or most dangerously, share your politics.
In other words, people think they can ignore credentials and trust someone who "speaks to their gut", when in fact this is a great way to end up believing bullshit. Another way people screw up is to think someone is trustworthy because they use a lot of technical language that sounds scientific. Unfortunately, this is ambiguous; they might be truthful, or they might be using fancy words in an effort to look smart. Even someone who has the university degree of an expert, and has published papers in a field, might be a crank in that same field (though cranks often hop over to nearby fields). And while only a small minority of scientists are cranks, cranks have a tendency to attract far more attention than non-cranks. It's not necessarily that cranks are more charismatic, but they are always very confident and have very strong views, and it seems like a lot of influencers are attracted to confident people who sound trustworthy, tell a good story, share their politics and make bold statements. Thus, cranks rise to the top.
The fact that many scientists are awful communicators who are lousy at telling stories is not a point against them. It means that they were more interested in figuring out the truth than in figuring out how to win popularity contests.
So, trust scientific consensus where available. However, scientific consensus information is often hard to find, or no one has gathered it. Plus, information you are told about consensus could be biased. I heard, for instance, that there was a 97% consensus about something, but it turned out to be more like a 90% consensus, give or take, when I researched it. That's still pretty decent, but importantly, it turned out that the other 10%-ish were highly disunified, often proposing different explanations; there was no serious competing theory for them to rally around.
And this brings me to another reason why scientists tend to be more trustworthy: they tend to have "gears-level models", i.e. their understanding of the world is mechanistic, like a computer; it's the kind of understanding that allows predictions to be made and checked, which in turn allows their models to be refined over time (or in some cases thrown out completely) when it makes prediction errors. Unlike layperson explanations or post-hoc rationalizations, this allows scientific models to improve over time, until eventually all scientists end up believing the same thing. This is not groupthink; careful scientific thinking and experiments allow different people to arrive at the same conclusion independently. In contrast, many people calling themselves "independent thinkers" come up with suspiciously different physical mechanisms to justify their suspiciously similar beliefs.
Fourth, if you can't figure out what the consensus is, but you still want to know if a theory is true, research two bold claims from that theory in some detail — the first two bold claims will do nicely. Ideally, however, don't pick claims from an obvious crank or you'll bias your own conclusion; pick the most reasonable-sounding version of the theory you know of. Search Google Scholar, email experts, read a textbook about the topic of interest, or call a random professor in a random university on the goddamn phone if that's what it takes.
But the detail is the important thing. People are normally motivated to stop their research when they have "proven" the conclusion they like. For many people this just means posting an article to Facebook because the headline spoke to them, so in comparison you probably think you're some kind of genius for searching on YouTube for a controversial claim and finding a video supporting or refuting it. Sorry, that's not enough. Keep digging until you know a lot of detail about at least one of those claims. Where did it come from? How much evidence is there? Is there a competing theory for the same evidence? How often do scientists agree or disagree? Does readily-available data fit the theory? Does readily-available data fit a competing theory? It may sound like a lot of work, and it could be, but if you really care about the topic, you are only researching two claims and you should be able to push through it. This is called epistemic spot checking, and it works pretty well because cranks usually lie a lot. Therefore every bold claim from a crank is much more likely to be false than true, and two truthful bold claims in a row is strong evidence that the source is either truthful or unusually lucky. (If it turns out that one claim is true and the other is false, chances are the theory can't be trusted, but check a third claim to be sure.)
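The logic of spot checking can be sketched with made-up numbers: if a crank's bold claims are true only, say, 20% of the time, while an honest source's are true 90% of the time, then two independent true claims in a row shifts the odds substantially:

```python
# Assumed per-claim truth rates -- hypothetical numbers for illustration.
p_true_claim = {"honest": 0.9, "crank": 0.2}
prior = {"honest": 0.5, "crank": 0.5}  # agnostic prior

# Probability of observing two true bold claims in a row under each hypothesis,
# treating the claims as independent checks.
likelihood = {h: p_true_claim[h] ** 2 for h in prior}

# Bayes update.
unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in prior}

print(round(posterior["honest"], 2))  # ~0.95: two hits in a row favor "honest"
```

A single false bold claim cuts the other way even harder, which is why spot checking only two claims already carries real information.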
As an example, I picked a video on Daystar featuring two names I didn't recognize, and recorded a passage with two bold claims from Vladimir Zelenko's Gish gallop. Anybody want to tackle these?
"...Professor Dolores Cahill from Ireland saying that she believes that within two years 90% of the people that got the vaccine will be dead. Dr. Michael Yeadon, who was the Vice President of Pfizer and the head of their vaccine development program, saying for every one child that dies naturally from Covid, a hundred will die from the vaccine, statistically. So if that's not child sacrifice, I don't know what is.
By the way, don't believe a word that I'm saying. Don't make the same mistake you've done with the government by believing them blindly. Take the information I'm giving you, make sure that I'm accurate, make sure that I'm conveying the truth, and then reach your own conclusions. But I'm giving you very specific information that you can look up and you'll see where I'm getting it from."
Edit: sorry, I broke my own rule about not choosing an obvious crank. Normally I would fix this by finding a more reasonable-sounding guest, but I've got to leave for the funeral in 10 minutes. You know, for my uncle who listened to people like this? Anyway the show did try their best to present him as a non-crank by touting his 20 years experience as an MD and his peer reviewed publication"s".
Fifth, if someone is making false claims, it should reduce your opinion of their credibility. If someone is making original false claims that millions of people are parroting, that's a crank dammit. Don't lend them any credence! I would also stress that people have reputation. Liars keep lying; honest people keep being honest. All honest people make mistakes and are sometimes wrong, but cranks and liars are reliably full of crap.
Sixth, look for people who have a history of good forecasting. Predicting the future is hard, so a person who proves they are good at predicting the future has also proven a penchant for clear thought. (Now, can anyone tell me how to find blogs written by superforecasters?)
Seventh, if you read this all the way through, your epistemology was probably pretty good in the first place and you hardly needed this advice. Nevertheless I do want to stress that "who should I trust?" is a question whose difficulty is wildly underestimated, and the fact that 100 million people can so vehemently disagree with another 100 million people about simple factual questions like "does it cause autism?" is evidence of this.
Eighth, there really should be more and better methods available than those above. For instance, research is hard, peer-reviewed articles are jargon-filled to the point of incomprehensibility, and we shouldn't all have to do separate individual research. Someday I want to build an evidence-aggregation web site so we can collectively work out the truth using mathematically sane crowdsourcing. Until then, see above.