Trust and The Small World Fallacy

by DPiepgrass · 19 min read · 4th Oct 2021 · 14 comments

25

Fallacies · Rationality · Frontpage

When people encounter other people they recognize, they exclaim "small world!"

I suspect that most people have 300 acquaintances or fewer. I probably have under 100. Still, sometimes I run into people I know and I'm tempted to say "small world".

But it's not actually a small world, is it? It's an unimaginably enormous world.

I mean that literally. You cannot imagine how big the world is.

You're not likely to meet a million people in your life. If you were to meet 100 strangers in 8 hours, you would have less than 5 minutes to spend with each person. If you met 100 strangers every day including weekends, with no vacation days, it would take over 27 years to meet a million people.
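The arithmetic above can be checked in a few lines (a quick sketch using only the figures from this paragraph):

```python
# Rough arithmetic behind "meeting a million people".
# All figures come from the paragraph above.

minutes_per_stranger = 8 * 60 / 100        # 100 strangers in one 8-hour day
print(minutes_per_stranger)                # 4.8 minutes per person

days_needed = 1_000_000 / 100              # at 100 strangers per day
years_needed = days_needed / 365.25
print(round(years_needed, 1))              # about 27.4 years
```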

How many of those million people would you be able to remember after you've been meeting 100 of them every day for 27.4 years? A few hundred, maybe? A few thousand if you have an especially good memory? It seems to me that even after you've met a million people, your brain is already too small to properly comprehend the thing you just accomplished.

And a million people is nothing in this world. This world has over 7,000 million people. It's truly beyond imagination.

There was a time when the entire global anti-vax movement was centered around a single man who wrote a single paper citing the opinion of 12 parents that perhaps the combination MMR (measles, mumps, and rubella) vaccine caused a combination autism and bowel disease, or as the paper put it, "chronic enterocolitis in children that may be related to neuropsychiatric dysfunction." Among other anomalies, this man took unusual steps like holding a press conference about his N=12 study "Early Report", having a "publicist" answer his phone, and filing a patent for a measles vaccine months before publishing his paper.

At that time you could argue that we should Beware The Man of One Study. Science produces many studies, including many that suffer from a small sample size, and even some with large biases. Some studies are even fraudulent. Did you know that over 100,000 papers have been published on the topic of climate change? The point is, any reasonable person won't take a single study as proof (though it is still evidence).

Of course, it's not as if "Beware The Man of One Study" would have ever been an effective argument against an anti-vaxxer, even back then. Somehow, the original claim that "the combination MMR vaccine is related to a bowel disease and autism, and we should give kids 3 single vaccines instead" morphed into "the MMR vaccine causes autism" which turned into "vaccines cause autism". The man of one study "early report" became the global movement of zero studies. And the telephone game alone can't explain this transformation. In an actual telephone game, the last child in line will not insist that what they heard is obviously the real truth and that the rest of the class is engaged in a coverup, nor will the child suspect that maybe the conspiracy goes all the way up to the principal's office. So if somebody can explain why anyone bought into "all vaccines cause autism" in the first place, I'm all ears. (Post hoc ergo propter hoc, obviously, but what's hard to explain is extreme confidence based on basically no evidence.)

So, kudos to those skeptical of an idea supported by just one study or blog post.

It's not enough though.

If there is just one crank or quack with a degree in science or medicine for every hundred ordinary scientists, how many is that?

Very roughly, there are 11 million people with science degrees in the U.S. alone, and if 1 out of every hundred is a crank or quack, that would be 110,000 cranks and quacks with science degrees, including roughly 6,500 cranks and quacks with science PhDs in the U.S. alone. I don't have a good estimate of the prevalence of quackery or crankery, but even if it were only 0.1%, we'd still have 11,000 cranks and quacks with science degrees and 650 with science PhDs in the U.S. That's the nature of living in a Giant World.
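As a sketch of that back-of-envelope calculation (the prevalence rates are the hypotheticals from the paragraph above, not measured values):

```python
# Crank/quack headcount under assumed prevalence rates.
# Population figures are from the text; the rates are hypotheticals.
science_degrees_us = 11_000_000
science_phds_us = 650_000   # implied by the text's 6,500 cranks at 1%

for prevalence in (0.01, 0.001):   # 1% and 0.1% hypothetical crank rates
    cranks_with_degrees = round(science_degrees_us * prevalence)
    cranks_with_phds = round(science_phds_us * prevalence)
    print(prevalence, cranks_with_degrees, cranks_with_phds)
```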

This leads me to propose the Small World Fallacy: the feeling that if you see a long parade of scientists or doctors proposing the same ideas over and over, that idea must surely be correct.

It's the Chinese Robber Fallacy in reverse. The Chinese Robber Fallacy allows you to demonize a group by writing out a parade of negative facts about the group you want to demonize. Like demonizing Chinese people by talking about each and every robbery recorded in the world's largest country. Or if we wanted to demonize cardiologists, we'd dig up every accusation and conviction made against any cardiologist:

It takes a special sort of person to be a cardiologist. This is not always a good thing.

You may have read about one or another of the “cardiologist caught falsifying test results and performing dangerous unnecessary surgeries to make more money” stories, but you might not have realized just how common it really is. Maryland cardiologist performs over 500 dangerous unnecessary surgeries to make money. Unrelated Maryland cardiologist performs another 25 in a separate incident. California cardiologist does “several hundred” dangerous unnecessary surgeries and gets raided by the FBI. Philadelphia cardiologist, same. North Carolina cardiologist, same. 11 Kentucky cardiologists, same. Actually just a couple of miles from my own hospital, a Michigan cardiologist was found to have done $4 million worth of the same. Etc, etc, etc.

My point is not just about the number of cardiologists who perform dangerous unnecessary surgeries for a quick buck. It’s not even just about the cardiology insurance fraud, cardiology kickback schemes, or cardiology research data falsification conspiracies. That could all just be attributed to some distorted incentives in cardiology as a field. My point is that it takes a special sort of person to be a cardiologist.

Consider the sexual harassment. Head of Yale cardiology department fired for sexual harassment with “rampant bullying”. Stanford cardiologist charged with sexually harassing students. Baltimore cardiologist found guilty of sexual harassment. LA cardiologist fined $200,000 for groping med tech. Three different Pennsylvania cardiologists sexually harassing the same woman. Arizona cardiologist suspended on 19 (!) different counts of sexual abuse. One of the “world’s leading cardiologists” fired for sending pictures of his genitals to a female friend. New York cardiologist in trouble for refusing to pay his $135,000 bill at a strip club. Manhattan cardiologist taking naked pictures of patients, then using them to sexually abuse employees. New York cardiologist secretly installs spycam in office bathroom. Just to shake things up, a Florida cardiologist was falsely accused of sexual harassment as part of feud with another cardiologist.

And yeah, you can argue that if you put high-status men in an office with a lot of subordinates, sexual harassment will be depressingly common just as a result of the environment. But there’s also the Texas cardiologist who pled guilty to child molestation. The California cardiologist who killed a two-year-old kid. The author of one of the world’s top cardiology textbooks arrested on charges Wikipedia describes only as “related to child pornography and cocaine”.

Then it gets weird. Did you hear about the Australian cardiologist who is fighting against extradition to Uganda, where he is accused of “terrorism, aggravated robbery and murdering seven people”? What about the Long Island cardiologist who hired a hitman to kill a rival cardiologist, and who was also for some reason looking for “enough explosives to blow up a building”?

Like I said, it takes a special sort of person.

Of course, to prove that our reporting is fair and balanced, we also acknowledge that cardiologists sometimes help people. #NotAllCardiologists

Using this technique in reverse, we seek out the many cranks and quacks who agree with us (just so long as they have academic credentials), gather them all together on the same blog, TV channel or documentary, and sing praises to their credentials and their bravery for coming forward despite the risks to their career. As for any who disagree with us, we simply don't invite them. (Though if we do want the appearance of legitimacy, we could also invite a token voice from the other side. In that case we can talk over them, or edit out their key arguments, or try to goad them into anger so that we appear to be the reasonable ones, or invite an expert in a certain field (e.g. glaciology) and then counter him with arguments about related fields (e.g. ocean science) that the expert doesn't know much about. Or we can simply take advantage of the fact that most scientists are not stars of their college debate club, and face the scientist off against a quack with years of experience in debate and salesmanship.)

So that's the Small World Fallacy. Related to it is what I will call the Gish Fallacy, named after the Gish Gallop: a series of arguments delivered in rapid succession so that there are too many arguments for your debate opponent to address. The Gish Fallacy, then, is to believe that a long series of arguments constitutes good evidence that a belief is true. (Plus there's another small world fallacy, where e.g. 1,000 deaths is treated as a large number in a country of 330 million people, while inconveniently high numbers are stated as a percentage of the population instead. Probably this trick has another name.)

By themselves, the Small World fallacy and the Gish Fallacy aren't very interesting, because they can be understood as reasonable consequences of how humans process information. Each new piece of information fits into either a mental model or (more often) a story/narrative, which any good Bayesian would recognize as evidence for the proposition(s) supported by that mental model or narrative.

In other words, it's more likely that you would hear people say "vaccines cause autism" in a world where vaccines do cause autism than in a world where they don't. It's also more likely that you would see a parade of doctors talking about the dangers of vaccines in a world where vaccines are dangerous than in a world where they aren't.
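In Bayesian terms this is an update on odds, which a toy calculation can make concrete (a sketch; every probability in it is an illustrative assumption, not data):

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
# All probabilities below are made-up numbers for illustration.

def posterior_odds(prior_odds, p_hear_if_true, p_hear_if_false):
    return prior_odds * (p_hear_if_true / p_hear_if_false)

# If you'd be 5x as likely to hear doctors warn about a danger in a
# world where it's real, hearing them is genuine evidence:
print(posterior_odds(0.01, 0.50, 0.10))   # odds rise from 0.01 to ~0.05

# But a source that parades supporting "doctors" either way has a
# likelihood ratio near 1, so its reports carry almost no information:
print(posterior_odds(0.01, 0.90, 0.85))   # odds barely move (~0.0106)
```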

So there's actually nothing wrong with believing those doctors and coming away thinking that "vaccines cause autism" or "spike protein is dangerous" or even "Covid vaccine could be worse than the disease". This is all fine! Believing this can be perfectly reasonable under circumstances in which you've accidentally received a biased stream of information.

It's just that...

We don't live in that world.

In our world you hear both "vaccines cause infertility" and "there's no evidence vaccines cause infertility" (autism is so 1998 — try to keep up), and then somehow you pick one of those statements and are completely confident that you picked correctly.

The problem comes when someone provides evidence that a particular vaccine could possibly cause infertility and you completely ignore it. (When I heard this, I didn't ignore it, I listened closely and remain open to evidence to this day. It's just that I need much more evidence than "one guy said this on a blog and then some other guys cited the blog.")

By the same token, the problem comes when someone provides evidence that the guy who said "the ovaries get the highest concentration" of vaccine LNPs was lying. At this point, is your response to refuse to acknowledge even a chance that he isn't trustworthy?

If so, you may be a proud member of at least half the population (including, no doubt, some LessWrong fans). I'm not talking about the minority of Americans who refuse Covid vaccines — I'm talking about the majority who ignore evidence, regardless of political stripe.

What do we call this behavior? Tim Urban calls it The Attorney on the Thinking Ladder. Scott Alexander calls it a trapped prior, maybe forgetting his earlier musings about related medical conditions.

Whatever it is, it's a real problem that causes real conflict and real deaths. I would go so far as to say that lousy epistemic practice, on the whole, not only kills people, but is the root cause of most suffering and early death in the world.

Case in point: My uncle — and former legal guardian, a man who I grew up with for 8 years and who gave me my first real job — died last week after spending weeks on a ventilator following a Covid-19 infection and stroke. I will be attending the funeral tomorrow.

Like my own father, my uncle was unvaccinated.

Will his brother's death affect my father's views on vaccination? I doubt it. I predict he will blame the stroke and the hospital staff for refusing to give him drugs such as ivermectin (if they didn't give him ivermectin; I really have no idea). "Covid wasn't what killed him", he will say, "and vaccines are still dangerous".

My dad, you see, has been watching his very own Small World Fallacy, a "faith-based" TV channel called Daystar with its own dedicated anti-vax web site. It features a parade of opinions from people called "doctor", bringing far-left luminaries like Robert Kennedy Jr together with the Evangelical Right, plus gospel truths from the original anti-vaxxer Andrew Wakefield in the film "Vaxxed".

In summary:

  • After you filter out one side of a debate, the other side is still a very large group that can be used to create the Small World Fallacy: an impression of tremendous evidence based on the sheer number of proponents of a theory. It's often paired with the Gish Fallacy: an impression of tremendous evidence created by a large number of arguments.
  • Therefore, to the extent that an information source filters out ideas/analyses based simply on what conclusion those ideas/analyses lead to, a large collection of supporters and arguments presented for a theory do not prove or disprove the theory, but should reduce your confidence in the trustworthiness of the source. Even if you like the source, it could be misleading you.

But all of this leaves us in a pickle. Without becoming experts ourselves, how are we supposed to tell which side of the debate is right?

  • Even if the mainstream media were trustworthy, it lost most of its funding when the internet arrived. It not only competes with unpaid bloggers like myself, but faces a mentality that "information should be free".
  • The CDC and FDA have said and done boneheaded things throughout the pandemic. When, how and why can we trust anything they say?
  • Scientists and journalists are paid! Can we trust them anyway, or should we put our faith in bloggers who make wild accusations for free? Or maybe we should trust the private sector? "Greed is good", so any research they fund must be kosher?

The non-answer to this is "trust no one". But most people use "trust no one" as an excuse to believe whatever the hell they want.

Here are some practices I would advocate:

First, don't trust any source that consistently sides with one political party or one political ideology, because Politics is the Mind Killer.

Second, more generally, be suspicious of a source that filters out information according to whether it points toward The Desired Conclusion. Such sources aren't useless, but are certainly not to be trusted. Prefer to read sources without obvious biases. Spend time looking for a variety of opinions, and hang out with smart people who share your disdain for echo chambers.

Third, consider scientists (and other experts talking about their own field) to be generally more trustworthy than non-scientists (full disclosure: I'm not a scientist), and consider scientists as a group to be more trustworthy than any individual scientist.

I'm not saying you can trust any random scientist. And yes there is a replication crisis, and social science doesn't have a good reputation. But it seems like a great many people think that you can trust a non-scientist because they sound trustworthy, or speak with confidence, or tell a good story, or most dangerously, share your politics.

In other words, people think they can ignore credentials and trust someone who "speaks to their gut", when in fact this is a great way to end up believing bullshit. Another way people screw up is to think someone is trustworthy because they use a lot of technical language that sounds scientific. Unfortunately, this is ambiguous; they might be truthful, or they might be using fancy words in an effort to look smart. Even someone who has the university degree of an expert, and has published papers in a field, might be a crank in that same field (though cranks often hop over to nearby fields). And while only a small minority of scientists are cranks, cranks have a tendency to attract far more attention than non-cranks. It's not necessarily that cranks are more charismatic, but they are always very confident and have very strong views, and it seems like a lot of influencers are attracted to confident people who sound trustworthy, tell a good story, share their politics and make bold statements. Thus, cranks rise to the top.

The fact that many scientists are awful communicators who are lousy at telling stories is not a point against them. It means that they were more interested in figuring out the truth than in figuring out how to win popularity contests.

So, trust scientific consensus where available. However, scientific consensus information is often hard to find, or no one has gathered it. Plus, information you are told about consensus could be biased. I heard, for instance, that there was a 97% consensus about something, but it turned out to be more like a 90% consensus, give or take, when I researched it. That's still pretty decent, but importantly, it turned out that the other 10%-ish were highly disunified, often proposing different explanations; there was no serious competing theory for them to rally around.

And this brings me to another reason why scientists tend to be more trustworthy: they tend to have "gears-level models", i.e. their understanding of the world is mechanistic, like a computer; it's the kind of understanding that allows predictions to be made and checked, which in turn allows their models to be refined over time (or in some cases thrown out completely) when it makes prediction errors. Unlike layperson explanations or post-hoc rationalizations, this allows scientific models to improve over time, until eventually all scientists end up believing the same thing. This is not groupthink; careful scientific thinking and experiments allow different people to arrive at the same conclusion independently. In contrast, many people calling themselves "independent thinkers" come up with suspiciously different physical mechanisms to justify their suspiciously similar beliefs.

Fourth, if you can't figure out what the consensus is, but you still want to know if a theory is true, research two bold claims from that theory in some detail — the first two bold claims will do nicely. Ideally, however, don't pick claims from an obvious crank or you'll bias your own conclusion; pick the most reasonable-sounding version of the theory you know of. Search Google Scholar, email experts, read a textbook about the topic of interest, or call a random professor in a random university on the goddamn phone if that's what it takes.

But the detail is the important thing. People are normally motivated to stop their research when they have "proven" the conclusion they like. For many people this just means posting an article to Facebook because the headline spoke to them, so in comparison you probably think you're some kind of genius for searching on YouTube for a controversial claim and finding a video supporting or refuting it. Sorry, that's not enough. Keep digging until you know a lot of detail about at least one of those claims. Where did it come from? How much evidence is there? Is there a competing theory for the same evidence? How often do scientists agree or disagree? Does readily-available data fit the theory? Does readily-available data fit a competing theory? It may sound like a lot of work, and it could be, but if you really care about the topic, you are only researching two claims and you should be able to push through it. This is called epistemic spot checking, and it works pretty well because cranks usually lie a lot. Therefore every bold claim from a crank is much more likely to be false than true, and two truthful bold claims in a row is strong evidence that the source is either truthful or unusually lucky. (If it turns out that one claim is true and the other is false, chances are the theory can't be trusted, but check a third claim to be sure.)
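Why two successful spot checks are informative can itself be sketched as a toy Bayesian model (the 0.2 and 0.9 rates below are assumptions for illustration, not measurements, and the claims are treated as independent):

```python
# Toy model: how much do n verified bold claims lower P(crank)?
# Assumed rates: a crank's bold claim checks out 20% of the time,
# an honest source's 90% of the time.

def p_crank_after_checks(prior_crank, n_claims_verified,
                         p_true_if_crank=0.2, p_true_if_honest=0.9):
    like_crank = p_true_if_crank ** n_claims_verified
    like_honest = p_true_if_honest ** n_claims_verified
    num = prior_crank * like_crank
    return num / (num + (1 - prior_crank) * like_honest)

print(round(p_crank_after_checks(0.5, 1), 3))  # ~0.182 after one true claim
print(round(p_crank_after_checks(0.5, 2), 3))  # ~0.047 after two in a row
```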

As an example I picked a video on Daystar featuring two names I didn't recognize, and recorded a passage with two bold claims from Vladimir Zelenko's gish gallop. Anybody want to tackle these?

"...Professor Dolores Cahill from Ireland saying that she believes that within two years 90% of the people that got the vaccine will be dead. Dr. Michael Yeadon, who was the Vice President of Pfizer and the head of their vaccine development program, saying for every one child that dies naturally from Covid, a hundred will die from the vaccine, statistically. So if that's not child sacrifice, I don't know what is.

By the way, don't believe a word that I'm saying. Don't make the same mistake you've done with the government by believing them blindly. Take the information I'm giving you, make sure that I'm accurate, make sure that I'm conveying the truth, and then reach your own conclusions. But I'm giving you very specific information that you can look up and you'll see where I'm getting it from."

Edit: sorry, I broke my own rule about not choosing an obvious crank. Normally I would fix this by finding a more reasonable-sounding guest, but I've got to leave for the funeral in 10 minutes. You know, for my uncle who listened to people like this? Anyway the show did try their best to present him as a non-crank by touting his 20 years experience as an MD and his peer reviewed publication"s".

Fifth, if someone is making false claims, it should reduce your opinion of their credibility. If someone is making original false claims that millions of people are parroting, that's a crank dammit. Don't lend them any credence! I would also stress that people have reputation. Liars keep lying; honest people keep being honest. All honest people make mistakes and are sometimes wrong, but cranks and liars are reliably full of crap.

Sixth, look for people who have a history of good forecasting. Predicting the future is hard, so a person who proves they are good at predicting the future has also proven a penchant for clear thought. (Now, can anyone tell me how to find blogs written by superforecasters?)

Seventh, read the Sequences to improve yourself. Internalize the 12 virtues of rationality and all that. This stuff isn't perfect, but I don't know of anything better.

Eighth, if you read this all the way through, your epistemology was probably pretty good in the first place and you hardly needed this advice. Nevertheless I do want to stress that "who should I trust?" is a question whose difficulty is wildly underestimated, and the fact that 100 million people can so vehemently disagree with another 100 million people about simple factual questions like "does it cause autism?" is evidence of this.

Ninth, there really should be more and better methods available than those above. For instance, research is hard, peer-reviewed articles are jargon-filled to the point of incomprehensibility, and we shouldn't all have to do separate individual research. Someday I want to build an evidence-aggregation web site so we can collectively work out the truth using mathematically sane crowdsourcing. Until then, see above.


14 comments

Regarding problems related to pseudoscientific quacks and cranks like the examples given, at this point it seems obvious that we need to take for granted that there will be causal factors that, absent effective interventions, will induce large sections of society to embrace pseudo-scientific conspiracy theories. In other words, we should assume that if there is another pandemic in a decade or two, there will be more conspiracy theories.

At that point in time, people will beware science again because they'll recall the conspiracies they believed in from the pandemic of 2019-2022 and how their misgivings about the science back then were never resolved either. As there is remaining skepticism of human-caused climate change now, in the world a couple decades from now after QAnon, don't be shocked if there are conspiracy theories about how catastrophic natural disasters are caused by weather control carried out by the same governments who tried convincing everyone decades ago that climate change was real. 

At present, we live in a world where the state of conspiracy theories in society has evolved to a point that it's insufficient to think about them in terms of how they were thought about even a decade ago. Conspiracy theories like how the moon landing was faked or even how 9/11 was a false flag attack don't seem to have the weight and staying power of conspiracy theories today. A decade from now, I expect COVID-19 conspiracy theories won't be the butt of jokes the same way those other conspiracy theories were. Those other conspiracy theories didn't cause thousands of people to have their lives so needlessly shortened. I'm aware in the last few years there has been an increased investment in academia to research the nature of conspiracy theories as a way to combat them.

It also doesn't help that we live in a time when some of the worst modern conspiracies or otherwise clandestine activities by governments are being confirmed. Lies early in the pandemic about the scientific consensus on the effectiveness of masks, and the common denial of any evidence that COVID-19 could have originated in a lab, are examples from that one case alone. From declassified documents in recent years proving CIA conspiracies from decades ago, to stories breaking every couple of years about the lengths governments have gone to cover up their illegal and clandestine activities, it's becoming harder in general to blame anyone for believing conspiracy theories.

Given such a low rate of crankery among scientists, and how that alone has proven sufficient to give a veneer of scientific credibility to the most extreme COVID-19 conspiracy theories, it seems like the main chokepoint won't be neutralizing the spread of the message at its original source, i.e., that small percentage of cranks among experts. (By neutralize I don't mean anything like stopping their capacity to speak freely, but countering them with other free speech, in the form of a strategy built on communication tactics deploying the most convincing knock-down arguments as soon as any given crank is on the brink of becoming popular.) It's also self-evident that it's insufficient to undo the spread of a conspiracy theory once it's hit critical mass.

Based on the few articles I've read on the research that's been done on this subject in the last few years, the chokepoint in the conspiracy theory pipeline to focus on for the greatest impact may be neutralizing their viral spread as they first begin growing in popularity on social media. Again, with the cranks at the beginning of that pipeline, stopping the spread of so many conspiracy theories at their points of origin may prove too difficult. The best bet may be not to eliminate them in the first place but to minimize how much they spread once the spread becomes apparent.

This entails anticipating different kinds of conspiracy theories before they happen, perhaps years in advance. In other words, for the most damaging kinds of conspiracy theories one can most easily imagine taking root among the populace in the years to come, the time to begin mitigating the impact they will have is now. 

Regarding the potential of prediction markets to combat this kind of problem, we could suggest that the prediction markets that are already related to the rationality community in some way begin facilitating predictions of future (new developments in) conspiracy theories starting now. 

The fact that many scientists are awful communicators who are lousy at telling stories is not a point against them. It means that they were more interested in figuring out the truth than in figuring out how to win popularity contests.

This implies to me that there is a market for science communicators who specialize in winning popularity contests, but do so to spread the message of scientific consensus in a way optimized to combat the most dangerous pseudoscience and misinformation/disinformation. It seemed like the Skeptics movement was trying to do the latter, if not the part about winning popularity contests, at some point over a decade ago, but it's been sidetracked by lots of other things since.

For some science communicators to go about their craft in a way meant to win popularity contests may raise red flags about how it could backfire and those are potential problems worth thinking about. Yet I expect the case for doing so, in terms of cost-benefit analysis, is sufficient to justify considering this option. 

First, don't trust any source that consistently sides with one political party or one political ideology, because Politics is the Mind Killer.

One challenge with this is that it's harder to tell what the ideology in question is. If anti-vaxxers are pulled from among the populations of wingnuts on both the left and the right, I'm inclined to take lots of people whose views consistently side with one political party much more seriously not only on vaccines but on many other issues as well. 

Meeting one million people is quantitatively difficult, e.g. in terms of the amount of time it takes to accomplish that feat, and how qualitatively hard it is makes it seem almost undoable, but to me it's more imaginable. I've worked in customer service and sales jobs in multiple industries.

I never kept count well enough to know if I ever met one hundred people in one day, but it could easily have been several dozen people every day. I wouldn't be surprised if someone working the till at a McDonald's in Manhattan met over one hundred people on some days. Most people won't work a career like that for 27 years straight, but enough do. I expect I could recall hundreds of people I interacted with only once, but it would take a lot of effort to track all of that, and it would still be a minority of them.

Nonetheless, I thought it notable that meeting one million people in one's lifetime is common enough that it wouldn't surprise me if at least a few million people in the United States had met over one million other individuals.

Difficult questions require careful research. Most questions have obvious answers. The main task for the easier questions that are initially confusing is to mine sources for gears (definitions, explanations specific enough to allow inferring things from other things). The gears naturally assemble into narratives/hypotheses/theories, which then passively sift through anything else you encounter on the topic, shifting the balance of credence between them according to the pieces of evidence that are specific enough to have to be either largely correct or fabricated out of whole cloth.

By the same token, the problem comes when someone provides evidence that the guy who said "the ovaries get the highest concentration" of vaccine LNPs was lying. At this point, is your response to refuse to acknowledge even a chance that he isn't trustworthy?

You are misstating the conversation. The conversation was about credibility. Credibility is about giving credit. There are people who do some good things and some bad things.

Basing your thinking on ad hominems, and thinking in black/white terms and whether there's X percent likelihood that something is white and Y percent likelihood that it's black, is bad epistemics.

Thinking in terms of black and white gets you situations like Stanford bikers wearing masks on their bikes twice as often as bike helmets.

If someone is making original false claims that millions of people are parroting, that's a crank, dammit.

That principle brings you the Great Stagnation. Good original thinkers sometimes make correct and sometimes false claims.

And this brings me to another reason why scientists tend to be more trustworthy: they tend to have "gears-level models", i.e. their understanding of the world is mechanistic, like a computer; it's the kind of understanding that allows predictions to be made and checked, which in turn allows their models to be refined over time (or in some cases thrown out completely) when it makes prediction errors. 
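The predict-check-refine loop that a gears-level model enables can be illustrated with a toy sketch. The "model", the data, and the learning rate here are all invented for illustration:

```python
# A toy illustration of the predict-check-refine loop: a mechanistic
# model makes predictions, its errors are measured, and the model's
# one "gear" (the slope) is refined. All data is made up.

observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed output)

slope = 1.0  # initial mechanistic guess: output = slope * input

for _ in range(200):
    for x, y in observations:
        error = slope * x - y          # check the prediction against reality
        slope -= 0.01 * error * x      # refine the gear when predictions miss

print(round(slope, 2))  # settles near 2, close to the least-squares slope
```

A model without gears offers no prediction to check, so there is no error signal to refine it with; that, roughly, is the advantage being claimed for mechanistic understanding.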

I haven't seen any mainstream person offer a gears-level model that explains why the flu vaccine results in nearly nobody being ill the next day while the COVID-19 vaccines manage to make nearly half ill the next day. "It shows that your immune system is reacting", when it doesn't happen with every person and didn't happen with most other vaccines, just isn't an explanation that inspires trust; it gives me the feeling that I'm getting a lie-for-children.

> I haven't seen any mainstream person offer a gears-level model that explains why the flu vaccine results in nearly nobody being ill the next day while the COVID-19 vaccines manage to make nearly half ill the next day.
This is actually a really interesting question in its own right! I think you're both underestimating flu vaccine side effects and overestimating COVID vaccine side effects, but there certainly seem to be more and worse side effects based on a quick search (of popular, not peer-reviewed, sources; it was a quick search). The vaccines are still effective enough to make getting both a very good choice, of course.

Any biologists/medical researchers who inexplicably have free time right now want to check the research, or just hazard a guess as to why the COVID vaccine has notably worse and more common side effects than the flu vaccine? Bonus points if you have a thought on whether that's something we can improve over time or whether it's inherent in the vaccine.

The primary meaning of the word "credible" is "believable", not something about credit (as two dictionaries put it: "able to be believed; convincing", "Capable of being believed; believable or plausible: synonym: plausible"). Were you unaware of this?

> Basing your thinking on ad hominems, and thinking in black/white terms and whether there's X percent likelihood that something is white and Y percent likelihood that it's black, is bad epistemics.

I advocate assigning reputations to individuals, which determine their credibility. I cannot guess why you would object to that. I did not understand the second half of that sentence... "black/white" thinking is not often paired with probabilistic thinking.

> Good original thinkers sometimes make correct and sometimes false claims.

I think that cranks (or "bad original thinkers") are much more common than good original thinkers. (Edit: a key reason for this is that good thinkers are rarely original, because good epistemics tends to lead people independently to similar conclusions, whereas bad epistemics can lead people anywhere, so there will be far more originality among thinkers with bad epistemics. Also, politics creates demand for people who express specific kinds of views, so cranks who hold those views tend to "rise to the top", unlike, say, math cranks, who are much less well-known. This demand for cranks creates an appearance that cranks are more common than they are.) Regardless of the exact ratio between the "bad" and the "good", the world is rich with people (roughly a billion English speakers), and no individual can listen to a significant fraction of the original thinkers among them. Therefore it's wise to be willing to identify cranks (or at least persons of questionable credibility) and mostly ignore them. Edit: also, to be clear, good original thinkers aren't shy about making true claims, so a good original thinker should not normally make two bold false claims in a row. Even if they do make such a mistake, they would be willing to correct it (cranks resist correction very strongly).

The only good reason I know of not to ignore them is that other people are going to listen to them and take their word very seriously (e.g. my dad & uncle, who are/were engineers by trade), which could cause harm, so if we don't want harm we should do things to counteract such people. To celebrate such people instead is to promote the harm they cause.

> I haven't seen any mainstream person offer a gears-level model that explains why [...] the COVID-19 vaccines manage to make nearly half ill the next day.

The results from your own survey on this don't seem to support your "nearly half" assertion, even though you asked only for side effects. The Canadian national survey shows 5-9% of people had a "health event" after each Covid shot (unfortunately the survey questions aren't very specific, but there are some additional details at the link).

Can anyone reading this point me to info about research into Covid side effects? It has been suggested that the doses used in the shots are too high and I heard there is research into smaller doses (I also heard that in the U.S., doses are hard to change due to FDA rules).

Kirsch did engage in good original thinking. Even if you don't count the optical mouse, because that's not medicine, the proposal to fund studies for all generics that top coronavirus researchers considered promising was a good original idea, and if it had been pursued with more funds, there's a good chance it would have ended the pandemic sooner.

When talking about math cranks, we usually don't talk about people who had teams of scientific advisors in the relevant domain.

I do think his early pandemic response warrants treating Kirsch as a person worth listening to.

> Therefore it's wise to be willing to identify cranks (or at least persons of questionable credibility) and mostly ignore them.

If you use a term like questionable credibility, you can start by removing all people who are lab leak denialists. Then you can remove everyone who spoke early in the pandemic against masks. You might also remove everyone employed by a company with a history of being fined for illegally making misleading statements.

The problem is that this doesn't leave you with that many people. 

Given the lack of good sources it makes sense to listen to people with a diversity of opinions and reason about their claims.

> The results from your own survey on this don't seem to support your "nearly half" assertion, even though you asked only for side effects.

My own survey didn't ask anybody to report whether or not they were ill the next day. There are people who comment on the post for reasons besides answering my survey question, and I see no reason why you would want to draw such a conclusion from it. To me that sounds like you aren't thinking well about how falsifying hypotheses works.

As to why I'm using that number, I got it initially from a German blog by an MD. I did ask the doctor who vaccinated me about the number, and he said it's reasonable that that many people don't go to work the next day. It's also plausible given what people I know well experienced.

As far as the link to the Canadian data source goes, vaccine side effect reporting systems historically report far fewer side effects than the studies for vaccine approval do. While I do think those systems were improved over the last year, which likely partly resulted in the increased VAERS numbers, I don't think they register all side effects.

> Kirsch did engage in good original thinking. Even if you don't count the optical mouse [...]

Even if Kirsch deserves the credit for that (rather than Stephen B. Jackson or Richard Lyon), I think cranks are somewhat scoped, so for example Linus Pauling was a crank when it came to Vitamin C, but not necessarily a crank on other topics. Even so, when evaluating Linus Pauling from a position of ignorance, I would absolutely not take him at his word regarding other topics once he is clearly shown to be a Vitamin C crank. Since people compartmentalize, it's possible that he's a non-crank on other topics, but by no means guaranteed. Certainly poor epistemics have the potential to significantly affect other "compartments". Again, reputation is earned and deserved.

Regardless, this reasoning doesn't work in reverse. It does not follow from "Kirsch is not an optical mouse crank" that "Kirsch is not a vaccine crank". Even if you were to show that Kirsch isn't a repurposed-drugs crank, that still wouldn't imply he's not a vaccine crank. (To show that, you'd have to show that he doesn't make original false claims about vaccines that he won't take corrections on. Edit: on second thought, Kirsch himself doesn't seem to compartmentalize between vaccines & repurposed drugs, so crankiness can be expected for both.)

Edit: Now, it should be noted that reputation from other topics can create a prior on reasonableness. I don't know anyone for whom this is more true than for Linus Pauling, who, I gather, rightly earned an excellent reputation before he became obsessed with Vitamin C. Also, while Pauling is dead and won't care what I say about him, calling him a "crank" may have been unfair of me, because all my information about him comes to me secondhand from sources that could have exaggerated his crank-ness.

> You might also remove everyone employed by a company with a history of being fined for illegally making misleading statements. The problem is that this doesn't leave you with that many people.

I think you want to equate the reputation of a company with that of every single employee, in an effort to make sure "this doesn't leave you with that many people", which isn't reasonable. Not that the lab leak hypothesis is proven or anything, but if we remove lab leak "denialists" and "early anti-maskers", we'll have tons of people left after that.

But before removing all those people, consider that someone saying "Covid is natural! I heard experts saying so!" or "Conservatives are dumb for believing lab leak!" is in no way original, so they don't meet my definition of crank.

Crank doesn't mean "someone reads an article that contains errors, reaches wrong conclusion from that, and shares wrong conclusion with others", it's more like "someone reaches new and original wrong conclusions, writes an article promoting them, makes a big effort to publicize them and won't accept corrections". But you knew that.

> As far as the link to the Canadian data source goes, vaccine side effect reporting systems historically report far fewer side effects than the studies for vaccine approval.

I see no reason here to disregard the Canadian data in favor of an unspecified blog and doctor.

> Even if Kirsch deserves the credit for that (rather than Stephen B. Jackson or Richard Lyon), I think cranks are somewhat scoped, so for example Linus Pauling was a crank when it came to Vitamin C, but not necessarily a crank on other topics. Even so, when evaluating Linus Pauling from a position of ignorance, I would absolutely not take him at his word regarding other topics once he is clearly shown to be a Vitamin C crank.

Taking anybody at their word instead of thinking critically about what they're saying is not a good idea. That's not something I practice or advocate.

I generally do think we would have less of a Great Stagnation if, as a society, we listened more to people like Linus Pauling.

> I see no reason here to disregard the Canadian data in favor of an unspecified blog and doctor.

I see no reason why I should do work here to shift your belief; I was just being open about why I argue the way I do. If, however, you are interested in having accurate beliefs, then understanding how data is produced, instead of just taking it at face value, is generally good.

What's to understand? The government ran a survey and routinely asked people getting vaccinated if they would like to take it (privately, link sent by email). I took it. I saw the survey questions, I saw the results. If it were up to me the questions would have been more specific, but the results are what they are.

If the government runs a survey, not everyone is going to tell the government about what goes on with them. It's really no different from the kind of error made by someone who takes the VAERS death numbers at face value instead of trying to understand what those numbers actually mean.

Right. They have significant side effects but lie on the survey because...? Or maybe you're saying they refuse to do the survey at all.

But then, why didn't they lie or refuse in the unnamed information sources you advocated?

Of course, I could tell a story where people who don't have side effects forget about the survey and don't bother to report their absence of side effects, but you're going to like your story better, so that makes your story the true one. And also, the size of the bias you're assuming exists would have to be enormous, but whatever.