You can’t fact check everything you hear and read; you literally don’t have the time, energy, or knowledge needed.
I've long thought that an entrepreneur could build a tool that lets people easily see whether virtually anything they read or see on the internet is true.
On LessWrong, if a reader thinks something in a post is false, they can highlight the sentence and Disagree-react to it. Everyone else reading the post can then see the highlighted sentence and who disagreed with it. This is great for epistemics.
I envision a system (it could be as simple as a browser extension) that lets users frictionlessly report their feedback and beliefs while reading any content online, noting when things they read seem true, seem false, seem definitely false, and so on. The system crowdsources all of this epistemic feedback, uses the data to estimate whether claims actually are true or false, and shares that insight with other users.
Then no one reading a news article or post that a hundred or more other people have already read would be left to their own devices to determine which parts are true.
Perhaps some users won't trust the main algorithm's judgment and would prefer to pick a set of other users whose judgment they trust, and have their personalized algorithm give those people's epistemic feedback extra weight. Great, the system should have that feature.
Perhaps some users mark something as false and other users later show that it is true. Then perhaps the first users should have an epistemic score that goes down as a consequence of their mistaken feedback.
Perhaps the system should track how good users' judgment is over time, to ascertain which users give reliable feedback and which have bad epistemics and largely just contribute noise.
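To make that concrete, here's a minimal sketch (all names are hypothetical, nothing resembling a real product) of how such a system might weight feedback by each user's track record and update those scores when claims get settled one way or the other:

```python
# A minimal, purely hypothetical sketch of reliability-weighted aggregation
# with track-record updates. No real product or API is implied.
from collections import defaultdict

class EpistemicAggregator:
    def __init__(self):
        self.reliability = defaultdict(lambda: 1.0)   # user_id -> weight, starts neutral
        self.votes = defaultdict(list)                # claim_id -> [(user_id, vote)]

    def submit(self, user_id, claim_id, vote):
        """Record one user's judgment: +1 = seems true, -1 = seems false."""
        self.votes[claim_id].append((user_id, vote))

    def consensus(self, claim_id):
        """Reliability-weighted average in [-1, 1]: >0 leans true, <0 leans false."""
        ballots = self.votes.get(claim_id, [])
        if not ballots:
            return 0.0
        total = sum(self.reliability[u] for u, _ in ballots)
        return sum(self.reliability[u] * v for u, v in ballots) / total

    def resolve(self, claim_id, ground_truth):
        """When a claim is later settled (+1 or -1), nudge each voter's
        reliability up if they agreed with the resolution, down if not."""
        for user_id, vote in self.votes.get(claim_id, []):
            self.reliability[user_id] *= 1.1 if vote == ground_truth else 0.9
```

A real system would need to be far more robust (spam resistance, partial resolutions, decay over time), but the core loop of weighting feedback by track record can be this simple.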
There are a lot of features that could be added to such a system. But the point is that I've read far too many news articles and posts on the broader internet and had the experience of noting a mistake or inaccuracy or outright falsehood, and then moved on without sharing the insight with anyone due to there being no efficient way to do so.
Surely there are also many inaccuracies that I miss, and I'd benefit from others who did catch them flagging them in a way that I, as a non-expert on the claim, could simply trust.
First, environment: if you want to believe true things, try not to spend too much time around people who are going to sneeze false information or badly reasoned arguments into your face. You can’t fact check everything you hear and read; you literally don’t have the time, energy, or knowledge needed. Cultivate a social network that cares about true things.
This is good advice, but I really wish (and think it possible) that some competent entrepreneurs would make it much less necessary by creating epistemic tools that enhance anyone's ability to discern what's true out in the wild, where people do commonly sneeze false information in your face.
I have also imagined things like the browser extension. With trustworthy commenters, it could become a powerful tool against disinformation. But that's passing the buck... where to find the trustworthy commenters? Without them, the extension could just as well become a tool to spread hoaxes.
You need a trustworthy community first. Then, you can add some mechanisms, such as removing users who are reported by too many other users as liars. But you won't get that if you start with a majority of liars.
It's like the voting system on Less Wrong. It helps to keep the website sane. But it works because we have already started with a mostly sane community. If you took the same software, but started the website with a random population, it would probably not evolve into a new rationalist community. More likely, it would enforce some random opinion which got a majority at some moment, and use it to eliminate the opposing opinions. At best, the system can maintain the truth-seeking community, but it cannot create it. (And even if it could, the dark side would simply create their own browser extension, and insist that it is your extension that is biased.)
I could be wrong here, considering the success of things like Community Notes on Twitter. (At least I think it was a success; haven't heard about it recently. Maybe people already found a way to defeat it.) Seems like you can extract something in a kinda-mostly-true direction from chaos, the "things that people agree on even if they disagree on most other things".
Another solution could be to let every user specify whom they trust, and show the opinions of your friends more visibly than the opinion of randos. So you would get mostly good results if you import the list of rationalists; and everyone else, uhm, will use the tool to reinforce the bubble they are already in.
Another problem is that there may be systematic differences in which users read (and review) which sources. For example, a fanatical antivaxxer might review thousands of medical articles on vaccination, while an actual doctor probably wouldn't bother to do that. Also, the less you think when you write a review, the more reviews you can write in the same amount of time.
...so, I like the vision, but there are many difficult problems to solve. (And most entrepreneurs would probably be more interested in making a profit than in actually solving those problems. Just like Facebook doesn't care about the bots and spammers.)
But that's passing the buck... where to find the trustworthy commenters?
My idea for this has been that rather than requiring all users to use and trust the extension's single foxy aggregation/deference algorithm, the tool ought to give users the freedom to choose between different aggregation mechanisms, including being able to select which users to epistemically trust or not. In other words, it could almost be like an epistemic social network where users choose whose judgment they respect and have their aggregation algorithm give special weight to those users (as well as to the users those users say they respect the judgment of).
Perhaps this would lead to some users using the system to support their own tribalism or whatever and have their personalized aggregation algorithm spit out poor judgments, but I think it'd allow users like those on LW to use the tool and become more informed as a result.
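As a sketch of what I mean, assuming the tool keeps a simple who-trusts-whom graph (the function and weights below are invented for illustration), a personalized aggregation might look like:

```python
# Hypothetical sketch of a personalized aggregation: votes from users I directly
# trust count most, votes from users *they* trust count somewhat, everyone else
# counts at a base weight. All names and weight values are invented.
def personalized_consensus(votes, my_trusted, trust_graph,
                           direct_weight=3.0, indirect_weight=1.5, base_weight=1.0):
    """votes: list of (user_id, vote) with vote in {+1, -1}.
    my_trusted: set of user_ids I chose to trust.
    trust_graph: dict user_id -> set of user_ids that user trusts."""
    # Users trusted by the people I trust (one hop out), excluding my direct picks.
    second_hop = set()
    for u in my_trusted:
        second_hop |= trust_graph.get(u, set())
    second_hop -= my_trusted

    def weight(user_id):
        if user_id in my_trusted:
            return direct_weight
        if user_id in second_hop:
            return indirect_weight
        return base_weight

    total = sum(weight(u) for u, _ in votes)
    return sum(weight(u) * v for u, v in votes) / total if total else 0.0
```

Extending this to more hops, or letting trust decay with distance, is straightforward.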
Another solution could be to let every user specify whom they trust, and show the opinions of your friends more visibly than the opinion of randos. So you would get mostly good results if you import the list of rationalists; and everyone else, uhm, will use the tool to reinforce the bubble they are already in.
Yeah, exactly.
I think it'd be a valuable tool despite the challenges you mentioned.
I think the main challenge would be getting enough people to give the tool/extension enough input epistemic data; the lesser challenge (in my view) would be making the outputs based on that data valuable enough to be informative to users.
And to solve this problem, I imagine the developers would have to come up with creative ways to make giving the tool epistemic data fast and low friction. (Though maybe not: is submitting Community Notes fast or low friction? I don't know, but perhaps not necessarily, and maybe some users do it anyway because they value the exposure and impact their note may have if approved.)
They'd also need to make sure users provide the input data in a form that some algorithm can aggregate. For example, it's easy to aggregate submissions claiming a sentence is true or false, but what if a user wants to flag a claim as misleading? Do you need a more creative way to capture that data if you want to communicate to other users the manner in which it is misleading, rather than just a "misleading" tag?
I haven't thought through these sorts of questions, but I strongly suspect there is some MVP version of the extension that I, at the very least, would value as an end user and would be happy to contribute to, even if only a few people I know would see my data/notes when reading the same content after the fact. Of course, the more people who use the tool and see the data, the more willing I'd be to contribute, assuming some small time cost of contributing. I already spend time leaving comments to point out mistakes, and I imagine such a tool would just reduce the friction of providing that feedback.
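For what it's worth, here's roughly the kind of record I imagine each submission being, just to illustrate separating aggregatable tags from free-text explanation (all field names are invented):

```python
# Invented field names, purely illustrative: machine-aggregatable tags
# (verdict, confidence) are kept separate from free-text explanation (note).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    user_id: str
    page_url: str
    quoted_text: str              # the exact passage being flagged
    verdict: str                  # e.g. "true", "false", "misleading", "unsure"
    confidence: float             # 0.0 to 1.0
    note: Optional[str] = None    # how it's misleading, links to sources, etc.
```

True/false verdicts could be averaged directly, while a "misleading" verdict mostly needs its note surfaced to other readers alongside the tag.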
Do avalanches get caused by loud noises?
Based on my dozen+ times giving this class or presentation, at least 7/10 of you are nodding yes, and the main reason the other 3 aren’t is that you sense a trap.
So. What do you think you know, and why do you think you know it?
Our bodies are under constant assault. Bacteria, viruses, parasites—an endless parade of microscopic organisms trying to hijack our cellular machinery for their own replication. You don't notice most of these attacks because your immune system handles them quietly, distinguishing self from non-self, helpful from harmful, and deploying targeted responses to neutralize threats before they can take hold.
We all lived through a global pandemic not too long ago, and got a crash course reminder on how to keep ourselves safe from hostile genetic code in COVID-19.
But our minds face a parallel challenge, particularly as the “information age” continues to warp into an endless misinformation war, and public safety efforts on the memetic warfront are lagging hard.
Now more than ever in human history, ideas, beliefs, and narratives continuously seek entry into your mind, competing for the limited real estate of your attention and memory. Richard Dawkins coined the term "meme" precisely to highlight this analogy: just as genes propagate through reproduction and mutation, memes propagate via communication. And just as genes guide their host life forms toward replicative fitness, independent of ethical notions of benevolence or suffering, memes spread based on memetic fitness—being catchy, being shareable—independent of being true.
This creates a problem: the ideas most likely to reach you are not the most accurate ideas. They're the ideas most optimized to spread.
Two Immune Systems
Your biological immune system has two main components. The innate immune system provides general-purpose defenses: skin barriers, inflammatory responses, cells that attack anything foreign-looking. The adaptive immune system learns from exposure, building targeted antibodies against specific pathogens and remembering them for faster future responses.
Your memetic immune system is similar in some ways, and also shares similar failure modes:
1) Failure to recognize threats. Some pathogens evade the immune system by mimicking the body's own cells or by mutating faster than antibodies can adapt. Similarly, some bad ideas evade epistemic defenses by mimicking the structure of good arguments, by appealing to emotions that feel like evidence, or by coming from sources we've learned to trust in other contexts.
2) Autoimmune responses. Sometimes the immune system attacks the body's own healthy tissue, causing chronic inflammation and damage. Epistemically, this manifests as excessive skepticism that rejects true and useful information, or as a reflexive contrarianism that opposes ideas simply because they're popular.
3) Vulnerability through entry points. Pathogens exploit specific vulnerabilities—mucous membranes, cuts in the skin, the respiratory system. Memes exploit specific emotional and cognitive vulnerabilities—fear, tribal loyalty, the desire to feel special or vindicated, the discomfort of uncertainty.
4) Compromised states. Stress, malnutrition, and lack of sleep all weaken the biological immune system. Emotional distress, cognitive fatigue, and social pressure all weaken epistemic defenses, and it's not only negative states that do! You can't get sicker from being too entertained or validated, but you're certainly more open to believing false things in those states... like, say, when a loud noise causes an avalanche in a cartoon or film.
No matter how rational you think you are, you cannot evaluate all information perfectly. Just as your body doesn't have infinite resources to investigate every molecule that enters it, your mind doesn't have infinite resources to carefully reason through every claim it encounters. Your reasoning ability is affected by your physical and emotional state, and often relies on heuristics, shortcuts, and trusted filters. This is necessary and appropriate—but it creates exploitable vulnerabilities.
Lines of Defense
If you want to avoid getting sick, you have various different lines of defense. Most people think of their skin as their first line of defense, but it’s actually your environment. By avoiding others who are sick, you reduce the risk of being exposed to hostile pathogens in the first place.
Then comes the skin, which does a pretty good job of keeping hostile genes away from your body’s resources. Some bacteria and viruses specialize in getting through the skin, but most have to rely on wounds or entry points: ears, eyes, nose, mouth, etc.
The equivalent lines of defense exist for your mind.
First, environment: if you want to believe true things, try not to spend too much time around people who are going to sneeze false information or badly reasoned arguments into your face. You can’t fact check everything you hear and read; you literally don’t have the time, energy, or knowledge needed. Cultivate a social network that cares about true things.
An important, practical aspect of this is to be on guard against any group or forum you're part of that is extremely ideological and likely to act as an echo-chamber, particularly if it exists primarily to dunk on or hate on a particular other ideology or group. There's no surer way I know to get regularly exposed to a slew of one-sided, memetically fit, but false, ideas. You might think you can skim the titles and laugh at the good digs while not having your epistemics affected, but my bet would be it all adds up, little though each "hit" might be.
Second is your “skin,” in this case, the beliefs you already have. The more knowledge you have, the less susceptible you are to naively believing falsehoods.
Many people will read a dozen news stories and presume that the authors at least put in some reasonable effort toward accuracy and truth… until they read a news story about a topic they’re an expert in, and get upset at all the falsehoods the journalist got away with publishing. Gell-Mann Amnesia is the name for the bias where they fail to then go back and notice that all the other articles were likely also worthy of similar scrutiny, but they lacked the knowledge needed to scrutinize them.
You may think you’re a skeptical person, but all your skepticism doesn’t matter if you don’t have enough knowledge to activate it in the first place when you encounter new information. You can try to rely on broad heuristics (“journalists aren’t usually experts in the topic they’re writing about” or “journalists often have biases”) but heuristics are not bulletproof, and worse, misfire in the opposite direction all the time.
Like many who grew up at the start of the information age, I used to think I didn’t need to memorize facts and figures because I could just look things up on the internet if I needed to. I no longer believe this. Your memetic immune system requires that you know things to activate. Confusion is your greatest strength as someone who values truth, but you need to feel and notice it first, and you need some beliefs for new information to bounce off of to feel it.
Third comes your active immune system: your ability to reason through bad arguments and research information to separate truth from falsehood. The better you get at identifying logical fallacies and common manipulation methods, the better you are at fighting off harmful memes once they get past your other defenses. Practice good epistemics. Investigate claims, resist emotional manipulation, check your blind spots with trusted peers.
Vaccinate: Knowledge as Protection
Why are children so gullible? Because they don’t know anything.
Children will believe literally anything you tell them, until you tell them something that either directly contradicts what someone told them before, or directly contradicts their own experiences. Only then do they feel confusion or uncertainty or skepticism, and only after enough instances of being misinformed do they form heuristics like “my parents can be wrong about things” or “adults sometimes lie,” which eventually grows into “people/teachers/governments/etc lie.”
A healthy organism's genes are constantly informing the cells and systems that make up its body what they need to do to maintain or return to a healthy state. But there is no inherent homeostasis that human minds know to maintain against each invading idea: baby brains are much closer to empty boxes, and the initial ideas we're exposed to are what form the immune responses against what comes after.
Noticing confusion is the most powerful tool for investigating what's true. When something doesn't fit, when a claim contradicts what you thought you knew, that friction is information. But you need existing beliefs to experience that friction. If you know nothing about a topic, you can't notice when a claim about it is suspicious.
Gullibility isn't just a deficit of critical thinking skills. It can result from a deficit of information. Critical thinking tools don't help if you never think to deploy them, and you're less likely to deploy them when a claim doesn't trigger any "wait, that doesn't match" response.
We learn through the context of what we already know. New information bounces against existing beliefs, gets incorporated or rejected based on how well it fits. The more robust your existing knowledge, the better you can evaluate new claims. The more you know, the harder you are to fool.
This means that rationality training shouldn't neglect object-level knowledge. Learn lots of things. Read widely. Build detailed models of how various domains work. This isn't separate from epistemic skill-building—it's a core component of it.
Stay Informed: Memetic Weak Points
Your eyes and ears are the entry points for memes, just as they and other openings are for hostile genes, but there are other, more relevant weak points in your memetic immune system.
Ideological Bias is one of them. Just like Gell-Mann Amnesia, most people will read dozens of articles making all sorts of claims, and it’s all accepted uncritically so long as the authors are reaching conclusions they agree with. Once the author says something the reader doesn’t agree with, their reasoning, or even motives, are subjected to much more scrutiny. In general, we’re more susceptible to believing false things if they confirm what we already believe.
And then there’s Emotional Bias. Emotions aren’t irrational—they're encoded combinations of implicit beliefs and predictions, rapid evaluations that something is good or bad, safe or dangerous. The problem isn’t that we feel emotions when we consider assertions or hypotheses, it’s that the emotions narrow our awareness and ability to process all the data. Most emotions that are harmful to our epistemics are those meant to drive action quickly.
Fear is adaptive when it drives you to react to the shape of a snake in the grass before you have time to evaluate if it's actually a snake. The people who were not sufficiently afraid when a fellow tribemate yelled "cougar!" did not pass along their genes as often as those who were. When the cost of being wrong is far lower in one direction than the other, you should expect a "bias" in that direction.
Anger is similar. Protective instincts are far more powerful when acted on quickly, and before there were any misinformation-driven culture wars, a tribe that was easy to unite in anger would easily outcompete tribes that weren't.
Our emotions all serve purposes, and one of those purposes is to fuel heuristics that save us cognitive effort and motivate us toward helpful behaviors. It’s only when we recognize a false belief or unhelpful outcome that we label the heuristic a bias.
To avoid these biases in your epistemology, know what your emotional weakpoints are. Some people are especially vulnerable to claims that validate their intelligence or moral superiority. Others are vulnerable to claims that justify resentment toward outgroups. Still others are vulnerable to claims that promise simple solutions to complex problems, or that offer belonging to an exclusive community of truth-knowers.
Relatedly, logical fallacies exploit structural vulnerabilities in reasoning. Ad hominem attacks, appeals to authority, false dichotomies, slippery slope arguments—these persist because they work on human minds, sometimes even minds that "know better." Awareness is necessary but not sufficient. You need to practice noticing these patterns in real-time, in the wild, when your emotions are engaged and the social stakes feel real.
Exercises: Which emotions make you more credulous? More skeptical? Think of some beliefs you hold that you know many others disagree with. Can you tell if you want any of them to be true? Notice if you feel a “flinching” when you imagine any of them being wrong.
Origin Tracing: Where Do Beliefs Come From?
Imagine a line, starting at a point and squiggling through three-dimensional space. The origin is the point in spacetime you were born, and the arrow at the end is where you now sit or stand or lie reading. This is your lifeline, the path your physical body has taken through the world.
Imagine now a sheath around that line, extending out to about three miles. That's how far you can see on a clear day before your gaze meets the horizon. Within that sheath is everything you've ever seen with your own two eyes, and also everything you've ever heard, smelled, tasted, and touched.
That’s your sensorium, the tunnel through spacetime that makes up all the things you’ve ever experienced.
Everything outside that tunnel was told to you by someone else. Everything.
Books you’ve read, things your teachers told you, even video and audio recordings or live footage, all of it is something you had to trust someone else to understand and fact check and honestly present to you.
Can you honestly say that you evaluated all those sources? What about how they learned what they passed along to you?
Beliefs, like viruses, always come from somewhere. Knowing the source of a belief is crucial for evaluating its reliability.
Pick something you believe that is not the direct result of something you experienced yourself, whether it’s an economic argument or a simple historical fact.
Ask yourself: Who "coughed" this idea on you? Why did you believe them? Confidence? Authority? Something else?
How long ago did it happen? Has new information become available? Do you know if the studies have replicated?
What was the context? Were they sharing something they learned themselves? Or repeating something they read online or heard someone else say?
If the latter, the tracing can continue all the way back to whoever made the observation directly within their own sensorium and then wrote a paper about it, or talked about it on TV, or told it to others who did.
It could continue. Should it, though?
Well, I’d say that depends a whole hell of a lot on what your error tolerance is on your beliefs and how important that belief being correct is to your goals or values.
But at least be aware that this is what it means to do origin tracing on your beliefs, and why it matters if you don’t.
Again, we cannot expect ourselves to independently verify everything we believe. We can’t rerun every scientific study we hear of, and we can’t personally verify every historical event to the level of the Apollo 11 moon landing, the Holocaust, or the 9/11 terrorist attack on New York City… three of the most well documented events in history that still somehow manage to provoke conspiracies.
We must trust, at some point, that we have evidence enough, and that the society around us is sane enough in aggregate, to hold with high enough probability that the belief is “justified.”
But there are still pitfalls to watch out for, and the reliance on others should always be paired with understanding of what heuristics are being used to fill in the gaps.
For example: if all you know about two beliefs is that a million people believe in Belief A, and a thousand people believe in an unrelated Belief B, is there a justified reason to hold a higher probability in belief A?
Many people find this question difficult because it sets at odds different intuitions about proper epistemology; the answer is “it depends.”
A thousand experts who've examined evidence independently might outweigh a million people who absorbed a belief from their surrounding culture. But a million people who all observed Event A does add more reliability than a thousand people who saw Event B. And of course a single person with direct access to crucial evidence might outweigh them all.
With that in mind, beware double-counting evidence! If five of your friends believe something, that might seem like strong corroboration—but if they all believe it because they read the same viral tweet, you have one source of information, not five.
Information cascades create the illusion of independent verification. Belief tracing, consistently applied all the way back to the origin, can reveal how much actual evidence underlies apparent consensus, not to mention the quality of that evidence.
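To put toy numbers on the five-friends case (the 2:1 likelihood ratio below is arbitrary, chosen only to illustrate the point):

```latex
% Prior odds for the claim H: 1:1. Each genuinely independent report
% multiplies the odds by a likelihood ratio of 2:1.
\begin{align*}
\text{Five independent reports:} \quad
  \frac{P(H \mid E_1,\dots,E_5)}{P(\lnot H \mid E_1,\dots,E_5)}
  &= \frac{P(H)}{P(\lnot H)} \cdot 2^5 = 32:1 \;\;(\approx 97\%) \\[4pt]
\text{Five repeats of one viral tweet:} \quad
  \frac{P(H \mid E_1)}{P(\lnot H \mid E_1)}
  &= \frac{P(H)}{P(\lnot H)} \cdot 2 = 2:1 \;\;(\approx 67\%)
\end{align*}
```

The apparent consensus of five believers is worth no more than the single tweet they all read.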
It seems intuitive that if a piece of information has passed through multiple people, it's more likely to be true, because multiple people haven't determined that it's false. But in reality, people aren't just at risk of passing something along without fact-checking it; they're also likely to misremember, misunderstand, or misquote it. The thing you heard them say may not even be the thing they originally read or heard!
Exercise: Name one of your load-bearing beliefs—something that supports significant parts of your worldview or decision-making. Where did you learn it? How reliable were the sources? How much time did you spend looking for counter-evidence? Is there some obvious way the world would be different if it wasn’t true?
Mask Up: Don't Cough Misinfo Onto Others
As mentioned before, the first step in protecting yourself from hostile genes or memes is a safe environment. And you are part of that environment, both for others and also for your future self. If you pollute the epistemic commons, you’ll likely be affected by something downstream of those false beliefs yourself sooner or later.
The simplest rule: don't repeat things with the same confidence you'd have if you'd verified them yourself. Notice how confidently you talk about things in general, and what phrases you and others use. "I heard that..." should prompt a thought or question from yourself or others: "heard from where?"
"I checked and found that..." should evoke a similar “checked where? When was that?”
If you pass on secondhand information as if it were firsthand, you launder away the uncertainty, making the claim seem better supported than it is. You also substitute your own credibility for that of the source you're repeating, which may lead your friends or peers to believe the claim more than they should, especially if you didn't look into it deeply enough yourself.
More vitally, notice when you’re trying to persuade others of something. Notice if you start trying to argue someone into a belief and ask yourself why. What emotion is driving you? What are you hoping or fearing or trying to protect?
Persuasion is inherently a symmetric weapon. People who believe in true or false things can both be persuasive by means both subtle and forceful. Asymmetric tools like direct evidence and formal logic are cleaner.
The best practice when facing someone who disagrees with you on something important is to try to explain what convinced you the thing was true in the first place. If that’s not convincing for them, investigate what beliefs they hold that your evidence or reasoning is “bouncing off” of so you can examine those beliefs yourself, and then explain why you don’t find them convincing (assuming you don’t!).
This preserves important information—the actual evidence and reasoning—rather than just the conclusion. It treats the other person as an epistemic agent capable of evaluating evidence rather than as a target to be manipulated into agreement. And it allows you to stay open to new information and arguments, while also better understanding others and why they believe the things they believe. Julia Galef calls this "scout mindset" as opposed to "soldier mindset": the goal is to map reality accurately, not to win rhetorical battles.
Use good filtration on your information sources. Before absorbing someone's object-level claims, try to evaluate their epistemics. Do they practice what they preach? Do they build things with their hands, or just opine? Are they regularly wrong? Do they admit when they are? Are they sloppy with beliefs, making confident claims without adequate support? Can they explain their reasoning clearly, or do they rely on appeals to authority or status quo bias?
Variety has value—seek perspectives with different heuristics than yours, even if some have lower epistemic rigor than others. But weight those sources accordingly.
There's a deeper point here, articulated in Eliezer Yudkowsky's Planecrash through the character Keltham, who summarizes it as: "It is impossible to coherently expect to convince yourself of anything… You can't expect anyone else to convince you of something either, even if you think they're controlling everything you see."
Your expected posterior equals your prior—you might end up more convinced, but there's a counterbalancing chance you'll find disconfirming evidence and end up less convinced. On net, if you're reasoning correctly, it balances out. You can't rationally plan to move your beliefs in a particular direction.
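In probability terms this is just the law of total probability: average the posteriors you could end up with, weighted by how likely you are to see each possible observation, and you get your prior back:

```latex
% Conservation of expected evidence: the expectation of your posterior,
% taken over possible observations, equals your prior.
\begin{align*}
\mathbb{E}_{E}\!\left[\,P(H \mid E)\,\right]
  &= \sum_{e} P(E = e)\, P(H \mid E = e) \\
  &= \sum_{e} P(H,\, E = e) \\
  &= P(H)
\end{align*}
```

Any possible observation that would push the probability up must be balanced by some possible observation that would push it down.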
This means that if you notice yourself hoping to be convinced of something, or trying to convince yourself, something has gone wrong. That's not truth-seeking; it’s the dreaded (and badly named) rationalization.
Exercise: List three people you think have good epistemics, three with bad epistemics, and three you're unsure about. For the uncertain cases, what would it take to find out? Notice if there's something you've wanted to convince yourself of, or hoped someone else would convince you of. Why?
Risks and Practice
Information hygiene, like physical hygiene, requires ongoing maintenance. It can also be overdone.
If you decide to stay indoors all day and never talk to anyone except through a glass door and only eat dry food… you’re definitely minimizing the chance you’ll get sick, but also leading an impoverished life, and in many ways a less healthy one. It might be justified if your immune system is compromised or during a pandemic, but something has likely gone wrong if you’re living your life that way.
Similarly, beware of becoming so skeptical that you can no longer trust anything you read or hear. We cannot trust everything others say. We cannot even trust everything we observe ourselves. But caring about truth requires an endless fight against the forces of solipsism and nihilism: reality exists, independent of our subjective experience. We can never fully understand the territory, but we still have to live in it, and our maps don't need to be perfectly accurate to be worth improving.
Some practices that help:
Improve self-awareness through mindfulness exercises. Notice your emotional reactions to claims. Notice when you feel defensive or vindicated.
Practice explaining what you believe and why to someone skeptical. Write more if you’re practiced in writing. Speak if you’re not practiced at speaking. Articulating the justifications for your beliefs will often reveal that they're weaker than you thought.
When something changes your mind, record the context and circumstances. Build a model of what kinds of evidence actually move you.
Practice asking people what informed their beliefs. Make it a habit to trace claims to their sources. Keep track of people who reveal solid reasoning. Keep them part of your information feeds, and eject people who constantly cough without a mask on.
Prepare for epistemic uncertainty—lots of it. People are generally bad at remembering how uncertain they should be. Even in communities that explicitly value calibration, it's hard. The feeling of knowing is not the same as actually knowing. Betting helps.
And remember: this is genuinely difficult. Even with good intentions and good tools, you will sometimes be wrong. The goal isn't perfect accuracy, but building systems and habits that make you wrong less often and help you correct errors faster when they occur.
Your mind, like your body, will face an endless stream of would-be invaders. You can't examine every one. But you can understand your vulnerabilities, trace your beliefs to their sources, take responsibility for what you spread, and build the knowledge base that makes deception harder. The next pandemic hopefully won’t be for a long while, but the information age has brought with it a memetic endemic, and we all need to be better at hygiene, for our own sake and each other’s.