This essay introduces a new framework explaining why, as we learn, our knowledge tends to disconnect from other disciplines, accumulate errors, and become increasingly corrupt. I map how several established cognitive biases combine into a broader mechanism I call the expert trap.
I see it as a major reason why knowledge in our civilization is often corrupted. At higher levels, this manifests as scientists compromising research through p-hacking, importance hacking, and confirmation bias, directly contributing to the replication crisis. This dynamic explains why experts cling to false beliefs and resist changing their minds even when presented with contradictory evidence, and why most well-meaning expert collaborations like adversarial collaborations often fail. At lower levels, this corruption manifests as groupish conspiracy theories, binary thinking, rigid "never-changed-my-mind" attitudes, and inflexible partisan positions. It also explains why important knowledge spreads slowly through society, and why our civilization's methods for encoding and sharing knowledge are often faulty, inefficient, and make that knowledge hard to use.
The first part of this essay explains the mechanics of the expert trap: how expertise creates silos that prevent us from explaining concepts to novices (The Curse of Knowledge); how we overestimate our communication clarity while listeners receive something entirely different (Epistemic overconfidence); how we selectively filter evidence to reinforce existing beliefs, allowing errors to compound (Confirmation/my-side bias); and how genuine truth-seeking gradually degrades into status-seeking behavior (Hierarchy bias).
In the second part, I'll present practical countermeasures: techniques to understand what audiences actually "hear" (User research); methods for verifying knowledge from first principles (Minimal-trust investigations); strategies for testing beliefs against intelligent opposition (Adversarial collaborations); practices for identifying mismatches between expectations and reality (Noticing surprise); and approaches for revisiting ideas until we can explain them simply across various contexts (Feynman learning).
This article isn't meant to be anti-expertise, but rather about understanding and countering expert bias. In writing this essay, I hope to shift attitudes in the following ways: first, to recognize that "expert knowledge" is often more corrupted than we typically assume; second, to encourage the verification of knowledge firsthand; and third, to develop robust methods for counteracting the effects of the expert trap.
I am not absolutely certain about all of this. The argument rests on a mix of well-established findings from psychology, a couple of more speculative ones, and a few of my own observations. It is a large and somewhat speculative frame, but I will indicate my level of certainty and how strong the sources are, so you can form your own opinion.
I’m also adding a "Skippable" flag to each section, marking which bits you can skip without losing the main argument.
Certainty: the curse of knowledge bias is well-established, expert trap is my take | Skippable: no
The curse of knowledge bias occurs when experts find themselves unable to effectively teach or communicate with those who know less, creating a gap between what the expert knows and what they can successfully convey. I view the expert trap as a broader and more pervasive phenomenon, with the curse of knowledge being just one component of it. It’s a good place to start because the curse of knowledge has a very similar shape and dynamic to the expert trap.
The phrase "curse of knowledge" was first coined in 1989 by Camerer and Loewenstein. They recognized its close relationship to hindsight bias and earlier research by Baruch Fischhoff. In 1975, they demonstrated in their paper “Hindsight is not equal to foresight”that once people learn an outcome, they falsely believe they would have predicted it correctly.[1] They theorized that this poor reconstruction happens because participants become "anchored in the hindsightful state of mind created by receipt of knowledge". Study participants couldn't accurately reconstruct their previous, less knowledgeable states of mind—which connects to the curse of knowledge. When people acquire expertise, their knowledge becomes increasingly inaccessible to those at earlier stages of understanding. It's like a one-way knowledge ladder where you can climb up and increase the complexity of your knowledge, but your ability to explain that knowledge down the ladder is inhibited.
I propose the name expert trap because I believe there’s a larger phenomenon at play than is typically assumed when discussing the curse of knowledge—especially because my-side bias, confirmation bias, and status-seeking behaviors are also at work. After explaining the expert trap, I hope it will be intuitive to see it as a much more widespread dynamic, and to spot the ways knowledge often becomes corrupted as we develop expertise. In what follows, I will showcase four major mechanisms driving this dynamic. Mechanisms one and two (forgetting and epistemic overconfidence) are more closely connected to what people typically associate with the curse of knowledge, while mechanisms three and four (confirmation and hierarchy biases) expand this concept and connect to what I see as the expert trap.
Certainty: this topic is my observation, a bit speculative, but I am fairly certain it's at least directionally correct | Skippable: no
We are in a constant process of forming new memories and forgetting old ones. Our brains prioritize certain memories over others, especially those most relevant to our ongoing process of gaining further understanding. We tend to forget what it was like not to understand something: the less knowledgeable state of mind. We also tend to forget which intellectual breakthroughs or mental models, out of the space of all possibilities, actually move us to a higher level.
As soon as we learn something more complex, it may seem more important, and we tend to deprioritize remembering why it was initially difficult to understand. It's as if out of the entire space of possible answers to an initial question, we collapse into the correct one without paying attention to why we were confused in the first place. Perhaps these early struggles seem redundant to our brain since the goal is to keep ascending further up the knowledge ladder. This process also mirrors the hindsight bias, discussed earlier. As soon as the more complex version starts to make sense, it feels like that’s what we thought all along—of course that was the answer—and we collapse onto the “correct” path, erasing the memory of the branches that didn’t make sense (or ones that still don’t).
Certainty: tested, well-established | Skippable: no
Another dynamic involves our overconfidence about how effectively we communicate our internal mental models to others. We consistently overestimate how well others understand what we're trying to convey. In 1990, a tapping experiment at Stanford[2] demonstrated this phenomenon. Participants were asked to finger-tap a well-known tune and predict how many listeners would identify it. The results were striking: tappers estimated about 50% of listeners would recognize their song, but in reality, only 1.33% did. This illustrates the expert trap perfectly—in the tapper's mind, they heard the complete song with melody, rhythm, and lyrics, while listeners heard only disconnected taps. The expert (tapper) couldn't separate their rich internal experience from what they were actually communicating. They projected their knowledge onto the audience, not seeing how little was actually being transmitted.
Certainty: tested, well-established | Skippable: no
Another dynamic causing the expert trap is confirmation bias, which helps explain the tendency for knowledge to become more corrupted as it becomes more specialized. To really understand confirmation bias, I want to start with what I see as its parent category—my-side bias. This is a bias that’s very close to what others often call motivated reasoning.
In short, my-side bias shows that we fundamentally distort our self-image. Our ego distorts facts and manufactures impressions and memories to create the best possible self-image. The dynamic is described at length by Daniel Kahneman, Daniel Gilbert, and Julia Galef, among others.
A deeper look at my-side bias can clarify the core dynamics of the expert trap. Below are examples of research studies from Stumbling on Happiness by Daniel Gilbert:
I believe confirmation bias is a sub-effect of my-side bias, but specifically in the context of opinions we already hold. From the vast ocean of available data, we filter for information that strengthens our existing views. Here are two studies that show how this works:
In one study, researchers compared two groups: one in favor of the death penalty and one against it. The researchers fabricated two studies with the same strength of evidence, one supporting the death penalty and the other opposing it. Both groups read both texts, after which they became even more polarized, believing more strongly in their original positions than they had before.[3]
When we do something ourselves, we become less critical of it and overestimate its quality. In a study on selective laziness in reasoning, participants were asked to evaluate reasoning exercises. The results were striking: when reviewing their own work, participants rarely made corrections—less than 15% revised their answers. However, when the same answers were presented as someone else's work, approximately 60% rejected arguments that were in fact their own. People were better able to distinguish valid from invalid arguments when they believed the arguments belonged to others rather than to themselves.[4]
Returning to the one-way ladder concept: when learning, confirmation bias leads us to strengthen theories we already believe in while showing less interest in alternative explanations or ideas that challenge our existing knowledge. Knowledge acquisition becomes less about discovering, validating, and evaluating what’s true and how it connects to other areas of knowledge, and more about selecting information that aligns with what we already believe. By chance, you might confirm your way into a correct idea, but more likely, given the vast space of possible answers, you'll confirm yourself into an incorrect one. This dynamic suggests that the more knowledge one acquires, the more potentially corrupted that knowledge becomes.
Certainty: not-well established, but I put a strong credence on this hypothesis | Skippable: no
Another dynamic causing the expert trap is what I call hierarchy bias—being interested in ideas because they bring us status rather than because we're trying to figure out the truth.
I've created the term hierarchy bias as a shortcut for the dynamic explained in The Elephant in the Brain by Robin Hanson and Kevin Simler. The main thesis of the book is that we are deeply hierarchical creatures who, at the same time, don’t view ourselves this way. The hierarchies in our world aren’t completely overlooked, but Hanson and Simler assert that they have a significantly broader influence on human motivations than we usually acknowledge.
They say humans are “ultrasocial apes” who evolved to compete for status and allies. To keep the peace and be better at deception, the brain “actively hides our motives—especially from ourselves.” Many cherished institutions (conversation, laughter, consumption, charity, education, medicine, religion, and politics) “are less about their stated goals and more about signaling” qualities like intelligence, loyalty, or wealth, which serve as markers of our status in the group[5]. Primatologist Frans de Waal has also wondered why, in primatology, almost everything about primates is viewed through the lens of group hierarchy, yet in the social sciences, when we talk about humans, hierarchy is rarely mentioned.
The research from The Elephant in the Brain helps explain the way we approach learning and why we strive to be perceived as experts. One of the main motivations to acquire knowledge seems to be the need to impress others and climb social hierarchies. In another article, Robin Hanson identifies three main functions of academia: alongside preserving and teaching knowledge, “Academia functions to create and confer prestige to associated researchers, students, firms, cities, and nations.”[6]
This may help explain our drive to use jargon. Sometimes more complex vocabulary is a shortcut to a more precise definition. More often, however, it may be used for signaling—complex-sounding words that only insiders or experts use—because people want to be viewed as more knowledgeable and higher status. Again, this is most likely a subconscious force, so people who use jargon are not only confusing others but also themselves. And it’s another way, next to confirmation bias, in which knowledge tends to become corrupted as it climbs the ladder of complexity, because what’s being optimized isn’t truth but the perception of being smart, sophisticated, and high status.
Certainty: Tetlock and Kahneman are firm, the rest is more anecdotal | Skippable: can be skipped
I think the expert trap influences a lot around us. It corrupts the way we share and acquire knowledge—from scientists compromising research through statistical manipulation to everyday people clinging to disproven beliefs despite contradictory evidence.
Studies have found that deep expertise in a subject does not positively correlate with accuracy in judgment. As part of his research on forecasting, professor Philip Tetlock conducted a study with 284 political experts that generated over 80,000 informed (where the estimate matched the individual's area of expertise) and uninformed predictions over the course of twenty years. Surprisingly, Tetlock discovered that specialists are less reliable than non-experts, even within their specific area of study. In fact, the study concludes that after a certain point, deepening one's knowledge about a specific topic is subject to the law of diminishing returns and can hinder the ability to accurately predict a given outcome.[7]
In the same book, Tetlock also suggests that experts often make less accurate predictions because they’re more vulnerable to confirmation bias and more sensitive to reputational pressures—both of which can distort judgment.
The expert trap is also abundantly visible in our educational system. It's often structured around memorization rather than understanding. We think we understand something, but what we’ve actually done is just familiarized ourselves with the area and memorized the terminology. Memorized names often function as covers—conveniently obscuring the parts that are still fuzzy. Eliezer Yudkowsky highlights this phenomenon in his essay "Guessing the Teacher's Password".
Suppose the teacher asks you why the far side of a metal plate feels warmer than the side next to the radiator. If you say “I don’t know,” you have no chance of getting a gold star—it won’t even count as class participation. But, during the current semester, this teacher has used the phrases “because of heat convection,” “because of heat conduction,” and “because of radiant heat.” One of these is probably what the teacher wants. You say, “Eh, maybe because of heat conduction?” This is not a hypothesis about the metal plate. This is not even a proper belief. It is an attempt to guess the teacher’s password. […] “What remains is not a belief, but a verbal behavior.”
Eliezer points out that real learning, the kind that escapes the constraints of the expert trap, is about being aware of the difference between an explanation and a password. Learning is about finding knowledge that is in close contact with how we anticipate it will show up in reality. It's about asking: if this hypothesis is true, what should I and shouldn't I encounter in reality?
The expert trap also seems to underpin the replication crisis, which may not be an aberration of the scientific process. It may be an effect of our norms around learning, our biases, and our status-seeking behavior. Daniel Kahneman, who spoke widely on this subject, sees my-side bias and confirmation bias as the main drivers. To put it simply—scientists may be subconsciously finding ways to prove theories that will make them more highly acclaimed in their field.[8]
In this section, I explore practical approaches for moving beyond the constraints of the expert trap.
Certainty: Anecdotal | Skippable: can be skipped
We are in a constant confirmation loop. To counteract this, one of my favorite questions to ask people is what they found surprising about our experience together. It is interesting how often people have a hard time answering. I also ask this question to evoke it in myself. I want to get myself out of the common trance—I am right and everything is as it should be.
As described earlier, I think we are often overriding memories to our own advantage or selecting information that confirms beliefs we previously held. To simplify this dynamic: if we believe that x is 2 and then we see that x is 3, we will override our memory to perceive that we always thought x was 3. We are in this constant confirmation tunnel. Despite how often our perceptions are wrong, we rarely experience feelings of surprise. We continuously bend reality to match our expectations. But when one pays attention to the feeling of surprise or confusion, I think one can slightly rupture this dynamic. Perhaps there is a tiny moment, a short opening between thinking that x is 2 and seeing that x is 3, when we can register that we were wrong, that this result is surprising. Perhaps, if we keep doing that, we can calibrate our thinking. We can build better intuitions about how we reason and where we are imprecise, wrong, and mistaken.
Noticing surprise is a high-frequency, low-effort practice. It is challenging to develop as a habit, but with enough repetition it can gradually shift our confirmation-bias machinery and slowly help break the cycle of reinforcing incorrect beliefs.
Certainty: Anecdotal | Skippable: can be skipped
Knowledge often travels through chains of recommendation that hinge on esteem and expert status—people often validate knowledge through recommendations from experts they trust. As this essay tries to showcase, we can expect that a significant portion of expert knowledge is corrupted—through experts' loops toward self-confirmation or status-seeking. That's why it may be so valuable to try to verify knowledge on your own. "Take no one's word for it" was the motto of the Royal Society during the Enlightenment, and as David Deutsch explains, figuring things out from first principles and being anti-authority was one of the driving forces of the Enlightenment.
In Minimal-Trust Investigations, Holden Karnofsky outlines his process for independently verifying knowledge from first principles. He gives an example of how he approached his first evaluation of the Against Malaria Foundation. The primary goal of the exercise was to defer to the knowledge of others as little as possible. Holden explains how he, all by himself, delved into the research, checked the calculations, explored counterfactuals, and examined the other variables influencing the topic.
Ideally, all knowledge would be approached from first principles, but it’s both impractical and impossible to verify everything on our own. Any question, when examined deeply, very quickly becomes too complex, and one needs to defer to the knowledge of others; still, verifying even just a few steps seems worthwhile. Holden admitted this is a laborious and intensive process, and throughout his life he has completed only a handful of minimal-trust investigations. Nevertheless, he believes that approaching learning this way, despite being extremely time-consuming, provided significant value because it influenced his broader analytical process. Even completing just a couple of these investigations gave him new intuitions for evaluating ideas more accurately. For example, he now has a habit of quickly checking research citations in texts he reads. He found that spotting low-quality research could be done quickly and with minimal effort.
Manually checking knowledge, especially when you feel confused, and verifying it yourself, even against expert or established sources, may feel slow, boring or inefficient, but it may be one of the most important practices for countering hierarchy bias in our tree of knowledge.
Certainty: Anecdotal | Skippable: can be skipped
Similarly, to help counter confirmation bias and the expert trap, and to create a better antithesis to your own work, you can do what Daniel Kahneman described as adversarial collaboration. In this approach, a person with a hypothesis finds a competent individual who holds an opposing view. Together, they collaborate to identify the core points of disagreement and establish common ground. A neutral arbiter oversees the process, and before beginning the experiment, both parties discuss what specific results would convince them to change their positions.
Kahneman himself conducted several workshops to stress-test his own ideas. Interestingly, he found changing his mind was more difficult than he had anticipated. From what I understand, Kahneman still valued the process and believed it improved his thinking, though he never shifted his fundamental positions[9]. Others have had more success with this approach. Scott Alexander encouraged people to engage in adversarial collaborations and published the results as articles on his blog. I believe that difficult tasks like identifying flaws in your own thinking may be better solved socially—working with a well-meaning person who knows a lot but disagrees with you may be one of the best ways to counter the expert trap. While adversarial collaborations may not frequently change deeply held opinions, they serve as a valuable approach to counteracting confirmation bias, my-side bias, and hierarchy bias—potentially laying the groundwork for even more effective methods yet to be developed.
Certainty: creator’s bias is my framing, user-research is well-established | Skippable: can be skipped
When we create something, we become experts on our own creation. Whether it's a hypothesis, article, press release, painting, short story, or film narration—we spend vastly more time with the piece than those who eventually view it. I call this the creator's bias. Creators attempt to communicate their message clearly, but cognitive distortions like my-side bias, confirmation bias, and hierarchy bias warp what is actually communicated.
There are several ways to counter this distortion. First, pay attention to your initial impressions of what you create. Remember the feeling when you first make something or when you look at it again after taking a break; this impression will soon be distorted or gone. Second, look for opportunities to create distance from your own work. It helps to take frequent and long breaks, during which you proactively try not to look at your work. Your job is to find ways to forget your own intentions and see the work as far removed from you as possible, to see it like a person encountering it for the first time.
Lastly, it helps to be extremely skeptical about the content of your communication and the clarity of your point of view. The most effective way to do that is user research, or more simply, asking other people how they perceive it. But it’s crucial to find the right audience and not fall into sampling bias. Pick people who are unaffiliated with you or don’t have a stake in the work. Try not to ask leading questions. Gathering quality feedback is a skill, and you can just as easily end up with misleading information. However, it doesn't need to be overcomplicated—the most effective methodology I've found and regularly use is the Design Sprint approach, particularly the user-research chapters written by Michael Margolis. I am surprised that user research methods rarely spread beyond the tech industry. These methodologies were originally developed for testing apps and websites, but they're perfectly suited to a much wider range of disciplines, including writing, sociology, urban planning, art, and virtually any form of communication. Their widespread adoption could substantially reduce the prevalence of the expert trap.
Certainty: Anecdotal, Richard Feynman perspective | Skippable: no
But perhaps the most powerful way to counteract the expert trap was practiced by Richard Feynman, one of the most accomplished physicists of the twentieth century. If you were to take one idea from all this writing, take this one. Feynman followed this principle:
"If I cannot explain it simply, I don't understand it well enough”.
When Feynman tackled complex physics concepts, he would approach freshman students and try to explain the ideas in the simplest possible way. If he couldn't explain them successfully, he would go back to studying. He repeated this process until he understood the topic thoroughly enough to explain it clearly to someone with less knowledge.
This method may go a long way toward breaking the constraints of the expert trap. Whenever Feynman deepened his knowledge, he was also forced to explain it at a more basic level. This approach prevents falling into an expertise silo by requiring you to explain complex knowledge using language, metaphors, and ideas from more basic levels and other knowledge areas. Learning things this way makes one see how things connect. Rather than just climbing up expertise levels and making narrow connections, one builds links to a broader knowledge base—continually validating ideas across varied disciplines and creating multiple knowledge pathways.
Feynman learning also means progressing toward more complex ideas while revisiting your earlier, less knowledgeable states. If the expert trap is driven by disconnecting contexts—when you are at a higher level of expertise, you forget what you didn’t understand previously—then this method forces you to keep your “cables plugged” into the prior contexts, continuously linking each idea to a wider range of disciplines, contexts, and ways of thinking.
One of my absolute favorite validations of this approach is “Fun to Imagine”, where Feynman casually explains all sorts of concepts from physics and chemistry. While school taught these subjects as separate domains, Feynman integrates matter, heat, magnetism, and electricity into one cohesive framework. It is one of those interpretations that is impossible to unsee. Whenever I learn anything new on these topics, I come back to it to reference, expand on, visualize, or verify the new knowledge.
Certainty: Anecdotal | Skippable: no
So, to summarize: the expert trap is a dynamic in which expertise gradually distorts how knowledge is understood and communicated. It’s driven by four main forces: forgetting what it was like not to know something, becoming overconfident in how well we communicate, filtering evidence to support what strengthens our self-image or what we already believe, and being driven by status-seeking rather than truth-seeking. These forces often lead to larger problems—the replication crisis, false beliefs that persist even in the face of evidence, increasing polarization, and expert knowledge becoming inaccessible or misleading. Its effects can be minimized by adopting the practices described above: noticing surprise, minimal-trust investigations, adversarial collaborations, user research, and Feynman learning.
But maybe one of the biggest first obstacles to tackling biases like the expert trap is the bias blind spot—our conviction that while biases may affect others, they somehow don't influence our own judgment. And it’s a hard one to tackle—for example, while writing this, I can feel the active force of confirmation bias within myself—I have a preexisting idea and feel compelled to find justifications for it. On the other hand, this makes sense, as I must eventually limit how thoroughly I examine any single idea. I think this is a common pattern when communicating ideas—whether during casual conversation at a party or while crafting a formal article. First, you recall a belief, and then you start collecting the arguments for it—which often turns into full-on confirmation bias or status-seeking-through-knowledge, where you’re forcing arguments onto a memorized, sometimes vague belief. But sometimes, there’s a little more space. Maybe you include some uncertainty. Maybe you examine more. Maybe you stay curious about whether the definition you’re working with is the right shape, force, or influence. Maybe you simply think out loud and weigh the pros and cons.
There are tradeoffs with this approach. It takes more time and may result in a less aesthetically pleasing narrative. But as Cate Hall points out:
Many social dynamics are paradoxical — social acts that seem weak from the inside, when undertaken without apology, actually read as very strong. For example, being willing to say “I’m wrong” or “I don’t know.”[10]
I think that counteracting the expert trap and its underlying biases (my-side bias, confirmation bias, and hierarchy bias), if that’s possible at scale, could have world-repairing consequences. In the age of accelerating technology and transformative AI, during what some call “the most important century,” when our tools have wider and wider scope and influence, our ability to overcome these biases might determine whether we end up adrift or actually manage to steer public discourse, democratic decision-making, and collective coordination.
Baruch Fischhoff (1975), Hindsight ≠ Foresight: The Effect of Outcome Knowledge on Judgment Under Uncertainty, Qual Saf Health Care
Elizabeth Louise Newton (1990), The Rocky Road from Actions to Intentions, doctoral dissertation, Stanford University
Charles Lord, Lee Ross & Mark Lepper (1979), Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence, Journal of Personality and Social Psychology
Emmanuel Trouche, Petter Johansson, Lars Hall & Hugo Mercier (2016), The Selective Laziness of Reasoning, Cognitive Science, vol. 40, no. 8
Robin Hanson & Kevin Simler (2018), The Elephant in the Brain: Hidden Motives in Everyday Life, New York: Oxford University Press
Robin Hanson (2022), More Academic Prestige Futures, Overcoming Bias
Simone Magurno (2021), A Playbook for Expressive Products, Magur.no
Daniel Kahneman, Putting Your Intuition on Ice, The Knowledge Project, Ep. #68
Daniel Kahneman (2022), Adversarial Collaboration: An EDGE Lecture by Daniel Kahneman, Edge.org
Cate Hall (2025), 50 things I know, Useful Fictions