• Boy: Why are you washing your hands?
  • Shaman: Because this root is poisonous.
  • Boy: Then why pull it out of the ground?
  • Shaman: Because the pulp within it can cure the mosquito disease.
  • Boy: But, I thought you said it was poisonous.
  • Shaman: It is, but the outside is more so than the inside.
  • Boy: Still, if it's poisonous, how can it cure things?
  • Shaman: It's the dose that makes the poison.
  • Boy: So can I eat the pulp?
  • Shaman: No, because you are not sick; it's the disease that makes the medicine.
  • Boy: So it's good for a sick man, but bad for me?
  • Shaman: Yes, because the sick man must suffer in order to be cured.
  • Boy: Why?
  • Shaman: Because he made the spirits angry, that's why he got the disease.
  • Boy: So why will suffering make it better?
  • Shaman: Suffering is part of admitting guilt and seeking forgiveness; the root helps the sick man atone for his transgression. Afterwards, the spirits lift the disease.
  • Boy: The spirits seem evil.
  • Shaman: The spirits are neither evil nor good, as we understand it.
  • Boy: Ohhh
  • Shaman: Here, sit, let me tell you how our world was made...

Ok, that sounds like a half-believable wise mystic, right? It would make for a decent intro to a mediocre young-adult fantasy coming-of-age power-trip novel.

Now, imagine if the shaman just left it as "The root is poisonous if you are not sick, but it helps if you are sick and eat a bit of its pulp, most of the time". No rules of thumb about dose making the poison, no speculations as to how it works, no postulation of metaphysical rules, no analogies to morality, no grand story for the creation of the universe.

That'd be one boring shaman, right? Outright unbelievable. Nobody would trust such a witcherman to take them into the middle of the jungle and give them a psychedelic brew; he's daft. The other guy though... he might be made of the right stuff to hold ayahuasca ceremonies.


This is not limited to shamans, though; it's just more obvious in the case of shamans, because you don't use their services.

If you are willing to stop averting your eyes from uncomfortable uncertainty, you'll see it in most modern specialists, from electricians to doctors to nuclear physicists. We are all just 2 or 3 reality-validated claims away from babbling nonsense in order to give answers to a "why".

Ok, let me try again:

  • Boy: Why did you give my dad ibuprofen?
  • Doctor: Because it helps cure diseases, like colds.
  • Boy: So if I take it every day, will I never get sick?
  • Doctor: Oh, no no, you shouldn't do that, it can be dangerous.
  • Boy: Then why is it good when you're sick?
  • Doctor: Because it treats inflammation, which is something that happens only when you're sick.
  • Boy: What's inflammation, why does it happen when you're sick?
  • Doctor: It's a process through which the body fights diseases.
  • Boy: So isn't it good?
  • Doctor: Well no, because the body also damages itself by doing it.
  • Boy: Why?
  • Doctor: Well, because it's impossible to fight a disease without also damaging the body, the infected cells need to die, and the systems fighting the infection are imprecise, they will end up destroying nearby healthy tissue.
  • Boy: Ok, but then if we stop inflammation, how does the disease go away?
  • Doctor: Ahm, it's specific types of inflammation that are bad, or specific parts of it, and ibuprofen stops those nasty parts.
  • Boy: Why are the nasty parts there to begin with?
  • Doctor: Well, because evolution made our immune system adapted for an environment with more dangerous pathogens and injuries, but for modern man diseases are less dangerous.
  • Boy: But aren't there more diseases now due to people travelling all over? Like the coronavirus.
  • Doctor: Oh, well, yes, but hygiene also played a part in this. Have you ever heard about Louis Pasteur? No? Ok, let me tell you a story...

Is this a believable, smart doctor? I'd say so; I'd even say she's above average. The kind of medic you want as your GP.

Would you be less likely to go to her if she just said: "Because ibuprofen makes people with colds feel better until they go away on their own. It might be a bit dangerous, but it might also be a bit helpful. We don't really know; the processes involved are so complex as to be at the edge of scientific understanding, and I've never read the few studies that tried to take the question seriously. It's just common practice to take it when you have a cold."

To most people this doctor sounds like an uninformed moron; they'll go and complain about her in their antivaxx naturist mom Facebook groups. But why? This doctor is being more honest, both about her own reason for recommending ibuprofen (everybody does it) and about the real state of knowledge on the subject of efficacy and mechanism (it's complicated and uncertain).

We really don't understand whether NSAIDs are useful when confronted with "inflammation", which is itself a vague term that nobody agrees on. We neither perfectly understand how COX-1 and COX-2 interact with the immune system, platelets, and other signalling mechanisms, nor do we understand the multitude of NSAID effects beyond COX inhibition. Even if we did, we don't understand the immune system as a whole. There are likely situations where NSAIDs help, others where they hurt, and others where they are neutral but alleviate unnecessary suffering; currently, we can't explain why or even when those happen.


I have observed a pattern around the internet:

  1. Someone claims the expert/educated/mainstream consensus on some topic is bonk. Some branch of medicine is hogwash, some physics theory is incoherent and useless, the ethical stances of some group are blatantly inconsistent and dangerous.
  2. I cheer them on: you go, fellow crazy person! This stuff is bs and more people should hear about it.
  3. I keep reading their reply/post/article/book... and get increasingly sad as they finish off their claims with: But I have THE SOLUTION that medics don't want you to know, but I KNOW the correct interpretation of this realm of physics, but MY ETHICS could be imposed upon that group and they'd be saved.

I cannot express how much this saddens me. Why must it be that all healthy scepticism always turns into quackery?

Maybe this in part scares me because it contributes towards a rule like: all writers who are sceptical of a lot of popular theories are insane quacks. Given that I fit in that category, I might be the same, simply blind to whatever my outrageous claims are.

More broadly though, it fits this model of people needing to know certain things, being unable to live in scepticism, in the absence of knowledge.

You can't just disagree with modern medicine; you must be into yoga and crystal healing. You can't just think that cosmology is unfalsifiable and unreplicable, and thus unscientific; you must then go on about your own theory of the universe. You can't just think that the tectonic plate model adds little value and is unfounded; you must then preach at length about why your cult has a better answer in its theory of geogenesis.

Why? Why!? Why!!


And I think in that very desperate question I've given myself an answer of sorts.

Because sometimes we can't stop demanding answers, and other people, or we ourselves, feel obliged to provide them.

I don't know why people can't be sceptical of things (theories, models, findings) without needing to fill that hole with other things.

I don't even claim that I'm correct in thinking this. Maybe my observations of this phenomenon are wrong, maybe the vast majority of sceptical thinkers don't do this, maybe I'm just selectively remembering these incidents because they bug me.

I think there's some probability that it's a real phenomenon, and that we should safeguard against using scepticism of something as an excuse for inserting our own dumb theory. Ideally, we should try to cultivate the ability to be ok with not knowing, under almost any circumstances.

I feel like any further speculation would be gravely disrespecting my own opinion.

But it does seem like something that a good pop writer would do. I'd be a much more credible social critic if I went on to expand on how this phenomenon defines what counts as a fundamental question, or how the rational mind needs some sort of "completeness" for its models, or how conceptual thinking is additive... or some such.

11 comments

Why must it be that all healthy scepticism always turns into quackery?

Because our instincts are optimized for signalling. Trusting the authorities on some topics and doubting them on other topics is a lousy signal of either conformity or nonconformity. You will have no allies; both groups will laugh at you.

This points at something I find very hard to work against: a desire to explain why things are the way they are rather than just accept that they are the way they are. Explanations are useful, but things will still be as they are even if I have no explanation for why they are the way they are. Yet when I find something in the world, there's a movement of mind that quickly follows observation and seeks to create a gears-level model of the world. On the one hand, such models are useful. On the other, a desire to explain in the absence of any information to build it from is worse than useless: it's the path to confusion.

This deep psychological need to latch onto some story, any story, to explain what we don't understand, seems to me to tie back into the Bayesian Brain Hypothesis. Basically, our brains are constantly and uncontrollably generating hypotheses for the evidence we encounter in the world, seeing which ones predict our experiences with the greatest likelihood (weighted by our biological and cultural priors, of course). These hypotheses come in the form of stories because stories have the minimum level of internal complexity to explain the complex phenomena we experience (which, themselves, we internalize as stories). Choosing the "best" explanation, of course, follows Bayes' formula:

P(H_i | E) = P(E | H_i) · P(H_i) / Σ_j [ P(E | H_j) · P(H_j) ]

A few problems with this:

  1. We might just be terrible at choosing good priors (P(H_i)). Occam's Razor / Solomonoff Induction just isn't that intuitive to most humans. Most people find consciousness (which is familiar) to be simpler than neuroscience (which is alien), so they see no problem hypothesizing disembodied spirits, yet they scoff at the idea of humans being no more than matter. Astrology sounds reasonable when you have no reason to think that stars and planets shouldn't have personalities and try to affect your personal life, like everyone else, just so long as you don't try to figure out how that would actually work at a mechanistic level. Statistical modeling, on the other hand, is hard for humans to grasp, and therefore much more complicated, and therefore much less likely to have any explanatory power a priori, at least as far as most people are concerned.
  2. Likelihood functions (P(E | H_i)) can be really hard to figure out. They require coming up with hypotheses that have the same causal structure as the real system they're trying to predict. When most of our declarative mental models exist at the level of abstraction of human social dynamics, it can be difficult to accurately imagine all the interacting bodily systems and metabolic pathways that make NSAIDs (or any other drugs, to say nothing of whole foods) have the precise effect that they do.
  3. Unfortunately, evolution didn't equip us with very good priors for how much weight to give to unimagined hypotheses, so we end up normalizing the posterior distribution by only those hypotheses we can think of. That means the denominator in the equation above (Σ_j P(E | H_j) · P(H_j)) is often much less than it should be, even if the priors and evidential likelihoods are all correct, because other hypotheses have not had a chance to weigh in. For most people, all future (or as-yet unheard-of) scientific discoveries are effectively given a prior probability of 0, while all the myths passed down from the tribal/religious/political elders seem to explain everything as well as anything they've ever heard, and so those stories get all the weight and all the acceptance (a toy sketch of this renormalization effect follows the list).
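To make that last point concrete, here is a minimal, purely illustrative sketch in Python; the hypothesis names, priors, and likelihoods are invented for the example. It shows how renormalizing over only the hypotheses we happen to imagine inflates each of their posteriors, compared with reserving some prior mass for "something we haven't thought of":

```python
# Toy illustration (hypothetical numbers): Bayes' rule normalized over only
# the hypotheses we managed to imagine, versus reserving prior mass for the
# hypotheses we didn't.

def posterior(priors, likelihoods):
    """Return P(H | E) for each hypothesis, normalizing over the listed set only."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}  # P(E | H) * P(H)
    z = sum(joint.values())  # the denominator: only the hypotheses we thought of
    return {h: p / z for h, p in joint.items()}

# Priors P(H) and likelihoods P(E | H) over the hypotheses we actually considered.
priors = {"angry spirits": 0.6, "bad air": 0.3, "germ theory": 0.1}
likelihoods = {"angry spirits": 0.5, "bad air": 0.4, "germ theory": 0.7}

print(posterior(priors, likelihoods))
# -> the imagined hypotheses soak up all the probability mass (~0.61, 0.24, 0.14)

# Reserve half the prior for a catch-all "something we haven't thought of",
# with a vague average likelihood; every named story is deflated accordingly.
priors_u = {h: p * 0.5 for h, p in priors.items()}
priors_u["something we haven't thought of"] = 0.5
likelihoods_u = {**likelihoods, "something we haven't thought of": 0.5}

print(posterior(priors_u, likelihoods_u))
# -> the catch-all gets ~0.51, and the named stories drop to ~0.30, 0.12, 0.07
```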

It's unavoidable for us as humans with Bayesian-ish brains to start coming up with stories to explain phenomena, even when evidence is lacking. We just need to be careful to cultivate an awareness for when our priors may be mistaken, for when our stories don't have sufficiently reductionist internal causal structure to explain what they are meant to explain, and for when we probably haven't even considered hypotheses that are anywhere close to the true explanation.

I am eager to explore your answer. Why do you think that "stories have the minimum level of internal complexity to explain the complex phenomena we experience"? Is it only because you suppose we internalize phenomena as stories? Do you have any data or studies on that? What's your understanding of a story? Isn't a straightforward description even less complex, since you don't need a full-blown plot to depict something like a chair?

I notice that while a lot of the answer is formal and well-grounded, "stories have the minimum level of internal complexity to explain the complex phenomena we experience" is itself a story :) Personally, I would say that any gear-level model will have gaps in the understanding, and trying to fill these gaps will require extra modeling which also has gaps, and so on forever. My guess is that part of our brain will constantly try to find the answers and fill the holes, like a small child asking "why x? ...and why y?". So if a more practical part of us wants to stop investigating, it plugs the holes with fuzzy stories which sound like understanding. Obviously, this is also a story, so discount it accordingly...

I notice that while a lot of the answer is formal and well-grounded, "stories have the minimum level of internal complexity to explain the complex phenomena we experience" is itself a story :)

Yep. That's just how humans think about it: complex phenomena require complex explanations. "Emergence", as complexity arising from the many simple interactions of many simple components, is, I think, a pretty recent concept for humanity. People still think intelligent design makes more intuitive sense than evolution, for instance, even though the latter makes astronomically fewer assumptions and should be favored a priori by Occam's Razor.


I don't have anything to add, but this phenomenon was discussed in greater detail in Explain/Worship/Ignore. https://www.lesswrong.com/posts/yxvi9RitzZDpqn6Yh/explain-worship-ignore

By "story," I mean something like a causal/conceptual map of an event/system/phenomenon, including things like the who, what, when, where, why, and how. At the level of sentences, this would be a map of all the words according to their semantic/syntactic role, like part of speech, with different slots for each role and connections relating them together. At the level of what we would normally call "stories," such a story map would include slots for things like protagonist, antagonist, quest, conflict, plot points, and archetypes, along with their various interactions.

In the brain, these story maps/graphs could be implemented as regions of the cortex. Just as some cortical regions have retinotopic or somatotopic maps, more abstract regions may contain maps of conceptual space, along with neural connections between subregions that represent causal, structural, semantic, or social relationships between items in the map. Other brain regions may learn how to traverse these maps in systematic ways, giving rise to things like syntax, story structure, and action planning.

I've suggested before (https://www.lesswrong.com/posts/KFbGbTEtHiJnXw5sk/?commentId=PHYKtp7ACkoMf6hLe) that I think these sorts of maps may be key to understanding things like language and consciousness. Stories that can be loaded into and from long-term memory or transferred between minds via language can offer a huge selective advantage, both to individual humans and to groups of humans. I think the recognition, accumulation, and transmission of stories is actually pretty fundamental to how human psychology works.

Thank you for explaining it. I really like this concept of stories because it focuses on the psychological aspect of stories as a way of understanding something, which is sometimes missing in literary perspectives. How would you differentiate between a personal understanding of a definition and a story? Would you?

My main approach to stories is to define them more abstractly as a rhetorical device for representing change. This allows me to differentiate between a story (changes), a description (states) and an argument (logical connections of assertions). I suppose, in your understanding, all of them would be some kind of story? This differentiation could also be helpful in understanding the process of telling a story versus giving a description.

Unfortunately, you did not explain how your answer relates to "stories have the minimum level of internal complexity to explain the complex phenomena we experience". In your answer you do not compare stories to other ways of encoding information in the brain. Are there any others, in your opinion?

Unfortunately, evolution didn’t equip us with very good priors for how much weight to give to unimagined hypotheses, so we end up normalizing the posterior distribution by only those hypotheses we can think of

...and any effort to push against that, to assign more probability to the unknown hypotheses, is an effort in the direction of modest epistemology.

Thanks for your essay, it was encouraging and inspiring!

What you have observed seems to accurately reflect the world and the way people function (not just on the internet). When I did a Google search for "the need to believe" I found links that seemed interesting and relevant. I have a working theory about the human brain which seems to fit the evidence that I see in my life, and what I have read.

The human brain is a giant pattern-matching machine. It operates most of the time on incomplete data. But the brain doesn't express patterns as abstract theories; it expresses those observed patterns as "belief". We observe evidence, and we form a belief about the world in a very unscientific way.

There is no genetic, neurological process for "possible pattern, but not enough data".

Science itself (and by extension rationality itself) seems to be something humans invented to overcome the normal operating mode of the human brain, which naturally operates more as a social instrument governed by sophistry and cognitive bias.

Another thing that ties in more specifically to the internet is the need to grab people's interest.  Claiming that an accepted pattern is not true or not useful is unlikely to attract attention or support.  Claiming that a different pattern (moral, ethical, social, etc) fits reality better will be more engaging to readers because of the nature of human brains that I described above.