Let's start by talking about scientific literacy. I'm going to use a weak definition of scientific literacy, one that simply requires familiarity with the Baconian method of inquiry.

I don't want to place an exact number on this issue, but I'd wager the vast majority of the population of "educated" countries is scientifically illiterate.

I - The gravity of the issue

I first got a hint that this could be a real issue when I randomly started asking people about the theory of gravity. I find gravity to be interesting because it's not at all obvious. I don't think any of us would have been able to come up with the concept in Newton's shoes. Yet it is taught to people fairly early in school.

Interestingly enough, I found that most people were not only unaware of how Newton came up with the idea of gravity, but not even in the right ballpark. I think I can classify the mistakes made into three categories, which I'll illustrate with an answer each:

  1. The Science as Religion mistake: Something something, he saw apples falling towards earth, and then he wrote down the formula for gravity (?)
  2. The Aristotelian Science mistake: Well, he observed that objects of different mass fell towards Earth with the same speed, and from that he derived that objects attract each other. Ahm, wait, hmmm.
  3. The Lack of information mistake: Well, he observed something about the motion of the planets and the moon... and, presumably he estimated the mass of some, or, hmmm, no that can't be right, maybe he just assumed mass_sun >> mass_planet >> mass_moon and somehow he found that his formula accounted for the motion of the planets.

I should caveat this by saying I don't count mistake nr 3 as scientific illiteracy; I think most of us fall into that category most of the time. Ask me how gravity could be derived in principle and I might be able to make an educated guess, and maybe (once the observations are in) I could even derive it. But the chances of that are small: I probably wouldn't know which quantities I'd need, let alone which of them can be measured with 17th-century devices, and I most certainly don't have that information readily sitting in my brain.

It's mainly failure modes 1 and 2 that I'm interested in here.

II - And Science said: Let there be truth

I think failure mode 1 is best illustrated by the first YouTube result if you search for "how newton discovered gravity". This failure mode includes two mistakes:

  • Not understanding the basis of the actual theory (in this case 'gravity' is presented as "objects fall towards Earth", rather than objects attracting each other in proportion to their masses and inversely to the square of the distance between them).
  • Not understanding the idea of evidence as a generator of theory.

In this failure mode, science works more or less like religion. There's a clergy (researchers, teachers, engineers) and there are various holy texts (school manuals, papers, specialized books).

I think a good indication of this failure mode is that people stuck here don't seem to differentiate between "what other humans in authority positions are saying" versus "what we observe in the world" as having fundamentally different epistemic weight.

Good examples here are e.g. young-earth creationists, people who believe the Earth was created ~6000 years ago. Most of these people are obviously not scientists, but some are; a quick Google search brings up Duane Gish (Berkeley Ph.D.) and Kurt Wise (professor at a no-name university in Georgia).

However, young-earth creationism is not the only unscientific belief system people have, there are insane conspiracy theories aplenty, from vaccines being brainwashing mechanisms to 5G causing viral infections.

This kind of insanity is usually not found among people affiliated with scientific or engineering institutions, but I'm unsure that this is for the right reasons.

That is to say, assume you think of science as a religion. Your epistemology is based on what other people tell you; you weigh that by their social rank and thus derive what you hold as "truth".

Assume you are a doctor that falls into this category and 70% of your friends tell you "5G towers cause covid-19". Well, then, you could probably start believing that yourself. But keep in mind, it's not only the number of people that matters, the status also matters. If the priest tells you about the word of God, that counts 100x as much as the village idiot telling you about the word of God.

Even with this context, if our good doctor's boss tells him "covid-19 infection is caused by an airborne coronavirus that passes from human to human via various bodily fluids dispersed in the air and on objects", then what the boss said would carry enough status to make him settle on the more scientifically correct explanation.

The problem here is that our good doctor would be unable to come up with this explanation on his own, even hypothetically; he lacks the foundational epistemology required to understand how such answers can be derived.

Even worse, our doctor's boss could share his epistemology; all that would be needed is for her own boss to have told her the same thing, and she would have believed it in an instant.

This Science as a Religion worldview is likely sprinkled throughout the ranks of engineers and scientists. The reason we don't see it is that, for it to become obvious, someone needs to start believing an obviously insane thing (e.g. young-earth creationism), and the chance of this happening is fairly low since it would require all their peers to also believe insane things.

As long as "correct" ideas are observed throughout his professional environment, unless he is socially inept, he will only hold the correct idea.

You would need to look at his research or question him on the scientific method or on his epistemology more broadly in order to spot this mistake. Sadly enough, I've yet to find a university that has "scientific epistemology" as a subject on the entrance exam or even as a graduation or employment requirement.

I won't speculate as to how many people who are called scientists and engineers fall into this failure mode. I think there's a gradient between this and failure mode nr 2.

However, it should be noted that this failure mode is unobvious until a new idea comes along. Then, the real scientists will assume it's probably false but judge it on its merit. The religious scientists will assume it's false because their peers haven't said it's right yet.

This is both an issue in regards to new ideas proliferating and an issue with the scientific consensus. Scientific consensus is valuable if you assume everyone polled reasoned their way through theory, independent research, and primary-source dissection to reach a conclusion.

In a world where 90% of scientists just assume that science works like a religion, a 96%-4% consensus is not a good indicator for implementing policy; it's an indicator that the few real scientists are almost evenly split on the correct solution (if the religious 90% simply side with the majority, the dissenting 4% must come almost entirely from the 10% of real scientists, i.e. roughly a 60-40 split among them).

This is bleak stuff: if most scientists understood science as a religion, the whole institution would be compromised. Not only would academia have to be thrown in the bin, but all evidence and theory produced over the last half-century would have to be carefully curated and replicated before it could be considered scientifically true.

Surface-level intuition makes me think there's a significant probability this is the case in certain sub-fields. But my theory of mind, and the fact that science seems to keep progressing, tells me this is unlikely to be the case in the areas that matter.

III - If there's a fit there's a way

In short, these are the people who don't understand why fitting a regression on all the data is different from using cross-validation to estimate how well the same regression actually generalizes.
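
As a minimal sketch of the distinction (assuming numpy and scikit-learn are available; the data and model here are made up purely for illustration), the same flexible regression can look great when scored on the data it was fit on and much worse when cross-validated on data it never saw:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(40, 1))
    y = 0.5 * X[:, 0] + rng.normal(scale=1.0, size=40)  # weak linear signal plus noise

    # A flexible model fit and scored on all the data looks impressive in-sample...
    model = make_pipeline(PolynomialFeatures(degree=10), LinearRegression())
    in_sample_r2 = model.fit(X, y).score(X, y)

    # ...while cross-validation, which scores the model only on held-out folds,
    # reveals how much of that "fit" is just memorized noise.
    cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

    print(f"in-sample R^2:       {in_sample_r2:.2f}")
    print(f"cross-validated R^2: {cv_r2:.2f}")

The first number answers "how well can this curve bend itself around these particular points?", the second answers "how well does the relationship hold up on data the model hasn't seen?", and only the second says anything about the world.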

I think most people and most scientists probably fall under the second failure mode, they are not Baconians or Popperians, but rather they are Aristotelians.

Aristotle understood the idea that we can observe the world and we can come up with theories about how it works based on observation.

What he lacked was a rigorous understanding of how observations should be undertaken. He was probably unaware of the idea of shared standards for experimental error and replication as the rules by which the validity of data can be compared.

He lacked an understanding of the language of probability which would allow him to formulate these experimental standards.

He lacked an understanding of falsifiability and Occam's razor; he didn't have a rigorous system for comparing competing theories.

In an Aristotelian framework, dropping 3 very heavy and well-lacquered balls towards Earth and seeing that they fall with a constant and equal acceleration, barring any wind, is enough to say FG = G * m1 * m2 / r^2 is a true scientific theory.

If things like the constant G, the mass of the ball, and the radius of the Earth are already known, then the Aristotelian has no issue declaring the theory correct (a small numerical sketch of this point follows the list below). He needn't ask:

  • Why do you assume this holds for all objects? After all, the only thing we have observed is three objects falling towards Earth. Even more, the balls are too light to observe this effect between them.
  • Why can this equation not be simpler? I could simplify this equation to only a single term if what you wished to describe is just the fall of objects towards the Earth, which is the only thing your experiment is showing anyway.
  • Why is dropping 3 balls enough to derive anything? Why are 2 not enough, why aren't 100 needed? Also, why is weight the property in question here and not some other property of the ball? Maybe it works for lead balls but not for copper balls?
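
To make the second and third questions concrete, here is a small sketch (round hypothetical numbers, assuming numpy): the drop experiment only pins down a single acceleration, which the full inverse-square law and a far simpler one-parameter rule predict equally well, so the experiment alone cannot favour the more complicated theory.

    import numpy as np

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_earth = 5.972e24   # mass of the Earth, kg
    R_earth = 6.371e6    # radius of the Earth, m

    ball_masses = np.array([1.0, 5.0, 20.0])  # kg, the "three heavy balls"

    # Acceleration each ball experiences under the full law F = G*m1*m2/r^2
    # (the ball's own mass cancels out: a = F/m1 = G * M_earth / R_earth^2):
    a_full_law = G * M_earth / R_earth**2 * np.ones_like(ball_masses)

    # Acceleration predicted by the much simpler one-parameter rule "a = g":
    a_simple_rule = 9.81 * np.ones_like(ball_masses)

    print(a_full_law)     # ~[9.82 9.82 9.82]
    print(a_simple_rule)  # [9.81 9.81 9.81]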

I will grant I might be straw-manning Aristotle here, he would have been able to ask some of those questions, he just didn't have a rigorous framework from which to derive them. He was working from Aristotelian logic and intuition.

This seems to be the kind of failure most people fall into, and why wouldn't they? It's an intuitive spot to be in.

To exemplify the sentiment, let me quote a former director of the Yale-Griffin Prevention Research Center, an organization I picked more or less at random, simply because I came upon a pseudo-scientific article written by him:

But science was never intended to question the reliable answers we already had. Science can and should certainly invite us to question answers, too, but not all answers are subject to doubt.

The organization in question here seems perfectly respectable, their research is no worse than any other medical research (which is not high praise, I just want to say it's not an outlier).

This is the core of the Aristotelian mistake: the assumption that we shouldn't question everything, that the way the world works is mostly obvious, that you should leave it alone and just observe it non-judgementally rather than nitpick the various edge cases in our understanding.

This is a good enough point of view from which to do engineering, but obviously not for science. The very purpose of science is to take "obvious" things, see where they stop being "obvious", and try to come up with better theories that explain those edge cases... ad infinitum.

  • In Galileo's time, it was obvious that the "nature" of an object dictated the speed with which it fell.
  • In Newton's time, it was obvious that one couldn't apply the same laws of motion both on Earth and in "the heavens".
  • When de Morveau was born, phlogiston caused fire.
  • When Max Planck rose to prominence, the universe was obviously continuous and deterministic.
  • Space and time were obviously separate and necessary entities to do physics when Einstein was beginning to operate.
  • Nuclei were obviously indivisible until the 30s; ten years later they were divisible enough to be the basis of a weapon that could destroy humanity.

For an engineer, questioning the obvious is usually a waste of time, for a scientist, it's the only good use of time.

But, why is the Aristotelian mistake seemingly so common nowadays? Why do most "scientists" and virtually all people lack the understanding of how to reduce the world to rigorous predictive theories?

Because...

IV - Science eats its young

Imagine you are a computer scientist in the 50s. You can write programs in the form of binary punch cards and get some primordial von Neumann machines to execute them... sometimes. It's really hard, there are loads of bugs and loads of hardware restrictions.

Your program risks breaking the computer, returning a seemingly correct but actually erroneous result, or working just part of the time because of a physical error (e.g. an actual dead bug) in the room-sized monstrosity it's running on.

So obviously your work will require becoming a decent digital hardware engineer. You certainly know precisely how your computer functions, from the high-level components down to the fabrication method for the transistors or switches inside. That's because assuming computers "just work" is skipping over the biggest hurdle, the fact that computers are usually really bad at "just working" and the issue often lies in the hardware.

But skip forward to today and most programmers couldn't even describe how a CPU works in principle. And why would they? That's the magic of modern computers, the fact that you don't have to understand how a CPU works to write a website. But this would become problematic if some programmers suddenly had to work on computer hardware.

This is more or less the problem with science. We spend the first 20+ years of people's lives teaching them "obvious" things that just work. Theories that are well defined and have never failed, theories they could never derive nor judge the merit of. But since nowadays we believe these theories are mostly correct, we don't think we're teaching them anything wrong.

Maybe they are taught how to run experiments, but if their experiments contradict the "expected results" we just write it off as an error and tell them to try again; we don't pore over the setup until we discover the error. Replicative lab work in college requires proving that existing theories and observations are true, even though real replication should be focused on the exact opposite.

When people ask why something is true they are given the Aristotelian explanation: Well, look at case x,y,z, it works in all of those cases, so it's true. Because most teachers don't have the required epistemology to say anything else, they are Aristotelians. Why would they be otherwise?

By the time people have the "required context" to look at theories that are still under examination and only "kind of work, but not really", they are in their mid-20s. But these are the only theories that matter, the only theories for which we still need science.

After 20+ years of teaching people that experiments are wrong if they generate unexpected results and that the universe is a series of theories that work because they work on some particular examples... we suddenly expect them to generate theories and experiment using a whole different epistemology.

On the other hand, a 14-year-old is probably not capable of scientific discovery, he would just be rediscovering obvious things people already know. So we see it as pointless to tell him "go out and do science the right way" if all the information produced is already known. I harp on about this more in Training our humans on the wrong dataset... so I won't restate that entire point, suffice to say, I think this is a horrible mistake.

The only way to teach people how to do science, to teach them how science works, and to get new and interesting discoveries that break out of the current zeitgeist... is to have them do it. Ideally have them do so starting at age 10, not at age 30. Ideally have 100% of the population doing it, even if just for the sake of understanding the process. Otherwise you end up with people that are rightfully confused as to what the difference between science and religion is.

But I think the issue goes even further:

V - Epistemic swamps and divine theories

A problem I kind of address in If Van der Waals was a neural network is that of missing information in science.

For some reason, presumably the lack of hard drives and search engines, people of the past were much more likely to record theories and discard experiments.

This seems to me to be one of the many artifacts the scientific establishment unwittingly carried over from times past. In the current world, we have enough space for storing as much experimental data as we want. From the results obtained at CERN down to every single high school with a laboratory.

But theory in itself is useless for the purpose of science. At most, it's a good mental crutch or starting point, since you'd rather not start from 0. Maybe if the inductive process by which it was derived is rediscovered, it can serve as an example or inspiration, but in itself it has little value.

Indeed, I think theory can be rather harmful. Theory is a map of the world: a good starting point if one wants to extend the map, but a horrible starting point if one wants to correct it, since so many things are interlinked that it's hard to correct one thing without changing everything. It has built-in biases and mistakes that are hard to observe, especially if the original data and experimental setup are unavailable to us.

Finally, I don't wish to say that the "religious" failure mode and the "Aristotelian" failure modes are all bad.

The fact that most people don't have any basis for their ethics system and just learn it "religiously" from their peer group is a feature, not a bug. If people were convinced that going around killing people is ok until they could understand and construct a reasonable ethical system that discourages murder, society couldn't exist.

In case you haven't noticed, this article and most of the stuff you read is "Aristotelian" in nature. I am not using all the evidence that I could be using, I am not providing ways to falsify my viewpoint, I am basing my arguments on pleasant rhetoric and a few key examples to illustrate them, examples for which I don't even have the exact data or an exact set of questions to replicate them.

If we couldn't start with "Aristotelian" thinking we would forever be in decision paralysis. Unable to come up with new ideas or say anything new about the world. The purpose of the scientific method is to bring extreme rigor to the things which are widespread and useful enough to require it. A fun chat about politics over a glass of wine is perfectly acceptable without hard evidence, implementing a policy that affects the lives of millions of people isn't.

Comments (13)

One interesting way to teach that kind of thinking could be an "artificial physics" computer game. The program would show you a bunch of virtual components that could be combined in various ways. One goal could be to predict an experiment. So your virtual lab could do every experiment, except experiment X (and maybe some set of very similar experiments). You have to correctly predict the results of X by spotting the pattern from the other experiments.

Alternately, the programmers could have chosen a hypothesis space that contains the correct outcome, and give the player access to a hypothesizing space where they can propose a range of possible explanations, and they win if they can find the correct hypothesis.

For the hypothesis space, I was thinking something like, each component implements some terms in a differential equation. Think like an electrical simulation, with unusual components, and components that don't actually exist. You know that each wire carries several real numbers, like voltage and current, and each component represents a differential equation in terms of those numbers. Like an inductor having the voltage difference across it equal to the rate of change of current through it. But of course, each round of the game gives new silly names, and new random equations, so you have to do experiments to figure out what the rules are this time.
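
A minimal sketch of that idea (purely hypothetical names and rules, using only Python's standard library) might look something like this: each round, a component gets a silly name and a hidden random relation between the quantities on its wires, and the player has to infer the relation from experiments alone.

    import random

    def make_component(rng: random.Random):
        """Return a silly name and a step function with a hidden linear rule."""
        name = rng.choice(["florb", "quanx", "drazzle"]) + str(rng.randint(10, 99))
        a, b = rng.uniform(-2, 2), rng.uniform(-2, 2)

        # Hidden rule: d(flux)/dt = a * signal + b  (the player never sees a or b)
        def step(flux, signal, dt):
            return flux + (a * signal + b) * dt

        return name, step

    rng = random.Random(42)
    name, step = make_component(rng)

    # The player's "experiment": drive the component with a known signal,
    # record how its flux evolves over time, and try to guess the hidden rule.
    flux = 0.0
    for t in range(5):
        flux = step(flux, signal=1.0, dt=0.1)
        print(name, t, round(flux, 3))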

I kind of agree with this approach; I actually propose something similar (though a different program, related to biology) in the previous article I posted here.

The reason I haven't gotten into describing it that much is because it's not like this is an area where I have a lot of power to influence stuff; my only goal here is to figure out why the failure modes happen so that I can better avoid them myself.

The reason I haven't gotten into describing it that much is because it's not like this is an area where I have a lot of power to influence stuff,

I'm not sure that's the case. If there's a well-thought-out concept about how to teach scientific thinking to kids, there's a chance that someone else puts it into practice if you openly publish it. You don't need to personally implement it to have influence.

Korzybski posited that we need to develop an educational curriculum that passes through Aristotelian reasoning and then moves people onward to non-Aristotelian and similarly to non-Euclidean and non-Newtonian thinking so that we don't have adults walking around with the wrong intuitions. Essentially, modernity couldn't catch up with itself, educationally speaking.

Personally I've found that analytical chemistry is a good discipline for learning the scientific method. You're using the same theories as everyone else, but the specific composition of your particular unknown sample has to be figured out by testing, guesswork, and induction.

In this failure, mode science works more or less like religion. There's

In this failure mode, science works more or less like religion. There's

 

There's a difference between science and "science". (In this case, schools != science. And it's not obvious why there would be a relation - the process of absorbing 'knowledge' or 'information' need not be conducted any differently between truth and falsehood.)

 

However, young-earth creationism is not the only unscientific belief system people have, there are insane conspiracy theories aplenty, from vaccines being brainwashing mechanisms or 5G causing viral infections.

What makes a belief unscientific?

 

doctor's boos

boss

He lacked was a

What he lacked was/He lacked a

 

I will grant I might be straw-manning Aristotle here, he would have been able to ask some of those questions, he just didn't have a rigorous frameworks from which to derive them. He was working from Aristotelian logic and intuition.

Close - he was working from his own logic and intuition. (If he learned 'other' 'systems' from other people that might be relevant.)

 

Replicative lab work in college requires proving that existing theories and observations are true, even though real replication should be focused on the exact opposite.

Or seeing under what conditions they do and don't hold, but yes.

 

Perhaps some day this (A) will be seen the same way as (B).

[A]

The fact most people don't have any basis for their ethics system and just learn it "religiously" from their peer group is a feature, not a bug. If people were convinced going around killing people is ok until they could understand and found a reasonable ethical system that discourages murder society couldn't exist.

[B]

On the other hand, a 14-year-old is probably not capable of scientific discovery, he would just be rediscovering obvious things people already know. So we see it as pointless to tell him "go out and do science the right way" if all the information produced is already known. I harp on about this more in Training our humans on the wrong dataset... so I won't restate that entire point, suffice to say, I think this is a horrible mistake.

The only way to teach people how to do science, to teach them how science works, and to get new and interesting discoveries that break out of the current zeitgeist... is to have them do it. Ideally have them do so starting at age 10, not at age 30. Ideally have 100% of the population doing it, even if just for the sake of understanding the process. Otherwise you end up with people that are rightfully confused as to what the difference between science and religion is.

But I think the issue goes even further:

Does the world lose out because people aren't running around committing murder? Doesn't seem like it. But whether or not you think 'widespread morality has issues' it seems worth noting that not everyone is 'moral'. Perhaps this is related to the way 'ethics' is understood.

 

Otherwise you end up with people that are rightfully confused as to what the difference between [what is right] and [what is popular/authority says] is [- or that there exists 'what is right' as separate from that.]


When framing the question this way, the first important thing is to start by getting some knowledge of how Newton actually went about his discovery. 

From the history and philosophy of science we know that different scientists seem to use quite different methods. https://www.quora.com/How-did-Newton-derive-the-universal-law-of-gravitation seems like a longer write-up, and there's nothing about Newton making experiments in it.

What makes you think that running experiments was central to Newton's discoveries about gravity?

I don't think I ever said "running experiments", I said looking at data was relevant (i.e. the data other people had collected about the movement of the planets, Earth's moon and objects here on earth)

If I implied otherwise my bad, please point it out, I will correct it.

Given that Newton's method was to stare at a given problem for a few decades and think about how it interacts with the available data it's not something that you can simply repeat in school. Newton was also previously exposed to other theory such as Hooke's law and Kepler's laws of planetary motion.

What kind of real science do you think children in school could actually do?

[anonymous]
In a world where 90% of scientists just assume that science works like a religion, a 96%-4% consensus is not a good indicator for implementing policy, it's an indicator that the few real scientists are almost evenly split on the correct solution.

Why would that cause a 96%-4% split and not a 60%-40% split?


In an Aristotelian framework, dropping 3 very heavy and well-lackered balls towards Earth and seeing they fall with a constant speed barring any wind is enough to say
FG = G * m1 * m2 / r^2
is a true scientific theory.

You mean increasing speed?

I meant to say the same speed, but yes, point taken.