# 72

Previously: Some physics 101 students calculate that a certain pendulum will have a period of approximately 3.6 seconds. Instead, when they run the experiment, the stand holding the pendulum tips over and the whole thing falls on the floor.
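For reference, the small-angle prediction the students likely used is $T = 2\pi\sqrt{L/g}$. A minimal sketch of that calculation (the string length of roughly 3.2 m is an assumption, back-derived from the stated 3.6 s period, not given in the story):

```python
import math

g = 9.8  # m/s^2, acceleration due to gravity
L = 3.2  # m, assumed string length (back-derived from the 3.6 s period)

# Small-angle approximation for a simple pendulum: T = 2*pi*sqrt(L/g)
T = 2 * math.pi * math.sqrt(L / g)
print(f"Predicted period: {T:.2f} s")  # approximately 3.6 s
```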

The students, being diligent Bayesians, argue that this is strong evidence against Newtonian mechanics, and that the professor’s attempts to rationalize the results in hindsight are just that: rationalization in hindsight. What says the professor?

“Hold on now,” the professor answers, “‘Newtonian mechanics’ isn’t just some monolithic magical black box. When predicting a period of approximately 3.6 seconds, you used a wide variety of laws and assumptions and approximations, and then did some math to derive the actual prediction. That prediction was apparently incorrect. But at which specific point in the process did the failure occur?

For instance:

• Were there forces on the pendulum weight not included in the free body diagram?
• Did the geometry of the pendulum not match the diagrams?
• Did the acceleration due to gravity turn out to not be 9.8 m/s^2 toward the ground?
• Was the acceleration of the pendulum’s weight times its mass not always equal to the sum of forces acting on it?
• Was the string not straight, or its upper endpoint not fixed?
• Did our solution of the differential equations governing the system somehow not match the observed trajectory, despite the equations themselves being correct, or were the equations wrong?
• Was some deeper assumption wrong, like that the pendulum weight has a well-defined position at each time?
• … etc”

The students exchange glances, then smile. “Now those sound like empirically-checkable questions!” they exclaim. The students break into smaller groups, and rush off to check.

Soon, they begin to report back.

“After replicating the setup, we were unable to identify any significant additional forces acting on the pendulum weight while it was hanging or falling. However, once on the floor there was an upward force acting on the pendulum weight from the floor, as well as significant friction with the floor. It was tricky to isolate the relevant forces without relying on acceleration as a proxy, but we came up with a clever - ” … at this point the group is drowned out by another.

“On review of the video, we found that the acceleration of the pendulum’s weight times its mass was indeed always equal to the sum of forces acting on it, to within reasonable error margins, using the forces estimated by the other group. Furthermore, we indeed found that acceleration due to gravity was consistently approximately 9.8 m/s^2 toward the ground, after accounting for the other forces,” says the second group to report.

Another arrives: “Review of the video and computational reconstruction of the 3D arrangement shows that, while the geometry did basically match the diagrams initially, it failed dramatically later on in the experiment. In particular, the string did not remain straight, and its upper endpoint moved dramatically.”

Another: “We have numerically verified the solution to the original differential equations. The error was not in the math; the original equations must have been wrong.”
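One way such a numerical check might look, assuming the group used the standard pendulum equation $\ddot\theta = -(g/L)\sin\theta$: integrate it with a fixed-step RK4 scheme and compare against the analytic small-angle solution $\theta(t) = \theta_0\cos(\omega t)$. The values of $L$ and $\theta_0$ below are illustrative assumptions, not from the text:

```python
import math

g, L = 9.8, 3.2          # assumed values; L back-derived from the 3.6 s period
theta0 = 0.05            # small initial angle (rad) so the linearised solution applies
omega = math.sqrt(g / L) # angular frequency of the small-angle solution

def deriv(state):
    """Right-hand side of theta'' = -(g/L) * sin(theta), as a first-order system."""
    theta, dtheta = state
    return (dtheta, -(g / L) * math.sin(theta))

# Fixed-step fourth-order Runge-Kutta integration over one predicted period
state = (theta0, 0.0)
dt, t_end = 0.001, 3.6
for _ in range(int(t_end / dt)):
    k1 = deriv(state)
    k2 = deriv((state[0] + dt/2*k1[0], state[1] + dt/2*k1[1]))
    k3 = deriv((state[0] + dt/2*k2[0], state[1] + dt/2*k2[1]))
    k4 = deriv((state[0] + dt*k3[0], state[1] + dt*k3[1]))
    state = (state[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             state[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

# Compare against the analytic small-angle solution theta(t) = theta0 * cos(omega * t)
analytic = theta0 * math.cos(omega * t_end)
print(f"numeric: {state[0]:.5f}, analytic: {analytic:.5f}")
```

Agreement here verifies only the math, which is the point the group is making: if the observed trajectory still diverges, the equations themselves (or their inputs) must be at fault.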

Another: “On review of the video, qualitative assumptions such as the pendulum being in a well-defined position at each time look basically correct, at least to precision sufficient for this experiment. Though admittedly unknown unknowns are always hard to rule out.” [1]

A few other groups report, and then everyone regathers.

“Ok, we have a lot more data now,” says the professor, “what new things do we notice?”

“Well,” says one student, “at least some parts of Newtonian mechanics held up pretty well. The whole F = ma thing worked, and the force due to gravity was basically as claimed.”

“And notably, the parts which did not hold up as well are parts which don’t generalize as directly to other systems,” says another student.

Another: “Expanding on that: there’s an underspecified step in the use of Newtonian mechanics where we need to figure out what geometry to use, and what the relevant forces are on each body. Our deeper experimental investigation really highlighted that underdetermination, because the underspecified places are where the problems were: the string’s upper endpoint moved, the string didn’t stay straight, there were forces from the floor. All of those were things which we could, in principle, include in the model while still using basically-standard Newtonian mechanics.

On the other hand, that also emphasizes the incompleteness of Newtonian mechanics as a model: it doesn’t fully specify how to figure out the geometry and forces for any particular physical setup. Which does call its predictive power into question somewhat - though at least some parts, like F = ma, made predictions which replicated just fine.”

“But,” another student chimes in, “Newtonian mechanics isn’t completely silent about how to specify geometry and forces for any particular physical setup. We’re supposed to draw free-body diagrams showing which things interact with which other things nearby. And certain common physical components are supposed to exert standard forces - like springs, or friction, or normal force from the ground, or gravity. So while there is some underspecification, there aren’t arbitrary degrees of freedom there. In other words, there should be some imaginable behaviors which aren’t consistent with any Newtonian mechanics-based model…”

“Sounds like Bell’s Theorem?” says another student.

“... don’t get started on that, we’ll be here all day,” replies a TA.

“Anyway,” says the professor, “one generalizable takeaway from all this is that Newtonian mechanics isn’t a monolithic black box. Like any practically-useful scientific theory, it has ‘gears’ - individual pieces which we compose in order to apply the theory to specific physical systems. Part of what makes gears useful is that, when a prediction is wrong (as inevitably happens all the time), we can go look at a whole bunch of details from our experiment, and then back out which specific gears were correct and which weren’t.”

“I buy that it worked here, but that’s starting to sound suspiciously like hindsight bias again,” replies a student. “We look at a bunch of details from the experiment, and only then decide which sub-predictions were correct and which weren’t? Sounds fishy.”

The professor: “In practice, predictions are hard to get right, even when the underlying theory is basically correct. Even a problem as simple as rolling a steel ball down a Hot Wheels ramp and getting it to land in a cup is surprisingly hard to get right on the first try. So think of it like this: in order for precise predictions to work well in practice, they basically-always need to come with an implicit disclaimer saying ‘... and if this is wrong, then one or a few of the input assumptions are wrong, but probably not most or all of them, and I’m more confident in <some> and less confident in <others>’. With that implicit fallback built in, the theory still makes falsifiable predictions even when the headline claim is wrong. Indeed, the theory makes additional falsifiable predictions even when the headline prediction is right - e.g. if we’d found a period of 3.6 seconds for the pendulum, but follow-up investigations found that the string wasn’t taut at all, that sure would be a failed prediction of the theory.

Applied to our pendulum: the implicit prediction would be that the period would most probably be 3.6 seconds, but if not then one or a few of the input assumptions was violated but the theory was otherwise basically correct. And among those input assumptions, F = ma was relatively unlikely to be violated, while failures of geometric assumptions or unaccounted-for forces were more likely. And of course in practice we don’t list all the relevant implicit assumptions, because that rabbit hole runs pretty deep.”

[1] Note that one of the ways in which this story most heavily diverges from scientific practice is that all of the follow-up experiments did get the answers we intuitively expect, and did not diverge from both the original model and the original experiment in still further ways which would themselves require recursive examination to uncover.


Notably, this approach requires a culture where you reject "theory-gurus" and instead expect theorists to share the gears of their theories explicitly, so you don't end up with bullies who pretend to have hard-to-understand gearsy models which allow them to rationalize any failed prediction in order to manipulate others.

This sounds like law or legal bickering!

That is an unrealistic and thoroughly unworkable expectation.

World models are pre-conscious. We may be conscious of verbalised predictions that follow from our world models, and of various cognitive processes that involve visualisation (in the form of imagery, inner monologue, etc.), since these give rise to qualia. We do not, however, possess direct awareness of the actual gear-level structures of our world models, but must get at these through (often difficult) inference.

When learning about any sufficiently complex phenomenon, such as pretty much any aspect of psychology or sociology, there are simply too many gears for it to be possible to identify all of them; a lot of them are bound to remain implicit and only be noticed when specifically brought into dispute. This is not to say that there can be no standard by which to expect "theory gurus" to prove themselves not to be frauds. For example, if they have unusual worldviews, they should be able to pinpoint examples (real or invented) that illustrate some causal mechanism that other worldviews give insufficient attention to. They should be able to broadly outline how this mechanism relates to their worldview, and how it cannot be adequately accounted for by competing worldviews. This is already quite sufficient, as it opens up the possibility for interlocutors to propose alternate views of the mechanism being discussed and show how they are, after all, able to be reconciled with other worldviews than the one proposed by the theorist.

Alternatively, they should be able to prove their merit in some other way, like showing their insight into political theory by successfully enacting political change, into crowd psychology by being successful propagandists, into psychology and/or anthropology by writing great novels with a wide variety of realistic characters from various walks of life, etc.

But expecting them to be able to explicate to you the gears of their models is somewhat akin to expecting a generative image AI to explain its inner workings to you. It's a fundamentally unreasonable request, all the more so because you have a tendency to dismiss people as bluffing whenever they can't follow you into statistical territory so esoteric that there are probably fewer than a thousand people in the world who could.

It can be hard to predict the gears ahead of time, but it's not that hard to lay out a bunch of gears when queried. One can then maintain and refine a library of gears with explanations as part of the discourse.

The students are all acting like that Literal Internet Guy who doesn't understand how normies communicate. The problem isn't the existence of implicit assumptions. The problem is that students with normal social skills will understand those implicit assumptions in advance. If you ask any normal student, before the experiment, "if the pendulum stand falls over, will measuring the pendulum's period prove much of anything", they'll not only answer "no", they'll consistently answer "no" - it really is something they already know in advance, not something that's made up by the professor only in hindsight.

Of course, this is complicated by the ability to use pedantry for trolling. Any student who did understand the implicit assumptions in advance could pretend that he doesn't, and claim that the professor is making excuses in hindsight. Since you can't read the student's mind, you can't prove that he's lying.

You could likely contact some students in an entirely different place who haven't heard of this experiment taking place, and ask them to find out.

taught

Should be 'taut'.

Fixed, thanks.