Hard vs Soft in fields as attitudes towards model collision

by Jemist, 20th Apr 2021


Practice & Philosophy of Science · World Modeling

Many people will describe physics, chemistry, microeconomics and some parts of biology as "hard sciences" while describing psychology, sociology, politics and other parts of biology as "soft sciences". I think this taps into a set of attitudes within each field towards what we might refer to as model collision.

In fields which attempt to describe reality, different systems are described with different models. For example, in physics we might use fluid dynamics to describe flowing water, rigid body mechanics to describe the movement of a set of gears, and quantum electrodynamics to calculate the energy of a chemical bond between two hydrogen atoms. Each model can be considered to cover a certain area of reality, with different models covering different situations. Some models have clearly-delineated boundaries, and sometimes there are gaps where nothing is understood, but in many cases it is not clear which model to apply. This could be because the boundaries of a model are not well defined within the model itself (what counts as a fluid?) or because two or more models overlap in scope (almost all of economics and psychology). We can refer to this as a model collision.

For example (to use physics again): should we model not-quite-molten metal as a fluid which can flow, or as a plastic solid body deforming under its own weight? In this case the answer is to experiment first and build a model later. In fields like this, the collision of two different models can be resolved by experiment. It helps in physics that the underlying reality is well understood: in this case we know that the rigidity (or not) of a body is governed by the forces between particles.

Without the ability to experiment (or to experiment reproducibly), as often happens in fields like psychology or politics, the two (or many more) models must end up coexisting. There are two ways for this to resolve: one is "everyone in the field puts appropriate weight on each model when making decisions, while searching for the truth", and the other is "the field splits into several angry mobs trying to prove that their model is the obviously correct one". Sadly the second case is more common (partly because it results in more papers being published). This can also result in models going in and out of fashion according to the political concerns of those working to get grant funding. (I believe the hypothesis that protein aggregates cause Alzheimer's is finally going out of fashion, hurrah!)

So is there a way to shift from the second scenario to the first? Well, as stated above, if we can simply do experiments, we can find out the answer. Demanding mathematical rigour in our models can also help, as it allows us to compare them more meaningfully (in some cases in chemistry, once two models are known to be accurate in extreme cases, we just numerically interpolate between them for intermediate cases). We can also try to push for a culture shift away from the warlike confusion scenario towards the more collaborative one.
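The interpolation trick mentioned in passing can be made concrete with a toy sketch. Everything here is invented for illustration: the two "models" are arbitrary functions, and the crossover bounds are made up. The only point is the mechanism: trust model A in one extreme regime, model B in the other, and blend linearly in between.

```python
def blend_models(x, model_a, model_b, x_lo, x_hi):
    """Numerically interpolate between two models of the same quantity.

    model_a is trusted for x <= x_lo, model_b for x >= x_hi;
    in between, predictions are blended linearly.
    """
    if x <= x_lo:
        return model_a(x)
    if x >= x_hi:
        return model_b(x)
    # Fraction of the way through the intermediate regime.
    t = (x - x_lo) / (x_hi - x_lo)
    return (1.0 - t) * model_a(x) + t * model_b(x)

# Hypothetical models, accurate only in their own extreme regimes.
model_a = lambda x: 2.0 * x   # e.g. a low-x approximation
model_b = lambda x: x ** 2    # e.g. a high-x approximation

low = blend_models(0.5, model_a, model_b, 1.0, 3.0)  # pure model_a
mid = blend_models(2.0, model_a, model_b, 1.0, 3.0)  # 50/50 blend
high = blend_models(4.0, model_a, model_b, 1.0, 3.0)  # pure model_b
```

Real examples in chemistry use more careful switching functions than a linear ramp, but the idea is the same: the blend is justified empirically, not derived from either model.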


As an aside: If we can put different scientific disciplines on a scale based on how they handle model collision, what happens if we go off the deep end of model coexistence? I think we end up with disciplines like film, literature or art analysis. Here various theories of analysis are explicitly allowed to coexist, and are more about rationalizing why a piece of art has an effect, rather than trying to predict the effect of new art. Note that softer disciplines can definitely still have something interesting to say, though evaluating the field as a whole will probably not allow you to make predictions about it.


2 comments

Consider these two fields: gravitational waves as of just before LIGO, and rat nutrition. Gravitational waves was very much an area driven by simple (but mathematically difficult) formal theories, and a lack of data. Rat nutrition is a field with much easily accessible data and fairly easy experiments, but much more complexity. If you gave sociologists some magical ability to run lots of society-scale experiments (maybe really good simulations, maybe a multiverse viewer), then the field still wouldn't be physics. The most the sociologists could produce is huge tables of statistical correlations.

"the field splits into several angry mobs trying to prove that their model is the obviously correct one"

People do that because they think models might be true, and not just useful. If they didn't, they would be quite happy to let models co-exist, trading off predictive accuracy against complexity, or domain of applicability.

Truth leads to warfare in a way that usefulness does not.

"Well, as stated above, if we can simply do experiments, we can find out the answer"

There's no guarantee that everything is experimentally testable.

Worse still, the relationship between predictive accuracy and correspondence-to-reality is obscure. What happens if two theories are equally predictive, but imply different things about reality?