It's generally accepted here that theories are valuable to the extent that they provide testable predictions. Being falsifiable means that incorrect theories can be discarded and replaced with theories that better model reality (see Making Beliefs Pay Rent). Unfortunately, reality doesn't always play nice: we will sometimes possess excellent theoretical reasons for believing a theory that nonetheless has far too many degrees of freedom to be easily falsifiable.

The prototypical example is the kind of hypothesis produced by evolutionary psychology. Clearly all aspects of humanity have been shaped by evolution, and the idea that our behaviour is an exception would be truly astounding. In fact, I'd say that it is something of an anti-prediction.

But what use is a theory that doesn't make any solid predictions? Firstly, believing in such a theory will normally have a significant impact on your priors, even if no single observation would provide strong evidence of its falsehood. Secondly, if the existing viable theories all claim A and you propose a viable theory that would be compatible with A or B, then that would make B viable again, as the toy calculation below illustrates. And sometimes that can be a worthy contribution in and of itself. Indeed, you can have a funny situation arise where people nominally reject a theory for not sufficiently constraining expectations, while really opposing it because of how people's expectations would adjust if the theory were true.
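To make the prior-shifting point concrete, here is a minimal Bayesian sketch with made-up numbers (the theory names, priors, and outcome labels are purely illustrative assumptions, not anything from the post):

```python
# Toy model: each theory permits a set of outcomes; spread each theory's
# probability mass uniformly over the outcomes it permits.
theories = {
    "T1": {"A"},        # existing theory: predicts only A
    "T2": {"A"},        # existing theory: predicts only A
    "T3": {"A", "B"},   # newly proposed theory: compatible with A or B
}

def prob_outcome(outcome, priors):
    """P(outcome) = sum over theories of P(theory) * P(outcome | theory)."""
    return sum(
        priors[name] / len(permitted)
        for name, permitted in theories.items()
        if outcome in permitted
    )

# Before T3 is proposed, all prior mass sits on T1 and T2.
before = {"T1": 0.5, "T2": 0.5, "T3": 0.0}
# After, some prior mass shifts onto the new theory.
after = {"T1": 0.35, "T2": 0.35, "T3": 0.30}

print(prob_outcome("B", before))  # 0.0  -- B is effectively ruled out
print(prob_outcome("B", after))   # 0.15 -- B is viable again
```

Note that the new theory never predicts B outright; merely being compatible with B is enough to move P(B) off zero.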

See also: Building Intuitions on Non-Empirical Arguments in Science

gjm:

Some "theories that can explain everything" may actually have the property that they can explain any individual observation but constrain what combinations of observations we can observe.

Consider, for instance, a vague but strongly adaptationist version of evolutionary psychology: it says that all features of human thought and behaviour have their origins in evolutionary advantage. Pretty much any specific feature of thought or behaviour can surely be given some sort of just-so-story explanation that will fit this theory, but it might be that feature 1 and feature 2 require mutually incompatible just-so stories, in which case the theory forbids them from occurring together; or at least that no one is able to come up with a plausibly compatible pair of stories, in which case the theory will predict that features 1 and 2 are unlikely to co-occur.

Arguably all theories are actually somewhat like this.
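A toy sketch of gjm's point (all the feature names and "stories" here are hypothetical): each feature gets its own just-so story, but the two stories rest on incompatible background assumptions, so the theory explains either feature alone while forbidding the pair.

```python
from itertools import product

# Each hypothetical just-so story requires a particular ancestral environment.
story_requires = {
    "feature1": "harsh",  # e.g. scarcity favoured hoarding
    "feature2": "mild",   # e.g. abundance favoured sharing
}

def explainable(features):
    """A combination of features is explainable only if a single
    background assumption satisfies every story invoked."""
    required = {story_requires[f] for f in features}
    return len(required) <= 1

for present in product([False, True], repeat=2):
    combo = [f for f, p in zip(["feature1", "feature2"], present) if p]
    print(combo, "->", "allowed" if explainable(combo) else "forbidden")
# Each feature is 'allowed' on its own; only the pair comes out 'forbidden',
# so the theory does constrain observations, just at the level of combinations.
```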

"if the existing viable theories all claim A and you propose a viable theory that would be compatible with A or B, then that would make B viable again"

Doesn't that mean this theory makes a prediction related to B? Maybe you could give an example of what you mean.

Saying anything is possible is a prediction, but a trivial prediction. Nonetheless, it changes expectations if before only A seemed possible.

[note: not sure where I saw this concept, and I haven't explored it enough to know if it's useful]

Some things called "theories" aren't predictive, but are explanatory. Such models may be useful for organizing your beliefs, rather than for updating your beliefs.

Interesting idea. What is the use of organising beliefs without updating them?

The idea would be that these kinds of frameworks can improve the salience or accessibility of information used when evaluating or executing more predictive models. Human brains can't actually access all the details of all the evidence they have experienced, so indexing is necessary to help determine which details are available.

Thinking more about it, though, this may be just a restatement of what ALL models do - they're not evidence in themselves, they're filters on evidence to make the quantity manageable and the weightings useful.