Today's post, Science Isn't Strict Enough, was originally published on 16 May 2008. A summary (taken from the LW wiki):

Science lets you believe any damn stupid idea that hasn't been refuted by experiment. Bayesianism says there is always an exactly rational degree of belief given your current evidence, and this does not shift a nanometer to the left or to the right depending on your whims. Science is a social freedom - we let people test whatever hypotheses they like, because we don't trust the village elders to decide in advance - but you shouldn't confuse that with an individual standard of rationality.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was When Science Can't Help, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

5 comments

The obvious answer to "I have something better than science" is "cool, show us your track record."

(The actual claim is something more along the lines of "I have something that I think might be better than the present social process of science", but that's a rather less splashy headline.)

It might help to watch actual working scientists work.

This feels a bit like attacking a straw man. Every actual scientific professional develops a sense of what problems are worth working on and which approaches are more or less promising. This sort of intuition isn't scientifically provable -- there's no way to know in advance what you'll find -- but people can and do give reasons why they think X is more promising than Y. People value things like elegance, simplicity, ease of use, and so forth. Learning these sorts of judgements is one of the major things people do as PhD students and junior researchers. It may not be part of the scientific method, strictly defined, but it's something we deliberately teach apprentice scientists.

You can formalize those technical judgements in terms of Solomonoff priors and expected utilities if you like, but doing so is a little silly. Different people have different computational hardware and therefore different measures of complexity. Saying "X has a lower Kolmogorov complexity than Y, for me" is no more or less objective than "X seems simpler".
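To see why the measure is machine-relative, here's a toy sketch that uses off-the-shelf compressors as a crude stand-in for description length (Kolmogorov complexity itself is uncomputable, and the two strings below are invented for illustration):

```python
import zlib
import bz2

# Two hypothetical hypotheses written out as strings; compressed size
# is only a rough proxy for description length.
x = b"the planets move on ellipses with the sun at one focus"
y = b"each planet rides an epicycle riding a deferent riding an equant"

for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress)]:
    print(name, "x:", len(compress(x)), "y:", len(compress(y)))
```

The two compressors assign different lengths, and nothing privileges one of them: the "complexity" you report depends on the reference machine you brought with you, which is exactly the "for me" above.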

There's also something a little silly about saying "Science isn't good enough, use Bayes". General Bayesian updating is intractable. So you can't use it. All you can ever really do is crude approximations. I don't think you gain a lot by dressing up your judgement in a mathematical formalism that doesn't really do any work for you.
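For a concrete sense of what those crude approximations look like, here is a minimal sketch (the coin-flip setup and the grid are illustrative assumptions, not anything from the post): exact updating over all expressible hypotheses is out of reach, so one updates over a small hand-picked grid instead.

```python
import numpy as np

# Estimate a coin's bias from 7 heads in 10 flips, updating over a
# tiny discretized hypothesis space rather than "all hypotheses".
thetas = np.linspace(0.01, 0.99, 99)        # the hand-picked grid
prior = np.ones_like(thetas) / len(thetas)  # uniform prior over the grid

heads, flips = 7, 10
likelihood = thetas**heads * (1 - thetas)**(flips - heads)

posterior = prior * likelihood
posterior /= posterior.sum()                # normalize

print("posterior mean bias:", (thetas * posterior).sum())
```

All the Bayesian machinery here operates inside a hypothesis space chosen in advance by human judgement; that choice is the work the formalism doesn't do for you.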

I feel like the suggested distinction between Bayes and science is somewhat forced. Before I knew of Bayes, I knew of Occam's razor and its central role in science; I had always been under the impression that science favored simpler hypotheses. If it is suggested that we don't see people rigorously adhering to Bayes' theorem when developing hypotheses, the reason is not that science doesn't value the simpler hypotheses favored by Bayesian priors, but that determining the simplest hypothesis is incredibly difficult to do in many cases. And this difficulty is acknowledged in the post. As such, I don't see science as diverging from Bayes; the way it's practiced is just a consequence of the admitted difficulty of finding the correct priors and determining the space of hypotheses.
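One way to make the "science already favors simpler hypotheses" reading concrete is to write Occam's razor directly into the prior. A toy sketch, with made-up description lengths in bits standing in for complexity:

```python
# Hypotheses about a coin: (assumed description length in bits, P(heads)).
# The bit counts are invented for illustration; a Solomonoff-style prior
# weights a hypothesis h by roughly 2**(-length(h)).
hypotheses = {
    "fair coin, theta=0.5": (10, 0.5),
    "biased coin, theta=0.7": (10, 0.7),
    "coin rigged by an elaborate daemon, theta=0.7": (40, 0.7),
}

prior = {h: 2.0 ** -bits for h, (bits, _) in hypotheses.items()}
Z = sum(prior.values())
prior = {h: p / Z for h, p in prior.items()}

def likelihood(theta, heads=7, flips=10):
    return theta**heads * (1 - theta)**(flips - heads)

# The two theta=0.7 hypotheses predict identically, so only the
# complexity penalty in the prior separates them after updating.
post = {h: prior[h] * likelihood(t) for h, (_, t) in hypotheses.items()}
Z = sum(post.values())
for h, p in post.items():
    print(h, "->", round(p / Z, 4))
```

The difficulty lives in the bit counts: assigning them honestly is the hard part, and the formalism doesn't say how.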

Similarly, if the Bayesian answer is difficult to compute, that doesn't mean that Bayes is inapplicable; it means you don't know what the Bayesian answer is.

The author is far too free with the notion of the Bayesian answer. At the level of common practice there is meta-analysis, which is fraught with problems. There's subjective Bayesianism, which is fine in principle, but in practice has the same limitations: Why should that be my prior? What underlying mechanism can explain all these inconsistently measured results, and how do I formulate all those complicating possibilities into a likelihood function? Objective priors are a perennial subject of research in statistics which help somewhat in simple parametric problems. Non-parametric priors (e.g. Dirichlet, Gaussian, Lévy, ... processes) can be made to work in some cases, but aren't easy to formulate in statistically efficient, computationally efficient, or even sensible (e.g. statistically consistent) ways, in general. AIXI and Solomonoff priors hold out tantalizing theoretical possibilities, but these are not yet practicable.
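The "why should that be my prior?" worry is easy to demonstrate with a conjugate toy model (the Beta priors below are invented for illustration):

```python
# Same data, different subjective Beta priors, different answers.
# Beta-Binomial conjugacy: a Beta(a, b) prior plus `heads` successes in
# `flips` trials gives a Beta(a + heads, b + flips - heads) posterior.
heads, flips = 7, 10

priors = {
    "uniform Beta(1,1)":     (1, 1),
    "skeptical Beta(50,50)": (50, 50),
    "credulous Beta(8,2)":   (8, 2),
}

for name, (a, b) in priors.items():
    post_mean = (a + heads) / (a + b + flips)
    print(name, "-> posterior mean", round(post_mean, 3))
```

Each answer is perfectly Bayesian; nothing inside the machinery adjudicates between the three priors.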

The best practice of smart scientists and statisticians in my experience is a process of iterative refinement. (This is true both in their own heads and in the conduct of "Science." Call it a computational shortcut, if you will, designed to accommodate our limited human brains. Importantly, it also lets us stand on the shoulders of the giants who came before.)

One conducts experiments, builds models and hypotheses, tests predictions, and conducts new experiments. Bayesian inference is only a consistent way to proceed to the truth if the prior contains mass on that truth. Often reality turns out to be more complicated than we had any business imagining it to be before conducting the experiments, and practicable priors would have missed it.
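A small sketch of that last point, under invented assumptions: data come from a coin with bias 0.65, but the prior only puts mass on biases 0.3 and 0.5, so no amount of data drives the posterior toward the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
true_theta = 0.65
thetas = np.array([0.3, 0.5])            # the truth is not in here
log_prior = np.log(np.array([0.5, 0.5])) # uniform over the two allowed biases

for n in [10, 100, 1000]:
    heads = (rng.random(n) < true_theta).sum()
    log_like = heads * np.log(thetas) + (n - heads) * np.log(1 - thetas)
    lp = log_prior + log_like
    post = np.exp(lp - lp.max())
    post /= post.sum()
    print(n, "flips -> posterior:", dict(zip(thetas.tolist(), post.round(4))))
```

The posterior just piles onto the least-wrong hypothesis available (0.5), which is why refining the hypothesis space itself matters.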

The main point, then, is that iteration and testing of models happens as part of the process of understanding an experiment and analyzing data, not just at the level of designing a sequence of experiments.

Box's 1980 paper contains much wisdom still relevant to the aspiring rationalist today: http://www.cs.princeton.edu/courses/archive/fall11/cos597C/reading/Box1980.pdf