Frugality and working from finite data

by Snowyowl · 3rd Sep 2010



The scientific method is wonderfully simple, intuitive, and above all effective. Based on the available evidence, you formulate several hypotheses and assign prior probabilities to each one. Then, you devise an experiment which will produce new evidence to distinguish between the hypotheses. Finally, you perform the experiment, and adjust your probabilities accordingly. 

So far, so good. But what do you do when you cannot perform any new experiments?

This may seem like a strange question, one that leans dangerously close to unprovable philosophical statements with no real-world consequences. But it is in fact a serious problem facing the field of cosmology. We must learn that when there is no new evidence that could change our beliefs (or even when there is), the best thing to do is to rationally re-examine the evidence we already have.


Cosmology is the study of the universe as a whole - its origin, structure, and evolution. The discoveries of supernovae, black holes, and even galaxies all fall within its realm. More recently, the CMB (Cosmic Microwave Background) has been found to contain essential information about the origin and structure of our universe, encoded in a pattern of bright and dark spots across the sky, invisible to the naked eye.

Of course, we have no way to create new stars or galaxies of our own; we can only observe the behaviour of those that already exist. But the universe is not infinitely old, and information cannot travel faster than light. So all the cosmological observations we can possibly make come from a single slice of the universe - our past light cone, a 4-dimensional cone of spacetime. And as this cone contains a finite number of events, cosmology has only a limited amount of data it can ever gather; in fact, the amount of data that even exists is finite.

Now, finite does not mean small, and there is much that can be deduced even from a restricted data set. It all depends on how you use the data. But you only get one chance; if you need to find trained physicists who have not yet read the data, you had better hope you didn't already release it to the public domain. Ideally, you should know how you are going to distribute the data before it is acquired.


The problem is addressed in this paper (The Virtues of Frugality - Why cosmological observers should release their data slowly), published almost a year ago by three physicists. They give details of the Planck satellite, whose mission is to measure the CMB at a greater resolution and sensitivity than ever before. At the time the paper was written, the preliminary results had been released, showing the satellite to be operating properly. By now, its mission is complete, and the data is being analysed and collated in preparation for release.

The above paper holds the Planck satellite to be significant because with it we are rapidly reaching a critical point: analysis of the CMB is now limited not primarily by the accuracy of our measurements, but by interference from other microwave sources, and by cosmic variance itself.

"Cosmic variance" stems from the notion that the amount of data in existence is finite. Imagine a certain rare galactic event A that occurs with probability 0.5 whenever a certain set of conditions are met, independently of all previous occurrences of A. So far, the necessary conditions have been met exactly 2 million times. How many events A can be expected to happen? The answer is 1 million, plus or minus one thousand. This uncertainty of 1,000 is the cosmic variance, and it poses a serious problem. If we have two theories of the universe, one of which is correct in its description of A, and one of which predicts that A will happen with probability 0.501, when A has actually happened 1,001,000 times (a frequency of 0.5005), this is not statistically significant evidence to distinguish between those theories. But this evidence is all the evidence there is; so if we reach this point, there will never be any way of knowing which theory is correct, even though there is a significant difference between their predictions.

This is an extreme example and an oversimplification, but we do know (from experience) that people tend to cling to their current beliefs and demand additional evidence. If there is no such evidence either way, we must use techniques of rationality to remove our biases and examine the situation dispassionately, to see which side the current evidence really supports.


The Virtues of Frugality proposes one solution. Divide the data into pieces (methods for determining the boundaries of these pieces are given in VoF). Find a physicist who has never seen the data set in detail. Show him the first piece of data, let him design models and set parameters based on this data piece. When he is satisfied with his ability to predict the contents of the second data piece, show him that one as well and let him adjust his parameters and possibly invent new models. Continue until you have exhausted all the data.
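As a toy sketch of this protocol (the names and the squared-error score are invented for illustration; the real piece boundaries are specified in VoF), suppose the "physicist" is a one-parameter model that estimates a mean. Each new piece is scored out-of-sample before it is revealed for fitting:

```python
# Toy stand-in for the VoF release protocol. The "physicist" here is a
# one-parameter model (a running mean); the names and the squared-error
# score are invented for illustration.

def fit(revealed_pieces):
    """Set the model's parameter using only the data revealed so far."""
    points = [x for piece in revealed_pieces for x in piece]
    return sum(points) / len(points)

def out_of_sample_error(model, piece):
    """Score the current model on a piece it has never seen."""
    return sum((x - model) ** 2 for x in piece) / len(piece)

def frugal_release(pieces):
    revealed = [pieces[0]]
    model = fit(revealed)                  # design the model on piece 1 alone
    scores = []
    for piece in pieces[1:]:
        scores.append(out_of_sample_error(model, piece))  # predict first...
        revealed.append(piece)             # ...only then reveal the piece
        model = fit(revealed)              # adjust parameters on all revealed data
    return model, scores

model, scores = frugal_release([[1.0, 2.0], [2.0, 3.0], [1.5, 2.5]])
print(model, scores)
```

The point of the ordering is that `out_of_sample_error` is always computed before a piece joins the training set, so every score is a genuine prediction rather than a fit to data already seen.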

To a Bayesian superintelligence, this is transparent nonsense. Given a certain list of theories and associated prior probabilities (e.g. the set of all computable theories with complexity below a given limit), there is only one right answer to the question "What is the probability that theory K is true given all the available evidence?" Just because we're dealing in probability doesn't mean we can't be certain.
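To make the "one right answer" concrete: given a prior over a fixed list of theories and a likelihood for the observed evidence under each, Bayes' theorem fixes the posterior uniquely. A minimal sketch, reusing the two theories from the cosmic-variance example and assuming equal priors (an assumption made here for illustration):

```python
from math import exp, lgamma, log

def log_binom_pmf(k, n, p):
    """Log-probability of k successes in n independent trials at probability p."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

n, k = 2_000_000, 1_001_000        # trials and observed occurrences of A
priors = {0.5: 0.5, 0.501: 0.5}    # equal prior probability for each theory

# Bayes' theorem: posterior is proportional to prior x likelihood.
log_post = {p: log(pr) + log_binom_pmf(k, n, p) for p, pr in priors.items()}
shift = max(log_post.values())     # subtract the max for numerical stability
unnorm = {p: exp(lp - shift) for p, lp in log_post.items()}
total = sum(unnorm.values())
posterior = {p: u / total for p, u in unnorm.items()}

print(posterior)
```

Tellingly, the posterior comes out almost exactly 50/50: at an observed frequency of 0.5005, the likelihoods under p = 0.5 and p = 0.501 are nearly identical. Even a perfect Bayesian cannot extract a distinction that the data does not contain.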

Humans, however, are not Bayesian superintelligences, and we are not capable of conceiving of all computable theories at once. Given new evidence, we might think up a new theory that we would not previously have considered. VoF asserts that we cannot then use the evidence we already have to check that theory; we must find new evidence. We already know that the evidence we have fits the theory, because it made us think of it. Using that same evidence to check it would be incorrect; not because of confirmation bias, but simply because we are counting the same evidence twice.
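The double-counting worry can be made concrete with a toy Bayes update (the numbers here are invented): updating once on a piece of evidence gives the right posterior, but running the same likelihoods through the update a second time manufactures confidence out of nothing.

```python
# A correct single Bayes update versus mistakenly applying the same
# evidence twice. All numbers are invented for illustration.
prior = 0.5                       # P(theory) before seeing the evidence
lik_if_true = 0.8                 # P(evidence | theory)
lik_if_false = 0.2                # P(evidence | not theory)

def update(p, l_true, l_false):
    """One application of Bayes' theorem for a binary hypothesis."""
    return p * l_true / (p * l_true + (1 - p) * l_false)

once = update(prior, lik_if_true, lik_if_false)   # the right answer: 0.8
twice = update(once, lik_if_true, lik_if_false)   # same evidence again: ~0.94

print(once, twice)
```

This is VoF's concern in miniature: the evidence that suggested the theory has, in effect, already been spent in the first update.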


This sounds reasonable, but I happen to disagree. The authors' view overlooks the fact that the intuition which brought the new theory to our attention is itself using statistical methods, albeit unconsciously. Checking the new theory against the available evidence (basing your estimated prior probability solely on Occam's Razor) is not counting the same evidence twice; it's checking your working. Every primary-school child learning arithmetic is told that if they suspect they have made a mistake (which is generally the case with primary-school children), they should derive their result again, ideally by a different method. That is what we are doing here: re-evaluating our subconscious estimate of the posterior probability using mathematically exact methods.

That is not to say that the methods for analysing finite data sets cannot be improved, simply that the improvement suggested by VoF is suboptimal. Instead, I suggest a method which paraphrases one of Yudkowsky's posts: that of giving all the available evidence to separate individuals or small groups, without telling them of any theories which had already been developed based on that evidence, and without letting them collude with any other such groups. The human tendency to be primed by existing ideas instead of thinking of new ones would thus be reduced in effect, since there would be other groups with different sets of existing ideas.

Implementing such a system would be difficult, if not downright politically dangerous, in our current academic culture. Still, I have hope that this is merely a logistical problem, and that we as a species are able to overcome our biases even in such restricted circumstances. Because we may only get one chance.
