A Fervent Defense of Frequentist Statistics

[Highlights for the busy: de-bunking standard "Bayes is optimal" arguments; frequentist Solomonoff induction; and a description of the online learning framework. Note: cross-posted from my blog.]

Short summary. This essay makes many points, each of which I think is worth reading, but if you are only going to understand one point I think it should be “Myth 5″ below, which describes the online learning framework as a response to the claim that frequentist methods need to make strong modeling assumptions. Among other things, online learning allows me to perform the following remarkable feat: if I’m betting on horses, and I get to place bets after watching other people bet but before seeing which horse wins the race, then I can guarantee that after a relatively small number of races, I will do almost as well overall as the best other person, even if the number of other people is very large (say, 1 billion), and their performance is correlated in complicated ways.

If you’re only going to understand two points, then also read about the frequentist version of Solomonoff induction, which is described in “Myth 6″.

Main article. I’ve already written one essay on Bayesian vs. frequentist statistics. In that essay, I argued for a balanced, pragmatic approach in which we think of the two families of methods as a collection of tools to be used as appropriate. Since I’m currently feeling contrarian, this essay will be far less balanced and will argue explicitly against Bayesian methods and in favor of frequentist methods. I hope this will be forgiven as so much other writing goes in the opposite direction of unabashedly defending Bayes. I should note that this essay is partially inspired by some of Cosma Shalizi’s blog posts, such as this one.

This essay will start by listing a series of myths, then debunk them one-by-one. My main motivation for this is that Bayesian approaches seem to be highly popularized, to the point that one may get the impression that they are the uncontroversially superior method of doing statistics. I actually think the opposite is true: I think most statisticians would for the most part defend frequentist methods, although there are also many departments that are decidedly Bayesian (e.g. many places in England, as well as some U.S. universities like Columbia). I have a lot of respect for many of the people at these universities, such as Andrew Gelman and Philip Dawid, but I worry that many of the other proponents of Bayes (most of them non-statisticians) tend to oversell Bayesian methods or undersell alternative methodologies.

If you are like me from, say, two years ago, you are firmly convinced that Bayesian methods are superior and that you have knockdown arguments in favor of this. If this is the case, then I hope this essay will give you an experience that I myself found life-altering: the experience of having a way of thinking that seemed unquestionably true slowly dissolve into just one of many imperfect models of reality. This experience helped me gain more explicit appreciation for the skill of viewing the world from many different angles, and of distinguishing between a very successful paradigm and reality.

If you are not like me, then you may have had the experience of bringing up one of many reasonable objections to normative Bayesian epistemology, and having it shot down by one of many “standard” arguments that seem wrong but not for easy-to-articulate reasons. I hope to lend some reprieve to those of you in this camp, by providing a collection of “standard” replies to these standard arguments.

I will start with the myths (and responses) that I think will require the least technical background and be most interesting to a general audience. Toward the end, I deal with some attacks on frequentist methods that I believe amount to technical claims that are demonstrably false; doing so involves more math. Also, I should note that for the sake of simplicity I’ve labeled everything that is non-Bayesian as a “frequentist” method, even though I think there’s actually a fair amount of variation among these methods, although also a fair amount of overlap (e.g. I’m throwing in statistical learning theory with minimax estimation, which certainly have a lot of overlap in ideas but were also in some sense developed by different communities).

The Myths:

  • Bayesian methods are optimal.
  • Bayesian methods are optimal except for computational considerations.
  • We can deal with computational constraints simply by making approximations to Bayes.
  • The prior isn’t a big deal because Bayesians can always share likelihood ratios.
  • Frequentist methods need to assume their model is correct, or that the data are i.i.d.
  • Frequentist methods can only deal with simple models, and make arbitrary cutoffs in model complexity (aka: “I’m Bayesian because I want to do Solomonoff induction”).
  • Frequentist methods hide their assumptions while Bayesian methods make assumptions explicit.
  • Frequentist methods are fragile, Bayesian methods are robust.
  • Frequentist methods are responsible for bad science.
  • Frequentist methods are unprincipled/hacky.
  • Frequentist methods have no promising approach to computationally bounded inference.

Myth 1: Bayesian methods are optimal. Presumably when most people say this they are thinking of either Dutch-booking or the complete class theorem. Roughly, these say the following:

Dutch-book argument: Every coherent set of beliefs can be modeled as a subjective probability distribution. (Roughly, coherent means “unable to be Dutch-booked”.)

Complete class theorem: Every non-Bayesian method is worse than some Bayesian method (in the sense of performing deterministically at least as poorly in every possible world).

Let’s unpack both of these. My high-level argument regarding Dutch books is that I would much rather spend my time trying to correspond with reality than trying to be internally consistent. More concretely, the Dutch-book argument says that if for every bet you force me to take one side or the other, then unless I’m Bayesian there’s a collection of bets that will cause me to lose money for sure. I don’t find this very compelling. This seems analogous to the situation where there’s some quant at Jane Street, and they’re about to run code that will make thousands of dollars trading stocks, and someone comes up to them and says “Wait! You should add checks to your code to make sure that no subset of your trades will lose you money!” This just doesn’t seem worth the quant’s time: it will slow down the code substantially, and instead the quant should be writing the next program to make thousands more dollars. This is basically what Dutch-booking arguments seem like to me.

Moving on, the complete class theorem says that for any decision rule, I can do better by replacing it with some Bayesian decision rule. But this injunction is not useful in practice, because it doesn’t say anything about which decision rule I should replace it with. Of course, if you hand me a decision rule and give me infinite computational resources, then I can hand you back a Bayesian method that will perform better. But it still might not perform well. All the complete class theorem says is that every local optimum is Bayesian. To be a useful theory of epistemology, I need a prescription for how, in the first place, I am to arrive at a good decision rule, not just a locally optimal one. And this is something that frequentist methods do provide, to a far greater extent than Bayesian methods (for instance by using minimax decision rules such as the maximum-entropy example given later). Note also that many frequentist methods do correspond to a Bayesian method for some appropriately chosen prior. But the crucial point is that the frequentist told me how to pick a prior I would be happy with (also, many frequentist methods don’t correspond to a Bayesian method for any choice of prior; they nevertheless often perform quite well).

Myth 2: Bayesian methods are optimal except for computational considerations. We already covered this in the previous point under the complete class theorem, but to reiterate: Bayesian methods are locally optimal, not globally optimal. Identifying all the local optima is very different from knowing which of them is the global optimum. I would much rather have someone hand me something that wasn’t a local optimum but was close to the global optimum, than something that was a local optimum but was far from the global optimum.

Myth 3: We can deal with computational constraints simply by making approximations to Bayes. I have rarely seen this borne out in practice. Here’s a challenge: suppose I give you data generated in the following way. There is a collection of vectors {x_1}, {x_2}, {\ldots}, {x_{10,000}}, each with {10^6} coordinates. I generate outputs {y_1}, {y_2}, {\ldots}, {y_{10,000}} in the following way. First I globally select {100} of the {10^6} coordinates uniformly at random, then I select a fixed vector {u} whose entries on those {100} coordinates are drawn from i.i.d. Gaussians and whose remaining coordinates are zero. Now I set {y_n = u^{\top}x_n} (i.e. {y_n} is the dot product of {u} with {x_n}). You are given {x} and {y}, and your job is to infer {u}. This is a completely well-specified problem; the only task remaining is computational. I know people who have solved this problem using Bayesian methods with approximate inference. I have respect for these people, because doing so is no easy task. I think very few of them would say that “we can just approximate Bayesian updating and be fine”. (Also, this particular problem can be solved trivially with frequentist methods.)
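
To make the frequentist one-liner concrete, here is a minimal sketch, at a much smaller scale than the challenge above and assuming for illustration that the {x_n} are i.i.d. Gaussian vectors; the specific regularization strength and library (scikit-learn’s Lasso, i.e. L1-regularized least squares) are incidental choices, not part of the challenge.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Scaled-down stand-in for the problem above: 1,000 coordinates instead of 10^6,
# 200 observations instead of 10,000, 10 nonzero coordinates instead of 100.
rng = np.random.default_rng(0)
n_samples, n_features, n_nonzero = 200, 1000, 10

u = np.zeros(n_features)                                    # hidden sparse vector
support = rng.choice(n_features, size=n_nonzero, replace=False)
u[support] = rng.standard_normal(n_nonzero)                 # Gaussian on the support

X = rng.standard_normal((n_samples, n_features))            # assumed Gaussian inputs
y = X @ u                                                   # y_n = u . x_n

# L1 regularization drives most coefficients exactly to zero.
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
lasso.fit(X, y)

top = np.sort(np.argsort(-np.abs(lasso.coef_))[:n_nonzero])
print("true support:          ", np.sort(support))
print("largest recovered ones:", top)   # with high probability these coincide
```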

A particularly egregious example of this is when people talk about “computable approximations to Solomonoff induction” or “computable approximations to AIXI” as if such notions were meaningful.

Myth 4: the prior isn’t a big deal because Bayesians can always share likelihood ratios. Putting aside the practical issue that there would in general be an infinite number of likelihood ratios to share, there is the larger issue that for any hypothesis {h}, there is also the hypothesis {h'} that matches {h} exactly up to now, and then predicts the opposite of {h} at all points in the future. You have to constrain model complexity at some point; the question is how. To put this another way, sharing my likelihood ratios without also constraining model complexity (by focusing on a subset of all logically possible hypotheses) would be equivalent to just sharing all sensory data I’ve ever accrued in my life. To the extent that such a notion is even possible, I certainly don’t need to be a Bayesian to do such a thing.

Myth 5: frequentist methods need to assume their model is correct or that the data are i.i.d. Understanding the content of this section is the most important single insight to gain from this essay. For some reason it’s assumed that frequentist methods need to make strong assumptions (such as Gaussianity), whereas Bayesian methods are somehow immune to this. In reality, the opposite is true. While there are many beautiful and deep frequentist formalisms that answer this, I will choose to focus on one of my favorites, which is online learning.

To explain the online learning framework, let us suppose that our data are {(x_1, y_1), (x_2, y_2), \ldots, (x_T, y_T)}. We don’t observe {y_t} until after making a prediction {z_t} of what {y_t} will be, and then we receive a penalty {L(y_t, z_t)} based on how incorrect we were. So we can think of this as receiving prediction problems one-by-one, and in particular we make no assumptions about the relationship between the different problems; they could be i.i.d., they could be positively correlated, they could be anti-correlated, they could even be adversarially chosen.

As a running example, suppose that I’m betting on horses and before each race there are {n} other people who give me advice on which horse to bet on. I know nothing about horses, so based on this advice I’d like to devise a good betting strategy. In this case, {x_t} would be the {n} bets that the other people recommend, {z_t} would be the horse that I actually bet on, and {y_t} would be the horse that actually wins the race. Then, supposing that {y_t = z_t} (i.e., the horse I bet on actually wins), {L(y_t, z_t)} is the negative of the payoff from correctly betting on that horse. Otherwise, if the horse I bet on doesn’t win, {L(y_t, z_t)} is the cost I had to pay to place the bet.

If I’m in this setting, what guarantee can I hope for? I might ask for an algorithm that is guaranteed to make good bets — but this seems impossible unless the people advising me actually know something about horses. Or, at the very least, one of the people advising me knows something. Motivated by this, I define my regret to be the difference between my penalty and the penalty of the best of the {n} people (note that I only have access to the latter after all {T} rounds of betting). More formally, given a class {\mathcal{M}} of predictors {h : x \mapsto z}, I define

\displaystyle \mathrm{Regret}(T) = \frac{1}{T} \sum_{t=1}^T L(y_t, z_t) - \min_{h \in \mathcal{M}} \frac{1}{T} \sum_{t=1}^T L(y_t, h(x_t))

In this case, {\mathcal{M}} would have size {n} and the {i}th predictor would just always follow the advice of person {i}. The regret is then how much worse I do on average than the best expert. A remarkable fact is that, in this case, there is a strategy such that {\mathrm{Regret}(T)} shrinks at a rate of {\sqrt{\frac{\log(n)}{T}}}. In other words, I can have an average score within {\epsilon} of the best advisor after {\frac{\log(n)}{\epsilon^2}} rounds of betting.

One reason that this is remarkable is that it does not depend at all on how the data are distributed; the data could be i.i.d., positively correlated, negatively correlated, even adversarial, and one can still construct an (adaptive) prediction rule that does almost as well as the best predictor in the family.
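
For the curious, here is a minimal sketch (my own illustration, assuming losses scaled to {[0,1]} and the textbook learning-rate choice) of one strategy that achieves this kind of guarantee: the exponential-weights (Hedge) algorithm, which keeps a weight per advisor and multiplicatively down-weights whoever incurs loss.

```python
import numpy as np

def hedge(losses, eta=None):
    """Exponential-weights (Hedge) strategy over n advisors.

    losses: array of shape (T, n); losses[t, i] is advisor i's loss in [0, 1]
            at round t, revealed only after we commit to our weights for round t.
    Returns (our average loss, the best single advisor's average loss).
    """
    T, n = losses.shape
    if eta is None:
        eta = np.sqrt(np.log(n) / T)      # standard learning-rate choice
    w = np.ones(n)                        # one weight per advisor
    total = 0.0
    for t in range(T):
        p = w / w.sum()                   # our distribution over advisors
        total += p @ losses[t]            # our (expected) loss this round
        w *= np.exp(-eta * losses[t])     # multiplicatively punish bad advice
    return total / T, losses.mean(axis=0).min()

# The losses can be correlated or even adversarial; random ones suffice for a demo.
rng = np.random.default_rng(0)
ours, best = hedge(rng.uniform(size=(2000, 50)))
print(f"our average loss: {ours:.3f}, best advisor: {best:.3f}")
```

Nothing in the algorithm looks at how the losses were generated, which is exactly why the guarantee survives correlated or even adversarial data.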

To be even more concrete, if we assume that all costs and payoffs are bounded by {\$1} per round, and that there are {1,000,000,000} people in total, then an explicit upper bound is that after {28/\epsilon^2} rounds, we will be within {\epsilon} dollars on average of the best other person. Under slightly stronger assumptions, we can do even better, for instance if the best person has an average variance of {0.1} about their mean, then the {28} can be replaced with {4.5}.

It is important to note that the betting scenario is just a running example, and one can still obtain regret bounds under fairly general scenarios; {\mathcal{M}} could be continuous and {L} could have quite general structure; the only technical assumption is that {\mathcal{M}} be a convex set and that {L} be a convex function of {z}. These assumptions tend to be easy to satisfy, though I have run into a few situations where they end up being problematic, mainly for computational reasons. For an {n}-dimensional model family, typically {\mathrm{Regret}(T)} decreases at a rate of {\sqrt{\frac{n}{T}}}, although under additional assumptions this can be reduced to {\sqrt{\frac{\log(n)}{T}}}, as in the betting example above. I would consider this reduction to be one of the crowning results of modern frequentist statistics.

Yes, these guarantees sound incredibly awesome and perhaps too good to be true. They actually are that awesome, and they are actually true. The work is being done by measuring the error relative to the best model in the model family. We aren’t required to do well in an absolute sense, we just need to not do any worse than the best model. So as long as at least one of the models in our family makes good predictions, we will as well. This is really what statistics is meant to be doing: you come up with everything you imagine could possibly be reasonable, and hand it to me, and then I come up with an algorithm that will figure out which of the things you handed me was most reasonable, and will do almost as well as that. As long as at least one of the things you come up with is good, then my algorithm will do well. Importantly, due to the {\log(n)} dependence on the dimension of the model family, you can actually write down extremely broad classes of models and I will still successfully sift through them.

Let me stress again: regret bounds are saying that, no matter how the {x_t} and {y_t} are related, no i.i.d. assumptions anywhere in sight, we will do almost as well as any predictor {h} in {\mathcal{M}} (in particular, almost as well as the best predictor).

Myth 6: frequentist methods can only deal with simple models and need to make arbitrary cutoffs in model complexity. A naive perusal of the literature might lead one to believe that frequentists only ever consider very simple models, because many discussions center on linear and log-linear models. To dispel this, I will first note that there are just as many discussions that focus on much more general properties such as convexity and smoothness, and that can achieve comparably good bounds in many cases. But more importantly, the reason we focus so much on linear models is because we have already reduced a large family of problems to (log-)linear regression. The key insight, and I think one of the most important insights in all of applied mathematics, is that of featurization: given a non-linear problem, we can often embed it into a higher-dimensional linear problem, via a feature map {\phi : X \rightarrow \mathbb{R}^n} ({\mathbb{R}^n} denotes {n}-dimensional space, i.e. vectors of real numbers of length {n}). For instance, if I think that {y} is a polynomial (say cubic) function of {x}, I can apply the mapping {\phi(x) = (1, x, x^2, x^3)}, and now look for a linear relationship between {y} and {\phi(x)}.
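
Here is a toy sketch of that cubic example (the particular target function and noise level are made up for illustration): after applying {\phi}, an ordinary least-squares fit recovers the nonlinear relationship.

```python
import numpy as np

# Fit y = 2 - x + 0.5 x^3 (plus noise) with ordinary least squares on the
# feature map phi(x) = (1, x, x^2, x^3): the nonlinear problem becomes linear.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = 2 - x + 0.5 * x**3 + 0.1 * rng.standard_normal(200)

Phi = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)   # featurization
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)             # plain linear regression
print(np.round(coef, 2))   # close to [2, -1, 0, 0.5]
```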

This insight extends far beyond polynomials. In combinatorial domains such as natural language, it is common to use indicator features: features that are {1} if a certain event occurs and {0} otherwise. For instance, I might have an indicator feature for whether two words appear consecutively in a sentence, whether two parts of speech are adjacent in a syntax tree, or for what part of speech a word has. Almost all state of the art systems in natural language processing work by solving a relatively simple regression task (typically either log-linear or max-margin) over a rich feature space (often involving hundreds of thousands or millions of features, i.e. an embedding into {\mathbb{R}^{10^5}} or {\mathbb{R}^{10^6}}).

A counter-argument to the previous point could be: “Sure, you could create a high-dimensional family of models, but it’s still a parameterized family. I don’t want to be stuck with a parameterized family, I want my family to include all Turing machines!” Putting aside for a second the question of whether “all Turing machines” is a well-advised model choice, this is something that a frequentist approach can handle just fine, using a tool called regularization, which after featurization is the second most important idea in statistics.

Specifically, given any sufficiently quickly growing function {\psi(h)}, one can show that, given {T} data points, there is a strategy whose average error is at most {\sqrt{\frac{\psi(h)}{T}}} worse than any estimator {h}. This can hold even if the model class {\mathcal{M}} is infinite dimensional. For instance, if {\mathcal{M}} consists of all probability distributions over Turing machines, and we let {h_i} denote the probability mass placed on the {i}th Turing machine, then a valid regularizer {\psi} would be

\displaystyle \psi(h) = \sum_i h_i \log(i^2 \cdot h_i)

If we consider this, then we see that, for any probability distribution over the first {2^k} Turing machines (i.e. all Turing machines with description length {\leq k}), the value of {\psi} is at most {\log((2^k)^2) = k\log(4)}. (Here we use the fact that {\psi(h) \leq \sum_i h_i \log(i^2)}, since {h_i \leq 1} and hence {h_i\log(h_i) \leq 0}.) This means that, if we receive roughly {\frac{k}{\epsilon^2}} data points, we will achieve error within {\epsilon} of the best Turing machine that has description length {\leq k}.
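
To spell out the chain of inequalities behind the parenthetical (using only the definitions above):

\displaystyle \psi(h) = \sum_i h_i \log(i^2) + \sum_i h_i \log(h_i) \leq \sum_i h_i \log(i^2) \leq \log((2^k)^2) = k\log(4),

so the guarantee {\sqrt{\frac{\psi(h)}{T}}} is at most {\sqrt{\frac{k\log(4)}{T}}}, which drops below {\epsilon} once {T \geq \frac{k\log(4)}{\epsilon^2}}, i.e. after roughly {\frac{k}{\epsilon^2}} data points.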

Let me note several things here:

  • This strategy makes no assumptions about the data being i.i.d. It doesn’t even assume that the data are computable. It just guarantees that it will perform as well as any Turing machine (or distribution over Turing machines) given the appropriate amount of data.
  • This guarantee holds for any given sufficiently smooth measurement of prediction error (the update strategy depends on the particular error measure).
  • This guarantee holds deterministically, no randomness required (although predictions may need to consist of probability distributions rather than specific points, this is also true of Bayesian predictions).

Interestingly, in the case where the prediction error is given by the negative log probability assigned to the truth, the corresponding strategy that achieves the error bound is just normal Bayesian updating. But for other measurements of error, we get different update strategies. Although I haven’t worked out the math, intuitively this difference could be important if the universe is fundamentally unpredictable but our notion of error is insensitive to the unpredictable aspects.

Myth 7: frequentist methods hide their assumptions while Bayesian methods make assumptions explicit. I’m still not really sure where this came from. As we’ve seen numerous times so far, a very common flavor among frequentist methods is the following: I have a model class {\mathcal{M}}, I want to do as well as any model in {\mathcal{M}}; or put another way:

Assumption: At least one model in {\mathcal{M}} has error at most {E}.
Guarantee: My method will have error at most {E + \epsilon}.

This seems like a very explicit assumption with a very explicit guarantee. On the other hand, an argument I hear is that Bayesian methods make their assumptions explicit because they have an explicit prior. If I were to write this as an assumption and guarantee, I would write:

Assumption: The data were generated from the prior.
Guarantee: I will perform at least as well as any other method.

While I agree that this is an assumption and guarantee of Bayesian methods, there are two problems that I have with drawing the conclusion that “Bayesian methods make their assumptions explicit”. The first is that it can often be very difficult to understand how a prior behaves; so while we could say “The data were generated from the prior” is an explicit assumption, it may be unclear what exactly that assumption entails. However, a bigger issue is that “The data were generated from the prior” is an assumption that very rarely holds; indeed, in many cases the underlying process is deterministic (if you’re a subjective Bayesian then this isn’t necessarily a problem, but it does certainly mean that the assumption given above doesn’t hold). So given that that assumption doesn’t hold but Bayesian methods still often perform well in practice, I would say that Bayesian methods are making some other sort of “assumption” that is far less explicit (indeed, I would be very interested in understanding what this other, more nebulous assumption might be).

Myth 8: frequentist methods are fragile, Bayesian methods are robust. This is another one that’s straightforwardly false. First, since frequentist methods often rest on weaker assumptions, they are more robust if the assumptions don’t quite hold. Second, there is an entire area of robust statistics, which focuses on being robust to adversarial errors in the problem data.
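
As a tiny illustration of the second point (my own example, not drawn from the robust-statistics references below): the median, a classic frequentist robust estimator, barely moves under contamination that drags the mean arbitrarily far.

```python
import numpy as np

# 1,000 clean samples from N(0, 1) plus 50 adversarial outliers placed at +1000.
rng = np.random.default_rng(0)
clean = rng.standard_normal(1000)
contaminated = np.concatenate([clean, np.full(50, 1000.0)])

print("mean:  ", contaminated.mean())      # dragged far from 0 by the outliers
print("median:", np.median(contaminated))  # still close to 0
```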

Myth 9: frequentist methods are responsible for bad science. I will concede that much bad science is done using frequentist statistics. But this is true only because pretty much all science is done using frequentist statistics. I’ve heard arguments that using Bayesian methods instead of frequentist methods would fix at least some of the problems with science. I don’t think this is particularly likely, as I think many of the problems come from mis-application of statistical tools or from failure to control for multiple hypotheses. If anything, Bayesian methods would exacerbate the former, because they often require more detailed modeling (although in most simple cases the difference doesn’t matter at all). I don’t think being Bayesian guards against multiple hypothesis testing. Yes, in some sense a prior “controls for multiple hypotheses”, but in general the issue is that the “multiple hypotheses” are never written down in the first place, or are written down and then discarded. One could argue that being in the habit of writing down a prior might make practitioners more likely to think about multiple hypotheses, but I’m not sure this is the first-order thing to worry about.

Myth 10: frequentist methods are unprincipled / hacky. One of the most beautiful theoretical paradigms that I can think of is what I could call the “geometric view of statistics”. One place that does a particularly good job of showcasing this is Shai Shalev-Shwartz’s PhD thesis, which was so beautiful that I cried when I read it. I’ll try (probably futilely) to convey a tiny amount of the intuition and beauty of this paradigm in the next few paragraphs, although focusing on minimax estimation, rather than online learning as in Shai’s thesis.

The geometric paradigm tends to emphasize a view of measurements (i.e. empirical expected values over observed data) as “noisy” linear constraints on a model family. We can control the noise by either taking few enough measurements that the total error from the noise is small (classical statistics), or by broadening the linear constraints to convex constraints (robust statistics), or by controlling the Lagrange multipliers on the constraints (regularization). One particularly beautiful result in this vein is the duality between maximum entropy and maximum likelihood. (I can already predict the Jaynesians trying to claim this result for their camp, but (i) Jaynes did not invent maximum entropy; (ii) maximum entropy is not particularly Bayesian (in the sense that frequentists use it as well); and (iii) the view on maximum entropy that I’m about to provide is different from the view given in Jaynes or by physicists in general [edit: EHeller thinks this last claim is questionable, see discussion here].)

To understand the duality mentioned above, suppose that we have a probability distribution {p(x)} and the only information we have about it is the expected value of a certain number of functions, i.e. the information that {\mathbb{E}[\phi(x)] = \phi^*}, where the expectation is taken with respect to {p(x)}. We are interested in constructing a probability distribution {q(x)} such that no matter what particular value {p(x)} takes, {q(x)} will still make good predictions. In other words (taking {\log q(x)} as our measurement of prediction accuracy) we want {\mathbb{E}_{p'}[\log q(x)]} to be large for all distributions {p'} such that {\mathbb{E}_{p'}[\phi(x)] = \phi^*}. Using a technique called Lagrangian duality, we can both find the optimal distribution {q} and compute its worst-case accuracy over all {p'} with {\mathbb{E}_{p'}[\phi(x)] = \phi^*}. The characterization is as follows: consider all probability distributions {q(x)} that are proportional to {\exp(\lambda^{\top}\phi(x))} for some vector {\lambda}, i.e. {q(x) = \exp(\lambda^{\top}\phi(x))/Z(\lambda)} for some {Z(\lambda)}. Of all of these, take the {q(x)} with the largest value of {\lambda^{\top}\phi^* - \log Z(\lambda)}. Then {q(x)} will be the optimal distribution and the accuracy for all distributions {p'} will be exactly {\lambda^{\top}\phi^* - \log Z(\lambda)}. Furthermore, if {\phi^*} is the empirical expectation given some number of samples, then one can show that {\lambda^{\top}\phi^* - \log Z(\lambda)} is proportional to the log likelihood of {q}, which is why I say that maximum entropy and maximum likelihood are dual to each other.
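
Here is a minimal numerical sketch of that characterization (my own illustration, with an assumed ten-point domain, feature map, and measured expectations): gradient ascent on {\lambda^{\top}\phi^* - \log Z(\lambda)}, whose gradient is {\phi^* - \mathbb{E}_q[\phi(x)]}, converges to the maximum-entropy distribution matching the constraints.

```python
import numpy as np

# Maximum-entropy / maximum-likelihood duality on a toy finite domain
# X = {0, ..., 9}, with feature map phi(x) = (x/10, (x/10)^2) and assumed
# observed feature means phi_star. We look for q(x) proportional to
# exp(lambda . phi(x)) maximizing the dual objective lambda . phi_star - log Z(lambda).
xs = np.arange(10)
phi = np.stack([xs / 10.0, (xs / 10.0) ** 2], axis=1)   # shape (10, 2)
phi_star = np.array([0.40, 0.20])                       # assumed measurements

lam = np.zeros(2)
for _ in range(20000):
    logits = phi @ lam
    q = np.exp(logits - logits.max())
    q /= q.sum()                       # q(x) = exp(lambda . phi(x)) / Z(lambda)
    grad = phi_star - q @ phi          # gradient of the (concave) dual objective
    lam += 1.0 * grad                  # plain gradient ascent; step size is an assumption

dual_value = lam @ phi_star - np.log(np.sum(np.exp(phi @ lam)))
print("lambda:  ", np.round(lam, 3))
print("E_q[phi]:", np.round(q @ phi, 3), "target:", phi_star)
print("worst-case log-accuracy (dual value):", round(float(dual_value), 3))
```

The printed dual value is exactly the worst-case log-accuracy guaranteed over all {p'} consistent with the measurements, as described above.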

This is a relatively simple result but it underlies a decent chunk of models used in practice.

Myth 11: frequentist methods have no promising approach to computationally bounded inference. I would personally argue that frequentist methods are more promising than Bayesian methods at handling computational constraints, although computationally bounded inference is a very cutting edge area and I’m sure other experts would disagree. However, one point in favor of the frequentist approach here is that we already have some frameworks, such as the “tightening relaxations” framework discussed here, that provide quite elegant and rigorous ways of handling computationally intractable models.

 

References

(Myth 3) Sparse recovery: Sparse recovery using sparse matrices
(Myth 5) Online learning: Online learning and online convex optimization
(Myth 8) Robust statistics: see this blog post and the two linked papers
(Myth 10) Maximum entropy duality: Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory

Comments (125)

I would love to know which parts of this post Eliezer disagrees with, and why.

Don't have time for a real response. Quickly and ramblingly:

1) The point of Bayesianism isn't that there's a toolbox of known algorithms like max-entropy methods which are supposed to work for everything. The point of Bayesianism is to provide a coherent background epistemology which underlies everything; when a frequentist algorithm works, there's supposed to be a Bayesian explanation of why it works. I have said this before many times but it seems to be a "resistant concept" which simply cannot sink in for many people.

2) I did initially try to wade into the math of the linear problem (and wonder if I'm the only one who did so, unless others spotted the x-y inversion but didn't say anything), trying to figure out how I would solve it even though that wasn't really relevant for reasons of (1), but found that the exact original problem specified may be NP-hard according to Wikipedia, much as my instincts said it should be. And if we're allowed approximate answers then yes, throwing a standard L1-norm algorithm at it is pretty much what I would try, though I might also try some form of expectation-maximization using the standard Bayesian L2 technique and repeatedly truncating the small coefficients and then trying to predict the residual error. I have no idea how long that would take in practice. It doesn't actually matter, because see (1). I could go on about how for any given solution I can compute its Bayesian likelihood assuming Gaussian noise, and so again Bayes functions well as a background epistemology which gives us a particular minimization problem to be computed by whatever means, and if we have no background epistemology then why not just choose a hundred random 1s, etc., but lack the time for more than rapid rambling here. Jacob didn't say what he thought an actual frequentist or Bayesian approach would be, he just said the frequentist approach would be easy and that the Bayesian one was hard.

(3) Having made a brief effort to wade into the math and hit the above bog, I did not attempt to go into Jacob's claim that frequentist statistics can transcend i.i.d. But considering the context in which I originally complained about the assumptions made by frequentist guarantees, I should very much like to see explained concretely how Jacob's favorite algorithm would handle the case of "You have a self-improving AI which turns out to maximize smiles, in all previous cases it produced smiles by making people happy, but once it became smart enough it realized that it ought to preserve your bad generalization and faked its evidence, and now that it has nanotech it's going to tile the universe with tiny smileyfaces." This is the Context Change Problem I originally used to argue against trying for frequentist-style guarantees based on past AI behavior being okay or doing well on other surface indicators. I frankly doubt that Jacob's algorithm is going to handle it. I really really doubt it. Very very roughly, my own notion of an approach here would be a Bayesian-viewpoint AI which was learning a utility function and knew to explicitly query model ambiguity back to the programmers, perhaps using a value-of-info calculation. I should like to hear what a frequentist viewpoint on that would sound like.

(4) Describing the point of likelihood ratios in science would take its own post. Three key ideas are (a) instead of "negative results" we have "likelihood ratios favoring no effect over 5% effect" and so it's now conceptually simpler to get rid of positive-result bias in publication; (b) if we compute likelihood ratios on all the hypotheses which are actually in play then we can add up what many experiments tell us far more easily and get far more sensible answers than with present "survey" methods; and (c) having the actual score be far below expected log score for the best hypothesis tells us when some of our experiments must be giving us bogus data or having been performed under invisibly different conditions, a huge problem in many cases and something far beyond the ability of present "survey" methods to notice or handle.

EDIT: Also everything in http://lesswrong.com/lw/mt/beautiful_probability/

when a frequentist algorithm works, there's supposed to be a Bayesian explanation of why it works. I have said this before many times but it seems to be a "resistant concept" which simply cannot sink in for many people.

Perhaps the reason this is not sinking in for many people is because it is not true.


Bayes assumes you can write down your prior, your likelihood and your posterior. That is what we need to get Bayes theorem to work. If you are working with a statistical model where this is not possible*, you cannot really use the standard Bayesian story, yet there still exist ways of attacking the problem.

(*) Of course, "not possible in principle" is different from "we don't know how to yet." In either case, I am not really sure what the point of an official Bayesian epistemology explanation would be.


This idea that there is a standard Bayesian explanation for All The Things seems very strange to me. Andrew Gelman has a post on his blog about how to define "identifiability" if you are a Bayesian:

(http://andrewgelman.com/2014/02/12/think-identifiability-bayesian-inference/)

This is apparently a tricky (or not useful) concept to define within that framework. Which is a little weird, because it is both a very useful concept, and a very clear one to me.

Gelman is a pretty prominent Bayesian. Either he is confused, or I am confused, or his view on the stuff causal folks like me work on is so alien that it is not illuminating. The issue seems to me to be cultural differences between frameworks.

Do you have a handy example of a frequentist algorithm that works, for which there is no Bayesian explanation?

I wouldn't say "no Bayesian explanation," but perhaps "a Bayesian explanation is unknown to me, nor do I see how this explanation would illuminate anything." But yes, I gave an example elsewhere in this thread. The FCI algorithm for learning graph structure in the non-parametric setting with continuous valued variables, where the correct underlying model has the following independence structure:

A is independent of B and C is independent of D (and nothing else is true).

Since I (and to my knowledge everyone else) do not know how to write the likelihood for this model, I don't know how to set up the standard Bayesian story here.

Eliezer,

The point of Bayesianism is to provide a coherent background epistemology which underlies everything; when a frequentist algorithm works, there's supposed to be a Bayesian explanation of why it works. I have said this before many times but it seems to be a "resistant concept" which simply cannot sink in for many people.

First, I object to the labeling of Bayesian explanations as a "resistant concept". I think it's not only uncharitable but also wrong. I started out with exactly the viewpoint that everything should be explained in terms of Bayes (see one of my earliest and most-viewed blog posts if you don't believe me). I moved away from this viewpoint slowly as the result of accumulated evidence that this is not the most productive lens through which to view the world.

More to the point: why is it that you think that everything should have a Bayesian explanation? One of the most-cited reasons why Bayes should be an epistemic ideal is the various "optimality" / Dutch book theorems, which I've already argued against in this post. Do you accept the rebuttals I gave, or disagree with them?

My guess is that you would still be in favor of Bayes as a normative standard of epistemology even if you rejected Dutch book arguments, and the reason why you like it is because you feel like it has been useful for solving a large number of problems. But frequentist statistics (not to mention pretty much any successful paradigm) has also been useful for solving a large number of problems, some of which Bayesian statistics cannot solve, as I have demonstrated in this post. The mere fact that a tool is extremely useful does not mean that it should be elevated to a universal normative standard.

but found that the exact original problem specified may be NP-hard according to Wikipedia, much as my instincts said it should be

We've already discussed this in one of the other threads, but I'll just repeat here that this isn't correct. With overwhelmingly high probability a Gaussian matrix will satisfy the restricted isometry property, which implies that appropriately L1-regularized least squares will return the exact solution.

I could go on about how for any given solution I can compute its Bayesian likelihood assuming Gaussian noise, and so again Bayes functions well as a background epistemology

The point of this example was to give a problem that, from a modeling perspective, was as convenient for Bayes as possible, but that was computationally intractable to solve using Bayesian techniques. I gave other examples (such as in Myth 5) that demonstrate situations where Bayes breaks down. And I argued indirectly in Myths 1, 4, and 8 that the prior is actually a pretty big deal and has the capacity to cause problems in ways that frequentists have ways of dealing with.

I should very much like to see explained concretely how Jacob's favorite algorithm would handle the case of "You have a self-improving AI which turns out to maximize smiles, in all previous cases it produced smiles by making people happy, but once it became smart enough it realized that it ought to preserve your bad generalization and faked its evidence, and now that it has nanotech it's going to tile the universe with tiny smileyfaces."

I think this is a very bad testing ground for how good a technique is, because it's impossible to say whether something would solve this problem without going through a lot of hand-waving. I think your "notion of how to solve it" is interesting but has a lot of details to fill in, and it's extremely unclear how it would work, especially given that even for concrete problems that people work on now, an issue with Bayesian methods is overconfidence in a particular model. I should also note that, as we've registered earlier, I don't think that what you call the Context Change Problem is actually a problem that an intelligent agent would face: any agent that is intelligent enough to behave at all functionally close to the level of a human would be robust to context changes.

However, even given all these caveats, I'll still try to answer your question on your own terms. Short answer: do online learning with an additional action called "query programmer" that is guaranteed to always have some small negative utility, say -0.001, that is enough to outweigh any non-trivial amount of uncertainty but will eventually encourage the AI to act autonomously. We would need some way of upper-bounding the regret of other possible actions, and of incorporating this utility constraint into the algorithm, but I don't think the amount of fleshing out is any more or less than that required by your proposal.

[WARNING: The rest of this comment is mostly meaningless rambling.]

I want to stress again that the above paragraph is only a (sketch of) an answer to the question as you posed it. But I'd rather sidestep the question completely and say something like: "OK, if we make literally no assumptions, then we're completely screwed, because moving any speck of dust might cause the universe to explode. Being Bayesian doesn't make this issue go away, it just ignores it.

So, what assumptions can we be reasonably okay with making that would help us solve the problem? Maybe I'd be okay assuming that the mechanism that takes in my past actions and returns a utility is a Turing machine of description length less than 10^15. But unfortunately that doesn't help me much, because for every Turing machine M, there's one of not that much longer description length that behaves identically to M up until I'm about to make my current decision, and then penalizes my current decision with some extraordinary large amount of disutility. Note that, again, being Bayesian doesn't deal with this issue, it just assigns it low prior probability.

I think the question of exactly what assumptions one would be willing to make, that would allow one to confidently reason about actions with potentially extremely discontinuous effects, is an important and interesting one, and I think one of the drawbacks of "thinking like a Bayesian" is that it draws attention away from this issue by treating it as mostly solved (via assigning a prior)."

My guess is that you would still be in favor of Bayes as a normative standard of epistemology even if you rejected Dutch book arguments, and the reason why you like it is because you feel like it has been useful for solving a large number of problems.

Um, nope. What it would really take to change my mind about Bayes is seeing a refutation of Dutch Book and Cox's Theorem and Von Neumann-Morgenstern and the complete class theorem, combined with seeing some alternative epistemology (e.g. Dempster-Shafer) not turn out to completely blow up when subjected to the same kind of scrutiny as Bayesianism (the way DS brackets almost immediately go to [0-1] and fuzzy logic turned out to be useless etc.)

Neural nets have been useful for solving a large number of problems. It doesn't make them good epistemology. It doesn't make them a plausible candidate for "Yes, this is how you need to organize your thinking about your AI's thinking and if you don't your AI will explode".

some of which Bayesian statistics cannot solve, as I have demonstrated in this post.

I am afraid that your demonstration was not stated sufficiently precisely for me to criticize. This seems like the sort of thing for which there ought to be a standard reference, if there were such a thing as a well-known problem which Bayesian epistemology could not handle. For example, we have well-known critiques and literature claiming that nonconglomerability is a problem for Bayesianism, and we have a chapter of Jaynes which neatly shows that they all arise from misuse of limits on infinite problems. Is there a corresponding literature for your alleged reductio of Bayesianism which I can consult? Now, I am a great believer in civilizational inadequacy and the fact that the incompetence of academia is increasing, so perhaps if this problem was recently invented there is no more literature about it. I don't want to be a hypocrite about the fact that sometimes something is true and nobody has written it up anyway, heaven knows that's true all the time in my world. But the fact remains that I am accustomed to somewhat more detailed math when it comes to providing an alleged reductio of the standard edifice of decision theory. I know your time is limited, but the real fact is that I really do need more detail to think that I've seen a criticism and be convinced that no response to that criticism exists. Should your flat assertion that Bayesian methods can't handle something and fall flat so badly as to constitute a critique of Bayesian epistemology, be something that I find convincing?

We've already discussed this in one of the other threads, but I'll just repeat here that this isn't correct. With overwhelmingly high probability a Gaussian matrix will satisfy the restricted isometry property, which implies that appropriately L1-regularized least squares will return the exact solution.

Okay. Though I note that you haven't actually said that my intuitions (and/or my reading of Wikipedia) were wrong; many NP-hard problems will be easy to solve for a randomly generated case.

Anyway, suppose a standard L1-penalty algorithm solves a random case of this problem. Why do you think that's a reductio of Bayesian epistemology? Because the randomly generated weights mean that a Bayesian viewpoint says the credibility is going as the L2 norm on the non-zero weights, but we used an L1 algorithm to find which weights were non-zero? I am unable to parse this into the justifications I am accustomed to hearing for rejecting an epistemology. It seems like you're saying that one algorithm is more effective at finding the maximum of a Bayesian probability landscape than another algorithm; in a case where we both agree that the unbounded form of the Bayesian algorithm would work.

What destroys an epistemology's credibility is a case where even in the limit of unbounded computing power and well-calibrated prior knowledge, a set of rules just returns the wrong answer. The inherent subjectivity of p-values as described in http://lesswrong.com/lw/1gc/frequentist_statistics_are_frequently_subjective/ is not something you can make go away with a better-calibrated prior, correct use of limits, or unlimited computing power; it's the result of bad epistemology. This is the kind of smoking gun it would take to make me stop yammering about probability theory and Bayes's rule. Showing me algorithms which don't on the surface seem Bayesian but find good points on a Bayesian fitness landscape isn't going to cut it!

Eliezer, I included a criticism of both complete class and Dutch book right at the very beginning, in Myth 1. If you find them unsatisfactory, can you at least indicate why?

Your criticism of Dutch Book is that it doesn't seem to you useful to add anti-Dutch-book checkers to your toolbox. My support of Dutch Book is that if something inherently produces Dutch Books then it can't be the right epistemological principle because clearly some of its answers must be wrong even in the limit of well-calibrated prior knowledge and unbounded computing power.

The complete class theorem I understand least of the set, and it's probably not very much entwined with my true rejection so it would be logically rude to lead you on here. Again, though, the point that every local optimum is Bayesian tells us something about non-Bayesian rules producing intrinsically wrong answers. If I believed your criticism, I think it would be forceful; I could accept a world in which for every pair of a rational plan with a world, there is an irrational plan which does better in that world, but no plausible way for a cognitive algorithm to output that irrational plan - the plans which are equivalent of "Just buy the winning lottery ticket, and you'll make more money!" I can imagine being shown that the complete class theorem demonstrates only an "unfair" superiority of this sort, and that only frequentist methods can produce actual outputs for realistic situations even in the limit of unbounded computing power. But I do not believe that you have leveled such a criticism. And it doesn't square very much with my current understanding that the decision rules being considered are computable rules from observations to actions. You didn't actually tell me about a frequentist algorithm which is supposed to be realistic and show why the Bayesian rule which beats it is beating it unfairly.

If you want to hit me square in the true rejection I suggest starting with VNM. The fact that our epistemology has to plug into our actions is one reason why I roll my eyes at the likes of Dempster-Shafer or frequentist confidence intervals that don't convert to credibility distributions.

I could accept a world in which for every pair of a rational plan with a world, there is an irrational plan which does better in that world, but no plausible way for a cognitive algorithm to output that irrational plan

We already live in that world.

(The following is not evidence, just an illustrative analogy) Ever seen Groundhog Day? Imagine him skipping the bulk of the movie and going straight to the last day. It is straight wall to wall WTF but it's very optimal.

One of the criticisms I raised is that merely being able to point to all the local optima is not a particularly impressive property of an epistemological theory. Many of those local optima will be horrible! (My criticism of VNM is essentially the same.)

Many frequentist methods, such as minimax, also provide local optima, but they provide local optima which actually have certain nice properties. And minimax provides a complete decision rule, not just a probability distribution, so it plugs directly into actions.

FYI, there are published counterexamples to Cox's theorem. See for example Joseph Halpern's at http://arxiv.org/pdf/1105.5450.pdf.

You need to not include the period in your link, like so.

Short answer: do online learning with an additional action called "query programmer" that is guaranteed to always have some small negative utility, say -0.001, that is enough to outweigh any non-trivial amount of uncertainty but will eventually encourage the AI to act autonomously.

This short answer is too short for me to understand, unfortunately. Do you think there is enough potential merit in this idea to try to understand it better or further develop it? (I've been learning about online learning recently in an effort to understand/evaluate Paul Christiano's recent "AI control" ideas. If you have your own ideas also based on online learning, I'd love to try to understand them while the online learning stuff is fresh in my mind.)

We've already discussed this in one of the other threads, but I'll just repeat here that this isn't correct. With overwhelmingly high probability a Gaussian matrix will satisfy the restricted isometry property, which implies that appropriately L1-regularized least squares will return the exact solution.

I do wonder if it would have been better to include something along the lines of "with probability 1" to the claim that non-Bayesian methods can solve it easily. Compressed sensing isn't magic, even though it's very close.

any agent that is intelligent enough to behave at all functionally close to the level of a human would be robust to context changes.

Humans get tripped up by context changes very frequently. It's not obvious to me where you think this robustness would come from.

Compressed sensing isn't even magic, if you're halfway versed in signal processing. I understood compressed sensing within 30 seconds of hearing a general overview of it, and there are many related analogs in many fields.

Compressed sensing isn't even magic

The convex optimization guys I know are all rather impressed by compressed sensing- but that may be because they specialize in doing L1 and L2 problems, and so compressed sensing makes the things they're good at even more important.

(c) having the actual score be far below expected log score for the best hypothesis tells us when some of our experiments must be giving us bogus data or having been performed under invisibly different conditions, a huge problem in many cases and something far beyond the ability of present "survey" methods to notice or handle.

The standard meta-analysis toolkit does include methods of looking at the heterogeneity in effect sizes. (This is fresh in my mind because it actually came up at yesterday's CFAR colloquium regarding some academic research that we were discussing.)

I do not know how the frequentist approach compares to the Bayesian approach in this case.

I don't have a technical basis for thinking this, but I'm beginning to suspect that as time goes on, more and more frequentist methods will be proven to be equivalent or good approximations to the ideal Bayesian approach. If that happens, (Edit: Hypothetical) Bayesians who refused to use those methods on ideological grounds would look kind of silly in hindsight, as if relativistic physics came first and a bunch of engineers refused to use Newtonian equations for decades until someone proved that they approximate the truth well at low speeds.

Who are these mysterious straw Bayesians who refuse to use algorithms that work well and could easily turn out to have a good explanation later? Bayes is epistemological background not a toolbox of algorithms.

After a careful rereading of http://lesswrong.com/lw/mt/beautiful_probability/, the 747 analogy suggests that, once you understand the difference between an epistemological background and a toolbox, it might be a good idea to use the toolbox. But I didn't really read it that way the first time, so I imagine others might have made a similar mistake. I'll edit my post to make the straw Bayesians hypothetical, to make it clear that I'm making a point to other LW readers rather than criticizing a class of practicing statisticians.

I'd actually forgotten I'd written that. Thank you for reminding me!

Bayes is epistemological background not a toolbox of algorithms.

I disagree: I think you are lumping two things together that don't necessarily belong together. There is Bayesian epistemology, which is philosophy, describing in principle how we should reason, and there is Bayesian statistics, something that certain career statisticians use in their day to day work. I'd say that frequentism does fairly poorly as an epistemology, but it seems like it can be pretty useful in statistics if used "right". It's nice to have nice principles underlying your statistics, but sometimes ad hoc methods and experience and intuition just work.

Yes, but the sounder the epistemology is the harder is to [ETA: accidentally] misuse the tools. Cue all the people misunderstanding what p-values mean...

The fundamental confusion going on here comes from peculiar terminology.

jsteinhardt writes:

Also, I should note that for the sake of simplicity I’ve labeled everything that is non-Bayesian as a “frequentist” method

So every algorithm that isn't obviously Bayesian is labeled Frequentist, while in fact what we have are two epistemological frameworks, and a zillion and one algorithms that we throw at data that don't neatly fit into either framework.

Great post! It would be great if you had cites for various folks claiming myth k. Some of these sound unbelievable!


"Frequentist methods need to assume their model is correct."

This one is hilarious. Does anyone say this? Multiply robust methods (Robins/Rotnitzky/et al) aren't exactly Bayesian, and their entire point is that you can get a giant piece of the likelihood arbitrarily wrong and your estimator is still consistent.

I am confused about your use of the word optimal. In particular in the sentences

Bayesian methods are optimal (except for computational considerations).

and

Bayesian methods are locally optimal, not globally optimal.

are you talking about the same sort of 'optimal'? From wikipedia (here and here) I found the rigorous definition of the word 'optimal' in the second sentence, which can be written in terms of your utility function (a decision rule is optimal if there is no other decision rule which will always give you at least as much utility and in at least one world will give you more utility).

Also I agree with many of your myths, namely 3, 8, 9 and 11. I was rather surprised to see that these things even needed to be mentioned: I don't see why making good trade-offs between truth and computation time should be 'simple' (3); as you mentioned, the frequentist tests are chosen precisely with robustness in mind (8); bad science is more than getting your statistics wrong (9) (small sidenote: while it might be true that scientists can get confused by frequentist statistics, which might corrupt their science, I don't think the problem would be smaller when using a different form of statistics, and I therefore think it would not be correct to attribute this bad science to frequentism); and we know from practice that Bayesianism (not frequentism) is the method which has the most problems with computational bounds (11).

However, I think it is important to make a distinction between the validity of Bayesianism and the application of Bayesianism. I recall reading on lesswrong (although I cannot find the post at this moment) that the relation between Bayesianism and frequentism should be seen like the relation between Quantum Mechanics and classical physics (although QM has lots of experimental data to support it, so it is rightfully more accepted than Bayesianism). Like QM, Bayesianism is governed by simple mathematical rules (Schrödinger's equation and Bayes' theorem), which will give the right answer when supplied with the correct initial conditions. However, to fly a plane we do not invoke QM, and similarly, in most practical instances of estimating a parameter, we do not invoke Bayes. Instead we use approximations (classical physics/frequentism), which will not give the exact answer but will give a good approximation thereof (as you mention: a method close to the global optimum, although I am still unclear what we are optimising for there). The key point is that these approximations are correct only insofar as they approximate the answer that would be given by the correct theory. If classical physics and QM disagree, QM is correct. Similarly, if we have a parameter estimate obtained by a Bayesian algorithm and one obtained by a frequentist algorithm, the Bayesian one is going to give the correct subjective probability. But the correct algorithms are (nearly?) impossible to implement, so we stick with the approximations. This is why physicists still use and teach classical physics, and why I personally endorse many frequentist tools. The difference between validity and application seems to be lost in myths 4-7 and 10:

  • 4: Strictly speaking the only way to truly share your arguments for having a certain degree of belief in a hypothesis would be to share all sensory data that is dependent on the hypothesis (after all, this is how evidence works). This is clearly not feasible, but it would be the correct thing to do if we only care about being correct. You explain in this myth that this does not lead to a simple and quick algorithm. But this is not an argument against validity, it is an argument against a possible application.
  • 5: Again this whole myth deals with application. The myth you debunk states that the approximations made when turning degrees of belief into an actual strategy must be bad, and you debunk this by giving an algorithm that gets very good results. But this is not an argument that distinguishes between Bayesianism and frequentism, it merely states that there are easy-to-compute (in a relative sense) algorithms that get close to the correct answer, which we know how to find in theory but not in practice. (In case you are wondering: the approximation takes place in the step where you simplify your utility function to the Regret function, and your prior is whatever decision rule you use for the first horse.)
  • 6: This myth hinges on the word 'simple'. Frequentist methods can deal with many complicated problems, and a lot of high quality work has been done to increase the scope of the tools of frequentism. Saying that only simple models can be dealt with would be an insult. However, as mentioned above, these methods are all approximations, and each method is valid only if the approximations made are satisfied. So while frequentist methods can deal with many complicated models, it is important to realise that the scope of each method is limited.
  • 7+10: Myth 10 seems to be a case of confusion by the people using the tools. Frequentist methods (derived from approximations) come with boundaries, such as limitations on the type of model that can be distilled from data, or limitations on the meaning of the outcome of the algorithm (it might answer a different question than the one you hoped to answer). If you break one of these limitations it is not surprising that the results are wacky. This is not a problem of frequentism, provided the tools are explained properly; if the tools are not explained properly, then problems arise. Your explanation (we have a class M and a solution E, and we look for a simple approximation which will give E + epsilon) is very clear. Problems arise when the class M is not specified, or the existence of E is unclear. I would like to classify this as an error of the practitioners of frequentism, rather than an error of the method.

Lastly I would like to make a small note that the example on myth 10 is very similar to something called the Boltzmann distribution from statistical physics, discovered in the 19th century. Here the function phi is the energy divided by the temperature.
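(For reference: the Boltzmann distribution assigns probability p(x) = exp(-E(x)/(kT)) / Z, with Z = sum_x exp(-E(x)/(kT)); so if the myth-10 distribution has the exponential form p(x) proportional to exp(-phi(x)), then identifying phi(x) = E(x)/(kT) recovers exactly the Boltzmann form.)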

Edit: during the writing of this post it seems that other people have already made this remark on myth 10. I agree that physicists would probably not interpret this as a game played between nature and the predictor.

Thanks for your comments. One thing you say a few times throughout your comment is "frequentist methods are an approximation to Bayes". I wouldn't agree with this. I think Bayesian and frequentist methods are often trying to do different things (although in many practical instances their usage overlaps). In what sense do you believe that Bayes is the "correct" answer?

At the beginning of your comment, I would have used "admissible" rather than "optimal" to describe the definition you gave:

a decision rule is optimal if there is no other decision rule which will always give you at least as much utility and in at least one world will give you more utility

I don't see how the online learning algorithm in myth 5 can be interpreted as an approximation to Bayes. The guarantee I'm getting just seems way better and more awesome than what Bayes provides. I also don't think it's right to say that "regret is an approximation to utility". Regret is an alternative formulation to utility that happens to lead to a set of very fruitful results, one of which I explained under myth 5.
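For readers who want something concrete to poke at, here is a minimal sketch in Python of the multiplicative-weights (Hedge) style of learner that gives this kind of regret guarantee; the horse-race sizes, losses, and learning rate below are made up for illustration, and the algorithm in the post may differ in details.

```python
import numpy as np

def hedge(loss_matrix, eta=0.1):
    """Multiplicative-weights (Hedge) over T rounds and N 'experts' (other bettors).

    loss_matrix[t, i] is the loss of expert i at round t, assumed in [0, 1].
    Returns the learner's cumulative expected loss and that of the best expert.
    """
    T, N = loss_matrix.shape
    w = np.ones(N)                           # start with uniform weights
    learner_loss = 0.0
    for t in range(T):
        p = w / w.sum()                      # bet proportionally to current weights
        learner_loss += p @ loss_matrix[t]   # expected loss this round
        w *= np.exp(-eta * loss_matrix[t])   # downweight experts that did badly
    best_expert_loss = loss_matrix.sum(axis=0).min()
    return learner_loss, best_expert_loss

# Hypothetical example: 1000 races, 50 other bettors.
rng = np.random.default_rng(0)
losses = rng.uniform(0, 1, size=(1000, 50))
losses[:, 7] *= 0.3                          # bettor 7 is secretly much better
ours, best = hedge(losses, eta=np.sqrt(np.log(50) / 1000))
print(ours, best)   # the guarantee: regret (ours - best) grows only like sqrt(T log N)
```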

While writing this answer I realised I forgot an important class of exceptions, namely the typical school example of hypothesis testing. My explanation now consists of multiple parts.

To answer the first question: the Bayesian method gives the "correct" answer in the sense that it optimises the expectation of your utility function. If you choose a utility function like log(p), maximising expected utility means reporting your subjective probabilities. I also think Bayesianism is "correct" in the philosophical sense (which is a property of the theory), but I believe there are many posts on lesswrong that can explain this better than I can.

  • The approximation made can often be rewritten in terms of a particular choice of utility function (or risk function, which is more conventional according to Wikipedia). As you mentioned, choosing the Regret function for cost and a non-silly prior (for example whichever one you are using) will yield a Bayesian algorithm for your problem. Unfortunately I haven't looked at the specific algorithm in detail, but if admissible solutions are Bayesian algorithms, why would a Bayesian approach using your data not perform at least as well as the frequentist algorithm (and therefore give at least as good asymptotic behaviour)? Also I would like to leave open the possibility that the algorithm you mention actually coincides with a Bayesian algorithm. Sometimes a different approach (frequentism/Bayesianism) can lead to the same conclusion (method).

  • Suppose I find myself in a situation in which I have several hypotheses and a set of data. The thing I'm interested in is the probability of each hypothesis given the data (in other words, finding out which hypothesis is correct). In frequentism there is no such thing as a 'probability of the hypothesis', after all a hypothesis is either true or false and we don't know which. So as a substitution frequentists consider the other conditional probability, the probability of seeing this data or worse provided the hypothesis is true, where worse must be defined beforehand. I'd say this is a wrong approach, a very very wrong approach. My opinion is that frequentists have adopted an incorrect worldview which leads them to dismiss the right questions and answer the wrong ones in this case. Here I expect pure conflict rather than some Bayesian approach which will coincide with frequentist methods.

I hope this explains how Bayesian and frequentist methods overlap and seem to disagree sometimes, and how many instances of frequentist algorithms should be compared to Bayesian algorithms with a properly chosen utility function.
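To make the contrast between the two conditional probabilities concrete, here is a minimal sketch in Python with a toy coin-flip example; the data (60 heads in 100 flips) and the uniform prior are hypothetical choices.

```python
from scipy import stats

n, k = 100, 60                      # hypothetical data: 60 heads in 100 flips
# Frequentist: probability of data "this extreme or worse" given H0: bias = 0.5
p_value = stats.binomtest(k, n, 0.5, alternative='greater').pvalue

# Bayesian: posterior probability that the bias exceeds 0.5,
# assuming a uniform Beta(1, 1) prior over the bias
posterior_p_gt_half = 1 - stats.beta.cdf(0.5, 1 + k, 1 + (n - k))

print(p_value, posterior_p_gt_half)   # answers to two different questions
```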

Say I am interested in distinguishing between two hypotheses for p(a,b,c,d) (otherwise unrestricted):

hypothesis 1: "A is independent of B, C is independent of D, and nothing else is true"

hypothesis 2: "no independences hold"

Frequentists can run their non-parametric marginal independence tests. What is the (a?) Bayesian procedure here? As far as I can tell, for unrestricted densities p(a,b,c,d) no one knows how to write down the likelihood for H1. You can do a standard Bayesian setup here in some cases, e.g. if p(a,b,c,d) is multivariate normal, in which case H1 corresponds to a (simple) Gaussian ancestral graph model. Maybe one can do some non-parametric Bayes thing (???). It's not so simple to set up the right model sometimes, which is what Bayesian methods generally need.

You should check out chapter 20 of Jaynes' Probability Theory, which talks about Bayesian model comparison.

We wish to calculate P[H1 | data] / P[H2 | data] = P[data | H1] / P[data | H2] * P[H1] / P[H2].

For Bayesians, this problem does not involve "unrestricted densities" at all. We are given some data and presumably we know the space from which it was drawn (e.g. binary, categorical, reals...). That alone specifies a unique model distribution. For discrete data, symmetry arguments mandate a Dirichlet model prior with the categories given by all possible outcomes of {A,B,C,D}. For H2, the Dirichlet parameters are updated in the usual fashion and P[data | H2] calculated accordingly.

For H1, our Dirichlet prior is further restricted according to the independencies. The resulting distribution is not elegant (as far as I can tell), but it does exist and can be updated. For example, if the variables are all binary, then the Dirichlet for H2 has 16 categories. We'll call the 16 frequencies X0000, X0001, X0010, ... with parameters a0000, a0001, ..., where the XABCD are the probabilities which the model given by X assigns to each outcome. Already, the Dirichlet for H2 is constrained to {X | sum(X) = 1, X > 0} within R^16. The Dirichlet for H1 is exactly the same function, but further constrained to the space {X | sum(X) = 1, X > 0, X00.. / X10.. = X01.. / X11.., X..00 / X..10 = X..01 / X..11} within R^16, where a dot means summing over that index. This is probably painful to work with (analytically at the very least), but is fine in principle.
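For the unconstrained half of this calculation, here is a minimal sketch in Python of the closed-form Dirichlet-multinomial evidence P[data | H2] over the 16 cells, with hypothetical counts and a uniform Dirichlet(1, ..., 1) prior; the constrained H1 integral is exactly the painful part and is not attempted here.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_likelihood(counts, alpha):
    """Dirichlet-multinomial log evidence for a saturated categorical model:
    log B(alpha + counts) - log B(alpha), with B the multivariate beta.
    (The multinomial coefficient is omitted; it cancels in a Bayes factor
    against H1 on the same data.)"""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    return (gammaln(alpha.sum()) - gammaln((alpha + counts).sum())
            + gammaln(alpha + counts).sum() - gammaln(alpha).sum())

# Hypothetical counts over the 16 joint outcomes of binary (A, B, C, D),
# with a uniform Dirichlet(1, ..., 1) prior for H2.
counts = np.array([12, 7, 5, 9, 3, 8, 11, 6, 4, 10, 2, 7, 9, 5, 6, 8])
print(log_marginal_likelihood(counts, np.ones(16)))
```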

So we have P[data | H1] and P[data | H2]. That just leaves the prior probabilities for each model. At first glance, it might seem that H1 has zero prior, since it corresponds to a measure-zero subset of H2. But really, we must have SOME prior information lending H1 a nonzero prior probability or we wouldn't bother comparing the two in the first place. Beyond that, we'd have to come up with reasonable probabilities based on whatever prior information we have. Given no other information besides the fact that we're comparing the two, it would be 50/50.

Of course this is all completely unscalable. Fortunately, we can throw away information to save computation. More specifically, we can discretize and bin things much like we would for simple marginal independence tests. While it won't yield the ideal Bayesian result, it is still the ideal result given only the binned data.

I am a bit curious about the non-parametric tests used for H1. I am familiar with tests for whether A and B are independent, and of course they can be applied between C and D, but how does one test for independence between both pairs simultaneously without assuming that the events (A independent of B) and (C independent of D) are independent? It is precisely this difficulty which makes the Bayesian likelihood calculation of H1 such a mess, and I am curious how frequentist methods approach it.

My apologies for the truly awful typesetting, but this is not the evening on which I learn to integrate tex in lesswrong posts.

Thanks for this post.

The resulting distribution is not elegant (as far as I can tell).

In the binary case, the saturated model can be parameterized by p(S = 0) for S any non-empty subset of { a,b,c,d }. The submodel corresponding to H1 is just one where p({a,b} = 0) = p({a}=0)p({b}=0), and p({c,d} = 0) = p({c}=0)p({d}=0).

For Bayesians, this problem does not involve "unrestricted densities" at all.

I am sorry, Bayesians do not get to decide what my problem is. My problem involves unrestricted densities by definition. I don't think you get to keep your "fully general formalism" chops if you suddenly start redefining my problem for me.

how does one test for independence between both pairs simultaneously without assuming that the events (A independent of B) and (C independent of D) are independent?

This is a good question. I don't know a good answer to this that does not involve dealing with the likelihood in some way.

Sorry, I didn't mean to be dismissive of the general densities requirement. I mean that data always comes with a space, and that restricts the density. We could consider our densities completely general to begin with, but as soon as you give me data to test, I'm going to look at it and say "Ok, this is binary?" or "Ok, these are positive reals?" or something. The space gives the prior model. Without that information, there is no Bayesian answer.

I guess you could say that this isn't fully general because we don't have a unique prior for every possible space, which is a very valid point. For the spaces people actually deal with we have priors, and Jaynes would probably argue that any space of practical importance can be constructed as the limit of some discrete space. I'd say it's not completely general, because we don't have good ways of deriving the priors when symmetry and maximum entropy are insufficient. The Bayesian formalism will also fail in cases where the priors are non-normalizable, which is basically the formalism saying "Not enough information."

On the other hand, I would be very surprised to see any other method which works in cases where the Bayesian formalism does not yield an answer. I would expect such methods to rely on additional information which would yield a proper prior.

Regarding that ugly distribution, that parameterization is basically where the constraints came from. Remember that the Dirichlets are distributions on the p's themselves, so it's a hierarchical model. So yes, it's not hard to write down the subspace corresponding to that submodel, but actually doing an update on the meta-level distribution over that subspace is painful.

I mean that data always comes with a space, and that restricts the density.

Sorry I am confused. Say A,B,C,D are in [0,1] segment of the real line. This doesn't really restrict anything.

For the spaces people actually deal with we have priors.

I deal with this space. I even have a paper in preparation that deals with this space! So do lots of people that worry about learning graphs from data.

On the other hand, I would be very surprised to see any other method which works in cases where the Bayesian formalism does not yield an answer.

People use variations of the FCI algorithm, which from a Bayesian point of view is a bit of a hack. The asymptopia version of FCI assumes a conditional independence oracle, and then tells you what the model is based on what the oracle says. In practice, rather than using an oracle, people do a bunch of hypothesis tests for independence.
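As a purely illustrative stand-in for such an independence oracle, here is a minimal sketch in Python of a chi-square test of marginal independence on discretized data; the data, the quantile binning, and the 0.05 threshold are all hypothetical choices rather than anything FCI prescribes.

```python
import numpy as np
from scipy.stats import chi2_contingency

def looks_independent(x, y, bins=4, alpha=0.05):
    """Crude marginal-independence 'oracle': discretize into quantile bins,
    cross-tabulate, and run a chi-square test.
    Returns True if we fail to reject independence at level alpha."""
    xb = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yb = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    table = np.zeros((bins, bins))
    np.add.at(table, (xb, yb), 1)
    _, p, _, _ = chi2_contingency(table)
    return p > alpha

rng = np.random.default_rng(0)
a = rng.uniform(size=2000)
b = rng.uniform(size=2000)                     # independent of a
c = rng.uniform(size=2000)
d = (c + 0.1 * rng.normal(size=2000)) % 1.0    # strongly dependent on c
print(looks_independent(a, b), looks_independent(c, d))   # expect True, False
```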


Regarding that ugly distribution

You are being so mean to that poor distribution. You know, H1 forms a curved exponential family if A,B,C,D are discrete. That's sort of the opposite of ugly. I think it's beautiful! H1 is an instance of Thomas Richardson's ancestral graph models, with the graph:

A <-> B <-> C <-> D <-> A

Oh, saying A,B,C,D are in [0,1] restricts quite a bit. It eliminates distributions with support over all the reals, distributions over R^n, distributions over words starting with the letter k, distributions over Turing machines, distributions over elm trees more than 4 years old in New Hampshire, distributions over bizarre mathematical objects that I can't even think of... That's a LOT of prior information. It's a continuous space, so we can't apply a maximum entropy argument directly to find our prior. Typically we use the beta prior for [0,1] due to a symmetry argument, but that admittedly is not appropriate in all cases. On the other hand, unless you can find dependencies after running the data through the continuous equivalent of a pseudo-random number generator, you are definitely utilizing SOME additional prior information (e.g. via smoothness assumptions). When the Bayesian formalism does not yield an answer, it's usually because we don't have enough prior info to rule out stuff like that.

I think we're still talking past each other about the distributions. The Bayesian approach to this problem uses a hierarchical distribution with two levels: one specifying the distribution p[A,B,C,D | X] in terms of some parameter vector X, and the other specifying the distribution p[X]. Perhaps the notation p[A,B,C,D ; X] is more familiar? Anyway, the hypothesis H1 corresponds to a subset of possible values of X. The beautiful distribution you talk about is p[A,B,C,D | X], which can indeed be written quite elegantly as an exponential family distribution with features for each clique in the graph. Under that parameterization, X would be the lambda vector specifying the exponential model. Unfortunately, p[X] is the ugly one, and that elegant parameterization for p[A,B,C,D | X] will probably make p[X] even uglier.

It is much prettier for DAGs. In that case, we'd have one beta distribution for every possible set of inputs to each variable. X would then be the set of parameters for all those beta distributions. We'd get elegant generative models for numerical integration and life would be sunny and warm. So the simple use case for FCI is amenable to Bayesian methods. Latent variables are still a pain, though. They're fine in theory, just integrate over them when calculating the posterior, but it gets ugly fast.
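Here is a minimal sketch in Python of that "one beta distribution for every possible set of inputs" bookkeeping, for hypothetical binary data generated from the DAG A -> C <- B with uniform Beta(1, 1) priors; real scoring functions such as BDeu refine the hyperparameter choice.

```python
import numpy as np
from scipy.special import gammaln

def log_family_score(child, parents):
    """Log evidence for one binary node given its binary parents:
    an independent Beta(1, 1)-binomial marginal likelihood for each
    parent configuration (one beta per set of inputs)."""
    n = len(child)
    if parents.shape[1] == 0:
        configs = np.zeros(n, dtype=int)
    else:
        configs = parents @ (2 ** np.arange(parents.shape[1]))
    score = 0.0
    for j in np.unique(configs):
        ones = int(child[configs == j].sum())
        zeros = int((configs == j).sum()) - ones
        # log Beta(1 + ones, 1 + zeros) - log Beta(1, 1), and log Beta(1, 1) = 0
        score += gammaln(1 + ones) + gammaln(1 + zeros) - gammaln(2 + ones + zeros)
    return score

# Hypothetical data from the DAG A -> C <- B, all variables binary.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, 5000)
B = rng.integers(0, 2, 5000)
C = (A ^ B) ^ (rng.random(5000) < 0.1).astype(int)   # noisy XOR of A and B
dag_score = (log_family_score(A, np.empty((5000, 0), dtype=int))
             + log_family_score(B, np.empty((5000, 0), dtype=int))
             + log_family_score(C, np.column_stack([A, B])))
print(dag_score)
```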

Oh, saying A,B,C,D are in [0,1] restricts quite a bit. It eliminates distributions with support over all the reals

???

There are easy to compute bijections from R to [0,1], etc.

The Bayesian approach to this problem uses an hierarchical distribution with two levels: one specifying the distribution p[A,B,C,D | X] in terms of some parameter vector X, and the other specifying the distribution p[X]

Yes, parametric Bayes does this. I am giving you a problem where you can't write down p(A,B,C,D | X) explicitly and then asking you to solve something frequentists are quite happy solving. Yes I am aware I can do a prior for this in the discrete case. I am sure a paper will come of it eventually.

Latent variables are still a pain, though.

The whole point of things like the beautiful distribution is you don't have to deal with latent variables. By the way the reason to think about H1 is that it represents all independences over A,B,C,D in this latent variable DAG:

A <- u1 -> B <- u2 -> C <- u3 -> D <- u4 -> A

where we marginalize out the ui variables.


which can indeed be written quite elegantly as an exponential family distribution with features for each clique in the graph

I think you might be confusing undirected and bidirected graph models. The former form linear exponential families and can be parameterized via cliques, the latter form curved exponential families, and can be parameterized via connected sets.

There are easy to compute bijections from R to [0,1], etc.

This is not true, there are bijections between R and (0,1), but not the closed interval.

Anyway there are more striking examples, for example if you know that A, B, C, D are in a discrete finite set, it restricts your choices quite a lot.

Did you mean to say continuous bijections? Obviously adding two points wouldn't change the cardinality of an infinite set, but "easy to compute" might change.

You're right, I meant continuous bijections, as the context was a transformation of a probability distribution.

This is not true, there are bijections between R and (0,1), but not the closed interval.

No.

This is not true, there are bijections between R and (0,1), but not the closed interval.

You are right, apologies.

In frequentism there is no such thing as a 'probability of the hypothesis', after all a hypothesis is either true or false and we don't know which. So as a substitution frequentists consider the other conditional probability, the probability of seeing this data or worse provided the hypothesis is true, where worse must be defined beforehand. I'd say this is a wrong approach, a very very wrong approach.

That's not a substitution, and it's the probability of seeing the data provided the hypothesis is false, not true.

It gives the upper bound on the risk that you're going to believe in a wrong thing if you follow the strategy of "do experiments, believe the hypothesis if confirmed".

Mostly we want to update all probabilities until they're very close to 0 or to 1, because the uncertainty leads to loss of expected utility in future decision making.

In frequentism there is no such thing as a 'probability of the hypothesis'

Yeah, and in Bayesianism, any number between 0 and 1 will do - there's still no such thing as a specific "probability of the hypothesis", merely a change to an arbitrary number.

edit: it's sort of like arguing that worst-case structural analysis of a building or a bridge is a "very very wrong approach", and contrasting it with some approach where you make up priors about the quality of concrete, and end up shaving a very very small percent off the construction cost, while building a weaker bridge which bites you in the ass eventually anyway when something unexpected happens to the bridge.

However, I think it is important to make a distinction between the validity of Bayesianism and the application of Bayesianism. I recall reading on lesswrong (although I cannot find the post at this moment) that the relation between Bayesianism and frequentism should be seen like the relation between Quantum Mechanics and classical physics

Quantum Mechanics isn't consistent with General Relativity, our best explanation of gravity. Despite decades of trying, neither can be formulated as an approximation of the other. Even if one day physicists finally figure out a "Theory of Everything", it would still be a model. It would be epistemically incorrect to claim it was "exact".

Curiously, there is one interpretation of QM known as Quantum Bayesianism, which holds that wavefunctions are subjective and are the fundamental concepts for reasoning about the world, with subjective probability distributions arising as approximations of wavefunctions under decoherence. That is, Bayesianism itself might be an approximation of a "truer" epistemic theory!

My humble opinion is that there is no ultimately "true" epistemic theory. They are all just models of what humans do to gain knowledge of the world. Some models can work better than others, often within certain bounds, but none of them is perfect.

I am very interested in Quantum Bayesianism (in particular Leifer's work) because one of the things we have to do to be "quantum Bayesians" is figure out a physically neutral description of quantum mechanics, that is, a description of quantum mechanics that doesn't use physical jargon like 'time.' In particular, I believe physicists describe spacelike- and timelike-separated entanglement differently.

That is, a Bell-inequality-violating system (that is, one where B and C are spacelike separated) has this graph

A -> B <-> C <- D

(where famously, due to Bell inequality violation, there is no hidden variable corresponding to the bidirected arc connecting B and C).

But the same system can arise in a temporally sequential model which looks like this:

A -> B -> D -> C, with B <-> C

where an appropriate manipulation of the density matrix corresponding to this system ought to give us the Bell system above. In classical probability we can do this. In other words, in classical probability the notion of "probabilistic dependence" is abstracted away from notions like time and space.


Also we have to figure out what "conditioning" even means. Can't be Bayesian if we don't condition, now can we!

where an appropriate manipulation of the density matrix corresponding to this system ought to give us the Bell system above. In classical probability we can do this. In other words, in classical probability the notion of "probabilistic dependence" is abstracted away from notions like time and space.

Yes, but the notion of Bayesian inference, where you start with a prior and build a sequence of posteriors, updating as evidence accumulates, has an intrinsic notion of time. I wonder if that's enough for Quantum Bayesianism (I haven't read the original works, so I don't really know much about it).

The temporal order for sequential computation of posteriors is just our interpretation, it is not a part of the formalism. If we get pieces of evidence e1, e2, ..., ek in temporal order, we could do Bayesian updating in the temporal order, or the reverse of the temporal order, and the formalism still works (that is our overall posterior will be the same, because all the updates commute). And that's because Bayes theorem says nothing about time anywhere.
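A tiny illustration of that commutativity, with a made-up three-hypothesis example in Python: updating on the likelihoods in temporal order or in reverse order gives the same posterior.

```python
import numpy as np

def update(prior, likelihood):
    """One Bayes update over a discrete hypothesis space."""
    post = prior * likelihood
    return post / post.sum()

prior = np.array([0.5, 0.3, 0.2])                 # three hypothetical hypotheses
evidence = [np.array([0.9, 0.5, 0.1]),            # P(e1 | H) for each hypothesis
            np.array([0.2, 0.6, 0.7]),            # P(e2 | H)
            np.array([0.4, 0.4, 0.9])]            # P(e3 | H)

p = prior
for lik in evidence:                              # temporal order e1, e2, e3
    p = update(p, lik)

q = prior
for lik in reversed(evidence):                    # reversed order e3, e2, e1
    q = update(q, lik)

print(np.allclose(p, q))                          # True: the updates commute
```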

My humble opinion is that there is no ultimately "true" epistemic theory. They are all just models of what humans do to gain knowledge of the world. Some models can work better than others, often within certain bounds, but none of them is perfect.

Exactly!

I've been thinking about what program, exactly, is being defended here, and I think a good name for it might be "prior-less learning". To me, all procedures under the prior-less umbrella have a "minimax optimality" feel to them. Some approaches search for explicitly minimax-optimal procedures; but even more broadly, all such approaches aim to secure guarantees (possibly probabilistic) that the worst-case performance of a given procedure is as limited as possible within some contemplated set of possible states of the world. I have a couple of things to say about such ideas.

First, for the non-probabilistically guaranteed methods: these are relatively few and far between, and for any such procedure it must be ensured that the loss that is being guaranteed is relevant to the problem at hand. That said, there is only one possible objection to them, and it is the same as one of my objections to prior-less probabilistically guaranteed methods. That objection applies generically to the minimaxity of the prior-less learning program: when strong prior information exists but is difficult to incorporate into the method, the results of the method can "leave money on the table", as it were. Sometimes this can be caught and fixed, generally in a post hoc and ad hoc way; sometimes not.

For probabilistically-guaranteed methods, there is an epistemic gap -- in principle -- in going from the properties of such procedures in classes of repeating situations (i.e., pre-data claims about the procedure) to well-warranted claims in the cases at hand (i.e., post-data claims about the world). But it's obvious that this is merely an in-principle objection -- after all, many such techniques can be and have been successfully applied to learn true things about the world. The important question is then: does the heretofore implicit principle justifying the bridging of this gap differ significantly from the principle justifying Bayesian learning?

Thanks a lot for the thoughtful comment. I've included some of my own thoughts below / also some clarifications.

First, for the non-probabilistically guaranteed methods: these are relatively few and far between

Do you think that online learning methods count as an example of this?

when strong prior information exists but is difficult to incorporate into the method, the results of the method can "leave money on the table", as it were

I think this is a valid objection, but I'll make two partial counter-arguments. The first is that, arguably, there may be some information that is not easy to incorporate as a prior but is easy to incorporate under some sort of minimax formalism. So Bayes may be forced to leave money on the table in the same way.

A more concrete response is that, often, an appropriate regularizer can incorporate similar information to what a prior would incorporate. I think the regularizer that I exhibited in Myth 6 is one example of this.

For probabilistically-guaranteed methods...

I think it's important to distinguish between two (or maybe three) different types of probabilistic guarantees; I'm not sure whether you would consider all of the below "probabilistic" or whether some of them count as non-probabilistic, so I'll elaborate on each type.

The first, which I presume is what you are talking about, is when the probability is due to some assumed distribution over nature. In this case, if I'm willing to make such an assumption, then I'd rather just go the full-on Bayesian route, unless there's some compelling reason like computational tractability to eschew it. And indeed, there exist cases where, given distributional assumptions, we can infer the parameters efficiently using a frequentist estimation technique, while the Bayesian analog runs into NP-hardness obstacles, at least in some regimes. But there are other instances where the Bayesian method is far cheaper computationally than the go-to frequentist technique for the same problem (e.g. generative vs. discriminative models for syntactic parsing), so I only mean to bring this up as an example.

The second type of guarantee is in terms of randomness generated by the algorithm, without making any assumptions about nature (other than that we have access to a random number generator that is sufficiently independent from what we are trying to predict). I'm pretty happy with this sort of guarantee, since it requires fairly weak epistemic commitments.

The third type of guarantee is somewhat in the middle: it is given by a partial constraint on the distribution. As an example, maybe I'm willing to assume knowledge of certain moments of the distribution. For sufficiently few moments, I can estimate them all accurately from empirical data, and I can even bound the error with high probability, making no assumption other than independence of my samples. In this case, as long as I'm okay with making the independence assumption, then I consider this guarantee to be pretty good as well (as long as I can bound the error introduced into the method by the inexact estimation of the moments, which there are good techniques for doing). I think the epistemic commitments for this type of method are, modulo making an independence assumption, not really any stronger than those for the second type of method, so I'm also fairly okay with this case.
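As one concrete instance of the third type of guarantee, here is a minimal sketch in Python using Hoeffding's inequality; the data and the confidence level are hypothetical, and the only assumptions are that the samples are i.i.d. and bounded in [0, 1].

```python
import numpy as np

def hoeffding_radius(n, delta):
    """Half-width of a (1 - delta) confidence interval for the mean of n
    i.i.d. samples bounded in [0, 1], by Hoeffding's inequality:
    P(|mean - mu| >= eps) <= 2 exp(-2 n eps^2)."""
    return np.sqrt(np.log(2 / delta) / (2 * n))

rng = np.random.default_rng(0)
samples = rng.beta(2, 5, size=10_000)       # hypothetical bounded data
m_hat = samples.mean()
eps = hoeffding_radius(len(samples), delta=0.01)
print(f"first moment = {m_hat:.4f} +/- {eps:.4f} with probability >= 0.99")
```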

there may be some information that is not easy to incorporate as a prior but is easy to incorporate under some sort of minimax formalism

If you can cook up examples of this, that would be helpful.

I assume you mean in the "infer u" problem? Or am I missing something?

Also, is there a good real-world problem which this reflects?

Yes, I mixed up x and y, good catch. It's not trivial for me to fix this while maintaining wordpress-compatibility, but I'll try to do so in the next few days.

This problem is called the "compressed sensing" problem and is most famously used to speed up MRI scans. However it has also had a multitude of other applications, see here: http://en.wikipedia.org/wiki/Compressed_sensing#Applications.

Many L1 constraint-based algorithms (for example the LASSO) can be interpreted as producing maximum a posteriori Bayesian point estimates with Laplace (= double exponential) priors on the coefficients.
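For readers who have not seen this correspondence spelled out, here is a minimal sketch in Python with hypothetical data and noise scales: up to a positive rescaling, the negative log posterior under a Gaussian likelihood and independent Laplace priors is the usual LASSO objective (as written in scikit-learn's documentation), so the two share a minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
w_true = np.array([2.0, -3.0] + [0.0] * (p - 2))
sigma, b = 0.5, 0.1                         # hypothetical noise scale and Laplace scale
y = X @ w_true + sigma * rng.normal(size=n)

def neg_log_posterior(w):
    # Gaussian likelihood + independent Laplace(0, b) priors, dropping constants
    return np.sum((y - X @ w) ** 2) / (2 * sigma**2) + np.sum(np.abs(w)) / b

def lasso_objective(w, alpha):
    # the usual LASSO objective: (1/2n)||y - Xw||^2 + alpha * ||w||_1
    return np.sum((y - X @ w) ** 2) / (2 * n) + alpha * np.sum(np.abs(w))

# With alpha = sigma^2 / (n * b), the two objectives agree up to the factor sigma^2 / n,
# so they have the same minimizer: the MAP estimate is the LASSO solution.
alpha = sigma**2 / (n * b)
w = rng.normal(size=p)
print(np.isclose(lasso_objective(w, alpha), neg_log_posterior(w) * sigma**2 / n))
```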

Yes, but in this setting maximum a posteriori (MAP) doesn't make any sense from a Bayesian perspective. Maximum a posteriori is supposed to be a point estimate of the posterior, but in this case, the MAP solution will be sparse, whereas the posterior given a Laplace prior will place zero mass on sparse solutions. So the MAP estimate doesn't even qualitatively approximate the posterior.

Ah, good point. It's like the prior, considered as a regularizer, is too "soft" to encode the constraint we want.

A Bayesian could respond that we rarely actually want sparse solutions -- in what situation is a physical parameter identically zero? -- but rather solutions which have many near-zeroes with high probability. The posterior would satisfy this, I think. In this sense a Bayesian could justify the Laplace prior as approximating a so-called "spike-and-slab" prior (which I believe leads to combinatorial intractability similar to the fully L0 solution).

Also, without L0 the frequentist doesn't get fully sparse solutions either. The shrinkage is gradual; sometimes there are many tiny coefficients along the regularization path.

[FWIW I like the logical view of probability, but don't hold a strong Bayesian position. What seems most important to me is getting the semantics of both Bayesian (= conditional on the data) and frequentist (= unconditional, and dealing with the unknowns in some potentially nonprobabilistic way) statements right. Maybe there'd be less confusion -- and more use of Bayes in science -- if "inference" were reserved for the former and "estimation" for the latter.]

Also, without L0 the frequentist doesn't get fully sparse solutions either. The shrinkage is gradual; sometimes there are many tiny coefficients along the regularization path.

See this comment. You actually do get sparse solutions in the scenario I proposed.

Cool; I take that back. Sorry for not reading closely enough.

Okay, I'm somewhat leaving my expertise here and going on intuition, but I would be somewhat surprised if the problem exactly as you stated it turned out to be solvable by a compressed-sensing algorithm as roughly described on Wikipedia. I was trying to figure out how I'd approach the problem you stated, using techniques I already knew about, and it seemed to me more like a logical constraint problem than a stats problem, because we have to end up with exactly 100 nonzero coefficients and those 100 coefficients have to exactly fit the observations y, in what I assume is an underdetermined problem when treated as a linear problem. (In fact, my intuitions were telling me that this ought to correspond to some kind of SAT problem and maybe be NP-hard.) Am I wrong? The Wikipedia description talks about using L1-norm style techniques to implement an "almost all coefficients are 0" norm, aka "L0 norm", but it doesn't actually say the exact # of coefficients is known, nor that the observations are presumed to be noiseless.

You minimize the L1-norm subject to correct prediction on all the training examples. Because of the way the training examples were generated, this will yield at most 100 non-zero coefficients.

It can be proved that the problem is solvable in polynomial time due to a reduction to linear programming:

let m = 10,000 be the number of training examples and n the number of coefficients; then the problem is to minimize sum_i |u_i| subject to x_j · u = y_j for j = 1, ..., m.

You can further manipulate it to get rid of the absolute value. For each coefficient u_i introduce two variables u_i+ and u_i-, with u_i = u_i+ - u_i-, u_i+ >= 0 and u_i- >= 0, and minimize sum_i (u_i+ + u_i-) subject to the same equality constraints. This is a standard linear program.
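To make the reduction concrete, here is a minimal sketch in Python using scipy's LP solver; the problem sizes (60 observations, 200 coefficients, 4 nonzero) are toy values chosen so it runs instantly, not the dimensions discussed above.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 60, 200, 4                       # toy sizes, far smaller than the real problem
X = rng.normal(size=(m, n))                # random design matrix
u_true = np.zeros(n)
u_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = X @ u_true                             # noiseless observations, as in the setup above

# Variables are (u_plus, u_minus), both >= 0, with u = u_plus - u_minus.
# minimize   sum(u_plus) + sum(u_minus)        (equals ||u||_1 at the optimum)
# subject to X (u_plus - u_minus) = y
c = np.ones(2 * n)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
u_hat = res.x[:n] - res.x[n:]
print(np.allclose(u_hat, u_true, atol=1e-6))   # on an easy instance like this we expect exact recovery
```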