There once lived a great man named E.T. Jaynes. He knew that Bayesian inference is the only way to do statistics logically and consistently, standing on the shoulders of misunderstood giants Laplace and Gibbs. On numerous occasions he vanquished traditional "frequentist" statisticians with his superior math, demonstrating to anyone with half a brain how the Bayesian way gives faster and more correct results in each example. The weight of evidence falls so heavily on one side that it makes no sense to argue anymore. The fight is over. Bayes wins. The universe runs on Bayes-structure.

Or at least that's what you believe if you learned this stuff from Overcoming Bias.

Like I was until two days ago, when Cyan hit me over the head with something utterly incomprehensible. I suddenly had to go out and understand this stuff, not just believe it. (The original intention, if I remember it correctly, was to impress you all by pulling a Jaynes.) Now I've come back and intend to provoke a full-on flame war on the topic. Because if we can have thoughtful flame wars about gender but not math, we're a bad community. Bad, bad community.

If you're like me two days ago, you kinda "understand" what Bayesians do: assume a prior probability distribution over hypotheses, use evidence to morph it into a posterior distribution over same, and bless the resulting numbers as your "degrees of belief". But chances are that you have a very vague idea of what frequentists do, apart from deriving half-assed results with their ad hoc tools.

Well, here's the ultra-short version: frequentist statistics is the art of drawing true conclusions about the real world instead of assuming prior degrees of belief and coherently adjusting them to avoid Dutch books.

And here's an ultra-short example of what frequentists can do: estimate 100 independent unknown parameters from 100 different sample data sets and have 90 of the estimates turn out to be true to fact afterward. Like, fo'real. Always 90% in the long run, truly, irrevocably and forever. No Bayesian method known today can reliably do the same: the outcome will depend on the priors you assume for each parameter. I don't believe you're going to get lucky with all 100. And even if I believed you a priori (ahem) that don't make it true.

(That's what Jaynes did to achieve his awesome victories: use trained intuition to pick good priors by hand on a per-sample basis. Maybe you can learn this skill somewhere, but not from the Intuitive Explanation.)

How in the world do you do inference without a prior? Well, the characterization of frequentist statistics as "trickery" is totally justified: it has no single coherent approach and the tricks often give conflicting results. Most everybody agrees that you can't do better than Bayes if you have a clear-cut prior; but if you don't, no one is going to kick you out. We sympathize with your predicament and will gladly sell you some twisted technology!

Confidence intervals: imagine you somehow process some sample data to get an interval. Further imagine that hypothetically, for any given hidden parameter value, this calculation algorithm applied to data sampled under that parameter value yields an interval that covers it with probability 90%. Believe it or not, this perverse trick works 90% of the time without requiring any prior distribution on parameter values.
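If that sounds too magical, a quick simulation sketch shows the guarantee in action. (My own toy setup, not from any of the linked papers: a normal mean with known variance, the simplest case where the interval has a closed form.)

```python
import math
import random

random.seed(0)

MU, SIGMA, N, TRIALS = 3.0, 2.0, 25, 10_000
Z90 = 1.6449  # Phi^-1(0.95): two-sided 90% quantile of the standard normal

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    mean = sum(sample) / N
    half = Z90 * SIGMA / math.sqrt(N)  # known-variance interval, for simplicity
    if mean - half <= MU <= mean + half:
        covered += 1

print(covered / TRIALS)  # hovers around 0.9, whatever MU happens to be
```

Change MU to anything you like: the long-run coverage stays put, and no prior over MU appears anywhere in the calculation.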

Unbiased estimators: you process the sample data to get a number whose expectation magically coincides with the true parameter value.
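The "magic" is again checkable by simulation. A sketch (parameters are my own picks): the sample mean is unbiased for the true mean, and the sample variance with the n-1 divisor is unbiased for the true variance.

```python
import random

random.seed(1)

MU, SIGMA, N, TRIALS = 5.0, 2.0, 5, 20_000

mean_estimates, var_estimates = [], []
for _ in range(TRIALS):
    xs = [random.gauss(MU, SIGMA) for _ in range(N)]
    m = sum(xs) / N
    mean_estimates.append(m)
    # dividing by N-1 rather than N is exactly what makes this unbiased
    var_estimates.append(sum((x - m) ** 2 for x in xs) / (N - 1))

avg_mean = sum(mean_estimates) / TRIALS  # close to MU = 5
avg_var = sum(var_estimates) / TRIALS    # close to SIGMA**2 = 4
print(avg_mean, avg_var)
```

Each individual estimate is noisy; unbiasedness only says the noise averages out to zero, with no prior on MU or SIGMA in sight.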

Hypothesis testing: I give you a black-box random distribution and claim it obeys a specified formula. You sample some data from the box and inspect it. Frequentism allows you to ~~call me a liar and be wrong no more than 10% of the time~~ reject truthful claims no more than 10% of the time, guaranteed, no prior in sight. (Thanks Eliezer for calling out the mistake, and conchis for the correction!)
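Here's a sketch of the corrected guarantee (my own minimal example: the claimed formula is N(0,1), the test is a z-test on the sample mean at the 10% level):

```python
import math
import random

random.seed(2)

N, TRIALS = 30, 10_000
Z_CRIT = 1.6449  # reject when |z| exceeds this: a two-sided 10% test

rejections = 0
for _ in range(TRIALS):
    # the box really is N(0, 1), i.e. my claim is truthful
    xs = [random.gauss(0.0, 1.0) for _ in range(N)]
    z = (sum(xs) / N) * math.sqrt(N)  # test statistic under the claimed formula
    if abs(z) > Z_CRIT:
        rejections += 1  # you call me a liar -- wrongly

print(rejections / TRIALS)  # about 0.10 by construction
```

The 10% error rate on truthful claims holds by construction; what the test does when I'm actually lying is a separate question (power), and there no such blanket guarantee exists.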

But this is getting too academic. I ought to throw you dry wood, good flame material. This hilarious PDF from Andrew Gelman should do the trick. Choice quote:

Well, let me tell you something. The 50 states aren't exchangeable. I've lived in a few of them and visited nearly all the others, and calling them exchangeable is just silly. Calling it a hierarchical or multilevel model doesn't change things - it's an additional level of modeling that I'd rather not do. Call me old-fashioned, but I'd rather let the data speak without applying a probability distribution to something like the 50 states which are neither random nor a sample.

As a bonus, the bibliography to that article contains such marvelous titles as "Why Isn't Everyone a Bayesian?" And Larry Wasserman's followup is also quite disturbing.

Another stick for the fire is provided by Shalizi, who (among other things) makes the correct point that a good Bayesian must never be uncertain about the probability of any future event. That's why he calls Bayesians "Often Wrong, Never In Doubt":

The Bayesian, by definition, believes in a joint distribution of the random sequence X and of the hypothesis M. (Otherwise, Bayes's rule makes no sense.) This means that by integrating over M, we get an unconditional, marginal probability for f.

For my final quote it seems only fair to add one more polemical summary of Cyan's point that made me sit up and look around in a bewildered manner. Credit to Wasserman again:

Pennypacker: You see, physics has really advanced. All those quantities I estimated have now been measured to great precision. Of those thousands of 95 percent intervals, only 3 percent contained the true values! They concluded I was a fraud.

van Nostrand: Pennypacker you fool. I never said those intervals would contain the truth 95 percent of the time. I guaranteed coherence not coverage!

Pennypacker: A lot of good that did me. I should have gone to that objective Bayesian statistician. At least he cares about the frequentist properties of his procedures.

van Nostrand: Well I'm sorry you feel that way Pennypacker. But I can't be responsible for your incoherent colleagues. I've had enough now. Be on your way.

There's often good reason to advocate a correct theory over a wrong one. But all this evidence (ahem) shows that switching to Guardian of Truth mode was, at the very least, premature for me. Bayes isn't the correct theory to make conclusions about the world. As of today, we have no coherent theory for making conclusions about the world. Both perspectives have serious problems. So do yourself a favor and switch to truth-seeker mode.

163 comments

Hypothesis testing: I give you a black-box random distribution and claim it obeys a specified formula. You sample some data from the box and inspect it. Frequentism often allows you to call me a liar and be wrong no more than 10% of the time, guaranteed, no priors in sight.

Wrong. If all black boxes do obey their specified formulas, then every single time you call the other person a liar, you will be wrong. P(wrong|"false") ~ 1.

I'm thinking you still haven't quite understood here what frequentist statistics do.

It's not perfectly reliable. They assume they have perfect information about experimental setups and likelihood ratios. (Where does this perfect knowledge come from? Can Bayesians get their priors from the same source?)

A Bayesian who wants to report something at least as reliable as a frequentist statistic, simply reports a likelihood ratio between two or more hypotheses from the evidence; and in that moment has told another Bayesian just what frequentists think they have perfect knowledge of, but simply, with far less confusion and error and mathematical chicanery and opportunity for distortion, and greater ability to combine the results of multiple experiments.

And more importantly, we understand what likelihood ratios are, and that they do not become posteriors without adding a prior somewhere.

Thanks for the catch, struck out that part. Yes, you can get your priors from the same source they get experimental setups: the world. Except this source doesn't provide priors. ETA: likelihood ratios don't seem to communicate the same info about the world as confidence intervals to me. Can you clarify?
Ok, bear with me. cousin_it's claim was that P(wrong|boxes-obey-formulas)<=.1, am I right? I get that P(wrong|"false" & boxes-obey-formulas) ~ 1, so the denial of cousin_it's claim seems to require P("false"|boxes-obey-formulas) > .1? I assumed that the point was precisely that the frequentist procedure will give you P("false"|boxes-obey-formulas)<=.1. Is that wrong?
My claim was what Eliezer said, and it was incorrect. Other than that, your comment is correct.
Ah, I parsed it wrongly. Whoops. Would it be worth replacing it with a corrected claim rather than just striking it?
Done. Thanks for the help!

a good Bayesian must never be uncertain about the probability of any future event

Who? Whaa? Your probability is your uncertainty.

Also, didn't we already cover metauncertainty here?
Yup. Shalizi's point is that once you've taken meta-uncertainty into account (by marginalizing over it), you have a precise and specific probability distribution over outcomes.

Well, yes. You have to bet at some odds. You're in some particular state of uncertainty and not a different one. I suppose the game is to make people think that being in some particular state of uncertainty, corresponds to claiming to know too much about the problem? The ignorance is shown in the instability of the estimate - the way it reacts strongly to new evidence.

I'm with you on this one. What Shalizi is criticizing is essentially a consequence of the desideratum that a single real number shall represent the plausibility of an event. I don't think the methods he's advocating dispense with the desideratum, so I view this as a delicious bullet-shaped candy that he's convinced is a real bullet and is attempting to dodge.
Shalizi says "Bayesian agents never have the kind of uncertainty that Rebonato (sensibly) thinks people in finance should have". My guess is that this means (something that could be described as) uncertainty as to how well-calibrated one is, which AFAIK hasn't been explicitly covered here.
I think what Shalizi means is that a Bayesian model is never "wrong", in the sense that it is a true description of the current state of the ideal Bayesian agent's knowledge. I.e., if A says an event X has probability p, and B says X has probability q, then they aren't lying even if p != q. And the ideal Bayesian agent updates that knowledge perfectly by Bayes' rule (where knowledge is defined as probability distributions of states of the world). In this case, if A and B talk with each other then they should probably update, of course.

In frequentist statistics the paradigm is that one searches for the 'true' model by looking through a space of 'false' models. In this case if A says X has probability p and B says X has probability q != p then at least one of them is wrong.

Can you give a detailed numerical examples of some problem where the Bayesian and Frequentist give different answers, and you feel strongly that the Frequentist's answer is better somehow?

I think you've tried to do that, but I don't fully understand most of your examples. Perhaps if you used numbers and equations, that would help a lot of people understand your point. Maybe expand on your "And here's an ultra-short example of what frequentists can do" idea?

Short answer: Bayesian answers don't give coverage guarantees. Long answer: see the comments to Cyan's post.
Eliezer Yudkowsky
"Coverage guarantees" is a frequentist concept. Can you explain where Bayesians fail by Bayesian lights? In the real world, somewhere?
How about this: a Bayesian will always predict that she is perfectly calibrated, even though she knows the theorems proving she isn't.
Eliezer Yudkowsky
A Bayesian will have a probability distribution over possible outcomes, some of which give her lower scores than her probabilistic expectation of average score, and some of which give her higher scores than this expectation. I am unable to parse your above claim, and ask for specific math on a specific example. If you know your score will be lower than you expect, you should lower your expectation. If you know something will happen less often than the probability you assign, you should assign a lower probability. This sounds like an inconsistent epistemic state for a Bayesian to be in.
I spent some time looking up papers, trying to find accessible ones. The main paper that kicked off the matching prior program is Welch and Peers, 1963, but you need access to JSTOR. The best I can offer is the following example.

I am estimating a large number of positive estimands. I have one noisy observation for each one; the noise is Gaussian with standard deviation equal to one. I have no information relating the estimands; per Jaynes, I give them independent priors, resulting in independent posteriors*. I do not have information justifying a proper prior. Let's say I use a flat prior over the positive real line.

No matter the true value of each estimand, the sampling probability of the event "my posterior 90% quantile is greater than the estimand" is less than 0.9 (see Figure 6 of this working paper by D.A.S. Fraser). So the more estimands I analyze, the more sure I am that the intervals from 0 to my posterior 90% quantiles will contain less than 90% of the estimands. I don't know if there's an exact matching prior in this problem, but I suspect it lacks the correct structure.

* This is a place I think Jaynes goes wrong: the quantities are best modeled as exchangeable, not independent. Equivalently, I put them in a hierarchical model. But this only kicks the problem of priors guaranteeing calibration up a level.
Eliezer Yudkowsky
I'm sorry, but the level of frequentist gibberish in this paper is larger than I would really like to work through. If you could be so kind, please state: What the Bayesian is using as a prior and likelihood function; and what distribution the paper assumes the actual parameters are being drawn from, and what the real causal process is governing the appearance of evidence. If the two don't match, then of course the Bayesian posterior distributions, relative to the experimenter's higher knowledge, can appear poorly calibrated. If the two do match, then the Bayesian should be well-calibrated. Sure looks QED-ish to me.
The example doesn't come from the paper; I made it myself. You only need to believe the figure I cited -- don't bother with the rest of the paper.

Call the estimands mu_1 to mu_n; the data are x_1 to x_n. The prior over the mu parameters is flat in the positive subset of R^n, zero elsewhere. The sampling distribution for x_i is Normal(mu_i,1). I don't know the distribution the parameters actually follow. The causal process is irrelevant -- I'll stipulate that the sampling distribution is known exactly.

Call the 90% quantiles of my posterior distributions q_i. From the sampling perspective, these are random quantities, being monotonic functions of the data. Their sampling distributions satisfy the inequality Pr(q_i > mu_i | mu_i) < 0.9. (This is what the figure I cited shows.) As n goes to infinity, I become more and more sure that my posterior intervals of the form (0, q_i] are undercalibrated.

You might cite the improper prior as the source of the problem. However, if the parameter space were unrestricted and the prior flat over all of R^n, the posterior intervals would be correctly calibrated. But it really is fair to demand a proper prior. How could we determine that prior? Only by Bayesian updating from some pre-prior state of information to the prior state of information (or equivalently, by logical deduction, provided that the knowledge we update on is certain). Right away we run into the problem that Bayesian updating does not have calibration guarantees in general (and for this, you really ought to read the literature), so it's likely that any proper prior we might justify does not have a calibration guarantee.
Wanna bet? Literally. Have a Bayesian make a whole bunch of predictions and then offer her bets with payoffs based on what apparent calibration the results will reflect. See which bets she accepts and which she refuses.
Are you volunteering?
Sure. :) But let me warn you... I actually predict my calibration to be pretty darn awful.
We need a trusted third party.
Find a candidate. I was about to suggest we could just bet raw ego points by publicly posting here... but then I realised I prove my point just by playing. It should be obvious, by the way, that if the predictions you have me make pertain to black boxes that you construct then I would only bet if the odds gave a money pump. There are few cases in which I would expect my calibration to be superior to what you could predict with complete knowledge of the distribution.
Phooey. There goes plan A.
Plan B involves trying to use some nasty posterior inconsistency results, so don't think you're out of the woods yet.
I am convinced in full generality that being offered the option of a bet can only provide utility >= 0. So if the punch line is 'insufficiently constrained rationality' then yes, the joke is on me! And yes, I suspect trying to get my head around that paper would (will) be rather costly! I'm a goddam programmer. :P
I volunteer, if y'all tell me what to do.
I volunteer.
I think this is incorrect. A Bayesian doesn't predict a variance of zero on their calibration calculated ten samples later.
Of course not. If you choose to care only about the things Bayes can give you, it's a mathematical fact that you can't do better.
I didn't like the "by Bayesian lights" phrase either. What I take as the relevant part of the question is this: Can you provide an example of a frequentist concept that can be used to make predictions in the real world for which a bayesian prediction will fail? "Bayesian answers don't give coverage guarantees" doesn't demonstrate anything by itself. The question is could the application of Bayes give a prediction equal to or superior to the prediction about the real world implicit in a coverage guarantee? If you can provide such an example then you will have proved many people to be wrong in a significant, fundamental way. But I haven't seen anything in this thread or in either of Cyan's which fits that category.
Once again: the real-world performance (as opposed to internal coherence) of the Bayesian method on any given problem depends on the prior you choose for that problem. If you have a well-calibrated prior, Bayes gives well-calibrated results equal or superior to any frequentist methods. If you don't, science knows no general way to invent a prior that will reliably yield results superior to anything at all, not just frequentist methods. For example, Jaynes spent a large part of his life searching for a method to create uninformative priors with maxent, but maxent still doesn't guarantee you anything beyond "cross your fingers".
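How much the answer moves with the prior is easy to exhibit even in the simplest possible setting. A minimal sketch (the coin example and the particular Beta priors are my own picks, not from the thread):

```python
# Two Bayesians see the same data -- 7 heads in 10 tosses -- but start
# from different "uninformative" priors over the coin's bias.
heads, n = 7, 10

def posterior_mean(a, b):
    # Beta(a, b) prior + binomial likelihood -> Beta(a + heads, b + misses)
    # posterior, whose mean is (a + heads) / (a + b + n)
    return (a + heads) / (a + b + n)

uniform = posterior_mean(1.0, 1.0)   # Laplace's flat prior: 8/12
jeffreys = posterior_mean(0.5, 0.5)  # Jeffreys prior: 7.5/11

print(uniform, jeffreys)  # same data, different conclusions
```

Both priors have respectable credentials as "uninformative", and with only ten tosses they give visibly different answers; no formula tells you which one buys you calibration.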

If your prior is screwed up enough, you'll also misunderstand the experimental setup and the likelihood ratios. Frequentism depends on prior knowledge just as much as Bayesianism, it just doesn't have a good formal way of treating it.

I give you some numbers taken from a normal distribution with unknown mean and variance. If you're a frequentist, your honest estimate of the mean will be the sample mean. If you're a Bayesian, it will be some number off to the side, depending on whatever bullshit prior you managed to glean from my words above - and you don't have the option of skipping that step, and you don't have the option of devising a prior that will always exactly match the frequentist conclusion, because math doesn't allow it in the general case. (I kinda equivocate on "honest estimate", but refusing to ever give point estimates doesn't speak well of a mathematician anyway.) So nah, Bayesianism depends on priors more, not "just as much".

If tomorrow Bayesians find a good formalization of "uninformative prior" and a general formula to devise them, you'll happily discard your old bullshit prior and go with the flow, thus admitting that your careful analysis of my words about the "unknown normal distribution" today wasn't relevant at all. This is the most fishy part IMO. (Disclaimer: I am not a crazy-convinced frequentist. I'm a newbie trying to get good answers out of Bayesians, and some of the answers already given in these threads satisfy me perfectly well.)
The normal distribution with unknown mean and variance was a bad choice for this example. It's the one case where everyone agrees what the uninformative prior is. (It's flat with respect to the mean and the log-variance.) This uninformative prior is also a matching prior -- posterior intervals are confidence intervals.
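The matching is checkable numerically: under the flat prior on (mean, log-variance), the 90% posterior interval for the mean is exactly the classical t-interval, so it must have exact frequentist coverage. A sketch of that frequentist check (the t quantile for 9 degrees of freedom is hardcoded, since Python's stdlib has no Student-t):

```python
import math
import random

random.seed(3)

MU, SIGMA, N, TRIALS = 0.0, 1.0, 10, 10_000
T_CRIT = 1.833  # 95th percentile of Student's t with N-1 = 9 df

covered = 0
for _ in range(TRIALS):
    xs = [random.gauss(MU, SIGMA) for _ in range(N)]
    m = sum(xs) / N
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (N - 1))
    half = T_CRIT * s / math.sqrt(N)  # the 90% posterior interval = t-interval
    if m - half <= MU <= m + half:
        covered += 1

print(covered / TRIALS)  # about 0.90: the posterior interval doubles as a CI
```

So in this one happy case the Bayesian and the frequentist hand you the same interval and both guarantees hold at once.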
I didn't know that was possible, thanks. (Wow, a prior with integral=infinity! One that can't be reached as a posterior after any observation! How'd a Bayesian come by that? But seems to work regardless.) What would be a better example? ETA: I believe the point raised in that comment still deserves an answer from Bayesians.
Done, but I think a more useful reply could be given if you provided an actual worked example where a frequentist tool leads you to make a different prediction than the application of Bayes would (and where you prefer the frequentist prediction.) Something with numbers in it and with the frequentist prediction provided.
Here's one. There is one data point, distributed according to 0.5*N(0,1) + 0.5*N(mu,1). Bayes: any improper prior for mu yields an improper posterior (because there's a 50% chance that the data are not informative about mu). Any proper prior has no calibration guarantee. Frequentist: Neyman's confidence belt construction guarantees valid confidence coverage of the resulting interval. If the datum is close to 0, the interval may be the whole real line. This is just what we want [claims the frequentist, not me!]; after all, when the datum is close to 0, mu really could be anything.
Can you explain the terms "calibration guarantee", and what "the resulting interval" is? Also, I don't understand why you say there is a 50% chance the data is not informative about mu. This is not a multi-modal distribution; it is blended from N(0,1) and N(mu,1). If mu can be any positive or negative number, then the one data point will tell you whether mu is positive or negative with probability 1.
By "calibration guarantee" I mean valid confidence coverage: if I give a number of intervals at a stated confidence, then the relative frequency with which the estimated quantities fall within the interval is guaranteed to approach the stated confidence as the number of estimated quantities grows. Here we might imagine a large number of mu parameters and one datum per parameter.

Not easily. The second cousin of this post (a reply to wedrifid) contains a link to a paper on arXiv that gives a bare-bones overview of how confidence intervals can be constructed on page 3. When you've got that far I can tell you what interval I have in mind.

I think there's been a misunderstanding somewhere. Let Z be a fair coin toss. If it comes up heads the datum is generated from N(0,1); if it comes up tails, the datum is generated from N(mu,1). Z is unobserved and mu is unknown. The probability distribution of the datum is as stated above. It will be multimodal if the absolute value of mu is greater than 2 (according to some quick plots I made; I did not do a mathematical proof). If I observe the datum 0.1, is mu greater than or less than 0?
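Cyan's side observation (the equal-weight mixture of unit normals goes bimodal once the separation |mu| exceeds 2) is a known fact about Gaussian mixtures, and a grid check is a one-screen sketch:

```python
import math

def mixture_pdf(x, mu):
    phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return 0.5 * phi(x) + 0.5 * phi(x - mu)

def count_modes(mu, step=0.01):
    # scan a grid wide enough to cover both components
    xs = [i * step for i in range(-500, int(mu / step) + 500)]
    ys = [mixture_pdf(x, mu) for x in xs]
    # count interior local maxima of the density on the grid
    return sum(1 for i in range(1, len(ys) - 1)
               if ys[i - 1] < ys[i] > ys[i + 1])

print(count_modes(1.5), count_modes(3.0))  # 1 mode, then 2
```

For mu = 1.5 the two bumps blur into a single hump; for mu = 3 the density has two distinct modes with a dip near mu/2, which is what makes the "which component did my datum come from?" question genuinely ambiguous near the middle.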
Thanks Cyan. I'll get back to you when (and if) I've had time to get my head around Neyman's confidence belt construction, with which I've never had cause to acquaint myself.
This paper has a good explanation. Note that I've left one of the steps (the "ordering" that determines inclusion into the confidence belt) undetermined. I'll tell you the ordering I have in mind if you get to the point of wanting to ask me.
That's a lot of integration to get my head around.
All you need is page 3 (especially the figure). If you understand that in depth, then I can tell you what the confidence belt for my problem above looks like. Then I can give you a simulation algorithm and you can play around and see exactly how confidence intervals work and what they can give you.
It's called an improper prior. There's been some argument about their use, but they seldom lead to problems. The posteriors usually have much better behavior at infinity, and when they don't, that's the theory telling us that the information doesn't determine the solution to the problem. The observation that an improper prior cannot be obtained as a posterior distribution is kind of trivial. It is meant to represent a total lack of information w.r.t. some parameter. As soon as you have made an observation, you have more information than that.
Maybe the difference lies in the format of answers?

* We know: a set of n outputs of a random number generator with normal distribution. Say {3.2, 4.5, 8.1}.
* We don't know: mean m and variance v.
* Your proposed answer: m = 5.26, v = 6.44.
* A Bayesian's answer: a probability distribution P(m) over the mean and another distribution Q(v) over the variance.

How does a frequentist get them? If he doesn't have them, what's his confidence in m = 5.26 and v = 6.44? What if the set contains only one number - what is the frequentist's estimate for v? Note that a Bayesian has no problem even if the data set is empty, he only rests with his priors. If the data set is large, the Bayesian's answer will inevitably converge to a delta function around the frequentist's estimate, no matter what the priors are.
cousin_it
50% confidence interval for mean: 4.07 to 6.46, stddev: 2.15 to 4.74
90% confidence interval for mean: 0.98 to 9.55, stddev: 1.46 to 11.20
If there's only one sample, the calculation fails due to division by n-1, so the frequentist says "no answer". The Bayesian says the same if he used the improper prior Cyan mentioned.
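For the curious, those intervals come from the standard t and chi-square recipes, and can be reproduced in a few lines. (A sketch: the t quantiles for 2 degrees of freedom are hardcoded, and the chi-square quantile with df = 2 happens to have the closed form -2*ln(1-p), so no stats library is needed.)

```python
import math

xs = [3.2, 4.5, 8.1]
n = len(xs)
m = sum(xs) / n                     # sample mean, about 5.27
ss = sum((x - m) ** 2 for x in xs)  # sum of squared deviations
s = math.sqrt(ss / (n - 1))         # sample stddev, about 2.54

def mean_ci(t):
    # t-interval for the mean; t is the relevant Student-t quantile, df = 2
    half = t * s / math.sqrt(n)
    return m - half, m + half

def sd_ci(p_lo, p_hi):
    # chi-square interval for the stddev; quantile of chi2 with df = 2
    q = lambda p: -2 * math.log(1 - p)
    return math.sqrt(ss / q(p_hi)), math.sqrt(ss / q(p_lo))

print(mean_ci(0.8165))    # 50% CI for the mean: about (4.07, 6.46)
print(mean_ci(2.9200))    # 90% CI for the mean: about (0.98, 9.55)
print(sd_ci(0.25, 0.75))  # 50% CI for the stddev: about (2.16, 4.73)
print(sd_ci(0.05, 0.95))  # 90% CI for the stddev: about (1.47, 11.21)
```

Note the slight asymmetry of the stddev intervals around the point estimate 2.54; that's the skew of the chi-square distribution showing through, and it's the same skew a Bayesian would see in the posterior Q(v) under the matching prior.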
Hm, should I understand it that the frequentist assumes normal distribution of the mean value with peak at the estimated 5.26? If so, then frequentism = bayes + flat prior. Improper priors are however quite tricky, they may lead to paradoxes such as the two-envelope paradox.
The prior for variance that matches the frequentist conclusion isn't flat. And even if it were, a flat prior for variance implies a non-flat prior for standard deviation and vice versa. :-)
Of course, I meant flat distribution of the mean. The variance cannot be negative at least.
In this problem, yes. In the general case no one knows exactly what the flat prior is, e.g. if there are constraints on model parameters.
Using the flat improper prior I was talking about before, when there's only one data point the posterior distribution is improper, so the Bayesian answer is the same as the frequentist's.
Yep, I know that. Woohoo, an improper prior!
A Bayesian does not have the option of 'just skipping that step' and choosing to accept whichever prior was mandated by Fisher (or whichever other statistician created or insisted upon the use of the particular tool in question). It does not follow that the Bayesian is relying on 'bullshit' more than the frequentist. In fact, when I use the label 'bullshit' I usually mean 'the use of authority or social power mechanisms in lieu of or in direct defiance of reason'. I obviously apply 'bullshit prior' to the frequentist option in this case.
Why in the world doesn't a Bayesian have that option? I thought you were a free people. :-) How'd you decide to reject those priors in favor of other ones, anyway? As far as I currently understand, there's no universally accepted mathematical way to pick the best prior for every given problem, and no psychologically coherent way to pick it out of your head either, because it ain't there. In addition to that, here's some anecdotal evidence: I never ever heard of a Bayesian agent accepting or rejecting a prior.
That was a partial quote and partial paraphrase of the claim made by cousin_it (hang on, that's you! huh?). I thought that the "we are a free people and can use the frequentist implicit priors whenever they happen to be the best available" claim had been made more than enough times, so I left off that nitpick and focussed on my core gripe with the post in question. That is, the suggestion that using priors because tradition tells you to makes them less 'bullshit'. I think your inclusion of 'just' allows for the possibility that of all possible configurations of prior probabilities the frequentist one so happens to be the one worth choosing. I'm confused. What do you mean by accepting or rejecting a prior?
Funny as it is, I don't contradict myself. A Bayesian doesn't have the option of skipping the prior altogether, but does have the option of picking priors with frequentist justifications, which option you call "bullshit", though for the life of me I can't tell how you can tell. Frequentists have valid reasons for their procedures besides tradition: the procedures can be shown to always work, in a certain sense. On the other hand, I know of no Bayesian-prior-generating procedure that can be shown to work in this sense or any other sense. Some priors are very bad. If a Bayesian somehow ends up with such a prior, they're SOL because they have no notion of rejecting priors.
There are two priors for A that a Bayesian is unable to update from: p(A) = 0 and p(A) = 1. If a Bayesian ever assigns p(A) = 0 or p(A) = 1 and is mistaken, they fail at life. No second chances. Shalizi's hypothetical agent started with the absolute (and insane) belief that the distribution was not a mix of the two Gaussians in question. That did not change through the application of Bayes' rule. Bayesians cannot reject a prior of 0. They can 'reject' a prior of "That's definitely not going to happen. But if I am faced with overwhelming evidence then I may change my mind a bit." They just wouldn't write that state as p = 0 or imply it through excluding it from a simplified model without being willing to review the model for sanity afterward.
I am trying to understand the examples on that page, but they seem strange; shouldn't there be a model with parameters, and a prior distribution for those parameters? I don't understand the inferences. Can someone explain?
Well, the first example is a model with a single parameter. Roughly speaking, the Bayesian initially believes that the true model is either a Gaussian around 1, or a Gaussian around -1. The actual distribution is a mix of those two, so the Bayesian has no chance of ever arriving at the truth (the prior for the truth is zero), instead becoming over time more and more comically overconfident in one of the initial preposterous beliefs.
Vocabulary nitpick: I believe you wrote "in luew of" in lieu of "in lieu of". Sorry, couldn't help it. IAWYC, anyhow.
Damn that word and its excessive vowels!

I didn't mean to rehabilitate frequentism! I only meant to point out that calibration is a frequentist optimality criterion, and one that Bayesian posterior intervals can be proved not to have in general. I view this as a bullet to be bitten, not dodged.

It's out of your hands now. Overcoming Bayes!

Can someone do something I've never seen anyone do - lay out a simple example in which the Bayesian and frequentist approaches give different answers?


I've had some training in Bayesian and frequentist statistics and I think I know enough to say that it would be difficult to give a "simple" and satisfying example. The reason is that if one is dealing with finite-dimensional statistical models (where the parameter space of the model is finite-dimensional) and one has chosen a prior for those parameters that puts non-zero weight on the true values, then the Bernstein-von Mises theorem guarantees that the Bayesian posterior distribution and the maximum likelihood estimate converge to the same probability distribution (although you may need to use improper priors). This covers cases where we consider finite outcomes such as a toss of a coin or rolling a die.

I apologize if that's too much jargon, but for really simple models that are easy to specify you tend to get the same answer. Bayesian stats starts to behave differently from frequentist statistics in noticeable ways when you consider infinite outcome spaces. An example here might be where you are considering probability distributions over curves (this arises in my research on speech recognition). In this case even if you have a seemingly sensible prior you can end ... (read more)

Thanks much! What do "non-zero weight" and "improper priors" mean? EDIT: Improper priors mean priors that don't sum to one. I would guess "non-zero weight" means "non-zero probability". But then I would wonder why anyone would introduce the term "weight". Perhaps "weight" is the term you use to express a value from a probability density function that is not itself a probability.
No problem. Improper priors are generally only considered in the case of continuous distributions so 'sum' is probably not the right term, integrate is usually used. I used the term 'weight' to signify an integral because of how I usually intuit probability measures. Say you have a random variable X that takes values in the real line, the probability that it takes a value in some subset S of the real line would be the integral of S with respect to the given probability measure. There's a good discussion of this way of viewing probability distributions in the wikipedia article. There's also a fantastic textbook on the subject that really has made a world of difference for me mathematically.
How about this?

I had another thought on the subject. Consider flipping a coin; a Bayesian says that the 50% estimate of getting tails is just your own inability to predict with sufficient accuracy; a frequentist says that the 50% is a property of the coin - or to be less straw-making about it, a property of large sets of indistinguishable coin-flips. So, ok, in principle you could build a coin-predictor and remove the uncertainty. But now consider an electron passing through a beam splitter. Here there is no method even in principle of predicting which Everett branch you... (read more)

The relevant property of the electron+beamsplitter(+everything else) system is that its wavefunction will be evenly split between the two Everett branches. No chance involved. 50% is how much I care about each branch. And after performing the experiment but before looking at the result, I can continue using the same reasoning: "I have already decohered, but whatever deterministic decision algorithm I apply now will return the same answer in both branches, so I can and should optimize both outcomes at once." Or I can switch to indexical uncertainty: "I am uncertain about which instance I am, even though I know the state of the universe with certainty." These two methods should be equivalent. If we ever do find some nondeterministic physical law, then you can have your probability as a fundamental property of particles. Maybe. I'm not sure how one would experimentally distinguish "one stochastic world" from "branch both ways" or from "secure pseudo-random number generator" in the absence of any interference pattern to have a precise theory of; but I'm not going to speculate here about what physicists can or can't learn.
I believe the answer to this question is currently "we don't know". But notice that "the electron" doesn't exist, it's a pattern ("just" a pattern? :)) in the wavefunction. A pattern which happens to occur in lots of places, so we call it an electron. My intuition, IANAP, is that if anything it is more natural to say the 50% belongs somehow to which branch you find yourself in, not the pattern in the wavefunction we call an electron.
Ok, but I don't think that matters for the question of frequentist versus Bayesian. You're still saying that the 50% is a property of something other than your own uncertainty. Moving the problem to indexical uncertainty seems to me to rely on moving the question in time; you can only do this after you've done the experiment but before you've looked at the measurement. This feels to me like asking a different question.
Finally, the electron is found at some certain polarisation. You just don't know which before actually doing the experiment (same as for the coin), and you can't in principle (at least according to the present model of physics; don't forget that non-local hidden variables are not ruled out) make any observation which tells you the result with more certainty in advance (for the coin you can). So the difference is that the future of a classical system can be predicted with unlimited certainty from its present state, while for a quantum system it cannot. This doesn't necessarily mean that the future is not determined.

One can adopt the viewpoint (I think it was even suggested on OB/LW in Eliezer's posts about timeless physics) that the future is symmetric to the past: it exists in the whole history of the universe, and if we don't know it now, that's our ignorance. I suppose you would agree that not knowing about the electron's past is a matter of our ignorance rather than a property of the electron itself, without regard to whether we are able to calculate it from presently available information, even in principle (i.e. using present theories).

I also think there is little merit in discussions about terminology, and this one tends in that direction. Practically, there's no difference between saying that quantum probabilities are "properties of the system" or "of the predictor". Either we can predict, or not, and that's all that matters. Beware of the clause "in principle", as it often only obscures the debate.

Edit: to formulate it a little differently, predictability is an instance of regularity in the universe, i.e. our ability to compress the data of the whole history of the universe into some brief set of laws and a possibly not so brief set of initial conditions; nevertheless, a much smaller amount of information than the history of the universe recorded at each point and time instant. As we do not have this huge pack of information and thus can't say to what extent

That's what Jaynes did to achieve his awesome victories: use trained intuition to pick good priors by hand on a per-sample basis.

... as if applying the classical method doesn't require using trained intuition to use the "right" method for a particular kind of problem, which amounts to choosing a prior but doing it implicitly rather than explicitly ...

Our inference is conditional on our assumptions [for example, the prior P(Lambda)]. Critics view such priors as a difficulty because they are 'subjective', but I don't see how it could be other

... (read more)
Frequentist methods often have mathematical justifications, so Bayesian priors should have them too.

Since we're discussing (among other things) noninformative priors, I'd like to ask: does anyone know of a decent (noninformative) prior for the space of stationary, bidirectionally infinite sequences of 0s and 1s?

Of course in any practical inference problem it would be pointless to consider the infinite joint distribution, and you'd only need to consider what happens for a finite chunk of bits, i.e. a higher-order Markov process, described by a bunch of parameters (probabilities) which would need to satisfy some linear inequalities. So it's easy to find a ... (read more)

I suppose it depends what you want to do, first I would point out that the set is in a bijection with the real numbers (think of two simple injections and then use Cantor–Bernstein–Schroeder), so you can use any prior over the real numbers. The fact that you want to look at infinite sequences of 0s and 1s seems to imply that you are considering a specific type of problem that would demand a very particular meaning of 'non-informative prior'. What I mean by that is that any 'noninformative prior' usually incorporates some kind of invariance: e.g. a uniform prior on [0,1] for a Bernoulli distribution is invariant with respect to the true value being anywhere in the interval.
The purpose would be to predict regularities in a "language", e.g. to try to achieve decent data compression in a way similar to other Markov-chain-based approaches. In terms of properties, I can't think of any nontrivial ones, except the usual important one that the prior assign nonzero probability to every open set; mainly I'm just trying to find something that I can imagine computing with. It's true that there exists a bijection between this space and the real numbers, but it doesn't seem like a very natural one, though it does work (it's measurable, etc). I'll have to think about that one.
What topology are you putting on this set? I made the point about the real numbers because it shows that putting a non-informative prior on the infinite bidirectional sequences should be at least as hard as for the real numbers (which is non-trivial). Usually a regularity is defined in terms of a particular computational model, so if you picked Turing machines (or the variant that works with bidirectional infinite tape, which is basically the same class as infinite tape in one direction), then you could instead begin constructing your prior in terms of Turing machines. I don't know if that helps any.
Each element of the set is characterized by a bunch of probabilities; for example there is p_01101, which is the probability that elements x_{i+1} through x_{i+5} are 01101, for any i. I was thinking of using the topology induced by these maps (i.e. generated by preimages of open sets under them). How is putting a noninformative prior on the reals hard? With the usual required invariance, the uniform (improper) prior does the job. I don't mind having the prior be improper here either, and as I said I don't know what invariance I should want; I can't think of many interesting group actions that apply. Though of course 0 and 1 should be treated symmetrically; but that's trivial to arrange. I guess you're right that regularities can be described more generally with computational models; but I expect them to be harder to deal with than this (relatively) simple, noncomputational (though stochastic) model. I'm not looking for regularities among the models, so I'm not sure how a computational model would help me.
Something about this discussion reminds me of a hilarious text: The moral of this story seems to be, Assume priors over generators, not over sequences. A noninformative prior over the reals will never learn that the digit after 0100 is more likely to be 1, no matter how much data you feed it.
Right, that is a good piece. But I'm afraid I was unclear. (Sorry if I was.) I'm looking for a prior over stationary sequences of digits, not just sequences. I guess the adjective "stationary" can be interpreted in two compatible ways: either I'm talking about sequences such that for every possible string w, the proportion of substrings of length |w| that are equal to w, among all substrings of length |w|, tends to a limit as you consider more and more substrings (extending either forward or backward in the sequence); this would not quite be a prior over generators, and isn't what I meant. The cleaner thing I could have meant (and did) is the collection of stationary sequence-valued random variables, each of which (up to isomorphism) is completely described by the probabilities p_w of a string of length |w| coming up as w. These, then, are generators.
Janos, I spent some days parsing your request and it's quite complex. Cosma Shalizi's thesis and algorithm seem to address your problem in a frequentist manner, but I can't yet work out any good Bayesian solution.
One issue with, say, taking a normal distribution and letting the variance go to infinity (which is the improper prior I normally use) is that the posterior distribution is going to have a finite mean, which may not be a desired property of the resulting distribution. You're right that there's no essential reason to relate things back to the reals; I was just using that to illustrate the difficulty. I was thinking about this a little over the last few days, and it occurred to me that one model for what you are discussing might actually be an infinite graphical model. The infinite bidirectional sequence here would be the values of Bernoulli-distributed random variables. Probably the most interesting case for you would be a Markov random field, as the stochastic 'patterns' you were discussing may be described in terms of dependencies between random variables. There are three papers I read a little while back on the topic of (and related to) something called an Indian Buffet process. These may not quite be what you are looking for, since they deal with a bound on the extent of the interactions; you probably want to think about probability distributions over binary matrices with an infinite number of rows and columns (which would correspond to an adjacency matrix over an infinite graph).

Perhaps we can try an experiment? We have here, apparently, both Bayesians and frequentists; or at a minimum, people knowledgeable enough to be able to apply both methods. Suppose I generate 25 data points from some distribution whose nature I do not disclose, and ask for estimates of the true mean and standard deviation, from a Bayesian and a frequentist? The underlying analysis would also be welcome. If necessary we could extend this to 100 sets of data points, ask for 95% confidence intervals, and see if the methods are well calibrated. (This does proba... (read more)

There's a difficulty with your experimental setup in that you are implicitly invoking a probability distribution over probability distributions (since you represent a random choice of a distribution). The results are going to be highly dependent upon how you construct your distribution over distributions. If your outcome space of probability distributions is infinite (which is what I would expect), and you sampled from a broad enough class of distributions, then a sample of 25 data points is not enough data to say anything substantive. A friend of yours who knows what distributions you're going to select from, though, could incorporate that knowledge into a prior and then use that to win. So I predict that for your setup there exists a Bayesian who would be able to consistently win. But if you gave much more data and sampled from a rich enough set of probability distributions that priors became hard to specify, a frequentist procedure would probably win out.
Hmm. I don't know if I'm a very random source of distributions; humans are notoriously bad at randomness, and there are only so many distributions readily available in standard libraries. But in any case, I don't see this as a difficulty; a real-world problem is under no obligation to give you an easily recognised distribution. If Bayesians do better when the distribution is unknown, good for them. And if not, tough beans. That is precisely the sort of thing we're trying to measure! I don't think, though, that the existence of a Bayesian who can win, based on knowing what distributions I'm likely to use, is a very strong statement. Similarly there exists a frequentist who can win based on watching over my shoulder when I wrote the program! You can always win by invoking special knowledge. This does not say anything about what would happen in a real-world problem, where special knowledge is not available.
You can actually simulate a tremendous number of distributions (theoretically, any of them to an arbitrary degree of accuracy) by applying an approximate inverse CDF to a standard uniform random variable (see here for an example). So the space of distributions from which you could select for your test is potentially infinite. We can then think of your selection of a probability distribution as a random experiment and model your selection process with a probability distribution.

The issue is that since the outcome space is the space of all computable probability distributions, Bayesians will have consistency problems (another good paper on the topic is here), i.e. the posterior distribution won't converge to the true distribution. So in this particular setup I think Bayesian methods are inferior unless one could devise a good prior over distributions. I suppose if I knew that you didn't know how to sample from arbitrary probability distributions, I could put that in my prior and might then be able to use Bayesian methods to successfully estimate the distribution (the discussion of the Bayesian who knew you personally was meant to be tongue-in-cheek). In the frequentist case there is a known procedure due to Parzen from the '60s.

All of these are asymptotic results, however, and your experiment seems to be focused on very small samples. To the best of my knowledge there aren't many results in this case except under special conditions. I would say that without more constraints on the experimental design I don't think you'll get very interesting results, although I am actually really in favor of such evaluations, because people in statistics and machine learning, for a variety of reasons, don't do them, or don't do them on a broad enough scale. Anyway, if you actually are interested in such things you may want to start looking here, since statistics and machine learning both have the tools to properly design such experiments.
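A sketch of the inverse-CDF trick described above: push a standard uniform draw through the quantile function. Shown here for the exponential distribution, whose inverse CDF is -ln(1 - u) / rate.

```python
import math
import random

def sample_exponential(rate):
    u = random.random()               # u ~ Uniform(0, 1)
    return -math.log(1.0 - u) / rate  # inverse CDF of Exponential(rate)

random.seed(2)
draws = [sample_exponential(rate=2.0) for _ in range(200000)]
print(sum(draws) / len(draws))  # should be near the true mean 1/rate = 0.5
```

The same recipe works for any distribution whose CDF you can invert, exactly or numerically, which is why the selectable space of test distributions is so large.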
The small samples are a constraint imposed by the limits of blog comments; there's a limit to how many numbers I would feel comfortable spamming this place with. If we got some volunteers, we might do a more serious sample size using hosted ROOT ntuples or zipping up some plain ASCII. I do know how to sample from arbitrary distributions; I should have specified that the space of distributions is those for which I don't have to think for more than a minute or so, or in other words, someone has already coded the CDF in a library I've already got installed. It's not knowledge but work that's the limiting factor. :) Presumably this limits your prior quite a lot already, there being only so many commonly used math libraries.

I think this was a great post for having both context and links and specifically (rather than generally) questioning assumptions the group hasn't visited in a while (if ever).

What does one read to become well versed in this stuff in two days; and how much skill with maths does it require?

Ouch! Now I see the two days stuff looks like boasting. Don't worry, all my LW posts up to now have contained stupid mathematical mistakes, and chances are people will find errors in this one too :-) (ETA: sure enough, Eliezer has found one. Luckily it wasn't critical.) I have a degree in math and competed at the national level in my teens (both in Russia), but haven't done any serious math since I graduated six years ago. The sources for this post were mostly Wikipedia and Google searches on keywords from Wikipedia.
My comment was an honest question and was not intended as derogatory...

I'm surprised that nobody has mentioned the Universal Prior yet. Eliezer also wrote a post on it.

... What is it that frequentists do, again? I'm a little out of touch.

Strong evidence can always defeat strong priors, and vice versa.

Is there anything more to the issue than this?

This isn't always the case if the prior puts zero probability weight on the true model. This can be avoided on finite outcome spaces, but for infinite outcome spaces no matter how much evidence you have you may not overcome the prior.
I thought that 0 and 1 were Bayesian sins, unattainable +/- infinity on the log-odds scale, and however strong your priors, you never make them that strong.
In finite-dimensional parameter spaces, sure, this makes perfect sense. But suppose that we are considering a stochastic process X1, X2, X3, ..., where Xn follows a distribution Pn over the integers. Now put a prior on the distribution, and suppose that, unbeknownst to you, Pn is the distribution that puts 1/2 probability weight on -n and 1/2 probability weight on n. If the prior on the stochastic process does not put increasing weight on integers with large absolute value, then in the limit the prior puts zero probability weight on the true distribution (and may start behaving strangely quite early on in the process). Another case is that the true probability model may be too complicated to write down, or computationally infeasible to do so (say a Gaussian mixture with 10^(10) mixture components, which is certainly reasonable in a modern high-dimensional database), so one may only consider probability distributions that approximate the true distribution and put zero weight on the true model; i.e. it would be sensible in that case to have a prior that puts zero weight on the true model and to search only for an approximation.

I didn't mean to rehabilitate frequentism! I only meant to point out that calibration is a frequentist optimality criterion, and that it's one that Bayesian posterior intervals can be proved not to have in general.

Too late. I have already updated to believe that a theory that demands priors can't be complete. Correct, maybe, but not complete. We should work out an approach that works well on more criteria instead of guarding the truth of what we already know. If Bayes were the complete answer, Jaynes wouldn't have felt the need to invent maxent or generalize the indifference principle. That may be the correct direction of inquiry. ETA: this was a response to Cyan saying he didn't mean to rehabilitate frequentism. :-)
Updated, eh? Where did your prior come from? :)
Overcoming Bias. :-)

I'd like to take advantage of frequentism's return to respectability to ask if anyone knows where I can get a copy of "An Introduction to the Bootstrap" by Efron and Tibshirani.

It's on Google books, but I don't like reading things through Google books. It's for sale on-line, but it costs a lot and shipping takes a while. My university's library is supposed to have it, but the librarians can't find it. My local library hasn't heard of it.

I hardly know any statistics or probability; I've just been borrowing bits and pieces as I need them without... (read more)

Collecting new data is not justifiable in general -- the cost of the new data may outweigh the benefit to be gained from it. But let's assume that collecting new data has a negligible cost. As a Bayesian, what you desire is the smallest loss possible. For reasonable loss functions, the smaller the region over which your distribution spreads its uncertainty (that is to say, the smaller its variance), the smaller you expect your loss to be. The law of total variance can be interpreted to say that you expect the variance of the posterior distribution to be smaller than the variance of the prior distribution.* So collect more data!

* Law of total variance: prior variance = prior expectation of posterior variance + prior variance of posterior mean. This implies that the prior variance is larger than the prior expectation of the posterior variance.
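A numerical check of the law of total variance, Var(X) = E[Var(X|Y)] + Var(E[X|Y]), with toy numbers of my own: Y picks one of two equally likely groups, and X is Gaussian within each group.

```python
import random
import statistics

random.seed(3)
groups = [(-2.0, 1.0), (3.0, 0.5)]  # (mean, sd) for Y = 0, 1

data = [[], []]
for _ in range(200000):
    y = random.randrange(2)
    mean, sd = groups[y]
    data[y].append(random.gauss(mean, sd))

all_x = data[0] + data[1]
total_var = statistics.pvariance(all_x)                  # Var(X)
within = sum(statistics.pvariance(g) for g in data) / 2  # E[Var(X|Y)]
between = statistics.pvariance(
    [statistics.fmean(g) for g in data])                 # Var(E[X|Y])

print(total_var, within + between)  # the two sides should nearly agree
```

Both decomposition terms are nonnegative, which is why the prior variance must exceed the expected posterior variance.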
So, more data is good because it makes you more confident? I guess that makes sense, but it still seems strange not to care what you're confident in.
In any real problem there is a context and some prior information. Bayes doesn't give this to you -- you give it to Bayes along with the data and turn the crank on the machinery to get the posterior. The things you're confident about are in the context.
What about changing your mind?
In theory, if you can change your mind about something, you have uncertainty about it, and your prior distribution should reflect that. In practice, you abstract the uncertainty away by making some simplifying assumptions, do the analysis conditional on your assumptions, and reserve the right to revisit the assumptions if they don't seem adequate.
I didn't mean to ask how a bayesian changes his or her mind. I meant to ask how the thing you believe in can be in the context in situations where you change your mind based on new evidence.
Let's say I'm weighing some acrylamide powder on an electronic balance. (Gonna make me some polyacrylamide gel!) The balance is so sensitive that small changes in air pressure register in the last two digits. From what I know about air pressure variations from having done this before, I create a model for the data. Also because I've done this before, I can eyeball roughly how much powder I've got on the balance; this determines my prior distribution before reading the balance. Then I observe some data from the balance readout and update my distribution.
I can't tell without more information whether that's an example of what I mean by "changing your mind." Here's one that I think definitely qualifies: Let's say you're going to bet on a coin toss. You only have a small amount of information on the coin, and you decide for whatever reason that there's a 51% chance of getting heads. So you're going to bet on heads. But then you realize that there's a way to get more data. At this point, I'm thinking, "Gee, I hardly know anything about this coin. Maybe I'm better off betting on tails and I just don't know it. I should get that data." What I think you're saying about bayesians is that a bayesian would say, "Gee, 51% isn't very high. I'd like to be at least 80% sure. Since I don't know very much yet, it wouldn't take much more to get to 80%. I should get that data so I can bet on heads with confidence." Which sort of makes sense but is also a little strange.
Technical stuff: under the standard assumption of infinite exchangeability of coin tosses, there exists some limiting relative frequency for coin toss results. (This is de Finetti's theorem.) Key point: I have a probability distribution for this relative frequency (call it f) -- not a probability of a probability. Here you've said that my probability density for f is dispersed, but slightly asymmetric. I too can say, "Well, I have an awful lot of probability mass on values of f less than 0.5. I should collect more information to tighten this up." This mixes up f on the one hand with my distribution for f on the other. I can certainly collect data until I'm 80% sure that f is bigger than 0.5 (provided that f really is bigger than 0.5). This is distinct from being 80% sure of getting heads on the next toss.
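A sketch of the distinction above, with made-up data: put a Beta posterior on the limiting frequency f. "80% sure that f > 0.5" is a statement about my distribution for f; the probability of heads on the next toss is a different number (the posterior mean of f).

```python
import math

def beta_cdf(x, a, b, steps=100000):
    """Regularized incomplete beta function via crude midpoint integration."""
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps * x
        total += t ** (a - 1) * (1 - t) ** (b - 1)
    total *= x / steps
    return total / (math.gamma(a) * math.gamma(b) / math.gamma(a + b))

heads, tails = 14, 10        # hypothetical observed tosses
a, b = 1 + heads, 1 + tails  # uniform Beta(1,1) prior -> Beta(15,11) posterior

p_f_above_half = 1.0 - beta_cdf(0.5, a, b)  # how sure I am that f > 0.5
p_next_heads = a / (a + b)                  # posterior predictive for next toss

print(round(p_f_above_half, 3), round(p_next_heads, 3))
```

With these numbers I am roughly 80% sure that f exceeds 0.5, while my probability of heads on the next toss is only about 0.58: two different claims, easily conflated.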
I guess I just don't understand the difference between bayesianism and frequentism. If I had seen your discussion of limiting relative frequency somewhere else, I would have called it frequentist. I think I'll go back to borrowing bits and pieces. (Thank you for some nice ones.)
The key difference is that a frequentist would not admit the legitimacy of a distribution for f -- the data are random, so they get a distribution, but f is fixed, although unknown. Bayesians say that quantities that are fixed but unknown get probability distributions that encode the information we have about them.

Being a frequentist who hangs out on a Bayesian forum, I've thought about the difference between the two perspectives. I think the dichotomy is analogous to bottom-up versus top-down thinking; neither one is superior to the other, but the usefulness of each waxes and wanes depending upon the current state of a scientific field. I think we need both to develop any field fully.

Possibly my understanding of the difference between a frequentist and Bayesian perspective is different than yours (I am a frequentist after all) so I will describe what I think the dif... (read more)

Counterexample: I have a Platonic view of mathematical truths, but a Bayesian view of probability. This does not make sense. For any given coin flip, either the fundamental truth is that the coin will come up heads, or the fundamental truth is that the coin will come up tails. The 50% probability represents my uncertainty about the fundamental truth, which is not a property of the coin.
That's interesting. I had imagined that people would be one way or the other about everything. Can anyone else provide data points on whether they are Platonic about only a subset of things? ... in order to triangulate closer to whether Platonism is "hard-wired": do you find it possible to be non-Platonic about mathematical truths? Can someone who is non-Platonic think about them Platonically -- is it a choice? See, that's just not the way a frequentist sees it. First, I notice you are defining "fundamental truth" as what will actually happen in the next coin flip. In contrast, it is more natural to me to think of the "fundamental truth" as what the probability of heads is, as a property of the coin and the flip, since the outcome isn't determined yet. But that's just asking different questions. So if the question is what the truth is about the outcome of the next flip, we are talking about empirical reality (an experiment) and my perspective will be more Bayesian.
The outcome is determined timelessly, by the properties of the coin-tossing setup. It hasn't happened yet. What came before the coin determines the coin, but in turn is determined by the stuff located further and further in the past from the actual coin-toss. It is a type error to speak of when the outcome is determined.
Whether or not the universe is deterministic is not determined yet. Even if you and I both think that a deterministic universe is more logical, we should accept that certain figures of speech will persist. When I said the toss wasn't determined yet, I meant that the outcome of the toss was not known yet by me. I don't see how your correction adds to the discussion except possibly to make me seem naive, like I've never considered the concept of determinism before.
Map/territory distinction. As a property of the actual coin and flip, the probability of heads is 0 or 1 (modulo some nonzero but utterly negligible quantum uncertainty); as a property of your state of knowledge, it can be 0.5.
This comment helped things come into better focus for me. A frequentist believes that there is a probability of flipping heads, as a property of the coin and (yes, certainly) the conditions of the flipping. To a frequentist, this probability is independent of whether the outcome is determined or not, and is even independent of what the outcome is. Consider the following sequence of flips: H, T, T. A frequentist believes that the probability of flipping heads was .5 all along, right? The first H, the second T, and the third T were just discrete realizations of this probability. The reason why I've been calling this a Platonic perspective is that I think the critical difference in philosophy is the frequentist idea of this non-empirical "probability" existing independent of realizations. The probability of flipping heads under a set of conditions is .5 whether you actually flip the coins or not. However, frequentists agree you must flip the coin to know that the probability was .5. You might think this perspective is wrong-headed, and from a strict empirical view where you allow no Platonic entities/concepts, it kind of is. But the question I am really interested in is the following: to what extent is this point of view a choice we can be wrong or right about, or a perspective that some (or most?) people have hard-wired in their physical brain? Further, how can you argue that it isn't useful when it demonstrably has been so useful? Perhaps it facilitates or is necessary for some categories of abstract thought.
It could be hard-wired and still be right or wrong.
Correct, generally. But how could a perspective be wrong? I can think of two ways a perspective can be wrong: either it (a) asserts a fact about external reality that is not true, or (b) yields false conclusions about the external world. (a) Frequentists don't assert anything extra about the empirical world; they assert the use of (and ostensibly the "existence" of) something symbolic. From the empiricist perspective, it's not really there. Like a little icon floating above or around the actual thing that your cursor doesn't interact with, so it can't be false in the empirical sense. (b) It would be fascinating if the frequentist perspective yielded false conclusions, and in such a case, is there any doubt that people would develop and embrace new mathematics that avoided such errors? In fact, we already see this happening where physics at extreme scales seems to defy intuition. If someone wanted to propose a new theory of everything, I don't think anyone would ever criticize it on the grounds of not being frequentist. I guess the point here is just whether it's useful or not. Later edit: OK, I finally get it. Maybe the reason we don't understand physics at the extreme scales is that the frequentist approach evolved (was hard-wired) for understanding intermediate physical scales, and it's (apparently) beginning to fail. You guys are using empirical philosophy to try to develop a brand-new mathematics that won't have these inborn errors of intuition. So while I argue that frequentism has definitely been productive so far, you argue that it is intrinsically limited on philosophical principles.
A perspective can be wrong if it arbitrarily assigns a probability of 1 to an event that has a symmetrical alternative. Read the intro to My Bayesian Enlightenment for Eliezer's description of a frequentist going wrong in this way with respect to the problem of the mathematician with two children, at least one of which is a boy.
No, Bayesian probability and orthodox statistics give exactly the same answers if the context of the problem is the same. The two schools may tend to have different ideas about what is a "natural" context, but any good textbook will always define exactly what the context is so that there is no guessing and no disagreement. Nevertheless, which event with a symmetrical alternative were you referring to? (You are given that the woman said she has at least one boy, so it would be correct to assign that probability 1 in the context of a given assumption, obviously when applying the orthodox method.) Both approaches work differently, but they both work.
Given that the woman does have a boy and a girl, what is the probability that she would state that at least one of them is a boy? By symmetry, you would expect a priori, not knowing anything about this person's preferences, that in the same conditions she is equally likely to state that at least one of her children is a girl, so to assign the conditional probability higher than .5 does not make sense, and it is definitely not right for the frequentist Eliezer was talking with to act as though the conditional probability were 1. (The case could be made that the statement is also evidence that the woman has a tendency to say at least one child is a boy rather than that at least one child is a girl. But this is a small effect, and still does not justify assigning a conditional probability of 1.) I think the frequentist approach could handle this problem if applied correctly, but it seems that frequentists in practice get it wrong because they do not even consider the conditional probability that they would observe a piece of evidence if a theory they are considering is true. If you read the article I cited, Eliezer did explain that this was a mangling of the original problem, in which the mathematician made the statement in response to a direct question, so one could reasonably approximate that she would make the statement exactly when it is true. However, life does not always present us with neat textbook problems. Sometimes, the conditional probabilities are hard to figure out. I prefer the approach that says to figure them out anyway to the one that glosses over their importance.
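How much the answer depends on the reporting protocol is easy to check by simulation (a minimal sketch; the `simulate` helper and the two protocols are my own illustration, not from the thread):

```python
import random

def simulate(trials=100_000, direct_question=True):
    """Estimate P(two boys | mother states 'at least one is a boy')."""
    both_boys = stated = 0
    for _ in range(trials):
        kids = [random.choice("BG") for _ in range(2)]
        if direct_question:
            # She truthfully answers "do you have at least one boy?"
            says_boy = "B" in kids
        else:
            # She mentions the sex of a randomly chosen child.
            says_boy = random.choice(kids) == "B"
        if says_boy:
            stated += 1
            both_boys += kids == ["B", "B"]
    return both_boys / stated

random.seed(0)
print(simulate(direct_question=True))   # near 1/3
print(simulate(direct_question=False))  # near 1/2
```

Under the direct-question protocol the answer comes out near 1/3; under the "mention a random child" protocol it comes out near 1/2, which is exactly the conditional-probability issue being argued about.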
In the "correct" formulation of the problem (the one in which the correct answer is 1/3), the frequentist treats what the mother said as a given assumption; considering the prior probability (<1) of her saying it is rendered irrelevant because we are now working in the subset of probability space where she said that. Considering whether a theory is true is science -- I completely agree science has important, necessary Bayesian elements.
Considering whether a theory is true is not science, although the two are certainly useful to each other.
Giving the "probability" of the actual outcome of the coin flip as ~1 looks like a type error, although it's clear what you are saying. It's more like P(coin is heads|coin is heads), tautologically 1, not really a probability.
Edited to clarify.
This mixes together two different kinds of probability, confusing the situation. There is nothing fuzzy about the events defining the possible outcomes, the fact that there is also indexical uncertainty imposed on your mind while it observes the outcome is from a different problem.
Yeah, it just felt like too much work to add "...randomly sampling from future Everett branches according to the Born probabilities" or the like.
My point is that most of the time decision-theoretic problems are best handled in a deterministic world.
Hence it's your uncertainty, which can as well be handled in a deterministic world. And in a deterministic world, I don't know how to parse your sentence.
Most of the time I think about math, I do not worry about whether it is Platonic or not. It was really only in the context of considering my epistemic uncertainty that 2+2=4 that I needed to consider the nature of the territory I was mapping, and in this context it did not make sense for the territory to be the physical universe. You mean, the outcome has not been determined by you, since you have not observed all the physical properties of the coin, the person flipping it, and the environment, and calculated out all the physics that would tell you whether it would land heads or tails. Attaching a probability to the coin is just our way of dealing with the ignorance and lack of computing power that prevents us from finding the exact answer.
What is your point? You reiterate the Bayesian perspective, but do you agree that frequentists and Bayesians have different perspectives about this? I think it boils down to this: you are a frequentist (and I've been using the term Platonist) if you see the 50% probability as a property of the coin and the flip, and you are a Bayesian if you see the 50% probability as just a way of measuring the uncertainty. (Given your rationale for being Platonic about mathematics, I don't know if you are really a Platonist (in the hard-wired sense).)
My point is that the view that 50% probability is a fundamental property of the coin is wrong. It is an example of the Mind Projection Fallacy, thinking that because you don't know the result, somehow the universe doesn't either. It is certainly not the case that when asked about the result of a single coin flip, giving a 50% probability for heads is the best possible answer. One could, in principle, do more investigation, and find that under the current conditions, the coin will come up heads (or tails) with 99% probability, and actually be right 99 times out of a hundred. I don't like to call this view of the probability as a fundamental property of the coin the frequentist view. It makes more sense to describe their perspective as the probability being a combined property of the coin and a distribution of conditions in which it could be flipped. From this perspective, the mistake of attaching the probability to the coin is that it misses the fact that you are flipping the coin in one particular condition, which will have a definite outcome. The probability comes from uncertainty about which condition from the distribution applies in this case, and of course, limits on computational power.
Are you saying that frequentists are wrong, or just me? If the former, how can you say that and consider the case closed when frequentists arrive at correct conclusions? What I'm suggesting is that Bayesians are committing the mind projection fallacy when they assert that frequentists are "wrong".
I am saying that you are wrong, and I am not sure there isn't more to the frequentist view than you are saying, so I am not prepared to figure out if it is right or wrong until I know more about what it is saying. Like in the Monty Hall problem, where the frequentists will agree to the correct answer after you beat them over the head with a computer simulation? Huh? What property of our minds do you think we are projecting onto the territory?
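The computer simulation alluded to here is short to write (a minimal sketch; the `monty_hall` helper is my own illustration):

```python
import random

def monty_hall(trials=100_000):
    """Compare win rates for staying vs. switching in the Monty Hall game."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the remaining unopened door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

random.seed(0)
stay, switch = monty_hall()
print(stay, switch)  # roughly 1/3 and 2/3
```

Running it shows staying wins about a third of the time and switching about two thirds, which is the kind of brute-force demonstration that tends to end the argument regardless of school.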
In the Monty Hall problem, intuition tends to insist on the wrong answer, not valid application of frequentist theory. Just curious -- is the Monty Hall solution intuitively obvious to a "Bayesian", or do they also need to work through the (Bayesian) math in order to be convinced? Oops. I meant the typical mind fallacy.
For me at least, it is not so much that the solution is intuitively obvious as that setting up the Bayesian math forces me to ask the important questions. Then how do you think we are assuming that others think like us? It seems to me that we notice that others are not thinking like us, and that in this case, the different thinking is an error. I believe that 2+2=4, and if I said that someone was wrong for claiming that 2+2=3, that would not be a typical mind fallacy.
If the conclusions about reality were different, then the 2+2=4 versus 2+2=3 analogy would hold. Instead, you are objecting to the way frequentists approach the problem. (Sometimes, the difference seems to be as subtle as just the way they describe their approach.) Unless you show that they do not as consistently arrive at the correct answer, I think that objecting to their methods is the typical mind fallacy. Asserting that frequentists are wrong is actually very non-Bayesian, because you have no evidence that the frequentist view is illogical. Only your intuition and logic guides you here. So finally, as two rationalists, we may observe a bona fide difference in what we consider intuitive, natural or logical. I'm curious about the frequency of "natural" Bayesians and frequentists in the population, and wonder about their co-evolution. I also wonder about their lack of mutual understanding.
From Probability is in the Mind: The frequentists get this exactly wrong, ruling out the only correct answer given their knowledge of the situation. The article goes on to describe scenarios in which having different partial knowledge of the situation leads to different probabilities. The frequentist perspective doesn't merely lead to the wrong answer for these scenarios, it fails to even produce a coherent analysis. Because there is no single probability attached to the event itself. The probability really is a property of the mind analyzing that event, to the extent that it is sensitive to the partial knowledge of that mind.
I like the response of Constant2: Eliezer responded with: and in the post he wrote But the frequentist does have a coherent analysis for solving this problem. Because we're not actually interested in the long-term probability of flipping heads (of which all anyone can say is that it is not .5) but in the expected outcome of a single flip of a biased coin. This is an expected value calculation, and I'll even apply your idea about events with symmetric alternatives. (So I do not have to make any assumptions about the shape of the distribution of possible biases.) I will calculate my expected value using the assumption that the coin is biased towards heads or biased towards tails with equal probability. Let p be the probability that the coin lands on its biased orientation (i.e., p > .5).

* With probability 0.5, the coin is biased toward heads: the probability of heads is p and the probability of tails is (1-p).
* With probability 0.5, the coin is biased toward tails: the probability of heads is (1-p) and the probability of tails is p.

Thus, the expected probability of heads is p*0.5 + (1-p)*0.5 = 0.5. So there's no befuddlement, only a change in random variables from the long-term expectation of the outcome of many flips to the long-term expectation of whether heads or tails is preferred and a single flip. Which we should expect, since the random variable we are really being asked about has changed with the different contexts.
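The expected value calculation above can be sanity-checked by simulation (a sketch; `p=0.8` is an arbitrary illustrative bias, not from the comment):

```python
import random

def single_flip_heads_rate(p=0.8, trials=100_000):
    """Each trial: the coin is biased toward heads or toward tails
    with equal probability (bias strength p > .5), then flipped once."""
    heads = 0
    for _ in range(trials):
        p_heads = p if random.random() < 0.5 else 1 - p
        heads += random.random() < p_heads
    return heads / trials

random.seed(0)
print(single_flip_heads_rate())  # near 0.5, whatever p is
```

The long-run heads rate of single flips comes out near 0.5 for any bias strength p, matching p*0.5 + (1-p)*0.5 = 0.5.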
You just pushed aside your notion of an objective probability and calculated a subjective probability reflecting your partial information. Congratulations, you are a Bayesian.
I applied completely orthodox frequentist probability. I had predicted your objection would be that expected value is an application of Bayes' theorem, but I was prepared to argue that orthodox probability does include Bayes' theorem. It is one of the pillars of any introductory probability textbook. A problem isn't "Bayesian" or "frequentist". The approach is. Frequentists take the priors as given assumptions. The assumptions are incorporated at the beginning as part of the context of the problem, and we know the objective solution depends upon (and is defined within) a given context. A Bayesian, in contrast, has a different perspective and doesn't require formalizing the priors as given assumptions. Apparently they are comfortable with asserting that the priors are "subjective". As a frequentist, I would have to say that the problem is ill-posed (or under-determined) to the extent that the priors/assumptions are really subjective. Suppose that I tell you I am going to pick up a card randomly and will ask you the probability of whether it is the ace of hearts. Your correct answer would be 1/52, even if I look at the card myself and know with probability 0 or 1 that the card is the ace of hearts. Frequentists have no problem with this "subjectivity"; they understand it as different probabilities for different contexts. This is mainly a response to this comment, but is relevant here. Yet again, the misunderstanding has arisen because of not understanding what is meant by the probability being "in" the cards. In this way, Bayesians interpret the frequentist's language too literally. But what does a frequentist actually mean? Just that the probability is objective? But the objectivity results from the preferred way of framing the problem ... I'm willing to consider and have suggested the possibility that this "Platonic probability" is an artifact of a thought process that the frequentist experiences empirically (but mentally).
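The card example can be read as "different probabilities for different contexts" quite literally (a sketch; the `told_suit_is_hearts` context is my own added illustration of conditioning, not part of the original example):

```python
import random

def p_ace_of_hearts(told_suit_is_hearts=False, trials=200_000):
    """Estimate P(card is the ace of hearts), optionally in the
    context where you are told the drawn card is a heart."""
    deck = [(rank, suit) for rank in range(1, 14) for suit in "SHDC"]
    hits = total = 0
    for _ in range(trials):
        card = random.choice(deck)
        if told_suit_is_hearts and card[1] != "H":
            continue  # outside the conditioning context; discard the trial
        total += 1
        hits += card == (1, "H")  # ace of hearts
    return hits / total

random.seed(0)
print(p_ace_of_hearts())                          # near 1/52
print(p_ace_of_hearts(told_suit_is_hearts=True))  # near 1/13
```

Same card, two well-posed long-run frequencies: which one is "the" probability depends entirely on the stated context, which is the point being made.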
I'm Platonistic in general I suppose, but I see Bayesianism as subjectively objective as a Platonistic truth.
I am a Platonist about mathematics by inclination, though I strongly suspect that this inclination is one that I should resist taking too seriously. I am a Bayesian about probability (at least in the following sense: it seems to me that the Bayesian approach subsumes the others, when they are applied correctly). I am mostly Bayesian about statistics, but don't see any reason why you shouldn't compute confidence intervals and unbiased estimators if you want to. I don't think "Platonist" and "frequentist" are at all the same thing, so I don't see any of the above as indicating that I'm (inclined to be) Platonist about some things but not about others. This seems to have prompted a debate about whether The Fundamental Truth is one about the general propensities of the coin, or one about what will happen the next time it's flipped. I don't see why there should be exactly one Fundamental Truth about the coin; I'd have thought there would be either none or many depending on what sort of thing one wishes to count as a "fundamental truth". Anyway: imagine a precision robot coin-flipper. I hope it's clear that with such a device one could arrange that the next million flips of the coin all come up heads, and then melt it down. So whatever "fundamental truth" there might be about What The Coin Will Do has to be relative to some model of what's going to be done to it. The point of coin-flipping is that it's a sort of randomness magnifier: small variations in what you do to it make bigger differences to what it does, so a small patch of possibility-space gets turned into a somewhat-uniform sampling of a larger patch (caution: Liouville, volume conservation, etc.). And the "fundamental truth" about the coin that you're appealing to is that, plus what it implies about its ability to turn kinda-sorta-slightly-random-ish coin flipping actions into much more random-ish outcomes. To turn that into an actual expectation of (more or less) independent p=1/2 Bernoulli trials, you need t
As a property of the coin and the flip and the environment and the laws of physics, the probability of heads is either 0 or 1. Just because you haven't computed it doesn't mean the answer becomes a superposition of what you might compute, or something. What you want is something like the result of taking a natural generalization of the exact situation - if the universe is continuous and the system is chaotic enough "round to some precision" works - and then computing the answer in this parameterized space of situations, and then averaging over the parameter. The problem is that "natural generalization" is pretty hard to define.
Being a Platonist and a frequentist aren't the same thing, but they correlate because they're both errors in thinking. The objection to frequentism is that it builds the answer into the solution, so the problem actually changes from the original real-world problem. This is fine as long as you can test discrepancies between theory and practice, but that's not always going to be possible.
"A Bayesian, in contrast, believes that the realization is the primary thing ... the flipping of the coin yields the property of having 50% probability of coming up heads as you flip it." Thanks for trying to explain the difference, but I have no idea what this means.
What I was thinking about was this: Bayesians and frequentists both agree that if a fair coin is tossed n times (where n is very large) then a string of heads and tails will result, and the .5 probability of heads is in some way related to the fact that the number of heads divided by n will approach .5 for large n. In my mind, the frequentist perspective is that the .5 probability of getting heads exists first, and then the string of heads and tails realizes (i.e., makes a physical manifestation of) this abstract probability lurking in the background. As though there is a bin of heads and tails somewhere with exactly a 1:1 ratio and each flip picks randomly from this bin. The Bayesian perspective is that there is nothing but the string of heads and tails -- only the string exists; there's no abstract probability that the string is a realization of. No picking from a bin in the sky. Inspecting the string, a Bayesian can calculate the 0.5 probability ... so the 0.5 probability results from the string. So according to me, the philosophical debate boils down to: what comes first, the probability or the string? I definitely get the impression that the Bayesians in this thread are skeptical of this description of the difference, and seem to prefer describing the Bayesian view as considering probability a measure of your uncertainty. However, probability is also taught as a measure of uncertainty in classical probability, so I'm skeptical of this dichotomy. (In favor of my view, the name "frequentist" comes from the observation that they believe in a notion of "frequency" -- i.e., that there's a hypothetical distribution "out there" that observed data is being sampled from.) Perhaps the difference in whether the correct approach is subjective or objective better gets to the heart of the difference. I am leaning towards this hypothesis because I can see how a frequentist can confuse something being objective with that something having an independent "existence".
I have a little difficulty with the notion that the probable outcome of a coin toss is the result of the toss, rather like the collapse of a quantum probability into reality when observed. Looking at the coin before the toss, surely three probabilities may be objectively observed -- H, T or E -- and the likelihood of the coin coming to rest on its edge dismissed. Since the coin MUST then end up H or T, the sum of both probabilities is 1; both outcomes are a priori equally likely and have the value 1/2 before the toss. Whether one chooses to believe that the a priori probabilities have actual existence is a metaphysical issue.