
Or, I suppose, I would compare it to the other noted statistical paradox (Simpson's paradox), whereby a famous hospital has a better survival rate for both mild and severe cases of a disease than a less-noted hospital, but a worse overall survival rate because it sees more of the worst cases. That people don't understand how to take averages has little to do with probabilities requiring an agent.

> Now, if at least one child is a boy, it must be either the oldest child who is a boy, or the youngest child who is a boy. So how can the answer in the first case be different from the answer in the latter two?

Because they obviously aren't exclusive cases. I simply don't see mathematically why it's a paradox, so I don't see what this has to do with thinking that "probabilities are a property of things."

The "paradox" is that people want to compare it to a different problem, the problem where the cards are ordered. In that case, if you ask "Is your first card an ace," "Is your first card the ace of hearts," or "Is your first card the ace of spades," then there is the same probability of 1/3 in all three cases that both cards are aces given an answer "Yes." In that case the averaging makes sense because the cases are exclusive. In the "paradox," you can't average by saying that, "well, if there's one it's either the Ace of Spades or the Ace of Hearts, and in either case the answer would be 1/3, so it averages to 1/3." The problem is that you're double-counting.
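The distinction can be checked by brute enumeration. Here is a minimal sketch using a toy four-card deck (an illustrative assumption; the classic puzzle uses a full deck), which reproduces the 1/3 answers for the ordered questions and shows why "at least one ace" does not simply average to 1/3:

```python
from itertools import combinations, permutations

# Toy deck (an illustrative assumption): two aces and two non-aces.
deck = ["AS", "AH", "2S", "2H"]

def cond_prob(hands, event, given):
    """P(event | given) by counting equally likely hands."""
    g = [h for h in hands if given(h)]
    return sum(event(h) for h in g) / len(g)

both_aces = lambda h: "AS" in h and "AH" in h

# Ordered case: the questions all give 1/3, so averaging over them is harmless.
ordered = list(permutations(deck, 2))
print(cond_prob(ordered, both_aces, lambda h: h[0] in ("AS", "AH")))  # 1/3
print(cond_prob(ordered, both_aces, lambda h: h[0] == "AS"))          # 1/3

# Unordered case: "at least one ace" is NOT the average of the two
# specific-ace cases, because the hand {AS, AH} satisfies both conditions
# and gets double-counted by the naive averaging argument.
unordered = list(combinations(deck, 2))
print(cond_prob(unordered, both_aces, lambda h: "AS" in h))                # 1/3
print(cond_prob(unordered, both_aces, lambda h: "AS" in h or "AH" in h))  # 1/5
```

Nothing here depends on an agent: fix the sample space and the conditioning events, and the numbers fall out.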

I'm a Bayesian, but I don't see what this particular example has to do with subjectivity and agents. Probability is a result of the measure and the universe one is dealing with, and that may lead to results that seem unintuitive to those who don't grasp the mathematical principles (that seem obvious to me), but that has nothing to do with needing an agent. Define the measure space as you have done, claim that the probabilities are cold hard inherent facts about the objects themselves, and the result is independent of an agent.

This "paradox" seems to me on the same level as the confusion over why the chance of rolling a 6 in three rolls of a die is not 1/2, or the problem that if one takes an outbound trip averaging 30 mph, then it is impossible to make the inbound trip so as to average 60 mph overall without teleporting instantaneously.
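Both confusions dissolve with a line or two of arithmetic; a quick sketch:

```python
# Chance of at least one six in three rolls: 1 - (5/6)^3, not 1/2.
p_six = 1 - (5 / 6) ** 3
print(p_six)  # ≈ 0.421

# Averaging 60 mph over a round trip whose outbound leg averaged 30 mph:
# a leg of length d allows a total time of 2d/60 = d/30 hours, which the
# outbound leg alone has already used up.
d = 1.0
time_allowed = 2 * d / 60
time_used_outbound = d / 30
print(time_allowed - time_used_outbound)  # 0.0 hours left for the return trip
```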

Sorry, posted too soon. I'm a little confused because you said that you rejected coherentist views of truth, but most mathematical empiricists these days use the idea of coherence to justify mathematics. (Mathematics is necessary for these scientific theories; these theories must be taken as a whole; therefore there is reason to accept mathematics, to grossly simplify.)

Are you also an empiricist in mathematics, akin to Quine and Putnam?

> have you ever actually seen an infinite set?

Wait, are you a finitist or an intuitionist when it comes to the philosophy of mathematics? I don't think I've ever met one in person before.

Clearly you have to deal with infinite sets in order to apply Bayesian probability theory. So do you deal with mathematics as some sort of dualism where infinite sets are allowed so long as you aren't referring to the real world? Or do you use them as a sort of accounting fiction, assuming you're really dealing with limits of finite things, because it makes the math and concepts easier?

Do you believe in the Axiom of Choice? Would the Banach-Tarski paradox make you less likely to?

Does the two envelopes problem make you less likely to believe the Bayesian theory of probability?

Can you justify your acceptance of the Bayesian theory of probability or the other mathematical axioms to which you hold through pure evidence?

Does it bother you that (as Gödel showed) no effectively axiomatized theory which contains elementary arithmetic (addition and multiplication of the natural numbers) can be both consistent and complete, and that no consistent such theory can prove its own consistency? Does this evidence cause you to reject elementary arithmetic, based on the importance of consistency, rational logic, and the need for all true statements to be provable?

To add to the comment about gambling: professional gamblers are well aware of the term "Dutch book," if not necessarily of "arbitrage" (though the latter is becoming more commonly used).

Sorry, ambiguous wording. 0.05 is too weak, and should be replaced with, say, 0.005. It would be a better scientific investment to do fewer studies with twice as many subjects and have nearly all the reported results be replicable. Unfortunately, this change has to be standardized within a field, because otherwise you're deliberately handicapping yourself in an arms race.

Ah, yes, I see. I understand and lean instinctively towards agreeing. Certainly I agree about the standardization problem. I think it's rather difficult to determine what the best number is, though. 0.005 is just as pulled out of a hat as Fisher's 0.05.

From your "A Technical Explanation of Technical Explanation":

> Similarly, I wonder how many betters on horse races realize that you don't win by betting on the horse you think will win the race, but by betting on horses whose payoffs exceed what you think are the odds. But then, statistical thinkers that sophisticated would probably not bet on horse races.

Now I know that you aren't familiar with gambling. The latter is precisely what professional gamblers do, and some of them do bet on horse races or sports. Professional gamblers, unlike the amateurs, are sophisticated statistical thinkers. (And horse races are acceptable to sophisticated gamblers because only a small vigorish is involved, and there's plenty of room for specialized knowledge.)

I think you've made a common statistical fallacy. Perhaps "someone who bets on horse races is probably not a sophisticated statistical thinker." But it does not necessarily follow that "someone who is a sophisticated statistical thinker probably does not bet on horse races." Bayes's Theorem, my man. :)
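A toy calculation with invented numbers (labeled as such; nothing here is real data about gamblers) shows how the two conditional probabilities come apart:

```python
# Made-up counts for a population of 1,000,000 (purely illustrative).
soph = 1_000            # sophisticated statistical thinkers are rare
soph_bettors = 800      # ...but suppose 80% of them bet on horses
other_bettors = 99_900  # while 10% of the other 999,000 people bet too

p_soph_given_bets = soph_bettors / (soph_bettors + other_bettors)
p_bets_given_soph = soph_bettors / soph
print(p_soph_given_bets)  # ≈ 0.008: a random bettor is almost surely unsophisticated
print(p_bets_given_soph)  # 0.8: yet a sophisticated thinker probably does bet
```

Because the base rates differ so much, P(sophisticated | bets) can be tiny even while P(bets | sophisticated) is large.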

I know plenty of math Ph.D.s and grad students who do gamble online and look for arbitrage in a variety of ways. Whether they're representative I don't know.

I consider myself a 'Bayesian wannabe' and my favorite author thereon is E. T. Jaynes.

Ah, well then I agree with you. However, I'm interested in how you reconcile your philosophical belief as a subjectivist about probability with the remainder of this post. Of course, as a mathematician, I find arguments based on the idea of rejecting arbitrary axioms inherently less impressive than some other scientists might. After all, most of us believe in the Axiom of Choice for some reason like that the proofs needing it are too beautiful not to be true; this despite the Banach-Tarski paradox, and despite knowing that it is logically independent of the other axioms of Zermelo-Fraenkel set theory.

> it is demonstrably too high

Hmm. I lean towards agreeing that it may be too high, but at the same time a lower standard would introduce problems as well. In particular, one silly problem comes from testing many relationships at the same time and then inevitably finding (by random chance alone) that one is "significant," which is another thing many scientists are not aware of, particularly when doing demographic studies. I shudder at the idea of ridiculous demographic data dredging and multiple comparisons becoming even more widespread.
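The multiple-comparisons problem is easy to quantify; a minimal sketch:

```python
# With m independent tests of true null hypotheses at level alpha, the chance
# of at least one spurious "significant" result is 1 - (1 - alpha)^m.
def p_false_positive(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

for m in (1, 10, 20, 100):
    print(m, round(p_false_positive(m), 3))
# Twenty comparisons already give roughly a 64% chance of a false positive.
```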

That said, I, being largely a Bayesian, question the entire concept of null hypotheses. If you are truly "vehemently denying that the posterior probability following an experiment should depend on whether Alice decided ahead of time to conduct 12 trials or decided to conduct trials until 3 successes were achieved," then you must logically reject the entire concept of point-hypothesis testing, not merely hold that the threshold is arbitrary or too high, and favor something like the Bayes factor.
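As a sketch of what the Bayes-factor alternative looks like for Alice's data (3 successes in 12 trials), assuming, purely for illustration, a uniform prior on p under the alternative:

```python
from fractions import Fraction
from math import comb

# Bayes factor for H0: p = 1/2 against a uniform prior on p (the uniform
# alternative is an illustrative assumption), for 3 successes in 12 trials.
s, n = 3, 12
like_null = Fraction(comb(n, s), 2**n)
# Marginal likelihood under a uniform prior: C(n,s) * B(s+1, n-s+1) = 1/(n+1).
like_alt = Fraction(1, n + 1)
bf01 = like_null / like_alt
print(float(bf01))  # ≈ 0.70: weak evidence against the point null
# Unlike a p-value, this is the same whichever stopping rule Alice used,
# since the two sampling models give likelihoods proportional in p.
```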

Of course, it's hard for any of us to be completely consistent in our statistical tests or even understand them all or understand all the completely arbitrary axioms that go into our reasoning.

> Probability theory still applies.

Ah, but which probability theory? Bayesian or frequentist? Or the ideas of Fisher?

How do you feel about the likelihood principle? The Behrens-Fisher problem, particularly when the variances are unknown and not assumed to be equal? The test of a sharp (or point) null hypothesis?

It does no good to assume that one's statistics and probability theory are not built on axioms themselves. I have rarely met a probabilist or statistician whose answer about whether he or she believes in the likelihood principle or in the logically contradicted significance tests (or in various solutions of the Behrens-Fisher problem) does not depend on some sort of axiom or idea of what simply "seems right." Of course, there are plenty of scientists who use mutually contradictory statistical tests, depending on what they're doing.

> A calculated probability of 0.0000001 should diminish the emotional strength of any anticipation, positive or negative, by a factor of ten million.

And there go Walter Mitty and Calvin, then. If it is justifiable to enjoy art or sport, why is it not justifiable to enjoy gambling for its own sake?

> if the results are significant at the 0.05 confidence level. Now this is not just a ritualized tradition. This is not a point of arbitrary etiquette like using the correct fork for salad.

The use of the 0.05 confidence level is itself a point of arbitrary etiquette. The idea that two nearly identical results, one barely meeting the arbitrary 0.05 confidence level and the other barely missing it, can be separated into the categories "significant" and "not significant" is indeed a ritualized tradition, and one perhaps not understood by many scientists. There are important reasons for having an arbitrary point to mark significance, and for having that custom be the same throughout science (rather than chosen by each experimenter). But the actual point is arbitrary etiquette.

The commonality of utensils or traffic signals in a culture is important, even though the specific forms that they take are arbitrary. The exact confidence level used is arbitrary; it's important that there is a standard.

> Nor is Bayes's Theorem different from one place to another.

No, but the statistical concept of "confidence" depends on how an experimenter thinks that a study was designed. See for example this discussion of the likelihood principle.

If Alice conducts 12 trials with 3 successes and 9 failures, do we reject the null hypothesis p = .5 versus p < .5 at the 0.05 confidence level? It turns out that the answer depends in the classical frequentist sense on whether Alice decided ahead of time to conduct 12 trials or decided to conduct trials until 3 successes were achieved. What if Alice drops dead after recording the results of the trials but not the setup? Then Bob and Chuck, finding the notebook, may disagree about significance. The "significance" depends on the design of the experiment rather than the results alone, according to classical methods.
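The two verdicts can be checked directly; a minimal sketch of the classical calculation under each design:

```python
from math import comb

# Alice's data: 3 successes, 9 failures; null p = 0.5, alternative p < 0.5.

# Design 1: n = 12 trials fixed in advance (binomial).
# p-value = P(at most 3 successes in 12 trials | p = 0.5)
p_binomial = sum(comb(12, k) for k in range(4)) / 2**12
print(p_binomial)  # 299/4096 ≈ 0.073: not significant at 0.05

# Design 2: sample until 3 successes (negative binomial), reached on trial 12.
# p-value = P(12 or more trials needed | p = 0.5)
#         = P(at most 2 successes in the first 11 trials)
p_negative_binomial = sum(comb(11, k) for k in range(3)) / 2**11
print(p_negative_binomial)  # 67/2048 ≈ 0.033: significant at 0.05
# Same data, proportional likelihoods, opposite verdicts.
```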

How many scientists understand that?