One, two, and three are all integers, and so is negative four. If you keep counting up, or keep counting down, you’re bound to encounter a whole lot more integers. You will not, however, encounter anything called “positive infinity” or “negative infinity,” so these are not integers.

    Positive and negative infinity are not integers, but rather special symbols for talking about the behavior of integers. People sometimes say something like, “5 + infinity = infinity,” because if you start at 5 and keep counting up without ever stopping, you’ll get higher and higher numbers without limit. But it doesn’t follow from this that “infinity - infinity = 5.” You can’t count up from 0 without ever stopping, and then count down without ever stopping, and then find yourself at 5 when you’re done.

    From this we can see that infinity is not only not-an-integer, it doesn’t even behave like an integer. If you unwisely try to mix up infinities with integers, you’ll need all sorts of special new inconsistent-seeming behaviors which you don’t need for 1, 2, 3 and other actual integers.

    Even though infinity isn’t an integer, you don’t have to worry about being left at a loss for numbers. Although people have seen five sheep, millions of grains of sand, and septillions of atoms, no one has ever counted an infinity of anything. The same with continuous quantities—people have measured dust specks a millimeter across, animals a meter across, cities kilometers across, and galaxies thousands of lightyears across, but no one has ever measured anything an infinity across. In the real world, you don’t need a whole lot of infinity.1

    In the usual way of writing probabilities, probabilities are between 0 and 1. A coin might have a probability of 0.5 of coming up tails, or the weatherman might assign probability 0.9 to rain tomorrow.

    This isn’t the only way of writing probabilities, though. For example, you can transform probabilities into odds via the transformation O = (P/(1 - P)). So a probability of 50% would go to odds of 0.5/0.5 or 1, usually written 1:1, while a probability of 0.9 would go to odds of 0.9/0.1 or 9, usually written 9:1. To take odds back to probabilities you use P = (O/(1 + O)), and this is perfectly reversible, so the transformation is an isomorphism—a two-way reversible mapping. Thus, probabilities and odds are isomorphic, and you can use one or the other according to convenience.
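    As a minimal sketch in Python (the helper names are mine, not from the post), the two transformations and their round trip look like this:

```python
def prob_to_odds(p):
    """O = P / (1 - P), for P strictly between 0 and 1."""
    return p / (1 - p)

def odds_to_prob(o):
    """P = O / (1 + O), the inverse transformation."""
    return o / (1 + o)

# A probability of 0.9 corresponds to odds of 9 (i.e. 9:1),
# and the mapping is reversible up to floating-point error:
print(prob_to_odds(0.9))
print(odds_to_prob(prob_to_odds(0.9)))
```

    Note that prob_to_odds(1.0) would divide by zero: probability 1 maps to no finite odds ratio, which is exactly the point made below.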

    For example, it’s more convenient to use odds when you’re doing Bayesian updates. Let’s say that I roll a six-sided die: If any face except 1 comes up, there’s a 10% chance of hearing a bell, but if the face 1 comes up, there’s a 20% chance of hearing the bell. Now I roll the die, and hear a bell. What are the odds that the face showing is 1? Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20) and the likelihood ratio is 0.2:0.1 (corresponding to the real number 2) and I can just multiply these two together to get the posterior odds 2:5 (corresponding to the real number 2/5 or 0.40). Then I convert back into a probability, if I like, and get (0.4/1.4) = 2/7 = ~29%.
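    The die example can be checked numerically; this is just the arithmetic from the paragraph above, written out:

```python
prior_odds = 1 / 5             # odds 1:5 that the face showing is 1
likelihood_ratio = 0.2 / 0.1   # the bell is twice as likely if the face is 1
posterior_odds = prior_odds * likelihood_ratio        # 2:5, i.e. 0.4
posterior_prob = posterior_odds / (1 + posterior_odds)  # 2/7, about 0.29
print(posterior_prob)
```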

    So odds are more manageable for Bayesian updates—if you use probabilities, you’ve got to deploy Bayes’s Theorem in its complicated version. But probabilities are more convenient for answering questions like “If I roll a six-sided die, what’s the chance of seeing a number from 1 to 4?” You can add up the probabilities of 1/6 for each side and get 4/6, but you can’t add up the odds ratios of 0.2 for each side and get an odds ratio of 0.8.

    Why am I saying all this? To show that “odds ratios” are just as legitimate a way of mapping uncertainties onto real numbers as “probabilities.” Odds ratios are more convenient for some operations, probabilities are more convenient for others. A famous proof called Cox’s Theorem (plus various extensions and refinements thereof) shows that all ways of representing uncertainties that obey some reasonable-sounding constraints end up isomorphic to each other.

    Why does it matter that odds ratios are just as legitimate as probabilities? Probabilities as ordinarily written are between 0 and 1, and both 0 and 1 look like they ought to be readily reachable quantities—it’s easy to see 1 zebra or 0 unicorns. But when you transform probabilities onto odds ratios, 0 goes to 0, but 1 goes to positive infinity. Now absolute truth doesn’t look like it should be so easy to reach.

    A representation that makes it even simpler to do Bayesian updates is the log odds—this is how E. T. Jaynes recommended thinking about probabilities. For example, let’s say that the prior probability of a proposition is 0.0001—this corresponds to a log odds of around -40 decibels. Then you see evidence that seems 100 times more likely if the proposition is true than if it is false. This is 20 decibels of evidence. So the posterior odds are around -40 dB + 20 dB = -20 dB, that is, the posterior probability is ~0.01.
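    The same update can be sketched in log odds (decibels here meaning 10 × log base 10 of the odds; the function names are mine):

```python
import math

def prob_to_decibels(p):
    """Log odds in decibels: 10 * log10(P / (1 - P))."""
    return 10 * math.log10(p / (1 - p))

def decibels_to_prob(db):
    """Invert: recover the odds, then the probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

prior_db = prob_to_decibels(0.0001)   # about -40 dB
evidence_db = 10 * math.log10(100)    # a 100:1 likelihood ratio = 20 dB
posterior = decibels_to_prob(prior_db + evidence_db)
print(posterior)                      # about 0.0099
```

    Multiplying odds becomes adding decibels, which is why Jaynes liked this representation for accumulating evidence.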

    When you transform probabilities to log odds, 0 goes to negative infinity and 1 goes to positive infinity. Now both infinite certainty and infinite improbability seem a bit more out-of-reach.

    In probabilities, 0.9999 and 0.99999 seem to be only 0.00009 apart, so that 0.502 is much further away from 0.503 than 0.9999 is from 0.99999. To get to probability 1 from probability 0.99999, it seems like you should need to travel a distance of merely 0.00001.

    But when you transform to odds ratios, 0.502 and 0.503 go to 1.008 and 1.012, and 0.9999 and 0.99999 go to 9,999 and 99,999. And when you transform to log odds, 0.502 and 0.503 go to 0.03 decibels and 0.05 decibels, but 0.9999 and 0.99999 go to 40 decibels and 50 decibels.
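    These figures are easy to reproduce with a short loop, assuming the same base-10 decibel convention as above:

```python
import math

for p in (0.502, 0.503, 0.9999, 0.99999):
    odds = p / (1 - p)
    db = 10 * math.log10(odds)
    print(f"p = {p}: odds = {odds:.4g}, log odds = {db:.2f} dB")
```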

    When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence.

    Using the log odds exposes the fact that reaching infinite certainty requires infinitely strong evidence, just as infinite absurdity requires infinitely strong counterevidence.

    Furthermore, all sorts of standard theorems in probability have special cases if you try to plug 1s or 0s into them—like what happens if you try to do a Bayesian update on an observation to which you assigned probability 0.

    So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers.

    The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1.

    However, in the real world, when you roll a die, it doesn’t literally have infinite certainty of coming up some number between 1 and 6. The die might land on its edge; or get struck by a meteor; or the Dark Lords of the Matrix might reach in and write “37” on one side.

    If you made a magical symbol to stand for “all possibilities I haven’t considered,” then you could marginalize over the events including this magical symbol, and arrive at a magical symbol “T” that stands for infinite certainty.

    But I would rather ask whether there’s some way to derive a theorem without using magic symbols with special behaviors. That would be more elegant. Just as there are mathematicians who refuse to believe in the law of the excluded middle or infinite sets, I would like to be a probability theorist who doesn’t believe in absolute certainty.


    1. I should note for the more sophisticated reader that they do not need to write me with elaborate explanations of, say, the difference between ordinal numbers and cardinal numbers. I’m familiar with the different set-theoretic notions of infinity, but I don’t see a good use for them in probability theory.

    149 comments

    hmm... I feel even more confident about the existence of probability-zero statements than I feel about the existence of probability-1 statements. Because not only do we have logical contradictions, but we also have incoherent statements (like Husserl's "the green is either").

    Can one form subjective probabilities over the truth of "the green is either" at all? I don't think so, but I remember a some-months-ago suggestion of Robin's about "impossible possible worlds," which might also imply the ability to form probability esti... (read more)

    5 · [anonymous] · 13y
    If a statement is logically inconsistent with itself, it should not be part of your hypothesis space, and thus should not be assigned a probability at all.

    If you assign 0 to logical contradictions, you should assign 1 to the negations of logical contradictions. (Particularly since your confidence in bivalence and the power of negation is what allowed you to doubt the truth of the contradiction in the first place.) So it's strange to say that you feel safer appealing to 0s than to 1s.

    For my part, I have a hard time convincing myself that there's simply no (epistemic) chance that Graham Priest is right. On the other hand, assigning any value but 1 to the sentence "All bachelors are bachelors" just seems perverse. It seems as though I could only get that sentence wrong if I misunderstand it. But what am I assigning a probability to, if not the truth of the sentence as I understand it?

    Another way of saying this is that I feel queasy assigning a nonzero probability to "Not all bachelors are bachelors," (i.e., ¬(p → p)) even though I think it probably makes some sense to entertain as a vanishingly small possibility "All bachelors are non-bachelors" (i.e., p → ¬p, all bachelors are contradictory objects).

    One answer would be that an incoherent proposition is not a proposition, and so doesn't have any probability (not even zero, if zero is a probability.)

    Another answer would be that there is some probability that you are wrong that the proposition is incoherent (you might be forgetting your knowledge of English), and therefore also some probability that "the green is either" is both coherent and true.

    It's difficult to assign probability to incoherent statements, because since we can't mean anything by them, we can't assert a referent to the statement -- in that sense, the probability is indeterminate (additionally, one could easily imagine a language in which a statement such as "the green is either" has a perfectly coherent meaning -- and we can't say that's not what we meant, since we didn't mean anything). Recall also that each probability zero statement implies a probability one statement by its denial and vice versa, so one is equally capable of imagining them, if in a contrived way.

    2 · Baruta07 · 11y
    Putting this in a slightly more coherent way (I was having some trouble understanding the explanation, so I broke it down into layman's terms, which might make it more easily understandable): if I assign P(0) to "Green is either," then I assign P(1) to the statement "Green is not either." If you assign absolute certainty to any one statement, you are by definition assigning absolute impossibility to all other possibilities.

    j.edwards, I think your last sentence convinced me to withdraw the objection -- I can't very well assign a probability of 1 to ~"the green is either" can I? Good point, thanks.

    Probabilities of 0 and 1 are perhaps more like the perfectly massless, perfectly inelastic rods we learn about in high school physics - they are useful as part of an idealized model which is often sufficient to accurately predict real-world events, but we know that they are idealizations that will never be seen in real life.

    However, I think we can assign the primeness of 7 a value of "so close to 1 that there's no point in worrying about it".

    0 · thrawnca · 8y
    Perhaps the only appropriate uses for probability 0 and 1 are to refer to logical contradictions (eg P & !P) and tautologies (P -> P), rather than real-world probabilities?

    In stark contrast to this time last week, I now internally believe the title of this post.

    I did enjoy "something, somewhere, is having this thought," Paul, despite all its inherent messiness.

    'Green is either' doesn't tell us much. As far as we know it's a nonsensical statement, but I think that makes it more believable than 'green is purple', which makes sense, but seems extremely wrong. You might as well try to assign a probability to 'flarg is nardle'. I can demonstrate that green isn't purple, but not that green isn't either, nor that flarg is... (read more)

    I think you can still have probabilities sum to 1: probability 1 would be the theoretical limit of probability reaching infinite certitude. Just like you can integrate over the entire real line, i.e. -∞ to ∞, even though those numbers don't actually exist.

    I didn't get it.

    Easy: it's a demonstration of how you can never be certain that you haven't made an error even on the things you're really sure about.

    It's a cheap, dirty demonstration, but one nevertheless.

    Cumulant - can you state, with infinite certainty, that no-one will ever run faster than light?

    1 · [anonymous] · 12y
    By the current model it is impossible for anything to move faster than light*, but what is your confidence in the current model? Certainly high, but not infinite. Let's not mix up the map and the territory. As for running faster than light: certainly unlikely, but not infinitely so. If you define something as impossible in some model, and given that you want a probability within that model, or given that model, I don't know what happens, however... *With certain complications.
    3 · pnrjulius · 12y
    Well, it does seem like someone who travels back in time to reach the finish before he got there has... not actually followed the rules of the 100-meter dash.

    Another way to think about probabilities of 0 and 1 is in terms of code length.

    Shannon told us that if we know the probability distribution of a stream of symbols, then the optimal code length for a symbol X is: l(X) = -log p(X)

    If you consider that an event has zero probability, then there's no point in assigning a code to it (codespace is a conserved quantity, so if you want to get short codes you can't waste space on events that never happen). But if you think the event has zero probability, and then it happens, you've got a problem - system crash or som... (read more)
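    The code-length relation the comment appeals to is easy to sketch (using base-2 logarithms, so lengths come out in bits):

```python
import math

def optimal_code_length(p):
    """Shannon-optimal code length, in bits, for a symbol of probability p."""
    return -math.log2(p)

print(optimal_code_length(0.5))   # 1 bit
print(optimal_code_length(0.25))  # 2 bits
# As p -> 0 the required length grows without bound, and at p = 0
# there is no code at all -- mirroring the log-odds picture above.
```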


    Brent,

    From what I understood on reading the Wikipedia article on Bayesian probability and inferring from how he writes (and correct me if I'm wrong), Eliezer is talking about your "subjective probability." You are a being, have consciousness, and interpret input as information. Given a lot of this information, you've formed an idea that 7 is prime. You've also formed an idea that other people exist, and that the sky is blue, which also have a high subjective probability in your mind because you have a lot of direct information to sustain that ... (read more)

    I agree with cumulant. The mathematical subject of probability is based on measure theory, which loses a ton of convergence theorems if we exclude 0 and 1. We can agree that things that are not known a priori can't have probability 0 or 1, but I think we must also agree that "an impossible thing will happen soon" has probability 0, because it's a contradiction. An alternate universe in which the number 7 (in the same kind of number system as ours, etc.) is prime is damn-near inconceivable, but an alternate universe in which impossible things are ... (read more)

    Speaking of measure theory, what probability should we assign to a uniformly distributed random real number on the interval [0, 1] being rational? Something bigger than 0? Maybe in practice we would never hold a uniform distribution over [0, 1] but would assign greater probability to "special" numbers (like, say, 1/2). But regardless of our probability distribution, there will exist subsets of [0, 1] to which we must assign probability 0.

    The only way I can see around this is to refuse to talk about infinite (or at least uncountable) sets. Are there others?

    I suspect Eliezer would object to my post claiming that I'm confusing map and territory, but I don't think that's fair. If there's a map you're trying to use all over the place (and you do seem to), then I claim it makes no sense to put a little region on the map labelled "maybe this map doesn't make any sense at all". If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway. So is it really reasonable to claim that "the probability that probability makes sense is <1"?

    Utili... (read more)

    If there's a map you're trying to use all over the place (and you do seem to), then I claim it makes no sense to put a little region on the map labelled "maybe this map doesn't make any sense at all". If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway.

    Janos, are you saying that it is in fact impossible that your map in fact doesn't make any sense? Because I do, indeed, have a little section of my map labelled "maybe this map doesn't make any sense at all", and every now and then, I think about it a little, because there are so many fundamental premises of which I am unsure even in their definitions. (E.g: "the universe exists", and "but why?") Just because this area of my map drops out of my everyday decision theory due to failure to generate coherent advice on preferences, does not mean it is absent from my map. "You must ignore" or rather "You should usually ignore" is decision theory, and probability theory should usually be firewalled off from preferences.

    Computable numbers are the largest countable class I know of.

    Either all countable sets are the ... (read more)

    I can admit the possibility that probability doesn't work, but not have to do anything about it. If probability doesn't work and I can't make rational decisions, I can expect to be equally screwed no matter what I do, so it cancels out of the equation.

    The definable real numbers are a countable superset of the computable ones, I think. (I haven't studied this formally or extensively.)

    If you don't want to assume the existence of certain propositions, you're asking for a probability theory corresponding to a co-intuitionistic variant of minimal logic. (Co-intuitionistic logic is the logic of affirmatively false propositions, and is sometimes called Popperian logic.) This is a logic with false, or, and (but not truth), and an operation called co-implication, which I will write a <-- b.

    Take your event space L to be a distributive lattice (with ordering <), which does not necessarily have a top element, but does have dual relative pseud... (read more)

    If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway.

    Just cos it's not a very nice place to visit, doesn't mean it ain't on the map. ;)

    "1, 2, and 3 are all integers, and so is -4. If you keep counting up, or keep counting down, you're bound to encounter a whole lot more integers. You will not, however, encounter anything called "positive infinity" or "negative infinity", so these are not integers."

    This bothered me; more to the point, it hit on some stuff I've been thinking about. I realize I don't have a very good way to precisely state what I mean by "finite" or "eventually".

    The above, for instance, basically says "if infinity is no... (read more)

    Eliezer:

    I'm not sure what an "infinite set atheist" is, but it seems from your post that you use different notions of probability than what I think of as standard modern measure theory, which surprises me. Utilitarian's example of a uniform r.v. on [0, 1] is perfect: it must take some value in [0, 1], but for all x it takes value x with probability 0. Clearly you can't say that for all x it's impossible for the r.v. to take value x, because it must in fact take one of those values. But the probabilities are still 0. Pragmatically the way this com... (read more)

    Eliezer:

    I am curious as to why you asked Peter not to repeat his stunt.

    Also, I would really like to know how confident you are in your infinite set atheism and for that matter in your non-standard philosophy of mathematics attitudes in general.

    Regarding infinite set atheism:

    Is the set of "possible landing sites of a struck golf ball" finite or infinite?

    In other words, can you finitely parameterize locations in space? Physicists normally model "position" as n-tuples of real numbers in a coordinate system; if they were forced to model position discretely, what would happen?

    I can claim to see an infinite set each time I use a ruler...

    Doug S., I believe according to quantum mechanics the smallest unit of length is Planck length and all distances must be finite multiples of it.

    Eliezer:

    I should mention that I'm also an infinite set atheist.

    You've mentioned this before, and I have always wondered: what does this mean? Does it mean that you don't believe there are any infinite sets? If so, then you have to believe that a mathematician who claims the contrary (and gives the standard proof) is making a mistake somewhere. What is it?

    Frankly, even if you actually are a finitist (which I find hard to imagine), it doesn't seem relevant to this disucssion: every argument you have presented could equally well have been given by someone who accepts standard mathematics, including the existence of infinite sets.

    The nature of 0 & 1 as limit cases seems to be fascinating for the theorists. However, in terms of 'Overcoming Bias', shouldn't we be looking at more mundane conceptions of probability? EY's posts have drawn attention to the idea that the amount of information needed to add additional certainty to a proposition increases exponentially while the probability increases linearly. This says that in utilitarian terms, not many situations will warrant chasing the additional information above 99.9% certainty (outside technical implementations in nuclear phys... (read more)

    Doug S., I believe according to quantum mechanics the smallest unit of length is Planck length and all distances must be finite multiples of it.

    Not in standard quantum mechanics. Certain of the many unsupported hypotheses of quantum gravity (such as Loop Quantum Gravity) might say something similar to this, but that doesn't abolish every infinite set in the framework. The total number of "places where infinity can happen" in modern models has tended to increase, rather than decrease, over the centuries, as models have gotten more complex... (read more)

    I think Eliezer's "infinite set atheism" is a belief that infinite sets, although well-defined mathematically, do not exist in the "real world"; in other words, that any physical phenomenon that actually occurs can be described using a finite number of bits. (This can include numbers with infinite decimal expansions, as long as they can be generated by a finitely long computer program. Therefore, using pi in equations is not prohibited, because you're using the symbol "pi" to represent the program, which is finite.)

    A consequen... (read more)

    What do you mean by "infinite set atheism"? You are essentially stating that you don't believe in mathematical limits -- because that is one of the major consequences of infinite sets (or sequences).

    If you don't believe in those... well, you lose calculus, you lose the density of real numbers, you lose the need for (or understanding of) many events with probability 0 or 1, and you lose the point of Zeno's Paradox.

    Janos is spot on about measure zero not implying impossibility. What is the probability of a golf ball landing at any exact point? Zero. ... (read more)

    What is the probability of a golf ball landing at any exact point? Zero.

    Wrong.

    I don't know which is more painful: Eliezer's errors, or those of his detractors.

    Perhaps you could clarify what exactly is an infinite set atheist in a full post...or maybe it's only worth a comment.

    Cumulant, I think the idea behind "infinite set atheism" is not that limits don't exist, but that infinities are acceptable only as limits approached in a specified way. On this view, limits are not a consequence of infinite sets, as you contend; rather, only the limit exists, and the infinite set or sequence is merely a sloppy way of thinking about the limit.

    Eliezer, I'll second Matthew's suggestion above that you write a post on infinite set atheism; it looks as if we don't understand you.

    I think I understand the motive for rejecting infin... (read more)

    Caledonian: Not wrong. Take the field you're swinging at to be a plane. There are infinitely many points in that plane; that's just the density of the reals.

    Now say there is some probability density of landing spots; and, let's say no one spot is special in that it attracts golf balls more than points immediately nearby (i.e. our pdf is continuous and non-atomic). Right there, you need every point (as a singleton) to have measure 0.

    Go pick up Billingsley: measure 0 is not the same as impossible nor does it cause any problems.

    Take the field you're swinging at to be a plane. There are infinitely many points in that plane; that's just the density of the reals.

    And the location that the ball lands on will also be composed of infinitely many reals. Shall we compare the size of two infinite sets?

    I'd say that the ball is a sphere and consider the first point of impact (i.e. the tangency point of the plane to the sphere). Otherwise, you need to know a lot about the ball and the field where it lands.

    You can compare infinite sets. Take the sets A and B, A={1,2,3,...} and B={2,3,4,...}. B is, by construction, a subset of A. There's your comparison; yet, both are infinite sets.

    What assumptions would you make for the golf ball and the field? (To keep things clear, can we define events and probabilities separately?)

    Caledonian, every undergraduate who has ever taken a statistics class knows that the probability of any single point in a continuous distribution is zero. Probabilities in continuous space are measured on intervals. Basic calculus...

    I believe according to quantum mechanics the smallest unit of length is Planck length and all distances must be finite multiples of it.

    This is what I'm given to understand as well. Doesn't this take the teeth out of Zeno's paradox?

    Pragmatically the way this comes out is that "probability 0" doesn't imply impossible.

    Janos, would you agree that P=0 is a probability to the same degree that infinity is a number? Apologies for double post.

    Caledonian, every undergraduate who has ever taken a statistics class knows that the probability of any single point in a continuous distribution is zero.

    Gowder, everyone who's ever given the issue more than three seconds' thought knows that no statistical result ever involves a single point.

    Usually, if a die lands on edge we say it was a spoiled throw and do it over. Similarly if a Dark Lord writes 37 on the face that lands on top, we complain that the Dark Lord is spoiling our game and we don't count it.

    We count 6 possibilities for a 6-sided die, 5 possibilities for a 5-sided die, 2 possibilities for a 2-sided die, and if you have a die with just one face -- a spherical die -- what's the chance that face will come up?

    I think it would be interesting to develop probability theory with no boundaries, with no 0 and 1. It works fine to do it the way it's done now, and the alternative might turn up something interesting too.

    Ben:

    Well, that depends on your number system. For some purposes +infinity is a very useful value to have. For instance if you consider the extended nonnegative reals (i.e. including +infinity) then every measurable nonnegative extended-real-valued function on a measure space actually has a well-defined extended-nonnegative-real-values integral. There are all kinds of mathematical structures where an infinity element (or many) is indispensable. It's a matter of context. The question of what is a "number" is I think very vague given how many intere... (read more)

    I think it would be interesting to develop probability theory with no boundaries, with no 0 and 1. It works fine to do it the way it's done now, and the alternative might turn up something interesting too.

    You might want to check out Kosko's Fuzzy Thinking. I haven't gone any further into fuzzy logic, yet, but that sounds like something he discussed. Also, he claimed probability was a subset of fuzzy logic. I intend to follow that up, but there is only one of me, and I found out a long time ago that they can write it faster than I can read it.

    "On some golf courses, the fairway is readily accessible, and the sand traps are not. The green is either."

    Haha, very nice CGD. Shows how much those philosophers of language know about golf. :-)

    Although... hmm... interesting. I think that gives us a way to think about another probability 1 statement: statements that occupy the entire logical space. Example: "either there are probability 1 statements, or there are not probability 1 statements." That statement seems to be true with probability 1...

    Disallowing a symbol for "all events" breaks the definition of a probability space. It's probably easier to allow extended reals and break some field axioms than figure out do rigorous probability without a sigma-algebra.

    When re-working this into a book, you need to double check your conversions of log odds into decibels. By definition, decibels are calculated using log base 10, but some of your odds are natural logarithms, which confused the heck out of me when reading those paragraphs.

    Probability .0001 = -40 decibels (This is the only correct one in this post; all "decibel" figures afterwards are listed as 10 * the natural logarithm of the odds.)
    Probability 0.502 = 0.035 decibels
    Probability 0.503 = 0.052 decibels
    Probability 0.9999 = 40 decibels
    Probability 0.99999 = 50 decibels

    P.S. It'd be nice if you provided an RSS feed for the comments on a post, in addition to the RSS feed for the posts...

    I cannot begin to imagine where those numbers came from. Dangers of "Posted at 1:58 am", I guess. Fixed.

    0 · CuriousAlbert · 15y
    Could you respond to Neel Krishnaswami's post above, and this one as well?

    P(A&B)+P(A&~B)+P(~A&B)+P(~A&~B)=1

    Isn't the "1" above a probability?
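The identity above is easy to verify numerically; a toy sketch, with made-up joint probabilities for the two binary propositions:

```python
# Assumed toy joint distribution over two binary propositions A and B.
joint = {
    ("A", "B"): 0.12,
    ("A", "~B"): 0.28,
    ("~A", "B"): 0.33,
    ("~A", "~B"): 0.27,
}

# P(A&B) + P(A&~B) + P(~A&B) + P(~A&~B): the four mutually exclusive,
# exhaustive conjunctions partition the sample space.
total = sum(joint.values())
print(total)  # sums to 1, up to floating-point rounding
```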

    My intuition as a mathematician declares that nobody will ever develop an elegant mathematical formulation of probability theory that does not allow for statements that are logically impossible or certain, such as statements of the form p AND NOT p. And it is necessary, if the theory is to be isomorphic to the usual one, that these statements have probability 0 (if impossible) or 1 (if certain). However, I believe that it is quite reasonable to declare, as a condition demanded of any prior deemed rational, that only truly impossible or certain statements... (read more)

    0TobyBartels14y
    PS: cumulant-nimbus above gives a brief summary of the usual approach to measure theory. The pointless approach that I advocate can be suggested from that as follows: taboo \Omega. Neel Krishnaswami's comment is implicitly using the pointless approach; his event space is cumulant-nimbus's \mathcal{F}, and he works entirely in terms of events.
    0MarsColony_in10years9y
    Thanks for the link. It sounds like Yudkowsky is arguing something quite close to Cromwell's Rule, with a slight technical difference. From the Wikipedia article: Yudkowsky would argue that formal logic is not part of the territory, but rather part of our map (perhaps surveying equipment would be a good analogy, since the compass analogy is already taken by "moral compass"). As such, not even formal mathematical logic should be presumed to have 100% certainty. Of course, this raises the problem of constantly having to include the term p(math is fundamentally flawed) everywhere. Instead of just writing p(heads) when calculating the odds of a coin flip or flips, we'd now have to use p(heads | ~(math is fundamentally flawed)). As a matter of sheer convenience, it would be easier to just add it to the list of axioms supporting the fundamental theorems that the rest of mathematics is built on. But that's just semantics, I suppose. Wikipedia has a couple more interesting tidbits, that I've fished out for future readers:

    I'm kinda surprised that it's only been mentioned once in the comments (I only just discovered this site, really really great, by the way) and one from 2010 at that, but it seems to me that "a magical symbol to stand for "all possibilities I haven't considered" " does exist: the symbol "~" (i.e. not). Even the commenter who does mention it makes things complicated for himself: P(Q or ~Q)=1 is the simplest example of a proposition with probability 1.

    The proposition is of course a tautology. I do think (but I'm not sure) that th... (read more)

    0player_0313y
    Thanks for the analysis, MathijsJ! It made perfect sense and resolved most of my objections to the article.

    I was willing to accept that we cannot reach absolute certainty by accumulating evidence, but I also came up with multiple logical statements that undeniably seemed to have probability 1. Reading your post, I realized that my examples were all tautologies, and that your suggestion to allow certainty only for tautologies resolved the discrepancy.

    The Wikipedia article timtyler linked to seems to support this: "Cromwell's rule [...] states that one should avoid using prior probabilities of 0 or 1, except when applied to statements that are logically true or false." This matches your analysis - you can only be certain of tautologies.

    Also, your discussion of models neatly resolves the distinction between, say, a mathematically-defined die (which can be certain to end up showing an integer between 1 and 6) and a real-world die (which cannot quite be known for sure to have exactly six stable states).

    ----------------------------------------

    Eliezer makes his position pretty clear: "So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers."

    It's true - you cannot ever reach a probability of 1 if you start at 0.5 and accumulate evidence, just as you cannot reach infinity if you start at 0 and add integer values. And the inverse is true, too - you cannot accumulate evidence against a tautology and bring its probability down to anything less than 1. But this doesn't mean a probability of 1 is an incoherent concept or anything.

    Eliezer: if you're going to say that 0 and 1 are not probabilities, you need to come up with a new term for them. They haven't gone away completely just because we can't reach them.

    Edit a year and a half later: I agree with the article as written, partially as a result of reading How to Convince Me That 2 + 2
    0fubarobfusco13y
    Jaynes avoids P(A|B) for "probability of A given evidence B" and P(B) for "probability of B", preferring P(A|BX) and P(B|X) where X is one's background knowledge. This and the above lead naturally to the question of ~X: the situation in which one's "background knowledge" is false. Assume that background knowledge X is the conjunction of a finite number of propositions. ~X is true if any of these propositions is false. If we can factor X into YZ where Y is the portion we suspect of being false — that is, if we can isolate for testing a portion of those beliefs we previously treated as "background knowledge" — then we can ask about P(A|BYZ) and P(A|B·~Y·Z).

    For any state of information X, we have P(A or not A | X) = 1 and P(A and not A | X) = 0. We have to have 0 and 1 as probabilities for probability theory even to work. I think you're taking a reasonable idea -- that P(A | X) should be neither 0 nor 1 when A is a statement about the concrete physical world -- and trying to apply it beyond its applicable domain.

    Consider the set of all possible hypotheses. This is a countable set, assuming I express hypotheses in natural language. It is potentially infinite as well, though in practice a finite mind cannot accommodate infinitely-long hypotheses. To each hypothesis, I can try to assign a probability, on the basis of available evidence. These probabilities will be between zero and one. What is the probability that a rational mind will assign at least one hypothesis the status of absolute certainty? Either this is one (there is definitely such a hypothesis), or zero (th... (read more)

    1jimrandomh13y
    Self-referential hypotheses do not always map to truth values, and "a rational mind will assign at least one hypothesis the status of absolute certainty" is self-referential. The contradiction you've encountered arises from using a statement isomorphic to "this statement is false" and requiring it to have a truth value, not to a problem with excluding 0 and 1 as probabilities.

    Yes, 0 and 1 are not probabilities. They're truth and falsity values. It's necessary to make a third 'truth value' for things that are unprovable, and possibly a fourth for things that are untestable.

    Digging up an old thread here, but an interesting point I want to bring up: a friend of mine claims that he internally assigns probability 1 (i.e. an undisprovable belief) only to one statement: that the universe is coherent. Because if not, then mnergarblewtf. Is it reasonable to say that even though no statement can actually have probability 1 if you're a true Bayesian, it's reasonable to internally establish an axiom which, if negated, would just make the universe completely stupid and not worth living in any more?

    0grouchymusicologist12y
    No, it's not. It's the same fundamental mistake that a lot of religious rhetoric about "faith" and "meaning" is founded on: that wanting something to be true counts as evidence that it is true. There's no reason to think that the universe depends for any of its properties on whether someone finds it stupid or not, or worth living in. I'd also suggest you try to draw your friend out a bit on what it means exactly for the universe to be "coherent." Can that notion be expressed formally? What would we expect to see if we lived in an incoherent universe? Obviously, I'm dubious that the "coherence" of the universe is in any proper sense a philosophical or scientific idea -- it sounds a lot more like an aesthetic one.
    0Sarokrae12y
    I think he just means "coherent" as "one which we can actually model based on our observations", i.e. one in which this whole exercise (rationality) makes any sense. He expects that the universe be incoherent with probability zero, and doesn't think there would be any sensible observations if this were the case (or any observation being possible if this were the case). ETA: Merriam-Webster Definition of COHERENT 1 a : logically or aesthetically ordered or integrated : consistent b : having clarity or intelligibility : understandable So, understandable and consistent: a universe which philosophy, mathematics and science can apply to in any meaningful way.
    0Alejandro112y
    A charitable paraphrase of "The universe is coherent" could be a statement of the universal validity of non-contradiction: For every p, not (p and not p). However, given the existence of paraconsistent logic and philosophers who take dialetheism seriously, I cannot assign probability 1 to the claim that no aspect of the universe requires a contradiction in its description. I would go even further to say that I am quite more certain of many other claims (such as "1+1=2" and "2+2=4") than of such general and abstract propositions as "the universe is coherent" or even "there are no true contradictions".
    0Sarokrae12y
    I don't think he goes quite that far - he assigns no statements probability 0 or 1 within our own logic system, even (P and ¬P), because he believes it to be possible (though not very likely) that some other logic system might supersede our own. His belief is that it is not possible for ALL systems of logic to be incorrect, i.e. that (it is impossible to reason correctly about the universe) is necessarily false.
    1nshepperd12y
    There's a lot of logic to that. For extremely unlikely possibilities you can often get away with setting their probability to 0 to make the calculations a lot simpler. For possibilities where predicted utility is independent of your actions (like "reality is just completely random") it can also be worthwhile setting their probability to 0 (ie. ignoring them), since they're approximately a constant term in expected utility. These are good ways of approximating actual expected utility so you can still mostly make the right decisions, which bounded rationality requires.
    0Richard_Kennaway12y
    What is P(A|A)?
    0Sarokrae12y
    What do you mean by "|A"? It's well-defined in mathematics, sure, but in real life, surely the furthest you can go is "|experience/perception of evidence for A". There's also the probability that the particular version of logic you're using is wrong.
    0[anonymous]12y
    How far you can go depends on what you mean by "go". It's perfectly possible to calculate, say, P(I see the coin come up heads | the coin is flipped once, it is fair, and I see the outcome), and actually much more difficult to calculate P(I see the coin come up heads | I have experience/perception of evidence for the facts that the coin is flipped once, it is fair, and I see the outcome).
    0Sarokrae12y
    "I see" is what I meant by perception/experience of evidence. Whenever I "see" something, there's always a non-zero chance of my brain deceiving me. The only thing you can really have to base your decisions on is P(I see the coin come up heads | I see/know the coin is flipped once, I know it is fair, and I see the outcome). P(the coin comes up heads|the coin is flipped once, it is fair and I know the outcome) is possible and easy to calculate, but not completely accurate to the world we live in.

    The ("Bayesian") framework explored in these essays replaces the two Cartesian options, affirmation and denial, by a continuum of judgmental probabilities in the interval from 0 to 1, endpoints included, or -- what comes to the same thing -- a continuum of judgmental odds in the interval from 0 to infinity, endpoints included. Zero and 1 are probabilities no less than 1/2 and 99/100 are. Probability 1 corresponds to infinite odds, 1:0. That's a reason for thinking in terms of odds: to remember how momentous it may be to assign probability 1 to a

    ... (read more)

    Interesting Log-Odds paper by Brian Lee and Jacob Sanders, November 2011.

    "When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence."

    That observation is so useful and intuition-friendly it probably deserves its own blog post, and a prominent place in your book.
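The additivity behind that observation can be sketched directly: in log-odds form, a Bayesian update just adds the log likelihood ratio, so equal amounts of evidence move you equal distances regardless of your starting point (the 4:1 likelihood ratio below is an arbitrary illustrative choice):

```python
import math

def log_odds(p):
    """Natural log-odds of a probability p in (0, 1)."""
    return math.log(p / (1 - p))

def update(prior_p, likelihood_ratio):
    """Bayes' rule in log-odds form: posterior log-odds = prior log-odds + log(LR)."""
    posterior_log_odds = log_odds(prior_p) + math.log(likelihood_ratio)
    odds = math.exp(posterior_log_odds)
    return odds / (1 + odds)

# Three successive pieces of 4:1 evidence each shift the log-odds by the
# same amount, log(4) ~= 1.386, no matter where we start.
p = 0.5
for _ in range(3):
    q = update(p, 4.0)
    print(f"{p:.4f} -> {q:.4f}  (log-odds shift = {log_odds(q) - log_odds(p):.4f})")
    p = q
```

The probabilities move through 0.5, 0.8, about 0.941, about 0.985, but the spacing in log-odds is constant: that constant spacing is the "natural measure" the quoted paper describes.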

    Forgive me if this sounds condescending, but isn't saying "0 and 1 are not probabilities because they won't let you update your knowledge" basically the same as saying "you can't know something because knowing makes you unable to learn"? If we assign tautologies as having probability 1, then anything reducible to a tautology should have probability 1 (and similarly, all contradictions and things reducible to contradictions should have probability 0). For any arbitrarily large N, if you put 2 apples next to 2 apples and repeat the test N... (read more)

    -2Qiaochu_Yuan11y
    You shouldn't assign tautologies probability 1 either because your notion of what a tautology is might be a hallucination.
    2Richard_Kennaway11y
    This confuses object level and meta level. In probability theory, P(-A|A) = 0 and P(A|A) = 1, however uncertain you may be about Cox's theorem, or about whether you are actually thinking about the same A each time it appears in those formulas. No-one, as far as I know, has ever constructed a theory of probability in which these are assigned anything else but 0 and 1. That is not to say that it cannot be done, only that it has not been done. Until that is done, 0 and 1 are probabilities.

    The title of the article is a rhetorical flourish to convey the idea elaborated in its body, that to assert a probability, as a measure of belief, of 0 or 1 is to assert that no possible evidence could update that belief, that 0 and 1 are probabilities that you should not find yourself assigning to matters about which there could be any real dispute, and to suggest odds ratios or their logarithms as a better concept when dealing with practical matters associated with very low or very high probabilities.

    There is a very large difference between saying that the probability of winning a lottery is tiny and saying that it cannot happen at all; with enough participants it is almost certain to happen to someone. That difference is made clear by the log-odds scale, which puts the chance of a lottery ticket at 60 or more decibels below zero, not infinitely far below. In a world with 7 billion people, billion-to-1 chances happen every day.

    As an example of even tinier probabilities which are still detectably different from zero, consider a typical computer. A billion transistors in its CPU, clocked a billion times a second, running for a conveniently round length of time, a million seconds, which is about 12 days. Computers these days can easily do that without a single hardware error, which means that for every one of a million billion billion switching events, a transistor opened or closed exactly as designed. A million billion billion is about 1.5 times Avogadro's number. 
The correspondi
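Those log-odds figures check out; a quick sketch (the million-to-one and billion-to-one odds are illustrative values matching the lottery examples in the comment above):

```python
import math

def odds_to_decibels(odds):
    """Log-odds in decibels: 10 * log10(odds)."""
    return 10 * math.log10(odds)

print(odds_to_decibels(10 ** -6))  # -60.0: a million-to-one lottery ticket
print(odds_to_decibels(10 ** -9))  # -90.0: a billion-to-one chance, far below zero but still finite
```

On this scale, tinier and tinier chances just sit further down the number line; only odds of exactly 0:1 would be infinitely far below, which is the distinction the comment is drawing.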

    So you are saying that the statement "0 and 1 are not probabilities" has probability 1?

    0thrawnca8y
    Nope. He's saying that based on his best analysis, it appears to be the case.

    O = (P / (1 - P))

    probabilities and odds are isomorphic

    This is undefined for P = 1. If you claim that that function is a real-valued bijection between probabilities and odds, then P = 1 doesn't work, so you're begging the question. Always take care not to divide by zero.

    Whether or not real-world events can have a probability of 0 or 1 is a different question than "are 0 and 1 probabilities?". They most certainly are.

    3Jake_NB3y
    I agree with this one. Without probabilities of 0 and 1, it's not merely that some proofs of theorems need to be revised, it's that probability theory simply doesn't work anymore, as its very axioms fall apart. I can give a statement that is absolutely certain, e.g. "x is true given that x is true". It doesn't teach me much about real life experiences, but it is infinitely certain. Likewise with probability 0. Please note that the probability is assigned to the territory here, not the map. The fact that I can't encounter these probabilities in real life has to do with my limits of sampling reality and interpreting it, being a flimsy brain, rather than the limits of probability theory. You may not want to believe that probability theory contains 0 and 1, but like many other cases, Math doesn't care about your beliefs.

    If I roll a die, then one of the events that can happen will happen. That's just saying that if S is my sample space, then P(S) = 1. Similarly, P(~S) = 0, which is just saying that impossible things won't happen. The former statement is an axiom in the standard mathematical treatments of the subject. These statements may be trivial, but I distrust any mathematics that can't handle trivial cases.

    Rejecting 1 as a probability would be catastrophic when you're dealing with discrete spaces. If you're the sort to reject infinity, then it would follow that all pr... (read more)

    2ike9y
    You can reject infinity without being able to enumerate every possibility. Your sample space will never practically contain all the possibilities. (How many times has something you never thought of happened?) There are 2^(however many bits of input come into my brain) possibilities for me to observe for any period of time, and I can never think about all of them. Any explicit sample space is going to miss possibilities. S is not well-defined. I think the point of the post was that 1 shouldn't be used for practical cases.
    0Epictetus9y
    Real life is complex enough that there is merit to the philosophical position that one should refrain from assigning probabilities of 0 or 1 to nontrivial events. Categorically denying that any event can have probability 0 or 1 is an extreme position (which, applied to itself, would really mean that a given event would have a high probability of not occurring with probability 0 or 1). From the purely mathematical standpoint, removing 0 and 1 from the set of possible probabilities breaks the current foundations of the theory. The existence of a sample space containing all possibilities does not depend on whether we humans can comprehend them all. If the sample space of all possibilities exists and P(S) < 1, then a lot of theorems break down. That's where you live with idealizations like absolute certainty (or almost certainty in the infinite case) or else find something other than probability to use to model the real world.
    0ike9y
    In theory, if you could list every possible observation you could make, that list would have probability 1. It would take infinite time, because the following class of outcomes: has an infinite cardinality. I could get into how Gödel's results mean you can't even in principle describe all possible outcomes in a finite amount of space, even by referencing classes like I did, but I'll leave that up to you. There was a suggested fix to your problem in the post, why isn't that good enough for you? Sounds like he agrees that S has probability 1. Note: I agree that the way he "proves" the claim is not very good. He basically tries to switch your intuition by switching the wording of the question. Not too rigorous.
    0Epictetus9y
    When I say that the possibilities can be listed in principle, what I mean is that there is some set S that contains them, and I make no reference to any practical problems with describing or storing its elements. Like the points and lines of geometry, it's a Platonic idealization. Because talk of magical symbols is a good sign that the passage was meant to ridicule the use of infinity. The very next paragraph seeks to expunge such "magical symbols" from probability theory.
    0ike9y
    If he has a rigorous way to ground probability theory without 0 and 1, I'm fine with it. He seemed to be saying that he wishes there was such a way, but until someone develops one, he's stuck with magical symbols. He acknowledges all your problems in the end of the post.

    This article is largely incoherent. The main justification is the abuse of an invalid transformation: y=x/(1-x) is not the bijection that he asserts it is, because it's not a function that maps [0,1] onto R. It's a function that maps [0,1] onto [0,\infty] as a subset of the topological closure of R. And that's okay, but you can't say "well I don't like the topological closure of R, so I'll just use R and claim that 1 is where the problem is."

    Additionally, his discussion of log odds and such is perfectly fine, but ignores the fact that there are places where you do need to have an odds of 0:1, or a log odds of negative infinity. Probability theory stops working when you throw out 0 and 1, it's as simple as that.

    Even if you don't want to handle tautologies or contradictions, there are other ways to get P(X)=0 or 1. The probability that a real number chosen uniformly from the interval [0,1] equals any particular point is 0. It has to be. It's a provable fact under ZFC, and to decide otherwise is to say that you're more attached to the idea of 0 and 1 not being probabilities than you are to the fact that mathematics is consistent, and if you really believe that, well, there's absolutely nothing I have to say to you.

    This is one of those situations where EY just demonstrates he knows very little mathematics.

    1Regex9y
    As someone who doesn't know much beyond basic statistics, in what way are 0 or 1 probabilities? Isn't it just axiomatic truth at that point? In that sense saying zero and one are probabilities is just saying 'certain' or 'impossible' as far as I understand it. Situations where an event will definitely or definitely not occur don't seem consistent with the idea of randomness which I've understood probability to revolve around. I suppose the alternative would be that we'd have to assume every mathematical proof has infinite evidence if we wanted to get anywhere productive- after all, axioms are assumed to be true. It doesn't make much sense to need evidence in that scenario- except perhaps the probability of error and mistake? That isn't particularly calculable and would actually change from person to person. Using one and zero makes sense to me as a matter of assumed or proven truths, but I'm still unsure how that makes it a probability.
    1Epictetus9y
    "Event" is a very broad notion. Let's say, for example, that I roll two dice. The sample space is just a collection of pairs (a, b) where "a" is what die 1 shows and "b" is what die 2 shows. An event is any sub-collection of the sample space. So, the event that the numbers sum to 7 is the collection of all such pairs where a + b = 7. The probability of this event is simply the fraction of the sample space it occupies. If I rolled eight dice, then they'll never sum to seven and I say that that event occurs with probability 0. If I secretly rolled an unknown number of dice, you could reasonably ask me the probability that they sum to seven. If I answer "0", that just means that I rolled more than one and fewer than eight dice. It doesn't make the process less random nor the question less reasonable. If you treat an event as some question you can ask about the result of a random process, then 1 and 0 make a lot more sense as probabilities. For the mathematical theory of probability, there are plenty of technical reasons why you want to retain 1 and 0 as probabilities (and once you get into continuous distributions, it turns out that probability 1 just means "almost certain").
    0Regex9y
    This is what I meant by something being a proven truth- within the rules set one can find outcomes which are axiomatically impossible or necessary. The process itself may be random, but calling it random when something impossible didn't happen seems odd to me. The very idea that 1 may be not-quite-certain is more than a little baffling, and I suspect is the heart of the issue.
    1Epictetus9y
    If 1 isn't quite certain then neither is 0 (if something happens with probability 1, then the probability of it not happening is 0). It's one of those things that pops up when dealing with infinity. It's best illustrated with an example. Let's say we play a game where we flip a coin and I pay you $1 if it's heads and you pay me $1 if it's tails. With probability 1, one of us will eventually go broke (see Gambler's ruin). It's easy to think of a sequence of coin flips where this never happens; for example, if heads and tails alternated. The theory holds that such a sequence occurs with probability 0. Yet this does not make it impossible. It can be thought of as the result of a limiting process. If I looked at sequences of N coin flips, counted the ones where no one went broke and divided this by the total number of possible sequences, then as I let N go to infinity this ratio would go to zero. This event occupies a region with area 0 in the sample space.
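The gambler's-ruin claim is easy to simulate; a sketch with assumed $10 bankrolls on each side (the classical theorem concerns the unbounded game, but the finite version shows the same behavior):

```python
import random

def someone_goes_broke(bankroll=10, max_flips=100_000, rng=random):
    """Play the $1 coin-flip game until one player is broke (or we give up)."""
    a = bankroll  # player A's money
    b = bankroll  # player B's money
    for _ in range(max_flips):
        if rng.random() < 0.5:
            a, b = a + 1, b - 1
        else:
            a, b = a - 1, b + 1
        if a == 0 or b == 0:
            return True
    return False

random.seed(0)
trials = 1000
ruined = sum(someone_goes_broke() for _ in range(trials))
print(ruined / trials)  # 1.0: with finite bankrolls, ruin is (almost) certain
```

The never-ruined sequences, like perfectly alternating heads and tails, exist but form a probability-0 set, which is why the simulation never finds one.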
    0Regex9y
    If the limit converges then it can hit 0 or 1. Got it. Thank you.
    1StellaAthena9y
    Formally, probability is defined via areas. The basic idea is that the probability of picking an element from a set A out of a set B is the ratio of the areas of A to B, where "area" can be defined not only for things like squares but also things like lines, or actually almost every* subset of R. So, let's say you want to randomly select a real number from the interval [0,1] and want to know the odds it falls in a set, S. The area of [0,1] is 1, so the answer is just the area of S. If S={0}, then S has area zero. If S=[0,1), then S has area 1. Not only are both of these theoretical possibilities, they are practical ones too. There are real world examples of probability zero events (the only one that comes to mind involves QM though so I don't want to bother with the details). Now, notice that this isn't the same thing as "impossible". Instead, it means more like "it won't happen I promise even by the time the universe ends". The way I tend to think about probability zero events is that they are so unlikely they are beyond the reach of the principle that as the number of trials increases, events become expected. For any nonzero probability, there is a number of trials, n, such that once you do it n times the expected value becomes greater than 1. That's not the case with probability zero events. Probability 1 events can then be thought of as the negation of probability 0 events. *not actually "almost every" in a formal sense, but "almost any" in a "unless you go try to build a set that you can't measure it probably has a well defined area" sense
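The point-versus-interval contrast can be illustrated by sampling; a sketch where the particular point (0.5) and interval ([0.2, 0.7), length 0.5) are arbitrary choices:

```python
import random

n = 1_000_000

# Probability of hitting one pre-chosen point: measure zero.
random.seed(1)
point_hits = sum(random.random() == 0.5 for _ in range(n))

# Probability of landing in [0.2, 0.7): the interval's length, 0.5.
random.seed(1)
interval_hits = sum(0.2 <= random.random() < 0.7 for _ in range(n))

print(point_hits / n)     # 0.0 (hitting an exact pre-chosen float is astronomically unlikely)
print(interval_hits / n)  # close to 0.5
```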
    1Regex9y
    That seems a solid enough explanation, but how can something of probability zero have a chance to occur? How then do you represent an impossible outcome? It seems like otherwise 'zero' is equivalent to 'absurdly low'. That doesn't quite jibe with my understanding.
    2StellaAthena9y
    Impossible things also have a probability of zero. I totally understand that this seems a bit unintuitive, and the underlying structure (which includes things like infinities of different sizes) is generally pretty unintuitive at first. Which is kinda just saying "sorry, I can't explain the intuition," which is unfortunately true.
    0Regex9y
    I'm just going to think of it as taking the limit as evidence approaches infinity. Because a probability next to zero and zero are identical, zero then is a probability?
    1Stephen_Cole9y
    I think one of the clearest expositions on these issues is ET Jaynes. The first three chapters (which is some of the relevant part) can be found at http://bayes.wustl.edu/etj/prob/book.pdf.
    0Regex9y
    "Not Found The requested URL /etj/prob/book.pdf. was not found on this server."
    2arundelo9y
    Fixed Jaynes link (no trailing period).
    0Regex9y
    Ah. Thanks!
    0Stephen_Cole9y
    Oops. Thanks for the fix!
    1David_Bolin9y
    Eliezer isn't arguing with the mathematics of probability theory. He is saying that in the subjective sense, people don't actually have absolute certainty. This would mean that mathematical probability theory is an imperfect formalization of people's subjective degrees of belief. It would not necessarily mean that it is impossible in principle to come up with a better formalization.
    1Lumifer9y
    Errr... as I read EY's post, he is certainly talking about the mathematics of probability (or about the formal framework in which we operate on probabilities) and not about some "subjective sense". The claim of "people don't actually have absolute certainty" looks iffy to me, anyway. The immediate two questions that come to mind are (1) How do you know? and (2) Not even a single human being?
    0Bound_up9y
    I think he's just acknowledging the minute(?) possibility that our apparently flawless reasoning could have a blind spot. We could be in a Matrix, or have something tampering with our minds, etcetera, such that the implied assertion: If this appears absolutely certain to me Then it must be true is indefensible.
    1Lumifer9y
    There are two different things. David_Bolin said (emphasis mine): "He is saying that in the subjective sense, people don't actually have absolute certainty." I am interpreting this as "people never subjectively feel they have absolute certainty about something" which I don't think is true. You are saying that from an external ("objective") point of view, people can not (or should not) be absolutely sure that their beliefs/conclusions/maps are true. This I easily agree with.
    0David_Bolin9y
    It should probably be defined by calibration: do some people have a type of belief where they are always right?
    0Lumifer9y
    Self-referential and anthropic things would probably qualify, e.g. "I believe I exist".
    -1StellaAthena9y
    You can phrase statements of logical deduction such that they have no premises and only conclusions. If we let S be the set of logical principles under which our logical system operates and T be some sentence that entails Y, then S AND T implies Y is something that I have absolute certainty in, even if this world is an illusion, because the premise of the implication contains all the rules necessary to derive the result. A less formal example of this would be the sentence: If the rules of logic as I know them hold and the axioms of mathematics are true, then it is the case that 2+2=4
    0Gram_Stone9y
    The way I view that statement is: "In our formalization, agents with absolutely certain beliefs cannot change those beliefs, we want our formalization to capture our intuitive sense of how an ideal agent would update its beliefs, a formalization with a quality of fanaticism does not capture our intuitive sense of how an ideal agent would update its beliefs, therefore we do not want a quality of fanaticism." And what state of the world would correspond to the statement "Some people have absolute certainty." ? Do you think that we can take some highly advanced and entirely fictional neuroimaging technology, look at a brain and meaningfully say, "There's a belief with probability 1." ? And on the other hand, I'm not afraid to talk about folk certainty, where the properties of an ideal mathematical system are less relevant, where everyone can remain blissfully logically uncertain to the fact that beliefs with probability 1 and 0 imply undesirable consequences in formal systems that possess them, and say things like "I believe that absolutely." I am not afraid to say something like, "That person will not stop believing that for as long as he lives," and mean that I predict with high confidence that that person will not stop believing that for as long as he lives. And once you believe that the formalization is trying to capture our intuitive sense of an ideal agent, and decide whether or not that quality of fanaticism captures it, and decide whether or not you're going to be a stickler about folk language, then I don't think that any question or confusion around that claim remains.
    0Lumifer9y
    People are not "ideal agents". If you specifically construct your formalization to fit your ideas of what an ideal agent should and should not be able to do, this formalization will be a poor fit to actual, live human beings. So either you make a system for ideal agents -- in which case you'll still run into some problems because, as has been pointed out upthread, standard probability math stops working if you disallow zeros and ones -- or you make a system which is applicable to our imperfect world with imperfect humans.
    1Gram_Stone9y
    I don't see why both aren't useful. If you want a descriptive model instead of a normative one, try prospect theory. I just don't see this article as an axiom that says probabilities of 0 and 1 aren't allowed in probability theory. I see it as a warning not to put 0s and 1s in your AI's prior. You're not changing the math so much as picking good priors.
    0Wes_W9y
    If we're asking what the author "really meant" rather than just what would be correct, it's on record. I... can't really recommend reading the entire thread at the link, it's kind of flame-war-y and not very illuminating.
    4EHeller9y
    I think the issue at hand is that 0 and 1 aren't special cases at all, but are necessary for the math of probability theory to work (try to construct a probability measure in which no subset has probability 1 or 0). They are incredibly necessary for the mathematical idea of probability, and EY seems to be confusing "are 0 and 1 probabilities relevant to Bayesian agents?" with "are 0 and 1 probabilities?" (yes, they are, unavoidably, and not as a special case!).
    -1Lumifer9y
    It seems that EY's position boils down to a claim about what is "mentally healthier." And that's a weak claim. EY's ideas of what is "mentally healthier" are, basically, his personal preferences. I, for example, don't find any mental health benefits in thinking about one-over-googolplex probabilities.
    0Wes_W9y
    Cromwell's Rule is not EY's invention, and relatively uncontroversial for empirical propositions (as opposed to tautologies or the like). If you don't accept treating probabilities as beliefs and vice versa, then this whole conversation is just a really long and unnecessarily circuitous way to say "remember that you can be wrong about stuff".
    3EHeller9y
    The part that is new compared to Cromwell's rule is that Yudkowsky doesn't want to give probability 1 to logical statements ("53 is a prime number"). Because he doesn't want to treat 1 as a probability, you can't expect complete sets of events to have total probability 1, despite their being tautologies. Because he doesn't want probability 0, how do you handle the empty set? How do you assign probabilities to statements like "A and B" where A and B are logically exclusive ("the coin lands heads AND the coin lands tails")? Removing 0 and 1 from the math of probability breaks most of the standard manipulations. Again, it's best to just say "be careful with 0 and 1 when working with odds ratios."
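EHeller's point about the axioms can be made concrete. Here is a toy Python sketch (my own illustration, not from the thread): whatever credences you hold about individual outcomes, the measure itself is forced to assign 1 to the whole sample space and 0 to the impossible event, and conditioning on a probability-0 event divides by zero.

```python
from fractions import Fraction

# Toy probability measure over one fair coin flip.
outcomes = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}

def prob(event):
    """Probability of an event given as a set of outcomes."""
    return sum(outcomes[o] for o in event)

def cond(a, b):
    """Conditional probability P(a | b) -- undefined when prob(b) == 0."""
    return prob(a & b) / prob(b)

# The tautology ("heads OR tails") must get probability 1:
assert prob({"heads", "tails"}) == 1
# "heads AND tails" is logically exclusive, so its probability must be 0:
assert prob({"heads"} & {"tails"}) == 0
```

Banning 0 and 1 outright would break both assertions, which is why "be careful with them" is a better slogan than "they aren't probabilities."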
    2Lumifer9y
    Nobody is saying EY invented Cromwell's Rule, that's not the issue. The issue is that "0 and 1 are not useful subjective certainties for a Bayesian agent" is a very different statement than "0 and 1 are not probabilities at all".
    0Wes_W9y
    You're right, I misread your sentence about "his personal preferences" as referring to the whole claim, rather than specifically the part about what's "mentally healthy". I don't think we disagree on the object level here.
    0David_Bolin9y
    Of course if no one has absolute certainty, this very fact would be one of the things we don't have absolute certainty about. This is entirely consistent.
    2Nikolaus Hansen4y
    How is that not a bijection? Specifically, a bijection between the sets [0, 1) ∪ {1} and ℝ≥0 ∪ {∞}, which seems to be exactly the claim EY is making. On a broader point, EY was not calling into question the correctness or consistency of mathematical concepts or claims, but whether they have any useful meaning in reality. He was not talking about the map; he was talking about the territory, and about how we may improve the map to better reflect the territory.

    A real mathematician got in a debate with EY over this post, and made some really good points: https://np.reddit.com/r/badmathematics/comments/2bazyc/0_and_1_are_not_probabilities_any_more_than/cj43y8k

    Maybe this doesn't stand up mathematically, but I really like the intuition of log odds instead of probability, and this post explained it quite well. The main point, that you shouldn't believe in absolute certainties, still stands. An ideal AI using probability theory would probably use log odds, and not have a 0 or 1.

    1mako yass5y
    /r/badmathematics is shuttered now, apparently. Oh no, really? Who would have thought that the sorts of people who have learned to enjoy indulging contempt would eventually turn on each other.

    I really wanted to see that argument, though. Tell me, to what extent was it an argument? Because I feel like if a person in our school wanted to settle this, they'd just distinguish the practical cases EY is talking about from the mathematical cases the conversants are talking about, and everyone would immediately wake up and realise how immaterial the disagreement always was (though some of them might decide to be mad about that instead). But also, maybe Eliezer kind of likes getting people riled up about this, so maybe dispersing the confusion never crossed his mind. Contempt vampires meet contempt bender. Kismesis is forged.

    I shouldn't contribute to this "fight", but I can't resist. I'd have recommended he bring up how the brunt of the causal network formalization explicitly disallows certain or impossible events at the math level once you cross into a certain level of sophistication (I forget where the threshold was, but I remember thinking "well, the bayesian networks that support 0s and 1s sound pretty darn limited, and I'm going to give up on them just as my elders advised").

    Ultimately, the "can't be 0 or 1" restriction is pretty obviously needed for a lot of the formulas to work robustly (you can't even use the definition of conditional probability without restricting the prior of the evidence, because there's a division in it, and there are lots of divisions in probability theory!). So I propose that we give a name to that restriction, and I offer the name "credences". (Currently, it seems the word "credence" is just assigned to a bad overload of "probability" that uses percent notation instead of the normal range. I doubt anyone will miss it.) A probability is a credence iff it is neither 0 nor 1. A practical real-world right and justly radically skeptical bayesian reasoner
    6Houshalter5y
    It's back, btw. If it ever goes down again, you can probably get it on the Wayback Machine. And yes, the /r/bad* subreddits are full of terrible academia snobbery. Badmathematics is the best of the bunch because mathematics is at least kind of objective, so they mostly talk about philosophy of mathematics.

    The problem is that formal models of probability theory have problems with logical uncertainty. You can't assign a nonzero probability to a false logical statement. All the standard reasoning about probability theory is about modelling uncertainty in the unknown external world. This post is an early attempt to think about logical uncertainty, which MIRI has since published papers on and tried to formalize. Just calling them "log odds" is fine, and they are widely used in real work.

    Btw, what does "Response to previous version" mean? Was this article significantly edited? It doesn't seem so confrontational reading it now.
    3habryka5y
    We published new versions of a lot of sequences posts a few months ago. If you click on the "Response to previous version" text, you can read the original text that the comment was referring to.
    2[anonymous]4y
    Wait, these old posts have been edited? I don’t see the “Response to previous version” link. I’d like to read the originals, as they were written, in chronological order... there are other ways to consume the compendium if I so desired.
    2habryka4y
    Yeah, they were edited as part of the process of compiling Rationality: AI to Zombies. Usually that just involved adding some sources, cleaning up some sentences, and fixing some typos. The "Response to previous version" link is at the top of every comment that was posted on the previous version of the post. See here: https://res.cloudinary.com/lesswrong-2-0/image/upload/v1577074008/Screen_Shot_2019-12-22_at_7.48.08_PM_n9bcp3.png
    2[anonymous]4y
    I see it now. Is there some way to make the original article the default View? Or a link to the prior version at the top of the article?
    2habryka4y
    You can click on the date-stamp at the top of the post and select the earliest version from there. 
    -1mako yass5y
    Hmm. Reading. Okay. Summary: All of Eliezer's writing on this assumed the context of AGI/applied epistemology. That wasn't obvious from the materials, and it did not occur to this group of pure mathematicians to assume that same focus, because they're pure mathematicians and because of the activity they had decided to engage in on that day.

    What are the odds that the face showing is 1? Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20)

    I'm years late to this party, and probably missing something obvious. But I'm confused by Yudkowsky's math here. Wouldn't it be more correct to say that the prior odds of rolling a 1 are 1:5, which corresponds to a probability of 1/6 or 0.1666...? If odds of 1:5 correspond to a probability of 1/5 = 0.20, that makes me think there are 5 sides to this six-sided die, each side having equal probability.

    Put differently: when I think of how to ...

    2Tetraspace3y
    The real number 0.20 isn't a probability, it's just the same odds but written in a different way to make it possible to multiply (specifically you want some odds product * such that A:B * C:D = AC:BD). You are right about how you would convert the odds into a probability at the end.
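Tetraspace's odds product is just Bayes' rule written in odds form: multiply the prior odds by the likelihood ratio, then convert back to a probability at the end, as the commenter says. A minimal Python sketch (the 4:1 likelihood ratio is made up purely for illustration):

```python
from fractions import Fraction

def update_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(o):
    """Convert odds O back to a probability via P = O / (1 + O)."""
    return o / (1 + o)

prior = Fraction(1, 5)       # odds 1:5 that the die shows a 1, i.e. the 0.20
lr = Fraction(4, 1)          # hypothetical evidence, 4x likelier given a 1
posterior = update_odds(prior, lr)            # odds 4:5
assert odds_to_prob(posterior) == Fraction(4, 9)   # probability only at the end
```

Note that the intermediate 4/5 is, as Tetraspace says, an odds and not a probability; only the final conversion produces one.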

    It's a nice analogy, but it all rests on whether infinite evidence is a thing or not, and there aren't arguments one way or the other here. (Sure, infinite evidence would mean "whatever log odds you come up with, this is even stronger", but that doesn't rule out that it is a thing.)

    Like, how much evidence for the hypothesis "I'll perceive the die to come up a 4" does the event "Ok, die was thrown and I am perceiving it to be a 3" provide? Or how much evidence do I have of being conscious right now when I am feeling like something? I think any answer different from infinity is just playing a word game.

    Wouldn't the prior odds in the bell example be 1:4 when the chance is 0.2? But it's written as 1:5.

    1tiuxtj7mo
    The prior probability of rolling a 1 is 1/6 ≈ 0.167. The prior odds of rolling a 1 are 1:5 = 1/5 = 0.2. Some sources would call the 0.2 a "chance" to communicate that it's an odds and not a probability, but Eliezer doesn't seem to do that; he just uses "chance" as a synonym for probability. Don't get confused: the 20% probability of hearing the bell is not the 0.2 from earlier.
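The two conversions tiuxtj is distinguishing can be checked mechanically against the O = P/(1 - P) and P = O/(1 + O) formulas from the post. A minimal Python sketch (illustrative, not from the thread):

```python
from fractions import Fraction

def prob_to_odds(p):
    """O = P / (1 - P); breaks down at P = 1, where the odds would be infinite."""
    return p / (1 - p)

def odds_to_prob(o):
    """P = O / (1 + O), the inverse mapping."""
    return o / (1 + o)

p_one = Fraction(1, 6)                           # probability of rolling a 1
assert prob_to_odds(p_one) == Fraction(1, 5)     # odds 1:5, i.e. the 0.2 above
assert odds_to_prob(Fraction(1, 5)) == p_one     # round trip recovers 1/6
```

So both commenters are right: 1/6 is the probability, 1/5 is the same belief expressed as odds.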

    When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other.

    What is "amount of evidence" in this sentence supposed to mean? Is it the same idea as the "bits of evidence" mentioned previously in these posts?

    The only way I can interpret this sentence is as a definition of "amount of evidence", but then I don't understand the point of highlighting the sentence as if it's saying something more significant.

    1tiuxtj7mo
    Yes, it's bits of evidence.
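For readers who want the "bits of evidence" reading made concrete, a small sketch (my own illustration, assuming the usual base-2 log odds): the distance between two credences in log-odds space is the number of bits needed to move between them, and each bit doubles the odds.

```python
import math

def log_odds_bits(p):
    """Log odds in bits: log2(P / (1 - P)). Undefined at P = 0 and P = 1,
    which is exactly the restriction the post is about."""
    return math.log2(p / (1 - p))

# Moving from 50% to 80% takes 2 bits (odds 1:1 -> 4:1), and moving
# from 80% to 16/17 ~ 94.1% takes 2 more bits (odds 4:1 -> 16:1):
assert math.isclose(log_odds_bits(0.8) - log_odds_bits(0.5), 2.0)
assert math.isclose(log_odds_bits(16 / 17) - log_odds_bits(0.8), 2.0)
```

This is why the highlighted sentence is more than a definition: in probability space those two moves look very different (0.30 versus 0.14), but in log-odds space they are the same size.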