I wonder what your life must be like. The way you write, it sounds as if you spend a lot of your time trying to convince crazy people (by which I mean most of humanity, of course) to be less crazy and more rational, like us. Why not just ignore them?
Then I looked at your Wikipedia entry and noticed how young you are. Ah! When I was your age, I was also trying to convert everybody. My endless arguments about software development methods, circa 1994, are still in Google's Usenet archive. So, who am I to talk?
(Note: Mostly I write comments that complain about...
I really enjoy your deep analysis of topics, but might I suggest writing shorter entries a bit more often?
Sam, if I write shorter entries, I'll never get everything said.
James: Snort. One of these days I'll do a post on "maturity bias".
Oh Eliezer, why'd you have to toss that parenthetical in about priors? The rest of the post is so wonderful. But the priors thing... hell, for my part, the objection isn't to priors that aren't imposed by some Authority; it's to priors that are completely pulled out of one's arse. Demanding something beyond the whim of some metaphorical marble bouncing about in one's brain before one gets to make a probability statement is hardly the same as demanding capital-A Authority.
The main reason people think a probability of 100% is necessary is that they assume that any other probability implies a subjective feeling of doubt, and they are aware that it is impossible to go through life in a continuous state of subjective doubt about whether or not food is necessary to sustain one's life and the like.
Once someone has separated the probability from this subjective feeling, they can see that a subjective feeling of certainty can be justified in many cases even though the probability is less than 100%. Once this has been admitted, I think most people would not have a problem with admitting that 100% probabilities are not possible.
I once thought I had a fast, crushing argument against the existence of God. I would point to various objects around me and ask "What does that do?" e.g. point at a beach ball and they would say "bounce," point at a bird and they would say "sing." And I would triumphantly say, "See, God can't exist!" and they would look at me blankly.
In my mind, every object I had ever seen did its own peculiar thing - that is, it didn't do "just anything." Therefore the idea of omnipotence - the ability to make objects do...
Practically all words (e.g. "dead") actually cut across a continuum; maybe we should reclaim the word "certainty". We are certain that evolution is how life got to be what it is, because the level of doubt is so low you can pretty much forget about it. Any other meaning you could assign to the word "certain" makes it useless because everything falls on one side.
Denis, you will definitely enjoy this one.
Thinking of science in religious terms makes the whole thing fall over, for everyone. The only way you can have 100% certainty in something is if it's not falsifiable. The only way something can be unfalsifiable is if it is mysterious, ethereal and makes no testable predictions.
My withering rejoinder? "Yes, you may have god. But do you have any knowledge?"
'Any other meaning you could assign to the word "certain" makes it useless because everything falls on one side.'
Yes, exactly. The concept of "certainty" as colloquially used has no referents. It is such a strict standard that the only things that could possibly be referents for it are statements made by an omniscient entity. A statement by any lesser entity could be wrong and therefore could not be a referent. We are beating ourselves up over a concept no more valid than "unicorn."
Ian, your God argument doesn't follow:
1) Objects behave in certain, predictable ways
2) God can make objects behave arbitrarily
4) No objects behave arbitrarily
5) There is no God
Hidden argumentation:
3) Therefore, God WILL make things behave arbitrarily
You can't assume that an omnipotent God will behave in any particular way.
What happens when an immovable object meets an irresistible force?
I think you've mischaracterized Ian's argument. He seems to be arguing that because everything in his empirical experience behaves in particular ways and appears incapable of behaving arbitrarily, this is strong evidence that no other being could exist which is capable of behaving arbitrarily.
I think the real weakness of this argument is that the characterization of things as behaving in particular ways is way too simplistic. Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or any of a thousand other ...
LG - Your objection is only valid if you assume I am starting with the idea of omnipotence and trying to use the evidence to disprove it. In fact, I am starting with the evidence and showing that the idea of omnipotence can't be arrived at without contradiction.
1) Objects behave in certain, predictable ways
2) Therefore the suggestion that someone could make an object behave arbitrarily contradicts the evidence
3) Therefore the idea of "omnipotence" contradicts the evidence
4) Therefore the idea of God contradicts the evidence
It's a different style of reasoning: starting with reality vs. starting with imagination and then using reality only as a test.
Ian, are you arguing that the concept of omnipotence is incoherent, or merely (as Michael seems to have interpreted you) that we have no reason to believe that any omnipotent entity actually exists?
If you really mean the latter, then I suspect most people here will agree with you: if one does not observe any evidence for omnipotence, and one accepts Occam's razor (as reasonable people do), then one concludes that no omnipotent entity exists, unless and until strong evidence to the contrary comes up.
But it remains the case that the idea of omnipotence is c...
Here's an example: some time ago I was discussing evolution with a creationist, and was asked "Can you prove it?" I responded that "prove" isn't the appropriate word, but rather scientists gather and evaluate evidence to see what position the evidence most clearly supports. He crowed in jubilation. "Then you don't have any proof!" he exclaimed.
So my response in that situation has changed. I now respond, "Yes, we have the same level of proof that sends people to death row: We've got the DNA!" That's adapted from S...
Ian, your argument fails not merely because premise 1 isn't established apodictically. (Which is the flaw of inductive reasoning generally, but which, as Eliezer tries to point out to the religious, doesn't mean we don't have good reason to believe it.)
It also fails because we have counterexamples up the wazoo. Michael's point about sentient creatures is one of them. But we can generate a lot of others just by diddling around the space in which we define "objects." Balls bounce and roll, bowling balls just roll, spherical objects generally do...
Eliezer's use of "the one" is not an error or a Matrix reference, it's a deliberate echo of an ancient rabbinical trope. (Right, Eliezer?)
I think Ian makes an important point: people give their ability to imagine something the same weight as evidence. The most gratuitous example of this, relevant here because it's the impetus for inductive probabilism, is the so-called "problem of induction." Say we have two laws concerning the future evolution of some system, call them L1 and L2, such that at some future time t L2(t) gives a result that is defined only as being NOT the result given by L1(t). L1 is based on observation. L2 represents my ability to imagine that my observations will fail to hold at some future time t. The problem of induction is a result of giving MORE weight to L2 than L1.
Actually, I didn't realize "the one comes to us and says" was a rabbinical borrowing until it was pointed out to me. But it seems to have the right tone, and it's syntactical; I care not whether it is grammatical.
Poke, that's a really unhelpful way of thinking about the problem of induction. The problem of induction is a problem of logic in the first instance -- a description of the fact that we do have absolute knowledge of the truth of deductive arguments (conditional on the premises being true) but we don't have absolute knowledge of the truth of inductive arguments. And that's just because the conclusion of a deductive argument is (in some sense) contained in the premises, whereas the conclusion of a generalization isn't contained in the individual observatio...
Michael: "Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or any of a thousand other things." Paul: "It also fails because we have counterexamples up the wazoo."
But even if an object behaves thousands of ways, it is still behaving in those ways and only those ways. If we want to work with it, we must follow cause and effect; we can't simply will it to do what we want. That is the case for all objects I know of; there are no counter-examples.
Z. M. Davis: "are you arguing that the concept of omni...
Paul Gowder,
I think your response is too general. How does the problem of induction being a deductive argument make the conclusion any less absurd? It's a deductive argument that takes as its premise my ability to imagine something being otherwise. That makes sense if you're an Empiricist philosopher, since you accept an Empiricist psychology a priori, but not a lot of sense if you're a scientist or committed to naturalism. Further, the difference you cite between deductive and inductive arguments (that the former is certain and the latter not) is the conclusion of the problem of induction; you can't use it to argue for the problem of induction.
Poke: let's attack the problem a different way. You seem to want to cast doubt on the difference along the dimension of certainty between induction and deduction. ("the difference you cite between deductive and inductive arguments (that the former is certain and the latter not), is the conclusion of the problem of induction; you can't use it to argue for the problem of induction")
Either deduction and induction are different along the dimension of certainty, or they're not. So there are four possibilities. induction = certain, deduction = cert...
Maybe you should try telling some parables about people who thought they had certain knowledge. Maybe some of them should include other people who did not think their knowledge was certain.
I cannot accept that Probability must be applied to everything, which of course indirectly states that there are no absolutes, since probability has no 0 or 1.
If you discard absolutes, you must be willing to accept mysticism and contradictions.
I can create a long list of false or contradictory statements, and anyone who lives by probabilities must obediently tell me that every one of them is possible.
"Does God exist?" "Probably not, but it's possible."
"Can he create a boulder that he cannot lift?" "Probably not, but i
In the world of the unenlightened ones, there is authority and un-authority. What can be trusted, can be trusted; what cannot be trusted, you may as well throw away. There are good sources of information and bad sources of information.
This is pretty much the standard argument against Wikipedia. It fails to address the question of "what's it for?"
I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth. If that would change your mind, you can't say you're absolutely certain of the Virgin Birth.
I think the latter statement is equivalent to this:
V = Virgin Birth
G = God appears and proclaims ~V
P(V|G) < 1
∴P(V) < 1
But that argument is predicated on P(G) > 0. It is internally consistent to believe P(V|G) < 1 and yet P(V) = 1, as long as one also believes P(G) = 0, i.e. one is certain that God will not appear and proclaim ~V.
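To make the role of P(G) concrete, here is a minimal numeric sketch of the law of total probability that both readings rely on (the numbers are purely illustrative):

```python
# Law of total probability: P(V) = P(V|G)*P(G) + P(V|~G)*(1 - P(G)).
# The numbers below are illustrative only; the point is the structure.

def total_probability(p_v_given_g, p_v_given_not_g, p_g):
    """Return P(V) from the two conditionals and P(G)."""
    return p_v_given_g * p_g + p_v_given_not_g * (1 - p_g)

# If P(G) > 0 and P(V|G) < 1, then P(V) < 1, whatever P(V|~G) is:
print(total_probability(p_v_given_g=0.0, p_v_given_not_g=1.0, p_g=0.001))  # 0.999

# But with P(G) = 0, believing P(V|G) < 1 is compatible with P(V) = 1:
print(total_probability(p_v_given_g=0.0, p_v_given_not_g=1.0, p_g=0.0))    # 1.0
```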
For technical reasons of probability theory, if it's theoretically possible for you to change your mind about something, it can't have a probability exactly equal to one.
This is supposed to be an argument against giving anything a 100% probability. I do agree with the concept, but this particular argument seems wrong. It's based on Conservation of Expected Evidence (if the "technical reasons of probability theory" refer to something else, let me know). However, Bayes' rule doesn't just imply that "having a chance of changing your mind...
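For reference, one way to spell out those "technical reasons" in a few lines, as a minimal sketch using Bayes' theorem directly: if P(H) = 1, then P(H|E) = 1 for any evidence E with P(E) > 0, so no possible observation can move the probability off 1.

```python
# If P(H) = 1, then P(H and E) = P(E) for any event E, so
#   P(H|E) = P(H and E) / P(E) = 1   whenever P(E) > 0.
# Shown numerically with Bayes' theorem (the likelihoods are arbitrary):

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# With a prior of exactly 1, the posterior is 1 no matter what the evidence says:
print(posterior(1.0, 0.001, 0.999))  # 1.0
print(posterior(1.0, 0.5, 0.5))      # 1.0
```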
Foolish mortal, the Quantitative Way is beyond your comprehension, and the beliefs you lightly name ‘certain’ are less assured than the least of our mighty hypotheses.
Have you considered selling merch? I'm infinitely certain I'd buy a T-shirt with that quote.
The Dalai Lama stated that "If science proves some belief of Buddhism wrong, then Buddhism will have to change."
I like the guy :)
Another problem with some people is that they don't consciously believe (or won't openly admit) that they have absolute certainty. In their speech, they say that they doubt this and that, that they "cannot know everything," but I guess that's mostly a trick for them to say "and neither do you." With them, one first needs to convince them that they are lying to themselves before having a talk about certainty vs. uncertainty.
The one comes to you and loftily says: "Science doesn't really know anything. All you have are theories—you can't know for certain that you're right. You scientists changed your minds about how gravity works—who's to say that tomorrow you won't change your minds about evolution?"
Behold the abyssal cultural gap. If you think you can cross it in a few sentences, you are bound to be sorely disappointed.
In the world of the unenlightened ones, there is authority and un-authority. What can be trusted, can be trusted; what cannot be trusted, you may as well throw away. There are good sources of information and bad sources of information. If scientists have changed their stories ever in their history, then science cannot be a true Authority, and can never again be trusted—like a witness caught in a contradiction, or like an employee found stealing from the till.
Plus, the one takes for granted that a proponent of an idea is expected to defend it against every possible counterargument and confess nothing. All claims are discounted accordingly. If even the proponent of science admits that science is less than perfect, why, it must be pretty much worthless.
When someone has lived their life accustomed to certainty, you can't just say to them, "Science is probabilistic, just like all other knowledge." They will accept the first half of the statement as a confession of guilt, and dismiss the second half as a flailing attempt to avoid judgment by accusing everyone else.
You have admitted you are not trustworthy—so begone, Science, and trouble us no more!
One obvious source for this pattern of thought is religion, where the scriptures are alleged to come from God; therefore to confess any flaw in them would destroy their authority utterly; so any trace of doubt is a sin, and claiming certainty is mandatory whether you're certain or not.
But I suspect that the traditional school regimen also has something to do with it. The teacher tells you certain things, and you have to believe them, and you have to recite them back on the test. But when a student makes a suggestion in class, you don't have to go along with it—you're free to agree or disagree (it seems) and no one will punish you.
This experience, I fear, maps the domain of belief onto the social domains of authority, of command, of law. In the social domain, there is a qualitative difference between absolute laws and nonabsolute laws, between commands and suggestions, between authorities and unauthorities. There seems to be strict knowledge and unstrict knowledge, like a strict regulation and an unstrict regulation. Strict authorities must be yielded to, while unstrict suggestions can be obeyed or discarded as a matter of personal preference. And Science, since it confesses itself to have a possibility of error, must belong in the second class.
(I note in passing that I see a certain similarity to they who think that if you don't get an Authoritative probability written on a piece of paper from the teacher in class, or handed down from some similar Unarguable Source, then your uncertainty is not a matter for Bayesian probability theory. Someone might—gasp!—argue with your estimate of the prior probability. It thus seems to the not-fully-enlightened ones that Bayesian priors belong to the class of beliefs proposed by students, and not the class of beliefs commanded you by teachers—it is not proper knowledge.)
The abyssal cultural gap between the Authoritative Way and the Quantitative Way is rather annoying to those of us staring across it from the rationalist side. Here is someone who believes they have knowledge more reliable than science's mere probabilistic guesses—such as the guess that the moon will rise in its appointed place and phase tomorrow, just like it has every observed night since the invention of astronomical record-keeping, and just as predicted by physical theories whose previous predictions have been successfully confirmed to fourteen decimal places. And what is this knowledge that the unenlightened ones set above ours, and why? It's probably some musty old scroll that has been contradicted eleventeen ways from Sunday, and from Monday, and from every day of the week. Yet this is more reliable than Science (they say) because it never admits to error, never changes its mind, no matter how often it is contradicted. They toss around the word "certainty" like a tennis ball, using it as lightly as a feather—while scientists are weighed down by dutiful doubt, struggling to achieve even a modicum of probability. "I'm perfect," they say without a care in the world, "I must be so far above you, who must still struggle to improve yourselves."
There is nothing simple you can say to them—no fast crushing rebuttal. By thinking carefully, you may be able to win over the audience, if this is a public debate. Unfortunately you cannot just blurt out, "Foolish mortal, the Quantitative Way is beyond your comprehension, and the beliefs you lightly name 'certain' are less assured than the least of our mighty hypotheses." It's a difference of life-gestalt that isn't easy to describe in words at all, let alone quickly.
What might you try, rhetorically, in front of an audience? Hard to say... maybe:
But, in a way, the more interesting question is what you say to someone not in front of an audience. How do you begin the long process of teaching someone to live in a universe without certainty?
I think the first, beginning step should be understanding that you can live without certainty—that if, hypothetically speaking, you couldn't be certain of anything, it would not deprive you of the ability to make moral or factual distinctions. To paraphrase Lois Bujold, "Don't push harder, lower the resistance."
One of the common defenses of Absolute Authority is something I call "The Argument From The Argument From Gray", which runs like this:
Reversed stupidity is not intelligence. You can't arrive at a correct answer by reversing every single line of an argument that ends with a bad conclusion—it gives the fool too much detailed control over you. Every single line must be correct for a mathematical argument to carry. And it doesn't follow, from the fact that moral relativists say "The world isn't black and white", that this is false, any more than it follows from Stalin's belief that 2 + 2 = 4 that "2 + 2 = 4" is false. The error (and it only takes one) is in the leap from the two-color view to the single-color view, that all grays are the same shade.
It would concede far too much (indeed, concede the whole argument) to agree with the premise that you need absolute knowledge of absolutely good options and absolutely evil options in order to be moral. You can have uncertain knowledge of relatively better and relatively worse options, and still choose. It should be routine, in fact, not something to get all dramatic about.
I mean, yes, if you have to choose between two alternatives A and B, and you somehow succeed in establishing knowably certain well-calibrated 100% confidence that A is absolutely and entirely desirable and that B is the sum of everything evil and disgusting, then this is a sufficient condition for choosing A over B. It is not a necessary condition.
Oh, and: Logical fallacy: Appeal to consequences of belief.
Let's see, what else do they need to know? Well, there's the entire rationalist culture which says that doubt, questioning, and confession of error are not terrible shameful things.
There's the whole notion of gaining information by looking at things, rather than being proselytized. When you look at things harder, sometimes you find out that they're different from what you thought they were at first glance; but it doesn't mean that Nature lied to you, or that you should give up on seeing.
Then there's the concept of a calibrated confidence—that "probability" isn't the same concept as the little progress bar in your head that measures your emotional commitment to an idea. It's more like a measure of how often, pragmatically, in real life, people in a certain state of belief say things that are actually true. If you take one hundred people and ask them to list one hundred statements of which they are "absolutely certain", how many will be correct? Not one hundred.
If anything, the statements that people are really fanatic about are far less likely to be correct than statements like "the Sun is larger than the Moon" that seem too obvious to get excited about. For every statement you can find of which someone is "absolutely certain", you can probably find someone "absolutely certain" of its opposite, because such fanatic professions of belief do not arise in the absence of opposition. So the little progress bar in people's heads that measures their emotional commitment to a belief does not translate well into a calibrated confidence—it doesn't even behave monotonically.
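One way to make "calibrated confidence" operational, as a minimal sketch with invented data: group statements by the confidence their speakers assign, and compare each stated level with how often those statements actually turn out true.

```python
# A minimal sketch of measuring calibration: for each stated confidence level,
# compare it with the observed hit rate of statements made at that level.
# The data below are invented purely for illustration.

from collections import defaultdict

statements = [
    # (stated_confidence, actually_true)
    (0.99, True), (0.99, True), (0.99, False), (0.99, True),
    (0.70, True), (0.70, False), (0.70, True),
]

buckets = defaultdict(list)
for confidence, outcome in statements:
    buckets[confidence].append(outcome)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%}, observed {hit_rate:.0%} over {len(outcomes)} statements")
```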
As for "absolute certainty"—well, if you say that something is 99.9999% probable, it means you think you could make one million equally strong independent statements, one after the other, over the course of a solid year or so, and be wrong, on average, around once. This is incredible enough. (It's amazing to realize we can actually get that level of confidence for "Thou shalt not win the lottery.") So let us say nothing of probability 1.0. Once you realize you don't need probabilities of 1.0 to get along in life, you'll realize how absolutely ridiculous it is to think you could ever get to 1.0 with a human brain. A probability of 1.0 isn't just certainty, it's infinite certainty.
In fact, it seems to me that to prevent public misunderstanding, maybe scientists should go around saying "We are not INFINITELY certain" rather than "We are not certain." The latter phrase, in ordinary discourse, suggests that you know some specific reason for doubt.