There are three great besetting sins of rationalists in particular, and the third of these is underconfidence.  Michael Vassar regularly accuses me of this sin, which makes him unique among the entire population of the Earth.

    But he's actually quite right to worry, and I worry too, and any adept rationalist will probably spend a fair amount of time worrying about it.  When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result.  That's what makes a lot of cognitive subtasks so troublesome—you know you're biased but you're not sure how much, and you don't know if you're correcting enough—and so perhaps you ought to correct a little more, and then a little more, but is that enough?  Or have you, perhaps, far overshot?  Are you now perhaps worse off than if you hadn't tried any correction?

    You contemplate the matter, feeling more and more lost, and the very task of estimation begins to feel increasingly futile...

    And when it comes to the particular questions of confidence, overconfidence, and underconfidence—being interpreted now in the broader sense, not just calibrated confidence intervals—then there is a natural tendency to cast overconfidence as the sin of pride, out of that other list which never warned against the improper use of humility or the abuse of doubt.  To place yourself too high—to overreach your proper place—to think too much of yourself—to put yourself forward—to put down your fellows by implicit comparison—and the consequences of humiliation and being cast down, perhaps publicly—are these not loathsome and fearsome things?

    To be too modest seems lighter by comparison; it wouldn't be so humiliating to be called on it publicly—indeed, finding out that you're better than you imagined might come as a warm surprise—and to put yourself down, and others implicitly above, has a positive tinge of niceness about it; it's the sort of thing that Gandalf would do.

    So if you have learned a thousand ways that humans fall into error and read a hundred experimental results in which anonymous subjects are humiliated by their overconfidence—heck, even if you've just read a couple of dozen—and you don't know exactly how overconfident you are—then yes, you might genuinely be in danger of nudging yourself a step too far down.

    I have no perfect formula to give you that will counteract this.  But I have an item or two of advice.

    What is the danger of underconfidence?

    Passing up opportunities.  Not doing things you could have done, but didn't try (hard enough).

    So here's a first item of advice:  If there's a way to find out how good you are, the thing to do is test it.  A hypothesis affords testing; hypotheses about your own abilities likewise.  Once upon a time it seemed to me that I ought to be able to win at the AI-Box Experiment; and it seemed like a very doubtful and hubristic thought; so I tested it.  Then later it seemed to me that I might be able to win even with large sums of money at stake, and I tested that, but I only won 1 time out of 3.  So that was the limit of my ability at that time, and it was not necessary to argue myself upward or downward, because I could just test it.

    One of the chief ways that smart people end up stupid, is by getting so used to winning that they stick to places where they know they can win—meaning that they never stretch their abilities, they never try anything difficult.

    It is said that this is linked to defining yourself in terms of your "intelligence" rather than "effort", because then winning easily is a sign of your "intelligence", whereas failing on a hard problem could have been interpreted in terms of a good effort.

    Now, I am not quite sure this is how an adept rationalist should think about these things: rationality is systematized winning and trying to try seems like a path to failure.  I would put it this way:  A hypothesis affords testing!  If you don't know whether you'll win on a hard problem—then challenge your rationality to discover your current level.  I don't usually hold with congratulating yourself on having tried—it seems like a bad mental habit to me—but surely not trying is even worse.  If you have cultivated a general habit of confronting challenges, and won on at least some of them, then you may, perhaps, think to yourself "I did keep up my habit of confronting challenges, and will do so next time as well".  You may also think to yourself "I have gained valuable information about my current level and where I need improvement", so long as you properly complete the thought, "I shall try not to gain this same valuable information again next time".

    If you win every time, it means you aren't stretching yourself enough.  But you should seriously try to win every time.  And if you console yourself too much for failure, you lose your winning spirit and become a scrub.

    When I try to imagine what a fictional master of the Competitive Conspiracy would say about this, it comes out something like:  "It's not okay to lose.  But the hurt of losing is not something so scary that you should flee the challenge for fear of it.  It's not so scary that you have to carefully avoid feeling it, or refuse to admit that you lost and lost hard.  Losing is supposed to hurt.  If it didn't hurt you wouldn't be a Competitor.  And there's no Competitor who never knows the pain of losing.  Now get out there and win."

    Cultivate a habit of confronting challenges—not the ones that can kill you outright, perhaps, but perhaps ones that can potentially humiliate you.  I recently read of a certain theist that he had defeated Christopher Hitchens in a debate (severely so; this was said by atheists).  And so I wrote at once to the Bloggingheads folks and asked if they could arrange a debate.  This seemed like someone I wanted to test myself against.  Also, it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that, because I think I should be able to handle damn near anything on the fly, and I desire to learn whether this thought is correct; and I am willing to risk public humiliation to find out.  Note that this is not self-handicapping in the classic sense—if the debate is indeed arranged (I haven't yet heard back), and I do not prepare, and I fail, then I do lose those stakes of myself that I have put up; I gain information about my limits; I have not given myself anything I consider an excuse for losing.

    Of course this is only a way to think when you really are confronting a challenge just to test yourself, and not because you have to win at any cost.  In that case you make everything as easy for yourself as possible.  To do otherwise would be spectacular overconfidence, even if you're playing tic-tac-toe against a three-year-old.

    A subtler form of underconfidence is losing your forward momentum—amid all the things you realize that humans are doing wrong, that you used to be doing wrong, of which you are probably still doing some wrong.  You become timid; you question yourself but don't answer the self-questions and move on; when you hypothesize your own inability you do not put that hypothesis to the test.

    Perhaps without there ever being a watershed moment when you deliberately, self-visibly decide not to try at some particular test... you just.... slow..... down......

    It doesn't seem worthwhile any more, to go on trying to fix one thing when there are a dozen other things that will still be wrong...

    There's not enough hope of triumph to inspire you to try hard...

    When you consider doing any new thing, a dozen questions about your ability at once leap into your mind, and it does not occur to you that you could answer the questions by testing yourself...

    And having read so much wisdom of human flaws, it seems that the course of wisdom is ever doubting (never resolving doubts), ever the humility of refusal (never the humility of preparation), and just generally, that it is wise to say worse and worse things about human abilities, to pass into feel-good feel-bad cynicism.

    And so my last piece of advice is another perspective from which to view the problem—by which to judge any potential habit of thought you might adopt—and that is to ask:

    Does this way of thinking make me stronger, or weaker?  Really truly?

    I have previously spoken of the danger of reasonableness—the reasonable-sounding argument that we should two-box on Newcomb's problem, the reasonable-sounding argument that we can't know anything due to the problem of induction, the reasonable-sounding argument that we will be better off on average if we always adopt the majority belief, and other such impediments to the Way.  "Does it win?" is one question you could ask to get an alternate perspective.  Another, slightly different perspective is to ask, "Does this way of thinking make me stronger, or weaker?"  Does constantly reminding yourself to doubt everything make you stronger, or weaker?  Does never resolving or decreasing those doubts make you stronger, or weaker?  Does undergoing a deliberate crisis of faith in the face of uncertainty make you stronger, or weaker?  Does answering every objection with a humble confession of your fallibility make you stronger, or weaker?

    Are your current attempts to compensate for possible overconfidence making you stronger, or weaker?  Hint:  If you are taking more precautions, more scrupulously trying to test yourself, asking friends for advice, working your way up to big things incrementally, or still failing sometimes but less often than you used to, you are probably getting stronger.  If you are never failing, avoiding challenges, and feeling generally hopeless and dispirited, you are probably getting weaker.

    I learned the first form of this rule at a very early age, when I was practicing for a certain math test, and found that my score was going down with each practice test I took, and noticed going over the answer sheet that I had been pencilling in the correct answers and erasing them.  So I said to myself, "All right, this time I'm going to use the Force and act on instinct", and my score shot up to above what it had been in the beginning, and on the real test it was higher still.  So that was how I learned that doubting yourself does not always make you stronger—especially if it interferes with your ability to be moved by good information, such as your math intuitions.  (But I did need the test to tell me this!)

    Underconfidence is not a unique sin of rationalists alone.  But it is a particular danger into which the attempt to be rational can lead you.  And it is a stopping mistake—an error which prevents you from gaining that further experience which would correct the error.

    Because underconfidence actually does seem quite common among aspiring rationalists whom I meet—though rather less common among rationalists who have become famous role models—I would indeed name it third among the three besetting sins of rationalists.


    I wonder if the decline of apprenticeships has made overconfidence and underconfidence more common and more severe.

    I'm not a history expert, but it seems to me that a blacksmith's apprentice 700 years ago wouldn't have had to worry about over/underconfidence in his skill. (Gender-neutral pronouns intentionally not used here!) He would have known exactly how skilled he was by comparing himself to his master every day, and his master's skill would have been a known quantity, since his master had been accepted by a guild of mutually recognized masters.

    Nowadays, because of several factors, calibrating your judgement of your skill seems to be a lot harder. Our education system is completely different, and regardless of whatever else it does, it doesn't seem to be very good at providing reliable feedback to its students, or at getting them to properly understand the importance of that feedback and respond accordingly. Our blacksmith's apprentice (let's call him John) knows when he's screwed up - the sword or whatever that he's made breaks, or his master points out how it's flawed. And John knows why this is important - if he doesn't fix the problem, he's not going to be able to earn a living.

    Whereas a mode...

    A friend of mine, normal in most ways, has exceptionally good mental imagery, such that one time she visited my house and saw a somewhat complex 3-piece metalwork puzzle in my living room and thought about it later that evening after she had left, and was able to solve it within moments of picking it up when she visited a second time. At first I was amazed at this, but I soon became more amazed that she didn't find this odd, and that no one had ever realized she had any particular affinity for this kind of thing in all the time she'd been in school. I'm curious as to how many cognitive skills like this there are to excel at and if many people are actually particularly good at one or many of them without realizing it due to a lack of good tests for various kinds of cognition.

    My usual self-testing example is something like "can I write this program correctly on the very first try?". That's a hard challenge, integrated into my everyday work.

    I should try to remember to try this the next time I have a short piece of code to write. Furthermore, it's the sort of thing that makes me slightly uncomfortable and is therefore easy to forget, so I should try harder to remember it.

    In general, this sort of thing seems like a very useful technique if you can do it without endangering your work. Modded parent up.

    Without risk, there is no growth. If your practice isn't making you feel scared and uncomfortable, it's not helping. Imagine training for a running race without any workouts that raise your heart rate and make you breathe hard. Feeling out of your comfort zone and at risk of failure is something everybody should seek out on a regular basis.
    I never thought of that as a thing you could do. I think when my code compiles on the first try, it's more often than not a sign of something very wrong. For example, the last time it happened was because I forgot to add the file I was working on to the makefile. Perhaps I should try to learn to code more precisely.
    Heh. (You should use makefiles that automatically build new files, and automatically sense dependencies for rebuild.) As I recall, Eliezer said somewhere (I'm too tired to Google for it) that there is no limit to the amount of intelligence that you can use while programming.
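    The kind of makefile described above can be sketched in GNU Make; this is a minimal illustration rather than anyone's actual build file, and the program name `prog` and the use of the compiler's `-MMD`/`-MP` dependency-generation flags (supported by gcc and clang) are my assumptions:

    ```make
    # Pick up any new .c files automatically on the next run of make.
    SRCS := $(wildcard *.c)
    OBJS := $(SRCS:.c=.o)
    DEPS := $(OBJS:.o=.d)

    prog: $(OBJS)
    	$(CC) -o $@ $^

    # -MMD writes a .d file listing the headers each .c actually includes;
    # -MP adds phony targets so deleting a header doesn't break the build.
    %.o: %.c
    	$(CC) -MMD -MP -c -o $@ $<

    # Pull in the generated dependency files if they exist yet.
    -include $(DEPS)
    ```

    With this pattern, adding a new source file or editing a shared header triggers exactly the rebuilds needed, with no hand-maintained file list to forget.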

    it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that

    I urge you to prepare properly. Not only Hitchens but Richard Carrier and several other atheists have been humiliated in debate with him, by their own admission. Winning at all is challenge enough, and would be a great service to the world. Given how much of a blow you would find it to lose having fully prepared, I urge you to reconsider whether you're self-handicapping.

    Scientists are frequently advised to never participate in a live debate with a creationist. This is because being right has absolutely nothing to do with winning.

    "Debating creationists on the topic of evolution is rather like playing chess with a pigeon - it knocks the pieces over, craps on the board, and flies back to its flock to claim victory." -- Scott D. Weitzenhoffer

    Debates are not a rationality competition. They're a Dark Arts competition, in which the goal is to use whatever underhanded trick you can come up with in order to convince somebody to side with you. Evidence doesn't matter, because it's trivial to simply lie your ass off and get away with it.

    The only kind of debates worth having are written debates, in which, when someone tells a blatant lie, you can look up the truth somewhere and take all the space you need to explain why it's a lie - and "cite your sources, or you forfeit" is a reasonable rule.

    Indeed. Association fallacy. Eliezer might not think much of his loss, but it would still be seen by people as a loss for "the atheists" and a victory for "the theists". Debate to win!

    Who is this theist? I'm interested in watching these debates. (though obviously without knowledge of the specific case, I agree with ciphergoth. It's not just about you, it's about whoever's watching.)

    I agree with ciphergoth's guess. Eliezer: I agree with ciphergoth and Yvain. Debating, at least as far as the Theist Who (Apparently) Must Not Be Named is concerned, is a performance art more than it is a form of intellectual inquiry, and unless you've done a lot of it you run the severe risk of getting eaten by someone who has, especially if you decide to handicap yourself.

    If you engage in such a debate, the chances are that at least some people will watch or hear it, or merely learn of the result, and change their opinions as a result. (Probably few will change so far as to convert or deconvert: maybe none. Many will find that their views become more or less entrenched.)

    What would you think of a musician who decided to give a public performance without so much as looking at the piece she was going to play? Would you not be inclined to say: "It's all very well to test yourself, but please do it in private"?

    (For what it's worth, I think it's rather unlikely that TTWMNBN will agree to a Bloggingheads-style debate. He would want it to be public. And he might decide that Eliezer isn't high-enough-profile to be worth debating. Remember: for him, among other things, this is propaganda.)

    [EDITED a few minutes after posting to remove the explicit mention of the theist's name]
    Paul Crowley:
    Entirely agreed. There's a chance such a debate could be arranged if the book is a success, though.
    Rot13 is your friend. (Edit: fixed above)
    I already knew that, as you might have inferred from "I agree with ciphergoth's guess" and, er, the fact that I named him in my last paragraph. (That was an oversight which I am about to correct.) Perhaps I should have been more explicit about what guess I was agreeing with. I don't know why the coyness, but perhaps TTWMNBN is suspected of googling for his own name every now and then. Or perhaps ciphergoth was just (semi-)respecting Eliezer's decision not to name him.
    Paul Crowley:
    But you haven't really not named him. Anyone can decipher these posts with a small amount of effort. All that's happened is that this thread has become annoying to read.
    Paul Crowley:
    Jvyyvnz Ynar Penvt (I'm guessing; certainly the only time I've heard it credibly said that Hitchens lost a debate with a theist)
    Well, I already know the proper counter to his pet argument. Hat tip, Tnel Qerfpure for explaining gvzr.
    Paul Crowley:
    He has piles of pet arguments, that's part of his technique; he fires off so many arguments that you can't answer them all. I've watched him and put a lot of thought into how I'd answer him, and I'm still not sure how I can fit the problems with his arguments into the time available in a debate, but I'd start with asking either him or the audience to pick which of his arguments I was going to counter in my reply. In particular, I still don't have a counter to the fine-tuning argument which is short, assumes no foreknowledge, and is entirely intellectually honest. Could you point me to the counter argument you rot-13? Google isn't finding it for me. Thanks!

    In particular, I still don't have a counter to the fine-tuning argument which is short, assumes no foreknowledge, and is entirely intellectually honest.

    The "fine-tuning" argument falls into the script:

    1. Here is a puzzle that scientists can't currently explain
    2. God explains it
    3. Therefore God exists

    If you accept that script you lose the debate, because there will always be some odd fact that can't currently be explained. (And even if it can actually be explained, you won't have time to explain it within the limits of the debate and the audience's knowledge.)

    The trap is that it is a very tempting mistake to try to solve the puzzle yourself. It's highly unlikely that you will succeed, and your opponent will already know the flaws in (and counter-arguments to) most of the existing solution attempts, so can throw those at you. Or if you support a fringe theory (which isn't generally considered in the solution space, but might work), the opponent can portray you as a marginal loon.

    I suspect that the theist wins these debates because most opponents fall into that trap. They are smart enough that they think that they can resolve the puzzle in question, and so walk right into...

    The anthropic principle does technically work, but it admittedly feels like a cheat and I'd expect most audiences not familiar with it already would consider it such. It's not a knock-down counterargument, but it seems to me we don't know enough about physics to say it's actually possible that the universe could be fine-tuned differently. Sure, we can look at a lot of fundamental constants and say, "If that one were different by 1 unit, fusion wouldn't occur," but we don't know if they are interconnected, and I don't think we can accurately model what would occur, so it's possible that it couldn't be different, that other constants would vary with it, and/or that it would make a universe so entirely different from our own that we have no idea what it would be like, so it's quite possible it could support life of some form. Or, reduced into something more succinct, we don't actually know what the universe would look like if we changed fundamental constants (if this is even possible) because the results are beyond our ability to model, so it's quite possible that most possible configurations would support some form of life. Multiverse works too, but again feels like cheating. I also admit there may be facts that undermine this, I'm not super-familiar with the necessary physics.
    If there is no multiverse, "Why is the universe the way it is rather than any other way?" is a perfectly good question to which we haven't found the answer yet. However, theists don't merely ask that question, they use our ignorance as an argument for the existence of a deity. They think a creator is the best explanation for fine-tuning. The obvious counter-argument is that not only is a creator not the best explanation, it's not an explanation at all. We can ask the exact same question about the creator that we asked about the universe: Why is the creator what it is rather than something else? Why isn't 'He' something that couldn't be called a 'creator' at all, like a quark, or a squirrel? Or, to put the whole thing in the right perspective, why is the greater universe formed by the combination of our universe and its creator the way it is, rather than any other way? At this point the theist usually says that God is necessary, or outside of time, which could just as easily be true of the universe as we know it. Or the theist might say that God is eternal, while our universe probably isn't, which is irrelevant. None of these alleged characteristics of God's explain why He's fine-tuned, anyway.
    I was thinking along similar lines but didn't post because I was talking myself in circles. So I gave up and weighted the hypothesis that this kind of philosophy is insoluble. Here's what I wrote: In such a debate, what is the end goal -- what counts as winning the debate question? If they provide a hypothesis that invokes God, is it sufficient to just provide another plausible hypothesis that doesn't? (Then, done.) Or do you really need to address the root of the root of the question: Why are we here? (Even if you have multi-verses, why are they all here?) And "why" isn't really the question anyway. It's just a complaint, "I don't understand the source of everything." ... "If there is a source 'G', I don't understand the source of 'G'." You can't answer that question: The property "always existing" or the transition between "not existing and then existing" is a mystery; it's the one thing atheists and theists can agree on. How does giving it a name mean anything more? So I think the best argument is that invoking God doesn't answer the question either. Unless the problem is really about whether or not this is evidence that something wanted us to be here? Then finding plausible scientific hypotheses for X, Y, Z would never answer the question. You would always have remaining, did someone want this all to be so? And I got stuck there, because if something exists, to what extent was it "willed" has no meaning to me at the moment.
    I haven't read this particular version of the fine-tuning argument, but the general counter-argument is that evolution fine-tuned life (humans) for the universe, not that the universe was fine-tuned for humans.
    Paul Crowley:
    Unfortunately, that doesn't work. Without the fine tuning, the Universe consists of undifferentiated mush, and evolution is impossible.
    That isn't any version of the fine tuning argument I've heard. And it just sounds plain stupid. Who makes this particular argument, and more importantly how do they justify it? It sounds like some wild claim that is just too irrational to refute.
    To me it sounds commonplace. What is the problem you see?
    I don't think this is good enough. There seem to be several physical constants that - if they had been slightly different - would have made any sort of life unlikely.
    That part can be deproblematized (if you will forgive the nonce word) by the anthropic principle: if the universe were unsuited for life, there would be no life to notice that and remark upon it.
    I don't accept that form of the anthropic principle. I am on a planet, even though planets make up only a tiny portion of the universe, because there's (almost) nobody not on a planet to remark on it. The anthropic principle says that you will be where a person is. However, it can't change the universe. The laws of physics aren't going to rewrite themselves just because there was nobody there to see them. That being said, if you combine this with multiple universes, it works. The multiverse is obviously suitable for life somewhere. We are going to end up in one of those places.
    Even in the case of a single infinite universe, the anthropic principle does help - it means that any arbitrarily low success rate for forming life is equally acceptable, so long as it is not identically zero.
    In that case, it would look like the universal constants don't support life at all, but you somehow managed to get lucky and survive anyway, rather than the universal constants appearing to be fine-tuned. If the "universal constants" are different in different areas, then it would basically be a multiverse.
    As I understand it, it's possible to pick out even better constants than what we have. For instance, having a fine structure constant between 6 and 7 would cause all atoms with at least 6 protons to be chemically identical to carbon due to 'atomic collapse'. That would probably help life along noticeably. As things stand, we're pretty marginal. There's a whole lot of not-life out there.
    As I understand it, the vast majority of constants are worse than what we have now. You might be able to find something better, but if this was just chance, we're very lucky as it is. Since you're not usually that lucky, it probably wasn't chance.
    It would probably also completely screw up the triple-alpha process, so that much less carbon will be produced in stars -- assuming stars would be possible in that situation in the first place.
    Would that help really? Most life requires all of CHNOPS. And pretty much all complex life requires at least a few heavier elements, especially iron, copper, silicon, selenium, chlorine, magnesium, zinc, and iodine. Life won't do much if one can't get any elements heavier than carbon.
    It obviously wouldn't be life exactly as we know it, no! I'm pretty confident that if you replaced all the elements heavier than carbon with carbon, some form of life would be able to emerge. Carbon is where the complexity comes from - everything else is optimization. Seriously, that's the most blatant case of the failure of imagination fallacy I've seen since I stopped cruising creationist discussion boards.
    I'm substantially less convinced. While carbon is the main cause of complexity, that's still carbon with other elements. Your options in this hypothetical are hydrogen, helium, lithium, beryllium, boron and carbon and that's it. Helium is effectively out (I think, I don't know enough to be that confident that basic bonding behavior will be that similar when you've drastically altered the fine structure constant.) The chemistry for that set isn't nearly as complicated as that involving full CHNOPS. And the relevant question isn't "can life form with these elements" but rather "how likely is it?" and "how likely is complex life to form"?
    True. But since a universe unsuitable for life seems overwhelmingly the more probable situation, we can still ask why it isn't so. (My own feeling is that the problem has to be resolved by either "God" or "a multiverse". The idea that there's precisely one universe and it just happens to have the conditions for life seems extraordinary.)
    My understanding (I'd have to dig out references) is that the fine tuning may not be as fine as generally believed. Ah, the wikipedia page on the argument has some references on this: In addition to the anthropic type arguments, some theoretical work seems to suggest that the fine tuning isn't. ie, that we don't even need to invoke anthropic reasoning too strongly. Heck, supposedly one can even have stars in a universe with no weak interaction at all. So it may very well be that, even without appealing to anthropic style reasoning in multiverses (which I'm not actually opposed to, but there's stuff there that I still don't understand. Born stats, apparent breakdown of the Aumann Agreement Theorem, etc... so too easy to get stuff wrong) anyways, even without that, it may well be that the fine tuning stuff can be refuted by simply pointing out "looking at the actual physics, the tuning is rather less fine than claimed."
    I agree, but the anthropic principle has always seemed like a bit of cheat -- an explanation that really isn't much of an explanation at all.
    Exactly. The parameters we have define this universe. Any complex system -- presumably most if not all universes -- would have complex patterns. You would just need patterns to evolve that are self-promoting (i.e., accumulative) and evolving, and eventually a sub-pattern will evolve that significantly meta-references. Given that replicating forms can result from simple automata rules and self-referencing appears randomly (in a formal sense) all over the place in a random string (Gödel) it doesn't seem so improbable for such a pattern to emerge. In fact, an interesting question is why is there only one "life" that we know of (i.e., carbon-based)? Once we understand the mechanism of consciousness, we may find that it duplicates elsewhere -- perhaps not in patterns that are accumulative and evolving but briefly, spontaneously. This is totally idle speculation of course. Another argument: There's nothing in Physics that says there isn't a mechanism for how the parameters are chosen. It's just another mystery that hasn't been solved yet -- so far, to date, God has reliably delegated answers regarding questions about the empirical world to Science.
    Yes, that's something I've often thought too. (Not only about this particular theist; the practice of throwing up more not-very-good arguments than can be refuted in the time available seems to be commonplace in debates about religious topics. Quite possibly in all debates, but I haven't watched a broad enough sample to know.)
    Gur Xnynz Pbfzbybtvpny Nethzrag (via Wikipedia). Counter argument in ISBN 0262042339 where gvzr is explained as fhpprffvir senzrf juvpu vapernfr va pbeeryngvba njnl sebz gur bevtvany fgngr. Nothing in physics requires the bevtvany fgngr to have a pnhfr. It might have a ernfba, but you can't spin a theology around that.
    Paul Crowley:
    Thanks. That doesn't sound like the counter-argument I'd present.
    I can't see it being very convincing to anyone who doesn't already know enough physics to be unimpressed by the argument (i.e., TTWMNBN's pet argument) in the first place.
    Why are we talking in ROT-13?
    Puevf Unyydhvfg wrote about how he would debate Jvyyvnz Ynar Penvt on his blog. I found it worthwhile.
    No, winning is good but losing is also useful - we ought to permanently eliminate from the corpus any argument that fails. Even if it wouldn't fail against a blockhead without the intellectual muscle to finesse a counter.
    Paul Crowley:
    Losing is a lot more informative if we build on what we learned last time, don't you think?
    If you permanently eliminate from the gene pool any genes that aren't currently working efficiently, your ability to evolve is limited.
    Eleizer will be humiliated. Even if Eleizer prepares for the debate he will still lose. Eleizer spends too much time thinking rationally for him to be a match for a master debater. I've seen him on Bloggingheads. He doesn't spend nearly enough energy producing the kind of bullshit you are supposed to throw together if you want to be considered victorious in a debate.
    Paul Crowley:
    I disagree; I watched Eliezer vs Adam Frank, and at several points I paused it, trying to work out what I'd say in response to Frank's arguments. I still found that Eliezer got across the counterarguments in a far neater way when I unpaused, and he had a lot less time than I did. (BTW, after hearing that I also learned how his name is pronounced, so I'm better at spelling it correctly: it's Eli-Ezer, four syllables.)
    I have not observed that getting across counterarguments in a neat way is a particularly vital element of what it takes to 'win' a debate.
    Eliezer Yudkowsky:
    I'd read Frank's book. (And I did try to direct him to the webpages whereby he could have read my stuff.) But I think I could've done it equally well on the fly.

    Unfair debate proposal

    You want a debate in which the tables are tilted against you? I see a way to do that which doesn't carry the risks of your current proposal.

    A bunch of us get together on an IRC channel and agree to debate you. We thrash out our initial serve; we then spring the topic and our initial serve on you. You must counter immediately, with no time to prepare. We then go away and mull over your counter, and agree a response, which you must again immediately respond to.

    We can give ourselves more speaking time than you in each exchange, too, if you want to tilt the tables further (I'm imagining the actual serves and responses being delivered as video).

    Since Eliezer won't prepare by watching the theist's earlier debates, one solution could be to just use arguments from those past debates in a simulated debate. As Eliezer prefers, he wouldn't prepare and would have to answer questions immediately. There are two drawbacks: first, it would just be "us" evaluating whether Eliezer performed well (but then, debate performance is always somewhat subjective), and second, we would lose the interaction of question, response, and follow-up question. Nevertheless, Eliezer's off-the-cuff responses to the theist's past questions could be informative.
    Eliezer Yudkowsky:
    You're not theists; a handicap is more appropriate if we're going to be debating theology and you taking the positive... but this does sound interesting, so long as we can find a debate position that I agree with but that others are willing to take the negative of.
    I'm pretty sure it's not required that one agree with a position to debate in its favor.
    In fact, I have a post kicking around on the subject that it's easier in a debate to defend the side you don't agree with. But perhaps Eliezer also believes this and is looking to further handicap himself :)
    This triggered an idea about paranoid debating: require players to submit a preliminary answer in the first few seconds of being presented with the question, then debate.
    This sounds like fun. What would we debate him about?

    I've found some of the characterizations of Craig's arguments and debate style baffling.

    When he debates the existence of god, he always delivers the same five arguments (technically, it's four: his fifth claim is that god can be known directly, independently of any argument). He develops these arguments as carefully as time allows, and defends each of his premises. He uses the kalam cosmological argument, the fine tuning argument, the moral argument, and the argument from the resurrection of Jesus. This can hardly be characterized as dumping.

    Also, his arguments are logically valid; you won't see any 'brain teaser, therefore god!' moves from him. He's not only a 'theologian'; he's a trained philosopher (he actually has two earned PhDs, one in philosophy and one in theology).

    Finally, Craig is at his best when it comes to his responses. He is extremely quick, and is very adept at both responding to criticisms of his arguments, and at taking his opponent's arguments apart.

    Debating William Lane Craig on the topic of god's existence without preparation would be as ill-advised as taking on a well-trained UFC fighter in the octagon without preparation. To extend the analogy further, it would be like thinking it's a good idea because you've won a couple of street fights and want to test yourself.

    I don't think it's a good idea either. But the fact that the debate would be on Bloggingheads rather than in front of an audience with formal speeches and timed rebuttals definitely helps Eliezer. He's free to ask questions, clarify things, etc. So really it's like fighting a UFC fighter in an alley. Not a good idea, but I guess you might have a chance.
    Eliezer Yudkowsky:
    I'd tend to assume that the absence of a moderator makes it easier to abuse the more inexperienced party.
    Well, you'd have more experience with the medium. But at a formal debate he'd give five arguments, each of which would take your entire speaking time to respond to. On Bloggingheads you can ask for his best argument and then spend as much time as you need on it (within Bloggingheads limits, I guess). Also, if you watch formal debates between theists and atheists, the participants often avoid answering the difficult questions. In particular, theists always avoid explaining how invoking God doesn't merely obscure and push the question of creation back a step. This medium gives you an opportunity to press things, and I like to think that opportunity is an advantage for the side of truth. Still, I'm sure he has an answer to that question. The guy does this for a living; I think even if you prepare it would be a good test of your skills.
    Robi Rahman:
    Did this debate ever end up happening? If it did, is there a transcript available somewhere? Edit: Found in another comment that WLC turned down the debate.

    It sounds as though you're viewing the debate as a chance to test your own abilities at improvisational performance. That's the wrong goal. Your goal should be to win.

    "The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him."

    By increasing the challenge the way you suggest, you may very well be acting rationally toward the goal of testing yourself, but you're not doing all you can to cut the opponent. To rationally pursue winning the debate, there's no excuse for not doing your research.

    In choosing not to try for that, you'll end up sending the message that rationalists don't play to win. You and I know this isn't quite accurate -- what you're doing is more like a rationalist choosing to lose a board game, because that served some other, real purpose of his -- but that is still how it will come across. Do you consider this to be acceptable?

    This isn't about choosing to lose. It's more about exploration vs. exploitation. If you always use the strategy you currently think is the best, then you won't get the information you need to improve.
    That seems contradictory. If you actually thought that always using one strategy would have this obvious disadvantage over another course of action, then doing so would by definition not be "the strategy you currently think is best."
    Experiments can always be framed as a waste of resources. There is always something you're using up that you could put to direct productive use, even if it's just your time.
    The potential information you gain from the experiment is a currency. Discount that currency (or have a low estimate of it) and yeah you can frame the experiment as a waste of resources.
    You're confusing meta-strategies and strategies. The best meta-strategy might be to implement strategies that do not have the highest chance of succeeding, simply because you can use the information you gain to choose the actual best strategy when it matters. Consider the case where you're trying to roll a die many times and get the most green sides coming up, and you can choose between a die that has 3 green sides, and one that probably (p = 0.9) has 2 green sides but might (p = 0.1) have 4 green sides. If the game lasts 1 roll, you choose the first die. If the game lasts many, many rolls, you choose the other die until you're convinced that it only has 2 green sides, even though this is expected to lose in the short term.
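    The dice example above can be sketched numerically (the framing and names are mine; the probabilities are the ones given in the comment):

```python
# Die A surely has 3 green sides; die B has 2 green sides with p = 0.9,
# or 4 green sides with p = 0.1, as in the example above.
P_FOUR = 0.1

def per_roll(greens):
    """Expected greens per roll for a die with this many green faces."""
    return greens / 6.0

# One-shot game: compare immediate expectations; die A wins.
ev_a = per_roll(3)                                        # 0.5
ev_b = P_FOUR * per_roll(4) + (1 - P_FOUR) * per_roll(2)  # ~0.367

# Long game: roll die B until its composition is clear, then keep it only
# if it turned out to have 4 green sides (otherwise switch back to A).
# Ignoring the short identification phase, the per-roll value is:
ev_explore = P_FOUR * per_roll(4) + (1 - P_FOUR) * per_roll(3)  # ~0.517

print(ev_a, round(ev_b, 3), round(ev_explore, 3))
```

    The exploration policy loses in expectation on any single early roll (~0.367 vs 0.5) but beats always picking die A (~0.517 vs 0.5) over a long game, which is exactly the point being made.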
    Both those courses of action with dice sound like strategies to me, not meta strategies. Could you give another example of something you'd consider a meta strategy? I think there's a larger point lurking here, which is that a good strategy should, in general, provide for gathering information so it can adapt. Do you agree?
    I might be able to clarify the example. The strategy for one roll is the die with 3 green sides. The strategy for multiple rolls is not the same as repeating the strategy for one roll multiple times. That being said, I do not know if that qualifies as a meta-strategy. A more typical example could be a Rock-Paper-Scissors game. Against a random player, the game-theory optimum is to pick randomly amongst the three choices. Against your cousin Bob, who is known to always pick Rock, picking Paper is the better option. Using knowledge from outside the game lets you win against Bob because you are using a meta-strategy. See also Wikipedia's article on Metagaming.
    That does indeed help. Thank you. So really, a meta strategy would be something like choosing your deck for a Magic tournament based on what types of decks you expect your opponents to use. While the non-meta strategy would be your efforts to win within a game once it's started.
    Ah, crap. Was that my comment? Sorry. I keep deleting comments when it looks like no one has responded. But, yeah, Magic has a rather intense meta-game. The reason I deleted my comment was because I realized I had no idea where the meta-strategy was in the dice example so I assumed I missed something. I could be chasing down the wrong definition.
    ...and that's why you really shouldn't delete a comment unless you think it's doing great harm. You may be worrying a bit too much about what others here think about every comment you make, when it's in fact somewhat random whether anyone replies to a given comment.
    Eliezer Yudkowsky:
    Also, I believe that deleting a comment does not dissipate any negative karma that it has already earned you.
    This is correct. I do not delete to avoid the karma hit; I delete to drop the number of comments in a thread. If two other people say the same thing, there was no reason for me to say it. In this case, I realized immediately after I posted the comment that I probably had not done justice to the entire thread, so I deleted it. I find the clutter annoying, and if I can voluntarily take my comment out of the path I am happy to do so. Unfortunately, this apparently does not work, because two people have responded before I could delete a comment. So deleting does not work well, and now I know. Next strategy to try: just editing with a sentence saying "Ignore me"? What is the community consensus on this subject? Just leave the comment alone? It would be neat if there was a way to just hit my own comment with -4 and get it off of people's radar.

    Cultivate a habit of confronting challenges - not the ones that can kill you outright, perhaps, but perhaps ones that can potentially humiliate you.

    You may be interested to learn that high-end mountaineers apply exactly the strategy you describe to challenges that might kill them outright. Mick Fowler even states it explicitly in his autobiography - "success every time implies that one's objectives are not challenging enough".

    A large part of mountaineering appears to be about identifying the precise point where your situation will become unrecoverable, and then backing off just before you reach it. On the other hand, sometimes you just get unlucky.

    A slogan I like is that failure is OK, so long as you don't stop trying to avoid it.

    While reading this post, a connection with Beware of Other-Optimizing clicked in my mind. Different aspiring rationalists are (more) susceptible to different failure modes. From Eliezer's previous writings it had generally seemed like he was more worried about the problem of standards (for oneself) that are too low -- that is, not being afraid enough of failure -- than about the opposite error, standards that are too high. But I suspect that's largely specific to him; other... (read more)

    And so I wrote at once to the Bloggingheads folks and asked if they could arrange a debate. This seemed like someone I wanted to test myself against. Also, it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that, because I think I should be able to handle damn near anything on the fly, and I desire to learn whether this thought is correct; and I am willing to risk public humiliation to find out.

    This really bothers me, because you weren't just risking your own public humiliation; you were risking our public humiliation. You were endangering an important cause for your personal benefit.

    The cause of rationalism does not rise and fall with Eliezer Yudkowsky. If you fear the consequences of being his partisan, don't align yourself with his party. If you are willing to associate yourself and your reputation with him, accept the necessary consequences of having done so.
    Phil might be wrong to phrase his objection in terms of "our public humiliation". But it's still the case that there are things at stake beyond Eliezer Yudkowsky's testing himself. And those are things we all care about.
    Eliezer Yudkowsky:
    I've done a service or two to atheism, and will do more services in the future, and those may well depend on this test of calibration.
    Who is the theist? I've actually seen Hitchens perform poorly in a number of debates with theists just because he doesn't really give a damn about responding to their arguments, because he rightly finds them so silly. Plus his focus is really on religion being bad more than religion being false, and as such he is rarely equipped to answer the more advanced theist arguments (like, say, the fine-tuning of physical constants) in the way someone like Dawkins is. (Edit: forget the question. I just read your reason for not naming him. Fair enough. But if you told someone who it was, they could watch the debate and indicate to you whether or not you really ought to be worried. Particularly if you don't end up debating him, we might get something out of watching him.)
    I realize it is a tradeoff.

    There are three great besetting sins of rationalists in particular, and the third of these is underconfidence.

    Were we ever told the other two?

    Yes, by Jeffreyssai:

    "Three flaws above all are common among the beisutsukai. The first flaw is to look just the slightest bit harder for flaws in arguments whose conclusions you would rather not accept. If you cannot contain this aspect of yourself then every flaw you know how to detect will make you that much stupider. This is the challenge which determines whether you possess the art or its opposite: Intelligence, to be useful, must be used for something other than defeating itself."

    "The second flaw is cleverness. To invent great complicated plans and great complicated theories and great complicated arguments - or even, perhaps, plans and theories and arguments which are commended too much by their elegance and too little by their realism. There is a widespread saying which runs: 'The vulnerability of the beisutsukai is well-known; they are prone to be too clever.' Your enemies will know this saying, if they know you for a beisutsukai, so you had best remember it also. And you may think to yourself: 'But if I could never try anything clever or elegant, would my life even be worth living?' This is why cleverness is still our chief vulnerability even aft

    ... (read more)

    gjm asks wisely:

    What would you think of a musician who decided to give a public performance without so much as looking at the piece she was going to play? Would you not be inclined to say: "It's all very well to test yourself, but please do it in private"?

    The central thrust of Eliezer's post is a true and important elaboration of his concept of improper humility, but doesn't it overlook a clear and simple political reality? There are reputational effects to public failure. It seems clear that those reputational effects often outweigh whatev... (read more)

    We have lots of experimental data showing overconfidence; what experimental data show a consistent underconfidence, in a way that a person could use that data to correct their error? This would be a lot more persuasive to me than the mere hypothetical possibility of underconfidence.

    Underconfidence is surely very common in the general population. It's usually referred to as "shyness", "tentativeness", "depression" - or by other names besides "underconfidence". This is part of the audience of the self-help books that encourage people to be more confident. E.g. see: "The trouble with overconfidence." on PubMed.
    For underconfidence and depression, see: "Depressive cognition: a test of depressive realism versus negativity using general knowledge questions." on PubMed. Underconfidence in visual perceptual judgments: "The role of individual differences in the accuracy of confidence judgments." on PubMed. More on that, see: "Realism of confidence in sensory discrimination." on PubMed.
    Eliezer Yudkowsky:
    I believe there were some nice experiments having to do with overcorrection, and I believe those were in "Heuristics and Biases" (the 2003 volume), but I'm on a trip right now and away from my books.

    I skimmed several debates with WLC yesterday, referenced here. His arguments are largely based on one and the same scheme:

    1. Everything must have a cause
    2. Here's a philosophical paradox for you, that can't be resolved within the world
    3. Since despite the paradox, some fact still holds, it must be caused by God, from outside the world

    (Or something like this; step 3 is a bit more subtle than I made it out to be.) What's remarkable is that, even though he uses a nontrivial number of paradoxes for step 2, almost all of them were explicitly explained in the mater... (read more)

    Many of WLC's arguments have this rough structure:

    • Here's a philosophical brain teaser. Doesn't it make your head spin?
    • Look, with God we can shove the problem under the carpet
    • Therefore, God.

    That's why I think that in order to debate him you have to explicitly challenge the idea that God could ever be a good answer to anything; otherwise, you disappear down the rabbit hole of trying to straighten out the philosophical confusions of your audience.

    "saying 'God' is an epistemic placebo -- it gives you the feeling of a solution without actually solving anything" something like that?
    I like to put it this way: Religion is junk food. It sates the hunger of curiosity without providing the sustenance of knowledge.
    Paul Crowley:
    Well, you could start with something like that, but you're going to have to set out why it doesn't solve anything. Which I think means you're going to have to make the "lady down the street is a witch; she did it" argument. Making that simple enough to fit into a debate slot is a real challenge, but it is the universal rebuke to everything WLC argues.
    If we shouldn't expect evidence in either case then the probability of God's existence is just the prior, right? How could P(God) be above .5? I can't imagine thinking that the existence of an omnipotent, omniscient and benevolent being who answers prayers and rewards and punishes the sins of mortals with everlasting joy or eternal punishment was a priori more likely than not. I wonder what variety of first cause argument he's making. Even if everything must have a cause that does not mean there is a first cause and the existence of a first cause doesn't mean the first cause is God. Aquinas made two arguments of this variety that actually try to prove the existence of God, but they require outdated categories and concepts to even make.
    If God's existence is the prior, I don't think you include that he is also an "omnipotent, omniscient and benevolent being, [...]". Those are things you deduce about him afterward. The way I've thought about it: let X = whatever the explanation is to the creation conundrum. We will call X "God". X exists trivially (by definition); can we then infer properties about X that would justify calling it God? In other words, does the solution to creation have to be something omniscient and benevolent? (This is the part which is highly unlikely.)
    If you call X "God" by definition, you may find yourself praying to the Big Bang, or to mathematics. There is a mysterious force inherent in all matter and energy which binds the universe together. We call it "gravity", and it obeys differential equations.
    The Big Bang and mathematics are good candidates. I've considered them. It only sounds ridiculous because you mentioned praying to them. The value of 'praying to X' is again something you need to deduce, rather than assume. Nah, gravity isn't universal or fundamental enough. That is, I would be very surprised if it were a 'first cause' in any way.
    Eliezer Yudkowsky:
    You certainly should not call X "God", nor should you suppose that X has the property "existence" which is exactly that which is to be rendered non-confusing.
    I just read your posts about the futility of arguing "by definition". I suspect that somewhere there is where my error lies. More precisely, could you clarify whether I "shouldn't" do those things because they are "not allowed" or because they wouldn't be effective?

    You shouldn't because even though when you speak the word "God" you simply intend "placeholder for whatever eventually solves the creation conundrum," it will be heard as meaning "that being to which I was taught to pray when I was a child" -- whether you like it or not, your listener will attach the fully-formed God-concept to your use of the word.

    Got it. If X is the placeholder for whatever eventually solves the creation conundrum, there's no reason to call it anything else, much less something misleading.
    In fact even naming it X is a bit of a stretch, because "the creation conundrum" is being assumed here, but my own limited understanding of physics suggests this "conundrum" itself is a mistake. What a "cause" really is, is something like: the information about past states of the universe embedded in the form of the present state. But the initial state doesn't have embedded information, so it doesn't really have either a past or a cause. As far as prime movers go, the big bang seems to be it, sufficient in itself.
    Yes, I agree with you: there is no real conundrum. In the past, we've solved many "conundrums" (for example, Zeno's paradox and the Liar's Paradox). By induction, I believe that any conundrum is just a problem (often a math problem) that hasn't been solved yet. While I would say that the solution to Zeno's paradox "exists", I think this is just a semantic mistake I made; a solution exists in a different way than a theist argues that God exists. (This is just something I need to work on.) Regarding the physics: I understand how a state may not causally depend upon the one preceding it (for example, if the state is randomly generated). I don't understand (can't wrap my head around) whether that means it wasn't caused... it still was generated, by some mechanism.
    precisely =)
    You shouldn't do something not directly because it's not allowed, but for the reason it's not allowed.
    This comment is condescending and specious.
    That comment was meta. It isn't condescending, as it's not about you.
    It is condescending because you assumed that I didn't know what you were telling me, and you presume to tell me how to make decisions about what I "should" do. And the reason why it irritated me enough to complain is because I know the source of that condescension: I was asking a question in a vulnerable (i.e., feminine) way. And I got a cheap hit for not using language the way a man does. But I'm not saying it's sexism; it's just a cheap shot.
    It's about me because you imply that I don't already know what you're saying, and I could benefit from this wise advice.
    If someone speaks the obvious, then it's just noise, no new information, and so the speaker should be castigated for destructive stupidity. Someone or I.
    You could do it that way, but then the question is just the prior probability that X has those traits. You can't say, "It would be a lot easier for God to do all of the things we think he needs to do if he were omnipotent; therefore it is more likely that God is omnipotent." Adding properties to God that increase His complexity has to decrease the probability that He exists; otherwise we're always going to be ascribing superpowers to the entities we posit, since the powers never make it harder for those entities to accomplish the tasks we need them to. Now, I suppose if you could deduce that God has those traits, then you would be providing evidence that X had those traits with a probability of 1. That's pretty remarkable, but anyone is free to have at it. So either you're putting a huge burden on your evidence to prove that there is some X such that X has these traits, OR you have to start out with an extremely low prior.
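    The conjunction point above can be made concrete with invented numbers (both priors below are purely illustrative, and treating the traits as independent is my own simplifying assumption):

```python
# Illustrative only: each extra independent property multiplies the prior down,
# since P(X & trait1 & ... & traitN) can never exceed P(X).
p_entity = 0.1   # invented prior that some creator-entity X exists at all
p_trait = 0.5    # invented prior that X has any one further trait
traits = ["omnipotent", "omniscient", "benevolent", "answers prayers"]

prior = p_entity
for trait in traits:
    prior *= p_trait

print(prior)  # 0.00625 -- far below the bare-entity prior of 0.1
```

    Whatever the actual numbers, the direction is forced: the fully-loaded concept can only be less probable a priori than the bare placeholder X.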
    For some reason, the idea that P(God) = 0.5 exactly amuses me. Thank you for the smile :)
    Luke Stebbing:
    It reminded me of one of my formative childhood books: --Martin Gardner, Aha! Gotcha He goes on to demonstrate the obvious contradiction, and points out some related fallacies. The whole book is great, as is its companion Aha! Insight. (They're bundled into a book called Aha! now.)
    Contradiction: answered prayers is lots of evidence.
    I'm looking at the concept of God and trying to guess what the priors would be for a being that meets that description. That description usually includes answering prayers. If there is evidence of answered prayers, then we might want to up the probability of God's existence -- but a being capable of doing that is going to be so complex that extraordinary evidence is necessary to come to the conclusion that one exists.
    Only if you have some sort of information about the unanswered prayers.
    Given the problems for the principle of indifference, a lot of Bayesians favor something more "subjective" with respect to the rules governing appropriate priors (especially in light of Aumann-style agreement theorems). I'm not endorsing this maneuver, merely mentioning it.

    There is a children's puzzle which consists of 15 numbered square blocks arranged in a frame large enough to hold 16, four by four, leaving one empty space. You can't take the blocks out of the frame. You can only slide a block into the empty space from an adjacent position. The puzzle is to bring the blocks into some particular arrangement.

    The mathematics of which arrangements are accessible from which others is not important here. The key thing is that no matter how you move the blocks around, there is always an empty space. Wherever the space is, you ca... (read more)
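    The puzzle's one mechanical rule can be sketched in a few lines (the representation and names here are my own): however the blocks slide, exactly one empty space always remains.

```python
# A minimal sketch of the 15-puzzle described above.
# The board is a 4x4 tuple of tuples; 0 marks the empty space.

def find_blank(board):
    """Return the (row, col) of the empty space."""
    for r, row in enumerate(board):
        for c, v in enumerate(row):
            if v == 0:
                return r, c

def slide(board, r, c):
    """Slide the block at (r, c) into the empty space, if adjacent."""
    br, bc = find_blank(board)
    if abs(br - r) + abs(bc - c) != 1:
        raise ValueError("block is not adjacent to the empty space")
    rows = [list(row) for row in board]
    rows[br][bc], rows[r][c] = rows[r][c], 0
    return tuple(tuple(row) for row in rows)

start = ((1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12), (13, 14, 15, 0))
after = slide(start, 3, 2)  # slide the 15 rightward into the corner
# However the blocks move, exactly one cell is always empty.
assert sum(v == 0 for row in after for v in row) == 1
```

    Every legal move just relocates the hole; nothing can ever fill it, which is presumably the point of the analogy.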

    This post reminds me of Aristotle's heuristics for approaching the mean when one tends towards the extremes:

    "That moral virtue is a mean, then, and in what sense it is so, and that it is a mean between two vices, the one involving excess, the other deficiency, and that it is such because its character is to aim at what is intermediate in passions and in actions, has been sufficiently stated. Hence also it is no easy task to be good. For in everything it is no easy task to find the middle, e.g. to find the middle of a circle is not for every one but fo... (read more)

    I agree - overcorrecting in action might well be a good technique for correcting toward virtue. A coward might do well by being brash for a bit, settling in at courage after the fact.

    Did you write a cost function down for the various debate outcomes? The skew will inform whether overconfidence or underconfidence should be weighted differently.
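    For instance (with numbers I've invented purely for illustration), a skewed cost function can flip the decision even when the win probability looks favorable:

```python
# Invented numbers: suppose losing publicly costs five times what winning gains.
p_win = 0.6
gain_win = 1.0
cost_loss = 5.0

expected_value = p_win * gain_win - (1 - p_win) * cost_loss
print(expected_value)  # negative despite being favored to win
```

    With that skew, being 60% likely to win still leaves the debate negative in expectation, which is why the shape of the cost function matters as much as the confidence estimate.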

    Eliezer, does your respect for Aumann's theorem incline you to reconsider, given how many commenters think you should thoroughly prepare for this debate?

    Eliezer Yudkowsky:
    Actually, the main thing that moved me was the comment about Richard Carrier also losing. I was thinking mostly that Hitchens had just had a bad day. Depending on how formidable the opponent is, it might still be a test of my ability even if I prepare.
    Paul Crowley:
    Carrier lost by his own admission, on his home territory. I've given a lot of thought to how I'd combat what he says, and what I think it comes down to is that standard, "simple" atheism that says "where is your evidence" isn't going to work; I would explicitly lead with the fact that religious language is completely incoherent and does not constitute an assertion about the world at all, and so there cannot be such a thing as evidence for it. And I would anticipate the way he's going to mock it by going there first: "I'm one of those closed-minded scientists who says he'll ignore the evidence for Jesus". At least when I play the debate out in my head, this is always where we end up, and if we start there I can deny him some cheap point scoring.
    "I'm one of those closed-minded scientists who says he'll ignore the evidence for Jesus" He would probably answer that it is not scientific to ignore evidence. Miracles cannot be explained by science. But they could - theoretically - be proven with scientific methods. If someone claims to have a scientific proof of a miracle (for example a video), it would be unscientific to just ignore it, wouldn't it?
    0Paul Crowley15y
    The idea is that you would open with this, but go on to explain why there could not be such a thing as evidence, because what is being asserted isn't really an assertion at all.
    I can't agree with the idea that religious assertions aren't really assertions. A fairly big thing in Christianity is that Jesus died, but then two or three days later was alive and well. This is a claim about how the world is (or was). It's entirely conceivable that there could be evidence for such a claim. And, in fact, there is evidence - it's just not strong enough evidence for my liking.
    I don't think making a move towards logical positivism or adopting a verificationist criterion of meaning would count as a victory.
    0Paul Crowley15y
    You don't have to do either of those things, I don't think. Have a look at the argument set out in George H Smith's "Atheism: the Case against God".
    I didn't think that one had to. That is what your challenge to the theist sounded like. I think that religious language is coherent but false, just like phlogiston or caloric language. Denying that the theist is even making an assertion, or that their language is coherent is a characteristic feature of positivism/verificationism, which is why I said that.
    0Paul Crowley15y
    No, I think it extends beyond that - see eg No Logical Positivist I

    What is the danger of overconfidence?

    Passing up opportunities. Not doing things you could have done because you didn't try (hard enough).

    Did you mean "danger of underconfidence"?

    2Eliezer Yudkowsky15y
    Yes. Fixed. Thanks. Apparently "danger of overconfidence" is cached in my mind to the point that even when the whole point of the article is the opposite, it still comes out that way. Case in point!

    Can anyone give some examples of being underconfident, that happened as a result of overcorrecting for overconfidence?

    I'll give it a shot. In poker you want to put more money in the pot with strong hands, and less money with weaker ones. However, your hand is secret information, and raising too much "polarizes your range," giving your opponents the opportunity to outplay you. Finally, hands aren't guaranteed -- good hands can lose, and bad hands can win. So you need to bet big, but not too big, with your good hands. So my buddy and I sit down at the table, and I get dealt a few strong hands in a row, but I raise too big with them -- I'm overconfident -- so I win a couple of small pots, and lose a big one. My buddy whispers to me, "You're overplaying your hands..." Ten minutes later I get dealt another good hand, and I consider his advice, but now I bet too small, underconfident, and miss out on value. Replace the conversation with an internal monologue, and this is something you see all the time at the poker table. Once bitten, twice shy and all that.
    My "revision" to my Amanda Knox post is one. I was right the first time.
    2Wei Dai12y
    How did you end up concluding that your original confidence level was correct after all?
    I realized that there was a difference between the information I had and the information most commenters had; also that I had underestimated my Bayesian skills relative to the LW average, so that my panicked reaction to what I perceived as harsh criticism in a few of the comments was an overreaction brought about by insecurity.
    2Wei Dai12y
    I'm afraid I can't accept your example at this point, because based on my priors and the information I have at hand (the probability of guilt that you gave was 10x lower than the next lowest estimate, it doesn't look like you managed to convince anyone else to adopt your level of confidence during the discussions, absence of other evidence indicating that you have much better Bayesian skills than the LW average), I have to conclude that it's much more likely that you were originally overconfident, and are now again. Can you either show me that I'm wrong to make this conclusion based on the information I have, or give me some additional evidence to update on?
    Interesting posts. However, I disagree with your prior by a significant amount. The probability that [person in group] commits a murder within one year is small, but so is the probability that [person in group] is in contact with a victim. I would begin with the event [murder has happened], assign a high probability (like ~90%) to "the murderer knew the victim", and then distribute those 90% among people who knew her (and work with ratios afterwards). I am not familiar enough with the case to do that now, but Amanda would probably get something around 10%, before any evidence or (missing) motive is taken into account.
    A cursory search suggests 54% is more accurate. source, seventh bullet point. Also links to a table that could give better priors.
    I'm reading that as 54% plus some unknown but probably large proportion of the remainder: that includes a large percentage in which the victim's relationship to the perpetrator is unknown, presumably due to lack of evidence. Your link gives this as 43.9%, but that doesn't seem consistent with the table. If you do look at the table, it says that 1,676 of 13,636 murders were known to be committed by strangers, or about 12%; the unknowns probably don't break down into exactly the same categories (some relationships would be more difficult to establish than others), but I wouldn't expect them to be wildly out of line with the rest of the numbers.
    I agree with that interpretation. The 13636 murders contain:

    * 1676 from strangers
    * 5974 with some relation
    * 5986 unknown

    Based on the known cases only, I get 22% strangers. More than expected, but it might depend on the region, too (US <--> Europe). Based on that table, we can do even better: we can exclude reasons which are known to be unrelated to the specific case, and persons/relations which are known to be innocent (or non-existent). A bit tricky, as the table is "relation murderer -> victim" and not the other direction, but it should be possible.
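    A quick sanity check of the two percentages quoted in this thread (the 12% and 22% stranger shares), using the table's numbers as given:

```python
from fractions import Fraction

strangers, related, unknown = 1676, 5974, 5986
total = strangers + related + unknown
assert total == 13636  # matches the table's total

# Share of all murders known to be committed by strangers (~12%):
share_of_all = Fraction(strangers, total)
# Share among only the cases where the relationship is known (~22%):
share_of_known = Fraction(strangers, strangers + related)

print(round(float(share_of_all), 3))    # 0.123
print(round(float(share_of_known), 3))  # 0.219
```

    Both figures check out; the gap between them is entirely down to how the unknown-relationship cases are allocated.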

    Eliezer said:

    So if you have learned a thousand ways that humans fall into error and read a hundred experimental results in which anonymous subjects are humiliated of their overconfidence - heck, even if you've just read a couple of dozen - and you don't know exactly how overconfident you are - then yes, you might genuinely be in danger of nudging yourself a step too far down.

    I also observed this phenomenon of debiasing being over-emphasized in discussions of rationality, while heuristic is treated as a bad word. I tried to get at the problem of passing...

    Playing tic-tac-toe against a three-year-old for the fate of the world would actually be a really harrowing experience. The space of possible moves is small enough that he's reasonably likely to force a draw just by acting randomly.

    Not if you can go first.
    So, you go center. If he goes on a flat side, you're golden (move in a nearly-opposite corner, you can compel victory). If he goes in a corner, you go 90° away. Now, if he's really acting randomly, he has a 1/6 chance to block your next-turn win. Then you block his win threat, making a new threat of your own, that he has a 1/4 chance to block. If he does, he'll make the last block half the time. So, a 1/96 chance to tie by moving randomly. That would be enough to make me nervous if the fate of the world were at stake. Would you like to play Global Thermonuclear War?
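    Chaining the probabilities in the comment above (the 1/6, 1/4, and 1/2 block chances are taken as stated; the extra 1/2 factor is the chance the random player picks a corner rather than an edge in the first place, which is what turns 1/48 into 1/96):

```python
from fractions import Fraction

p_corner       = Fraction(4, 8)  # 4 of the 8 remaining squares are corners
p_first_block  = Fraction(1, 6)  # randomly blocks your first win threat
p_second_block = Fraction(1, 4)  # randomly blocks your second threat
p_last_block   = Fraction(1, 2)  # randomly makes the final block

p_tie = p_corner * p_first_block * p_second_block * p_last_block
print(p_tie)  # 1/96
```

    So the 1/96 figure is consistent, provided the edge-opening line really is a forced win.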
    1/96 (I was thinking of a different algorithm, but the probability is the same) would be enough to make me nervous, but I wouldn't call it 'reasonably likely'.

    Can someone explain why we can't name the theist in question, other than sheer silliness?

    5Eliezer Yudkowsky15y
    Because I consider it unfair to him to talk about a putative debate before he's replied to a request; also somewhat uncourteous to talk about how I plan to handicap myself (especially if it's not a sign of contempt but just a desire to test myself). If people can work it out through effort, that's fine, I suppose, but directly naming him seems a bit discourteous to me. I have no idea whether he's courteous to his opponents outside debate, but I have no particular info that he isn't.
    How is it unfair to him in any way? He's free to choose whether to debate or not debate you; I doubt any reasonable person would be offended by the mere contemplation of a future debate. And any sort of advantage or disadvantage that might be gained or lost by "tipping him off" could only be of the most trivial sort, the kind any truth-seeking person should best ignore. All this does is make it a bit difficult to talk about the actual substance and ideas underlying the debate, which seems to me the most important stuff anyway.
    I think Eliezer's reason is good. It would sound like contempt to the More Wrong.

    This post reminds me of the phrase "cognitive hyper-humility," used by Ben Kovitz's Sophistry Wiki:

    Demand for justification before making a move. Of course, this is not always sophistry. In some special areas of life, such as courtroom trials, we demand that a "burden" of specific kinds of evidence be met as a precondition for taking some action. Sophistry tends to extend this need for justification far beyond the areas where it's feasible and useful. Skeptical sophistry tends to push a sort of cognitive hyper-humility, or freezing ou...

    If there was a message I could send back to my younger self this would be it. Plus that if it's hard, don't try to make it easier, just keep in mind that it's important. (By younger self, I mean 7-34 years old.)

    IHAPMOE, but the post seems to assume that a person's "rationality" is a float rather than a vector. If you're going to try to calibrate your "rationality", you'd better try to figure out what different categories of rationality problems there are, and how well rationality on one category of problems correlates with rationality on other categories. Otherwise you'll end up doing something like having greater confidence in your ethical judgements because you do well at sudoku.

    Does the fact that I find this guy's formulation of the cosmological argument somewhat persuasive mean that I can't hang out with the cool kids anymore? I'm not saying it is an airtight argument, just that it isn't obviously meaningless or ridiculous metaphysics.

    Slightly off-topic:

    I don't know if it would be possible to arrange either of them, but there are two debates I'd love to see Eliezer in:

    A debate with Amanda Marcotte on evolutionary psychology


    A debate with Alonzo Fyfe on meta-ethics.

    Before anyone even thinks about this, they need to read Gender, Nature, and Nurture by Richard Lippa. He creates a hypothetical debate between Nature and Nurture which is very well done. Nurture has a bunch of arguments that sound "reasonable" and will be persuasive to audiences who are either close-minded or unfamiliar with the research literature, yet are actually sophistry. I would recommend having at least some sort of an answer to all of those points. Defending evolutionary psychology in a debate is going to be very hard, because the playing field is so stacked. It's really easy to get nailed by skeptical sophistry or defeated by a King on the Mountain. In this case, the King would be arguing something like "male-female differences are socially constructed." Appreciating the argument of evolutionary psychology, like evolution itself, requires thinking holistically and tying a lot of arguments and evidence together. This is difficult in a verbal debate, where a skilled sophist will take your statements and evidence in isolation and ridicule them without giving you a chance to link them together into a bigger picture.

    And conversely, as Ari observes:

    If you’ve never hit the ground while skydiving, you’re opening your parachute too early.

    er, am I misparsing this? It seems to me that if you haven't hit the ground while skydiving, you're some sort of magician, or you landed on an artificial structure and then never got off...

    This seems like a reflection of a general problem people have, the problem of not getting things done - more specifically, the problem of not getting things done by convincing yourself not to do them.

    It's so much easier to NOT do things than do them, so we're constantly on the lookout for ways not to do them. Of course, we feel bad if we simply don't do them, so we first have to come up with elaborate reasons why it's ok - "I'll have plenty of time to do it later", "There's too much uncertainty", "I already got a lot of work done to...

    In CS, laziness is considered a virtue, principally (I believe) because being too lazy to just do something the hard (but obvious) way tends to lead to coming up with an easy (clever) way that's probably faster and more elegant. But what if you convince yourself not to want it?

    Eliezer should write a self-help book! Blog posts like the above are very inspiring to this perennial intellectual slacker and general underachiever (meaning: me).

    I certainly can relate to this part:

    "It doesn't seem worthwhile any more, to go on trying to fix one thing when there are a dozen other things that will still be wrong...

    There's not enough hope of triumph to inspire you to try hard..."

    Last paragraph, open parentheses missing. (I'm on a typo roll it seems)

    Overconfidence is usually costlier than underconfidence. The cost to become completely accurate is often greater than the benefit of being slightly-inaccurate-but-close-enough.

    When these two principles are taken into account, underconfidence becomes an excellent strategy. It also leaves potential in reserve in case of emergencies. As being accurately-confident tends to let others know what you can do, it's often desirable to create a false appearance.

    The cost of underconfidence is an opportunity cost. This is easy to miss, so it will be underweighted--salience bias. This is not a rebuttal, but it is a reason to expect people will falsely conclude that overconfidence is costlier.
    I approve of your response, Douglas_Knight, but think that it is both incomplete and somewhat inaccurate. The cost of underconfidence isn't necessarily or always an opportunity cost. It can be so, yes. But it can also be not so. You are making a subtle and mostly implicit claim of universality regarding an assertion that is not universally the case. A strategy doesn't need to work in every possible contingency to be useful or valid.
    I suspect you are overconfident in that belief. Simply stating something is not a persuasive argument.
    "Simply stating something is not a persuasive argument." Is simply stating that supposed to be persuasive? Sooner or later we have to accept or reject arguments on their merits, and that requires evaluating their supports. Not demanding supports for them.
    Overconfidence and underconfidence both imply a non-optimal amount of confidence. It's a little oxymoronic to claim that underconfidence is an excellent strategy - if it's an excellent strategy then it's presumably not underconfidence. I assume what you are actually claiming is that in general most people would get better results by being less confident than they are? Or are you claiming that relative to accurate judgements of probability of success it is better to consistently under rather than over estimate?

    You claim that overconfidence is usually costlier than underconfidence. There are situations where overconfidence has potentially very high cost (overconfidently thinking you can safely overtake on a blind bend perhaps) but in many situations the costs of failure are not as severe as people tend to imagine. Overconfidence (in the sense of estimating greater probability of success than is accurate) can usefully compensate for over estimating the cost of failure in my experience.

    You seem to have a pattern of responding to posts with unsupported statements that appear designed more to antagonize than to add useful information to the conversation.
    I am replying here instead of higher because I agree with mattnewport, but this is addressed to Annoyance.

    It is hard for me to understand what you mean by your post because the links are invisible and I did not instinctively fill them in correctly. As best as I can tell, this is situational. I think mattnewport's response is accurate. More on this below.

    It seems that the two paths from this statement are to stay inaccurate or start getting more efficient at optimizing your accuracy. It sounds too similar to saying, "It is too hard. I give up," for me to automatically choose inaccuracy. I want to know why it is so hard to become more accurate. It also seems situational in the sense that it is not always, just often. This is relevant below.

    In addition to mattnewport's comment about underconfidence implying non-optimal confidence, I think that building this statement on two situational principles is dangerous. Filling out the (situational) blanks leads to this statement: This seems to work just as well as saying this: Which can really be generalized to this: Which just leads us back to mattnewport's comment about optimal confidence. It also seems like it was not the point you were trying to make, so I assume I made a mistake somewhere. As best as I can tell, it was underemphasizing the two situational claims. As a result, I fully understand the request for more support in that area.

    Acting overconfident is another form of bluffing. Also, acting one way or the other is a little different than understanding your own limits. How does it help if you bluff yourself?
    "Overconfidence and underconfidence both imply a non-optimal amount of confidence." Not in the sense of logical implication. The terms refer to levels of confidence greater or lesser than they should be, with the criteria utilized determining what 'should' means in context. The utility of the level of confidence isn't necessarily linked to its accuracy. Although accuracy is often highly useful, there are times when it's better to be inaccurate, or to be inaccurate in a particular way, or a particular direction. "You seem to have a pattern of responding to posts with unsupported statements" I can support my statements, and support my supports, and support my support supports, but I can't provide an infinite chain of supports. No one can. The most basic components of any discussion stand by themselves, and are validated or not by comparison with reality. Deal with it. "that appear designed more to antagonize than to add useful information to the conversation" They're crafted to encourage people to think and to facilitate that process to the degree to which that is possible. I can certainly see how people uninterested in thinking would find that unhelpful, even antagonizing. So?
    Why is confidence or lack thereof an issue aside from personal introspection?
    If you are underconfident you may pass up risky but worthwhile opportunities, or spend resources on unnecessary safety measures. As for overconfidence, see hubris. Also, welcome to Less Wrong.

    A typo in the article: "What is the danger of overconfidence?" -> "What is the danger of underconfidence?"

    ...the third of these is underconfidence. Michael Vassar regularly accuses me of this sin, which makes him unique among the entire population of the Earth.

    Well, that sure is odd. Guess that's why Vassar was promoted. It makes sense now.

    Anyway, EY's history doesn't seem to me marked by much underconfidence. For example, his name has recently been used in vain at this silly blog, where they're dredging up all sorts of amusing material that seems to support the opposite conclusion.

    Since I know EY has guru status around here, please don't jump down my throat...

    With detractors like this, who needs supporters? I almost wonder whether razib wrote that blog post in one of his faux-postmodernist moods. I advise you all not to read it; badly written and badly supported criticism of EY is too powerful of a biasing agent towards him.
    The author understandably distances himself from his own output, reminiscent of the passages ridiculed in "Politics and the English Language".