Edit: it was unfortunately a prank. I definitely checked the date of the article (which is dated Apr. 2) before posting on it. It's kind of mean to run an April Fool's prank after April Fool's Day. I didn't realize I'd have a chance to practice what I preach so soon.

I guess I need to just say oops.

 

Original Post:

Chess programmer Vasik Rajlich, the author of Rybka, had some big news today: solving the King's Gambit.

I know that this doesn't add much new to the complexity-theory aspects of games like chess, but I would say it's a beautiful result, very much like the recent improvement on the complexity of matrix multiplication, and it certainly emphasizes the role computation now plays, since the King's Gambit is a popular, classical opening. By almost any human standard it's a respectable opening, and yet we can conclusively say it is unequivocally bad for White, assuming two rational players.

I wrote up a short blurb about it at my blog.

11 comments
gjm · 12y

I am suspicious.

On March 31 the author of the Rybka program, Vasik Rajlich, and his family moved from Warsaw, Poland to a new apartment in Budapest, Hungary. The next day, in spite of the bustle of moving boxes and setting up phone and Internet connections, Vas kindly agreed to the following interview, which had been planned some months ago.

So ... April 1.

Our algorithm works in an iterative manner – it first forms a hypothesis, and then it confirms or alters that hypothesis over a number of passes using a non-deterministic Turing Machine program running across the clusters.

A "Turing Machine program"? Really? (And there are other awfully suspicious-looking things in that paragraph. Strategy stealing, in a highly asymmetrical chess position?)

Not anywhere near conclusive, but pretty strongly suggestive. I think this is a hoax.

[anonymous] · 12y

Damn. They got me good. I think I'll leave the posts up to shame myself.

non-deterministic Turing Machine program

You know, you can't actually build one of these, at least not without exponentially growing resources...
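To make the "exponentially growing resources" point concrete, here is a toy sketch, purely illustrative and not from the article: as far as we know, a deterministic simulation of a nondeterministic machine has to explore every branch of its choice tree, so the work grows as 2^n in the number of binary choices (the function names below are placeholders).

```python
from itertools import product

def ndtm_accepts(run_branch, n_choices):
    """Brute-force deterministic 'simulation' of n binary nondeterministic
    choices: accept if any single branch accepts. There are 2**n_choices
    branches to check in the worst case."""
    return any(run_branch(branch) for branch in product((0, 1), repeat=n_choices))

# Even a modest 60 nondeterministic choices means 2**60 (about 1.2e18) branches,
# which is why a literal "non-deterministic Turing Machine program running
# across the clusters" should ring alarm bells.
```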

Here's a good example of my being fooled when I shouldn't have been, had I been thinking like a proper Bayesian. Prior to reading the article I would have given something like 1/1000 odds that computers could "solve" a main-line chess opening (by the definition given in the article, which is just that the computer evaluates every line as won or lost, not that every possible position has been examined). I'd also plug in reasonable numbers for how often newspapers report a story as true when it is actually true or actually false: something like p(newspaper reports true | story is actually true) = 95% and p(newspaper reports true | story is actually false) = 20%. Doing the math, there is then almost no chance that the article was true (less than 1%).

And I should have been able to do this in my head. Even if the newspaper reported true stories as true 99% of the time, and false stories as true only 1% of the time, there would still have been about 10-to-1 odds that it wasn't true.
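For concreteness, here is that update worked through as a minimal Python sketch; the prior and the two likelihood figures are just the numbers quoted above, not measured values:

```python
def posterior_true(prior, p_report_if_true, p_report_if_false):
    """P(story actually true | it was reported as true), by Bayes' theorem."""
    numerator = prior * p_report_if_true
    return numerator / (numerator + (1 - prior) * p_report_if_false)

# Original estimate: 1/1000 prior, 95% true-positive rate, 20% false-positive rate.
print(posterior_true(1 / 1000, 0.95, 0.20))  # ~0.005, i.e. well under 1%

# Even a far more reliable source (99% / 1%) can't overcome the tiny prior:
print(posterior_true(1 / 1000, 0.99, 0.01))  # ~0.09, roughly 10-to-1 odds against
```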

So why did I get fooled? I didn't ever stop to think about it properly, which is embarrassing. Why not? I saw the link from MR, and I apparently over-trust Tyler Cowen as a gatekeeper. Had a random person told me about the article I probably would have called BS on it (as I've done before in similar situations), but because someone I trust made the assertion, I forgot to apply my brain filters, probably assuming he had already done so.

Moral of the story: whenever I learn new information, I need to at least briefly think about my priors and about how strong the evidence from the source really is, especially if it comes from a source I trust, since I'm then more prone to believe it.

[anonymous] · 12y

Excellent point.

For me, the problem was one level before yours: I had very bad priors. This is embarrassing for me because (a) I frequently play chess at a USCF-affiliated club and have read more than a handful of books specifically on the King's Gambit; and (b) I am a computational science grad student and have studied complexity theory in great detail, even specifically discussing the implications of chess for the development of A.I. and for complexity theory as a whole.

In retrospect, as @gjm pointed out, there are enough markers in the article (especially the "Turing machine program" reference, which should have been an absolute dead giveaway for me) to see easily that it must be a hoax. But in the larger sense, the article sounded extremely plausible to me. My prior belief was that the number of continuations longer than 15 moves that truly need to be explored in depth is very small, and that it shouldn't require too much computation to get each of them to Rybka's standard of +/- 5.12. In reality, the number of continuations that would need to be examined is far larger than I thought, and chasing them all down to +/- 5.12 would probably require more computational resources than we have on the planet if you wanted to solve it in four months of real time. It didn't occur to me to question this at all. I just thought "humans are smart at knowing what needs to be explored" and "Rybka is really good at knowing whether a line loses, given its positional score", both of which are gross oversimplifications that matter greatly if the claim is to have "solved" the opening.
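To put rough numbers on why that prior was so far off, here is a back-of-the-envelope sketch; the branching factor of about 30 and the 20 extra plies per line are illustrative assumptions of mine, not figures from the article:

```python
# Rough estimate of the cost of chasing every continuation down by brute force.
branching_factor = 30   # assumed average number of legal moves per position
plies_to_settle = 20    # assumed extra plies before an engine score is trusted
positions = branching_factor ** plies_to_settle   # about 3.5e29 positions

# Grant a very generous cluster: 1e9 positions/second per node on 10,000 nodes.
positions_per_second = 1e9 * 10_000
years = positions / positions_per_second / (3600 * 24 * 365)
print(f"{positions:.1e} positions -> ~{years:.0e} years of cluster time")
# About 3.5e29 positions, on the order of 1e9 years: nowhere near four months.
```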

I was also partially primed to wish the result were true because of the mention of Bobby Fischer's hubris, something I kind of want to see vindicated in a "there's-just-something-special-about-human-geniuses" kind of way, when really I should drastically discount any human's ability to completely refute an entire opening.

I am interested in whether there is a more general principle at work here. Because I am both a chess fan and a computer fan, I was more willing to overlook tiny discrepancies that should have been glaring. It's almost like a version of the halo effect, applied to my favorite hobbies: "Of course an awesome result in my nerd interest areas is likely to be true..." If the article had been about some new breakthrough in machine learning for computer vision (something I'm much more internally skeptical about, even though I know the field about as well as I know chess and complexity theory), I think I would have been much more interested in disputing the article than in celebrating it.

Now the question is: with respect to which other interests and hobbies of mine am I exhibiting this same error? We know that as a person becomes more knowledgeable about topic X, that knowledge can lead to confirmation bias and sophistication bias, but can it also lead to something like a celebration bias?

This is a good interview; it explains what it means to "solve the King's Gambit" even for people who don't know much about chess. That said, please change your font back to the site default; nonstandard fonts are jarring and kind of an eyesore.

[anonymous] · 12y

Thank you for the font feedback. What are the default settings? I copied this over from a different HTML editor.

[anonymous] · 12y

Thank you.

If the King's Gambit were actually solved, it would be trivial to solve the rest of chess.