In Sherlock Holmes fiction, we see that Holmes is capable of making correct inferences from insufficient information and long, tenuous chains of reasoning. I'm curious what would happen if we tried to apply this in real life. Here's a riddle containing too little information to reach the right answer with any certainty; will our Holmesian reasoning attempts come anywhere close to the "correct" answer, or will they be totally off?

The other day, I was listening to music from a movie on my headphones. In the movie, one scene depicts one of the characters getting out of bed. He puts one foot on the ground, then the other. The headphones were broken on one side. Which side?

Use your meta-riddle awareness: this isn't just a random event, but the sort of event that I would make into a riddle.

Here's the answer I had in mind, rot13'd.


I think that your puzzle not only fails to have enough information, but that one of the steps in your explanation fails to follow at all. That is to say, I wouldn't feel convinced even if Sherlock Holmes said it.

[-][anonymous]13y00

Which one?

Gur bar gung fnlf gung gur oebxra urnqcubar vf ba gur fnzr fvqr nf gur oebxra yrt. Jul qb gurl unir nalguvat gb qb jvgu rnpu bgure?

Huh, I would have guessed you were referring to the assumptions about foot order.

Agreed. Vs gur urnqcubarf unq orybatrq gb gur zbivr punenpgre engure guna gur ivrjre, jr pbhyq unir thrffrq gung gurl jrer qnzntrq va gur fnzr vapvqrag, but it didn't even make that much sense.

[-][anonymous]13y00

A fair point. knb sums it up pretty well:

. . . I had to know you were giving hints, and that you wanted us to reason by weak association. So Holmesian reasoning is worthless unless you happen to know the situation is contrived.

I remember, a long time ago, a brief fad for lateral-thinking puzzles in which a situation is described and you are asked to explain it -- much like the above problem. Canonical example:

A man is lying face down in the middle of a ploughed field. He is wearing a small backpack and is dead. The earth around him is undisturbed.

Answer: Gur onpxcnpx vf n cnenpuhgr gung snvyrq gb bcra.

For a well-constructed puzzle, the answer is always obvious in hindsight: it explains every detail while adding as little as possible. It need not be logically implied by the data, but it must explain the data better than any rival explanation.

Holmes is popularly thought of as a master of "reasoning", but two other things are also required: observation (this is explicitly an essential part of his method in the Conan Doyle stories) and the ability to generate ideas (mentioned less often in those stories, but called "insight" when it is).

Not much attention has been paid on LessWrong to either of these, although they have been mentioned from time to time. Eliezer has written somewhere of noticing the tiny feeling that something is not quite right and raising it to full awareness: the small voice that should sound as loud as a fire alarm. Generating ideas was discussed in The Failures of Eld Science, but no process for doing so was examined. (A method for generating new ideas even sounds like a contradiction in terms.) The universal prior generates ideas by enumerating all ideas, as does that maximally optimal predictor for which I don't have a link, but which does something like systematically searching for Turing machines that generate the sequence seen so far.
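To make that enumeration picture concrete, here is a toy sketch of my own (not anything from the post or the predictor I can't link): a brute-force search over a small, hand-picked space of candidate generators, keeping only those that reproduce the sequence seen so far. The example sequence and the hypothesis list are invented for illustration; a real universal-prior search would enumerate all programs ordered by length, which is not computable in practice.

    # Toy illustration only: enumerate a small, hand-picked hypothesis space
    # and keep the generators that reproduce the sequence seen so far.
    # (Stand-in for "search over all Turing machines", which isn't feasible.)

    OBSERVED = [1, 2, 4, 8, 16]  # the data "seen so far" (invented example)

    # Candidate generators, listed roughly from simple to complex.
    HYPOTHESES = [
        ("n + 1",  lambda n: n + 1),
        ("2 * n",  lambda n: 2 * n),
        ("n ** 2", lambda n: n ** 2),
        ("2 ** n", lambda n: 2 ** n),
    ]

    def consistent(generate, data):
        """Does this generator reproduce every observation so far?"""
        return all(generate(i) == x for i, x in enumerate(data))

    # The surviving hypotheses are the "ideas" the enumeration produces.
    ideas = [name for name, g in HYPOTHESES if consistent(g, OBSERVED)]
    print(ideas)  # -> ['2 ** n']

The point of the sketch is only that "generate ideas" here reduces to "enumerate and filter"; deciding which observations to feed in is the part the enumeration cannot do for you, which is the first of the two questions below.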

I don't think anything substantial has been written here on these two topics:

How does one notice what is important, when one does not yet know what is important?

How does one think of ideas to explain what one has noticed?

And at present I don't have anything more to say about them, or this would be a top level posting.

I started with random.org giving me a number, 1 or 0. I decided to guess "left" if random.org returned 1 and "right" if random.org returned 0. On this particular occasion, random.org returned 1 and my method was successful.

Without other examples of Holmesian reasoning, it is not immediately obvious to me that it is more successful than coin-flipping, although it is probably more time-intensive.
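For what it's worth, the coin-flip baseline described above is easy to reproduce; a minimal sketch, using Python's standard library in place of random.org (an assumption, since the commenter used random.org's 0/1 generator):

    # Minimal sketch of the coin-flip baseline: guess "left" on 1, "right" on 0.
    import random

    guess = "left" if random.getrandbits(1) == 1 else "right"
    print(guess)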

I always felt that Holmesian reasoning was characterized by reliance on obscure technical knowledge, like the different kinds of ash produced by various cigars, or else by noticing seemingly irrelevant details.

For example, a screenshot of my desktop would reveal (among other things) a button labeled 'us' in the panel. From this you might infer that I use a nonstandard keyboard layout. Browsing through my comment history would probably give you enough text to guess that I'm a native English speaker, so from this a reasonable guess (and the correct one) would be that I use Dvorak.

This, on the other hand, is just silly. Holmesian reasoning should, above all else, sound plausible to someone who doesn't understand how evidence works.

[-]FAWS13y40

Is there any reason at all to put your answer on an external site AND rot13 it? Are you seriously worried about people accidentally clicking the link and reading it or something? At least one of those two steps seems a complete waste of time.

[-][anonymous]13y10

Are you seriously worried about people accidentally clicking the link and reading it or something?

I was casually worried (har har) about that, and I figured the cost of making the wrong decision was low. I guess I went ahead and made the wrong decision.

[-]knb13y30

I think I did surprisingly well at following your reasoning. This was the process I used (rot13'd to avoid spoilers).

V tbg gur cneg nobhg gur nanybtl orgjrra gur srrg naq gur urnqcubarf, naq V nyfb thrffrq gung gur qnzntrq urnqcubar pbeerfcbaqrq gb n qnzntrq sbbg. V pbhyqa'g guvax bs nal zbivrf jurer fbzrbar vawherq gurve sbbg, fb V tnir hc. Gur cneg nobhg ernqvat beqre fghzcrq zr. V thrffrq "yrsg" naljnl, fvapr V hfhnyyl trg bhg bs orq gung jnl (fb znlor vg vf zber pbzzba.)

Of course, to get that much, I had to know you were giving hints, and that you wanted us to reason by weak association. So Holmesian reasoning is worthless unless you happen to know the situation is contrived.

Edit: upvoted, btw.

So you're testing if the answers will be consistent, rather than correct?

I think the goal is to see how well Holmesian reasoning works and how it fails.

[-][anonymous]13y30

The goal is to see what happens.

How can this show how well it works? There's no correct answer, since the example is made up.

Well, it won't provide very much evidence, but, especially with a few more examples, there's a chance we'll learn something.

[-][anonymous]13y00

Enter a comment here

[This comment is no longer endorsed by its author]
[-][anonymous]13y30

What the crap.