## LESSWRONG

Late in the game and perhaps missing the point, but in order to try and understand for myself...

(1) one followed the 'method' or 'ideal' (2) as well as possible, and (3) ended up with a hypothesis that was factually incorrect; (4) one risked 'wasting' a very long time researching something that ended up being wrong; (5) the 'method' or 'ideal' does not properly help one avoid this; (6) all of which combined make the method/ideal problematic, as it is statistically likely to also result in a high number of 'wasted years researching something useless' (or some variation of that)

--

Now, there are many ways to look at this argument.

In reference to (1) and (2): The ideal can only be approximated, never achieved. We do as well as we can and (hopefully) improve through every trial and error.

ref (3): How did you find out this was 'wrong'? Are you sure? Can you prove it? If so, the question boils down to: how can one lower the likelihood of working on something 'wrong' for too long? A common suggestion is to share ideas even while they are being worked on: open them up for testing and attack by anyone, because a million eyeballs will do it better than two (assuming the two are among the million). A second suggestion is to work with multiple simultaneous hypotheses, all of which, in keeping with the ideal, fit current data, have predictive power, are falsifiable (via experiments), and are divergent enough to be considered separate.

(4) How can we know the length of time if we have not 'wasted' it? How can we know the 'waste' if we have not walked that path of knowledge? I would propose that anybody who diligently, openly and humbly follows the ideal to the best of her skills will arrive at a lot of 'non-wasted' knowledge, models, thinking, publications, prestige, colleagues, positions, etc. - EVEN if the whole hypothesis is falsified in the end. Just look at theoretical physics and the amount of silent evidence in the graveyard of falsified hypotheses, many of which came from intellectually towering giants and branched off into new research areas in maths, statistics, meta-theory and philosophy. I'd love to attain a wasted failure like that myself :)

(5) This is the biggest argument. In theory I agree; in practice, not quite. Of course, the ideal method does not guarantee the absence of such 'failure' (which, imho, it is not, as argued above), but skillful implementation of the method can lower its likelihood. That requires, imho, humility, openness and a constant fight against bias - something we can never be free of, but can temporarily be aware of.

(6) Too big to tackle in a post, at least for me :)

Good blog!

--

Once upon a time, a younger Eliezer had a stupid theory.  Let's say that Eliezer18's stupid theory was that consciousness was caused by closed timelike curves hiding in quantum gravity.  This isn't the whole story, not even close, but it will do for a start.

And there came a point where I looked back, and realized:

1. I had carefully followed everything I'd been told was Traditionally Rational, in the course of going astray.  For example, I'd been careful to only believe in stupid theories that made novel experimental predictions, e.g., that neuronal microtubules would be found to support coherent quantum states.
2. Science would have been perfectly fine with my spending ten years trying to test my stupid theory, only to get a negative experimental result, so long as I then said, "Oh, well, I guess my theory was wrong."

From Science's perspective, that is how things are supposed to work—happy fun for everyone.  You admitted your error!  Good for you!  Isn't that what Science is all about?

But what if I didn't want to waste ten years?

Well... Science didn't have much to say about that.  How could Science say which theory was right, in advance of the experimental test?  Science doesn't care where your theory comes from—it just says, "Go test it."

This is the great strength of Science, and also its great weakness.

Eliezer, why are you concerned with untestable questions?

Because questions that can be easily and immediately tested are hard for Science to get wrong.

I mean, sure, when there's already definite unmistakable experimental evidence available, go with it.  Why on Earth wouldn't you?

But sometimes a question will have very large, very definite experimental consequences in your future—but you can't easily test it experimentally right now—and yet there is a strong rational argument.

Macroscopic quantum superpositions are readily testable:  It would just take nanotechnologic precision, very low temperatures, and a nice clear area of interstellar space.  Oh, sure, you can't do it right now, because it's too expensive or impossible for today's technology or something like that—but in theory, sure!  Why, maybe someday they'll run whole civilizations on macroscopically superposed quantum computers, way out in a well-swept volume of a Great Void.  (Asking what quantum non-realism says about the status of any observers inside these computers, helps to reveal the underspecification of quantum non-realism.)

This doesn't seem immediately pragmatically relevant to your life, I'm guessing, but it establishes the pattern:  Not everything with future consequences is cheap to test now.

Evolutionary psychology is another example of a case where rationality has to take over from science.  While theories of evolutionary psychology form a connected whole, only some of those theories are readily testable experimentally.  But you still need the other parts of the theory, because they form a connected web that helps you to form the hypotheses that are actually testable—and then the helper hypotheses are supported in a Bayesian sense, but not supported experimentally.  Science would render a verdict of "not proven" on individual parts of a connected theoretical mesh that is experimentally productive as a whole.  We'd need a new kind of verdict for that, something like "indirectly supported".

Cryonics is an archetypal example of an extremely important issue (150,000 people die per day) that will have huge consequences in the foreseeable future, but doesn't offer definite unmistakable experimental evidence that we can get right now.

So do you say, "I don't believe in cryonics because it hasn't been experimentally proven, and you shouldn't believe in things that haven't been experimentally proven?"

Well, from a Bayesian perspective, that's incorrect.  Absence of evidence is evidence of absence only to the degree that we could reasonably expect the evidence to appear.  If someone is trumpeting that snake oil cures cancer, you can reasonably expect that, if the snake oil was actually curing cancer, some scientist would be performing a controlled study to verify it—that, at the least, doctors would be reporting case studies of amazing recoveries—and so the absence of this evidence is strong evidence of absence.  But "gaps in the fossil record" are not strong evidence against evolution; fossils form only rarely, and even if an intermediate species did in fact exist, you cannot expect with high probability that Nature will obligingly fossilize it and that the fossil will be discovered.
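The contrast between the snake-oil case and the fossil case can be made concrete in the odds form of Bayes' theorem. Here is a minimal sketch with purely illustrative probabilities (none of these numbers come from the post; they are made up to show the mechanism):

```python
# Odds-form Bayes: posterior_odds = prior_odds * likelihood_ratio.
# The likelihood ratio of "no evidence observed" measures how strongly
# absence of evidence counts as evidence of absence.

def posterior_prob(prior, p_silence_given_h, p_silence_given_not_h):
    """Update P(H) after observing *no* evidence for H."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_silence_given_h / p_silence_given_not_h
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Snake oil: if it really cured cancer, we would almost surely have heard
# about it by now (say, only a 5% chance of silence); if it doesn't work,
# silence is exactly what we expect.
snake_oil = posterior_prob(prior=0.5,
                           p_silence_given_h=0.05,
                           p_silence_given_not_h=0.99)

# Fossil gap: even if the intermediate species existed, a fossil is
# unlikely to form and be found (say, a 90% chance of "no fossil").
fossil = posterior_prob(prior=0.5,
                        p_silence_given_h=0.90,
                        p_silence_given_not_h=0.99)

print(f"P(snake oil works | silence)   = {snake_oil:.3f}")  # ~0.048
print(f"P(species existed | no fossil) = {fossil:.3f}")     # ~0.476
```

Same prior, same observation ("nothing turned up"), wildly different updates: the snake-oil hypothesis collapses, while the fossil-gap hypothesis barely moves. The whole difference lives in how strongly the hypothesis predicted that evidence should have appeared.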

Reviving a cryonically frozen mammal is just not something you'd expect to be able to do with modern technology, even if future nanotechnologies could in fact perform a successful revival.  That's how I see Bayes seeing it.

Oh, and as for the actual arguments for cryonics—I'm not going to go into those at the moment.  But if you followed the physics and anti-Zombie sequences, it should now seem a lot more plausible, that whatever preserves the pattern of synapses, preserves as much of "you" as is preserved from one night's sleep to morning's waking.

Now, to be fair, someone who says, "I don't believe in cryonics because it hasn't been proven experimentally" is misapplying the rules of Science; this is not a case where science actually gives the wrong answer.  In the absence of a definite experimental test, the verdict of science here is "Not proven".  Anyone who interprets that as a rejection is taking an extra step outside of science, not a misstep within science.

John McCarthy's Wikiquotes page has him saying, "Your statements amount to saying that if AI is possible, it should be easy. Why is that?"  The Wikiquotes page doesn't say what McCarthy was responding to, but I could venture a guess.

The general mistake probably arises because there are cases where the absence of scientific proof is strong evidence—because an experiment would be readily performable, and so failure to perform it is itself suspicious.  (Though not as suspicious as I used to think—with all the strangely varied anecdotal evidence coming in from respected sources, why the hell isn't anyone testing Seth Roberts's theory of appetite suppression?)

Another confusion factor may be that if you test Pharmaceutical X on 1000 subjects and find that 56% of the control group and 57% of the experimental group recover, some people will call that a verdict of "Not proven".  I would call it an experimental verdict of "Pharmaceutical X doesn't work well, if at all".  Just because this verdict is theoretically retractable in the face of new evidence, doesn't make it ambiguous.
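To see why a 56%-vs-57% result is a verdict rather than an absence of one, here is a standard two-proportion z-test on the hypothetical trial above, assuming (since the text doesn't say) that the 1000 subjects split evenly into two arms of 500:

```python
from math import sqrt, erf

# Hypothetical trial: 500 control, 500 experimental (assumed even split).
n_c, n_e = 500, 500
p_c, p_e = 0.56, 0.57          # recovery rates: control, experimental

# Pooled z-test for a difference in proportions.
p_pool = (n_c * p_c + n_e * p_e) / (n_c + n_e)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_e))
z = (p_e - p_c) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided

# 95% confidence interval for the difference in recovery rates.
se_diff = sqrt(p_c * (1 - p_c) / n_c + p_e * (1 - p_e) / n_e)
lo, hi = (p_e - p_c) - 1.96 * se_diff, (p_e - p_c) + 1.96 * se_diff

print(f"z = {z:.2f}, p = {p_value:.2f}")             # far from significant
print(f"95% CI for effect: [{lo:+.3f}, {hi:+.3f}]")  # interval straddles zero
```

The data can't rule out a small effect in either direction, but it does rule out a large one - which is an informative verdict ("doesn't work well, if at all"), not a shrug.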

In any case, right now you've got people dismissing cryonics out of hand as "not scientific", like it was some kind of pharmaceutical you could easily administer to 1000 patients and see what happened.  "Call me when cryonicists actually revive someone," they say; which, as Mike Li observes, is like saying "I refuse to get into this ambulance; call me when it's actually at the hospital".  Maybe Martin Gardner warned them against believing in strange things without experimental evidence.  So they wait for the definite unmistakable verdict of Science, while their family and friends and 150,000 people per day are dying right now, and might or might not be savable—

—a calculated bet you could only make rationally.
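The "calculated bet" framing is ordinary expected value under explicitly uncertain inputs. A minimal sketch, where every number is a made-up placeholder rather than an estimate from the post:

```python
# Expected-value sketch of a bet under uncertainty.
# All inputs are hypothetical placeholders for illustration only.

def expected_value(p_success, payoff, cost):
    """Net expected value of a bet that costs `cost` up front and
    pays `payoff` with probability `p_success`."""
    return p_success * payoff - cost

# Even a low probability of a very large payoff can dominate a fixed cost.
ev_low_odds  = expected_value(p_success=0.05, payoff=1_000_000, cost=10_000)
ev_no_chance = expected_value(p_success=0.0,  payoff=1_000_000, cost=10_000)

print(ev_low_odds)   # 40000.0: positive despite only 5% odds
print(ev_no_chance)  # -10000.0: the bet only fails if p is ~0
```

The point is not the particular numbers but the decision procedure: the bet is evaluated on probabilities and stakes, not on whether a definite experimental verdict has already arrived.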

The drive of Science is to obtain a mountain of evidence so huge that not even fallible human scientists can misread it.  But even that sometimes goes wrong, when people become confused about which theory predicts what, or bake extremely-hard-to-test components into an early version of their theory.  And sometimes you just can't get clear experimental evidence at all.

Either way, you have to try to do the thing that Science doesn't trust anyone to do—think rationally, and figure out the answer before you get clubbed over the head with it.

(Oh, and sometimes a disconfirming experimental result looks like:  "Your entire species has just been wiped out!  You are now scientifically required to relinquish your theory.  If you publicly recant, good for you!  Remember, it takes a strong mind to give up strongly held beliefs.  Feel free to try another hypothesis next time!")