Title: [SEQ RERUN] I Defy the Data!

Tags: sequence_reruns

Today's post, I Defy the Data!, was originally published on 11 August 2007. A summary (taken from the LW wiki):

 

If an experiment contradicts a theory, we are expected to throw out the theory, or else break the rules of Science. But this may not be the best inference. If the theory is solid, it's more likely that an experiment got something wrong than that all the confirmatory data for the theory was wrong. In that case, you should be ready to "defy the data", rejecting the experiment without coming up with a more specific problem with it; the scientific community should tolerate such defiances without social penalty, and reward those who correctly recognized the error if it fails to replicate. In no case should you try to rationalize how the theory really predicted the data after all.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Your Strength as a Rationalist, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


Somehow I couldn't quite figure out what Eliezer was advocating when I first read this article. Now I think that he wants to see more exchanges like the following:

Experimentalist: My experiment yielded results that contradict your theory at the p < 0.01 level!

Theorist: I remain unconvinced. I defy your data.

Experimentalist: What?! You can't just ignore empirical observations like that. So, which is it? Are you accusing me of negligence or fraud?

Theorist: I'm not accusing you of anything. But your experiment could have been one of the 1-in-100 that would get results at least that strong just by chance. As unlikely as it is that you were so unlucky, it is nonetheless more likely than that my theory is wrong. Your p-value just wasn't small enough to kill my theory in one fell swoop. However, several independent replications of your results would be enough to do it.
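The theorist's argument is essentially a Bayesian odds calculation, and it can be sketched in a few lines of code. This is a toy illustration with made-up numbers (the prior, and the likelihood of the data if the theory is wrong, are my assumptions, not anything from the post or the dialogue):

```python
# Toy Bayesian reading of the theorist's argument, with illustrative numbers.
# "p_data_given_true" plays the role of the p-value: the chance of seeing data
# at least this extreme if the theory is true. "prior_wrong" reflects the
# theory's strong confirmatory track record (assumed here, not given above).

def posterior_theory_wrong(prior_wrong, p_data_given_true, p_data_given_wrong):
    """Posterior probability the theory is wrong, given the contrary result."""
    prior_true = 1.0 - prior_wrong
    numerator = prior_wrong * p_data_given_wrong
    denominator = numerator + prior_true * p_data_given_true
    return numerator / denominator

# A well-confirmed theory (say, 1-in-1000 prior odds of being wrong)
# versus a single experiment significant at p < 0.01:
post = posterior_theory_wrong(prior_wrong=0.001,
                              p_data_given_true=0.01,
                              p_data_given_wrong=0.5)
print(f"P(theory wrong | one contrary result) = {post:.2f}")
```

With these (hypothetical) numbers the posterior comes out around 0.05: the single contrary experiment shifts belief, but nowhere near enough to overturn the theory, which is exactly the theorist's position. Several independent replications would multiply the evidence and finish the job.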

So it sounds a lot like how science actually works. I'm not sure what this article offers, honestly.

Isn't this why we do independent verification? As far as I know, in physics a single experiment almost never causes people to re-examine a theory. It's quite likely that there are systematic errors that the experimentalists didn't think of. It's only after numerous independent verifications that people start looking at theories again.

Or did I miss some subtlety in the argument?

Ref: Michelson–Morley experiment (repeated several times and by several people before being accepted), and more recently the superluminal neutrino result which is currently being called an "anomaly" pending verification at other facilities[1].

  1. http://blog.vixra.org/2011/09/19/can-neutrinos-be-superluminal/

It's quite likely that there are systematic errors that the experimentalists didn't think of.

This would be an example of negligence on the part of the experimentalist, though perhaps excusable negligence.

Eliezer's point is that an experimentalist might get results contrary to theory (i.e., extremely unlikely, given that the theory is true) even if the experimentalist thought of every possible source of systematic error. After all, extremely unlikely outcomes do happen (albeit rarely, of course). Therefore, the theorist should be able to "defy the data" without implying that the experimentalist made any kind of oversight.
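The "extremely unlikely outcomes do happen" point is easy to make concrete with a quick calculation (mine, not the commenter's; the numbers are illustrative):

```python
# Illustrative arithmetic: if a TRUE theory is tested independently by n labs,
# each at significance threshold alpha, the chance that at least one lab sees
# a spurious "contradiction" by luck alone is substantial.
n = 100          # independent experiments testing a true theory (assumed)
alpha = 0.01     # each has a 1% chance of a spurious p < 0.01 result
p_at_least_one = 1 - (1 - alpha) ** n
print(f"P(at least one spurious contradiction in {n} tests) = {p_at_least_one:.2f}")
```

With 100 tests the probability of at least one false contradiction is roughly 0.63, so a theorist who sees one contrary result against a heavily-confirmed theory need not suspect the experimentalist of any oversight at all.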

Yes, this. It simply shouldn't ever be necessary to loudly defy a single result. An unreplicated result should not be seen as a result at all, but merely a step in the experimental process. Sadly, that's not how most people treat results.

Thanks for doing this :)