Reference class of the unclassreferenceable

So to sum up, you think you have a heuristic "On average, nothing ever happens for the first time" which beats any argument that something is about to happen for the first time. Cases like the Wright Brothers (reference class: "attempts at heavier-than-air flight") are mere unrepeatable anomalies. To answer the fundamental rationalist question, "What do you think you know and how do you think you know it?", we know the above is so because experiments show that people could do better at predicting how long it will take them to do their Christmas shopping by asking "How long did it take last time?" instead of trying to visualize the details. Is that a fair summary of your position?

"On average, nothing ever happens for the first time" is an erroneous characterization because it ignores all the times where the predictable thing kept on happening. By invoking the first time you restrict the reference class to those where something unusual happened. But if usually nothing unusual happens (hmm...) and those who predict the unusual are usually con artists as opposed to genius inside analyzers (is this really so unreasonable a view of history?), then he has a point.

"Smart people claiming that amazing things are going to happen" sometimes leads the way for things like the Wright Brothers, but very often nothing amazing happens.

Johnicholas (10y): Would you buy: "After something happens, we will see the occurrence as a part of a pattern that extended back before that particular occurrence"? The Wright Brothers may have won the crown of "first", but there were many, many near misses before. http://en.wikipedia.org/wiki/First_flying_machine
taw (10y): I entertain the notion that the outside view might be a bad way of analyzing some situations; the post is a question about what this class might look like, and how we know a situation belongs to such a class. I'd definitely take the outside view as the default type of reasoning - the inside view by definition has no evidence behind it, not even of so little as a lack of systematic bias.

The way you describe my heuristic is not accurate. There are cases where something highly unusual happens, but these tend to be extremely difficult to reliably predict - even if they're really easy to explain away as bound to happen with the benefit of hindsight. For example, I've heard plenty of people who were absolutely certain that the fall of the Soviet Union was virtually inevitable and caused by something they like to believe - usually without even a basic understanding of the facts, though many experts make the identical mistake. The fact is, nobody predicted it (ignoring the background noise of people who "predict" such things year in, year out) - and the relevant reference classes showed quite a low (not zero, but far lower than one) probability of it happening.


by taw · 1 min read · 8th Jan 2010 · 154 comments

25


One of the most useful techniques of rationality is taking the outside view, also known as reference class forecasting. Instead of thinking too hard about the particulars of a given situation and taking a guess, which will invariably turn out to be highly biased, one looks at the outcomes of situations which are similar in some essential way.

Figuring out the correct reference class might sometimes be difficult, but even then it's far more reliable than trying to guess while ignoring the evidence of similar cases. In some situations we have precise enough data that the inside view might give a correct answer - but for almost all such cases I'd expect the outside view to be just as usable and not far behind in correctness.
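The procedure described above can be sketched in a few lines. Everything below - the field names, the projects, the numbers - is hypothetical, invented purely for illustration, not taken from the post:

```python
# A minimal sketch of reference class forecasting (the "outside view"):
# instead of modelling the particulars of the case at hand, we predict
# from the recorded outcomes of similar past cases.

def outside_view_estimate(reference_class):
    """Return the base rate of success and the mean cost overrun
    observed across the reference class."""
    n = len(reference_class)
    successes = sum(1 for case in reference_class if case["succeeded"])
    mean_overrun = sum(case["overrun"] for case in reference_class) / n
    return successes / n, mean_overrun

# Hypothetical reference class: four past projects similar to ours.
past_projects = [
    {"succeeded": True,  "overrun": 1.5},
    {"succeeded": False, "overrun": 2.0},
    {"succeeded": True,  "overrun": 1.0},
    {"succeeded": False, "overrun": 2.0},
]

rate, overrun = outside_view_estimate(past_projects)
print(rate)     # → 0.5   (outside-view base rate of success)
print(overrun)  # → 1.625 (expect roughly 1.6x the naive cost estimate)
```

The point of the technique is that the two output numbers come entirely from the track record of the class, with no inside-view modelling of the new case at all.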

Something that keeps puzzling me is the persistence of certain beliefs on lesswrong. Take belief in the effectiveness of cryonics: the reference class of things promising eternal (or very long) life is huge and has a consistent 0% success rate. The reference class of predictions based on technology which isn't even remotely here has a perhaps non-zero but still ridiculously tiny success rate. I cannot think of any reference class in which cryonics does well. Likewise belief in the singularity: the reference class of beliefs in the coming of a new world, be it good or evil, is huge, with a consistent 0% success rate. The reference class of beliefs in almost omnipotent good or evil beings has a consistent 0% success rate.
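An observed 0% success rate need not translate into a predicted probability of exactly zero. One textbook way to turn "zero successes in n observed cases" into a small nonzero estimate - an illustration of standard practice, not something the post itself invokes - is Laplace's rule of succession, P = (s + 1) / (n + 2):

```python
# Laplace's rule of succession: with s successes observed in n trials,
# estimate the probability of success on the next trial as
# (s + 1) / (n + 2). With s = 0 this is small but never exactly zero.

def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# Hypothetical count: 0 successes in 98 past promises of very long life.
print(rule_of_succession(0, 98))  # → 0.01
```

This is consistent with the post's later framing of such reference classes as showing a probability "not zero, but far lower than one".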

And many fellow rationalists not only believe that the chances of cryonics, the singularity, or superhuman AI are far from the negligible levels indicated by the outside view - they consider them highly likely or even nearly certain!

There are a few ways this situation can be resolved:

  • Biting the outside-view bullet, as I do, and assigning very low probability to them.
  • Finding a convincing reference class in which cryonics, the singularity, superhuman AI etc. are highly probable - I invite you to try in the comments, but I doubt this will lead anywhere.
  • Claiming there is a class of situations for which the outside view is consistently and spectacularly wrong, where the data is not good enough for precise predictions, and yet where we somehow think we can predict reliably.

How do you reconcile them?
