Reference class of the unclassreferenceable

I hereby assign all your skepticism to "beliefs the future will be just like the past" with associated correctness frequency zero.


Your move in the wonderful game of Reference Class Tennis.

For most of human history, the future pretty much was like the past. It's not hard to argue that, between the Neolithic Revolution and the Industrial Revolution, not all that much really changed for the average person.

Things that still haven't changed:

People still grow and eat wheat, rice, corn, and other staple grains.
People still communicate by flapping their lips.
People still react to almost any infant communications or artistic medium in the same way: by trying to use it for pornography and radical politics, usually in that order.
People still fight eac...

Unknowns (10y, +3): As you can see from his response above, "These were slow gradual changes over time..." he is not saying that the future will be just like the past. There are plenty of ways that the future could be very different from the past, without superpowerful AI, singularities, or successful cryonics. So your reference class is incorrect.
pdf23ds (10y, +1): Downvoted because I wanted to hear more about why it belongs in that reference class.


by taw · 1 min read · 8th Jan 2010 · 154 comments


One of the most useful techniques of rationality is taking the outside view, also known as reference class forecasting. Instead of thinking too hard about the particulars of a given situation and taking a guess that will invariably turn out to be highly biased, one looks at the outcomes of situations that are similar in some essential way.

Figuring out the correct reference class can sometimes be difficult, but even then it's far more reliable than guessing while ignoring the evidence of similar cases. In some situations we have data precise enough that the inside view might give the correct answer - but even in almost all such cases I'd expect the outside view to be just as usable and not far behind in accuracy.
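Mechanically, a reference class forecast is just a base rate. A minimal sketch (my own illustration with made-up numbers, not something from the post; the Laplace smoothing in `rule_of_succession` is a standard statistical fix, not the author's proposal):

```python
# Reference class forecasting: estimate the probability of an outcome
# as the frequency of that outcome among similar past cases.

def outside_view(reference_class):
    """Raw base rate: successes / total cases (1 = success, 0 = failure)."""
    return sum(reference_class) / len(reference_class)

def rule_of_succession(reference_class):
    """Laplace-smoothed estimate (s + 1) / (n + 2), which keeps a class
    with zero observed successes from yielding a probability of exactly 0."""
    s, n = sum(reference_class), len(reference_class)
    return (s + 1) / (n + 2)

# Hypothetical reference class: 50 past "promises of very long life",
# none of which panned out.
past_cases = [0] * 50

print(outside_view(past_cases))        # 0.0
print(rule_of_succession(past_cases))  # ~0.019, small but not zero
```

The smoothed version matters for exactly the "consistent 0% success rate" classes discussed below: a finite run of failures supports a very low probability, not a probability of literally zero.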

Something that keeps puzzling me is the persistence of certain beliefs on lesswrong. Take belief in the effectiveness of cryonics: the reference class of things promising eternal (or very long) life is huge and has a consistent 0% success rate. The reference class of predictions based on technology which isn't even remotely here has a perhaps non-zero but still ridiculously tiny success rate. I cannot think of any reference class in which cryonics does well. Likewise belief in the singularity: the reference class of beliefs in the coming of a new world, be it good or evil, is huge and has a consistent 0% success rate. The reference class of beliefs in nearly omnipotent good or evil beings has a consistent 0% success rate.

And many fellow rationalists not only believe that the chances of cryonics, the singularity, or superhuman AI are far above the negligible levels indicated by the outside view - they consider them highly likely or even nearly certain!

There are a few ways this situation can be resolved:

  • Biting the outside-view bullet, as I do, and assigning very low probability to them.
  • Finding a convincing reference class in which cryonics, the singularity, superhuman AI, etc. are highly probable - I invite you to try in the comments, but I doubt this will lead anywhere.
  • Accepting that there is a class of situations for which the outside view is consistently and spectacularly wrong, for which the data is not good enough for precise predictions, and which we nevertheless somehow think we can predict reliably.

How do you reconcile them?