This sounds like a probability search problem in which you don't know for sure there exists anything to find - the hope function.

I worked through this in #lesswrong with nialo. It's interesting to work with various versions of it. For example, suppose you had a uniform distribution over 2000-2100 for the AI's creation, and you believe with 90% probability that its creation is possible at all. It is of course now 2011, so how much should you now believe it is possible, given its failure to appear between 2000 and now? We could write that in Haskell as let fai x = (100-x) / ((100 / 0.9) - x) in fai 11, which evaluates to ~0.889 - so one's faith hasn't been much damaged.
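Written out as a standalone script (just a sketch of the same one-liner, taking the 90% prior and the uniform 2000-2100 window as the assumptions), the Bayes step is:

```haskell
-- Hope function for a uniform arrival distribution over 2000-2100 and a 0.9
-- prior that AI is possible at all.  By Bayes:
--   P(possible | no AI in first x years)
--     = 0.9 * ((100-x)/100) / (0.9 * ((100-x)/100) + 0.1)
--     = (100 - x) / (100/0.9 - x)
fai :: Double -> Double
fai x = (100 - x) / ((100 / 0.9) - x)

main :: IO ()
main = print (fai 11)   -- ~0.889: barely below the 0.9 prior
```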

One of the interesting things is how slowly one's credence in AI being possible declines. fai 50* is 81%; fai 90** is 47%! But by fai 98 it has suddenly shrunk to 15%, fai 99 is 8%, and fai 100 is of course 0% (since the possibility has now been disproven). (The snippet after the footnotes reproduces these numbers.)

* no AI by 2050

** no AI by 2090, etc.
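To reproduce the footnoted numbers, one can map the same function over the relevant years (reusing the fai definition above):

```haskell
-- with fai defined as above
main :: IO ()
main = mapM_ (print . fai) [11, 50, 90, 98, 99, 100]
-- 0.889, 0.818, 0.473, 0.152, 0.082, 0.0
```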

EDIT: Part of what makes this interesting is that one of the common criticisms of AI is 'look at them, they were wrong about AI being possible in 19xx, how sad and pathetic that they still think it's possible!' The hope function shows that unless one is highly confident about AI showing up in the early part of a time range, the failure of AI to show up ought to damage one's belief only a little bit.


That blog post is also interesting from a mind projection fallacy viewpoint:

"What I found most interesting was, the study provides evidence that people seem to reason as though probabilities were physical properties of matter. In the example with the desk with the eight drawers and an 80% chance a letter is in the desk, many people reasoned as though “80% chance-of-letter” was a fundamental property of the furniture, up there with properties like weight, mass, and density.

Many reasoned that the odds the desk has the letter, stay 80% throughout the fruitless search. Thus, they reasoned, it would still be 80%, even if they searched seven drawers and found no letter. And these were people with some education about probability! One problem is people were tending to overcompensate to avoid falling into the Gambler’s Fallacy. They were educated, well-learned people, and they knew that the probability of a fair coin falling heads remains 50%, no matter how many times in a row heads have already been rolled. They seemed to generalize this to the letter search. There’s an important difference, though: the coin flips are independent of each other. The drawer searches are not.

In a followup study, when the modified questions were posed, with two extra “locked” drawers and a 100% initial probability of a letter, miraculously the respondents’ answers showed dramatic improvement. Even though, formally, the exercises were isomorphic."
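The quoted drawer problem is easy to check directly. Here is a sketch (my own illustration, not code from the study) of both framings: the 8-drawer desk with an 80% prior, and the followup's desk with 2 extra locked drawers and a letter certainly in one of the 10, assuming the letter is equally likely to be in any drawer. The two columns agree, since the problems are isomorphic, and both fall to 1/3 after seven empty drawers.

```haskell
-- P(letter is in the desk | first k of 8 drawers searched, all empty), 0.8 prior
pDesk :: Int -> Double
pDesk k = let hit = 0.8 * fromIntegral (8 - k) / 8
          in hit / (hit + 0.2)

-- Followup framing: 10 drawers, 2 of them locked, letter certainly in one of the 10.
-- P(letter is in an openable drawer | first k openable drawers searched, all empty)
pDesk' :: Int -> Double
pDesk' k = fromIntegral (8 - k) / fromIntegral (10 - k)

main :: IO ()
main = mapM_ (\k -> print (k, pDesk k, pDesk' k)) [0 .. 7]
-- (0,0.8,0.8), (1,0.777...,0.777...), ..., (7,0.333...,0.333...)
```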

Incidentally, I've tried to apply the hope function to my recent essay on Folding@home: http://www.gwern.net/Charity%20is%20not%20about%20helping#updating-on-evidence

nshepperd:

For a non-uniform distribution we can use the similar formula (1.0 - p(before 2011)) / (1.0/0.9 - p(before 2011)), which is analogous to adding an extra blob of (uncounted) probability density (such that if the AI is "actually built" anywhere within the distribution, including the uncounted bit, the prior probability (0.9) is the ratio (counted) / (counted + uncounted)), and then cutting off the part where we know the AI to have not been built.

For a normal(mu = 2050, sigma = 10) distribution, in Haskell this is let ai year = (let p = cumulative (normalDistr 2050 (10^2)) year in (1.0 - p) / (1.0/0.9 - p))¹. Evaluating on a few different years:

* P(AI|not by 2011) = 0.899996
* P(AI|not by 2030) = 0.8979
* P(AI|not by 2050) = 0.8181...
* P(AI|not by 2070) = 0.16995
* P(AI|not by 2080) = 0.012
* P(AI|not by 2099) = 0.00028

This drops off far faster than the uniform case, once 2050 is reached.

We can also use this survey [http://www.aleph.se/andart/archives/2011/04/when_will_we_get_our_robot_overlords.html] as an interesting source for a distribution. The median estimate for P=0.5 is 2050, which gives us the same mu, and the median for P=0.1 was 2028, which fits with sigma ~ 17 years². We also have P=0.9 by 2150, suggesting our prior of 0.9 is in the ballpark. Plugging the same years into the new distribution:

* P(AI|not by 2011) = 0.899
* P(AI|not by 2030) = 0.888
* P(AI|not by 2050) = 0.8181...
* P(AI|not by 2070) = 0.52
* P(AI|not by 2080) = 0.26
* P(AI|not by 2099) = 0.017

Even by 2030 our confidence will have changed little.

¹ Using Statistics.Distribution.Normal [http://hackage.haskell.org/package/statistics-0.8.0.5] from Hackage [http://hackage.haskell.org/packages/hackage.html].

² Technically, the survey seems to have asked about unconditional probabilities, not conditional on AI being possible, whereas the latter is what we want. We may want then to actually fit a normal distribution so that cdf(2028) = 0.1/0.9 and cdf(2050) = 0.5/0.9, which wou
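For reference, here is nshepperd's snippet written out as a standalone program. One caveat: newer versions of the statistics package appear to take the standard deviation rather than the variance as normalDistr's second argument, so this sketch passes sigma directly (10 for the first distribution, 17 for the survey-based one):

```haskell
import Statistics.Distribution (cumulative)
import Statistics.Distribution.Normal (normalDistr)

-- Hope function with a normal prior over the arrival year and a 0.9 prior that
-- AI is possible at all: cut off the probability mass before `year` and renormalize.
ai :: Double -> Double -> Double -> Double
ai mu sigma year =
  let p = cumulative (normalDistr mu sigma) year
  in  (1.0 - p) / (1.0 / 0.9 - p)

main :: IO ()
main = do
  mapM_ (print . ai 2050 10) [2011, 2030, 2050, 2070, 2080, 2099]  -- first list above
  mapM_ (print . ai 2050 17) [2011, 2030, 2050, 2070, 2080, 2099]  -- survey-based list
```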

An inflection point for probability estimates of the AI takeoff?

by Prismattic, 29th Apr 2011



Suppose that your current estimate for the probability of an AI takeoff coming in the next 10 years is some probability x. As technology is constantly becoming more sophisticated, presumably your probability estimate 10 years from now will be some y > x. And 10 years after that, it will be z > y. My question is: does there come a point in the future where, assuming that an AI takeoff has not yet happened in spite of much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect the inflection point in your estimate?
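One way to make the question concrete (my own sketch, reusing the hope-function setup and the survey-based normal(2050, 17) prior from the comments above, with a 0.9 prior that takeoff is possible at all): track P(takeoff within the next 10 years | none yet by year t). Under these assumptions the estimate keeps rising for some years past the prior's median before turning over; the script prints the estimate for a few years and the year at which it peaks.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)
import Statistics.Distribution (cumulative)
import Statistics.Distribution.Normal (normalDistr)

-- P(takeoff within the next 10 years | none yet by year t), under a 0.9 prior
-- that takeoff is possible and a normal(2050, 17) prior over the arrival year
-- (both numbers are just the guesses from the comments above).
nextDecade :: Double -> Double
nextDecade t =
  let f = cumulative (normalDistr 2050 17)
  in  0.9 * (f (t + 10) - f t) / (1 - 0.9 * f t)

main :: IO ()
main = do
  mapM_ (\t -> print (t, nextDecade t)) [2011, 2030, 2050, 2070, 2090]
  print (maximumBy (comparing nextDecade) [2011 .. 2100])  -- the year the estimate turns over
```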