If you're interested in evolution, anthropics, and AI timelines -- or in what the Singularity Institute has been producing lately -- you might want to check out this new paper, by SingInst research fellow Carl Shulman and FHI professor Nick Bostrom.

The paper:

How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects 

The abstract:

Several authors have made the argument that because blind evolutionary processes produced human intelligence on Earth, it should be feasible for clever human engineers to create human-level artificial intelligence in the not-too-distant future.  This evolutionary argument, however, has ignored the observation selection effect that guarantees that observers will see intelligent life having arisen on their planet no matter how hard it is for intelligent life to evolve on any given Earth-like planet.  We explore how the evolutionary argument might be salvaged from this objection, using a variety of considerations from observation selection theory and analysis of specific timing features and instances of convergent evolution in the terrestrial evolutionary record.  We find that a probabilistic version of the evolutionary argument emerges largely intact once appropriate corrections have been made.

I'd be interested to hear LW-ers' takes on the content; Carl, too, would much appreciate feedback.

gwern:

Mildly interesting; but kind of odd to read a whole paper on estimating the difficulty of evolving intelligence with no reference to the success, or lack thereof, of existing genetic algorithms and optimization!

Hans Moravec (1976, 1998, 1999) argues that human evolution shows that such AI is not just possible, but feasible within this century.

Moravec 1998 and 1999 are not listed in the references. Moravec paid little attention to observation selection effects - but if he made any bad arguments as a result, they could benefit from being more precisely identified - e.g. with quotes or page numbers.

Thanks, Tim.

I think I understand why

“hard step” models predict few sequential hard steps in the few hundred million years since that ancestor

but I do not see why these

count against extreme evolutionary challenge in developing human-level intelligence

I think it's explained in this passage, but I'm having trouble following the reasoning:

Thus, the “hard steps” model rules out a number of possible “hard intelligence” scenarios: evolution typically may take prohibitively long to get through certain “hard steps”, but, between those steps, the ordinary process of evolution suffices, even without observation selection effects, to create something like the progression we see on Earth. If Earth’s remaining habitable period is close to that given by estimates of the sun’s expansion, observation selection effects could not have given us hundreds or thousands of steps of acceleration, and so could not, for example, have uniformly accelerated the evolution of human intelligence across the last few billion years.

Could the authors, or someone who does understand it, expand on it a bit?

One might have assigned significant prior probability to there being hundreds of sequential hard innovations required for human intelligence, e.g. in brain design. There might have been ten, a hundred, or a billion. If the hard steps model, combined with substantial remaining time in our habitable window, can lop off almost all of the probability mass assigned to those scenarios (which involve hard intelligence), that is a boost for the easy intelligence hypothesis.

Also, the fewer hard innovations that humans must replicate in creating AI, the more likely we are to succeed.
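To make the lop-off concrete, here is a toy Bayesian sketch (my own illustrative numbers, not figures from the paper). Conditional on n sequential hard steps all fitting inside a habitable window of length T, the completion times behave like n sorted uniform draws on [0, T], so the probability of at least L of the window remaining afterwards is ((T - L) / T)^n, which crushes large n:

```python
import numpy as np

# Illustrative numbers only: T = total habitable window, L = time left.
T = 5.5   # Gyr
L = 1.0   # Gyr

# Conditional on n sequential hard steps all fitting inside the window,
# their completion times behave like n sorted uniform draws on [0, T],
# so P(at least L of the window remains | n) = ((T - L) / T) ** n.
# We use that tail probability as a simple stand-in for the likelihood.
ns = np.array([1, 2, 5, 10, 100, 1000])
prior = np.full(len(ns), 1.0 / len(ns))   # flattish prior over step counts

likelihood = ((T - L) / T) ** ns
posterior = prior * likelihood
posterior /= posterior.sum()

for n, p in zip(ns, posterior):
    print(f"n = {n:4d}: posterior {p:.4f}")
```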

Thanks, that answers my question. But the hard steps model also rules out scenarios involving many steps, each individually easy, leading to human intelligence, right? Why think that overall it gives a boost for the easy intelligence hypothesis?

But the hard steps model also rules out scenarios involving many steps, each individually easy, leading to human intelligence, right?

If the steps are sequential, the time to evolve human intelligence is the sum of many independent small step times, which (absent a cutoff) gives you a roughly normal distribution. So if there were a billion steps with step times of a million years, you would expect us to find ourselves much closer to the end of Earth's habitable window.
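A minimal simulation of that claim, assuming exponential step times and illustrative units:

```python
import numpy as np

rng = np.random.default_rng(0)

# n easy sequential steps, each exponentially distributed with mean m.
# The total is Gamma(n, m): mean n*m, s.d. sqrt(n)*m, so the relative
# spread shrinks like 1/sqrt(n) and the completion time is tightly pinned.
n, m = 1000, 4e-3                  # 1000 steps averaging 4 Myr (in Gyr)
totals = rng.gamma(shape=n, scale=m, size=100_000)

print(f"mean total time: {totals.mean():.3f} Gyr")             # ~4.0
print(f"s.d. of total:   {totals.std():.3f} Gyr")              # ~0.13
print(f"relative spread: {totals.std() / totals.mean():.2%}")  # ~3%

# With vastly more steps (e.g. a billion million-year steps), n*m dwarfs
# the habitable window; conditional on fitting inside it at all, the
# expected leftover is only ~window/(n+1) -- we'd sit at the very edge.
```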

Why think that overall it gives a boost for the easy intelligence hypothesis?

Let's take as our starting point that intelligence is difficult enough to occur in less than 1% of star systems like ours. One supporting argument is that if we started with a flattish prior over difficulty, much of the credence for intelligence being at least that easy to evolve would have been on scenarios in which intelligence was easy enough to reliably develop near the beginning of Earth's habitable window (see Carter 1983). Another is the Great Filter, the lack of visible alien intelligence.

So we need some barriers to the evolution of intelligence. The hard steps analysis then places limits on their number, and stronger limits on the number since the development of brains, or primates. That suggests the barriers will collectively be much easier for engineers to work around than a random draw from our distribution after updating on the above considerations, but before considering the hard steps models.

We had more explanation of this, cut for space constraints. Perhaps we should reinstate it.
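In the meantime, a toy version of the Carter-style update above might look like this (log-uniform prior over the expected evolution time tau; all numbers illustrative):

```python
import numpy as np

# Flattish (log-uniform) prior over tau, the expected time for
# intelligence to evolve; condition on intelligence actually arising
# ~4.5 Gyr into a ~5.5 Gyr habitable window.  Numbers are illustrative.
taus = np.logspace(-1, 4, 500)         # candidate taus, 0.1 to 10,000 Gyr
prior = np.full(len(taus), 1.0 / len(taus))

T, t_obs = 5.5, 4.5                    # window length; observed arrival time
rate = 1.0 / taus

# Arrival density at t_obs, conditioned on arriving within the window
# at all (the observation selection effect):
likelihood = rate * np.exp(-rate * t_obs) / (1.0 - np.exp(-rate * T))

posterior = prior * likelihood
posterior /= posterior.sum()

easy = taus < 1.0                      # "easy" = expected in under 1 Gyr
print(f"prior P(easy)     = {prior[easy].sum():.3f}")      # ~0.20
print(f"posterior P(easy) = {posterior[easy].sum():.4f}")  # far smaller
```

Very easy scenarios predict arrival near the start of the window, so observing a late arrival strips them of most of their credence.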

This paper approvingly builds on Robin Hanson's hard steps paper. I've read that a couple of times now. It has some good stuff about early life, but I can't make sense of its thesis about hard steps being evenly spaced - it seems to be just assuming what it proves. Also, its conclusion seems outrageous to me - there's no way you can figure such things out from an armchair without looking at the historical evidence.

but I can't make sense of its thesis about hard steps being evenly spaced - it seems to be just assuming what it proves

Robin ran Monte Carlo simulations and reported the results in the paper, including the large standard deviation. The estimates of the fraction of planets with intelligence that have remaining habitable windows of a certain size come from those simulations. You can construct your own in MATLAB to test the result.
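For instance, a quick sketch of such a check in Python (assumed parameters, not Robin's actual code) shows that even spacing is not assumed anywhere - it falls out of conditioning on success:

```python
import numpy as np

rng = np.random.default_rng(1)

# n "hard" steps (expected duration >> window) must all finish within a
# window T; keep only the rare successful runs and look at when each
# step completed.  Even spacing is not assumed anywhere.
n, T = 4, 1.0                  # window normalized to 1
mean_step = 3.0                # each step expected to take 3x the window

steps = rng.exponential(mean_step, size=(2_000_000, n))
finish = np.cumsum(steps, axis=1)
ok = finish[:, -1] < T         # successful histories only

print(f"success rate: {ok.mean():.1e}")   # rare, as expected
print("mean completion times:", finish[ok].mean(axis=0).round(3))
# In the very-hard limit these approach k*T/(n+1): [0.2 0.4 0.6 0.8]
print("even-spacing prediction:", np.arange(1, n + 1) * T / (n + 1))
```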

The paper apparently makes no attempt to model destructive forces. It doesn't entertain the possibility of hard steps being undone. The history of life on the planet has seen some pretty epic-scale destruction. So: I am rather sceptical that such a model can have much of interest to say about the actual evolutionary process. I don't think the model withstands the K-T extinction.

A lot of "anthropic" papers are giving SSA and SIA equal time. Surely this whole business isn't that complicated and should just be sorted out. The SIA is much less general than the SSA - and should surely be swallowed by a more general theory which includes the "reference class" concept.

Agreed - I saw no mention of states of information in the part of the paper I read, which is really all this is about.

However, the Self-Indication Assumption has a number of such implications, e.g. that if we non-indexically assign any finite positive prior probability to the world containing infinitely many observers like us then post-SIA we must believe that this is true with probability 1.
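For concreteness, here is a toy version of the reweighting behind that claim (illustrative numbers): SIA multiplies each hypothesis's prior by its number of observers and renormalizes, so ever-larger worlds dominate for any positive prior.

```python
import numpy as np

# SIA: posterior weight of a world is proportional to (prior) x (number
# of observers like us in that world).  Toy numbers, purely illustrative.
priors      = np.array([0.99, 0.01])     # small world vs. huge world
n_observers = np.array([1e3, 1e12])

posterior = priors * n_observers
posterior /= posterior.sum()
print(posterior)   # ~[1e-7, 1]: the huge world swamps the small one

# Let the second world's observer count grow without bound and its
# posterior tends to 1 for any positive prior -- hence, with literally
# infinitely many observers, probability 1.
```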

Probability 1?!? Hang on - how confident was the assignment of probabilities to an infinite number of observers in the first place? I don't think this is a blemish on the Self-Indication Assumption - the moral is more that one shouldn't mess about idly with infinity.