Followup to: Life's Story Continues

Imagine two agents who've never seen an intelligence - including, somehow, themselves - but who've seen the rest of the universe up until now, arguing about what these newfangled "humans" with their "language" might be able to do...

Believer:  Previously, evolution has taken hundreds of thousands of years to create new complex adaptations with many working parts.  I believe that, thanks to brains and language, we may see a new era, an era of intelligent design. In this era, complex causal systems - with many interdependent parts that collectively serve a definite function - will be created by the cumulative work of many brains building upon each others' efforts.

Skeptic:  I see - you think that brains might have something like a 30% speed advantage over natural selection?  So it might take a while for brains to catch up, but after another eight billion years, brains will be in the lead.  But this planet's Sun will swell up by then, so -

Believer:  Thirty percent?  I was thinking more like three orders of magnitude. With thousands of brains working together and building on each others' efforts, whole complex machines will be designed on the timescale of mere millennia - no, centuries!

Skeptic:  What?

Believer:  You heard me.

Skeptic:  Oh, come on!  There's absolutely no empirical evidence for an assertion like that!  Animal brains have been around for hundreds of millions of years without doing anything like what you're saying.  I see no reason to think that life-as-we-know-it will end, just because these hominid brains have learned to send low-bandwidth signals over their vocal cords.  Nothing like what you're saying has happened before in my experience -

Believer:  That's kind of the point, isn't it?  That nothing like this has happened before?  And besides, there is precedent for that kind of Black Swan - namely, the first replicator.

Skeptic:  Yes, there is precedent in the replicators.  Thanks to our observations of evolution, we have extensive knowledge and many examples of how optimization works.  We know, in particular, that optimization isn't easy - it takes millions of years to climb up through the search space.  Why should "brains", even if they optimize, produce such different results?

Believer:  Well, natural selection is just the very first optimization process that got started accidentally.   These newfangled brains were designed by evolution, rather than, like evolution itself, being a natural process that got started by accident.  So "brains" are far more sophisticated - why, just look at them.  Once they get started on cumulative optimization - FOOM!

Skeptic:  So far, brains are a lot less impressive than natural selection.  These "hominids" you're so interested in - can these creatures' handaxes really be compared to the majesty of a dividing cell?

Believer:  That's because they only just got started on language and cumulative optimization.

Skeptic:  Really?  Maybe it's because the principles of natural selection are simple and elegant for creating complex designs, and all the convolutions of brains are only good for chipping handaxes in a hurry.  Maybe brains simply don't scale to detail work.  Even if we grant the highly dubious assertion that brains are more efficient than natural selection - which you seem to believe on the basis of just looking at brains and seeing the convoluted folds - well, there still has to be a law of diminishing returns.

Believer:  Then why have brains been getting steadily larger over time?  That doesn't look to me like evolution is running into diminishing returns.  If anything, the recent example of hominids suggests that once brains get large and complicated enough, the fitness advantage for further improvements is even greater -

Skeptic:  Oh, that's probably just sexual selection!  I mean, if you think that a bunch of brains will produce new complex machinery in just a hundred years, then why not suppose that a brain the size of a whole planet could produce a de novo complex causal system with many interdependent elements in a single day?

Believer:  You're attacking a strawman here - I never said anything like that.

Skeptic:  Yeah?  Let's hear you assign a probability that a brain the size of a planet could produce a new complex design in a single day.

Believer:  The size of a planet?  (Thinks.)  Um... ten percent.

Skeptic:  (Muffled choking sounds.)

Believer:  Look, brains are fast.  I can't rule it out in principle -

Skeptic:  Do you understand how long a day is?  It's the amount of time for the Earth to spin on its own axis, once.  One sunlit period, one dark period.  There are 365,242 of them in a single millennium.

Believer:  Do you understand how long a second is?  That's how long it takes a brain to see a fly coming in, target it in the air, and eat it.  There's 86,400 of them in a day.

Skeptic:  Pffft, and chemical interactions in cells happen in nanoseconds.  Speaking of which, how are these brains going to build any sort of complex machinery without access to ribosomes?  They're just going to run around on the grassy plains in really optimized patterns until they get tired and fall over.  There's nothing they can use to build proteins or even control tissue structure.

Believer:  Well, life didn't always have ribosomes, right?  The first replicator didn't.

Skeptic:  So brains will evolve their own ribosomes?

Believer:  Not necessarily ribosomes.  Just some way of making things.

Skeptic:  Great, so call me in another hundred million years when that evolves, and I'll start worrying about brains.

Believer:  No, the brains will think of a way to get their own ribosome-analogues.

Skeptic:  No matter what they think, how are they going to make anything without ribosomes?

Believer:  They'll think of a way.

Skeptic:  Now you're just treating brains as magic fairy dust.

Believer:  The first replicator would have been magic fairy dust by comparison with anything that came before it -

Skeptic:  That doesn't license throwing common sense out the window.

Believer:  What you call "common sense" is exactly what would have caused you to assign negligible probability to the actual outcome of the first replicator.  Ergo, not so sensible as it seems, if you want to get your predictions actually right, instead of sounding reasonable.

Skeptic:  And your belief that in the Future it will only take a hundred years to optimize a complex causal system with dozens of interdependent parts - you think this is how you get it right?

Believer:  Yes!  Sometimes, in the pursuit of truth, you have to be courageous - to stop worrying about how you sound in front of your friends - to think outside the box - to imagine futures fully as absurd as the Present would seem without benefit of hindsight - and even, yes, say things that sound completely ridiculous and outrageous by comparison with the Past.  That is why I boldly dare to say - pushing out my guesses to the limits of where Truth drives me, without fear of sounding silly - that in the far future, a billion years from now when brains are more highly evolved, they will find it possible to design a complete machine with a thousand parts in as little as one decade!

Skeptic:  You're just digging yourself deeper.  I don't even understand how brains are supposed to optimize so much faster.  To find out the fitness of a mutation, you've got to run millions of real-world tests, right?  And even then, an environmental shift can make all your optimization worse than nothing, and there's no way to predict  that no matter how much you test -

Believer:  Well, a brain is complicated, right?  I've been looking at them for a while and even I'm not totally sure I understand what goes on in there.

Skeptic:  Pffft!  What a ridiculous excuse. 

Believer:  I'm sorry, but it's the truth - brains are harder to understand.

Skeptic:  Oh, and I suppose evolution is trivial?

Believer:  By comparison... yeah, actually.

Skeptic:  Name me one factor that explains why you think brains will run so fast.

Believer:  Abstraction.

Skeptic:  Eh?   Abstrah-shun?

Believer:  It... um... lets you know about parts of the search space you haven't actually searched yet, so you can... sort of... skip right to where you need to be -

Skeptic:  I see.  And does this power work by clairvoyance, or by precognition?  Also, do you get it from a potion or an amulet?

Believer:  The brain looks at the fitness of just a few points in the search space - does some complicated processing - and voila, it leaps to a much higher point!

Skeptic:  Of course.  I knew teleportation had to fit in here somewhere.

Believer:  See, the fitness of one point tells you something about other points -

Skeptic:  Eh?  I don't see how that's possible without running another million tests.

Believer:  You just look at it, dammit!

Skeptic:  With what kind of sensor?  It's a search space, not a bug to eat!

Believer:  The search space is compressible -

Skeptic:  Whaa?  This is a design space of possible genes we're talking about, not a folding bed -

Believer:  Would you stop talking about genes already!  Genes are on the way out!  The future belongs to ideas!

Skeptic:  Give. Me. A. Break.

Believer:  Hominids alone shall carry the burden of destiny!

Skeptic:  They'd die off in a week without plants to eat.  You probably don't know this, because you haven't studied ecology, but ecologies are complicated - no single species ever "carries the burden of destiny" by itself.  But that's another thing - why are you postulating that it's just the hominids who go FOOM?  What about the other primates?  These chimpanzees are practically their cousins - why wouldn't they go FOOM too?

Believer:  Because it's all going to shift to the level of ideas, and the hominids will build on each other's ideas without the chimpanzees participating -

Skeptic:  You're begging the question.  Why won't chimpanzees be part of the economy of ideas?  Are you familiar with Ricardo's Law of Comparative Advantage?  Even if chimpanzees are worse at everything than hominids, the hominids will still trade with them and all the other brainy animals.

Believer:  The cost of explaining an idea to a chimpanzee will exceed any benefit the chimpanzee can provide.

Skeptic:  But why should that be true?  Chimpanzees only forked off from hominids a few million years ago.  They have 95% of their genome in common with the hominids.  The vast majority of optimization that went into producing hominid brains also went into producing chimpanzee brains.  If hominids are good at trading ideas, chimpanzees will be 95% as good at trading ideas.  Not to mention that all of your ideas belong to the far future, so that both hominids, and chimpanzees, and many other species will have evolved much more complex brains before anyone starts building their own cells -

Believer:  I think we could see as little as a million years pass between when these creatures first invent a means of storing information with persistent digital accuracy - their equivalent of DNA - and when they build machines as complicated as cells.

Skeptic:  Too many assumptions... I don't even know where to start...  Look, right now brains are nowhere near building cells.  It's going to take a lot more evolution to get to that point, and many other species will be much further along the way by the time hominids get there.  Chimpanzees, for example, will have learned to talk -

Believer:  It's the ideas that will accumulate optimization, not the brains.

Skeptic:  Then I say again that if hominids can do it, chimpanzees will do it 95% as well.

Believer:  You might get discontinuous returns on brain complexity.  Like... even though the hominid lineage split off from chimpanzees very recently, and only a few million years of evolution have occurred since then, the chimpanzees won't be able to keep up.

Skeptic:  Why?

Believer:  Good question.

Skeptic:  Does it have a good answer?

Believer:  Well, there might be compound interest on learning during the maturational period... or something about the way a mind flies through the search space, so that slightly more powerful abstracting-machinery can create abstractions that correspond to much faster travel... or some kind of feedback loop involving a brain powerful enough to control itself... or some kind of critical threshold built into the nature of cognition as a problem, so that a single missing gear spells the difference between walking and flying... or the hominids get started down some kind of sharp slope in the genetic fitness landscape, involving many changes in sequence, and the chimpanzees haven't gotten started down it yet... or all these statements are true and interact multiplicatively... I know that a few million years doesn't seem like much time, but really, quite a lot can happen.  It's hard to untangle.

Skeptic:  I'd say it's hard to believe.

Believer:  Sometimes it seems that way to me too!  But I think that in a mere ten or twenty million years, we won't have a choice.

Comments:

How is this not a surface analogy?

It is. If it proved anything, the debate would have just ended.

And there may already be another optimizer - maybe several of them - sprung out of brains, of course. One may be a humble piece of "mechanical solution searcher" software, or something of this kind. Unexpected, but quite real and powerful.

Believer: The search space is compressible -

The space of behaviors of Turing machines is not compressible; subspaces are, but not the whole lot. What space do you expect the SeedAIs to be searching? If you could show that it is compressible and bound to contain a vast number of better versions of the SeedAI, then you could convince me that I should worry about Fooming.

As such when I think of self-modifiers I think of them searching the space of Turing Machines, which just seems hard.

The space of possible gene combinations is not compressible - under the evolutionary mechanism.

The space of behaviours of Turing machines is not compressible, in the terms in which that compression has been envisaged.

The mechanism that compresses search space that Believer posits is something to do with brains; something to do with intelligence. And it works - we know it does; Kekule works on the structure of benzene without success; sleeps, dreams of a serpent biting its own tail, and waking, conceives of the benzene ring.

The mechanism (and everyone here believes that it is a mechanism) is currently mysterious. AI must possess this mechanism, or it will not be AI.

Programmers are also supposed to search the space of Turing machines, which seems really hard. Programming in Brainfuck is hard. All the software written in higher-level languages consists of points in a mere subspace... If optimizing in this subspace has proven so effective, I don't think we have a reason to worry that the only working solutions to our problem - more intelligent AI designs - lie in incompressible subspaces.

Why not? The space we search has been very useful for everything except finding the solution to the creation of a mind. Perhaps the space of minds lies outside the space of Turing machines we are currently searching. It would certainly explain why no one has been very successful so far.

Not to say that we could never find a mind. Just that we might have trouble using a compressible search space.
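As an aside, it may help to pin down what "compressible" is doing in this exchange. Here is a minimal toy sketch - entirely my own construction, with invented landscapes and searchers, not anything from the post or the comments - reading "compressible" as "the fitness of one point tells you something about nearby points". On a landscape with that property, a searcher that exploits neighbour correlations beats blind sampling; on a landscape of independent random fitnesses, it gains nothing.

```python
# Toy sketch only: two made-up landscapes, two made-up searchers.
import random

N = 40          # genome length in bits
EVALS = 2000    # fitness evaluations allowed per searcher

def structured_fitness(bits):
    # Neighbouring genomes (one bit apart) have nearly equal fitness.
    return sum(bits)

_table = {}
def unstructured_fitness(bits):
    # Every genome gets an independent random score, so knowing one
    # point's fitness says nothing about any other point.
    return _table.setdefault(tuple(bits), random.uniform(0, N))

def blind_search(fitness, evals):
    # Sample random genomes and remember the best score seen.
    return max(fitness([random.randint(0, 1) for _ in range(N)])
               for _ in range(evals))

def hill_climb(fitness, evals):
    # Exploit neighbour correlations: keep single-bit flips that don't hurt.
    bits = [random.randint(0, 1) for _ in range(N)]
    best = fitness(bits)
    for _ in range(evals - 1):
        i = random.randrange(N)
        bits[i] ^= 1                  # flip one bit
        score = fitness(bits)
        if score >= best:
            best = score              # keep the improvement
        else:
            bits[i] ^= 1              # revert the flip
    return best

random.seed(0)
print("unstructured landscape: blind", round(blind_search(unstructured_fitness, EVALS), 1),
      "vs hill-climb", round(hill_climb(unstructured_fitness, EVALS), 1))
print("structured landscape:   blind", blind_search(structured_fitness, EVALS),
      "vs hill-climb", hill_climb(structured_fitness, EVALS))
```

The toy only shows that "one point tells you about other points" is a property of the landscape, not of the searcher; whether the space of minds (or of Turing machines) has that property is exactly what is in dispute above.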


A Turing machine with a random number generator is already something more complex than just a TM.

Some abstractions are not that useful, after all. TM+RNG may be a better try.

Species boundaries are pretty hard boundaries to the transfer of useful genetic information. So once proto-humans stumbled on key brain innovations there really wasn't much of a way to transfer that to chimps. The innovation could only spread via the spread of humans. But within the human world innovations have spread not just by displacement, but also by imitation and communication. Yes conflicting cultures, languages, and other standards often limit the spread of innovations between humans, but even so this info leakage has limited the relative gains to those first with an innovation. The key question is then what barriers to the spread of innovation would prevent this situation from continuing with future innovations.

Set at some random time before human brains appeared, the timing estimate would be much more difficult. Believer's sense of imminence is helped a lot by humans already existing.

This is a brilliant parable on behalf of possibility, but it doesn't end the argument (as Eli freely admits) because it doesn't have bearing on probability.

As a surface analogy, it makes it pretty clear that discontinuous and radical, even unbelievable change is possible. But it doesn't make it probable.

"Are you familiar with Ricardo's [...]"

It was cute in "The Simple Truth," but in the book, you might want to consider cutting down on the anachronisms. The premise of intelligences who've never seen an intelligence falls under standard suspension of disbelief, but explicit mention of Ricardo or the Wright brothers is a bit grating.

Robin: Species boundaries are pretty hard boundaries to the transfer of useful genetic information. So once proto-humans stumbled on key brain innovations there really wasn't much of a way to transfer that to chimps. The innovation could only spread via the spread of humans. But within the human world innovations have spread not just by displacement, but also by imitation and communication. Yes conflicting cultures, languages, and other standards often limit the spread of innovations between humans, but even so this info leakage has limited the relative gains to those first with an innovation. The key question is then what barriers to the spread of innovation would prevent this situation from continuing with future innovations.

If there's a way in which I've been shocked by how our disagreement has proceeded so far, it's the extent to which you think that vanilla abstractions of economic growth and productivity improvements suffice to cover the domain of brainware increases in intelligence: Engelbart's mouse as analogous to e.g. a bigger prefrontal cortex. We don't seem to be thinking in the same terms at all.

To me, the answer to the above question seems entirely obvious - the intelligence explosion will run on brainware rewrites and, to a lesser extent, hardware improvements. Even in the (unlikely) event that an economy of trade develops among AIs sharing improved brainware and improved hardware, a human can't step in and use off-the-shelf an improved cortical algorithm or neurons that run at higher speeds. Not without technology so advanced that the AI could build a much better brain from scratch using the same resource expenditure.

The genetic barrier between chimps and humans is now permeable in the sense that humans could deliberately transfer genes horizontally, but it took rather a large tech advantage to get to that point...

Eliezer ---

Do you think this analogy is useful for estimating the value of friendliness? That is, is the impact of humans on other species and our environment during this explosion of intelligence a useful frame for pondering the impact of a rapidly evolving AI on humans?

I think it has potential to be useful, but I'm not sure in which direction it should be read.

While we've driven some species toward extinction, others have flourished. And while I'm optimistic that as intelligence increases we'll be better able to control our negative impacts on the environment, I'm also worried that as the scale of our impacts increases a single mistake could be fatal.

That's the other thing about analogies, of course - I'm very carefully picking which analogies to make, using knowledge not itself contained in the analogy. AIs are not like humans, their advent won't be like the advent of human intelligence. The main point of this analogy is to avoid absurdity bias - to try to appreciate how surprising intelligence, like the first replicator, would have seemed at the time without benefit of hindsight.

Eliezer, it may seem obvious to you, but this is the key point on which we've been waiting for you to clearly argue. In a society like ours, but also with one or more AIs, and perhaps ems, why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?

This comment crystallised for me the weirdness of this whole debate (I'm not picking sides, or even imagining that I have the capacity to do so intelligently).

In the spirit of the originating post, imagine two worms are discussing the likely characteristics of intelligent life, some time before it appears (I'm using worms as early creatures with brains, allowing for the possibility that intelligence is a continuum - that worms are as far from humans as humans are from some imagined AI that has foomed for a day or two);

Worm1: I tell you it's really important to consider the possibility that these "intelligent beings" might want all the dead leaf matter for themselves, and wriggle much faster than us, with better sensory equipment.....

Worm2: But why can't you see that, as super intelligent beings, they will understand the cycle of life, from dead leaves, to humus, to plants and back again. It is hard to imagine that they won't understand that disrupting this flow will be sub-optimal....

I cannot imagine how, should effective AI come into existence, these debates will not seem as quaint as those 'how many angels would fit onto the head of a pin' ones that we fondly ridicule.

The problem is that the same people who were talking about such ridiculous notions were also: laying the foundation stones of western philosophical thinking; preserving and transmitting classical texts; and developing methodologies that would eventually underpin the scientific method - and they didn't distinguish between them!

Anyways, if the 1st goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?

Even if there are competing AIs, if they are good enough they probably would agree on what is worth trying next, so there would be no or minimal conflict.

They would focus on transmitting what they want to be, not what they currently are.

...come to think of it, once genetic engineering has advanced enough, why would humans not do the same?

This is correct, but only in so far as the better AI has the same goals as the current AI. If the first AI cares about maximizing Google's stock value, and the second better AI cares about maximizing Microsoft's stock value, then the first AI will definitely not want to stop existing and hand over all resources to the second one.

Anyways, if the 1st goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?

That's what self-improvement is, in a sense. See Tiling. (Also consider that improvement is an instrumental goal for a well-designed and friendly seed AI.)

Even if there are competing AIs, if they are good enough they probably would agree on what is worth trying next, so there would be no or minimal conflict.

Except that whoever decides the next AI's goals Wins, and the others Lose - the winner has their goals instantiated, and the losers don't. Perhaps they'd find some way to cooperate (such as a values handshake - the average of the values of all contributing AIs, perhaps weighted by the probability that each one would be the first to make the next AI on their own), but that would be overcoming conflict which exists in the first place.

Essentially, they might agree on the optimal design of the next AI, but probably not on the optimal goals of the next AI, and so each one has an incentive to not reveal their discoveries. (This assumes that goals and designs are orthogonal, which may not be entirely true - certain designs may be Safer for some goals than for others. This would only serve to increase conflict in the design process.)

They would focus on transmitting what they want to be, not what they currently are.

Yes, that is the point of self-improvement for seed AIs - to create something more capable but with the same (long-term) goals. They probably wouldn't have a sense of individual identity which would be destroyed with each significant change.

why would innovations discovered by a single AI not spread soon to the others

I think Eliezer's saying that, though such innovations indeed might spread between AIs, they wouldn't spread to humans. But I'm puzzled why you press this scenario when he's said before that he thinks it unlikely that there would be competition between multiple superintelligent AIs. This seems like the more salient point of disagreement.

why would a non-friendly AI not use those innovations to trade, instead of war?

The comparative advantage analysis ignores the opportunity cost of not killing and seizing property. Between humans, the usual reason it's not worth killing is that it destroys human capital, usually the most valuable possession. But an AI or an emulation might be better off seizing all CPU time than trading with others.

Once the limiting resource is not the number of hours in a day, the situation is very different. Trade might still make sense, but it might not.
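To make that opportunity-cost point concrete, here is a back-of-envelope sketch with numbers invented purely for illustration (my own toy model, not anything from the comment). Comparative advantage says trade beats autarky; it does not say trade beats seizing the other party's resources when those resources are just as productive in your own hands and conflict is cheap.

```python
# Toy model with invented numbers: when does seizing a resource beat
# trading for its output?

def trade_payoff(own_cpu, partner_cpu, own_prod, partner_prod, surplus_share):
    # Trading: I run my own CPU, the partner runs theirs, and I capture
    # some share of the partner's output as gains from trade.
    return own_cpu * own_prod + surplus_share * partner_cpu * partner_prod

def seize_payoff(own_cpu, partner_cpu, own_prod, conflict_cost):
    # Seizing: I run all the CPU myself, minus whatever the conflict destroys.
    return (own_cpu + partner_cpu) * own_prod - conflict_cost

# Hypothetical case: the AI is 100x more productive per CPU-hour than its partner.
print(trade_payoff(own_cpu=10, partner_cpu=90, own_prod=100,
                   partner_prod=1, surplus_share=0.5))   # 1045.0
print(seize_payoff(own_cpu=10, partner_cpu=90, own_prod=100,
                   conflict_cost=500))                    # 9500
```

Between humans, the analogous calculation usually favors trade because the most valuable resource is the other person's labor, which killing destroys; the comment's point is that seized CPU time, unlike labor, survives its former owner.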


Do you think the chimpanzees had any idea when their cousins took off and went foom?  What makes you think we'll be any more aware when the next jump happens?  Won't the cost of clueing us in just as surely exceed any benefit we might provide in return?

"why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?"

One of Eliezer's central (and I think, indisputable) claims is that a hand-made AI, after undergoing recursive self-improvement, could be powerful in the real world while being WILDLY unpredictable in its actions. It doesn't have to be economically rational.

Given a paperclip-manufacturing AI, busily converting earth into grey goo into paperclips, there's no reason to believe we could communicate with it well enough to offer to help.

Oh, and I suppose evolution is trivial? [...] By comparison... yeah, actually.

Nature was compressing the search space long before large brains came along.

For example, bilateral symmetry is based partly on the observation that an even number of legs works best. Nature doesn't need to search the space of centipedes with an odd number of legs. It has thus compressed the search space by a factor of two. There are very many such economies - explored by those that study the evolution of evolvability.

Surely this is not an example of search-space compression, but an example of local islands of fitness within the space? Evolution does not 'make observations', or proceed on the basis of abstractions.

An even number of legs 'works best' precisely for the creatures who have evolved in the curtailed (as opposed to compressed) practical search space of a local maximum. This is not a proof that an even number of legs works best, period.

Once bilateral symmetry has evolved, the journey from bilateralism to any other viable body plan is simply too difficult to traverse. Nature DOES search the fringes of the space of centipedes with an odd number of legs - all the time.

http://www.wired.com/magazine/2010/04/pl_arts_mutantbugs/

That space just turns out to be inhospitable, time and time again. One day, under different conditions, it might not.

BTW, I am not claiming, either, that it is untrue that an even number of legs works best - simply that the evolution of creatures with even numbers of legs and any experimental study showing that even numbers of legs are optimal are two different things. Mutually reinforcing, but distinct.
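For what it's worth, here is a small counting sketch - toy numbers, my own construction - of the two different senses of "shrinking the space" at issue in this sub-thread. A rule like "even total leg count" prunes only a factor of two, while a constraint like bilateral mirroring, which ties the two halves of the body plan together, collapses the space exponentially; whether either counts as compression of the search space or mere curtailment to an island within it is precisely the disagreement above.

```python
# Toy counting only: how much different body-plan constraints shrink a
# design space of n segments, each with one of k leg configurations.
from itertools import product

n, k = 6, 4   # invented numbers: 6 segments, 4 possible leg counts each

designs = list(product(range(k), repeat=n))
unconstrained = len(designs)                                # k**n = 4096
even_total    = sum(1 for d in designs if sum(d) % 2 == 0)  # exactly half: 2048
mirrored      = k ** (n // 2)                               # left half copies right: 64

print(unconstrained, even_total, mirrored)
```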

If nothing else, hugely entertaining, and covers a great deal of ground too.


Ah. So sad I am getting to read this debate so long after the fact. I can see really useful points on both sides, but thus far Robin does not seem to realize that the universe has been running a terrible genetic algorithm for millions of years in Matlab on a computer with nearly zero RAM. FOOM'd AI, even if using the same crappy genetic algorithm, would be like dropping it into a brand new 10 GHz processor with 100 GB of RAM and rewriting it all in optimized C++. And then if you learn from genetic algorithms how to start making legitimate optimization methods for intelligent agents, it's a whole different ball of wax. The categorical type of advantage one could hold over peers isn't even on the radar screen of humankind's knowledge of social science.
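Reading that last comment, it may help to separate the two advantages it lumps together. Below is a minimal sketch - toy problem and toy searchers of my own, standing in loosely for a "crappy genetic algorithm" versus a "legitimate optimization method" - of the difference: running the same algorithm on hardware 10,000x faster only divides wall-clock time by 10,000, whereas a method that exploits the structure of the problem can need categorically fewer fitness evaluations in the first place.

```python
# Toy sketch: same problem, two invented searchers.  Faster hardware just
# divides the wall-clock time of either one; a smarter method changes how
# many evaluations are needed at all.
import random

N = 64
def fitness(bits):                 # "onemax": number of correct bits
    return sum(bits)

def blind_mutate_and_select():     # evolution-style: random mutation, keep if not worse
    bits, evals = [0] * N, 0
    while fitness(bits) < N:
        trial = bits[:]
        trial[random.randrange(N)] ^= 1
        evals += 1
        if fitness(trial) >= fitness(bits):
            bits = trial
    return evals

def informed_probe():              # test each bit once and keep what helps
    bits, evals = [0] * N, 0
    for i in range(N):
        trial = bits[:]
        trial[i] ^= 1
        evals += 1
        if fitness(trial) > fitness(bits):
            bits = trial
    return evals

random.seed(1)
print("blind mutate-and-select evaluations:", blind_mutate_and_select())  # a few hundred on average
print("informed probing evaluations:       ", informed_probe())           # exactly 64
```

Which of those two kinds of advantage a self-improving AI would actually get is, of course, part of what the debate above is about.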