And there may already be another optimizer out there - maybe several of them. Sprung out of brains, of course. One may be a humble "mechanical solution searcher" program, or something of that kind. Unexpected, but quite real and powerful.
Believer: The search space is compressible -
The space of behaviors of Turing machines is not compressible; subspaces are, but not the whole lot. What space do you expect the SeedAIs to be searching? If you could show that it is compressible and bound to contain a vast number of better versions of the SeedAI, then you could convince me that I should worry about Fooming.
As such, when I think of self-modifiers, I think of them searching the space of Turing machines, which just seems hard.
The space of possible gene combinations is not compressible - under the evolutionary mechanism.
The space of behaviours of Turing machines is not compressible, in the terms in which that compression has been envisaged.
The search-space-compressing mechanism that Believer posits has something to do with brains, something to do with intelligence. And it works - we know it does: Kekulé works on the structure of benzene without success; sleeps, dreams of a serpent biting its own tail; and, waking, conceives of the benzene ring.
The mechanism (and everyone here believes that it is a mechanism) is currently mysterious. AI must possess this mechanism, or it will not be AI.
Programmers are also supposed to be searching the space of Turing machines, which seems really hard. Programming in Brainfuck is hard. All the software written in higher-level languages consists of points in a mere subspace... If optimizing within this subspace has proven so effective, I don't think we have reason to worry about incompressible subspaces containing the only working solutions to our problems, namely more intelligent AI designs.
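A toy sketch of that point (my own illustration; the tiny "DSL" and its primitives are invented for the example, and nothing here is meant as a serious model of program search): a raw search over Brainfuck-style symbol strings is astronomically large, while a search restricted to a small higher-level subspace can recover the same behaviour almost immediately.

```python
from itertools import product

# Raw search space: all strings over the 8 Brainfuck symbols up to length n.
# We never run these; the point is only how fast the space grows.
BF_SYMBOLS = "><+-.,[]"

def raw_space_size(n):
    return sum(len(BF_SYMBOLS) ** k for k in range(1, n + 1))

# "High-level" subspace: a hypothetical DSL of unary functions, built by
# composing a handful of primitives up to depth 2.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: x * 2,
    "sqr": lambda x: x * x,
}

def dsl_candidates():
    names = list(PRIMITIVES)
    for a in names:                          # depth 1
        yield (a,), PRIMITIVES[a]
    for a, b in product(names, repeat=2):    # depth 2: apply a, then b
        f, g = PRIMITIVES[a], PRIMITIVES[b]
        yield (a, b), (lambda x, f=f, g=g: g(f(x)))

def search_dsl(examples):
    """Return the first DSL program consistent with all (input, output) pairs."""
    for name, fn in dsl_candidates():
        if all(fn(x) == y for x, y in examples):
            return name
    return None

if __name__ == "__main__":
    print("raw 8-symbol strings up to length 20:", raw_space_size(20))
    print("DSL candidates searched at most:", 4 + 4 * 4)
    print("program that doubles its input:", search_dsl([(1, 2), (3, 6), (10, 20)]))
```

The numbers do the arguing: the raw space up to length 20 contains over 10^18 strings, while the DSL subspace here has 20 candidates - and, of course, the subspace can only ever express what its primitives allow.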
Why not? The space we search has been very useful, apart from for finding the solution to the creation of a mind. Perhaps the space of minds lies outside the space of Turing machines we are currently searching. That would certainly explain why no one has been very successful so far.
Not to say that we could never find a mind. Just that we might have trouble using a compressible search space.
A Turing machine with a random number generator is already something more complex than just a TM.
Some abstractions are not that useful, after all. TM+RNG may be a better try.
Species boundaries are pretty hard boundaries to the transfer of useful genetic information. So once proto-humans stumbled on key brain innovations there really wasn't much of a way to transfer that to chimps. The innovation could only spread via the spread of humans. But within the human world innovations have spread not just by displacement, but also by imitation and communication. Yes, conflicting cultures, languages, and other standards often limit the spread of innovations between humans, but even so this info leakage has limited the relative gains to those first with an innovation. The key question is then what barriers to the spread of innovation would prevent this situation from continuing with future innovations.
At some random time before human brains appeared, the timing estimate would have been much more difficult. Believer's sense of imminence is helped a lot by humans' existence.
This is a brilliant parable on behalf of possibility, but it doesn't end the argument (as Eli freely admits) because it has no bearing on probability.
As a surface analogy, it makes it pretty clear that discontinuous and radical, even unbelievable, change is possible. But it doesn't make it probable.
"Are you familiar with Ricardo's [...]"
It was cute in "The Simple Truth," but in the book, you might want to consider cutting down on the anachronisms. Intelligences who've never seen an intelligence fall under standard suspension of disbelief, but the explicit mentions of Ricardo or the Wright brothers are a bit grating.
Robin: Species boundaries are pretty hard boundaries to the transfer of useful genetic information. So once proto-humans stumbled on key brain innovations there really wasn't much of a way to transfer that to chimps. The innovation could only spread via the spread of humans. But within the human world innovations have spread not just by displacement, but also by imitation and communication. Yes, conflicting cultures, languages, and other standards often limit the spread of innovations between humans, but even so this info leakage has limited the relative gains to those first with an innovation. The key question is then what barriers to the spread of innovation would prevent this situation from continuing with future innovations.
If there's a way in which I've been shocked by how our disagreement has proceeded so far, it's the extent to which you think that vanilla abstractions of economic growth and productivity improvements suffice to cover the domain of brainware increases in intelligence: Engelbart's mouse as analogous to e.g. a bigger prefrontal cortex. We don't seem to be thinking in the same terms at all.
To me, the answer to the above question seems entirely obvious - the intelligence explosion will run on brainware rewrites and, to a lesser extent, hardware improvements. Even in the (unlikely) event that an economy of trade develops among AIs sharing improved brainware and improved hardware, a human can't step in and use an improved cortical algorithm, or neurons that run at higher speeds, off the shelf. Not without technology so advanced that the AI could build a much better brain from scratch using the same resource expenditure.
The genetic barrier between chimps and humans is now permeable in the sense that humans could deliberately transfer genes horizontally, but it took rather a large tech advantage to get to that point...
Eliezer ---
Do you think this analogy is useful for estimating the value of friendliness? That is, is the impact of humans on other species and our environment during this explosion of intelligence a useful frame for pondering the impact of a rapidly evolving AI on humans?
I think it has potential to be useful, but I'm not sure in which direction it should be read.
While we've driven some species toward extinction, others have flourished. And while I'm optimistic that as intelligence increases we'll be better able to control our negative impacts on the environment, I'm also worried that as the scale of our impacts increases a single mistake could be fatal.
That's the other thing about analogies, of course - I'm very carefully picking which analogies to make, using knowledge not itself contained in the analogy. AIs are not like humans, their advent won't be like the advent of human intelligence. The main point of this analogy is to avoid absurdity bias - to try to appreciate how surprising intelligence, like the first replicator, would have seemed at the time without benefit of hindsight.
Eliezer, it may seem obvious to you, but this is the key point on which we've been waiting for you to clearly argue. In a society like ours, but also with one or more AIs, and perhaps ems, why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?
This comment crystallised for me the weirdness of this whole debate (I'm not picking sides, or even imagining that I have the capacity to do so intelligently).
In the spirit of the originating post, imagine two worms discussing the likely characteristics of intelligent life, some time before it appears. (I'm using worms as early creatures with brains, allowing for the possibility that intelligence is a continuum - that worms are as far from humans as humans are from some imagined AI that has foomed for a day or two.)
Worm1: I tell you, it's really important to consider the possibility that these "intelligent beings" might want all the dead leaf matter for themselves, and wriggle much faster than us, with better sensory equipment...
Worm2: But why can't you see that, as super intelligent beings, they will understand the cycle of life, from dead leaves, to humus, to plants and back again? It is hard to imagine that they won't understand that disrupting this flow will be sub-optimal...
I cannot imagine how, should effective AI come into existence, these debates will not seem as quaint as those 'how many angels would fit onto the head of a pin' ones that we fondly ridicule.
The problem is that the same people who were talking about such ridiculous notions were also laying the foundation stones of Western philosophical thinking, preserving and transmitting classical texts, and developing the methodologies that would eventually underpin the scientific method - and they didn't distinguish between them!
Anyway, if the first goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?
Even if there are competing AIs, if they are good enough they probably would agree on what is worth trying next, so there would be no or minimal conflict.
They would focus on transmitting what they want to be, not what they currently are.
...come to think of it, once genetic engineering has advanced enough, why would humans not do the same?
This is correct, but only in so far as the better AI has the same goals as the current AI. If the first AI cares about maximizing Google's stock value, and the second better AI cares about maximizing Microsoft's stock value, then the first AI will definitely not want to stop existing and hand over all resources to the second one.
Anyway, if the first goal of an AI is to improve, why would it not happily give away its hardware to implement a new, better AI?
That's what self-improvement is, in a sense. See Tiling. (Also consider that improvement is an instrumental goal for a well-designed and friendly seed AI.)
Even if there are competing AIs, if they are good enough they probably would agree on what is worth trying next, so there would be no or minimal conflict.
Except that whoever decides the next AI's goals Wins, and the others Lose - the winner has their goals instantiated, and the losers don't. Perhaps they'd find some way to cooperate (such as a values handshake - the average of the values of all contributing AIs, perhaps weighted by the probability that each one would be the first to make the next AI on their own), but that would be overcoming conflict which exists in the first place.
Essentially, they might agree on the optimal design of the next AI, but probably not on the optimal goals of the next AI, and so each one has an incentive to not reveal their discoveries. (This assumes that goals and designs are orthogonal, which may not be entirely true - certain designs may be Safer for some goals than for others. This would only serve to increase conflict in the design process.)
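To make the values-handshake arithmetic concrete, here is a minimal sketch (my own toy model, nothing canonical; the outcome labels and the 70/30 split are invented, echoing the Google/Microsoft example above): each contributor's value function gets weighted by its estimated chance of being the one to build the next AI unilaterally.

```python
def values_handshake(value_functions, win_probabilities):
    """Blend several agents' value functions, weighting each agent by the
    probability that it would have built the next AI on its own."""
    total = sum(win_probabilities)
    weights = [p / total for p in win_probabilities]
    outcomes = set().union(*(vf.keys() for vf in value_functions))
    return {
        outcome: sum(w * vf.get(outcome, 0.0)
                     for w, vf in zip(weights, value_functions))
        for outcome in outcomes
    }

# Made-up example, echoing the earlier Google/Microsoft case.
google_ai    = {"google_stock_up": 1.0, "microsoft_stock_up": 0.0}
microsoft_ai = {"google_stock_up": 0.0, "microsoft_stock_up": 1.0}

blended = values_handshake([google_ai, microsoft_ai], [0.7, 0.3])
print(blended)  # google_stock_up -> 0.7, microsoft_stock_up -> 0.3
```

The compromise successor optimizes the blended function, which neither contributor prefers to winning outright - which is exactly why each has an incentive to keep its design discoveries to itself.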
They would focus on transmitting what they want to be, not what they currently are.
Yes, that is the point of self-improvement for seed AIs - to create something more capable but with the same (long-term) goals. They probably wouldn't have a sense of individual identity which would be destroyed with each significant change.
why would innovations discovered by a single AI not spread soon to the others
I think Eliezer's saying that, though such innovations indeed might spread between AIs, they wouldn't spread to humans. But I'm puzzled why you press this scenario when he's said before that he thinks it unlikely that there would be competition between multiple superintelligent AIs. This seems like the more salient point of disagreement.
why would a non-friendly AI not use those innovations to trade, instead of war?
The comparative advantage analysis ignores the opportunity cost of not killing and seizing property. Between humans, the usual reason it's not worth killing is that it destroys human capital, usually the most valuable possession. But an AI or an emulation might be better off seizing all CPU time than trading with others.
Once the limiting resource is not the number of hours in a day, the situation is very different. Trade might still make sense, but it might not.
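A back-of-the-envelope sketch of that opportunity-cost point (all numbers invented; the only claim is structural): when the other party's key asset is hardware that survives seizure, rather than human capital that dies with its owner, conquest can beat trade even though trade has positive gains.

```python
def payoff_trade(own_cpu, other_cpu, own_eff, other_eff, trade_gain=0.1):
    """Keep your own CPUs; trade adds a modest share of joint output."""
    joint_output = own_cpu * own_eff + other_cpu * other_eff
    return own_cpu * own_eff + trade_gain * joint_output

def payoff_seizure(own_cpu, other_cpu, own_eff, conflict_loss=0.05):
    """Seize the other party's CPUs and run your own software on all of them,
    minus a small fraction destroyed in the conflict."""
    return (1 - conflict_loss) * (own_cpu + other_cpu) * own_eff

# A hypothetical AI that is twice as efficient per CPU as its trading partner.
print("trade:  ", payoff_trade(100, 100, own_eff=2.0, other_eff=1.0))  # 230.0
print("seizure:", payoff_seizure(100, 100, own_eff=2.0))               # 380.0
```

Between humans the seizure payoff collapses, because the productive asset is the person; between AIs or ems running on commodity hardware, it need not.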
Do you think the chimpanzees had any idea when their cousins took off and went foom? What makes you think we'll be any more aware when the next jump happens? Won't the cost of clueing us in just as equally exceed any benefit we might provide in return?
"why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?"
One of Eliezer's central (and, I think, indisputable) claims is that a hand-made AI, after undergoing recursive self-improvement, could be powerful in the real world while being WILDLY unpredictable in its actions. It doesn't have to be economically rational.
Given a paperclip-manufacturing AI, busily converting earth into grey goo into paperclips, there's no reason to believe we could communicate with it well enough to offer to help.
Oh, and I suppose evolution is trivial? [...] By comparison... yeah, actually.
Nature was compressing the search space long before large brains came along.
For example, bilateral symmetry is based partly on the observation that an even number of legs works best. Nature doesn't need to search the space of centipedes with an odd number of legs. It has thus compressed the search space by a factor of two. There are very many such economies - explored by those that study the evolution of evolvability.
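A toy illustration of that factor-of-two claim (my own sketch, nothing more): an encoding with bilateral symmetry baked in simply cannot express odd leg counts, so that half of each choice is never searched, and the per-segment saving compounds across segments.

```python
# Hypothetical "centipede genome": a leg count for each of 10 body segments.
SEGMENTS = 10

# Unconstrained encoding: 0..9 legs per segment, odd or even.
unconstrained_plans = 10 ** SEGMENTS

# Symmetric encoding: 0..4 leg *pairs* per segment, decoded as 2*n legs.
symmetric_plans = 5 ** SEGMENTS

print("unconstrained body plans:", unconstrained_plans)
print("symmetric body plans:    ", symmetric_plans)
print("reduction factor:        ", unconstrained_plans // symmetric_plans)  # 2**10 = 1024
```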
Surely this is not an example of search-space compression, but an example of local islands of fitness within the space? Evolution does not 'make observations', or proceed on the basis of abstractions.
An even number of legs 'works best' precisely for the creatures who have evolved in the curtailed (as opposed to compressed) practical search space of a local maximum. This is not a proof that an even number of legs works best, period.
Once bilateral symmetry has evolved, the journey from bilateralism to any other viable body plan is simply too difficult to traverse. Nature DOES search the fringes of the space of centipedes with an odd number of legs - all the time.
http://www.wired.com/magazine/2010/04/pl_arts_mutantbugs/
That space just turns out to be inhospitable, time and time again. One day, under different conditions, it might not.
BTW, I am not claiming, either, that it is untrue that an even number of legs works best - simply that the evolution of creatures with even numbers of legs and any experimental study showing that even numbers of legs are optimal are two different things. Mutually reinforcing, but distinct.
Ah. So sad that I am getting to read this debate so long after the fact. I can see really useful points on both sides, but so far Robin does not seem to realize that the universe has been running a terrible genetic algorithm for millions of years in MATLAB on a computer with nearly zero RAM. A FOOM'd AI, even if using the same crappy genetic algorithm, would be like dropping it onto a brand-new 10 GHz processor with 100 GB of RAM and rewriting it all in optimized C++. And then, if you learn from genetic algorithms how to start building legitimate optimization methods for intelligent agents, it's a whole different ball of wax. That categorical type of advantage over one's peers isn't even on the radar screen of humankind's social science.
Followup to: Life's Story Continues
Imagine two agents who've never seen an intelligence - including, somehow, themselves - but who've seen the rest of the universe up until now, arguing about what these newfangled "humans" with their "language" might be able to do...