Inefficiencies are necessary for resilience:

Results suggest that when agents are dealing with a complex problem, the more efficient the network at disseminating information, the better the short-run but the lower the long-run performance of the system. The dynamic underlying this result is that an inefficient network maintains diversity in the system and is thus better for exploration than an efficient network, supporting a more thorough search for solutions in the long run.
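
This result is easy to illustrate with a toy agent-based simulation. The sketch below is my own illustration, not the cited study's model: the NK-style landscape, the two network shapes, and all parameter values are assumptions chosen for demonstration. Agents either copy their best-performing neighbour or mutate their own solution; the fully connected ("efficient") network tends to do better early, while the sparse ring ("inefficient") network keeps more diversity and tends to end up higher.

```python
import random

N_AGENTS, N_BITS, K, ROUNDS, SEED = 20, 12, 5, 100, 1

def make_landscape(n, k, seed):
    """Rugged NK-style fitness: each bit's contribution depends on k other bits."""
    rng = random.Random(seed)
    deps = [[i] + rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    def fitness(bits):
        total = 0.0
        for i in range(n):
            key = tuple(bits[j] for j in deps[i])
            total += random.Random(f"{seed}/{i}/{key}").random()  # deterministic lookup
        return total / n
    return fitness

def run(neighbours_of, fitness):
    rng = random.Random(SEED)
    pop = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(N_AGENTS)]
    avg = []
    for _ in range(ROUNDS):
        nxt = []
        for i, sol in enumerate(pop):
            best = max(neighbours_of(i), key=lambda j: fitness(pop[j]))
            if fitness(pop[best]) > fitness(sol):
                nxt.append(pop[best][:])              # imitate: fast dissemination
            else:
                trial = sol[:]
                trial[rng.randrange(N_BITS)] ^= 1     # explore: local mutation
                nxt.append(trial if fitness(trial) >= fitness(sol) else sol)
        pop = nxt
        avg.append(sum(map(fitness, pop)) / N_AGENTS)
    return avg

fitness = make_landscape(N_BITS, K, SEED)
full_net = run(lambda i: [j for j in range(N_AGENTS) if j != i], fitness)    # "efficient"
ring_net = run(lambda i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS], fitness)  # "inefficient"

print("round  10 -> fully connected: %.3f   ring: %.3f" % (full_net[9], ring_net[9]))
print("round 100 -> fully connected: %.3f   ring: %.3f" % (full_net[-1], ring_net[-1]))
```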

Introducing a degree of inefficiency so that the system as a whole has the potential to evolve:

Efficiency is about maximising productivity while minimising expense. It's something organisations have to do as part of routine management, but something they can only safely pursue in stable environments. Leadership is not about stability; it is about managing uncertainty through changing contexts.

That means introducing a degree of inefficiency so that the system as a whole has the potential to evolve. Good leaders generally provide top cover for mavericks, listen to contrary opinions and maintain a degree of resilience in the system as a whole.

Systems that eliminate failure, eliminate innovation:

Innovation happens when people use things in unexpected ways, or come up against intractable problems. We learn from tolerated failure; without it, the world is sterile and dies. Systems that eliminate failure eliminate innovation.

Natural systems are highly effective but inefficient due to their massive redundancy:

Natural systems are highly effective but inefficient due to their massive redundancy (picture a tree dropping thousands of seeds). By contrast, manufactured systems must be efficient (to be competitive) and usually have almost no redundancy, so they are extremely vulnerable to breakage. For example, many of our modern industrial systems will collapse without a constant and unlimited supply of inexpensive oil.
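
A back-of-the-envelope way to see why that redundancy buys resilience (the numbers below are invented purely for illustration): in a long chain of components with no spares, a single weak link brings the whole system down, whereas a few redundant copies per stage make total failure exponentially unlikely.

```python
# Toy reliability arithmetic with made-up numbers: a serial system fails if any
# stage fails; a stage with k redundant copies fails only if all k copies fail.
def survival(p_component, n_stages, k_redundant):
    """P(system survives) when each of n stages has k copies, each surviving w.p. p."""
    p_stage = 1 - (1 - p_component) ** k_redundant   # at least one copy survives
    return p_stage ** n_stages

p, n = 0.99, 100
for k in (1, 2, 3):
    print(f"{n} stages, {k} copies each: P(survive) = {survival(p, n, k):.4f}")
# k=1: ~0.37 (efficient but brittle); k=2: ~0.99; k=3: ~0.9999 (redundant but resilient)
```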

I just came across those links here.

Might our "irrationality" and the patchwork-architecture of the human brain constitute an actual feature? Might intelligence depend upon the noise of the human brain?

A lot of progress is due to luck, in the form of the discovery of unknown unknowns. The noisiness and patchwork architecture of the human brain might play a significant role because it allows us to become distracted, to leave the path of evidence-based exploration. A lot of discoveries were made by people pursuing “Rare Disease for Cute Kitten” activities.

How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem solving?

My point is, what evidence do we have that the payoff of intelligent, goal-oriented experimentation yields enormous advantages (enough to enable explosive recursive self-improvement) over evolutionary discovery relative to its cost? What evidence do we have that any increase in intelligence vastly outweighs its computational cost and the expenditure of time needed to discover it?

There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs:

  • Intelligence is goal-oriented.
  • Intelligence can think ahead.
  • Intelligence can jump fitness gaps.
  • Intelligence can engage in direct experimentation.
  • Intelligence can observe and incorporate solutions of other optimizing agents.

But when it comes to unknown unknowns, what difference is there between intelligence and evolution? The critical similarity is that both rely on dumb luck when it comes to genuine novelty. And what, if not the dramatic improvement of intelligence itself, demands the discovery of novel unknown unknowns?

A basic argument supporting the risks from superhuman intelligence is that we don't know what it could possibly come up with. That is why we call it a 'Singularity'. But why does nobody ask how it knows what it could possibly come up with?

It is argued that the mind-design space must be large if evolution could stumble upon general intelligence. I am not sure how valid that argument is, but even if that is the case, shouldn't the mind-design space shrink dramatically with every iteration, and therefore demand a lot more time to stumble upon new solutions?

An unquestioned assumption seems to be that intelligence is a kind of black box, a cornucopia that can sprout an abundance of novelty. But this implicitly assumes that if you increase intelligence you also decrease the distance between discoveries. Intelligence is no solution in itself; it is merely an effective searchlight for unknown unknowns. But who says that the brightness of the light increases in proportion to the distance between unknown unknowns? To have an intelligence explosion the light would have to reach out much farther with each generation than the increase of the distance between unknown unknowns. I just don't see that to be a reasonable assumption.

It seems that if you increase intelligence you also increase the computational cost of its further improvement and the distance to the discovery of some unknown unknown that could enable another quantum leap. It seems that you need to apply a lot more energy to get a bit more complexity.

7 comments:

This can be rephrased as a question about optimization algorithms. "Given an unknown fitness landscape, is there an algorithm that beats a sort of hill-climbing algorithm?" Yes and more yes. Not being some sort of computer-science person, I don't know quite how much faster, but a quick Google search demonstrated that it is lots.
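
As a concrete (and deliberately toy) version of that: on a rugged, many-peaked function, a single hill-climb gets stuck on the nearest local peak, while even something as cheap as restarting the climb from a handful of random points usually lands on a much better one. The function and parameters below are invented for illustration only.

```python
import math, random

def f(x):
    # An arbitrary rugged landscape on [0, 10] with many local peaks.
    return math.sin(3 * x) + 0.5 * math.sin(7 * x) + 0.1 * x

def hill_climb(x, rng, steps=200, step=0.05):
    # Plain local search: accept a small random move only if it goes uphill.
    for _ in range(steps):
        cand = min(10.0, max(0.0, x + rng.uniform(-step, step)))
        if f(cand) > f(x):
            x = cand
    return x

rng = random.Random(0)
single = hill_climb(rng.uniform(0, 10), rng)                                      # one climb
restarts = max((hill_climb(rng.uniform(0, 10), rng) for _ in range(20)), key=f)   # 20 climbs

print(f"single climb: x = {single:.2f}, f = {f(single):.3f}")
print(f"20 restarts : x = {restarts:.2f}, f = {f(restarts):.3f}")
```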

As for intelligence depending on the noise of the human brain, I'd agree that our intelligence depends on the noise of the human brain. But that in no way implies that human brain-noise is the optimal kind of brain-noise, and it doesn't mean all human discoveries were "dumb luck."

This is similar to the question of whether the tendency to mutate is itself selected for.

To have an intelligence explosion the light would have to reach out much farther with each generation than the increase of the distance between unknown unknowns. I just don't see that to be a reasonable assumption.

OK, so I don't think anyone really knows exactly what is going to happen there. Some things we can see, though. There's a process of increasing optimisation sophistication. Evolution is getting better at search. The result is synergy.

...and - though we can't tell for sure - it looks as though we are probably close enough and going fast enough to zoom past human level in maybe a decade or so - provided there are no unforeseen setbacks.

To have an intelligence explosion the light would have to reach out much farther with each generation than the increase of the distance between unknown unknowns. I just don't see that to be a reasonable assumption.

The analysis I gave in Ultimate encephalization quotient cited the somewhat contradictory trends of larger brain sizes in evolution, and smaller brain-to-body size ratios in large animals (assuming that future dominant creatures will be big, which seems likely).

...but I think that we can see clearly from the huge data centres that are sprouting up, that nature favours very large brains - and is building them pretty quickly.

My point is, what evidence do we have that the payoff of intelligent, goal-oriented experimentation yields enormous advantages (enough to enable explosive recursive self-improvement) over evolutionary discovery relative to its cost?

We can easily see the performance advantage of memetic evolution over evolution based on random mutations - by observing what has happened so far on the planet since memetic evolution kicked off.

We can easily see the performance advantage of memetic evolution...

I do not doubt that intelligence is superior. But as I have already written in another thread, evolution was able to come up with altruism, something that works two levels above the individual and one level above society. So far we haven't been able to show such ingenuity by incorporating successes that are not evident from an individual or even societal position.

That example provides evidence that intelligence isn't many levels above the blind idiot God. Therefore the crucial question is, how great is the performance advantage? Is it large enough to justify the conclusion that the probability of an intelligence explosion is easily larger than 1%?

To answer this we would have to fathom the significance of the discovery ("random mutations") of unknown unknowns in the dramatic amplification of intelligence versus the invention (goal-oriented "research and development") of an improvement within known conceptual bounds.

Hans Moravec has attempted to quantify the performance advantage. Here he claims that machine evolution is proceeding about 10,000,000 times faster than organic evolution did. The machines do have us to crib from if they need to - but still.