Richard Feynman once said:

If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis (or the atomic fact, or whatever you wish to call it) that all things are made of atoms—little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another.

Feynman was a smart guy, and this is a remarkable fact. The atomic hypothesis lets us understand that complex systems, like humans, are just an ordered composition of simple systems. However, it does not explain how such order can emerge from chaos. In other words, how can simple systems interact in a way that leads to the complexity we see in the world around us? Yes, we know that atoms interact following certain laws, but how does this interaction lead to the emergence of complexity?

The marvelous, and far from evident, idea that ordered complexity can emerge from the random interaction of simple systems (like atoms) is probably the most important one humanity has discovered. It is a more general concept than the atomic hypothesis, since it is independent of our model of the world (in the end, the atomic hypothesis is just a model we use to make predictions about the physical world: useful, but anthropogenic). And it gives us more information, since it tells us not only that complex systems can be made of simpler parts, but that no prior complex system is needed to arrange them in the first place!

Humanity took a long journey before this fact was accepted. The first people to discover a mechanism that transforms simplicity into complexity were Charles Darwin and Alfred R. Wallace, with the mechanism of natural selection in evolutionary biology.

In 1859, "On the Origin of Species by Means of Natural Selection" was published, and I consider it one of the most important events (if not the most important) in our civilization's history. It was a shock: for the first time, a mechanism was presented that could explain the world we see without the need for God. It showed that structures as complex as the human eye or the brain can develop through a process of random mutation and natural selection.

It took a while for people to accept it; it wasn't until the 1920s that the scientific community started to embrace natural selection, and to this day there are still opposing movements among religious groups. It's understandable: it took from humanity the need for God to explain the world we see.

Natural selection is the first example we have of a process that can build complex structures from simple parts, but it's not the only one. For example, in the 1970s Stuart Kauffman was studying how genes are expressed in living organisms and how that expression is regulated. He noticed that the regulatory circuits in living organisms are incredibly complex. Trying to understand how such circuits could have been generated, he found that they could have emerged through a process of random interactions between the genes, which he modeled as random Boolean networks (a toy version of this idea is sketched below).
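As a rough illustration of what Kauffman found (this is my own toy sketch with arbitrary parameters, not his original model or code): wire up a handful of binary "genes" completely at random and the dynamics still settle into short, repeating attractor cycles rather than wandering chaotically.

```python
# Minimal Kauffman-style random Boolean network (illustrative sketch only).
# N "genes" each read K randomly chosen genes and apply a random Boolean rule;
# despite the random wiring, the dynamics quickly fall into a repeating cycle.
import random
from itertools import product

random.seed(0)
N, K = 20, 2                                   # number of genes, inputs per gene (arbitrary)

inputs = [random.sample(range(N), K) for _ in range(N)]                  # random wiring
rules = [{bits: random.randint(0, 1) for bits in product((0, 1), repeat=K)}
         for _ in range(N)]                                              # random rules

def step(state):
    """Update every gene from its K inputs according to its own random rule."""
    return tuple(rules[i][tuple(state[j] for j in inputs[i])] for i in range(N))

state = tuple(random.randint(0, 1) for _ in range(N))  # random initial condition
seen = {}
for t in range(5000):
    if state in seen:                                  # state repeats: we are on an attractor
        print(f"cycle of length {t - seen[state]} entered after {seen[state]} steps")
        break
    seen[state] = t
    state = step(state)
else:
    print("no cycle found within 5000 steps")
```

Kauffman read such attractors as analogous to stable cell types: order that nobody designed.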

Kauffman's work was extended by other researchers, and we now know of many processes that can produce complex structures from simple parts. These are known as self-organizing processes, and they appear in many different fields, from biology to computer science.

Over the last few decades, several mechanisms have been developed to create arbitrarily complex functions from simple elements, e.g. neural networks for inference (a toy example is sketched below). It is now evident to the scientific community that complexity and intelligence arise from sophisticated combinations of simple parts. Civilizations are just the product of interacting individuals that form a complex network with emergent properties. One could even argue that intelligence is just a measure of our ignorance of the composition and organization of a decision-making system, and that complexity is a way of measuring the amount of information needed to describe such a composition.
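As a toy example of complex functions built from simple elements (my own sketch, not anything from the post): a small network of identical tanh units, trained by plain gradient descent, learns to approximate a sine wave that no single unit could represent on its own.

```python
# Illustrative sketch: simple parts (weight, bias, tanh) composed into a
# function approximator, trained by hand-written gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)                                   # target: a nonlinear function

H = 16                                          # hidden units (arbitrary choice)
W1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)

lr = 0.01
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)                    # simple units...
    pred = h @ W2 + b2                          # ...composed into one function
    err = pred - y
    # backpropagation by hand
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = err @ W2.T * (1 - h**2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("training mean squared error:", float((err**2).mean()))
```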

Now humanity is starting to harness these mechanisms, exploiting them to mass-produce targeted complexity (or intelligence) and solve tasks that were impossible to solve before.

Rich Sutton published an article in 2019, "The Bitter Lesson", which contains the following paragraph:

[...] They are not what should be built in [our minds], as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. [...]

This is the key concept that will unleash the upcoming industrial and economic revolution. The complexity that can be created by machines is bounded only by the resources available to them.

As I see it, in the coming years the creation of complexity will be the primary source of value in the economy. The ability to harness self-organizing processes to create targeted complexity will enable us to solve problems that were thought unsolvable. It won't be long until such attempts to harness complexity surpass what evolution has achieved over millions of years, since the genetic algorithm that evolution runs is slow (a toy version of that algorithm is sketched below).
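For concreteness, here is a toy version of the kind of mechanism I mean: selection plus random mutation in a few lines of Python (my own illustration, with an arbitrary target and arbitrary parameters).

```python
# Toy genetic algorithm (illustrative sketch only): random mutation plus
# selection climbs toward a target pattern far faster than blind guessing,
# even though no step in the process "knows" the target's structure.
import random

random.seed(0)
TARGET = [1] * 64                       # arbitrary pattern the population must reach
POP, MUT = 100, 0.01                    # population size, per-bit mutation rate

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"target reached at generation {generation}")
        break
    survivors = population[: POP // 5]          # selection: keep the fittest 20%
    population = [                              # reproduction with random mutation
        [bit ^ (random.random() < MUT) for bit in random.choice(survivors)]
        for _ in range(POP)
    ]
else:
    print("best fitness after 200 generations:", max(map(fitness, population)))
```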

2 comments:

I think there's a sense in which you're not wrong, but I think it'd be useful to taboo the word "complexity" and be more clear about what you mean, and what more specific predictions you can make.

See: Say Not "Complexity", a LW classic on how the word 'complexity' can obscure more than illuminate.

It could be that I overuse the word complexity in the text, but I think it is essential to convey the message. And honestly, I find the terms "intelligence" and "understanding" more obscuring than the term complexity. Let me try to explain my point in more detail:

  • I understand intelligence as a relative measure of our ignorance of the dynamics of a decision-making system. Let's use four types of chess players to illustrate the idea:
    • The first player is a basic "hard-coded" chess engine, for example one based on MTD-bi search (like [sunfish](https://github.com/thomasahle/sunfish), a Python chess engine in 111 lines of code). Very few people would consider this player/algorithm intelligent, since we understand how it makes decisions. Even if I don't know the details of a particular decision, I know I can follow the logical process of the algorithm to find it, even if that logical process is computationally very long (a bare-bones search of this kind is sketched after this list).
    • The second player is a more sophisticated chess engine based on a large deep neural network trained via self-play (for example AlphaZero). In this case, some people might consider this player/algorithm somewhat intelligent. The argument for attributing intelligence to it is nothing other than the fact that the rules this algorithm uses to make decisions are obscured by the complexity of the neural network. We can understand how the MTD-bi search algorithm works, but it's impossible for a human brain to understand the distribution of weights of a neural network with billions of parameters. The complexity of the algorithm increased, and with it our ignorance of how it makes its decisions. Still, in this case we are able to compute its decisions numerically.
    • The third player is a chess grandmaster (for example, Magnus Carlsen). In this case, most people will consider the player intelligent because we have very limited information about how Magnus makes decisions. We know some things about how the brain works, but we don't know the details, and we certainly can't compute his decisions since we don't even have access to Magnus Carlsen's brain state.

However, for a Laplace's Demon with complete information about the world, none of these players would be considered intelligent, since their decisions are just consequences of the natural evolution of the dynamics of the universe (the fact that some of these dynamics could be stochastic/random is irrelevant here). For a Laplace's Demon nothing is intelligent, since for it there is zero relative ignorance of the dynamics of any decision-making system.

  • The fourth player is a random player that just takes a random valid action on each turn. However, there are two versions of this player:
    • One is a pseudorandom number generator that selects the actions.
    • The other is a human taking random actions.

Is the player intelligent in either case? Why?
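To make the first bullet concrete, here is a bare-bones game-tree search in the spirit of a "hard-coded" engine. It is my own minimal sketch, not sunfish's actual MTD-bi code, and it plays simple Nim rather than chess so the whole thing fits in a few lines; the point is that every decision can be traced by hand through the recursion.

```python
# Illustrative sketch: a tiny exhaustive game-tree search whose every decision
# is the result of logic you can follow step by step. Game: Nim with one pile,
# take 1-3 stones per turn, whoever takes the last stone wins.
def negamax(stones):
    """Return (best score, best move) for the player to move; +1 = win, -1 = loss."""
    if stones == 0:
        return -1, None                          # opponent took the last stone: we lost
    best_score, best_move = -2, None
    for take in (1, 2, 3):
        if take <= stones:
            score = -negamax(stones - take)[0]   # opponent's best reply, negated
            if score > best_score:
                best_score, best_move = score, take
    return best_score, best_move

for stones in range(1, 11):
    score, move = negamax(stones)
    print(f"{stones:2d} stones: take {move} ({'winning' if score > 0 else 'losing'} position)")
```

Unlike the neural-network player, there is no step here whose logic is hidden from us; our ignorance of its decisions is only a matter of patience.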

To summarize:

  • Complexity: the amount of information needed to describe a system.
  • Intelligence: a measure of the relative ignorance of the dynamics of a decision-making system.

It seems obvious to me that complexity is necessary for intelligence but not sufficient, since we can have complex systems that are not effective at making decisions. For example, a star might be complex, but it is not intelligent. This is where I introduce the term "targeted complexity", which might not be the best choice of words, although I can't find a better one. Targeted complexity means the use of flexible/adaptive systems to create tools that can solve difficult tasks (or, to put it another way, that can make intelligent decisions).
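One standard way to make "the amount of information needed to describe a system" precise, which the comment above does not name explicitly but which matches its definition, is algorithmic (Kolmogorov) complexity:

```latex
% Kolmogorov (algorithmic) complexity: the length of the shortest program p
% that makes a universal machine U output a full description x of the system.
K_U(x) = \min \{\, \ell(p) : U(p) = x \,\}
```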