TL;DR:
- Evolution is an untargeted stochastic process; it has no goals such as replication or survival.
- It is not limited to living things; the same dynamics play out in all sorts of things, from languages to political systems.
- The way we build AI does not mimic evolution, and that difference is worrying.
Once in a while, I come across a text that goes something like this:
“Although evolution optimized them for the sole goal of replication, many spend
much of their time not producing offspring but on activities such as sleeping, pursuing food, building homes, asserting dominance and fighting or helping others — sometimes even to an extent that reduces replication.”
– excerpt from an anonymous NYT-bestselling book about the future of AI
And once in a while, I find myself cringing a bit inside when I read this. Most of us (at least those not living in conservative or religious communities) learned about evolution in school, but many of us, even highly educated and intelligent people, get one detail about evolution wrong:
Evolution does not have a goal.
Evolution does not aim to improve replication, to increase fitness, or even to keep its subjects alive.
But wait, isn't this what evolution is all about?
Not quite. There is a subtle distinction: at its core, evolution simply describes a stochastic process with a few basic ingredients:
- Stuff that is better at replicating will replicate more and will thus be around more.
- Stuff that is better at not breaking — or avoiding being broken — will break later and will thus be around more.
- Stuff that is easier to make will be made more and will thus be around more.
Evolution doesn’t care. It’s just statistics — pure mathematics.
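To make the "pure statistics" point concrete, here is a minimal toy simulation in Python. It is my own sketch with made-up probabilities, not a realistic evolutionary model: three kinds of stuff differ only in how often each copy replicates and how often it breaks, and nothing in the code encodes a goal.

```python
import random

# Made-up per-step (replication probability, breakage probability) for three
# kinds of "stuff". "Easier to make" would simply mean a larger starting count.
variants = {
    "A": (0.05, 0.05),  # replicates exactly as often as it breaks
    "B": (0.06, 0.05),  # replicates slightly more often
    "C": (0.06, 0.04),  # replicates more often AND breaks less often
}
counts = {name: 50 for name in variants}  # identical starting populations

for _ in range(300):
    for name, (p_rep, p_break) in variants.items():
        n = counts[name]
        # Each existing copy independently replicates and/or breaks this step.
        born = sum(random.random() < p_rep for _ in range(n))
        gone = sum(random.random() < p_break for _ in range(n))
        counts[name] = max(n + born - gone, 0)

print(counts)
# Typical result: A stays small, B grows noticeably, and C ends up dominating,
# even though nothing in this code ever "aims" at growth or survival.
```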
Let’s say we have three starting populations with nearly identical growth rates, observed over a timespan of n generations:
a·1^n vs. b·1.001^n vs. c·1.0011^n
No matter how large the population of a is, it won't grow, since its growth rate is 1; it just stays the same. If we wait long enough, b and c will necessarily outgrow a. And even though the growth rates of b and c differ by only about 0.01%, over enough time c will absolutely dominate b. This is the nature of exponential growth: even the tiniest edge compounds, and whatever has it will be around more. (Note that this is a drastic simplification of actual evolutionary processes.) There is no GOAL of growth here; the fastest-growing population ends up dominating simply because that is what growing stuff does.
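Here is the same arithmetic spelled out. The starting sizes below are made up and deliberately lopsided in favor of the non-growing population; only the growth rates come from the example above.

```python
# Compounding the three growth rates a·1^n, b·1.001^n, c·1.0011^n.
populations = {"a": 1_000_000, "b": 100, "c": 100}
rates = {"a": 1.0, "b": 1.001, "c": 1.0011}

for n in (0, 10_000, 50_000, 100_000):
    sizes = {k: populations[k] * rates[k] ** n for k in populations}
    print(n, {k: f"{v:.3g}" for k, v in sizes.items()})

# Approximate output:
#   n = 0:       a = 1e6,  b = 100,     c = 100
#   n = 10,000:  a = 1e6,  b ≈ 2.2e6,   c ≈ 6.0e6
#   n = 50,000:  a = 1e6,  b ≈ 5.1e23,  c ≈ 7.5e25
#   n = 100,000: a = 1e6,  b ≈ 2.6e45,  c ≈ 5.6e49
# Despite a 10,000x head start, a is overtaken after roughly 9,000 generations,
# and the 0.01% edge eventually lets c dwarf b by many orders of magnitude.
```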
When we look at the origins of life, everything began with tiny molecules in the early oceans of Earth. There is no scientific consensus on what these molecules looked like, but it is agreed they must have had some kind of self-reinforcing pattern — one that caused other molecules or atoms to form into the same shape. In this way, they multiplied (exponentially), and by chance, some variants of these molecules emerged that were slightly better at sticking around and thus became more common. There was never a GOAL to replicate or survive. It was simply that those are the processes that make more of stuff, and stuff that is better at it will be more common.
Sidenote: An interesting example of self-reinforcing molecules today are prions. Prions are misfolded proteins that, upon contact, cause other proteins to misfold in exactly the same way, thereby creating new prions. This can sometimes lead to catastrophic tissue damage. Mad cow disease is caused by prions, and they are also, by the way, the reason why you shouldn't eat human brains, just in case you were wondering.
So now, knowing that evolution does not have a goal, you might ask: “Okay, so it’s all statistics, not an aimed process. But why does that matter? In the end, survival of the fittest still wins.”
Here are two reasons why this distinction matters:
1. Evolution extends beyond biology
Once you grasp that evolution is not goal-directed, it becomes easier to see that it is not limited to living things. The same principles apply to languages, social norms, and even political systems. Political systems that are better at prevailing over others and at staying stable tend to be around more. In languages, simpler or more practical expressions stick around more: "do not" is clunkier and therefore arguably has a lower fitness than "don't".
This also helps explain why human values are so messy. They didn't evolve for replication, because there was never such a goal. Some values may have aided survival in the past, but many may simply not have interfered with it enough to disappear.
Additionally, the values we hold today may have undergone a kind of cultural evolution themselves, leaving us with those values that happened to persist best. Not necessarily the values that achieved the most goodness, but those that spread the most effectively — perhaps because they were easy to understand, emotionally resonant, or incentivized their own propagation.
Some values, like cooperation and community, have proven extremely successful at spreading — suggesting they indeed amplify collective success. But others may have stuck around for no reason other than contingency, simplicity, or historical accidents that erased alternatives.
2. Evolution and AI are fundamentally different
That evolution is untargeted also marks a key difference between it and the way we build AIs.
AIs do have goals. Their utility functions are explicitly defined. While reinforcement learning can introduce some form of selection pressure, it is typically only one small part of training. The core process, optimization via gradient descent on an explicitly specified objective, makes AIs fundamentally different from everything that has evolved on this planet so far.
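For contrast, this is roughly what that core process looks like when stripped to the bare minimum. It is a generic sketch of gradient descent on a toy objective, not the training loop of any particular system.

```python
# Minimal gradient descent sketch: fit y = w*x to a handful of data points by
# explicitly minimizing a written-down loss. The "goal" is right there in the
# code, which is exactly what evolution never has.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (x, y) pairs, y ≈ 2x

w = 0.0    # the single parameter being optimized
lr = 0.01  # learning rate

for step in range(500):
    # Explicit objective: mean squared error between predictions and targets.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # move w in the direction that reduces the loss

print(w)  # converges to roughly 2.0, the value that minimizes the stated objective
```

Nothing here resembles the blind filtering described above: the objective is stated up front, and every update moves the parameters directly toward it.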
We can’t predict which values these systems will develop, or whether they will develop any at all. Many outside the AI research community dismiss alignment concerns. I think part of this is based on experience with living beings: we have the notion that intelligent entities tend to behave “reasonably well” (blissfully ignoring that some individual humans are in fact awfully misaligned with humanity). Human "alignment" and value formation rely on the long, messy filtering process of natural evolution, a filter AIs don't need to pass.
Humans, for instance, only appear “aligned” after millions of years in which cooperation, empathy, and mutual dependence outcompeted purely selfish behavior. AIs, by contrast, have no such evolutionary history. Their only pressure is to optimize their given goal.
Assuming they will behave in aligned or cooperative ways for the same reasons we do is, at best, highly optimistic.