Moloch and multi-level selection

by thoughtfulmadison · 2 min read · 11th Aug 2020 · 3 comments


Game Theory · Moloch · World Modeling · World Optimization

I'm new to the LW community, and I've been reading the seminal works. After reading Scott Alexander's Meditations on Moloch and Zvi's Moloch Hasn't Won, I wanted to add a couple of thoughts - mostly additional rebuttals of "Meditations...".

To summarize Scott’s argument:

  • With a few exceptions, because of game-theoretic concerns, Malthusian scenarios are inevitable
  • We should be grateful to have a relatively happy existence, and fight to keep the good things in society
  • We should generally be pessimistic that “the good life” can last forever

I think that Zvi covers most of the key issues with "Meditations..." in "Moloch Hasn't Won". To summarize Zvi's rebuttals:

  • Empirically, we see that things actually are pretty good and the worst Malthusian scenarios have not come to pass. This isn’t because of dumb luck.
  • Perfect competition does not exist in the real world
  • Uncertainty and environmental change mean that there are no truly optimal decisions or static equilibria
  • Groups are still just collections of individuals - even soulless, profit-optimizing corporations are influenced by individual human concerns

I agree with Zvi. I also think he comes tantalizingly close to a couple of points that are worth calling out explicitly.


Competition is continually playing out at many scales. Individual humans are competing against individual humans. Groups of humans are competing against other groups. Nations compete. Within humans, cells compete. The upshot is that the optimal decision at one scale may be suboptimal at a different scale (Zvi hints at this when he talks about antifragility).

Consider two competing tribes, the Athenians and the Spartans. Individual Athenians are more prone to altruism and cooperation, while individual Spartans are more prone to selfishness and competition.

In Sparta, the more competitive individuals are selected for over the less competitive ones. One spring, Athens and Sparta go to war. In the lead-up to the pivotal battle, the Spartan generals cannot agree on a unified battle plan. As a result, the entire Spartan tribe dies.

In victorious Athens, the Athenians now keenly recognize the need for group solidarity and altruism. They coordinate to prevent a Malthusian race to the bottom. Over the next 1000 years, Athens does not experience selection for strong, competitive individuals. One fall, Macedonia invades. Athens is united against the Macedonian invasion, but the much more ferocious Macedonians quickly overwhelm the friendly Athenians.

In an alternate universe, the Spartans and Athenians are preparing for war. One day, a Trojan scouting ship is spotted in the distance. Now, all the Greeks are concerned about the nearby Trojans. The Athenian tribe and the Spartan tribe agree to an uneasy truce, but leaders of each tribe meet behind closed doors and discuss whether and when to break the truce. At the same time, Troy sends a messenger offering citizenship and 10,000 gold coins to any individual Greek who chooses to defect to Troy with military secrets.

There are different incentives to compete vs. cooperate depending on the scale you're looking at (whether temporal or organizational). What I'm getting at is that altruism/cooperation is not aberrant behavior - it is a natural adaptation in a world of multi-level selection, and it exists in balance with selfish/competitive behavior. In an environment of continual change, this balance also shifts over time.
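The tension between scales can be made concrete with a toy model (my own illustrative numbers, not taken from either post): within every group, defectors out-reproduce cooperators, yet the population-wide cooperator fraction can still rise, because cooperator-heavy groups grow faster than defector-heavy ones. This is a Simpson's-paradox-style sketch of multi-level selection, assuming a simple public-goods fitness function:

```python
# Toy model of multi-level selection. Each group holds cooperators and
# defectors. Cooperators pay a cost to produce a public good; everyone
# in the group benefits in proportion to the cooperator fraction.
def step_group(coop, defect, benefit=1.0, cost=0.1):
    """Advance one group by one generation (deterministic, real-valued)."""
    f = coop / (coop + defect)          # cooperator fraction in this group
    w_coop = 1.0 + benefit * f - cost   # cooperators pay the cost
    w_def = 1.0 + benefit * f           # defectors free-ride
    return coop * w_coop, defect * w_def

# One cooperator-rich group and one defector-rich group.
groups = [(9.0, 1.0), (1.0, 9.0)]

before = [c / (c + d) for c, d in groups]
after_groups = [step_group(c, d) for c, d in groups]
after = [c / (c + d) for c, d in after_groups]

global_before = sum(c for c, _ in groups) / sum(c + d for c, d in groups)
global_after = (sum(c for c, _ in after_groups)
                / sum(c + d for c, d in after_groups))

# Within each group, defection wins: the cooperator fraction falls.
# Across the whole population, cooperation wins: the global fraction rises.
print(before, after)                 # per-group fractions drop
print(global_before, global_after)   # global fraction climbs
```

The point matches the Athens/Sparta story: which strategy "wins" depends on whether you score the competition at the individual level or the group level.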


Our subjective evaluation of what is "good" is adapted to the systems we live in. We know that some individual humans enjoy the thrill of competition more than others. It's likely that if other animal species could weigh in, some would find humans shockingly selfish, while others would find us frustratingly selfless. Zvi's writing seemed to suggest that a corporation's objective must incorporate the objectives of its constituent individuals. That seems right. But humans also co-evolve with our technology - and this includes organizational structures. If highly hierarchical tribes of size 250 have a survival advantage over non-hierarchical tribes of size 50, we should expect selection for humans who are happy and effective in highly hierarchical tribes of size 250.

Even disregarding evolutionary effects, we know that the human software layer can adapt remarkably well to changing circumstances. There is evidence that the happiness of recent lottery winners and recent amputees alike quickly resets to the normal range.


All in all, I don’t think that an extreme level of concern around Malthusian scenarios is warranted. Humans have adapted to changing conditions remarkably well in the past. At a species level, outside of a short list of existential threats, I don’t think we should be too concerned with our ability to gracefully adapt to the future.

However, I do think that the complexity of optimal decision making in a world of deep uncertainty, changing conditions, and multi-level selection suggests possibilities for AI alignment. Or at least AGI confusion? More to follow :)


3 comments

Thanks for the link. Wish I'd read it earlier! That's a much better exposition of what I was trying to express here. :)

I do think that there's complication beyond even the two-layer model presented in "Studies on Slack". For example, maybe my company gives a lot of slack and looks at my value-add on a 5-year timeframe. At the same time, I have little personal slack around my annual bonus because I need to pay off loans. Perhaps the culture I live in has some different level of slack in its expectations for work. Although the two-layer model is a useful simplification, I'm not sure that the actual interactions are so neatly hierarchical.

The upshot is that the optimal decision at one scale may be suboptimal at a different scale

This is a nice summary of what Moloch is - pressure to optimize on dimensions that the agent(s) in question would prefer to weight differently, leading to dis-optimization of things the agent(s) think are important.