The networkist approach

by Juan Zaragoza
7th Sep 2025

The lens and the lookout

You wake up in a dying forest. The birds are falling ill, the swamp is decaying, monkeys show signs of chronic stress, and insects are consuming what remains.

You’d like to do something about it, but it seems impossibly hard. 

Let’s say you specialize in sick birds. First, you detect a viral infection. Then, you notice that the virus thrives because the forest’s microbiome has changed, but you find this depends on complex interactions between microorganisms that you don’t yet understand.

You also notice that the birds’ immune systems are compromised by malnutrition, because a worm species they used to eat has disappeared. Then you find this happened due to a population explosion in one of the worm’s other predators, for reasons yet unknown.

Analyzing each problem separately leads you to tangled diagnoses, too complicated to unravel. As soon as you start pulling one thread, you find a very convoluted causal network. It’s like trying to fight the Hydra, the many-headed serpent: when you attack one node, more nodes appear.

So zooming in isn’t working. 

Now you’re frustrated and you decide to take a walk. You climb the mountainside and reach a lookout point. When you see the whole forest, everything clicks: you notice all the crises are actually downstream effects of deforestation. 

The forest is a living network. Each organism evolved to find stability within that ecosystem. Because every organism does the same, robust equilibria emerge. You might not know the specifics of how these equilibria work, but you can assume they exist, at least on some level. If they didn’t, you wouldn’t have a “forest” in the first place.

But big enough perturbations may interfere with these equilibria. Cutting down most of the trees is a big perturbation. You know that this eventually causes systemic failures, even if you don’t know exactly how.

At first glance, the downstream crises seemed independent. When zooming in, you could probably notice that they’re somewhat intertwined, but the diagram gets confusing very quickly. Analyzing the issues (i.e. starting from the crises and moving “up” the causal network) is not a suitable way to spot the deforestation, because it becomes intractable as soon as you climb a few levels.

You needed the “bigger picture” to see that stopping the deforestation and letting the forest recover may just be enough to solve the forest’s crises. Unless deforestation was catastrophic (i.e. a perturbation beyond a point of no return), organisms will recover their balance on their own, because the forest’s equilibria are stable attractors. You probably don’t understand the specifics of how the forest solves its crises once you stop deforestation. It simply does.
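
To make the “stable attractor with a point of no return” intuition concrete, here is a toy sketch in Python (my own illustration, not a model of any real forest): a single “forest health” variable recovers on its own after small perturbations, but collapses once pushed past a threshold.

```python
# Toy model of a stable attractor with a tipping point.
# dx/dt = x * (x - a) * (1 - x): stable equilibria at 0 (dead forest)
# and 1 (healthy forest), with an unstable threshold at a.
def step(x, a=0.3, dt=0.01):
    return x + dt * x * (x - a) * (1 - x)

def simulate(x0, steps=5000):
    x = x0
    for _ in range(steps):
        x = step(x)
    return round(x, 3)

print(simulate(0.9))   # mild perturbation: recovers to ~1.0 on its own
print(simulate(0.35))  # heavy, but above the threshold: still recovers
print(simulate(0.25))  # past the point of no return: collapses to ~0.0
```

The specific equation doesn’t matter; the point is that within the basin of attraction the system does the repair work by itself, and you don’t need to zoom in on the mechanism to predict that.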

The core idea here is that it's easier to tend living networks than to debug them. Analyzing or “zooming in” is good for debugging, but tending may require a “big picture” approach.

Growth and design

The broader principle behind that example is what I call the networkist approach.

Trying to trace through the causal network of the forest was computationally infeasible. “Helping out” in one of the downstream causes (by, say, recovering the worm population to restore the birds’ diet) would require you to understand a big chunk of that network.

However, the forest already does those computations natively. Each species is constantly sensing and reacting to its environment. Once the network is in place, you can trust that it “knows” how to solve minor problems, at least when some broader constraints are met (such as not cutting most of the trees).

So instead of trying to understand everything and design explicit solutions, in these cases it’s easier to find the big obstacles that prevent the network from “naturally working fine”. If you remove those obstacles, you can let the living network itself do the computations needed to solve the problems. There’s an element of trust in this approach. It takes a leap of faith because you still don’t understand the specific mechanisms, but it often just works.

Another toy example of the networkist approach is the work of a gardener. Zooming in on a plant’s metabolic problem would probably reveal an intractable network of enzymes and cells not doing what they’re supposed to. Yet the underlying cause may simply be that the plant is getting dry or lacks sunlight. Addressing these upstream obstacles is frequently enough for the plant to grow a solution. The gardener doesn’t need to debug the metabolic pathways; he just knows some big-picture patterns about the plant and trusts it to handle the implementation details.

A more familiar example of this approach is neural networks. As Connor Leahy put it in an interview, “AI is grown, not designed”. The phrase captures the success of neural networks over explicit approaches like symbolic AI or classical machine learning. Instead of designing an explicit solution to a problem, you set up the network structure and key constraints; the computational details are then worked out by the network, through backpropagation and activation functions, not by the programmer. This allowed humans to solve problems that were very hard to solve explicitly, like face recognition or natural language processing.
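
To make “grown, not designed” concrete, here is a minimal sketch (assuming PyTorch; the task and numbers are only illustrative). The programmer writes nothing but the architecture, the objective and the training loop; the weights that actually implement the solution are found by backpropagation.

```python
import torch
import torch.nn as nn

# The programmer specifies only structure and objective...
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# ...here, learning XOR, a function written nowhere in the code.
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

for _ in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # backpropagation "grows" the solution in the weights
    optimizer.step()

print(model(x).round())  # typically recovers XOR without any if-else rules
```

The same division of labor is what the networkist approach asks for at larger scales: set the structure and the constraints, and let the network compute the details.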

The objective here is to generalize that approach. Can we learn to “grow” solutions, instead of designing them, for other “living” networks? In particular, can we use this approach to understand and address some of society’s crises from a promising angle?

In the best-case scenario, we find a key upstream obstacle, like deforestation in the case of the forest, or lack of water in the case of the plant, preventing the network from working “naturally fine”. Then, addressing that key issue would allow the network to grow its solutions.

Society’s breaking point

Humanity is currently facing several crises.

We’re probably facing environmental damage without historical precedent. We’re also living through the worst epidemics of anxiety, stress and depression in documented history. Economic inequality is growing relentlessly. One could make the case that individualism and intergroup intolerance are growing too, and that this is a bad thing.

On top of that, there’s a serious potential for current developments in AI to have bad consequences. Even if rogue AIs never become a thing, the explosion of AI slop on the web, the replacement of a large portion of the human workforce, and the attainment of unbounded power by the small groups who control AI are risks that don’t even require the arrival of a superintelligence.

This is not to say that everything is crumbling. For example, documented poverty has been diminishing for at least two centuries, and human productivity has been growing for even longer. I will come back to these observations later in this sequence, because I believe there are important caveats, but the main point I want to make here is that there are real risks and problems that we may want to solve, and they seem diverse and hard.

Let’s say you want to address the housing crisis. You detect a supply shortage, which depends on building being too expensive and risky, due to complex interactions between interest rates, construction costs and institutional investors. Also, NIMBY practices oppose new developments and affect public policy. If you keep pulling on a thread, you could end up thinking about gas prices and wars in the Middle East. The same would probably happen for any of the crises mentioned above.

Since society is a living network with many open problems, I’d love to try the networkist approach as an alternative to explicit solutions. I know it wouldn’t be as trivial as stopping deforestation or watering the plant. It may require some new developments, just as neural networks needed backpropagation and scaling to triumph.

Crucially, it’s not trivial to see the “big picture” and identify the core obstacle preventing society from solving these issues by itself. It’s easy for the gardener not to think about enzymes, because enzymes are small and the gardener is roughly the size of the plant. But society is very big. Our intuitive approach would be to analyze issues in terms of institutions, public policies, lobbying, etc., but from a systemic perspective this is analogous to thinking about the metabolic pathways of the plant and trying to design an explicit solution.

It seems like a good moment to recognize that some proponents of the free market think of their position as (something I would categorize as) a networkist approach. For them, the “obstacle” that prevents the market from solving its own issues is government intervention, and lifting interventions would be enough for most of these problems to go away.

As an example of these seemingly networkist ideas, “I, Pencil” by Leonard Read describes how nobody needs to design the whole production process of a pencil. In Read’s understanding, this is a good thing, because doing so would be prohibitively hard. Instead, the market as a whole takes care of the implementation details.

I said “seemingly” because I don’t believe this is the correct networkist approach. The reason why will become clearer in future entries, but for now I can say two things. 

First, if the gardener wrongly concluded that the plant needs chlorine instead of water, that wouldn’t be the right networkist approach. In the case of society, there are reasons to believe that free markets create Moloch dynamics and inadequate equilibria, mainly because of externalities.

Second, even if dissolving the government were enough to solve society’s problems, an adequate networkist solution would require the needed intervention to be achievable. The gardener can water the plants, but can’t control the rain. A good solution would require understanding precisely what interventions are needed to remove the obstacle, since merely wishing to dissolve the government is not enough to take the world from the current state of affairs to that remote ideal.

The rest of this sequence presents what I believe to be the right networkist approach to solve most of these crises. More precisely, it presents what I consider a step in the right direction.

In fact, it is closely related to Moloch dynamics. It focuses on the fundamentals of why and when Moloch appears, and how we can help society grow its own solutions to these issues.

To do this, I will present a “big picture” view of some aspects of society. It strongly builds upon Axelrod’s work on cooperation, presented in both The Evolution of Cooperation and The Complexity of Cooperation.

The core insight is that we currently rely on four self-sufficient mechanisms for cooperation, and none of them adequately solves incentive alignment on large scales. I will present this idea more thoroughly in the next entry, but what I mean is basically:

  • Tit-for-tat only works on small scales, and doesn't solve externalities (see the sketch after this list).
  • The market scales tit-for-tat, but still doesn't solve externalities.
  • Social norms solve externalities without concentrating power, but only work on small scales.
  • Hierarchies solve externalities and work on large scales, but they may lead to critical levels of power concentration.
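
As a concrete illustration of the first bullet, here is a minimal sketch in Python (the payoff values are the standard illustrative ones, not anything specific from Axelrod’s books). Tit-for-tat sustains cooperation between two agents who interact repeatedly, but nothing in the payoff matrix represents costs imposed on third parties, which is why the mechanism has nothing to say about externalities.

```python
# Iterated prisoner's dilemma with the standard payoff ordering T > R > P > S.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): repeated meetings sustain cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): retaliation limits exploitation
```

This only works because the same two agents keep meeting each other; at large scales most interactions are one-shot or anonymous, and any harm to bystanders never enters the payoffs at all.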

The proposed solution is the schema of a fifth self-sufficient mechanism that would allow a system like social norms to scale, much as the market scaled tit-for-tat dynamics.

The rest of this sequence will present that idea more thoroughly, while also showing why I think most social issues stem from this problem, and therefore, why I trust that solving this will be enough for society to grow its own solutions to the current crises. I’m not claiming it will be easy. I’m just claiming that it might be easier than explicit approaches, which currently seem impossibly hard. Developing neural networks wasn’t easy either, but programming an LLM with explicit if-else statements would be practically impossible.

To do this, I will present a big picture of society I find useful. It's very basic, so you may have thought of something similar before. That being said, I haven’t seen it presented explicitly in the following way. 

Because I believe the networkist approach to solving society’s issues is closely related to this big picture description, I’ve been calling it “the networkist perspective”, assuming it’s clear by context that I’m referring to this big picture model of society and its issues with incentive misalignment.

Apart from the core insight mentioned above, I won't present the big picture in this article, because I fear that presenting an ambitious idea without the needed context may be counterproductive. Even though most people could describe the supply-demand model in five minutes, explaining why it works and why it’s useful almost always takes more time. It’s normal to use longer expositions to help understand and justify simple concepts. I promise to present a 5-minute version later.

Having said that, I can tell you what to expect in the following entries.

The networkist perspective

I mentioned that the networkist approach needs an overarching view of society.

But we already have basic and overarching views of society! 

For example, the supply-demand view is one of them, and it is sometimes used to support the free market approach I discussed above. Another overarching view of society is Marxism, which focused on power concentration but fell short of understanding incentive alignment. In future entries, I’ll argue that this was the main reason Marxism was blind to how revolutions ended up creating enormous hierarchies and extreme power concentration.

Nevertheless, in the past few decades, scientists have made some observations that fit nicely into yet another perspective. In each discipline involved, these observations are seen as basic and evident, but they were also significantly disruptive when first presented:

  • The prisoner's dilemma and the tragedy of the commons in game theory.
  • Externalities and network effects in economics.
  • Agent-based models of cooperation proposed by Axelrod.
  • Cognitive biases in behavioral science, especially Kahneman’s work.
  • The adaptiveness, or even optimality, of certain cognitive biases, which was observed by evolutionary psychologists and behavioral ecologists. An example of this is how gratitude and anger intuitively implement the tit-for-tat strategy.

All of these ideas have been successfully used to enhance our analyses of societal problems. For example, when analyzing the housing crisis and therefore studying institutions, lobbying or gas prices, these observations strongly deepen our comprehension of the causal networks involved. But analysis is not enough for the networkist approach.

I want to show how these observations, with the help of a few auxiliary but basic constructions, also fit nicely into a basic, big-picture view of how society works that isn’t widely known (I haven’t seen it elsewhere, so I’m writing this as an integrated summary of these ideas).

As a disclaimer, basic overviews are imperfect but useful. They are simple, so they describe general trends in a very “noisy” manner. For example, supply-demand is an imprecise but useful model, because it summarizes a lot of knowledge about society in a way that’s useful for learning and communicating, which in turn helps scientific advancement. 

In the following entries I intend to describe, explicitly, the big picture that can be drawn from these observations. I think it helps explain some phenomena that previous overarching views didn’t (such as power dynamics with respect to AI, social networks, religions and cults, etc.). All of these phenomena are already explained by correct analyses, which mostly use these observations, just not by a big picture that we can use for the networkist approach, or that summarizes knowledge in a way that lets us learn and communicate a lot of information quickly.

I’m not a specialist in any of the disciplines mentioned above. My academic background is in computer science and philosophy. However, I’ve been interested in this approach for more than 10 years, and have read about these topics with the aim of “building the jigsaw puzzle”. Because of this, I cannot claim originality for all the conclusions drawn in the following entries. In fact, after writing or presenting "corollaries" of this basic idea, I discovered that a few of them were already established in some disciplines.

To me, this is a feature, not a bug. The interesting part of this work is the integration. The fact that some conclusions I drew were already established confirms both that this perspective points us in the right direction and that it helps us learn and have ideas faster. This is not to say that the following text has no original observations. I think at least some of the following is new, but the best way to find out is to share this work.

All of the following ideas should be seen as heuristics or intuitions. I will describe simple patterns in a hand-wavy manner. Reality is much noisier, so these patterns won’t always fit. I’d like you to judge them as you would judge other simple overviews of society, like the supply-demand model: not 100% right, but still potentially useful. 

As an outline for the sequence:

The next entry will dive into how the four cooperation mechanisms work (tit-for-tat, the market, social norms and hierarchies). It will present their differences in terms of scaling, power dynamics, and the forms of cooperation they can sustain. The key dynamic is that as technology develops, it enables greater economies of scale, which fuels the growth of scalable mechanisms and the displacement of non-scalable ones. This creates predictable effects like the growth of some hierarchies and the displacement of community norms. This will be the basic dynamic on which to build our understanding.

The third entry shows how many of our emotions can be understood as heuristics that help us implement or navigate these cooperation mechanisms successfully. Understanding this is useful because it lets us trace the current epidemics of stress, anxiety and depression back to the predictable shifts in the cooperation mechanisms we use as technology develops. It also helps explain how our motivation, and therefore our actions, tend to follow the incentive landscape set by the four cooperation mechanisms.

The fourth entry focuses on how the competition between hierarchies and other cooperation mechanisms explains several broad-scope patterns in power dynamics. For example, it will address historical waves of authoritarianism and liberation, as well as historical economic collapses.

The fifth entry explains how I imagine a fifth system could work, by presenting a mechanism that could theoretically promote group cooperation in a scalable way without the need for a central authority. This is far from a finished concept, so I hope for a fruitful discussion.

After that, there will probably be a lot of appendices with complementary ideas or corollaries I find useful. One of them will relate these ideas to AI alignment.

These observations are intended for a networkist approach. They aim to find the core obstacles preventing society from growing its own solutions. I write this because I actually believe we have the opportunity to “plant” a horizontal mechanism for cooperating at scale, and I hope and trust that the network will take care of the details.