Epistemic Status: Highly speculative. I have an undergraduate student's background in physics and computer science but am not an expert in mechanism design, AI alignment, social sciences, political science, or physics and computer science for that matter. I am putting this model forward to have it critiqued and improved.
Intro
Intelligence is an emergent property of complex networks, a concept with profound implications for how we structure our societies. This post argues that our current modes of governance fail because they neglect the computational complexity of the systems they try to manage, a lesson painfully learned by planned economies and arguably ignored by today's market-based societies.
I will trace this idea from the cybernetic experiments of Allende's Chile to the centralized, managed economies of modern corporations like Amazon. Building on these observations, I will sketch out an alternative: a decentralized autonomous organization (DAO) designed to directly compute and optimize for the equitable distribution of power. This proposal should be considered a speculative model, and I am particularly interested in community feedback on its potential failure modes and underlying assumptions.
Intelligence Is Not Unique, and The Computational Limits of Governance
The earth emerged out of a collection of atoms and the properties of physics. You emerged out of a collection of molecules and the properties of biochemistry. You also emerged out of your mother. A man who learns the specific patterns of each individual who passes through a traffic light will be overwhelmed with information. A man who doesn't give a damn about the names and lives of these individuals at this specific traffic light, and simply counts their cars and keeps a record of the time of day, and maybe even keeps track of this at many traffic lights, can seemingly predict the future. Out of this impersonal man's approach emerges a traffic forecast.
Intelligence emerges out of the small rules of neurons. Each neuron is genetically coded to blindly follow the walls of its small known universe: the balancing inputs of excitatory and inhibitory signals. Out of many of these repeated in parallel, intelligence magically pops out. Intelligence is special, but it is not unique. It is simply a property of these kinds of large networks. When you set up a lot of nodes (neurons, or even people) with rules that organize the relationships between them (inhibitory or excitatory chemicals for neurons; money and the complexities of social relationships for people; backpropagation and the ReLU function for artificial neural networks), these large networks show intelligent properties. Networks of cells and networks of people are intelligent at different scales, but not uniquely. We even simulate these networks, attempting to grow intelligences with math: we create mathematical networks of nodes inside computers and develop the relationships between those nodes by showing them data, comparing their output to the data, and then strengthening or weakening connections depending on how well the output matches. This is the most abstract version of what a large language model is.
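To make that last description concrete, here is a minimal sketch of the training loop described above: a tiny two-layer network of nodes with ReLU rules, adjusted by backpropagation toward a toy dataset. The task (XOR), the network size, and every hyperparameter here are my own illustrative choices, not part of the proposal itself.

```python
# Minimal sketch: a small network of "nodes" whose connection strengths are
# nudged toward better outputs. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a pattern no single node can capture alone.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "relationships between the nodes": two layers of weights plus biases.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(5000):
    # Forward pass: each hidden node applies the ReLU rule to its inputs.
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)          # ReLU
    out = h @ W2 + b2

    # Compare the output to the data.
    err = out - y

    # Backpropagation: nudge each connection in the direction that reduces
    # the error ("strengthening or weakening" depending on the output).
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    grad_h = err @ W2.T
    grad_h[h_pre <= 0] = 0.0            # ReLU gradient
    grad_W1 = X.T @ grad_h / len(X)
    grad_b1 = grad_h.mean(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```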
The philosophical and political lesson of emergence is that we are as reliant on the network of our brains as we are on the network of our social systems. Those social systems themselves emerged out of ecological systems, which emerged out of physical ones. Without the base layer of the ecology we would have nothing; without the base layer of the sociology we would have nothing. The economy is nothing without these systems. This might seem like a leap, but it is not; that is what emergence says. It says that all these systems are highly interconnected in complex and unintuitive ways. Controlling them top-down is often misguided and backfires because of this computational problem: it is impossible to compute, and therefore predict, the long-term effects of making small changes to complex adaptive systems. Many times we have accidentally released invasive species into the wild; many times we have created the conditions for plagues by living in cities without adequate protection measures; consider just the consequences of releasing carbon into the atmosphere. These are small, simple examples of the unforeseeable problems that come about when dealing with complex, adaptive, non-linear systems. These problems arise because we expect these systems to be simple. Modern capitalist economists expect them to fit easily into simplistic economic models and call it a "black swan event" whenever something turns up that everyone agrees doesn't fit. To give the economists credit, these models are easily tractable, but in my opinion they blind you; they make you think you're right. It's easy to win when you set the rules to the game you're playing.
The Historical Proof of Computerized Government
In the Soviet Union there was the problem of economic calculation, compounded by what we now call Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. Without a free market to do the computation for them, the planners could not for the life of them figure out where resources should go. There is a famous (possibly exaggerated) story that most of the nails produced were effectively railroad spikes, far beyond any real demand for smaller nails, because the state measured nail quotas by weight and, at scale, heavy spikes were the cheapest way to hit the quota. The Soviets spent uncountable rubles a year (uncountable because they hid, destroyed, or lost the records) on the people required to compute where everything should go. Although not the sole point of failure for the Soviet Union, it was a massive problem for them and for all managed economies. China was able to partially solve it by letting market mechanisms in, just a little, under incredible supervision and control from the party, in order to gain some of the computational adaptability these market systems provide. They had to adapt their rigid, manual policy process, the slow react -> compute -> debate -> policy -> vote -> repeat cycle. They needed a way to react quickly to the change that the technological development of advanced economies inherently necessitates.
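As a toy illustration of the quota problem (my own invented products and numbers, not historical data), here is what a purely metric-optimizing planner does when the target is measured by weight:

```python
# If the target is measured in tonnes, the cheapest way to hit it is to make
# only the heaviest product, regardless of what people actually need.
products = {
    # name: (mass in kg per unit, production cost per unit)
    "small nail":     (0.01, 0.02),
    "medium nail":    (0.05, 0.06),
    "railroad spike": (0.40, 0.30),
}
quota_kg = 10_000  # the plan: ten tonnes of "nails"

# Cost per kilogram of quota fulfilled, for each product.
cost_per_kg = {name: cost / mass for name, (mass, cost) in products.items()}

# The metric-optimizing planner picks whatever fills the quota cheapest...
best = min(cost_per_kg, key=cost_per_kg.get)
units = quota_kg / products[best][0]
print(cost_per_kg)                              # spikes win on cost per kg
print(f"plan: {units:,.0f} units of {best}")
# ...even though actual demand was overwhelmingly for small nails.
```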
The tragedy of all this is stranger than you might think. Both of these mechanisms were easily outpaced by a promising prototype technology that already existed in the 50s and 60s, one arguably proven by 1971 in a little place called Chile. At the time Chile was a democratic liberal society that had just voted in a socialist named Salvador Allende. Allende was looking for a way to manage his society and economy without compromising its democratic constitution. He found it in two men: Fernando Flores, a genius engineer and politician, and Stafford Beer, the British operations research theorist and cyberneticist. Beer was basically what would nowadays be called a business optimization consultant, but at the time this was a novel profession that required real systems-design intuition; he was not just some consultant you would meet in any financialized city today, he was one of the inventors of the job itself. To get to the point: these men built something great. They built Project Cybersyn, a system to automate the management of the economy through a nationwide network of telex machines feeding a central computer. Roughly, it worked like this: if one ore smelter had more ore than it could process, the central computer would be notified, and an algorithm would select another smelter, by distance and by whether it too would be overwhelmed by the extra supply, and dispatch an order for a truck driver and workers to load the surplus ore and send it there. This system was able to handle the computational demands of managing Chile's relatively simple (compared to advanced economies) and heavily mining-based economy. It lasted only about two years, because in 1973 a fascist coup occurred, backed by American corporations operating in Chile whose assets had been seized by the state. The asset seizures might sound like scary communism to some, but keep in mind they were done because almost all of Chile's national industries, such as copper, were owned by American multinational corporations; almost no profits were going to Chile and its people, and the corporations intended to keep it that way. Allende ended up killing himself in the presidential palace, La Moneda, as the coup forces bombed it and closed in to kill him themselves.
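Here is a minimal sketch of that kind of dispatch rule. It is not the actual Cybersyn software (which centered on statistical monitoring of factory indicators); the facilities, capacities, and distances below are invented purely for illustration.

```python
# Sketch: move surplus ore from overloaded smelters to the nearest smelter
# with spare capacity, producing a list of dispatch orders.
from dataclasses import dataclass

@dataclass
class Smelter:
    name: str
    capacity_t: float   # tonnes of ore it can still process this period
    backlog_t: float    # tonnes of ore waiting on site

def redistribute(smelters, distance_km):
    """Return (from, to, tonnes) orders sending surplus to the nearest
    destination that will not itself be overwhelmed."""
    orders = []
    for s in smelters:
        surplus = s.backlog_t - s.capacity_t
        if surplus <= 0:
            continue
        # Candidate destinations: spare capacity, sorted by distance.
        candidates = sorted(
            (d for d in smelters if d is not s and d.capacity_t > d.backlog_t),
            key=lambda d: distance_km[(s.name, d.name)],
        )
        for d in candidates:
            if surplus <= 0:
                break
            moved = min(surplus, d.capacity_t - d.backlog_t)
            d.backlog_t += moved
            s.backlog_t -= moved
            surplus -= moved
            orders.append((s.name, d.name, moved))
    return orders

smelters = [Smelter("A", 100, 160), Smelter("B", 100, 40), Smelter("C", 100, 90)]
distance_km = {("A", "B"): 120, ("A", "C"): 40, ("B", "A"): 120,
               ("B", "C"): 90, ("C", "A"): 40, ("C", "B"): 90}
print(redistribute(smelters, distance_km))
# [('A', 'C', 10), ('A', 'B', 50)]
```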
Computational Economy in Modernity.
The experiment never ran to its fullest extent, and we have never been able to see a truly computationally managed economy. Except, that is, for everywhere on earth since the early 2010s. As argued in the book "The People's Republic of Walmart", almost all automated large-scale distribution systems use the principles Beer pioneered, except that they use them to extract value from their customers. These systems are run as secondary managed economies under the control of corporations, entities designed to extract as much wealth as possible from you. Once these systems are set up, through what the business world calls "network effects", people come to rely on them more and more. As more people buy from Amazon, more distribution centers are built, more warehouses are bought up, and more people become used to relying on the service. As this process continues there is no longer any real competition: anybody trying to compete would need a distribution network that could match yours, and as yours gets larger, the capital required to start a competitor's network gets exponentially larger. We are now at a stage where these distribution networks are so large they are global, and any real competition is gobbled up by these leviathans.
An Idealistic Proposal for Emergent Governance.
I'd like to illustrate a potential solution, one that is likely not correct but is a stab in the dark and a proof of concept for the idea that there are alternatives to what we have. This solution should be taken not as a serious governing proposal, but as an abstract painting of a potential solution: one to be refined over years, where ideas are iterated on, thrown out, or developed further.
The solution I envision is the greatest machine learning engineering project ever devised: a decentralized autonomous organization, likely built on blockchain technology, so that it is not centrally controlled. One where any token or cryptocurrency required to fuel it is equitably distributed, and not based solely on computation. A system designed to optimize for both economic and sociological efficiency, while minimizing all possible concentrations of power where feasible. A system with many cells and organelles for various fields and requirements, all working in parallel. One where no information is kept behind gates, and any surveillance that is done is open to all, where anyone can access anything learned by it. This machine would be built and designed as an open source project. Leading developers and engineers, and therefore open source project administrators, would hold dual roles as a new kind of politician, with potential term limits or whatever new democratic mechanisms the research engine of this machine devises and suggests to the public. As these leaders are public figures their lives would be naked; they should expect to be torn apart and built back up again by the populace, as part of the agreement required to participate in controlling any necessary surveillance.

The core of this machine would be one equation, likely defining edges in a network: an economics-inspired model that constantly optimizes for the distribution of not just wealth, but power itself. The power of each individual over another would have to be quantified, likely by measuring many dimensions including but not limited to: cost of living, life expectancy, and how much one person could change another's life expectancy by sharing something they have access to. As of October 2025, for example, one day of a German man's cost of living could pay for almost two days of a Venezuelan's. Assuming the German man is individually financially okay, you can see that this provides a path to quantifying how much power one man holds over another simply because of where they live. Over years and maybe decades of this machine's processes, these inequalities would balance: the German's cost of living would go down and the Venezuelan's quality of life would go up.

Ideally, though, this process would begin with those who are truly unequally empowered: those with many zeros in their money counts, and those who know how to hide those zeros. Under quantified power these people would attract a lot of attention; they would likely be identified by having significantly more wealth than would be required to live out their expected lifespan given modern technology. The system would compare an individual's expected lifespan against the wealth and assets they accrue, analyzing their personal assets and any social and economic network effects they benefit from, such as free healthcare or wealthy family members. The power model would then compare how much that wealth could affect the lives of those who do not have enough to live out their expected lifespan. Power is not a value attached to a single node; it is the value of a directed edge from one node to another, or a hyperedge from one node to many, or finally a hyperedge from many nodes to many others.
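To make the edge-weight idea concrete, here is a minimal sketch under a toy definition of my own invention: person A's power over person B is the fraction of B's remaining lifetime cost of living that A could cover out of wealth A does not need for their own remaining lifetime. The names, numbers, and the definition itself are illustrative assumptions, not the actual metric the proposal would converge on.

```python
# "Power as a directed edge weight", under a deliberately simple toy definition.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    wealth: float                 # liquid wealth plus transferable assets
    daily_cost_of_living: float   # local cost of living, per day
    life_expectancy_days: float   # expected remaining lifespan, in days

    def lifetime_need(self) -> float:
        return self.daily_cost_of_living * self.life_expectancy_days

    def surplus(self) -> float:
        # Wealth beyond what this person needs to live out their expected lifespan.
        return max(self.wealth - self.lifetime_need(), 0.0)

def power_edge(a: Person, b: Person) -> float:
    """Directed edge weight: how much of B's remaining need A's surplus could
    cover, capped at 1.0 (A could fully cover B's remaining needs)."""
    need_b = b.lifetime_need()
    if need_b == 0:
        return 0.0
    return min(a.surplus() / need_b, 1.0)

# Toy example echoing the cost-of-living comparison in the text.
a = Person("A (high-income country)", wealth=2_000_000,
           daily_cost_of_living=60, life_expectancy_days=40 * 365)
b = Person("B (low-income country)", wealth=5_000,
           daily_cost_of_living=30, life_expectancy_days=40 * 365)

print(round(power_edge(a, b), 3))  # A's power over B
print(round(power_edge(b, a), 3))  # B's power over A (0: no surplus)
```

A hyperedge version would aggregate the surpluses of many nodes against the needs of many others, but even this pairwise toy shows how asymmetric the edges can be.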
With adjusting factors to keep improving the quantification of power, I believe this would be a start on a reliable way to compute some relative measure of power. This machine would not be an immediate fix; it would be a process of trial and error, but one with self-regulating mechanisms designed to prevent catastrophe and mass exploitation. A machine designed to celebrate emergent solutions, just as the global machine of capital has done so well for so long. A machine built as humanity's crutch, built to solve our greatest evolutionary failing: the misdirection of competition.
AI Safety.
This idealized solution will likely get a billion people yelling and screaming about safety. The solution to that is, in my opinion, straightforward: tool AI compartmentalization, decentralization, and parallelization. The machine itself would have many tool AIs funneling data into different places, not one large model with broad data access like our computationally inefficient language models today. As described above, it would be a constant back and forth, a sharing of duties: tool AIs would mostly be used to format data into discrete packets for human-made algorithms, while humans guide and build those algorithms toward human ends. The system itself would ideally have no autonomous control over large-scale heuristic decisions, except where a decision concerns only its own agency (i.e. conscious AIs). Agentic AIs also fall under the power-quantification model and are themselves measured and optimized away from concentrations of power. The idealized case is one where humans and AI are working in solidarity towards the liberty of all.
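A minimal sketch of that compartmentalization, under my own assumptions: the tool model's only job is to turn raw, messy input into a small typed record, while the decision itself is made by a transparent, human-written rule. The `tool_model_extract` function below is a stand-in for whatever narrow model is plugged in; the record format and the threshold are invented for illustration.

```python
# Compartmentalization sketch: the AI can only emit data, never act.
from dataclasses import dataclass

@dataclass(frozen=True)
class SupplyReport:
    facility_id: str
    resource: str
    surplus_tonnes: float

def tool_model_extract(raw_message: str) -> SupplyReport:
    # Placeholder for a narrow tool AI that only parses and formats input.
    # It has no authority to act; it can only produce a SupplyReport.
    facility, resource, qty = raw_message.split(",")
    return SupplyReport(facility.strip(), resource.strip(), float(qty))

def human_written_policy(report: SupplyReport) -> str:
    # The actual decision rule: simple, auditable, written and amended by people.
    if report.surplus_tonnes > 50:
        return f"dispatch truck to redistribute {report.surplus_tonnes}t of {report.resource}"
    return "no action"

report = tool_model_extract("smelter-07, copper ore, 80")
print(human_written_policy(report))
```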
The Big Problems.
The largest problem I see with this idealized artistic vision is simply the problem of advanced persistent threats. As many people's movements have seen, these ideas are highly threatening to those with power. The problem I see is simply getting anyone to agree to try it, especially out of fear of those already in power, and fear of those who would try to gain power within the new system.

Any system like this would have to be created in a situation and culture that understood the need for collective good but respected individual liberty. A difficult dialectic of authority and liberty would have to be balanced, not just within the system but within the culture of the humans running it. The humans themselves would therefore have to self-correct, and ideally any sign of the specifically toxic kinds of Machiavellianism, and most if not all kinds of narcissism, within the halls of power would be ostracized and "bullied" away. Self-correcting social mechanisms like this can be seen in some tribal cultures. The !Kung of the Kalahari Desert, who actively discourage arrogance, have a practice called "insulting the meat": when a hunter brings back a large kill, they are expected to be humble, and others in the tribe will often downplay the accomplishment to prevent the hunter from developing a sense of self-importance.

To make this social self-regulation work, Dunbar's number would have to be heavily applied, maintaining councils and groups small enough to minimize the anonymity of the crowd and maximize personal connection between people and their political decision makers. Hopefully, personalizing these cells of leadership and limiting each individual's oversight to what they can actually see will allow our evolved mechanisms of social self-regulation to smooth over the majority of human problems. These cells of leadership would be managed by the same quantification and minimization of power, ideally minimizing any tribalism that might arise from the separation of power groups. This is not unlike the system of LessWrong itself: a culture of self-criticism and the pursuit of rationality, maintained through social norms and an understandable ruleset, and self-regulated through its moderation. With local decision making acting as the consensus mechanism for wider policy, local cells would weed out what they deemed toxic. Because cells manage themselves, policy across federal scales would ideally become a gradient, with more or less conservative or social ideology accepted depending on majority decisions in each cell; and because cells would have equal power, the system would self-quarantine regions it perceived as toxic (see the sketch below). With an ideal whistleblower culture and open source surveillance, it would be difficult for manipulators to manipulate one another and gain control over larger swaths of territory, again self-quarantining exploitation. This mimics the Cold War stalemate, a Nash equilibrium that enforced peace between larger powers via a metastable balance of power.

Finally, I'd like to point out that all of these problems are ones every political system ever designed has had and still has, including the ones currently in place. I am clearly not thinking enough about the specific problems with my design. In any case, all political systems' core faults come down to their human participants; my idealized model is no different.
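Here is a minimal sketch of the equal-weight-cells idea, under assumptions of my own: each Dunbar-scale cell decides a single policy parameter locally, and the wider picture is just the per-cell outcomes aggregated one cell, one vote, so ideology varies as a gradient across cells rather than being imposed uniformly. The cells, votes, and the "median" rule are all invented for illustration.

```python
# Equal-weight cells: local medians form a policy gradient; the federation-level
# reference point weights every cell equally, regardless of cell size.
from statistics import median

# Each cell's members vote on a single policy parameter (say, a local tax rate).
cells = {
    "cell-01": [0.10, 0.12, 0.11, 0.15, 0.10],
    "cell-02": [0.30, 0.28, 0.35, 0.31],
    "cell-03": [0.20, 0.22, 0.18, 0.21, 0.25, 0.19],
}

# Local policy: each cell adopts the median of its own members' votes.
local_policy = {name: median(votes) for name, votes in cells.items()}

# Federation-level reference point: median of cell policies, one cell one vote.
federal_reference = median(local_policy.values())

print(local_policy)       # a gradient of local policies
print(federal_reference)  # equal-weight aggregate across cells
```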
Questions for Discussion:
The historical example of Project Cybersyn is central to my argument. Are there other interpretations of its successes or failures that challenge my thesis? What failures do you see in my idealized solution?
My metric for power is an idealized rough sketch. What are further ways to quantify power in a computationally tractable way? What arguments are there against attempting to quantify power at all?
Is it naïve to suggest a culture that ‘ostracizes and bullies away’ specifically toxic Machiavellianism on larger scales?