AI disempowerment operates across markets, networks, and governance simultaneously, but our analytical tools don't cross those boundaries. We propose spectral graph metrics—spectral gap, Fiedler vector, eigenvalue distribution—as computable, cross-domain measures for tracking how the balance of influence shifts when AI enters coordination systems, and identify three specific quantities to monitor for AI governance.
Introduction
AI systems are changing how society coordinates — across markets, networks, governance institutions, scientific communities, all at once. The gradual disempowerment thesis captures why this is hard to address: human influence over collective outcomes can erode slowly, through ordinary competitive dynamics, without any single dramatic failure. AI systems become better at navigating coordination mechanisms, and the effective weight of human agency quietly decreases.
The stubborn part is that it operates across institutional boundaries simultaneously. Regulate algorithmic trading to maintain human oversight of markets, and competitive pressure shifts to network dynamics — whoever shapes information flow shapes what traders believe before they trade. Address attention capture in social networks, and the pressure migrates to governance advisory relationships. The problem flows around single-domain interventions like water finding cracks.
Yet our analytical tools respect exactly those domain boundaries. Economists model markets with one formalism. Network scientists study information diffusion with another. Political scientists analyze voting with a third. Each captures something real. None can describe what happens when AI systems alter the dynamics across all three simultaneously.
We think markets, networks, and democratic systems are structurally more similar than they appear. They can all be described as message-passing protocols on graph structures — nodes are participating agents, edges are channels through which influence flows, and what varies across mechanisms is what gets passed along those edges and how nodes update. In markets, messages are price signals. In networks, they're beliefs and observations. In democratic systems, they're preferences and votes.
When you represent coordination mechanisms this way, you inherit the toolkit of spectral graph theory. And this turns the disempowerment problem from something that feels intractably cross-domain into something with computable structure.
Here we give a quick sense of what this looks like concretely — don't worry if the details aren't clear yet, we'll walk through specific examples carefully in the sections that follow.
Consider a human-only coordination graph — five nodes connected by edges representing who influences whom. Every graph like this has a mathematical property called the spectral gap (λ₂), which you get from decomposing the graph's structure into its fundamental modes — the same way you'd decompose a vibrating string into its harmonic frequencies. The spectral gap measures how easily information flows across the graph's weakest point. A large spectral gap means the graph is well-connected and signals propagate quickly to everyone. A small one means there's a bottleneck somewhere — a thin bridge between two clusters where information gets stuck.
Now add AI nodes. They connect densely to each other and to key human nodes. The spectral gap increases (λ₂' > λ₂): information flows faster. But the same density can also drive separation, producing subnetworks where AIs talk mostly with other AIs because their information flow is faster.
Another useful way of looking at disempowerment from a graph perspective is to define an influence function and ask how much of it the AIs versus the humans are supplying.
Partition nodes into H and AI and trace which signals mattered for the collective outcome, with edge thickness representing causal contribution. The question becomes information-theoretic: what fraction of outcome-determination flowed through human nodes? A simple instance is tracking how much money flows through humans versus AIs, but we want to generalize this to the mutual information between node signals and collective outcomes, partitioned by type (e.g. politics, economics, culture).
The quantities that let us track this are all spectral. Eigenvector centrality tells you what fraction of structural influence belongs to human versus AI nodes, and whether that ratio is shifting. The Fiedler vector tells you whether the system is separating along the H/A boundary. Betweenness centrality tells you who controls information flow between communities — if AI nodes increasingly sit at bridge positions, nominally human decisions route through AI intermediation.[1]
Maintaining human agency means maintaining structural properties of the joint graph: human betweenness across mechanism boundaries, spectral gap ratios that keep human timescales relevant, Fiedler partitions that don't collapse onto the H/A boundary. These are measurable, computable, trackable quantities and we give a couple of AI governance suggestions based on these right before the conclusion.
We've been developing this framework for the past year at Equilibria Network. The core bet is that spectral graph theory provides a shared analytical language for coordination mechanisms that are usually studied in isolation — and that this shared language reveals structure you can't see from within any single domain.
Whether spectral analysis actually delivers on this depends on whether the toolkit works reliably across different coordination mechanisms. The rest of this post checks that claim against markets, networks, and democratic systems. We then lay out the desiderata for a unifying framework, where the open problems are, and what we're building toward.
Spectral Analysis Across Coordination Systems
We claimed that markets, networks, and democratic systems can be understood through the same spectral toolkit. Let's make that concrete. For each mechanism, we'll show how the graph Laplacian — the matrix you get from encoding who-influences-whom — gives you the spectral gap, the Fiedler vector, and the eigenvalue distribution, and what these quantities actually predict about real system behavior.
The pattern to watch for: in each case, the spectral gap λ₂ will predict how fast the system converges, the Fiedler vector will identify its natural fault lines, and the higher eigenvalues will capture its capacity for complex structure.
Spectral Analysis of Markets
Markets are the first place where the disempowerment thesis becomes concrete. If AI traders increasingly dominate price discovery, human traders don't suddenly lose their accounts—they just find their signals mattering less. The bid you submit still enters the order book, but if AI systems have already moved prices to reflect information you haven't processed yet, your trade is reactive rather than formative. You're still participating; you're just not shaping outcomes.
To see this structurally, represent a market as a graph: nodes are traders, edges represent influence relationships—who watches whom, who updates their beliefs based on whose actions. This isn't the transaction record; it's the structure through which price information actually propagates.
The graph Laplacian L = D − W encodes this structure, where D is the degree matrix (how much total influence each trader receives) and W is the weighted adjacency matrix (who influences whom, and how strongly). The Laplacian has a useful property: for any assignment of values x to nodes, the quadratic form x^T L x equals the sum over all edges of (x_i − x_j)² × w_ij. In plain language, it measures total disagreement across the network, weighted by connection strength. If two connected nodes have similar values, that edge contributes little. If they have very different values, that edge contributes a lot.
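This identity is easy to check numerically. A minimal sketch (the random weighted graph here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# random symmetric weight matrix with zero diagonal: an undirected influence graph
W = rng.uniform(0, 1, (n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

L = np.diag(W.sum(axis=1)) - W          # graph Laplacian L = D - W

x = rng.normal(size=n)                  # any assignment of values to nodes
quad = x @ L @ x
# total disagreement, each undirected edge counted once
disagreement = sum(W[i, j] * (x[i] - x[j]) ** 2
                   for i in range(n) for j in range(i + 1, n))
print(np.isclose(quad, disagreement))   # True
```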
This connects directly to price dynamics. Price discovery is diffusion—when one trader updates their bid, neighbors notice and adjust, their neighbors notice them, and the signal spreads. The Laplacian governs this process: prices equilibrate fast across strong connections (high penalty for disagreement) and slow across weak ones (low penalty). Bottlenecks in the influence structure become bottlenecks in price convergence.
Figure 3: Take an 8-trader market: 4 humans (H₁–H₄) and 4 AIs (A₁–A₄). Within each group, traders monitor each other closely—humans share information channels, AIs update rapidly on each other's outputs. But cross-group influence is sparse: just two weak links where a human occasionally tracks an AI's prices. The figure shows this structure, with solid edges for strong within-group connections (weight = 1.0) and dashed edges for weak cross-group links (weight = 0.1).
Build the Laplacian and compute its eigenvalues:
λ = {0, 0.19, 2.0, 2.1, ...}
The second eigenvalue λ₂ = 0.19 is called the spectral gap. It's small compared to λ₃ = 2.0, and that smallness tells you there's a bottleneck—the graph has a clean partition.
The eigenvector for λ₂—the Fiedler vector—assigns positive values to all humans and negative values to all AIs. The sign flip lands exactly at the Human/AI boundary. No clustering algorithm needed; the linear algebra found the structural partition automatically.
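The computation is a few lines of numpy. The within-group topology is our assumption (a ring of four), so the exact numbers differ slightly from those quoted above, but the structure is the same: λ₁ = 0, a small spectral gap λ₂ well below λ₃, and a Fiedler vector whose sign flips exactly at the H/A boundary.

```python
import numpy as np

# Hypothetical 8-trader market: nodes 0-3 are humans, 4-7 are AIs.
# Assumed topology: a ring within each group (weight 1.0),
# plus the two weak cross-group links from the text (weight 0.1).
n = 8
W = np.zeros((n, n))
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for k in range(4):                          # strong within-group edges
        i, j = group[k], group[(k + 1) % 4]
        W[i, j] = W[j, i] = 1.0
for i, j in [(0, 4), (2, 6)]:                   # weak cross-group links
    W[i, j] = W[j, i] = 0.1

L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)            # ascending eigenvalues

print("smallest eigenvalues:", np.round(eigvals[:4], 3))
fiedler = eigvecs[:, 1]
print("Fiedler signs:", np.sign(fiedler))
# humans share one sign, AIs the other: the cut lands on the H/A boundary
```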
Why does this work? The Laplacian's smallest eigenvalue is always λ₁ = 0, corresponding to the constant eigenvector [1, 1, ..., 1]. This makes sense: if every node agrees, there's zero disagreement across any edge.
The Fiedler vector solves a specific optimization problem: find the assignment of values to nodes that minimizes total disagreement, subject to not being constant. The values must sum to zero, so some are positive and some are negative.
Think about what this optimization does. If two nodes are strongly connected, you want to give them similar values to keep (x_i − x_j)² small. If they're weakly connected, the penalty for different values is low. So the optimal strategy is: assign similar values within tightly-connected clusters, and let the sign flip happen where connections are weakest.
That's why the Fiedler vector partitions the graph along its bottleneck. It's not a clustering algorithm—it's the solution to "where can I afford disagreement?" The answer is: across the weak links.
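In symbols, this is the standard variational (Courant–Fischer) characterization of λ₂:

```latex
\lambda_2 \;=\; \min_{\substack{x \neq 0 \\ x \perp \mathbf{1}}} \frac{x^{\top} L x}{x^{\top} x}
\;=\; \min_{\substack{x \neq 0 \\ \sum_i x_i = 0}} \frac{\sum_{(i,j) \in E} w_{ij}\,(x_i - x_j)^2}{\sum_i x_i^2}
```

The minimizing x is the Fiedler vector: forced to be orthogonal to the constant vector, it concentrates its disagreement on the cheapest (weakest) edges.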
How this relates to disempowerment
Convergence time scales inversely with the relevant eigenvalue. Within each group, the larger eigenvalues (~2.0) govern dynamics: τ_within ≈ 1/2 = 0.5 time units. A price shock among AIs equilibrates within the AI cluster quickly.
Across groups, the spectral gap governs: τ_cross ≈ 1/0.19 ≈ 5 time units. Cross-group convergence is 10× slower.
Here's where disempowerment becomes visible. If AI-AI connections are denser than human-human connections, the AI subgraph will have a larger internal spectral gap—AIs reach consensus faster. Collective price levels increasingly reflect AI-internal dynamics; humans equilibrate toward prices that were already set. Individual humans still trade, but the attractor they're converging toward was shaped by the faster-equilibrating subsystem.
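These timescales can be checked by simulating the diffusion dx/dt = −Lx directly on the 8-trader market described above (the within-group ring topology is our assumption, so exact rates are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 8-trader market: nodes 0-3 humans, 4-7 AIs.
# Assumed within-group ring (weight 1.0); two weak cross links (weight 0.1).
n = 8
W = np.zeros((n, n))
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for k in range(4):
        i, j = group[k], group[(k + 1) % 4]
        W[i, j] = W[j, i] = 1.0
for i, j in [(0, 4), (2, 6)]:
    W[i, j] = W[j, i] = 0.1
L = np.diag(W.sum(axis=1)) - W

x0 = np.array([1.0] * 4 + [0.0] * 4)     # price shock hits the human cluster
gaps = []
for t in [0.5, 5.0, 20.0]:
    x = expm(-L * t) @ x0                # heat-equation solution x(t) = e^{-Lt} x0
    gap = x[:4].mean() - x[4:].mean()    # disagreement between the two clusters
    gaps.append(gap)
    print(f"t={t:>4}: cross-cluster gap {gap:.3f}")
# the cross-cluster gap decays at the slow rate set by λ2,
# an order of magnitude slower than within-cluster equilibration
```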
The Fiedler partition gives you a direct test: is the market's primary structural fault line Human/AI, or something else (sector, strategy, geography)? When the partition aligns with agent type, the market has organized around that boundary. That's the spectral signature of segregation—not just difference, but separate communities where information flows freely within and struggles to cross between.
Spectral Analysis of Information Networks
For information networks, the same mathematics applies but with a different interpretation. Nodes are agents, edges are communication channels, and the Laplacian captures the diffusion structure of information flow.
The core dynamic is simple: people update their opinions by averaging over what their neighbors think. If you trust someone, their view pulls yours toward theirs. Repeat this across everyone simultaneously, and the network either converges toward shared understanding or gets stuck in persistent disagreement. Which happens depends entirely on the graph's spectral properties.
The spectral gap λ₂ predicts how fast consensus forms. The figure shows why this works intuitively. A well-connected network (left) has many paths between any two nodes—information can flow through multiple routes, disagreements get smoothed out quickly, and the system relaxes to consensus. A poorly-connected network (right) has a bottleneck: two dense clusters linked by a single weak connection. Information flows freely within each cluster but struggles to cross the bridge. Disagreements between clusters persist.
The Cheeger inequality proves that a small spectral gap guarantees a bottleneck exists somewhere in the network. The eigenvector v₂ (called the Fiedler vector) tells you exactly where: nodes with positive components fall on one side of the cut, nodes with negative components on the other. The algebra finds the echo chambers automatically.
The higher eigenvalues reveal something different: the network's capacity for complex patterns of belief. A network with only one significant eigenvalue can sustain only binary disagreement—you're either in group A or group B. A network with many well-separated eigenvalues can maintain richer structure: multiple factions, nested coalitions, opinions that don't collapse onto a single axis. The spectral distribution measures what we might call the network's "cognitive complexity."
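The consensus-speed claim is easy to demonstrate with the averaging rule written as discrete Laplacian steps, x ← x − αLx. A sketch comparing two hypothetical networks, one well-connected and one bottlenecked:

```python
import numpy as np

def steps_to_consensus(W, alpha=0.1, tol=1e-3, max_steps=10_000):
    """Iterate x <- x - alpha * L x until opinions agree to within tol."""
    L = np.diag(W.sum(axis=1)) - W
    x = np.linspace(-1.0, 1.0, len(W))    # deterministic spread of opinions
    for step in range(max_steps):
        if np.ptp(x) < tol:               # max pairwise disagreement
            return step
        x = x - alpha * (L @ x)
    return max_steps

n = 8
well_connected = np.ones((n, n)) - np.eye(n)   # everyone listens to everyone

bottleneck = np.zeros((n, n))                  # two cliques, one weak bridge
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in group:
        for j in group:
            if i != j:
                bottleneck[i, j] = 1.0
bottleneck[3, 4] = bottleneck[4, 3] = 0.1

fast = steps_to_consensus(well_connected)
slow = steps_to_consensus(bottleneck)
print(f"well-connected: {fast} steps; bottlenecked: {slow} steps")
```

The bottlenecked network takes orders of magnitude longer: its small λ₂ means the cross-cluster disagreement mode decays very slowly.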
Network scientists have confirmed these patterns empirically across social media platforms, scientific collaboration networks, and political communication systems.
Spectral Analysis of Democratic Systems
Voting systems are less obviously graphical, but the representation still works. Consider voters (or legislators) as nodes and influence relationships as edges: who persuades whom, who looks to whom for voting cues, who forms coalitions with whom. The aggregation mechanism determines how preference signals propagate through this influence graph.
The spectral gap λ₂ predicts the stability of collective decisions. A large spectral gap means the voting outcome is robust—small changes in individual preferences don't flip the result. A small spectral gap means the system is near a "phase transition" where minor shifts could change everything. There’s evidence that voting rules exhibit phase transitions at critical preference thresholds. Below some critical concentration, manipulation probability approaches 1 exponentially fast; above it, manipulation becomes exponentially unlikely. The spectral gap tells you how close your system sits to that knife's edge.
The picture shows what this looks like concretely. Two partisan blocs—tightly connected internally, weakly connected to each other—with a handful of swing voters bridging the gap. The spectral structure reads this topology directly: small λ₂ because the cross-bloc connections are sparse, and the Fiedler vector v₂ cleanly separating the blocs by sign. The swing voters show up near zero in v₂, mathematically capturing their position between worlds—and their decisive influence on outcomes.
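A minimal numerical illustration of that structure, on a hypothetical 9-voter graph (two dense four-member blocs, one swing voter weakly tied to both):

```python
import numpy as np

n = 9                                     # voters 0-3: bloc A, 4-7: bloc B, 8: swing
W = np.zeros((n, n))
for bloc in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in bloc:
        for j in bloc:
            if i != j:
                W[i, j] = W[j, i] = 1.0   # dense within-bloc ties
W[8, 0] = W[0, 8] = 0.5                   # swing voter weakly tied to each bloc
W[8, 4] = W[4, 8] = 0.5

L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)
v2 = eigvecs[:, 1]                        # Fiedler vector

print("bloc A signs:", np.sign(v2[:4]))
print("bloc B signs:", np.sign(v2[4:8]))
print("swing voter component:", round(float(v2[8]), 6))   # ≈ 0
```

The blocs separate cleanly by sign, and the swing voter lands at (essentially) zero in v₂, exactly as described.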
The eigenvectors reveal fault lines, and this has been validated empirically. When network scientists applied spectral analysis to U.S. Congressional voting, they found that modularity-based community detection naturally identifies partisan blocs—and spectral methods for modularity optimization underlie these partitions. No manual labeling needed—the linear algebra finds the political coalitions automatically.
Political polarization has a precise spectral signature. Work on legislative voting networks shows that roll-call voting behavior increasingly corresponds to "interval graphs"—mathematical structures where a single dimension captures almost everything. The researchers found a sharp collapse in dimensionality over recent decades. By the post-104th Senate, Congressional voting had become essentially one-dimensional: a single eigenvector explains nearly all the variation. When your democracy's spectrum looks like that, you're seeing polarization written in the mathematics itself.
The gap between λ₂ and λ₃ measures something real about democratic health. When λ₂ is much smaller than λ₃, the system has one dominant fault line—a clean left-right split with weak cross-partisan ties. That structure is spectrally fragile. When eigenvalues distribute more evenly, the system sustains complex coalition patterns: overlapping groups, cross-cutting alliances, the kind of multidimensional politics that makes manipulation harder and outcomes more stable.
The napkin math: Under social influence, preferences converge via the same Laplacian dynamics: dx/dt = −Lx. But for voting, we care about stability under perturbation. If I shift one voter's preference by ε, how much does the collective outcome move?
Perturbation analysis gives: δ(outcome) ∝ ε/λ₂ under influence-aggregation models where preferences propagate through the network before a decision is reached. The precise relationship depends on the voting rule — unanimity, majority, approval voting, and ranked-choice systems each interact differently with the same influence topology. [2]
The steady-state influence operator, the Laplacian pseudoinverse L⁺, has largest eigenvalue 1/λ₂: when λ₂ is small, small inputs produce large outputs. And crucially, where that perturbation lands matters: shifting a swing voter (near zero in v₂) moves the outcome far more than shifting a committed partisan. The eigenvector tells you who the kingmakers are.
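One concrete check of the ε/λ₂ scaling: the Laplacian pseudoinverse L⁺, which maps persistent inputs to steady-state responses under dx/dt = −Lx + u, has largest eigenvalue exactly 1/λ₂. A sketch on a hypothetical polarized electorate:

```python
import numpy as np

# hypothetical polarized electorate: two 4-voter blocs, two weak cross ties
n = 8
W = np.zeros((n, n))
for bloc in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in bloc:
        for j in bloc:
            if i != j:
                W[i, j] = W[j, i] = 1.0
W[3, 4] = W[4, 3] = 0.1
W[0, 7] = W[7, 0] = 0.1

L = np.diag(W.sum(axis=1)) - W
lam2 = np.linalg.eigvalsh(L)[1]             # the spectral gap

L_pinv = np.linalg.pinv(L)                  # steady-state response operator
amplification = np.linalg.eigvalsh(L_pinv)[-1]

print(np.isclose(amplification, 1 / lam2))  # True: small gap, large amplification
```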
This is the spectral signature of political instability: a polarized electorate with weak cross-partisan ties, a few swing voters holding disproportionate power, the whole system poised at a critical point where small nudges flip outcomes.
Generalising the examples
We set out to check whether spectral properties carry consistent meaning across different coordination mechanisms, and the examples suggest they do. In each case, the spectral gap λ₂ predicted the rate at which local perturbations become global patterns—price convergence speed in markets, information diffusion rate in networks, decision stability in democracies. The Fiedler vector v₂ identified natural fault lines in each system, whether those manifest as trading communities, echo chambers, or partisan blocs. Higher eigenvalues captured each system's capacity for complex, multi-dimensional structure rather than simple binary divisions.
The mathematics remained the same across all three; what changed was interpretation. This is consistent with the hypothesis that markets, networks, and democracies admit a shared analytical framework — though it doesn't yet prove they share deep structure as opposed to admitting similar computational tools for somewhat different reasons. Together, these spectral quantities give us computable, cross-domain metrics for tracking how coordination dynamics shift when AI nodes enter a system — spectral gap for convergence speed, Fiedler vector for fault lines, eigenvalue distribution for structural complexity, all partitionable by agent type.
What follows is the deeper theoretical program that we hope might ground these metrics in a unified framework and extend them beyond what spectral methods alone can reach.
A First Order Model
What follows is the theoretical program we're developing to ground and extend these results. The ideas here are less settled than what came before — we're sharing our current thinking rather than established findings, because we think the direction is promising enough to be worth discussing openly.
The spectral examples are satisfying but they raise an obvious question: why does this work? Observing that the same eigenvalues predict things across markets, networks, and democratic systems is a pattern. It's not yet an explanation. What's the underlying mechanism that makes the Laplacian the right object to study in all three cases?
The following story is an accessible first-order approximation; the fuller, contextualised answer to this question is part of a larger research program.
The graph Laplacian L = D − W is, at bottom, a description of how information moves on a graph. Take any quantity distributed across nodes — prices, beliefs, preferences, anything — and ask how it diffuses through the network of connections. The Laplacian is the operator that governs that diffusion. Its eigenvalues tell you the rates. Its eigenvectors tell you the patterns. This is what graph signal processing studies: signals defined on graphs, and the structures that govern how those signals propagate.
Now consider what the agents on this graph are actually doing. A trader is trying to find the right price. A voter is trying to figure out how to vote. A team member is trying to predict what their colleagues will do. Whatever the domain, each agent is maintaining some model of its situation, generating predictions, comparing those predictions against what it observes, and updating to reduce the error.
Each agent is doing local optimization against a landscape defined partly by what every other agent is doing. The active inference literature frames this as free energy minimization; you could also describe it through Bayesian belief updating or gradient-based optimization of prediction error. [3]
If you have a collection of agents on a graph, each doing local optimization — each one hill-climbing on its own prediction-error landscape — what you get at the collective level is energy moving across the system. Agent A updates based on signals from neighbors B and C, which changes what B and C observe, which changes their updates, which propagates further. To a first approximation, the collective dynamics is diffusion. Information and uncertainty flow across edges according to the same mathematics the Laplacian describes.
When models disagree — when agents have misaligned predictions about each other — the system is in a state of high collective free energy. Tension, disagreement, unresolved uncertainty. Message-passing resolves this tension. Agents exchange signals, update their models, and the system relaxes toward coordination. The Laplacian governs the rate and pattern of this relaxation. The spectral gap tells you how fast collective uncertainty resolves. The Fiedler vector tells you where the persistent disagreements will be.
This is why the spectral toolkit works across domains. Markets, networks, and democratic systems all involve agents doing local inference on a shared graph, and the Laplacian is the mathematical object that describes how local inference becomes collective dynamics. The spectral results follow from the structure of distributed optimization on graphs — regardless of whether the optimization targets are prices, beliefs, or policy preferences.
Beyond Approximations
This picture is the leading-order term, and it's the term the spectral toolkit captures well. But we should be direct about its limitations.
The phenomena that matter most for AI safety — strategic positioning at bridge nodes, recursive modeling of other agents' models, coalition formation, identity-driven resistance to consensus — live precisely in the regime where the diffusion approximation breaks down. The Laplacian governs what happens when agents are doing something like local averaging. Real world coordination involves agents who anticipate, who model each other's models, who form alliances and defect from them. The actual collective operator is more complex than the Laplacian, and characterizing what it looks like beyond the diffusion regime is the core theoretical challenge of this research program.
The deeper question — and this is where we think the most promising contribution lies — is whether there's a formal correspondence between free energy minimization at the individual level and the collective dynamics we observe across coordination mechanisms. If different coordination systems are different ways of collectively minimizing prediction error under different constraints, that would explain why the spectral toolkit transfers. We're pursuing this direction but the formal results aren't ready yet. If it works, the functorial mapping wouldn't be between markets and networks directly, but between individual inference and collective coordination, with markets and networks as different instantiations under different constraints.
Our approach builds on several existing lines of work connecting individual and collective inference. Heins et al. have shown how spin-glass systems can be analyzed through collective active inference, providing a concrete implementation of multi-agent free energy minimization on graphs. Hyland et al.'s work on free energy equilibria establishes conditions under which multi-agent systems reach shared minima — the formal analog of coordination. And recent work on partial information decomposition of flocking behavior demonstrates how collective dynamics can be decomposed into synergistic and redundant information contributions across agents. For readers interested in the computational methods, [this lecture on MCMC approaches to collective active inference] provides a setup for running a collective <-> individual gibbs sampling-style loop.
Desiderata for the Theory
The active inference story gives us a candidate explanation for why spectral methods transfer across domains. But an explanation isn't the same as a unifying framework. We think that our theory needs three things: universality across domains, compositionality so that understanding markets and networks separately lets you predict what happens when they operate together, and computational tractability at scale. We want this so that we can actually compose and simulate AI + human systems and see what happens.
These pull against each other, and most existing frameworks achieve one or two. Game theory applies broadly but doesn't compose — there's no natural operation for "combine these two games and predict the joint dynamics." Network science computes efficiently but treats each coordination domain as requiring its own model. Social choice theory has beautiful results about voting but nothing to say about price formation. Taking inspiration from Fong and Spivak's work on applied category theory, the question is whether price dynamics and voting dynamics are structurally isomorphic — the same compositional relationships, even if their elements look completely different.
Where does spectral analysis sit against these?
Tractability is real but not as clean as it first appears. Computing the leading eigenvalues of a sparse graph is fast (roughly linear in the number of edges per iteration of standard solvers). But the graphs we care about aren't static: they evolve endogenously based on the dynamics we're modeling, and influence in real systems is latent, inferred from observed correlations rather than measured directly. The computational advantage over exhaustive simulation exists, but it's not a free lunch.
Universality is more promising. The spectral gap does predict convergence-like behavior across markets, networks, and democratic systems, and the eigenvectors reveal natural clustering regardless of domain. But showing that the same tools apply isn't the same as showing these domains share deep structure. The active inference connection is our best current candidate for why the transfer happens — distributed inference on graphs, governed by the Laplacian at leading order — but it remains a conjecture rather than a result.
Compositionality is where we've barely started, and it's where the real gap lies. Real coordination systems blend multiple mechanisms — a company uses market mechanisms for resource allocation, network relationships for information flow, and democratic processes for major decisions. The spectral analysis applies to each layer, but we don't have composition rules that predict what happens when you stack them. We're exploring this through what we call process-based modelling (beware of technical debt) — a functional programming approach to multi-agent simulation that might offer a computational path to composition. More in a future post.
The deeper open question is whether the cross-domain transfer we've demonstrated reflects structural unity or a sufficiently general hammer. If it's structural unity, it should live in what each mechanism preserves and what it discards — markets might preserve something about efficient information aggregation through exchange, democracies something about equal origination of influence, networks something about positional structure of information flow. These feel like different constraints on the same underlying message-passing process. Formalizing that intuition is the core theoretical challenge ahead — and it connects directly to the practical question of what happens when AI agents, operating under their own constraint regimes, enter these systems.
Applications in Governance
The spectral framework suggests three specific quantities that should be monitored in any coordination system where AI agents participate alongside humans.
Human betweenness across mechanism boundaries. If AI nodes increasingly sit at bridge positions between different coordination mechanisms — between the market layer and the network layer, between information flow and governance — then nominally human decisions increasingly route through AI intermediation. Betweenness centrality, partitioned by agent type across mechanism boundaries, tracks this directly. When human betweenness declines relative to AI betweenness at cross-mechanism bridges, the system is developing AI-mediated chokepoints.
Spectral gap ratios that keep human timescales relevant. If the AI subgraph's internal spectral gap is much larger than the human subgraph's — meaning AI nodes reach internal consensus far faster than humans can coordinate among themselves — then collective outcomes might get determined by whichever subsystem equilibrates first. The ratio λ₂(AI subgraph) / λ₂(human subgraph) measures this directly. A ratio that grows over time signals that AI coordination speed is outpacing human coordination speed, and collective outcomes will increasingly reflect AI-internal dynamics.
Fiedler partitions that don't collapse onto the H/AI boundary. The Fiedler vector v₂ identifies the system's primary structural fault line. If v₂ increasingly separates human nodes from AI nodes — if the dominant partition of the system is "humans on one side, AI on the other" rather than some functional or topical division — the system has structurally segregated along the type boundary. This is the spectral signature of a coordination system where humans and AI are no longer integrated but operating as separate blocs.
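All three quantities take only a few lines of code on any concrete graph. A sketch using networkx on a toy mixed system (the graph and labels are hypothetical: humans on a sparse ring, AIs in a dense clique, two weak cross links):

```python
import networkx as nx
import numpy as np

humans = [f"H{i}" for i in range(4)]
ais = [f"A{i}" for i in range(4)]

G = nx.Graph()
for i in range(4):                                   # humans: sparse ring
    G.add_edge(humans[i], humans[(i + 1) % 4], weight=1.0)
for i in range(4):                                   # AIs: dense clique
    for j in range(i + 1, 4):
        G.add_edge(ais[i], ais[j], weight=1.0)
G.add_edge(humans[0], ais[0], weight=0.1)            # weak cross-type bridges
G.add_edge(humans[2], ais[2], weight=0.1)

# 1. Betweenness, partitioned by agent type
bc = nx.betweenness_centrality(G)
print("human betweenness:", round(sum(bc[v] for v in humans), 3))
print("AI betweenness:   ", round(sum(bc[v] for v in ais), 3))

# 2. Spectral gap ratio between the two subgraphs
lam_h = nx.algebraic_connectivity(G.subgraph(humans))
lam_a = nx.algebraic_connectivity(G.subgraph(ais))
print("gap ratio λ2(AI)/λ2(H):", round(lam_a / lam_h, 2))  # > 1: AIs equilibrate faster

# 3. Does the Fiedler partition collapse onto the H/AI boundary?
v2 = nx.fiedler_vector(G)
idx = {node: k for k, node in enumerate(G.nodes)}
h_signs = {np.sign(v2[idx[v]]) for v in humans}
a_signs = {np.sign(v2[idx[v]]) for v in ais}
print("Fiedler cut = H/AI boundary:",
      len(h_signs) == 1 and len(a_signs) == 1 and h_signs != a_signs)
```

On this toy graph the gap ratio comes out above 1 (the clique coordinates faster than the ring) and the Fiedler cut lands exactly on the type boundary, the warning sign described above.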
These are necessary conditions for meaningful human agency in mixed coordination systems, but they are not sufficient. A system could satisfy all three criteria while still undermining human agency through subtler mechanisms — frame-setting that shapes which options humans consider, information curation that determines what humans see before they decide, meaning-making that happens within nodes rather than between them. The spectral criteria track the structural skeleton of influence. They cannot detect whether the content flowing through that skeleton is manipulative, reductive, or otherwise corrosive to agency. We flag this not to undermine the criteria but to bound what they can and can't detect — and to motivate the richer framework we're developing, which would need to capture not just information flow but information quality and strategic intent.
What We're Building Toward
Mathematical foundations. Formalizing the symmetry conjectures, proving spectral-behavioral correspondences, developing the category-theoretic structure that makes composition rigorous.
Simulation infrastructure. Building tools that let researchers construct collective intelligence systems as graphs with message-passing rules, run simulations, and analyze spectral properties. We want tight theory-to-practice feedback loops, so the aim here is practical application.
Multi-agent AI safety. Applying this framework to understand what happens when AI agents participate in human coordination mechanisms.
Conclusion
Wherever you have multiple agents coordinating under uncertainty, you can draw a graph. The nodes are whoever's participating. The edges are the channels through which they influence each other. The message-passing rules encode what kind of coordination mechanism you're using.
Markets, democracies, networks, hierarchies — they're all message-passing on graphs. The differences that matter are what flows along the edges, how nodes update, and what the structure permits. And because graphs give us matrices, we get the full power of linear algebra — efficient computation, proven algorithms, scalable analysis.
What we've shown here is that spectral methods give computable, falsifiable quantities for tracking coordination dynamics — including AI disempowerment — across domains that are usually studied separately.
What we haven't shown, but believe is worth pursuing, is why this works. We suspect the answer involves a formal correspondence between individual free energy minimization and collective coordination dynamics — that the Laplacian captures the leading-order term of distributed inference, and that's why it transfers. If that's right, different coordination mechanisms would be different constraint regimes on the same underlying process, and the deep question becomes: what do different mechanisms preserve? Markets seem to preserve something about efficient information aggregation through exchange. Democracies seem to preserve something about equal origination of influence. Networks seem to preserve something about positional structure of information flow.
We think these spectral correspondences hint at deeper structural connections between coordination mechanisms — connections that might eventually be formalized categorically, the way the Langlands program connected number theory and geometry.
Thanks to the extended Equilibria research network for many conversations that shaped these ideas. This post presents a research direction we're actively developing—feedback, criticism, and collaboration are welcome.
This was co-written with Claude Opus based on work over the last year. I would give an 85-90% probability of the core claims holding up, as this is something I’ve spent a lot of time on.
(A caveat on this metric: mutual information with outcomes captures causal contribution but not the full picture of agency. A human whose vote is shaped entirely by AI-curated framing has high mutual information with the outcome — their signal mattered — but diminished agency in any meaningful sense. They didn't act from their own understanding; they were a conduit for someone else's influence. A complete account would need to track not just whether human signals determined outcomes, but whether those signals originated from human deliberation. We don't have a clean formalization of this yet, and it's a significant gap. The spectral metrics we propose are necessary conditions for human agency, not sufficient ones — you can't have meaningful agency without structural influence, but structural influence alone doesn't guarantee it.)↩︎
The result holds most cleanly for linear aggregation on the influence graph; for other mechanisms, the spectral gap still constrains dynamics but the proportionality may take a different form. With a small spectral gap, even tiny preference shifts get amplified dramatically. The spectral-stability connection for democratic systems is the least developed of the three analyses presented here and deserves its own dedicated treatment. The core difficulty is that different voting rules don't just produce different proportionality constants — they can fundamentally change what the spectral properties mean. Under approval voting, the strategy space is different from plurality, which means the way preferences propagate through influence relationships changes qualitatively, not just quantitatively. The voting rule may function as a structural variable that reshapes the effective graph, rather than a parameter applied to a fixed graph. We've bracketed this issue here by focusing on influence-aggregation models where the spectral connection is most transparent, but a full treatment would need to develop mechanism-specific spectral signatures for different voting rules. ↩︎
These frameworks converge under specific conditions — roughly, when agents have well-defined generative models and the environment is stationary enough for variational approximations to track — but they are not identical operations. The structural point that matters here is more basic: each agent is doing some form of local inference, and the collective dynamics emerge from those local processes interacting across the graph. ↩︎
When you represent coordination mechanisms this way, you inherit the toolkit of spectral graph theory. And this turns the disempowerment problem from something that feels intractably cross-domain into something with computable structure.
Here we give a quick sense of what this looks like concretely — don't worry if the details aren't clear yet, we'll walk through specific examples carefully in the sections that follow.
Consider a human-only coordination graph — five nodes connected by edges representing who influences whom. Every graph like this has a mathematical property called the spectral gap (λ₂), which you get from decomposing the graph's structure into its fundamental modes — the same way you'd decompose a vibrating string into its harmonic frequencies. The spectral gap measures how easily information flows across the graph's weakest point. A large spectral gap means the graph is well-connected and signals propagate quickly to everyone. A small one means there's a bottleneck somewhere — a thin bridge between two clusters where information gets stuck.
Now add AI nodes. They connect densely to each other and to key human nodes. The spectral gap increases: λ₂' > λ₂; that is, information flows faster. This can in turn produce separated networks in which AIs mostly talk to AIs, because their internal information flow is faster.
Another useful way of looking at disempowerment from a graph perspective is to define an influence function and ask how much of it the AI nodes versus the human nodes contribute.
Partition nodes into H and AI and trace which signals mattered for the collective outcome. Edge thickness represents causal contribution. The question becomes information-theoretic: what fraction of outcome-determination flowed through human nodes? A simple example of such a metric is how much money flows through humans versus AIs, but we want to generalize this to the mutual information between node signals and collective outcomes, partitioned by domain (e.g. politics, economics, culture).
The quantities that let us track this are all spectral. Eigenvector centrality tells you what fraction of structural influence belongs to human versus AI nodes, and whether that ratio is shifting. The Fiedler vector tells you whether the system is separating along the H/AI boundary. Betweenness centrality tells you who controls information flow between communities — if AI nodes increasingly sit at bridge positions, nominally human decisions route through AI intermediation.[1]
Maintaining human agency means maintaining structural properties of the joint graph: human betweenness across mechanism boundaries, spectral gap ratios that keep human timescales relevant, and Fiedler partitions that don't collapse onto the H/AI boundary. These are measurable, computable, trackable quantities, and we give a couple of AI governance suggestions based on them right before the conclusion.
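To make "measurable, computable" literal, here is a minimal sketch of one such quantity: the eigenvector-centrality share held by human nodes. The six-node graph and the human/AI labeling are assumptions for the example (in practice you would use a graph library rather than hand-rolled power iteration; betweenness and Fiedler checks follow the same pattern):

```python
import numpy as np

def eigenvector_centrality(W, iters=1000):
    """Leading eigenvector of a symmetric weight matrix via power iteration."""
    x = np.ones(len(W))
    for _ in range(iters):
        x = W @ x
        x /= np.linalg.norm(x)
    return x / x.sum()   # normalize so centralities sum to 1

# Illustrative graph: nodes 0-2 human, 3-5 AI; the AI cluster has heavier ties.
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 2, 2],
    [0, 0, 0, 2, 0, 2],
    [0, 0, 0, 2, 2, 0],
], dtype=float)

c = eigenvector_centrality(W)
human_share = c[:3].sum()   # fraction of structural influence held by humans
print(f"human influence share: {human_share:.2f}")
```

Tracking this share over time, per coordination mechanism, is exactly the kind of monitoring the framework suggests: the number itself is less informative than its trend.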
We've been developing this framework for the past year at Equilibria Network. The core bet is that spectral graph theory provides a shared analytical language for coordination mechanisms that are usually studied in isolation — and that this shared language reveals structure you can't see from within any single domain.
Whether spectral analysis actually delivers on this depends on whether the toolkit works reliably across different coordination mechanisms. The rest of this post checks that claim against markets, networks, and democratic systems. We then lay out the desiderata for a unifying framework, where the open problems are, and what we're building toward.
Spectral Analysis Across Coordination Systems
We claimed that markets, networks, and democratic systems can be understood through the same spectral toolkit. Let's make that concrete. For each mechanism, we'll show how the graph Laplacian — the matrix you get from encoding who-influences-whom — gives you the spectral gap, the Fiedler vector, and the eigenvalue distribution, and what these quantities actually predict about real system behavior.
The pattern to watch for: in each case, the spectral gap λ₂ will predict how fast the system converges, the Fiedler vector will identify its natural fault lines, and the higher eigenvalues will capture its capacity for complex structure.
Spectral Analysis of Markets
Markets are the first place the disempowerment thesis becomes concrete. If AI traders increasingly dominate price discovery, human traders don't suddenly lose their accounts—they just find their signals mattering less. The bid you submit still enters the order book, but if AI systems have already moved prices to reflect information you haven't processed yet, your trade is reactive rather than formative. You're still participating; you're just not shaping outcomes.
To see this structurally, represent a market as a graph: nodes are traders, edges represent influence relationships—who watches whom, who updates their beliefs based on whose actions. This isn't the transaction record; it's the structure through which price information actually propagates.
The graph Laplacian L = D − W encodes this structure, where D is the degree matrix (how much total influence each trader receives) and W is the weighted adjacency matrix (who influences whom, and how strongly). The Laplacian has a useful property: for any assignment of values x to nodes, the quadratic form x^T L x equals the sum over all edges of (x_i − x_j)² × w_ij. In plain language, it measures total disagreement across the network, weighted by connection strength. If two connected nodes have similar values, that edge contributes little. If they have very different values, that edge contributes a lot.
This connects directly to price dynamics. Price discovery is diffusion—when one trader updates their bid, neighbors notice and adjust, their neighbors notice them, and the signal spreads. The Laplacian governs this process: prices equilibrate fast across strong connections (high penalty for disagreement) and slow across weak ones (low penalty). Bottlenecks in the influence structure become bottlenecks in price convergence.
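The quadratic-form identity is easy to verify numerically. A minimal sketch with random illustrative weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric influence weights among 6 traders (illustrative).
W = rng.uniform(0.0, 1.0, (6, 6))
W = np.triu(W, 1)
W = W + W.T                              # symmetric, zero diagonal
L = np.diag(W.sum(axis=1)) - W           # Laplacian L = D - W

x = rng.normal(size=6)                   # any assignment of values to nodes

quad = float(x @ L @ x)
disagreement = sum(W[i, j] * (x[i] - x[j]) ** 2
                   for i in range(6) for j in range(i + 1, 6))
print(quad, disagreement)                # equal up to floating point
```

The two numbers agree for any x and any symmetric W, which is why "total weighted disagreement" is a faithful reading of the quadratic form rather than an analogy.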
Figure 3: Take an 8-trader market: 4 humans (H₁–H₄) and 4 AIs (A₁–A₄). Within each group, traders monitor each other closely—humans share information channels, AIs update rapidly on each other's outputs. But cross-group influence is sparse: just two weak links where a human occasionally tracks an AI's prices. The figure shows this structure, with solid edges for strong within-group connections (weight = 1.0) and dashed edges for weak cross-group links (weight = 0.1).
Build the Laplacian and compute its eigenvalues:
λ = {0, 0.19, 2.0, 2.1, ...}
The second eigenvalue λ₂ = 0.19 is called the spectral gap. It's small compared to λ₃ = 2.0, and that smallness tells you there's a bottleneck—the graph has a clean partition.
The eigenvector for λ₂—the Fiedler vector—comes out as:
v₂ ≈ [+0.5, +0.5, +0.5, +0.5, −0.5, −0.5, −0.5, −0.5], [H₁ H₂ H₃ H₄ A₁ A₂ A₃ A₄]
All humans get positive values; all AIs get negative. The sign flip lands exactly at the Human/AI boundary. No clustering algorithm needed; the linear algebra found the structural partition automatically.
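You can reproduce this kind of computation in a few lines. The within-group topology and the placement of the two weak links below are assumptions (the post's figure fixes its own layout, hence the λ₂ = 0.19 quoted above; this sketch will give somewhat different numbers), but the qualitative signature is robust: λ₁ at zero, a small λ₂ far below λ₃, and Fiedler signs splitting H from A.

```python
import numpy as np

# 8 traders: H1-H4 are nodes 0-3, A1-A4 are nodes 4-7.
# Assumed topology: a ring of strong ties within each group,
# plus two weak cross-group links at an assumed placement.
strong = [(0, 1), (1, 2), (2, 3), (3, 0),      # human ring, weight 1.0
          (4, 5), (5, 6), (6, 7), (7, 4)]      # AI ring, weight 1.0
weak = [(3, 4), (2, 5)]                        # cross-group links, weight 0.1

W = np.zeros((8, 8))
for i, j in strong:
    W[i, j] = W[j, i] = 1.0
for i, j in weak:
    W[i, j] = W[j, i] = 0.1

L = np.diag(W.sum(axis=1)) - W
vals, vecs = np.linalg.eigh(L)
v2 = vecs[:, 1]                                # Fiedler vector

print("eigenvalues:", np.round(vals, 2))
print("Fiedler signs (H1-H4, A1-A4):", np.sign(v2).astype(int))
```

The sign pattern comes out constant within each group and flips exactly at the group boundary, with no clustering step anywhere in the code.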
Why does this work? The Laplacian's smallest eigenvalue is always λ₁ = 0, corresponding to the constant eigenvector [1, 1, ..., 1]. This makes sense: if every node agrees, there's zero disagreement across any edge.
The Fiedler vector solves a specific optimization problem: find the assignment of values to nodes that minimizes total disagreement, subject to not being constant. The values must sum to zero, so some are positive and some are negative.
Think about what this optimization does. If two nodes are strongly connected, you want to give them similar values to keep (x_i − x_j)² small. If they're weakly connected, the penalty for different values is low. So the optimal strategy is: assign similar values within tightly-connected clusters, and let the sign flip happen where connections are weakest.
That's why the Fiedler vector partitions the graph along its bottleneck. It's not a clustering algorithm—it's the solution to "where can I afford disagreement?" The answer is: across the weak links.
How this relates to disempowerment
Convergence time scales inversely with the relevant eigenvalue. Within each group, the larger eigenvalues (~2.0) govern dynamics: τ_within ≈ 1/2 = 0.5 time units. A price shock among AIs equilibrates within the AI cluster quickly.
Across groups, the spectral gap governs: τ_cross ≈ 1/0.19 ≈ 5 time units. Cross-group convergence is 10× slower.
Here's where disempowerment becomes visible. If AI-AI connections are denser than human-human connections, the AI subgraph will have a larger internal spectral gap—AIs reach consensus faster. Collective price levels increasingly reflect AI-internal dynamics; humans equilibrate toward prices that were already set. Individual humans still trade, but the attractor they're converging toward was shaped by the faster-equilibrating subsystem.
The Fiedler partition gives you a direct test: is the market's primary structural fault line Human/AI, or something else (sector, strategy, geography)? When the partition aligns with agent type, the market has organized around that boundary. That's the spectral signature of segregation—not just difference, but separate communities where information flows freely within and struggles to cross between.
The spectral gap shows up empirically in how markets restructure around Federal Reserve announcements, in asset pricing models using graph Laplacians, and in volatility forecasting through graph signal processing.
Spectral Analysis of Networks
For information networks, the same mathematics applies but with different interpretation. Nodes are agents, edges are communication channels, and the Laplacian captures the diffusion structure of information flow.
The core dynamic is simple: people update their opinions by averaging over what their neighbors think. If you trust someone, their view pulls yours toward theirs. Repeat this across everyone simultaneously, and the network either converges toward shared understanding or gets stuck in persistent disagreement. Which happens depends entirely on the graph's spectral properties.
The spectral gap λ₂ predicts how fast consensus forms. The figure shows why this works intuitively. A well-connected network (left) has many paths between any two nodes—information can flow through multiple routes, disagreements get smoothed out quickly, and the system relaxes to consensus. A poorly-connected network (right) has a bottleneck: two dense clusters linked by a single weak connection. Information flows freely within each cluster but struggles to cross the bridge. Disagreements between clusters persist.
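A sketch of that speed claim, using the exact solution of the averaging dynamics dx/dt = −Lx on a small bottlenecked network (topology illustrative): at late times, whatever disagreement survives sits in the Fiedler mode, so it decays at exactly rate λ₂.

```python
import numpy as np

# Two tight clusters joined by one bridge edge - a bottlenecked network.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
n = 6
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

vals, V = np.linalg.eigh(L)              # L = V diag(vals) V^T
lam2 = vals[1]                           # spectral gap

x0 = np.arange(n, dtype=float)           # initial opinions (deterministic)

def opinions(t):
    """Exact solution of the averaging dynamics dx/dt = -L x."""
    return V @ (np.exp(-vals * t) * (V.T @ x0))

def disagreement(t):
    """Distance from consensus at time t."""
    x = opinions(t)
    return np.linalg.norm(x - x.mean())

# One more unit of late time shrinks disagreement by a factor exp(-lam2).
rate = -np.log(disagreement(11.0) / disagreement(10.0))
print(f"lambda_2 = {lam2:.3f}, observed late-time decay rate = {rate:.3f}")
```

The measured decay rate matches λ₂ to several decimal places, which is the precise sense in which "the spectral gap predicts how fast consensus forms."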
The Cheeger inequality proves that a small spectral gap guarantees a bottleneck exists somewhere in the network. The eigenvector v₂ (called the Fiedler vector) tells you exactly where: nodes with positive components fall on one side of the cut, nodes with negative components on the other. The algebra finds the echo chambers automatically.
The higher eigenvalues reveal something different: the network's capacity for complex patterns of belief. A network with only one significant eigenvalue can sustain only binary disagreement—you're either in group A or group B. A network with many well-separated eigenvalues can maintain richer structure: multiple factions, nested coalitions, opinions that don't collapse onto a single axis. The spectral distribution measures what we might call the network's "cognitive complexity."
Network scientists have confirmed these patterns empirically across social media platforms, scientific collaboration networks, and political communication systems.
Spectral Analysis of Democratic Systems
Voting systems are less obviously graphical, but the representation still works. Consider voters (or legislators) as nodes and influence relationships as edges: who persuades whom, who looks to whom for voting cues, who forms coalitions with whom. The aggregation mechanism determines how preference signals propagate through this influence graph.
The spectral gap λ₂ predicts the stability of collective decisions. A large spectral gap means the voting outcome is robust—small changes in individual preferences don't flip the result. A small spectral gap means the system is near a "phase transition" where minor shifts could change everything. There’s evidence that voting rules exhibit phase transitions at critical preference thresholds. Below some critical concentration, manipulation probability approaches 1 exponentially fast; above it, manipulation becomes exponentially unlikely. The spectral gap tells you how close your system sits to that knife's edge.
The picture shows what this looks like concretely. Two partisan blocs—tightly connected internally, weakly connected to each other—with a handful of swing voters bridging the gap. The spectral structure reads this topology directly: small λ₂ because the cross-bloc connections are sparse, and the Fiedler vector v₂ cleanly separating the blocs by sign. The swing voters show up near zero in v₂, mathematically capturing their position between worlds—and their decisive influence on outcomes.
The eigenvectors reveal fault lines, and this has been validated empirically. When network scientists applied spectral analysis to U.S. Congressional voting, they found that modularity-based community detection naturally identifies partisan blocs—and spectral methods for modularity optimization underlie these partitions. No manual labeling needed—the linear algebra finds the political coalitions automatically.
Political polarization has a precise spectral signature. Work on legislative voting networks shows that roll-call voting behavior increasingly corresponds to "interval graphs"—mathematical structures where a single dimension captures almost everything. The researchers found a sharp collapse in dimensionality over recent decades. By the post-104th Senate, Congressional voting had become essentially one-dimensional: a single eigenvector explains nearly all the variation. When your democracy's spectrum looks like that, you're seeing polarization written in the mathematics itself.
The gap between λ₂ and λ₃ measures something real about democratic health. When λ₂ is much smaller than λ₃, the system has one dominant fault line—a clean left-right split with weak cross-partisan ties. That structure is spectrally fragile. When eigenvalues distribute more evenly, the system sustains complex coalition patterns: overlapping groups, cross-cutting alliances, the kind of multidimensional politics that makes manipulation harder and outcomes more stable.
The napkin math: Under social influence, preferences converge via the same Laplacian dynamics: dx/dt = −Lx. But for voting, we care about stability under perturbation. If I shift one voter's preference by ε, how much does the collective outcome move?
Perturbation analysis gives: δ(outcome) ∝ ε/λ₂ under influence-aggregation models where preferences propagate through the network before a decision is reached. The precise relationship depends on the voting rule — unanimity, majority, approval voting, and ranked-choice systems each interact differently with the same influence topology. [2]
The equilibrium influence operator (effectively the Laplacian pseudoinverse L⁺) amplifies inputs along the Fiedler direction by a factor of 1/λ₂, so it approaches a singularity as λ₂ shrinks: small inputs produce large outputs. And crucially, where that perturbation lands matters: shifting a swing voter (near zero in v₂) moves the outcome far more than shifting a committed partisan. The eigenvector tells you who the kingmakers are.
This is the spectral signature of political instability: a polarized electorate with weak cross-partisan ties, a few swing voters holding disproportionate power, the whole system poised at a critical point where small nudges flip outcomes.
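One way to make the ε/λ₂ scaling concrete: for dynamics dx/dt = −Lx + u with a persistent preference input u, the non-consensus part of the steady state is L⁺u, the Laplacian pseudoinverse applied to the input. Weakening the cross-bloc tie shrinks λ₂ and inflates the response. The two-bloc graph and weights below are illustrative.

```python
import numpy as np

def bloc_graph(w):
    """Two partisan blocs (triangles) joined by a cross-bloc tie of weight w."""
    W = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]:
        W[i, j] = W[j, i] = 1.0
    W[2, 3] = W[3, 2] = w
    return W

def outcome_shift(W, u):
    """Equilibrium response |L^+ u| to a persistent preference input u."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.norm(np.linalg.pinv(L) @ u)

# Push bloc A up and bloc B down: an input along the Fiedler direction.
u = np.array([1, 1, 1, -1, -1, -1], dtype=float)

shifts = {w: outcome_shift(bloc_graph(w), u) for w in (1.0, 0.1, 0.01)}
for w, s in shifts.items():
    print(f"cross-bloc weight {w:>4}: outcome shift per unit input = {s:.1f}")
```

As the bridge weakens by two orders of magnitude, the equilibrium response grows by roughly the same factor: the 1/λ₂ amplification made visible.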
Generalising the examples
We set out to check whether spectral properties carry consistent meaning across different coordination mechanisms, and the examples suggest they do. In each case, the spectral gap λ₂ predicted the rate at which local perturbations become global patterns—price convergence speed in markets, information diffusion rate in networks, decision stability in democracies. The Fiedler vector v₂ identified natural fault lines in each system, whether those manifest as trading communities, echo chambers, or partisan blocs. Higher eigenvalues captured each system's capacity for complex, multi-dimensional structure rather than simple binary divisions.
The mathematics remained the same across all three; what changed was interpretation. This is consistent with the hypothesis that markets, networks, and democracies admit a shared analytical framework — though it doesn't yet prove they share deep structure as opposed to admitting similar computational tools for somewhat different reasons. Together, these spectral quantities give us computable, cross-domain metrics for tracking how coordination dynamics shift when AI nodes enter a system — spectral gap for convergence speed, Fiedler vector for fault lines, eigenvalue distribution for structural complexity, all partitionable by agent type.
What follows is the deeper theoretical program that we hope might ground these metrics in a unified framework and extend them beyond what spectral methods alone can reach.
A First Order Model
What follows is the theoretical program we're developing to ground and extend these results. The ideas here are less settled than what came before — we're sharing our current thinking rather than established findings, because we think the direction is promising enough to be worth discussing openly.
The spectral examples are satisfying but they raise an obvious question: why does this work? Observing that the same eigenvalues predict things across markets, networks, and democratic systems is a pattern. It's not yet an explanation. What's the underlying mechanism that makes the Laplacian the right object to study in all three cases?
The following story explains it cleanly, but it is only a first-order approximation. The fuller, contextualised answer to this question is part of a larger research program.
The graph Laplacian L = D − W is, at bottom, a description of how information moves on a graph. Take any quantity distributed across nodes — prices, beliefs, preferences, anything — and ask how it diffuses through the network of connections. The Laplacian is the operator that governs that diffusion. Its eigenvalues tell you the rates. Its eigenvectors tell you the patterns. This is what graph signal processing studies: signals defined on graphs, and the structures that govern how those signals propagate.
Now consider what the agents on this graph are actually doing. A trader is trying to find the right price. A voter is trying to figure out how to vote. A team member is trying to predict what their colleagues will do. Whatever the domain, each agent is maintaining some model of its situation, generating predictions, comparing those predictions against what it observes, and updating to reduce the error.
Each agent is doing local optimization against a landscape defined partly by what every other agent is doing. The active inference literature frames this as free energy minimization; you could also describe it through Bayesian belief updating or gradient-based optimization of prediction error. [3]
If you have a collection of agents on a graph, each doing local optimization — each one hill-climbing on its own prediction-error landscape — what you get at the collective level is energy moving across the system. Agent A updates based on signals from neighbors B and C, which changes what B and C observe, which changes their updates, which propagates further. To a first approximation, the collective dynamics is diffusion. Information and uncertainty flow across edges according to the same mathematics the Laplacian describes.
When models disagree — when agents have misaligned predictions about each other — the system is in a state of high collective free energy. Tension, disagreement, unresolved uncertainty. Message-passing resolves this tension. Agents exchange signals, update their models, and the system relaxes toward coordination. The Laplacian governs the rate and pattern of this relaxation. The spectral gap tells you how fast collective uncertainty resolves. The Fiedler vector tells you where the persistent disagreements will be.
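A sketch of that relaxation story: agents who each take purely local steps toward their neighbors' estimates collectively run gradient descent on the disagreement energy xᵀLx, which falls monotonically. The graph and step size here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# A dense random influence graph over 10 agents (weights illustrative).
W = rng.uniform(0.0, 1.0, (10, 10))
W = np.triu(W, 1)
W = W + W.T                          # symmetric, zero diagonal
L = np.diag(W.sum(axis=1)) - W

def tension(x):
    """Collective disagreement x^T L x - the quantity being relaxed."""
    return float(x @ L @ x)

x = rng.normal(size=10)              # each agent's current local estimate
eta = 0.01                           # update rate, well below 2 / lambda_max

history = [tension(x)]
for _ in range(500):
    # Each agent moves toward the weighted average of its neighbors -
    # a purely local step, yet collectively a gradient step on tension.
    x = x - eta * (L @ x)
    history.append(tension(x))

print(f"tension: {history[0]:.3f} -> {history[-1]:.6f}")
```

No agent ever sees the global energy, yet the global energy never increases; this is the leading-order sense in which local inference becomes collective relaxation.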
This is why the spectral toolkit works across domains. Markets, networks, and democratic systems all involve agents doing local inference on a shared graph, and the Laplacian is the mathematical object that describes how local inference becomes collective dynamics. The spectral results follow from the structure of distributed optimization on graphs — regardless of whether the optimization targets are prices, beliefs, or policy preferences.
Beyond Approximations
This picture is the leading-order term, and it's the term the spectral toolkit captures well. But we should be direct about its limitations.
The phenomena that matter most for AI safety — strategic positioning at bridge nodes, recursive modeling of other agents' models, coalition formation, identity-driven resistance to consensus — live precisely in the regime where the diffusion approximation breaks down. The Laplacian governs what happens when agents are doing something like local averaging. Real-world coordination involves agents who anticipate, who model each other's models, who form alliances and defect from them. The actual collective operator is more complex than the Laplacian, and characterizing what it looks like beyond the diffusion regime is the core theoretical challenge of this research program.
The deeper question — and this is where we think the most promising contribution lies — is whether there's a formal correspondence between free energy minimization at the individual level and the collective dynamics we observe across coordination mechanisms. If different coordination systems are different ways of collectively minimizing prediction error under different constraints, that would explain why the spectral toolkit transfers. We're pursuing this direction but the formal results aren't ready yet. If it works, the functorial mapping wouldn't be between markets and networks directly, but between individual inference and collective coordination, with markets and networks as different instantiations under different constraints.
Our approach builds on several existing lines of work connecting individual and collective inference. Heins et al. have shown how spin-glass systems can be analyzed through collective active inference, providing a concrete implementation of multi-agent free energy minimization on graphs. Hyland et al.'s work on free energy equilibria establishes conditions under which multi-agent systems reach shared minima — the formal analog of coordination. And recent work on partial information decomposition of flocking behavior demonstrates how collective dynamics can be decomposed into synergistic and redundant information contributions across agents. For readers interested in the computational methods, [this lecture on MCMC approaches to collective active inference] provides a setup for running a collective <-> individual gibbs sampling-style loop.
Desiderata for the Theory
The active inference story gives us a candidate explanation for why spectral methods transfer across domains. But an explanation isn't the same as a unifying framework. We think that our theory needs three things: universality across domains, compositionality so that understanding markets and networks separately lets you predict what happens when they operate together, and computational tractability at scale. We want this so that we can actually compose and simulate AI + human systems and see what happens.
These pull against each other, and most existing frameworks achieve one or two. Game theory applies broadly but doesn't compose — there's no natural operation for "combine these two games and predict the joint dynamics." Network science computes efficiently but treats each coordination domain as requiring its own model. Social choice theory has beautiful results about voting but nothing to say about price formation. Taking inspiration from Fong and Spivak's work on applied category theory, the question is whether price dynamics and voting dynamics are structurally isomorphic — the same compositional relationships, even if their elements look completely different.
Where does spectral analysis sit against these?
Tractability is real but not as clean as it first appears. Spectral decomposition runs in O(n log n) for sparse graphs. But the graphs we care about aren't static — they evolve endogenously based on the dynamics we're modeling, and influence in real systems is latent, inferred from observed correlations rather than measured directly. The computational advantage over exhaustive simulation exists, but it's not a free lunch.
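For a sense of the tractability claim: partial eigensolvers on sparse matrices recover the bottom of the spectrum without ever forming a dense matrix. A sketch on a 10,000-node grid graph (a stand-in; real influence graphs would be estimated, not constructed), using SciPy's Lanczos solver in shift-invert mode:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse Laplacian of a 100x100 grid graph (10,000 nodes) - a stand-in
# for a large influence graph where dense eigendecomposition is wasteful.
side = 100
n = side * side
rows, cols = [], []
for r in range(side):
    for c in range(side):
        i = r * side + c
        if c + 1 < side:                     # edge to the right neighbor
            rows += [i, i + 1]; cols += [i + 1, i]
        if r + 1 < side:                     # edge to the neighbor below
            rows += [i, i + side]; cols += [i + side, i]
W = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W

# Shift-invert just below zero targets the smallest eigenvalues without
# trying to factorize the (singular) Laplacian at exactly zero.
vals = np.sort(eigsh(L, k=3, sigma=-1e-6, which='LM')[0])
print("smallest eigenvalues:", np.round(vals, 5))
```

For the grid, λ₂ is known in closed form, 2(1 − cos(π/100)) ≈ 0.001, which makes it a convenient correctness check for the sparse pipeline before pointing it at messier, estimated graphs.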
Universality is more promising. The spectral gap does predict convergence-like behavior across markets, networks, and democratic systems, and the eigenvectors reveal natural clustering regardless of domain. But showing that the same tools apply isn't the same as showing these domains share deep structure. The active inference connection is our best current candidate for why the transfer happens — distributed inference on graphs, governed by the Laplacian at leading order — but it remains a conjecture rather than a result.
Compositionality is where we've barely started, and it's where the real gap lies. Real coordination systems blend multiple mechanisms — a company uses market mechanisms for resource allocation, network relationships for information flow, and democratic processes for major decisions. The spectral analysis applies to each layer, but we don't have composition rules that predict what happens when you stack them. We're exploring this through what we call process-based modelling (beware of technical debt) — a functional programming approach to multi-agent simulation that might offer a computational path to composition. More in a future post.
The deeper open question is whether the cross-domain transfer we've demonstrated reflects structural unity or a sufficiently general hammer. If it's structural unity, it should live in what each mechanism preserves and what it discards — markets might preserve something about efficient information aggregation through exchange, democracies something about equal origination of influence, networks something about positional structure of information flow. These feel like different constraints on the same underlying message-passing process. Formalizing that intuition is the core theoretical challenge ahead — and it connects directly to the practical question of what happens when AI agents, operating under their own constraint regimes, enter these systems.
Applications in Governance
The spectral framework suggests three specific quantities that should be monitored in any coordination system where AI agents participate alongside humans.
Human betweenness across mechanism boundaries. If AI nodes increasingly sit at bridge positions between different coordination mechanisms — between the market layer and the network layer, between information flow and governance — then nominally human decisions increasingly route through AI intermediation. Betweenness centrality, partitioned by agent type across mechanism boundaries, tracks this directly. When human betweenness declines relative to AI betweenness at cross-mechanism bridges, the system is developing AI-mediated chokepoints.
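A minimal sketch of what monitoring this could look like with networkx. The toy topology (two triangles joined only through a single AI "broker" node) and all node names are invented for illustration:

```python
import networkx as nx

# Human market layer (a triangle) and AI network layer (a triangle),
# connected only through a hypothetical AI broker node.
G = nx.Graph()
G.add_edges_from([("h1", "h2"), ("h2", "h3"), ("h1", "h3"),
                  ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),
                  ("h3", "broker"), ("broker", "a1")])
kind = {n: "AI" if n.startswith("a") or n == "broker" else "H" for n in G}

bc = nx.betweenness_centrality(G, normalized=True)
# Which node is the chokepoint, and is it human or AI?
top = max(bc, key=bc.get)
print(top, kind[top])  # every cross-layer shortest path runs through the broker
```

In a real system the graph would be inferred from observed influence rather than declared, but the partition-by-type bookkeeping is the same.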
Spectral gap ratios that keep human timescales relevant. If the AI subgraph's internal spectral gap is much larger than the human subgraph's — meaning AI nodes reach internal consensus far faster than humans can coordinate among themselves — then collective outcomes might get determined by whichever subsystem equilibrates first. The ratio λ₂(AI subgraph) / λ₂(human subgraph) measures this directly. A ratio that grows over time signals that AI coordination speed is outpacing human coordination speed, and collective outcomes will increasingly reflect AI-internal dynamics.
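A sketch of the ratio itself, with invented subgraphs: a complete graph as a stand-in for densely connected AI agents and a path graph for a loosely connected human chain (networkx's `algebraic_connectivity` requires scipy):

```python
import networkx as nx

ai_subgraph = nx.complete_graph(5)  # stand-in: AI agents all talk to each other
human_subgraph = nx.path_graph(5)   # stand-in: humans coordinate along a chain

# lambda2 of each subgraph's Laplacian: a larger gap means faster consensus.
ratio = (nx.algebraic_connectivity(ai_subgraph)
         / nx.algebraic_connectivity(human_subgraph))
# For K5, lambda2 = 5; for P5, lambda2 = 4*sin^2(pi/10) ~ 0.38, so ratio ~ 13
```

Tracked over time on inferred subgraphs, a growing ratio would be the warning signal described above.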
Fiedler partitions that don't collapse onto the H/AI boundary. The Fiedler vector v₂ identifies the system's primary structural fault line. If v₂ increasingly separates human nodes from AI nodes — if the dominant partition of the system is "humans on one side, AI on the other" rather than some functional or topical division — the system has structurally segregated along the type boundary. This is the spectral signature of a coordination system where humans and AI are no longer integrated but operating as separate blocs.
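And a sketch of checking whether the Fiedler partition collapses onto the type boundary, using a barbell graph whose two cliques we arbitrarily label H and AI:

```python
import networkx as nx

# Two 4-cliques joined by a single edge; pretend nodes 0-3 are human, 4-7 AI.
G = nx.barbell_graph(4, 0)
kind = {n: "H" if n < 4 else "AI" for n in G}

v2 = nx.fiedler_vector(G, seed=42)  # requires scipy
side = {n: v2[i] > 0 for i, n in enumerate(G)}

# The partition "collapses" onto the H/AI boundary when each type lands
# entirely on one side of the sign split, and the two types land on
# opposite sides.
h_sides = {side[n] for n in G if kind[n] == "H"}
ai_sides = {side[n] for n in G if kind[n] == "AI"}
collapsed = len(h_sides) == 1 and len(ai_sides) == 1 and h_sides != ai_sides
print(collapsed)
```

In this toy case the primary fault line coincides with the type boundary by construction; in a healthy mixed system one would hope the test comes back false.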
These are necessary conditions for meaningful human agency in mixed coordination systems, but they are not sufficient. A system could satisfy all three criteria while still undermining human agency through subtler mechanisms — frame-setting that shapes which options humans consider, information curation that determines what humans see before they decide, meaning-making that happens within nodes rather than between them. The spectral criteria track the structural skeleton of influence. They cannot detect whether the content flowing through that skeleton is manipulative, reductive, or otherwise corrosive to agency. We flag this not to undermine the criteria but to bound what they can and can't detect — and to motivate the richer framework we're developing, which would need to capture not just information flow but information quality and strategic intent.
Current Directions
At Equilibria Network, we're pursuing several threads:
Mathematical foundations. Formalizing the symmetry conjectures, proving spectral-behavioral correspondences, developing the category-theoretic structure that makes composition rigorous.
Simulation infrastructure. Building tools that let researchers construct collective intelligence systems as graphs with message-passing rules, run simulations, and analyze spectral properties. We’re trying to keep tight feedback loops between theory and practice, so the aim here is practical application.
Multi-agent AI safety. Applying this framework to understand what happens when AI agents participate in human coordination mechanisms.
Conclusion
Wherever you have multiple agents coordinating under uncertainty, you can draw a graph. The nodes are whoever's participating. The edges are the channels through which they influence each other. The message-passing rules encode what kind of coordination mechanism you're using.
Markets, democracies, networks, hierarchies — they're all message-passing on graphs. The differences that matter are what flows along the edges, how nodes update, and what the structure permits. And because graphs give us matrices, we get the full power of linear algebra — efficient computation, proven algorithms, scalable analysis.
What we've shown here is that spectral methods give computable, falsifiable quantities for tracking coordination dynamics — including AI disempowerment — across domains that are usually studied separately.
What we haven't shown, but believe is worth pursuing, is why this works. We suspect the answer involves a formal correspondence between individual free energy minimization and collective coordination dynamics — that the Laplacian captures the leading-order term of distributed inference, and that's why it transfers. If that's right, different coordination mechanisms would be different constraint regimes on the same underlying process, and the deep question becomes: what do different mechanisms preserve? Markets seem to preserve something about efficient information aggregation through exchange. Democracies seem to preserve something about equal origination of influence. Networks seem to preserve something about positional structure of information flow.
We think these spectral correspondences hint at deeper structural connections between coordination mechanisms — connections that might eventually be formalized categorically, the way the Langlands program connected number theory and geometry.
Thanks to the extended Equilibria research network for many conversations that shaped these ideas. This post presents a research direction we're actively developing—feedback, criticism, and collaboration are welcome.
This was co-written with Claude Opus based on work over the last year. I’d put an 85–90% probability on the core claims holding up, as this is something I’ve spent a lot of time on.
If you want to follow this research: Equilibria Newsletter | contact@eq-network.org
(A caveat on this metric: mutual information with outcomes captures causal contribution but not the full picture of agency. A human whose vote is shaped entirely by AI-curated framing has high mutual information with the outcome — their signal mattered — but diminished agency in any meaningful sense. They didn't act from their own understanding; they were a conduit for someone else's influence. A complete account would need to track not just whether human signals determined outcomes, but whether those signals originated from human deliberation. We don't have a clean formalization of this yet, and it's a significant gap. The spectral metrics we propose are necessary conditions for human agency, not sufficient ones — you can't have meaningful agency without structural influence, but structural influence alone doesn't guarantee it.) ↩︎
The result holds most cleanly for linear aggregation on the influence graph; for other mechanisms, the spectral gap still constrains dynamics but the proportionality may take a different form. With a small spectral gap, even tiny preference shifts get amplified dramatically. The spectral-stability connection for democratic systems is the least developed of the three analyses presented here and deserves its own dedicated treatment. The core difficulty is that different voting rules don't just produce different proportionality constants — they can fundamentally change what the spectral properties mean. Under approval voting, the strategy space is different from plurality, which means the way preferences propagate through influence relationships changes qualitatively, not just quantitatively. The voting rule may function as a structural variable that reshapes the effective graph, rather than a parameter applied to a fixed graph. We've bracketed this issue here by focusing on influence-aggregation models where the spectral connection is most transparent, but a full treatment would need to develop mechanism-specific spectral signatures for different voting rules. ↩︎
These frameworks converge under specific conditions — roughly, when agents have well-defined generative models and the environment is stationary enough for variational approximations to track — but they are not identical operations. The structural point that matters here is more basic: each agent is doing some form of local inference, and the collective dynamics emerge from those local processes interacting across the graph. ↩︎