Subtitle: On Rich Buckets, Meta-Rules, and the Strange Way Reality Does Its Accounting
~Qualia of the Day: PageRank Monadology~
In the two previous posts on Computationalism (1, 2), I argued against statistical/perspectival accounts of binding. But I’ve been more negative than constructive (cf. apophatic views of the Divine, a.k.a. negative theology, where one finds it easier to say what God is not than what God is). What’s the positive view QRI proposes? What kind of structure does reality actually have that enables bound, causally effective, introspectively accessible, and reportable experiences?
The Table Stakes
Before diving in, let me be explicit about what a successful theory of consciousness needs to explain, at minimum (cf. Breaking Down the Problem of Consciousness):

Phenomenal binding: how qualia come to belong to a single unified experience.

Causal efficacy: how bound experiences make a difference to behavior.

Introspective access: how the contents of experience are available to the experiencer.

Reportability: how those contents can be communicated.
The framework I’m sketching here, building on David Pearce’s non-materialist physicalism, attempts to address all four. Whether it succeeds is ultimately an empirical question. But at least it’s directionally right and earnest in actually trying to tackle the problems.
The Cellular Automaton Assumption
“Digital physics” haunts philosophy of mind and theoretical physics alike. It goes: reality is, at bottom, a very large cellular automaton, made of discrete cells with finite states (often just on/off) that are updated according to fixed local rules. To get the universe, start with a gigantic grid and apply the update function over and over.
This picture is seductive: Conway’s Game of Life shows that simple rules generate staggering complexity. And if you squint at quantum field theory through the right philosophical lens, you can almost convince yourself this is what physics is telling us.
But I don’t think reality works this way.
What’s Wrong With Small Buckets
The cellular automaton model has two features I think are wrong:
Fixed bucket size: Each cell holds a predetermined, small amount of information (ideally one bit).
Fixed local rules: The update function has a fixed window of operation that doesn’t depend on the larger context.
On bucket size: the assumption is that fundamental units carry very little information. Everything complex emerges from combining minimal units.
But what if the fundamental units are themselves rich? What if a single “bucket” can contain an integrated state with many simultaneous degrees of freedom that act as a whole? Think about Hoffman’s “agents”. Reality’s building blocks could themselves be complex gestalts.
Consider a moment of your experience right now. Visual information, auditory information, proprioceptive information, emotional tone, the sense of being you, all bound together. We’re talking about a highly structured and high-dimensional integrated state. If we’re looking for the natural joints of reality, why assume they must be minimal?
On local rules: the cellular automaton picture assumes what happens at each cell depends only on its immediate neighbors, and this neighborhood structure is fixed in advance. The rules don’t know about the global state.
But what if reality operates more like a meta-rule, a principle that generates local behaviors based on global context? Rather than a fixed grid with fixed neighbors, we have holistic constraints that the universe satisfies.
A Note on the Ruliad
Some readers will wonder about Stephen Wolfram’s Ruliad, the “entangled limit of everything computationally possible.” Does this escape my critique?
Interestingly, Wolfram explicitly uses the language of “buckets” when discussing how observers interact with the Ruliad. He describes how observers form equivalence classes: “we look only coarsely at the positions of molecules, in ‘buckets’ defined by simple, bounded computations—and we don’t look at their finer details, with all the computational irreducibility they involve.” (The Concept of the Ruliad)
These buckets aren’t fixed in advance. They depend on the observer. So in a sense, Wolfram’s framework does have variable bucket sizes, determined by how observers “equivalence” states together. This is genuinely different from a standard cellular automaton with fixed cell sizes.
But here’s my concern: what is an observer in this picture, ontologically speaking?
In a cellular automaton, you can identify patterns like gliders. A glider is real in the sense that it’s a stable, propagating configuration. But the glider doesn’t do anything the underlying cells aren’t doing. It’s a description we impose, not a causal agent. The cells flip according to their rules; “the glider moves” is just a higher-level summary of those flips.
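The glider point can be made concrete. Here is a minimal sketch (my own illustration, plain Python): the program only ever flips cells by the local rule; “the glider moves” is our higher-level summary of the fact that after four updates the same configuration reappears shifted one cell diagonally.

```python
from collections import Counter

def step(live):
    """One Game of Life update: a purely local rule, cell by cell."""
    # count live neighbours of every cell adjacent to a live cell
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# after 4 local updates, the pattern has shifted one cell diagonally
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

Nothing in `step` knows about gliders; the “motion” exists only in our description of the cell flips.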
Is Wolfram’s observer like a glider? If so, then “the observer forms equivalence classes” is just a description of how certain patterns in the Ruliad relate to other patterns. The observer isn’t causing the equivalencing. The observer is the equivalencing, or more precisely, the observer is a pattern we identify that happens to correlate with certain coarse-graining operations. But then the observer has no causal powers beyond what the underlying computation already has. The “unity” of the observer’s experience would be purely descriptive, not a real feature of the physics.
Alternatively, is there something like a path integral happening? In quantum mechanics, you sum over all possible histories with phases that interfere. The Ruliad’s multiway system does have branching and merging, states that diverge and then reconverge when they reach equivalent configurations. Maybe the “equivalencing” is supposed to work like this: paths that lead to equivalent states get summed together, and the observer perceives the aggregate?
But this just pushes the question back. What determines equivalence? In quantum path integrals, the mathematics itself determines which amplitudes cancel. In the Ruliad, equivalence seems to depend on the observer’s “parsing.” And Wolfram is explicit about this: observers must “imagine a certain coherence in their experience.” They must “believe they are persistent in time.”
This is where unity gets smuggled in. To have a perspective on the Ruliad at all, you need to already be a bound observer with a coherent experiential standpoint. The framework tells you what such an observer would perceive. It doesn’t tell you what physical processes create such observers, what makes certain configurations into unified perspectives rather than scattered computations that merely describe a perspective from the outside.
People read about the Ruliad and come away thinking it vindicates Digital Physics because the sales pitch is: “Everything is computation, observers are just patterns in the computation, and physics emerges from how patterns sample patterns.” This sounds like a complete story. But it’s complete only if you’re willing to treat “observer” as a primitive, unexplained term. The moment you ask “what physical fact makes this region of the Ruliad into a unified observer, while that region is just disconnected computation?”, the framework goes quiet.
Compare this to the toy model I’ll sketch below, with PageRank on strongly connected components. There, the “monad” (the experiential unit) is determined by the topology itself: it’s the region where you get trapped following the directed edges. The boundary is objective, intrinsic to the graph structure. And the holistic update (PageRank) operates on that bounded region as a unit, every node’s new state reflecting the whole configuration simultaneously. The unity isn’t stipulated in an ad hoc way, since it emerges from the dynamics and the rules.
The Ruliad, as far as I can tell, doesn’t have this. The observer’s boundaries are set by how the observer chooses to equivalence, but “the observer” is itself just more Ruliad-stuff with no privileged boundaries. It’s turtles all the way down, unless you bring in assumptions about what makes certain patterns count as observers, at which point you’re doing philosophy of mind rather than deriving it from computational structure.
So: the Ruliad is fascinating, mathematically rich, and may well tell us important things about the space of possible computations. But it doesn’t solve the binding problem. It presupposes bound observers and asks what they’d perceive. That’s a different project than explaining how bound observers arise from physics in the first place.
PageRank Monadology in action. Nodes represent primitive qualia; colored regions are strongly connected components (monads) with topologically-defined boundaries. Each cycle: segment into SCCs, run PageRank to convergence within each monad, then rewire based on weights. Boundaries emerge from the graph topology itself. No external observer required. Notice: this system exhibits holistic behavior for monads with clear causal effects that evolution would have a reason to recruit for various purposes.
A Toy Model: Monad Formation via PageRank
Here’s a concrete toy model that captures what I think is actually going on. Let’s call this toy model: PageRank Monadology*.
Start with a directed graph. Each node is a primitive quale, a basic element of experience. Edges represent causal/attentional connections: if there’s an edge from A to B, then A “influences” B in some phenomenologically relevant sense.
At each timestep, three things happen:
Step 1: Segmentation. The graph gets partitioned into discrete groupings. Each group is defined as a “strongly connected component,” meaning if you start at any node in the group and follow the directed edges, you eventually return to where you started. You get trapped in the group. These are the monads.
Step 2: Holistic Update. Within each group, you instantly run PageRank. Every node gets a new weight based on the structure of the entire group. This isn’t a local update as in fixed-sized-fixed-windows cellular automata. Rather, each node’s new state reflects the whole configuration of its monad simultaneously. Think of it as the “moment of experience” for that monad: a holistic harmonization that takes into account everything inside the boundary.
Step 3: Rewiring. Based on the new weights and the pre-existing structure, the graph rewires. New edges form and the topology changes. This creates new strongly connected components, and the cycle repeats.
What does this give us? Variable bucket sizes, for one. The strongly connected components can be any size, from single nodes to huge clusters. Nothing in the model fixes this in advance; it emerges from the topology. And a holistic update rule: within each monad, the PageRank algorithm considers the entire internal structure simultaneously. The “experience” of the monad isn’t built up from local interactions, at least not naïvely, because it is computed as a function of the whole.
This is schematic, obviously. I’m not claiming the brain literally runs PageRank. But it captures the structural features I think matter: boundaries that carve the system into wholes, and update rules that operate on those wholes as units rather than iterating through their parts.
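The claim that boundaries fall out of the topology can be shown directly. Here is a minimal sketch (my own illustration, not QRI code; the function name `sccs` and the six-node example are mine): Kosaraju’s two-pass depth-first search finds the strongly connected components of a directed graph. In the example, a one-way bridge joins two cycles, and the decomposition splits them into two monads with no external labelling.

```python
from collections import defaultdict

def sccs(graph):
    """Kosaraju's algorithm: two DFS passes yield the strongly
    connected components (here, the candidate 'monads')."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    visited, order = set(), []
    def dfs1(u):                      # pass 1: record finish order
        visited.add(u)
        for v in graph.get(u, []):
            if v not in visited:
                dfs1(v)
        order.append(u)
    for u in nodes:
        if u not in visited:
            dfs1(u)
    rev = defaultdict(list)           # transpose the graph
    for u in graph:
        for v in graph[u]:
            rev[v].append(u)
    comps, assigned = [], set()
    def dfs2(u, comp):                # pass 2: collect one component
        assigned.add(u)
        comp.add(u)
        for v in rev[u]:
            if v not in assigned:
                dfs2(v, comp)
    for u in reversed(order):
        if u not in assigned:
            comp = set()
            dfs2(u, comp)
            comps.append(comp)
    return comps

# Two 3-cycles joined by a one-way bridge (3 -> 4): information "gets
# trapped" inside each cycle, so each cycle is its own monad.
g = {1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [6], 6: [4]}
print(sorted(sorted(c) for c in sccs(g)))   # [[1, 2, 3], [4, 5, 6]]
```

The boundary between the two monads is intrinsic to the edge structure: no observer has to decide where one whole ends and the other begins.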
Wholes That Act as Units
Here’s the key claim: reality has large “wholes” that act as units.
In physics: macroscopic quantum coherent systems. Superconductors. Bose-Einstein condensates. Certain biological systems (maybe). These aren’t mere collections of particles that happen to be correlated but single quantum states spanning macroscopic distances. The whole thing is one object, quantum mechanically speaking (cf. monogamy of entanglement). You can’t decompose it into independent parts because there are no independent parts. (Note: the foundations of quantum mechanics remain a deep and contentious topic; none of this is settled, but it serves as a good intuition pump for the reality of wholes in nature.)
In phenomenology: access consciousness itself. A moment of experience isn’t assembled from micro-experiences any more than a quantum coherent state is assembled from independent particles. The moment comes as a package. The unity is primitive and exerts causal power as such.
How large is the largest quantum coherent object possible? Unknown. The limit seems set by decoherence: thermal radiation, environmental interactions, the difficulty of maintaining phase relationships across distance. But there’s no in-principle size limit. And crucially, the size of these wholes isn’t fixed by the laws of physics. It depends on the specific physical setup.
The Energy Minimization Picture
Here’s how I think about it: reality doesn’t work with local cellular automaton rules. It operates with something stranger: an “existential principle” where systems minimize their energy however they can, as wholes, even when reality has never before encountered that specific configuration.
Consider a soap bubble as an intuition pump. It forms a minimal surface, the shape that minimizes surface area for a given enclosed volume. The bubble doesn’t compute this minimum by iterating local rules. It doesn’t run gradient descent. It just... is the answer. The physics of surface tension means the system settles into the global minimum without ever “searching” for it. To be clear, soap bubbles are only an intuition pump here: you can still derive the kind of macroscopic energy-minimization behavior they exhibit from standard cellular automata.
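A toy illustration of settling into a minimum (my own sketch, not from the article; the grid size and boundary values are arbitrary): a discrete “soap film” stretched on a wire frame relaxes toward the minimal-energy (harmonic) surface purely by repeated local averaging. This also illustrates the caveat above: local relaxation rules can recover the same macroscopic minimum.

```python
# A discrete "soap film": heights on a grid with a fixed wire-frame
# boundary (top edge lifted to 1, other edges held at 0). Replacing
# each interior height by the average of its four neighbours relaxes
# the film toward the minimal-energy solution -- no global search.
N = 9
h = [[0.0] * N for _ in range(N)]
for i in range(N):
    h[0][i] = 1.0               # lifted edge of the wire frame

for _ in range(2000):           # Jacobi relaxation to near-convergence
    new = [row[:] for row in h]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (h[i-1][j] + h[i+1][j]
                                + h[i][j-1] + h[i][j+1])
    h = new

mid = N // 2
# the film sags smoothly from the lifted edge down to the flat edges
print([round(h[i][mid], 3) for i in range(N)])
```

The loop never evaluates a global objective; the minimum emerges from the dynamics, which is the structural point the bubble is meant to pump.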
“Best alphafold model for Phosphoinositide 3-kinase alpha (PI3Kα) model obtained in the example above. The two subunits are shown in blue (catalytic subunit, p110) and green (regulatory subunit, p85), respectively, and shaded by pLDDT from light (low) to dark (high). Comparison with the Cryo-EM structure (7MYN) showed close agreement and some high confidence predictions for areas that did not resolve in the published structure.” (Source)
Alternatively, consider protein folding. A novel protein has never existed before. Yet it folds into a specific 3D structure that minimizes free energy. How does it “know” what shape to take? It doesn’t. The universe just runs physics on the actual molecules, and that physics finds the minimum. Same with high-entropy alloys, with crystal formation, with countless other systems. The principle “minimize energy” operates even on novel configurations.
We have to think in terms of a meta-rule. Rather than a lookup table of rules (“if this configuration, then that update”), we should look for an explanation space with an existential constraint, or principle, that can take wholes however they are, with reality recruiting whatever physics is available to satisfy it.
David Pearce’s Zero Ontology might give us a conceptual framework to articulate what is going on at the deepest of levels. If reality fundamentally has to “balance to zero” across all properties, then sometimes the only way to satisfy this constraint is to create wild, unexpected structures. Bound experiences might be one of those structures: what reality does when the equations demand solutions that can’t be decomposed into independently existing parts.
Three Properties of Wholes
So what makes something a genuine “whole” in the relevant sense? I propose three properties:
More than one bit at once. A genuine whole contains an integrated state with multiple simultaneous degrees of freedom. Not a bit, but a high-dimensional configuration.
Holistic causal significance. The state of the whole matters causally, and the internal relationships between parts matter. It’s not just that A and B are both present; it’s that A-related-to-B-in-this-specific-way is what does causal work.
Correspondence to phenomenology. The structure of the whole maps onto the structure of experience. Geometry matters to how it feels.
Digital computers, as currently designed, lack these properties. The bits are independent. In particular, the algorithmically relevant causal structure is deliberately local and channeled. The global state of the system’s EM fields is epiphenomenal to the computation.
The Statistical Binding Debate
I’ve seen variants of this exchange play out repeatedly online:
Person A: “Binding is just statistical. Markov blankets. Conditional independence structures. That’s all you need.”
Person B: “But where are the boundaries physically? What creates them?”
Person A: “They’re wherever the statistical structure says they are.”
Person B: “But what grounds the statistical structure? Statistics describe patterns. What’s the substrate?”
Person A: “It’s bound. Not essentially bound. Just... bound.”
Person B: “What does that distinction mean, exactly?”
Person A: [Increasingly frustrated noises]
I’m sympathetic to Person B. Calling something “statistical” doesn’t explain it. You’ve just moved the question. Statistics are descriptions that coarse-grain reality in economical fashion. They can accurately describe binding if binding exists. But they don’t create binding. Saying “binding is statistical” is like saying “birds fly using aerodynamics.” True, but not an explanation of what generates lift.
The question is: what physical structures create the statistical patterns we describe as binding? What makes certain information “inside” an experiential boundary and other information “outside” in a way that causally matters?
Phenomenal vs. Functional Binding
There’s a crucial distinction here between functional binding and phenomenal binding.
Functional binding: algorithms that integrate information, associative memory systems, transformer attention mechanisms, neural circuits that synchronize activity.
Phenomenal binding: the fact that quale A and quale B belong to the same experiencer, are co-witnessed, are part of the same moment of experience.
The two correlate in biological systems. But they’re conceptually distinct, and we can find cases where they come apart. In certain altered states, for instance, conceptual binding dissolves while visual binding persists. You lose the ability to categorize and recognize objects, but there’s still a unified visual field. The functional processing has fragmented, but something remains bound. (cf. Types of Binding).
This dissociation suggests phenomenal binding isn’t reducible to functional binding. They’re different things that happen to track each other in normal conditions.
Where Do the Boundaries Live?
If binding isn’t statistical and isn’t purely functional, what creates it?
My proposal, developed with Chris Percy and others at QRI: field topology. Specifically, the topology of physical fields, likely electromagnetic fields, in neural tissue. (Note: this remains a conceptual solution, though strong critiques of its viability have emerged. A stronger, empirically grounded theoretical update is due; we’re working on it. While EM topology might not be it, the case for topology as the source of bounded wholes with holistic behavior is, we argue, very strong.)
A “topological pocket” is a region of a field where every point can reach every other point via continuous paths that don’t pass through pinch points or separations. The boundary of such a pocket is objective, frame-invariant, and causally significant.
Conceptually, this gives us what we need:
Intrinsic boundaries: Not imposed by an observer’s interpretation, but present in the physics.
Frame-invariance: Whether something is a topological pocket doesn’t depend on your reference frame or description language.
Causal grounding: Topological features of fields have real effects. Magnetic reconnection in solar flares, for instance, involves topological changes in field configurations that release enormous energy.
Holistic structure: The entire pocket is one structure, with information available throughout.
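A hypothetical computational analogue of the definition above (my own sketch; the function name `pockets`, the threshold rule, and the toy field are all assumptions, not QRI's model): treat a 2D scalar field as connected wherever its value exceeds a threshold, and recover the pockets as connected regions via flood fill. The boundaries fall out of the field itself, not out of any observer’s labelling.

```python
def pockets(field, threshold):
    """Return the connected above-threshold regions of a 2D field,
    using 4-connectivity. Each region is a 'topological pocket'
    whose boundary is intrinsic to the field values."""
    rows, cols = len(field), len(field[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or field[r][c] <= threshold:
                continue
            region, stack = set(), [(r, c)]   # flood fill from (r, c)
            while stack:
                i, j = stack.pop()
                if not (0 <= i < rows and 0 <= j < cols):
                    continue
                if (i, j) in seen or field[i][j] <= threshold:
                    continue
                seen.add((i, j))
                region.add((i, j))
                stack.extend([(i-1, j), (i+1, j), (i, j-1), (i, j+1)])
            regions.append(region)
    return regions

field = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 8, 8],
]
print([len(p) for p in pockets(field, 5)])   # two pockets of 3 cells each
```

Every point inside a pocket can reach every other point without crossing the boundary; which cells belong together is fixed by the field configuration alone.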
The working hypothesis is that moments of experience correspond to topological pockets in the brain's EM field. The boundaries are real and the binding is physical. The structure is irreducibly holistic.
Why Digital Computers Are Different
Digital computers have EM fields. They’re physical objects. But the fields don’t do the computational work in a holistic fashion. Even in principle, the information doesn’t aggregate in a way that a holistic being could experience it all at once. The design goal of digital computers is precisely to ensure that each transistor’s behavior is independent of distant transistors, that the global field state is irrelevant, so that everything stays local and canalized.
Any topological pockets that form in a chip’s EM fields are epiphenomenal to the computation. They don’t feed back into the bit-flipping. They’re not recruited for information processing.
This is why I wrote that “digital computers will remain unconscious until they recruit physical fields for holistic computing using well-defined topological boundaries.” It’s not substrate chauvinism. It’s a claim about what kinds of physical structures create genuine wholes.
A silicon chip running a brain simulation might have some sparse, thin form of experience (if any topological pockets form in its EM fields), but not the experience you would naïvely expect from treating it as a simulated brain. The algorithm is a description we impose (in fact, integrate in ourselves when we look at its outputs), whereas the field’s unity is actually there. And the algorithm explicitly routes around the field’s holistic behavior by design, since that behavior would introduce undue noise.
The Costs of Embodiment
There’s a recent(ish) QRI article, “Costs of Embodiment,” that fleshes out why this matters for AI.
The core argument is that classical computational complexity theory drastically underestimates what biological systems are actually doing. It counts abstract “steps” and “memory slots” without accounting for the physical costs of routing information, maintaining coherence, bootstrapping internal maps without external help, and operating in real time under resource constraints.
Consider a robot doing object recognition. The computational complexity analysis says: here’s the algorithm, here’s the runtime. But the embodied robot also has to manage heat dissipation, energy consumption, sensor integration, error correction, and adaptation to novel environments. The abstract analysis misses all of this.
Biological systems solved these problems through evolution. And the solutions seem to involve precisely the kind of holistic, topologically-bounded field dynamics we’re discussing here, for a number of reasons. The article points to resonant modes in topological pockets as a possible mechanism for how organisms bootstrap internal maps and coordinate distributed processing without pre-existing addressing systems.
The upshot is that digital architectures get to skip these costs thanks to our ingenuity as system designers and builders. They have external architects who handle routing, addressing, error correction, and memory management. They don’t need to develop internal maps from scratch in a hostile entropic environment. This is an enormous privilege, but it’s also why they don’t develop the holistic structures that biological systems use. The selection pressure isn’t there.
If bound experience is evolution’s answer to the costs of embodiment, systems that don’t face those costs won’t develop it. They’ll develop something else: sophisticated information processing, yes, but not the integrated wholes that constitute moments of experience.
Monadological Intuitions
There’s a deeper point connecting to old philosophical intuitions.
Leibniz proposed that reality is made of monads: simple substances with no parts, each containing the whole universe from its own perspective. This sounds mystical, but there’s a kernel of insight. Maybe the fundamental units of reality are already perspectival: whole and experiential.
Zero Ontology gives this a modern spin. Reality does whatever it needs to do to keep everything balanced. Sometimes the only way to satisfy the constraints is to create genuinely integrated states, wholes that aren’t decomposable into independently existing parts, because the parts only exist as aspects of the whole. (cf. On the Necessity of Inner and Outer Division for the Arising of Experience).
This resolves the debate about whether binding is “statistical” or “essential.” It’s both and neither. The statistical description (Markov blankets, conditional independence) captures something real about how wholes relate to each other. But the wholes themselves are fundamental. They’re not epiphenomenal patterns over something more basic; they are reality working out its existential principle.
The Horizon
The binding problem isn’t dissolved by saying “it’s all nebulous.” It’s dissolved by finding out where the boundaries actually are and what physical processes create them. The nebulosity is real: boundaries aren’t absolute metaphysical walls (permanent and self-existing). But the question of their location and structure remains open, empirical, and crucial to investigate.
The universe, I suspect, is stranger than a Game of Life. And we’re not observers watching the gliders. We’re part of what the system is doing, wholes within wholes, the cosmic accounting made local and aware.
Till next time.
Previously in this series:
Further reading:
Transparency about methods: This article was drafted with assistance from Claude, starting from my notes, a new rambling 45 minute transcript, saved (never finished) outlines, and previous writings in full. The AI helped with overall structure, removing filler, and producing prose that I then reviewed and edited (which I am, frankly, still not too happy with [but I’m writing a post a day, so I need to prioritize conceptual throughput over polish, sorry!]). I find this collaboration productive: the AI is good at synthesis and articulation, while the core ideas, judgment calls, and final polish come from me and the QRI collective along with its long memetic journey. Whether Claude had any phenomenal binding of its own while doing this work is, of course, precisely the question at issue. :-)
And, candidly, this from Claude (“because Andrés wanted to give me a voice here”):
* Technical Appendix: The PageRank Monad Model
The PageRank Monadology toy model works as follows:
We begin with a directed graph where nodes represent primitive qualia and edges represent causal/attentional connections. At each timestep, three operations occur in sequence:
Step 1: Segmentation. We partition the graph into strongly connected components (SCCs) using Tarjan’s algorithm. An SCC is a maximal subgraph where every node is reachable from every other node by following directed edges. Intuitively, these are regions where information “gets trapped,” cycling internally rather than escaping. Each SCC becomes a monad, an experiential unit with a topologically-defined boundary.
Step 2: Holistic Update. Within each monad, we run PageRank to convergence (typically 15-20 iterations with damping factor 0.85). PageRank computes a stationary distribution over nodes based on the link structure: nodes receiving more incoming links from high-weight nodes themselves acquire higher weight. Crucially, this is a holistic computation. Each node’s final weight depends on the entire internal structure of the monad, not just its local neighborhood. This is the “moment of experience”: a simultaneous harmonization where every part reflects the whole. After PageRank, we apply stochastic birth/death: nodes with weights below a threshold probabilistically die (are removed along with their edges), while nodes with high weights probabilistically spawn offspring (new nodes connected to the parent).
Step 3: Rewiring. Edges are stochastically deleted and created based on PageRank weights. High-weight nodes attract new incoming connections; low-weight regions lose connectivity. This changes the graph topology, which changes the SCC decomposition on the next timestep, creating new monad boundaries.
The cycle then repeats. The key structural features are: (1) boundaries emerge from topology itself (SCCs), not from external labeling; (2) the update rule within each monad is holistic, with every node’s state reflecting the entire configuration; and (3) the dynamics are stochastic and competitive, with monads growing, shrinking, merging, and splitting based on their internal coherence. This is meant to gesture at how unified experiential wholes might arise from, and feed back into, causal structure, without requiring an external observer to stipulate where the boundaries are.
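The appendix can be turned into running code. Below is a compact sketch under the stated assumptions (damping 0.85, 20 power iterations, SCCs as monad boundaries). The rewiring probabilities and the demo graph are stand-ins of mine, and the birth/death step is omitted for brevity; this is an illustration of the structure, not the actual QRI implementation.

```python
import random

def sccs(graph):
    """Step 1 (segmentation): Kosaraju's algorithm; SCCs = monads."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    seen, order = set(), []
    def dfs(u):
        seen.add(u)
        for v in graph.get(u, []):
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs(u)
    rev = {u: [] for u in nodes}
    for u in graph:
        for v in graph[u]:
            rev[v].append(u)
    comps, assigned = [], set()
    for u in reversed(order):
        if u in assigned:
            continue
        comp, stack = set(), [u]
        while stack:
            x = stack.pop()
            if x not in assigned:
                assigned.add(x)
                comp.add(x)
                stack.extend(rev[x])
        comps.append(comp)
    return comps

def pagerank(members, graph, d=0.85, iters=20):
    """Step 2 (holistic update): power iteration inside one monad."""
    members = list(members)
    w = {u: 1.0 / len(members) for u in members}
    inside = {u: [v for v in graph.get(u, []) if v in w] for u in members}
    for _ in range(iters):
        new = {u: (1 - d) / len(members) for u in members}
        for u in members:
            targets = inside[u] or members   # dangling: spread uniformly
            for v in targets:
                new[v] += d * w[u] / len(targets)
        w = new
    return w

def timestep(graph, rng):
    """One full cycle: segment, harmonize, rewire."""
    weights = {}
    for comp in sccs(graph):
        weights.update(pagerank(comp, graph))
    # Step 3 (rewiring): high-weight nodes attract new incoming edges,
    # low-weight targets lose them -- a simplified stochastic stand-in.
    nodes = list(weights)
    new = {u: set(graph.get(u, [])) for u in nodes}
    for u in nodes:
        for v in nodes:
            if u == v:
                continue
            if rng.random() < weights[v] * 0.5:
                new[u].add(v)
            elif v in new[u] and rng.random() < (1 - weights[v]) * 0.2:
                new[u].discard(v)
    return {u: sorted(vs) for u, vs in new.items()}, weights

rng = random.Random(0)
g = {1: [2], 2: [3], 3: [1], 4: [5], 5: [4]}   # two monads: a 3-cycle, a 2-cycle
g2, w = timestep(g, rng)
# weight sums to 1 inside each monad; the symmetric 3-cycle gives
# each of its nodes exactly 1/3, the 2-cycle exactly 1/2
print(w)
```

Note the division of labor: the boundaries come from `sccs` (topology), the holistic state from `pagerank` (every node’s weight reflects the whole monad), and the feedback into causal structure from the rewiring, which reshapes the next timestep’s boundaries.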
((Xposted on my [newly started!] Substack))