Suppose Everett is right: no collapse, just branching under decoherence. Here’s a thought experiment.
At time t₀, Box A contains a rock and Box B contains a human. We open both boxes and let their contents interact freely with the environment—photons scatter, air molecules collide, and so on. By a later time t₁, decoherence has done its work.
Rock in Box A.
A rock is a highly stable, decohered object. Its pointer states (position, bulk properties) are very robust. When photons, air molecules, etc. interact with it, the redundant environmental record overwhelmingly favors a consistent description: "the rock is here, in this shape." Across branches, the rock will look extremely similar at the macroscopic level. Microscopically (atom by atom), there will be tiny differences (different thermal phonons, rare scattering events), but they won’t affect the higher-level description.
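The suppression of coherence by environmental records can be sketched numerically. In a standard toy model (the parameters θ and N below are illustrative, not from the post), each scattering event leaves a partial record of the system's state in one environment qubit, and the off-diagonal element of the system's reduced density matrix shrinks by the overlap of the conditional environment states:

```python
import numpy as np

# Toy decoherence model: a system qubit starts in (|0> + |1>)/sqrt(2) and
# interacts with N environment qubits. Each environment qubit partially
# "records" the system state: the conditional environment states |e0>, |e1>
# have overlap <e0|e1> = cos(theta). The off-diagonal ("coherence") term of
# the system's reduced density matrix is suppressed by the product of
# overlaps:  |rho_01| = 0.5 * |cos(theta)|**N.

def coherence(theta: float, n_env: int) -> float:
    """Magnitude of the off-diagonal term after n_env scattering events."""
    return 0.5 * abs(np.cos(theta)) ** n_env

# Even weak per-collision records (small theta) destroy coherence fast:
for n in (0, 10, 100, 1000):
    print(n, coherence(theta=0.1, n_env=n))
```

The exponential in N is why macroscopic objects like the rock decohere essentially instantly: the number of scattering events per second is astronomically large.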
Human in Box B.
A human is a complex, dynamically unstable system, with huge numbers of degrees of freedom coupled in chaotic ways (neural firing patterns, biochemistry, small fluctuations magnifying over time). Decoherence still stabilizes macroscopic pointer states (the human is “there”), but internally the branching proliferates much faster. At a superficial level (you open the box and see a person), the worlds look similar. At a fundamental/microscopic level, the worlds rapidly diverge — especially in brain states. A single ion channel opening or not can, milliseconds later, cascade into different neural firing patterns, ultimately leading to different subjective experiences.
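The "small fluctuations magnifying over time" point is just sensitive dependence on initial conditions. A toy illustration (not a brain model, just a standard chaotic map): two trajectories starting 10⁻¹² apart reach order-one separation within a few dozen steps.

```python
# Chaotic dynamics amplify tiny differences exponentially. The logistic
# map at r = 4 has Lyapunov exponent ln(2), so a perturbation of 1e-12
# grows to macroscopic size in roughly 40 iterations.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1 - x)

x, y = 0.3, 0.3 + 1e-12  # the 1e-12 stands in for one ion channel flipping
for step in range(60):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:
        print(f"trajectories diverged after {step + 1} steps")
        break
```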
Similarity Across Worlds.
Superficially, both are consistent across branches. Fundamentally, the rock’s worlds remain tightly bunched while the human’s fan out chaotically. Hence the rock is “thicker” across worlds than the human: its high-level processes are less contingent on perturbations from the environment.
In Zurek’s Quantum Darwinism, similarity across worlds is captured by redundancy R_δ: the number of disjoint environmental fragments that each carry nearly complete information (within tolerance δ) about a system’s pointer state. High R_δ (as for a rock’s position) means strong agreement across branches; low R_δ (as for a human’s microstates, which unspool into contingent perceptions, decisions, etc.) means rapid divergence.
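Zurek's redundancy can be computed in a minimal sketch. Assumptions (mine, not the post's): the system is one qubit in an equal branching superposition, each environment qubit records it with strength θ (overlap cos θ between conditional record states), and the rest of the environment fully decoheres the branches. Under those assumptions the mutual information between the system and a size-f fragment reduces to the binary entropy of the fragment overlap, and redundancy counts how many fragments each capture (1 − δ) of the system's one bit:

```python
import numpy as np

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_info(theta: float, f: int) -> float:
    """I(S:F) for a fragment of f environment qubits.

    The two conditional fragment states have overlap c = cos(theta)**f,
    and (given full decoherence by the remaining environment)
    I(S:F) = H(F) = h2((1 + c) / 2).
    """
    c = np.cos(theta) ** f
    return h2((1 + c) / 2)

def redundancy(theta: float, n_env: int, delta: float = 0.1) -> int:
    """R_delta: number of disjoint fragments each holding (1-delta)*H(S)."""
    target = (1 - delta) * 1.0  # H(S) = 1 bit for an equal superposition
    for f in range(1, n_env + 1):
        if mutual_info(theta, f) >= target:
            return n_env // f
    return 0
```

Perfect records (θ = π/2) give R_δ equal to the number of environment qubits, since a single qubit suffices; weak records force larger fragments and drive R_δ down. In this picture "thick" systems are those whose pointer states are imprinted strongly enough that R_δ stays large.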
If redundancy measures how robustly some property of a system is copied into the environment, then you can treat it as a measure of “multiversal thickness.” Rocks are thick, humans thin.
For agents, then, to be consistent across worlds is to maximize the redundancy of certain states or policies.
For artificial or future intelligences, this could be taken further. If an agent values a goal or principle (say, honesty, or maximizing knowledge), it could design itself so that this property is redundantly manifest across its many branching instantiations. This is analogous to engineering for einselection: choosing internal dynamics that make certain states/policies pointer-like, hence stable and redundantly observable across worlds. Philosophically, that makes such values or policies “thick across the multiverse” — they survive and propagate in more branches, becoming almost like invariants.
Possible Implications
There’s a tension here. Adopting universalized policies across environments increases multiversal thickness, but it also sacrifices one of agency’s strengths: the ability to adapt and switch strategies. An agent that rigidly echoes the same policy everywhere risks brittleness.
Perhaps the sweet spot is to preserve thickness only at the level of an idealized decision theory. This way, flexibility is maintained within branches (you expect to be robust insofar as your decision theory is good) while consistency and predictability hold across them.
On the other hand, pre-commitment is powerful. In some circumstances, an agent that knows it will act consistently across worlds can extract coordination benefits (with other agents, with itself in other branches, or even with its future selves). There may be precommitments that are decision-theoretically suboptimal yet nonetheless advantageous. In that sense, selective multiversal thickness could be a way to leverage redundancy for advantage.