Avi Parrack's Shortform
Avi Parrack · 5h

Suppose Everett is right: no collapse, just branching under decoherence. Here’s a thought experiment.

At time t, Box A contains a rock and Box B contains a human. We open both boxes and let their contents interact freely with the environment: photons scatter, air molecules collide, and so on. By time t′, decoherence has done its work.

Rock in Box A.
A rock is a highly stable, decohered object. Its pointer states (position, bulk properties) are very robust. When photons, air molecules, etc. interact with it, the redundant environmental record overwhelmingly favors a consistent description: "the rock is here, in this shape." Across branches, the rock will look extremely similar at the macroscopic level. Microscopically (atom by atom), there will be tiny differences (different thermal phonons, rare scattering events), but they won’t affect the higher-level description.

Human in Box B.
A human is a complex, dynamically unstable system, with huge numbers of degrees of freedom coupled in chaotic ways (neural firing patterns, biochemistry, small fluctuations magnifying over time). Decoherence still stabilizes macroscopic pointer states (the human is “there”), but internally the branching proliferates much faster. At a superficial level (you open the box and see a person), the worlds look similar. At a fundamental/microscopic level, the worlds rapidly diverge — especially in brain states. A single ion channel opening or not can, milliseconds later, cascade into different neural firing patterns, ultimately leading to different subjective experiences.

Similarity Across Worlds.
Superficially, both are consistent across branches. Fundamentally, the rock’s worlds remain tightly bunched; the human’s fan out chaotically. Hence, the rock is “thicker” across worlds than the human: its high-level processes are less contingent on perturbations in the environment.

In Zurek’s Quantum Darwinism, similarity across worlds is captured by redundancy Rδ: the number of disjoint environmental fragments that each carry nearly complete information (within tolerance δ) about a system’s pointer state. High Rδ (like for a rock’s position) means strong agreement across branches; low Rδ (like for a human’s microstates which unspool into contingent perceptions, decisions, etc.) means rapid divergence.
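
For concreteness, here is a sketch of the usual formalization (my rendering of Zurek's definitions, so treat the details as approximate): write $I(\mathcal{S}:\mathcal{F}_f)$ for the mutual information between the system $\mathcal{S}$ and an environment fragment comprising a fraction $f$ of the environment, and $H_{\mathcal{S}}$ for the system's entropy. Then

$$f_\delta = \min\{\, f : I(\mathcal{S}:\mathcal{F}_f) \ge (1-\delta)\, H_{\mathcal{S}} \,\}, \qquad R_\delta = \frac{1}{f_\delta},$$

so $R_\delta$ counts how many disjoint fragments each hold nearly all the classical information about the pointer state.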

If redundancy measures how robustly some property of a system is copied into the environment, then you can treat it as a measure of “multiversal thickness.” Rocks are thick, humans thin.

For agents, then, to be consistent across worlds is to maximize the redundancy of certain states or policies.

  • At the microscopic (neuronal firing) level, chaos ensures our fine-grained branching diverges rapidly.
  • But at the coarse-grained, behavioral/policy level, adopting stable rules (e.g. “I will act according to principle X regardless of circumstances”) could force convergence at the level of behavior and function like an einselected pointer state: robust, redundantly arrived at, consistent across branches.
  • In other words: a policy followed under many micro-histories becomes thick in the Everettian sense, because the environment (and other observers) can redundantly infer it from multiple branches (the toy sketch after this list illustrates the contrast).
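
To make the contrast concrete, here is a toy sketch (purely illustrative, not a physical model: the logistic map stands in for chaotic micro-level dynamics, and the rule/policy functions are hypothetical names of mine):

```python
import random

def evolve_microstate(x0, steps=50, r=3.9):
    """Iterate a logistic map as a stand-in for chaotic micro-level dynamics."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

def micro_sensitive_rule(x):
    """A rule that keys off fine-grained detail of the final microstate."""
    return "cooperate" if int(x * 1e6) % 2 == 0 else "defect"

def principled_policy(x):
    """A coarse, precommitted rule: the same action under any micro-history."""
    return "cooperate"

# Branches differ only by tiny perturbations of the initial microstate.
base = 0.4321
branches = [evolve_microstate(base + random.uniform(-1e-9, 1e-9)) for _ in range(100)]

micro_actions = {micro_sensitive_rule(x) for x in branches}
policy_actions = {principled_policy(x) for x in branches}

# The micro-sensitive rule typically takes both actions across branches;
# the precommitted policy takes exactly one, i.e. it is "thick" across them.
print("micro-sensitive rule:", micro_actions)
print("precommitted policy:", policy_actions)
```

The micro-sensitive rule usually splits across the perturbed branches, while the precommitted policy yields the same action in every branch; that uniformity is the sense in which a policy can be "thick."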

For artificial or future intelligences, this could be taken further. If an agent values a goal or principle (say, honesty, or maximizing knowledge), it could design itself so that this property is redundantly manifest across its many branching instantiations. This is analogous to engineering for einselection: choosing internal dynamics that make certain states/policies pointer-like, hence stable and redundantly observable across worlds. Philosophically, that makes such values or policies “thick across the multiverse” — they survive and propagate in more branches, becoming almost like invariants.

Possible Implications

  • In terms of personal identity, maybe what we care about is what persists across worlds: not the individual microstate, but the thick, redundant policies or principles you enact. Your “self” is most real where it is most redundantly recorded. Agents might have a sense of personal identity that encompasses their expression in multiple worlds.
  • If thickness correlates with lastingness/prominence across the multiverse, there seems to be a normative pull toward cultivating redundancy in values you endorse.
  • In AI design/safety, future intelligences might explicitly select for high redundancy of aligned values, ensuring they are robust pointer-states rather than fragile micro-fluctuations.

There’s a tension here. Adopting universalized policies across environments increases multiversal thickness, but it also sacrifices one of agency’s strengths: the ability to adapt and switch strategies. An agent that rigidly echoes the same policy everywhere risks brittleness.

Perhaps the sweet spot is to preserve thickness only at the level of an idealized decision theory. This way, flexibility is maintained within branches and you expect to be robust insofar as your decision theory is good, but consistency/predictability holds across them.

On the other hand, pre-commitment is powerful. In some circumstances, an agent that knows it will act consistently across worlds can extract coordination benefits (with other agents, with itself in other branches, or even with its future selves). There may be precommitments that are decision-theoretically suboptimal but nonetheless advantaged. In that sense, selective multiversal thickness could be a way to leverage redundancy for advantage.

Video and transcript of talk on "Can goodness compete?"
Avi Parrack · 17d

I had a comment on Jordan Stone's piece about interstellar travel possibly dooming the long-term future that I think goes here as well: https://forum.effectivealtruism.org/posts/x7YXxDAwqAQJckdkr/interstellar-travel-will-probably-doom-the-long-term-future?commentId=pvJpqdswi9mmbtkz8 (it's a bit fast and loose, apologies).

Essentially, 

a) It seems to me like you might need very competent and powerful space governance even in a 'safe' defense-dominated universe. You may worry about locusts causing S-risk, eating up value that could have been flourishing people, or just mundane societal collapse of some star system in the future, which really could be very bad.

b) I think solving good governance (where "good" here maybe means a long-term stable social contract that is very agreeable across diverse sets of values) seems doable. We're in the tough spot of having seen thousands of years of failed attempts, true, but 

1. that's actually not a long time to try, 

2. we were hobbled by low tech and logistical limitations which we can overcome in the future, 

3. we have made progress, and history teaches us about failure modes

In the future, we'll likely be building vertically into many virtual worlds; the vast majority of beings will probably live there, and they will be smart and long-lived. But they have a shared source of ultimate vulnerability in base reality, so it seems like there will be massive pressure to have tight governance ensuring that these worlds are insanely safe.

I definitely have a huge fear of/aversion to lock-in and tyranny, but I think there are plausible paths to robustly good government (some vague sketch in the comment linked above), and that you plausibly get bad lock-in/tyranny by default if you don't do the hard work of building good government (this depends on unknown equilibria, ideally revealed by LR, but plausibly many unknowns about e.g. great attractors for civilizational equilibria remain). There also seems to be substantial risk from beginning to expand into the stars without knowing the equilibria. In either offense-dominated (OD) or defense-dominated (DD) universes there is something about the probe launch which you may never be able to roll back: in an OD universe it could be existential if spurs of civilizations later come into conflict, while in a DD universe, if spurs of civilizations cause S-risk, you can't fight a just war against them and bring an end to the regime. 
