From one of justinpombrio’s comments on Jessica Taylor’s review of the CTMU

I was hoping people other than Jessica would share some specific curated insights they got [from the CTMU]. Syndiffeonesis is in fact a good insight.

The reply I'd drafted to this comment ended up ballooning into a whole LessWrong post. Here it is! 

It used to seem crazy to me that the intentions and desires of conscious observers like us could influence quantum outcomes (/ which Everett branches we find ourselves in / "wave function collapses"), or that consciousness had anything to do with quantum mechanics in a way that wasn’t explained away by decoherence. The CTMU claims this happens, but I think I’ve figured out a reasonable possible interpretation in terms of anthropics. (Note: I am not an expert in either quantum mechanics or anthropics. I’m pretty sure the big picture here is correct, but I won’t be surprised if I get some details wrong.)

[EDIT: Based on the feedback I have received thus far, it seems worthwhile to clarify up-front that I am not trying to make a positive case for consciousness affecting quantum outcomes. I am trying to make a case that the arguments against consciousness affecting quantum outcomes rest on shakier assumptions than I think people often give them credit for, and that there may be room for consciousness to affect quantum outcomes if we drop these assumptions. If I had to bet based on my current inside view, I would bet against consciousness affecting quantum outcomes. My actual position is much weaker than what I've since realized the title of this post might connote — apologies for the confusion.]

My previous picture of quantum mechanics was that we’re in a "probabilistic clockwork universe" – very roughly, it’s got a set of initial conditions (the initial quantum state of the universe) and deterministic laws (guiding the evolution of the wavefunction) that are hooked up to a random number generator (specifying quantum outcomes), and the initial conditions + the deterministic laws + the outcomes of the RNG fully constrain what we observe. (Under many-worlds, the RNG tells us which Everett branches we find ourselves in.) 
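As a toy sketch of this picture (my own illustration, not anything from the post’s sources; `laws` is a stand-in for the deterministic physics):

```python
import random

def clockwork_universe(initial_state, laws, n_steps, seed=42):
    """Toy "probabilistic clockwork" picture: deterministic laws plus an RNG.
    Everything observed is fixed by initial_state + laws + the RNG's output."""
    rng = random.Random(seed)      # stand-in for the "truly random" quantum RNG
    state, branch_bits = initial_state, []
    for _ in range(n_steps):
        state = laws(state)        # deterministic evolution of the wavefunction
        branch_bits.append(rng.getrandbits(1))  # which Everett branch we land in
    return state, branch_bits
```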

Importantly, the "quantum randomness" of the random number generator is truly, actually random. I think this picture of quantum mechanics does not leave much room for observers like us to influence quantum outcomes, and paints a picture of reality that’s beyond the reach of God.

Let’s say we identify which Everett branch we’re in using a length-n bitstring that encodes the outcomes of the quantum RNG. If the Everett branch we find ourselves in is truly random, then the K-complexity of this bitstring should be close to n. If we should expect, a priori, to find ourselves in some particular "truly random" Everett branch, then the "probabilistic clockwork universe" picture would be correct. 

However, there might be a special class of pseudorandom bitstrings, for which the prior probability that our quantum RNG bitstring is in this class far outweighs the prior probability that the bitstring is "truly random" (/ has K-complexity close to n). If this were the case, the Born probabilities would be more like reflections of logical uncertainty about pseudorandomness (like "the probability of some far-out digit of π being 5 is 10%") than like parameterizations of a "truly random" RNG. 
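To make "pseudorandom" concrete, here is a minimal sketch (mine, not from the post): a length-n bitstring whose K-complexity is far below n, yet which looks like fair coin flips to anyone who doesn’t know how it was generated:

```python
import random

# This short program plus a single seed fully determine all n bits, so the
# string's K-complexity is on the order of a few hundred bits, not n.
n = 10**6
bits = random.Random(12345).getrandbits(n)

# To an observer who doesn't know the seed, each bit still looks like a
# 50/50 coin flip: their uncertainty is logical uncertainty (like "is some
# far-out digit of pi equal to 5?"), not true randomness.
print(bin(bits)[2:12])  # first few bits of the branch-identifying string
```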

One way this could turn out to be true is if it’s a priori more likely that there are special, nonrandom portions of the quantum multiverse we're being sampled from. For example, if we had a priori reasons for expecting that we're in a simulation by some superintelligence trying to calculate the most likely distribution of superintelligences in foreign universes for acausal trade reasons, then we would have a priori reasons for expecting to find ourselves in Everett branches in which our civilization ends up producing some kind of superintelligence – i.e., that it’s in our logical past that our civilization ends up building some sort of superintelligence. 

Under this picture of reality, the deterministic laws of physics remain the same, but the initial conditions and "RNG outcomes" are underdetermined: they merely have high-level constraints applied to them, which our simulators refine over time, possibly never reaching full resolution if full resolution is never needed (as in lazy evaluation). 
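As a sketch of the lazy-evaluation analogy (the names and the `constraint` predicate are illustrative inventions of mine, not CTMU machinery):

```python
import random

class LazyWorld:
    """Toy "authorship" picture: a detail has no value until something needs it."""
    def __init__(self, constraint):
        self.details = {}             # underdetermined: starts out empty
        self.constraint = constraint  # high-level condition the simulator insists on

    def observe(self, key):
        # Resolve a detail only at the moment it's needed (lazy evaluation),
        # resampling until it satisfies the high-level constraint.
        if key not in self.details:
            value = random.random()
            while not self.constraint(key, value):
                value = random.random()
            self.details[key] = value
        return self.details[key]

# Details that nothing ever observes simply never get filled in.
world = LazyWorld(constraint=lambda key, value: value < 0.9)
print(world.observe("measurement_outcome_17"))
```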

This is kind of like "authoring a story over the course of logical time", by fleshing out the details of a plot idea in parallel, stitching them all together in a logically consistent way, and iterating until the story is as complete as it needs to be. The order in which the author fleshes out these details may be totally different from the timeline of the story, and many unimportant details may never get fleshed out, including possibly many details about the initial conditions. 

This contrasts starkly with the "probabilistic clockwork universe" picture, where, roughly speaking, the initial states are fully specified, the next states are then fully specified with a "turn of the crank" (an application of the physical laws + a consultation of the RNG, if we’re tracking which Everett branch we move to), the next states after that are fully specified with another "turn of the crank", and so on. 

Crucially, in the "authorship" picture, the specifics of how future details get filled in might depend on the specifics of how past details get filled in (where "future" and "past" refer to logical time here, not physical time). Some of these past details might be details about us, like the actions we take and the thoughts we think. Just as the desires of a fictional character might affect how the rest of the story gets composed, our desires might affect how the details of the rest of physical reality get filled in, via influencing quantum measurement outcomes and retrodicting initial conditions of the universe (the latter of which Scott Aaronson has speculated about). 

Zooming way out, I want to be clear that I am highly uncertain about this "authorship" picture. In particular, the a priori likelihood of being in a not-truly-random Everett branch is still highly uncertain to me, and the bit about superintelligence simulations was meant more as an intuition pump to illustrate one possible way things could be than as a likely account of how things actually are. The current epistemic state of my inside view around whether our consciousness can influence quantum outcomes is "here be dragons", which I think is importantly different from both "yes, obviously" (as Chris Langan thinks), and "no, that’s obviously crazy, are you one of those supernaturalist nutjobs who doesn’t understand physics?" (as I used to think). 

That being said, Chris apparently has (in his head, at least) detailed and specific arguments behind his "yes, obviously", having largely to do with how to think clearly about reality as a self-simulation, and the implications of that for anthropics. In other words, I think he puts forth a much more detailed case for how to think about the a priori likelihood of being in a "nonrandom" Everett branch. I don’t yet understand Chris’s views here, but my prediction is that I will end up updating toward his position after understanding them better, because that is usually what happens when I come to understand Chris’s views better. 

I’ll close by noting that, as far as I understand, the CTMU implies that the "authorship" picture of reality is correct; "telic recursion" refers to the iterative process of filling in these details throughout logical time; "intelligent design" (as Chris uses the term) is just another term for telic recursion; "UBT" essentially refers to "the story of reality" prior to any details getting filled in; "extended superposition" refers to the status of any "underdetermined portion of the story" whose details are "still in superposition", but spread out across swaths of time. 

Thanks to Marcello Herreshoff for coming up with the analogy of an author writing a story; thanks to Marcello Herreshoff and Nick Tarleton for reviewing earlier drafts of this post; and thanks to Scott Garrabrant for sanity-checking earlier versions of this general idea in private conversations. 


In applied quantum physics, you have concrete situations (the Stern-Gerlach experiment is a famous one), theory gives you the probabilities of outcomes, and repeating the experiment many times gives you frequencies that converge on the probabilities. 

Can you, or Chris, or anyone, explain, in terms of some concrete situation, what you're talking about? 

I'll take a stab at this. Suppose we had strong a priori reasons for thinking it's in our logical past that we'll have created a superintelligence of some sort. Let's suppose that some particular quantum outcome in the future can get chaotically amplified, so that in one Everett branch humanity never builds any superintelligence because of some sort of global catastrophe (say with 99% probability, according to the Born rule), and in some other Everett branch humanity builds some kind of superintelligence (say with 1% probability, according to the Born rule). Then we should expect to end up in the Everett branch in which humanity builds some kind of superintelligence with ~100% probability, despite the Born rule saying we only have a 1% chance of ending up there, because the "99%-likely" Everett branch was ruled out by our a priori reasoning. 
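In code, the arithmetic of that example is just conditioning and renormalizing (a sketch; the outcome labels are mine):

```python
# Born weights for the two chaotically-amplified macro-outcomes above.
born = {"global_catastrophe": 0.99, "builds_superintelligence": 0.01}

# Condition on the a-priori constraint that building a superintelligence
# lies in our logical past: the catastrophe branch is simply ruled out.
allowed = {"builds_superintelligence"}
total = sum(p for outcome, p in born.items() if outcome in allowed)
posterior = {outcome: (p / total if outcome in allowed else 0.0)
             for outcome, p in born.items()}
print(posterior)  # {'global_catastrophe': 0.0, 'builds_superintelligence': 1.0}
```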

I'm not sure if this is the kind of concrete outcome that you're asking for. I imagine that, for the most part, the kind of universe I'm describing will still yield frequencies that converge on the Born probabilities, and for the most part appear indistinguishable from a universe in which quantum outcomes are "truly random". See my reply to Joel Burget for some more detail about how I think about this hypothesis. 

Shortly after publishing this, I discovered something written by John Wheeler (whom Chris Langan cites) that feels thematically relevant: his "Law Without Law".

kave:

Is there anything particularly quantum about this effect?

Using the simulator frame, one might think there's space to tweak:

  1. The basic physical laws
  2. The fundamental constants
  3. The "PRNG" (in an Everettian picture this looks kind of weird because its more like throwing out parts of the wavefunction to save on computation; reminds me a little of mangled worlds)

Perhaps the idea is that tweaking 1 & 2 results in worlds less interesting to the simulator?

Good question! Yeah, there's nothing fundamentally quantum about this effect. But if the simulator wants to focus on universes with 1 & 2 fixed (e.g. if they're trying to calculate the distribution of superintelligences across Tegmark IV), the PRNG (along with the initial conditions of the universe) seems like a good place for a simulator to tweak things. 

This all seems very teleological. Do you have thoughts on what the teleology of the universe could be under this model? 

If this were true, how could we tell? In other words, is this a testable hypothesis?

What reason do we have to believe this might be true? Because we're in a world where it looks like we're going to develop superintelligence, so it would be a useful world to simulate?

If we performed a trillion 50/50 quantum coin flips, and found a program with K-complexity far less than a trillion that could explain these outcomes, that would be an example of evidence in favor of this hypothesis. (I don't think it's very likely that we'll be able to find a positive result if we run that particular experiment; I'm naming it more to illustrate the kind of thing that would serve as evidence.) (EDIT: This would only serve as evidence against quantum outcomes being truly random. In order for it to serve as evidence in favor of quantum outcomes being impacted by consciousness, the low K-complexity program explaining these outcomes would need to route through the decisions of conscious beings somehow; it wouldn't work if the program were just printing out digits of pi in binary, for example.)
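A sketch of what such a test could look like (mine; compressed size is only a crude upper bound on K-complexity, and the threshold is arbitrary):

```python
import lzma

def looks_truly_random(outcome_bits: bytes) -> bool:
    """Compressed size upper-bounds K-complexity: if the outcomes compress
    well, that's evidence against "truly random". The converse doesn't hold:
    a compressor can't detect pi digits or PRNG output, so passing this test
    never confirms true randomness."""
    compressed = len(lzma.compress(outcome_bits))
    return compressed >= 0.9 * len(outcome_bits)  # arbitrary threshold
```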

My inside view doesn't currently lead me to put much credence on this picture of reality actually being true. My inside view is more like "huh, I notice I have become way more uncertain about the a priori arguments about what kind of universe we live in -- especially the arguments that we live in a universe in which quantum outcomes are supposed to be 'truly random' -- so I will expand my hypothesis space for what kinds of universes we might be living in". 

If this were true, how could we tell? In other words, is this a testable hypothesis?

This. Physics runs on falsifiable predictions. If 'consciousness can affect quantum outcomes' is any more true than the classic 'there is an invisible dragon in my garage', then discovering that fact would seem easy from an experimentalist standpoint. Sources of quantum randomness (e.g. weak source + detector) are readily available, so any claimant who thinks they can predict or affect their outcomes could probably be tested initially for a few hundred dollars. 

 

General remark:

One way this could turn out to be true is if it’s a priori more likely that there are special, nonrandom portions of the quantum multiverse we're being sampled from. For example, if we had a priori reasons for expecting that we're in a simulation by some superintelligence trying to calculate the most likely distribution of superintelligences in foreign universes for acausal trade reasons, then we would have a priori reasons for expecting to find ourselves in Everett branches in which our civilization ends up producing some kind of superintelligence – i.e., that it’s in our logical past that our civilization ends up building some sort of superintelligence. 

It is not clear to me that this would result in a lower Kolmogorov complexity at all. Such an algorithm could of course use a pseudo-random number generator for the vast majority of quantum events which do not affect p(ASI) (like the creation of CMB photons), but this is orthogonal to someone nudging the relevant quantum events towards ASI. For these relevant events, I am not sure that the description "just do whatever favors ASI" is actually shorter than just the sequence of events.

I mean, if we are simulated by a Turing Machine (which is equivalent to quantum events having a low Kolmogorov complexity), then a TM which just implements the true laws of physics (and cheats with a PRNG, not like the inhabitants would ever notice) is surely simpler than one which tries to optimize towards some distant outcome state. 

As an analogy, think about the Kolmogorov complexity of a transcript of a very long game of chess. If both opponents are following a simple algorithm of "determine the allowed moves, then use a PRNG to pick one of them", that should have a bounded complexity. If both are chess AIs which want to win the game (i.e. optimize towards a certain state) and use a deterministic PRNG (lest the transcript be incompressible), the size of your Turing Machine -- which /is/ the Kolmogorov complexity -- just explodes.
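A minimal sketch of the random-mover half of the analogy (assuming the third-party python-chess package; the engine half would require embedding the whole engine in the generating program):

```python
import random
import chess  # third-party "python-chess" package (assumed installed)

def random_game_transcript(seed: int) -> str:
    """The whole transcript is reproducible from this short function plus one
    integer seed, so its Kolmogorov complexity stays bounded no matter how
    long the game runs."""
    rng = random.Random(seed)
    board = chess.Board()
    moves = []
    while not board.is_game_over():
        # "determine the allowed moves, then use a PRNG to pick one of them"
        move = rng.choice(list(board.legal_moves))
        moves.append(board.san(move))
        board.push(move)
    return " ".join(moves)
```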

Of course, if your goal is to build a universe which invents ASI, do you really need QM at all? Sure, some algorithms run faster in-universe on a QC, but if you cared about efficiency, you would not use so many levels of abstraction in the first place. 

Look at me rambling about universe-simulating TMs. Enough, enough. 

It is not clear to me that this would result in a lower Kolmogorov complexity at all. Such an algorithm could of course use a pseudo-random number generator for the vast majority of quantum events which do not affect p(ASI) (like the creation of CMB photons), but this is orthogonal to someone nudging the relevant quantum events towards ASI. For these relevant events, I am not sure that the description "just do whatever favors ASI" is actually shorter than just the sequence of events.

Hmm, I notice I may have been a bit unclear in my original post. When I said "pseudorandom", I wasn't referring to the use of a pseudo-random number generator instead of a true RNG. I was referring to the "transcript" of relevant quantum events only appearing random, without being "truly random", because of the way it was generated (which I think is better described as "sampled from a space parameterizing the possible ways the world could be, conditional on humanity building superintelligence" than as "close to truly random, or generated by a pseudo-random RNG, except with nudges toward ASI"). 
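One way to picture the distinction (my sketch; both functions are illustrative stand-ins): sample entire transcripts and condition on the outcome, rather than nudging individual events:

```python
import random

def sample_transcript(rng):
    """Stand-in for sampling one complete transcript of quantum outcomes."""
    return [rng.getrandbits(1) for _ in range(1000)]

def builds_superintelligence(transcript) -> bool:
    """Stand-in predicate for "this branch ends with humanity building ASI"."""
    return sum(transcript) > 520  # arbitrary placeholder condition

# Rejection-sample whole worlds conditional on the constraint. Each accepted
# transcript looks locally Born-random; no individual event was "nudged".
rng = random.Random(0)
transcript = sample_transcript(rng)
while not builds_superintelligence(transcript):
    transcript = sample_transcript(rng)
```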

I mean, if we are simulated by a Turing Machine (which is equivalent to quantum events having a low Kolmogorov complexity), then a TM which just implements the true laws of physics (and cheats with a PRNG, not like the inhabitants would ever notice) is surely simpler than one which tries to optimize towards some distant outcome state.


As an analogy, think about the Kolmogorov complexity of a transcript of a very long game of chess. If both opponents are following a simple algorithm of "determine the allowed moves, then use a PRNG to pick one of them", that should have a bounded complexity. If both are chess AIs which want to win the game (i.e. optimize towards a certain state) and use a deterministic PRNG (lest the transcript be incompressible), the size of your Turing Machine -- which /is/ the Kolmogorov complexity -- just explodes.

Wouldn't this also serve as an argument against malign consequentialists in the Solomonoff prior, which may make it a priori more likely for us to end up in a world with particular outcomes optimized in their favor? 

It is not clear to me that this would result in a lower Kolmogorov complexity at all. 

[...]

Look at me rambling about universe-simulating TMs. Enough, enough.

To be clear, it's also not clear to me that this would result in a lower K-complexity. My main point is that (1) the null hypothesis, that quantum events are independent of consciousness, rests on assumptions (like assumptions about what the Solomonoff prior is like) that I think are actually pretty speculative, and that (2) there are speculative ways the Solomonoff prior could be under which our consciousness can influence quantum outcomes. 

My goal here is not to make a positive case for consciousness affecting quantum outcomes, as much as it is to question the assumptions behind the case against the world working that way. 

This. Physics runs on falsifiable predictions. If 'consciousness can affect quantum outcomes' is any more true than the classic 'there is an invisible dragon in my garage', then discovering that fact would seem easy from an experimentalist standpoint. Sources of quantum randomness (e.g. weak source + detector) are readily available, so any claimant who thinks they can predict or affect their outcomes could probably be tested initially for a few hundred dollars. 

Yes, I'm also bearish on consciousness affecting quantum outcomes in ways as overt and measurable as the ones you're gesturing at. The only thing I was arguing in this post is that the effect size of consciousness on quantum outcomes is maybe more than zero, as opposed to obviously exactly zero. I don't think of myself as having made any arguments that the effect size should be non-negligible, although I also don't think we've ruled out non-negligible effect sizes lying somewhere between "completely indistinguishable from no influence at all" and "overt and measurable, to the extent that a proclaimed psychic could reproducibly affect quantum RNG outcomes".