Back in 2012, in a thread on LW, Carl Shulman wrote a couple of comments connecting Solomonoff induction to brain duplication, epiphenomenalism, functionalism, David Chalmers's "psychophysical laws", and other ideas in consciousness.

The first comment says:

It seems that you get similar questions as a natural outgrowth of simple computational models of thought. E.g. if one performs Solomonoff induction on the stream of camera inputs to a robot, what kind of short programs will dominate the probability distribution over the next input? Not just programs that simulate the physics of our universe: one would also need additional code to "read off" the part of the simulated universe that corresponded to the camera inputs. That additional code looks like epiphenomenal mind-stuff. Using this framework you can pose questions like "if the camera is expected to be rebuilt using different but functionally equivalent materials, will this change the inputs Solomonoff induction predicts?" or "if the camera is about to be duplicated, which copy's inputs will be predicted by Solomonoff induction?"

If we go beyond Solomonoff induction to allow actions, then you get questions that map pretty well to debates about "free will."

The second comment says:

The code simulating a physical universe doesn't need to make any reference to which brain or camera in the simulation is being "read off" to provide the sensory input stream. The additional code takes the simulation, which is a complete picture of the world according to the laws of physics as they are seen by the creatures in the simulation, and outputs a sensory stream. This function is directly analogous to what dualist/epiphenomenalist philosopher of mind David Chalmers calls "psychophysical laws."

Carl's comments pose the questions and highlight the connection, but they don't answer those questions. I would be interested in references to other places discussing this idea, or answers to these questions.

Here are some of my own confused thoughts (I'm still trying to learn algorithmic information theory, so I would appreciate hearing any corrections):

  • If the camera is duplicated, it seems to take a longer program to track/read off two cameras in the physical world instead of just one, so Solomonoff induction would seem to prefer a bit sequence where only one camera's inputs are visible. However, there seems to be a symmetry between the two cameras, in that neither one takes a longer program to track. So right before the camera is duplicated, Solomonoff induction "knows" that it will be in just one of the cameras soon, but doesn't know which one. If we are using the variant of Solomonoff induction that puts a probability distribution over sequences, then the probability mass will split in half between the two copies' views. Is this right? If we are using the variant of Solomonoff induction that just prints out bits, then I don't know what happens; does it just flip a coin to decide which camera's view to print?
  • If the camera is rebuilt using different but functionally equivalent materials, I guess it would be possible to track the inputs to the new camera, but wouldn't it be even simpler to stop tracking the physical world altogether (and just return a blank camera view)?
  • In general, any time anything "drastic" happens to the camera (like a rock falling on it), it seems like one of the shortest programs is to just assume the camera stops working (or does something else that's weird-but-simple, like just "reading off" a fixed location without motion). But I'm not sure how to classify what counts as "drastic".
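To make the probability-splitting intuition concrete, here is a toy numerical sketch (my own illustration, not real Solomonoff induction): each hypothesis is reduced to just a program length, weighted by the universal prior's 2^-length factor, and the lengths are invented numbers chosen so that the two camera-tracking programs are exactly symmetric.

```python
from fractions import Fraction

# Invented program lengths (in bits) for three classes of continuation
# after the camera is duplicated. The two camera-tracking programs are
# symmetric by assumption; the blank-screen program is assumed slightly
# longer here, though the question above suggests it could be shorter.
hypotheses = {
    "track camera A": 100,
    "track camera B": 100,
    "blank screen":   105,
}

def posterior(hyps):
    """Normalize the 2^-length prior weights into a posterior."""
    weights = {name: Fraction(1, 2**length) for name, length in hyps.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

post = posterior(hypotheses)
# By symmetry, each camera hypothesis gets 32/65 of the mass (~0.49),
# so the probability mass "splits in half" between the two copies,
# up to the small remainder on other programs.
assert post["track camera A"] == post["track camera B"]
print({name: float(p) for name, p in post.items()})
```

With these invented lengths, the two camera hypotheses each end up with just under half the posterior mass, matching the "splits in half" intuition, and the small remainder goes to the blank-screen program.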

Daniel Kokotajlo

Mar 03, 2020


To answer your first bullet: Solomonoff induction has many hypotheses. One class of hypotheses would continue predicting bits in accordance with what the first camera sees, and another class of hypotheses would continue predicting bits in accordance with what the second camera sees. (And there would be other hypotheses as well in neither class.) Both classes would get roughly equal probability, unless one of the cameras was somehow easier to specify than the other. For example, if there was a gigantic arrow of solid iron pointing at one camera, then maybe it would be easier to specify that one, and so it would get more probability. Bostrom discusses this a bit in Anthropic Bias, IIRC.
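The "easier to specify" point can be put quantitatively (a back-of-the-envelope sketch under the usual 2^-length prior, with the bit counts invented for illustration): every extra bit of code needed to single out one camera halves that hypothesis class's prior weight relative to the other.

```python
def relative_weight(extra_bits: int) -> float:
    """Prior weight of the harder-to-specify camera, relative to the
    easier one, under a 2^-length universal prior: k extra bits of
    "pointing" code cost a factor of 2^-k."""
    return 2.0 ** -extra_bits

# Symmetric cameras: equal probability.
assert relative_weight(0) == 1.0
# A giant iron arrow that lets one camera be specified one bit more
# cheaply shifts the odds to 2:1 in its favor.
assert relative_weight(1) == 0.5
assert relative_weight(3) == 0.125
```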

To answer your second bullet: Yep. To reason about Solomonoff Induction properly we need to think about what the simplest "psychophysical laws" are, since they are what SI will be using to make predictions given the physics-simulation. And depending on what they are, various transformations of the camera may or may not be supported. Plausibly, when a camera is destroyed and rebuilt with functionally similar materials, the sorts of psychophysical laws which say "you survive the process" will be more complex than the sorts which say you don't. If so, SI would predict the end of its perceptual sequence. (Of course, after the transformation, you'd have a system which continued to use SI. So it would update away from those psychophysical laws that (in its view) just made an erroneous prediction.)

To answer your third bullet: For SI, there is only one rule: Simpler is better. So, think about how you are not sure how to classify what counts as "drastic." Insofar as it turns out to be hard to specify, it's a distinction SI would not make use of. So it may well be that a rock falling on a camera would be predicted to result in doom, but it may not. It depends on what the overall simplest psychophysical laws are. (Of course, they have to also be consistent with data so far -- so presumably lots of really simple psychophysical laws have already been ruled out by our data, and any real-world SI agent would have an "infancy period" where it is busy ruling out elegant, simple, and wrong hypotheses, hypotheses which are so wrong that they basically make it flail around like a human baby.)

Those are my answers at least, I'd be interested to hear if anyone disagrees.

FWIW I am excited to hear Carl was thinking about this in 2012; I ended up having similar thoughts independently a few years ago. (My version: Solomonoff Induction is solipsistic phenomenal idealism.)

My version: Solomonoff Induction is solipsistic phenomenal idealism.

I don't understand what this means (even searching "phenomenal idealism" yields very few results on google, and none that look especially relevant). Have you written up your version anywhere, or do you have a link to explain what solipsistic phenomenal idealism or phenomenal idealism mean? (I understand solipsism and idealism already; I just don't know how they combine and what work the "phenomenal" part is doing.)

Daniel Kokotajlo
Here's an old term paper I wrote defending phenomenal idealism. It explains early on what it is. It's basically Berkeley's idealism but without God. As I characterize it, phenomenal idealism says there are minds/experiences and also physical things, but only the former are fundamental; physical things are constructs out of minds/experiences. Solipsistic phenomenal idealism just means you are the only mind (or at least, the only fundamental one -- all others are constructs out of yours). "Phenomenal" might not be relevant; it's just the term I was taught for the view. I'd just say "Solipsistic idealism" except that there are so many kinds of idealism that I don't think that would be helpful.


Mar 03, 2020


I wrote about a closely related issue (more directly about human developmental psychology / cognitive science than Solomonoff induction) here.

Thanks, that's definitely related. I had actually read that post when it was first published, but didn't quite understand it. Rereading the post, I feel like I understand it much better now, and I appreciate having the connection pointed out.

Lukas Finnveden

Mar 06, 2020


This is highly related to UDASSA. In the linked post, see especially Problem #2 (about splitting conscious computers) and bits of Problem #3 (e.g. "What happens if we apply UDASSA to a quantum universe? For one, the existence of an observer within the universe doesn't say anything about conscious experience. We need to specify an algorithm for extracting a description of that observer from a description of the universe"...)


Mar 07, 2020


Lanrian's mention of UDASSA made me search for discussions of UDASSA again, and in the process I found Hal Finney's 2005 post "Observer-Moment Measure from Universe Measure", which seems to be describing UDASSA (though it doesn't mention UDASSA by name); it's the clearest discussion I've seen so far, and goes into detail about how the part that "reads off" the camera inputs from the physical world works.

I also found this post by Wei Dai, which seems to be where UDASSA was first proposed.

6 comments
So right before the camera is duplicated, Solomonoff induction "knows" that it will be in just one of the cameras soon, but doesn't know which one.

It sounds like it'd "know" that it will be both, separately.

I'm not sure I understand. The bit sequence that Solomonoff induction receives (after the point where the camera is duplicated) will either contain the camera inputs for just one camera, or it will contain camera inputs for both cameras. (There are also other possibilities, like maybe the inputs will just be blank.) I explained why I think it will just be the camera inputs for one camera rather than two (namely, tracking the locations of two cameras requires a longer program). Do you have an explanation of why "both, separately" is more likely? (I'm assuming that "both, separately" is the same thing as the bit sequence containing camera inputs for both cameras. If not, please clarify what you mean by "both, separately".)

My disagreement was terminological, not conceptual.

There is a teleporter. You step into part A, disappear, and step out of both part B and part C separately. There are now two of you. These two do not possess any special telepathy or connection, but both are you, and you may care about the outcomes for both before you step into the teleporter, and this may affect whether you choose to do so.

Duplication is not a process where you will end up as 'one of the two, but unclear which'. Duplication is a process where you become two entities which are not changed by the process. You become not one, but "both, separately." The 'separation' means that the two do not share observations directly with each other (though an object entering the same room as both could be seen by both from different angles).

I consider this to be a flaw in AIXI type designs. To actually make sense, these designs need hypercompute, and so have to guess at what rules allow the hypercompute to interact with the normal universe. I have a rough idea of some kind of FDTish agent that can solve this, but can't formalize it.

I might have misunderstood your comment, but it sounds like you're saying that Solomonoff induction isn't naturalized/embedded, and that this is a problem (sort of like in this post). If so, I'm fine with that, and the point of my question was more like, "given this flawed-but-interesting model (Solomonoff induction), what does it say about this question that I'm interested in (consciousness)?"

We can make Solomonoff induction believe all sorts of screwy things about consciousness. Take a few trillion identical computers running similar computations. Put something really special and unique next to one of the cases, say a micro black hole. Run Solomonoff induction on all the computers, each with different input. Each inductor simulates the universe and has to know its own position in order to predict its input. The one next to the black hole can most easily locate itself as "the one next to the black hole"; if the black hole is moved, it will believe its consciousness resides in "the computer next to the black hole" and predict accordingly.
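A rough bit-counting sketch of why the black-hole computer can locate itself so cheaply (my own illustration; the 20-bit cost of the "next to the black hole" description is an invented number): specifying one of N identical computers by index costs about log2(N) bits, while a short distinguishing description costs roughly a constant that does not grow with N.

```python
import math

# A trillion identical computers: naming one by index takes ~log2(N) bits.
N = 10**12
index_bits = math.log2(N)
print(f"~{index_bits:.0f} bits to specify a computer by index")

# Suppose (invented number) the description "the computer next to the
# black hole" costs 20 bits. Under a 2^-length prior, that hypothesis
# beats the index-based one by a factor of 2^(index_bits - 20), roughly
# a factor of a million here.
description_bits = 20
advantage = 2 ** (index_bits - description_bits)
assert 39 < index_bits < 40
assert advantage > 10**5
```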