[ Question ]

What does Solomonoff induction say about brain duplication/consciousness?

by riceissa · 2nd Mar 2020 · 13 comments


Back in 2012, in a thread on LW, Carl Shulman wrote a couple of comments connecting Solomonoff induction to brain duplication, epiphenomenalism, functionalism, David Chalmers's "psychophysical laws", and other ideas in consciousness.

The first comment says:

It seems that you get similar questions as a natural outgrowth of simple computational models of thought. E.g. if one performs Solomonoff induction on the stream of camera inputs to a robot, what kind of short programs will dominate the probability distribution over the next input? Not just programs that simulate the physics of our universe: one would also need additional code to "read off" the part of the simulated universe that corresponded to the camera inputs. That additional code looks like epiphenomenal mind-stuff. Using this framework you can pose questions like "if the camera is expected to be rebuilt using different but functionally equivalent materials, will this change the inputs Solomonoff induction predicts?" or "if the camera is about to be duplicated, which copy's inputs will be predicted by Solomonoff induction?"

If we go beyond Solomonoff induction to allow actions, then you get questions that map pretty well to debates about "free will."

The second comment says:

The code simulating a physical universe doesn't need to make any reference to which brain or camera in the simulation is being "read off" to provide the sensory input stream. The additional code takes the simulation, which is a complete picture of the world according to the laws of physics as they are seen by the creatures in the simulation, and outputs a sensory stream. This function is directly analogous to what dualist/epiphenomenalist philosopher of mind David Chalmers calls "psychophysical laws."

Carl's comments pose the questions and highlight the connection, but they don't answer those questions. I would be interested in references to other places discussing this idea, or in answers to these questions.
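To make Carl's picture concrete, here is a toy sketch of the "additional code" he describes: a physics program that knows nothing about which observer is "ours", plus a separate read-off function that extracts one camera's input stream from the simulated world. Everything here is illustrative (the "simulation" is just a dict of made-up pixel lists per camera); the point is only that the choice of camera lives entirely in the read-off function, not in the physics.

```python
def simulate_universe(steps):
    """Stand-in physics: returns a world history, one dict of camera views per step.

    The physics code never mentions which camera is "ours".
    """
    return [{"camera_at_origin": [step % 2] * 4,
             "camera_on_moon": [0, 0, 0, 0]}
            for step in range(steps)]

def read_off(world_history, which_camera):
    """The analogue of a "psychophysical law": picks out which part of the
    simulated world becomes the experienced sensory stream."""
    return [frame[which_camera] for frame in world_history]

history = simulate_universe(3)
stream = read_off(history, "camera_at_origin")
```

The extra bits needed to specify `read_off` (and its `which_camera` argument) are what play the role of Chalmers-style bridge laws in Carl's second comment.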

Here are some of my own confused thoughts (I'm still trying to learn algorithmic information theory, so I would appreciate hearing any corrections):

  • If the camera is duplicated, it seems to take a longer program to track/read off two cameras in the physical world instead of just one, so Solomonoff induction would seem to prefer a bit sequence where only one camera's inputs are visible. However, there seems to be a symmetry between the two cameras, in that neither one takes a longer program to track. So right before the camera is duplicated, Solomonoff induction "knows" that it will be in just one of the cameras soon, but doesn't know which one. If we are using the variant of Solomonoff induction that puts a probability distribution over sequences, then the probability mass will split in half between the two copies' views. Is this right? If we are using the variant of Solomonoff induction that just prints out bits, then I don't know what happens; does it just flip a coin to decide which camera's view to print?
  • If the camera is rebuilt using different but functionally equivalent materials, I guess it would be possible to track the inputs to the new camera, but wouldn't it be even simpler to stop tracking the physical world altogether (and just return a blank camera view)?
  • In general, any time anything "drastic" happens to the camera (like a rock falling on it), it seems like one of the shortest programs is to just assume the camera stops working (or does something else that's weird-but-simple, like just "reading off" a fixed location without motion). But I'm not sure how to classify what counts as "drastic".
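The first two bullets can be sketched numerically. Real Solomonoff induction is uncomputable, so this is only a toy: hypotheses are labelled with hypothetical program lengths in bits, and each gets prior weight 2^(-length), as in the Solomonoff prior. The lengths below are made up; the symmetry between the two copies is the only load-bearing assumption.

```python
def normalized_prior(hypotheses):
    """Map {name: program_length_in_bits} to a normalized probability
    distribution using Solomonoff-style 2^(-length) weights."""
    weights = {name: 2.0 ** -length for name, length in hypotheses.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# After the camera is duplicated: by symmetry, "track copy A" and
# "track copy B" have the same (hypothetical) program length, so whatever
# probability mass tracked the original camera splits evenly between them.
after = normalized_prior({
    "track copy A": 100,  # equal lengths by symmetry
    "track copy B": 100,
    "blank view": 90,     # the simpler continuation from the second bullet
})
```

Under this toy prior the two copies get exactly equal mass, matching the "splits in half" intuition, while any continuation that is even slightly shorter to specify (here, the blank view) outweighs both.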



4 Answers

To answer your first bullet: Solomonoff induction has many hypotheses. One class of hypotheses would continue predicting bits in accordance with what the first camera sees, and another class of hypotheses would continue predicting bits in accordance with what the second camera sees. (And there would be other hypotheses as well in neither class.) Both classes would get roughly equal probability, unless one of the cameras was somehow easier to specify than the other. For example, if there was a gigantic arrow of solid iron pointing at one camera, then maybe it would be easier to specify that one, and so it would get more probability. Bostrom discusses this a bit in Anthropic Bias, IIRC.

To answer your second bullet: Yep. To reason about Solomonoff Induction properly we need to think about what the simplest "psychophysical laws" are, since they are what SI will be using to make predictions given the physics-simulation. And depending on what they are, various transformations of the camera may or may not be supported. Plausibly, when a camera is destroyed and rebuilt with functionally similar materials, the sorts of psychophysical laws which say "you survive the process" will be more complex than the sorts which say you don't. If so, SI would predict the end of its perceptual sequence. (Of course, after the transformation, you'd have a system which continued to use SI. So it would update away from those psychophysical laws that (in its view) just made an erroneous prediction.)

To answer your third question: For SI, there is only one rule: Simpler is better. So, think about how you are not sure how to classify what counts as "drastic." Insofar as it turns out to be hard to specify, it's a distinction SI would not make use of. So it may well be that a rock falling on a camera would be predicted to result in doom, but it may not. It depends on what the overall simplest psychophysical laws are. (Of course, they also have to be consistent with data so far -- so presumably lots of really simple psychophysical laws have already been ruled out by our data, and any real-world SI agent would have an "infancy period" where it is busy ruling out elegant, simple, and wrong hypotheses, hypotheses which are so wrong that they basically make it flail around like a human baby.)
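The "infancy period" point can be sketched with the same kind of toy model: simple-but-wrong hypotheses get ruled out the moment they mispredict, and among the survivors the shortest program dominates. Again, lengths and predictors here are invented for illustration; real SI runs over all programs.

```python
def posterior(hypotheses, observed):
    """Keep only hypotheses whose predictor got every observed bit right,
    then renormalize their 2^(-length) prior weights."""
    consistent = {}
    for name, (length, predict) in hypotheses.items():
        if all(predict(observed[:i]) == observed[i] for i in range(len(observed))):
            consistent[name] = 2.0 ** -length
    total = sum(consistent.values())
    return {name: w / total for name, w in consistent.items()}

# Each predictor maps a bit-history to a predicted next bit.
hypotheses = {
    "all zeros": (5, lambda hist: 0),               # very simple, wrong
    "all ones": (5, lambda hist: 1),                # very simple, wrong
    "alternate": (12, lambda hist: len(hist) % 2),  # simple, fits the data
    "complicated": (40, lambda hist: len(hist) % 2),  # same predictions, longer program
}

post = posterior(hypotheses, observed=[0, 1, 0, 1, 0])
```

After five bits, the two "elegant, simple, and wrong" hypotheses have zero mass, and of the two survivors making identical predictions, the shorter one gets nearly all the probability.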

Those are my answers at least, I'd be interested to hear if anyone disagrees.

FWIW I am excited to hear Carl was thinking about this in 2012, I ended up having similar thoughts independently a few years ago. (My version: Solomonoff Induction is solipsistic phenomenal idealism.)

I wrote about a closely related issue (more directly about human developmental psychology / cognitive science than Solomonoff induction) here.

This is highly related to UDASSA. In the linked post, see especially Problem #2 (about splitting conscious computers) and parts of Problem #3 (e.g. "What happens if we apply UDASSA to a quantum universe? For one, the existence of an observer within the universe doesn't say anything about conscious experience. We need to specify an algorithm for extracting a description of that observer from a description of the universe"...)

Lanrian's mention of UDASSA made me search for discussions of UDASSA again, and in the process I found Hal Finney's 2005 post "Observer-Moment Measure from Universe Measure", which seems to be describing UDASSA (though it doesn't mention UDASSA by name); it's the clearest discussion I've seen so far, and goes into detail about how the part that "reads off" the camera inputs from the physical world works.

I also found this post by Wei Dai, which seems to be where UDASSA was first proposed.