Framing Consciousness

by cousin_it · 1 min read · 8th May 2009 · 45 comments


Update: you can ignore this post, it's completely wrong; I'm only leaving it up to preserve people's comments. Randallsquared has caught a crucial mistake in my reasoning: consciousness could require physical causality, rather than being a property of some snapshot description. This falsifies my Point 3 below.


In this unabashedly geek-porn post I want to slightly expand our discussion of consciousness, as defined in the hard problem of consciousness. Don't be scared: no quantum claptrap or "informational system" bullshit.

Point 1. The existence (and maybe degree) of conscious/subjective experiences is an objective question.

Justification: if you feel a human possesses as much consciousness as a rock or the number three, stop reading now. This concludes the "proof by anthropic principle" or "by quantum immortality" for those still reading.

Point 2. It's either possible or impossible in principle to implement consciousness on a Turing-equivalent digital computer.

Justification: obvious corollary of Point 1.

Point 3. If consciousness is implementable on a digital computer, all imaginable conscious experiences already exist.

Justification: the state of any program can be encoded as an integer. What does it mean for an integer to "exist"? Does three "exist"? If a computer program gives rise to "actually existing" subjective experiences, then so does the decimal expansion of the x-coordinate of some particle in the Magellanic Cloud when written out in trinary.
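To make the first step concrete, here's a minimal sketch of encoding a program state as a single integer. The state format and helper names are illustrative assumptions, not anything specific to the argument; the point is only that the encoding is a bijection, so every possible state corresponds to exactly one integer.

```python
import json

def state_to_int(state):
    """Encode a program state (here a JSON-serializable dict) as one
    non-negative integer, by reading its byte serialization as a
    base-256 number. Any finite digital state admits such an encoding."""
    data = json.dumps(state, sort_keys=True).encode("utf-8")
    return int.from_bytes(data, "big")

def int_to_state(n):
    """Invert the encoding: recover the bytes, then the original state."""
    data = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return json.loads(data.decode("utf-8"))

# A toy "snapshot" of a running program (fields are made up for the demo):
state = {"pc": 42, "registers": [0, 7, 3], "halted": False}
n = state_to_int(state)
assert int_to_state(n) == state  # the integer uniquely determines the state
```

The round trip shows that nothing is lost: the integer carries the whole snapshot, which is all Point 3 needs before asking what it means for that integer to "exist".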

Point 4. If consciousness is implementable, the Simulation Argument is invalid and Pascal's Mugging is almost certainly invalid.

Justification: obvious corollary of Point 3.

Point 5. If consciousness is non-implementable, the Simulation Argument and Robin's uploads scenario lose much of their punch.

Justification: the extinction threat in SA and the upload transition only feel urgent due to our current rapid progress with digital computers. We don't yet have a computer peripheral for providing programs with a feeling of non-implementable consciousness.

Point 6. If consciousness could be implementable, Eliezer had better account for it when designing his FAI.

Justification: there's no telling what the FAI will do when it realizes that actual humans have no privileged status over imaginable humans, or alternatively that they do and torturing simulated humans carries no moral weight.

Point 7. The implementability of currently known physics gives strong evidence that consciousness is implementable.

Justification: pretty obvious. Neurons have been called "essentially classical objects", not even quantum.

Point 8. The fact that evolution gave us conscious brains rather than "dumb computers" gives weak evidence that consciousness is non-implementable.

Justification: we currently know of no reason why organisms would need implementable consciousness, whereas using a natural phenomenon of non-implementable consciousness could give brains extra computational power.

Any disagreements? Anything else interesting in this line of inquiry?