by [anonymous]
3 min read · 19th Dec 2012 · 34 comments
Disclaimer: I don't have sufficient knowledge of the concepts appearing in this post; hopefully, though, I have categorized them properly and put them into suitable boxes to be used correctly. If you find a real error here, please point it out. I'm sure there are people (including me) who'd like to know about it. I'll probably rewrite this bit soon if there are errors.


For the purpose of this thought experiment, let's assume that the reductionist approach to consciousness is correct: consciousness is indeed reducible to physics, and there is no ontologically distinct basic mental element that produces the experience of consciousness - the qualia.

 

"The Scary problem of Qualia"

This leads to a very strange problem, which I'd term "the Scary Problem of Qualia" in a slightly humorous sense. If mental experience is as proposed, and humans with brains doing thinking are "merely physics", or "ordered physics" (physics + logic, that is), then it follows that the experience of consciousness is the logic, given the physics...

...Which is to say that whenever there is (a physical arrangement with) a logical structure that matches (is transitive with) the logical structure of consciousness, there would be consciousness. It gets more complicated. If you draw a line with a pencil on a piece of paper so that it encodes a three-dimensional trajectory over time of a sentient being's consciousness, you have basically created a "soulful" being. Except there's just a drawn line on a piece of paper.

(Assuming you can store a sufficient number of bits in such an encoding. Think of a "large" paper and a long, complicated line if imagining an A4 with something scribbled on it is a problem. You can also replace the pencil and paper with a Turing machine if you like.)
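The encoding step itself is unmysterious. Here is a minimal sketch of the idea, with all names hypothetical and the "trajectory" a toy stand-in for whatever a real recording would have to contain: a discrete sequence of (x, y, z) points is packed into a single bit string, and can be recovered from it.

```python
# Toy sketch only: serialize a discrete "state trajectory" -- a list of
# (x, y, z) coordinates over time -- into one bit string, the way the
# pencil line is imagined to. No claim that real minds fit this format.

def encode_trajectory(points, bits_per_coord=8):
    """Pack each coordinate into a fixed-width binary field."""
    out = []
    for x, y, z in points:
        for coord in (x, y, z):
            out.append(format(coord, f"0{bits_per_coord}b"))
    return "".join(out)

def decode_trajectory(bitstring, bits_per_coord=8):
    """Inverse of encode_trajectory: recover the (x, y, z) points."""
    step = bits_per_coord * 3
    points = []
    for i in range(0, len(bitstring), step):
        chunk = bitstring[i:i + step]
        coords = [int(chunk[j:j + bits_per_coord], 2)
                  for j in range(0, step, bits_per_coord)]
        points.append(tuple(coords))
    return points

trajectory = [(1, 2, 3), (4, 5, 6)]
encoded = encode_trajectory(trajectory)
assert decode_trajectory(encoded) == trajectory
```

The philosophical puzzle is entirely about the status of the bit string once it exists, not about the mechanics of producing it.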

If you now take a device complicated enough to decode the message and create a representation of it (for a sci-fi example, a brain-modifying device which stimulates a form of empathic inference), you can partially think the thoughts recorded on the piece of paper. Further, you could simulate this particular person inside a powerful AI, even save that information to a disk, insert it into an android, and let that person go on living his/her life. If this isn't sufficient, you could engineer a biological being whose brain produces a series of chemical and enzymatic reactions, combined with an electron cloud, etc., that happens to be transitive with the logical structure of the data stored on the chip.

 

Meditation: And this creates another kind of problem. Did the person come into existence:

1. When the line drawn with the pencil came into existence?

2. When the entity that created the line thought of how to draw the line?

3. When the line was decoded?

4. When the supercomputer AI simulated the person from the line?

5. When it was recorded onto the chip for the android?

6. Or lastly when the contents of the chip were translated to a biological brain producing the same thoughts?



Meditation: Is logic an ontologically basic thing?

In my book this is all a natural consequence of the reductionist approach to the hard problem of consciousness. That is, unless we want to consider the possibility that electron clouds have little tags which say "consciousness" hanging from... uhuh... from their amplitudes.

So in other words: from the reductionist perspective there's just physics, which can be described with the help of logic. Whenever there is a physical part of the universe that is correlated with the rest of the universe in such a way that it would resemble consciousness when interacted with, that thing would be just as much a person/zombie as we are. The same goes for simulated people.

[anonymous] · 11y

consciousness is indeed reducible to physics and there is no ontologically distinct basic mental element that produces the experience of consciousness - the qualia.

Ok.

Did the person come into existence

Hold on, this "existence" thing is assuming some kind of magical reality juice beyond physics and logic. Taboo "existence": what are we expecting to see?

If I find a conscious being in a simple mathematical formalism (like the Fibonacci series or a cellular automaton), when did they come into existence? If I tweak the math a little bit so that I inject a message for them that saves them some trouble, do they still experience the trouble? Is the tweaked being real? If you shut down the computer when you realize they are going to be tortured, does that stop the torture, or just stop you from seeing it?
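(A "simple mathematical formalism" of the kind mentioned here is easy to write down. Here is a minimal Rule 110 cellular automaton, purely as an illustration of the sort of object one could, in principle, search for structures in; no claim that anything in it is conscious.)

```python
# Rule 110: each cell's next state is determined by its left neighbor,
# itself, and its right neighbor, via the lookup table below.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Advance one generation, with wraparound boundaries."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Evolve a single live cell for a few generations.
row = [0] * 10 + [1] + [0] * 10
for _ in range(5):
    row = step(row)
```

Rule 110 is known to be Turing-complete, which is part of why such trivially small rules make the "where is the being in the formalism?" question bite.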

Assume the qualia hypothesis is true. What difference does that make? Do they just not exist because they got no magical reality juice? What if you rescue them from your formalism and they thank you and tell you how good it feels to be out, and go on to write papers about consciousness? When did they come into existence?

I see no difference between this and the questions given reductionism. Therefore I think that talking about this "come into existence" thing assumes qualia or something equivalent.

Forgetting qualia/no-qualia for a minute, are these questions even meaningful? I think they are meaningful iff they constrain what we should do or see. (Which they do.)

This of course is a hard problem, but it doesn't seem to have much to do with physicalism/qualiaism.

I think the latter part about existence and magical reality juice is a misinterpretation of the OP.

By "come into existence", my second take (i.e. a charitable reading, because the default reading leaves me confused) says he's probably referring to the specific moment/state of "this entity (mind) is aware", i.e. experiencing its first qualia.

When does the first experience of qualia happen for that entity? That seems like a very legitimate question, and a very important and difficult moral problem. It's also potentially scary, depending on the answer and its implications (e.g.: ol jevgvat nobhg e!Uneel'f gubhtugf ba ovyyvbaf bs fragvrag vafrpgf, qvq Ryvrmre vafgnagyl tvir dhnyvn naq vzzrafr fhssrevat gb ovyyvbaf bs zvaqf, whfg orpnhfr bs gur rapbqrq zrnavat va gur jbeqf gung na NV ernqvat UCZbE pbhyq cbgragvnyyl qrpbqr naq gurersber fvzhyngr?)

(edit: Rot13'd just in case that's considered a spoiler - it's relatively early HPMoR stuff)

[anonymous] · 11y

I think the latter part about existence and magical reality juice is a misinterpretation of the OP.

Agreed

By "come into existence", my second take (i.e. a charitable reading, because the default reading leaves me confused) says he's probably referring to the specific moment/state of "this entity (mind) is aware", i.e. experiencing its first qualia.

Actually, I didn't really think about that part, but you, sir, did. I think this is an entirely different issue, and I think this is one way to answer it:

"When does the experience of consciousness occur?" is a slightly misleading question. Instead it should be phrased "Which location in spacetime does this arrangement of physics correspond to?", because if you record an experience in some form, I wouldn't really say this particular experience is happening at any given time; rather, its structure has a causal relationship with other physical structures. If A is the state of a mind and B is an outside physical event, and A and B interact in some way (for example, if a person sees a flower and the state of A therefore changes accordingly), the changes in the internal structure due to the interaction correspond to the event.

In the post, the creation process of the conscious experience was left out; in other words, the correspondent was also left out. And if there isn't one, then this experience isn't linkable with any specific moment - except if we consider an agent simulating this experience in order to record it, in which case that would be the correspondent for the internal structure of the data.

It's a really interesting question though and I'm really not sure if this answer is sufficient.

P.S. Meanwhile, the moment when the drawn line came into existence is a different question from the timing of the "experience". Normally, though, these two things are not distinct, I suppose?

[This comment is no longer endorsed by its author]

This answer is certainly very helpful in clarifying which questions we should be asking. I think figuring out the exact questions to ask is most of the solution here, like in maths.

[anonymous] · 11y

When does the first experience of qualia happen for that entity?

The same time that human babies do: when they get to that stage in development. It is a legitimate question, and people fight over it (abortion, etc.).

If you mean some outside when, that doesn't make any sense. When did the universe come into existence? What about the integers?

Your first response is what I was thinking of. I'm not sure I even understand what you could mean by "some outside when".

Yup, people fight over it all the time. They're just not always (or even "usually") aware that this is one of the hidden queries behind their confusing (or confused) questions and arguments.

Hold on, this "existence" thing is assuming some kind of magical reality juice beyond physics and logic. Taboo "existence": what are we expecting to see?

"Magical reality fluid" is exactly what Eliezer calls it, to remind himself that he's still fundamentally confused about it. So am I.

Meditation: And this creates another kind of problem. Did the person come into existence:

Are you familiar with the Sorites paradox? It's a great example of how human intuition is vague, and sometimes self-contradictory. For any transition, there must be a boundary. But humans don't really "do" boundary cases - we reason about typical cases. If you asked your six questions in the opposite order, you could get people, on average, to place the boundary differently.

[anonymous] · 11y

Edited: I was not aware of that paradox, but it looks like this paradox is created by formulating a false premise and accepting it as true. In the example given on that Wikipedia page, the "paradox of the heap", the second premise is obviously incorrect. If you have 5 dollars in your pocket and create a rule which says "even if you spend money, you'll still have money in your pocket", it's pretty clear that this isn't true after you've spent 5 dollars.

I think it's closely related to the Ship of Theseus, which argues about identity after changing parts. The Sorites paradox argues about identity after removing parts. If you have vague definitions for the identity and construct false rules and accept those false rules as true, then these unnecessary paradoxes will follow.

If you remove boards from a ship, you won't get to "no ship" or "scattered boards" directly; instead you get to "sinking ship" or "broken ship" or "incomplete ship" at some point, or just "ship missing a board", etc. This isn't about the ship, but rather the vagueness of our labels for it. That's what I think, at least.

I think it was a good pick in this context of consciousness, because consciousness is really complicated and we only have very vague definitions for it.

Replace the Symbol with the Substance and Disputing Definitions are, I think, good LessWrong posts around similar issues.

[This comment is no longer endorsed by its author]

Well yes, we can clearly see that the second premise is false after some inductive reasoning.

But there's also another route, the non-inductive route: can you give me a single example of a heap of sand that becomes a non-heap when you remove a grain?

The point is not that heaps are magic or induction is broken or anything like that. The point is that humans are awful at finding the boundaries of their categories. And as Wei Dai would note, we can't just get around this by playing taboo when the thing we're supposed to be finding the boundary of enters directly into our utility function.

If you have four grains of sand arranged in a tetrahedron, you could conceivably call it a (very small) heap. When you take away one of the grains, you will no longer have a heap, just three grains of sand.

This is assuming that your definition of "heap" includes some of it being on top of the rest of it, which I'm fairly sure is standard.

That's what I would have said.

[anonymous] · 11y

To avoid this paradox you can make the following rule:

("The heap of sand minus one grain is still a heap" is true) if and only if (the heap of sand minus one grain still constitutes a heap), in the style suggested in this LessWrong post.

But there's also another route, the non-inductive route: can you give me a single example of a heap of sand that becomes a non-heap when you remove a grain?

Yes, that's pretty easy, since it's only a question of what you call a heap. The paradox is basically baiting you by asking you to think of a million grains of sand, which you obviously can't quantitatively visualize, and that could result in abandoning the attempt to find definitive criteria.

As I'm not a native speaker of English, I'm not sure if my idea of a "heap" corresponds to what the word is generally used to refer to, but I'd draw the minimum boundary at 1 grain never being a heap of grains. In my opinion, a heap refers to a count of objects above 1. In addition, I think it also refers to a geometric structure where objects are arranged in such a fashion that some objects are supporting other objects on top of them. You could also try to make a distinction between stacks and heaps. Anyway, I think you should just drop the suggested million grains, start from 3 grains, and ask yourself: "If I remove one grain, is what's remaining a heap?"

This quickly gets to "This plucked chicken has two legs and no feathers; therefore, by definition, it is a human!" That is, it's hard to find a really solid definitive criterion, so you should instead just try to imagine a situation where you would no longer call the remaining grains of sand a heap.

[This comment is no longer endorsed by its author]

The paradox is basically baiting you by asking you to think of a million grains of sand, which you obviously can't quantitatively visualize

http://answers.yahoo.com/question/index?qid=20100805132521AAcGBqs

:)

...Which is to say that whenever there is (a physical arrangement with) a logical structure that matches (is transitive with) the logical structure of consciousness, there would be consciousness. It gets more complicated. If you draw a line with a pencil on a piece of paper so that it encodes a three-dimensional trajectory over time of a sentient being's consciousness, you have basically created a "soulful" being. Except there's just a drawn line on a piece of paper.

(Assuming you can store a sufficient number of bits in such an encoding. Think of a "large" paper and a long, complicated line if imagining an A4 with something scribbled on it is a problem. You can also replace the pencil and paper with a Turing machine if you like.)

I view consciousness as a process. When tracing the chains of cause and effect from the line back to the thing that caused the line, we find a pencil, then a hand, then a mind controlling the hand. Similarly, a programless, empty Turing machine will not create these states, but a sufficiently complex state within the machine could. The line may contain an interesting potential, but it lacks motion, and therefore lacks consciousness. In my opinion, consciousness is a peculiar sort of motion.

Meditation: Is logic an ontologically basic thing?

The ontologically basic thing is the relationships between different aspects of reality. Logic is a toolset we use to vet our descriptions of those relationships. I don't see how the toolset is a given, except in the sense that it emerged naturally (in the same way that, say, pizza and politics have).

So in other words: from the reductionist perspective there's just physics, which can be described with the help of logic. Whenever there is a physical part of the universe that is correlated with the rest of the universe in such a way that it would resemble consciousness when interacted with, that thing would be just as much a person/zombie as we are. The same goes for simulated people.

Pretty much. What the line on A4 lacks is the interaction (motion, action). Perhaps there is some possible, wildly creative structure that remains static but exhibits conscious behavior when randomly sampled from, but that's a different rabbit hole.

EDIT: Phrasing.

On first read, the question sounded like it didn't constrain anything in the real world (see the "tree falling in a forest" question)... but, in fact, it is relevant because it impacts our moral judgments.

Which says something though about the consistency of our moral judgments... (recently discussed around here).

Which is to say that whenever there is (a physical arrangement with) a logical structure that matches (is transitive with) the logical structure of consciousness, there would be consciousness.

Um, no? Why on Earth does that follow? We postulate that there is something about the physical properties of carbon atoms arranged as a human brain that causes, or is, consciousness. The physical properties of your line on paper aren't anything like that.

If it's a pencil line then it's got carbon atoms ;-)

Yes, but they're not organised anything like the ones in the brain! The energy flows, binding energies, wave amplitudes are completely different.

So what if I take an ink made of neural stem cells, plus some other stuff to keep them alive and not connecting to one another?

Then the quantum amplitudes and current flows are still nothing like that of a conscious brain. The logical structure only exists in your head; the atoms in my desk have some sort of "logical correspondence" to the ones in my head, for sufficiently arbitrary assignments of representation. "This atom corresponds to that one, and this one to that one over there... so the desk represents my brain." Well, it can, but only in my head!

...Which is to say that whenever there is (a physical arrangement with) a logical structure that matches (is transitive with) the logical structure of consciousness, there would be consciousness. It gets more complicated. If you draw a line with a pencil on a piece of paper so that it encodes a three-dimensional trajectory over time of a sentient being's consciousness, you have basically created a "soulful" being. Except there's just a drawn line on a piece of paper.

Assuming this is possible, I would say the line on the paper is a "rendering" or "depiction" of a conscious being at some point in time. In order for the rendering to in some way "be" a conscious being, would it not require the ability to change itself somehow? At the very least it must be able to accrue memories, meaning that over time some part or parts of the rendering must be updated to incorporate the new memories. If the rendering cannot physically update itself, it seems there must be at least one extra part required.

It's hard to discuss further without relying on my personal definition of consciousness. But now that I think about it, I probably came up with this definition by analyzing similar ideas. Perhaps in some way specifying precise boundary conditions is equivalent to having a precise definition?

Meditation 1: The person has always existed. When do we come into existence from a perspective outside of our universe? Either we exist in spacetime or we don't; there's no "when" outside the universe. Meditation 2: Yes.

It's just Tegmark's Level IV multiverse. The only evidence I know of for holding such a position is the greater complexity of a theory that allows only our particular universe to exist, to the exclusion of all others. If we find a succinct and perfect Theory of Everything that is shorter than a description of the Tegmark Level IV multiverse, there would no longer be any evidence to support it, in my opinion.

A part of a sky cloud could simulate my brain for a second. There are a lot of those parts: 2^N (N = number of parts), which could be enough. At least in the long run.

I have also struggled with this scary idea for years now.

Did the person come into existence:

Ve came into existence whenever a computation isomorphic to verself was performed.

That seems to trivially follow. It also seems to just push the burden of reduction onto "performed".

Yes, that's a big problem.

The linked article is kind of confused, but I don't know how to answer the general argument, which is related to "Dust Theory". If there is a computer program that you can run which will be conscious - call it "Program Dave" - there's a snag you can run into. For any random bit string you give me, I can design a "computer" that, when it "runs" that bit string as a program, it will interpret the bit string as being Program Dave. There's no obvious way to look at something and tell if it's "really" a copy of Program Dave or not, because everything is a copy of Program Dave if you look at it the right way.
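The "everything is a copy of Program Dave" move can be made concrete with a toy sketch. Here `program_dave` and the "computer" are hypothetical stand-ins, not anything from a real system: the trick is simply that a legitimate-looking interpreter is free to ignore the program it is handed.

```python
# Sketch of the triviality worry: a "computer" whose interpretation rule
# discards its program entirely, so that *every* bit string "runs as"
# Program Dave. program_dave is a stand-in for the supposedly conscious
# computation, not a real one.

def program_dave(input_text):
    # Stand-in behavior for the allegedly conscious program.
    return f"Dave says: {input_text!r}"

def make_dave_computer():
    """Build an 'interpreter' under which any program is Program Dave."""
    def run(program_bits, input_text):
        # Nothing in the bare notion "this machine runs that program"
        # forbids the interpretation map from ignoring program_bits.
        return program_dave(input_text)
    return run

run = make_dave_computer()
# Two unrelated "programs" behave identically under this machine:
assert run("010101", "hello") == run("111000", "hello")
```

The philosophical question is then what extra condition (causal structure, counterfactual sensitivity to the program, etc.) rules such degenerate interpreters out, and that is exactly what "performed" was hiding.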

See also.

Your example reminds me of one of Hofstadter's dialogues in The Mind's I, where he imagines that after Einstein's death, all of the information in his brain has been transcribed into a huge book of numbers which can tell you precisely how it would have responded to different inputs. It would of course be possible to 'talk' to Einstein's brain in this way, work out how his brain would change and how he would respond, and thus have a conversation with the brain. I found the question of whether such a book would be capable of consciousness (and whether it would be Einstein) baffling and a little scary, and it raises many of the same problems as your first meditation.

Importantly, such a book is "not physical" -- Einstein's computations are far too big to fit in a physical book. If you respond that this is just a thought experiment, well, the point is we have to stick to physics if we are to be reductionists. Sticking to physics actually eliminates a lot of counterintuitive thought experiments.

[anonymous] · 11y

Your example reminds me of one of Hofstadter's dialogues in The Mind's I, where he imagines that after Einstein's death, all of the information in his brain has been transcribed into a huge book of numbers which can tell you precisely how it would have responded to different inputs. It would of course be possible to 'talk' to Einstein's brain in this way, work out how his brain would change and how he would respond, and thus have a conversation with the brain. I found the question of whether such a book would be capable of consciousness (and whether it would be Einstein) baffling and a little scary, and it raises many of the same problems as your first meditation.

I'll take that as a compliment, because if "The Mind's I" is a book (I have never heard of it before) written by a famous author, and you find this post similar, then I suppose it can't be all bad? :)

P.S. However, if the book in that particular example recorded only a "still image" or a "slice" of the structure of the consciousness, then I don't think it constitutes a recording of a thought (even though it would still be possible to interact with it if it was allowed to evolve). I think that would require a recording of a three-dimensional trajectory over time.

[This comment is no longer endorsed by its author]
[anonymous] · 11y

Meditation: Is logic an ontologically basic thing?

Edit: I accidentally quoted myself as well.

Perhaps I should ask: what are one or more differences between an example where logic is ontologically basic and an example where it isn't, other than the "ontologically basic" tag?

For example, is it something like this?

In Ontologically Basic Logic, the answer to the Paradox of the Stone is "That is a Paradox." In Non-Ontologically Basic Logic, the answer to the Paradox of the Stone is "Yes."

I really don't actually know if that is an example of the differences which is why I am asking to check. It may be something else entirely.

Meditation: And this creates another kind of problem. Did the person come into existence:

Nobody knows. That is the real problem of qualia: on the one hand, we have subjective experience; on the other, everything else we know leaves no room for any such thing to exist. "But it exists! But it can't! But it exists! But it can't!" All proposed solutions amount to chopping off one hand or the other, and all refutations to those solutions consist of pointing out that the hand is still there.

Curiously, none of this prevents people from seriously talking about interactive animatronic puppets as if they had emotions.

Curiously, none of this prevents people from seriously talking about interactive animatronic puppets as if they had emotions.

For now!

It will be interesting to see the cultural confusion when 'simulations' are as complex and deep as the real deal. I wonder if robots will look at me with (simulated?) disgust when I joke about circuit bending my friend's little sister's furby? Will I simulate shame?