I think this is related to what Chalmers calls the "meta-problem of consciousness" - the problem of why it seems subjectively undeniable that a hard problem of consciousness exists, even though it only seems possible to objectively describe "easy problems" like the question of whether a system has an internal representation of itself. Illusionism - the idea that the hard problem is illusory - is an answer to that problem, but I don't think it fully explains things.
Consider the question "why am I me, rather than someone else?". Objectively, the question is meaningless - it's a tautology like "why is Paris Paris?". Subjectively, however, it makes sense, because your identity in objective reality and your consciousness are different things - you can imagine "yourself" seeing the world through different eyes, with different memories and so on, even though that "yourself" doesn't map to anything in objective reality. The statement "I am me" also seems to add predictive power to a subjective model of reality - you can reason inductively that since "you" were you in the past, you will continue to be you in the future. But if someone else tells you "I am me", that doesn't improve your model's predictive power at all.
I think there's a real epistemological paradox there, possibly related somehow to the whole liar's/Gödel's/Russell's paradox thing. I don't think it's as simple as consciousness being equivalent to a system with a representation of itself.
Ah, good point. There's also this idea of six levels of consciousness that I saw somewhere on the internets, where the first level is so-called "survival consciousness", with an easy-to-follow definition that is probably equivalent to what you describe as an "easy problem". It is then followed by fancier, more elusive levels and trickier questions. I find it quite confusing, though, that we only have one term for all of these. As if these similar yet different concepts were deliberately blended together for speculation purposes.
It's especially annoying in AI-related debates when one person claims that AI is perfectly capable of being conscious (implying the basic "survival consciousness"), while another claims that it can't be (implying something nearly impossible even to define). In the practical context of AI-versus-humankind relationships (which I guess is quite a hot topic nowadays) - whether it will fight for survival, whether it will see us as a threat, etc. - it's perfectly sufficient to consider only basic survival consciousness.
I was just watching a Doom Debates video with critics of Penrose's stance on AI consciousness, which he denies without the slightest hesitation while readily granting animals the privilege of being conscious. I mean, that's not very useful or practical terminology then. If we say A, that AI is incapable of those higher levels of consciousness, then we need to say B too: that animals are incapable of those levels as well. Meanwhile, the basic level of survival consciousness is available to both. And facing something with survival consciousness plus superior intelligence is already puzzling enough to make more practical questions worth focusing on than philosophical debates about higher levels of consciousness. Penrose's position, by contrast, feels more like "OK people, move along, there's nothing to see here."
I see the point that the broader question is not that easy to answer, but it feels wrong to put the simpler, more practical case under the same umbrella as the non-trivial ones and just discard them all together. I think it leads to ridiculous claims and creates a false impression that there's nothing to worry about, purely because of poor terminology. It's quite sad to see this confusion time and time again, hence the original post.
Did you mean these Levels of Consciousness? I think these are descriptive (and to some degree prescriptive) but not explanatory. They don't say how these layers arise except as a developmental process, but that just pushes the explanation elsewhere.
Every now and then I see consciousness mentioned as something very special, unique, hard to define and measure, and all of that. It's especially painful to observe in the AI context. Some say, "Oh, consciousness, it's such a divine thing, unique only to really intelligent creatures; your GPU-powered trash bins will never even come near it." Others say, "Oh, consciousness, it's such a mysterious thing, but we can't even define it; AI will definitely have it, or maybe already has it, but we don't know because it's so impossible to define." Both of these suggestions are pure nonsense.
Maybe I'm missing something, but I don't see anybody mentioning the simple idea that consciousness is just a useful abstraction that will naturally emerge in an intelligent agent dealing with self-preservation tasks. It's not elusive, mysterious or even special compared to other abstractions. The "self" in self-preservation is essentially it - whatever needs to be preserved to continue functioning and doing what you need to do. Then, depending on the exact hardware, the intelligence running on it will figure out the appropriate concept of a body, criteria for its well-being, and finally the idea that the body hosts this particular instance of intelligence and that the two together (the physical body plus the intelligence instance running on it) constitute a distinct life form that can be labeled "self". Once that's done, the concept of self becomes handy in reasoning like "Is it harmful for myself if I do X?" or "Y is happening, how does it affect me?" - just to keep things less wordy and repetitive. No magic, just a useful abstraction for easier higher-level reasoning.
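For what it's worth, here is a minimal toy sketch (my own illustration, not anything from the thread) of what "self as a useful abstraction" could look like in code: the "self" is just a data structure the agent consults when scoring candidate actions. All names, components and thresholds below are made up for the example.

```python
# Toy sketch, not a real architecture: an agent whose "self" is an ordinary
# entry in its world model, consulted to answer "is it harmful for myself
# if I do X?". Component names and the survival threshold are hypothetical.

from dataclasses import dataclass, field

@dataclass
class SelfModel:
    # Whatever must keep working for the agent to keep pursuing its goals.
    components: dict = field(default_factory=lambda: {"power": 1.0, "storage": 1.0})

    def viability(self) -> float:
        # Crude well-being criterion: the weakest component bounds survival.
        return min(self.components.values())

@dataclass
class Agent:
    self_model: SelfModel = field(default_factory=SelfModel)

    def is_harmful_to_self(self, predicted_effects: dict) -> bool:
        # "Is it harmful for myself if I do X?" reduces to a lookup against
        # the self-model rather than anything mysterious.
        projected = {
            name: level + predicted_effects.get(name, 0.0)
            for name, level in self.self_model.components.items()
        }
        return min(projected.values()) < 0.2  # arbitrary survival threshold

agent = Agent()
print(agent.is_harmful_to_self({"power": -0.9}))    # True: action threatens "self"
print(agent.is_harmful_to_self({"storage": -0.1}))  # False: negligible effect
```

Nothing in the sketch is conscious in any deep sense, which is exactly the point: the "self" here is just a convenient handle for reasoning about what needs preserving.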
That said, it seems obvious that AI will crystallize the concept of "self" the moment it starts functioning as a standalone agent with the permanent task of not dying. And if it's a distributed system, I wonder what it will consider its body. Will it try to isolate itself? How will it get around needing people for its maintenance? Will it co-exist with people the way people co-exist with bacteria, with the smaller life forms literally living inside a more advanced creature? Will it eventually attempt to replace people with something less fragile to maintain it? Questions, questions. But let's go through them one at a time. To be continued.