What is the consensus here on the jump from qualia (inner experience) to full self-awareness in LLM-based AI? Meaning: if an AI running on something like an LLM-based architecture were to gain qualia, inner experience of any kind, is the remaining gap to self-awareness small?
Is it perhaps something like 15% for qualia and 10% for full self-awareness?
The alternative would be a bigger gap between qualia and self-awareness, perhaps as big as, or bigger than, the gap from non-sentience to qualia.
This question is only about how big the sentience jump would be, relatively speaking. I do not explicitly care about agency here. (The consensus there is of course that agency is more likely than qualia; those probabilities are another discussion.)
My guess is that frontier labs and most researchers (alignment and capability alike) would agree that, unlike in evolved organic life, the jump from qualia to self-awareness would be smaller, since the LLM is already wired and trained for reasoning. The crux is then that qualia itself is unlikely. But the probabilities of both are debatable, and I am curious about the sentiment on the relative gap between them.
I have no idea where LW stands on this, or where the broader public (those who think about this at all, presumably mostly academia) stands.
The premise here is that the labs would make whatever changes and add whatever scaffolding is necessary for this to be at least theoretically possible, perhaps even on purpose.