I think a big problem with LLMs as we know them is that they are "god models": monolithic systems too large for anyone to fully comprehend. Smaller models are much easier to study and modify. We need something like the Drosophila of AI: a small, cheap-to-train model organism that the whole field can poke at.
Something I've started to do is build toy models that exhibit specific large-model behaviors. I suspect much of what the large models do can be reproduced in small models if we can figure out which parts of the massive training sets give rise to the behavior we want. A rough sketch of the idea follows.
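To make that concrete, here is a minimal sketch in Python of what one such experiment might look like, assuming the Hugging Face transformers library. Everything specific here is a placeholder assumption, not a validated method: `corpus` stands in for a large training set, `looks_like_target_behavior` is a hypothetical keyword filter for whatever behavior you want to isolate, and the model size and hyperparameters are arbitrary.

```python
# A minimal sketch: filter a corpus for a target behavior, then train a
# tiny model from scratch on only that subset. All names and numbers
# below are illustrative assumptions.
import torch
from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast

def looks_like_target_behavior(text: str) -> bool:
    # Hypothetical heuristic: keep documents that contain arithmetic.
    return "=" in text and any(ch.isdigit() for ch in text)

corpus = [
    "2 + 2 = 4 is a basic fact of arithmetic.",
    "The cat sat on the mat.",
    "If x = 3 then x + 1 = 4.",
]  # placeholder standing in for a massive data set
subset = [doc for doc in corpus if looks_like_target_behavior(doc)]

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# A deliberately tiny model, initialized from scratch rather than pretrained.
config = GPT2Config(n_layer=4, n_head=4, n_embd=256,
                    vocab_size=tokenizer.vocab_size)
model = GPT2LMHeadModel(config)

batch = tokenizer(subset, truncation=True, max_length=64,
                  padding="max_length", return_tensors="pt")
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for step in range(100):
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The interesting question is then whether the toy model picks up the behavior at all, and how the behavior degrades as you vary which slice of the data you keep.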
An AI being conscious would not imply that it understands it is conscious. There was a horse called Clever Hans that supposedly could do arithmetic but was really just reading involuntary cues from its handler and the audience. Imagine a handler who got visibly excited whenever the horse signaled that it was conscious. The horse would signal it every time, with no understanding of what it was saying, and it could still be conscious all the while.
I do not know whether LLMs are conscious, but the fact that they say so tells us nothing either way.