The better framing is almost certainly "how conscious is AI in which ways?"
The question "is AI conscious?" is ill-formed. People mean different things by "consciousness". And even if we settled on one definition, there's no reason to think it would be an either-or question; like most other phenomena, most dimensions of "consciousness" are probably on a continuum.
We tend to assume that consciousness is a discrete thing because we have only one example: human consciousness, and ultimately only our own. And most people who can describe their consciousness are having a pretty standard human experience. But that's a weak reason to think there's really one discrete thing we're referring to as "consciousness".
That's my standard comment. I apologize for not reading your paper before commenting on your post title. I am starting to think that the question of AI rights might become important for human survival, but I'm waiting until we see whether it is before turning my attention back to "consciousness".
The article is a meta-analysis of consciousness research rather than an analysis of whether AI is conscious. In it, I discuss the assumptions each of the various disciplines holds.
I quite enjoyed reading this - I’m surprised I’d not read something like it before and quite happy you did the work and posted it here.
Do you have plans to use the dataset you built here to work on "figuring out if AI is conscious"?
My aim in this article is to examine the field of consciousness research through an analysis of 78 of the most important papers in the field. In doing so, I compare the empirical foundations these disciplines rely upon and detail how each approaches the measurement of conscious experience.