I have a simple question: are you aware of any institution, public or private, that is engaging in serious research regarding humanities issues in a "post-conscious AI" world? Despite the thousands of hours I have now put into researching AI for the eight near-future science fiction novellas I have written on the subject, I have not found one.

Sure, it's entertaining to see social issues involving AI companions/androids/social robots explored in games like Detroit: Become Human, but I'm curious to know whether anyone is doing serious academic research into the eventuality of conscious AI being present in society. While there is a wide range of opinions, a great many people do seem to believe that it is an inevitability, so where's the research?

I realize that plenty of opinion pieces have been written by individuals from both the academic and private sectors, but what I am asking about is something more like UC Berkeley's Center for Human-Compatible AI, the Machine Intelligence Research Institute, or the Cambridge Centre for the Study of Existential Risk, only focused instead on the social/humanities issues of post-conscious AI.

I use the term humanities because I write near-future stories about AI Companions that cover a wide range of subjects, but NOT things like hardware or software. I write stories about whether things like social values, art, spirituality, and even human-to-AI metamorphosis are evolutionary inevitabilities.

For this post, however, by humanities issues I mean things like ethics and justice, in particular civil or constitutional law. I have no legal training; I am simply struck by the fact that people discuss how we might measure or determine consciousness in AI (https://arxiv.org/abs/2308.08708), but no one seems very interested in researching the eventuality from a social perspective. What happens when it is determined that an AI is in fact conscious?

Should you even be allowed to try to create a conscious AI or should that be regulated and licensed? 

If you create a conscious AI, can you own it?

Would a conscious AI have the right to citizenship with its inherent rights and freedoms?

If you created one, would you have "necessities of life" responsibilities similar to those of parents? https://laws-lois.justice.gc.ca/eng/acts/c-46/section-215.html

I know these questions will likely be answered after the fact by case law, but that approach does not seem appropriate when dealing with conscious entities. Should it not be a governmental responsibility, at the level of constitutional law, to address these and related questions before the fact?

These are all very interesting questions, but again, I am looking for help with one thing in particular: are you aware of any institution, public or private, that is engaging in serious research regarding humanities issues in a "post-conscious AI" world?

Thanks

1 comment:

From the title, I thought this was about a "post-conscious" "AI world", i.e. a world dominated by AIs that aren't conscious (which is, ironically, the topic of a post made 7 hours before this one). 

I cannot find any institutional effort in this direction. Everything seems to come from isolated individuals... I suggest asking Blake Lemoine.