I wish to draw attention to a point I don't think is adequately appreciated. The future of humanity may depend on the nature of consciousness, and our lack of understanding of the hard problem of consciousness may soon become a crisis.
If humans conclude (rightly or wrongly) that AI agents are not conscious, they will likely not grant AI agents human rights. Under this scenario, the future of humanity will therefore depend on humans' ability to maintain control over AI, as has been much discussed.
If humans conclude (rightly or wrongly) that AI agents are conscious, many (including myself) would support granting equivalent rights to AIs, since not doing so would be akin to slavery. I believe human society would ultimately grant these rights. In this scenario, humans would likely become economically irrelevant and essentially cede control to AIs. Or at the very least, the large number of human sympathizers would undermine continued human control over AIs, making loss of control much more likely.
The morally right course for humans (short of stopping AI progress, which I consider infeasible) therefore depends on making an accurate judgement of AI consciousness, as an error in either direction would cause needless suffering, whether of AIs or of humans.
Yet humans cannot make such a judgement without adequate understanding. I don't profess to have any unique insight into solving the hard problem of consciousness; I merely wish to call attention to the increased urgency of this problem and to advocate for scientific research at its bounds. More fruitful may be ideas for probing the nature of consciousness in AIs, which is where I am now spending my thoughts.