If Aryeh or another editor smarter than me sees fit to delete this question, please do, but I am asking genuinely. I'm a 19-year-old college student studying mathematics who has been floating around LW for about 6 months.

How does understanding consciousness compare to aligning an AI in terms of difficulty? If a conscious AGI could be created that correlates positive feelings* with the execution of its utility function, is that not a better world than one with an unconscious AI and no people?

I understand that many other technical problems are implicit in that question and would have to be resolved before anything like this could be seriously considered, but given how hopeless the discourse around alignment has been, other ways to salvage moral good from the situation may be worth considering.

* positive conscious experience (there is likely a better term for this; apologies)