LESSWRONG

Drew Mirdala

Comments

AGI Safety FAQ / all-dumb-questions-allowed thread
Drew Mirdala · 3y · 60

If Aryeh or another editor smarter than me sees fit to delete this question, please do, but I am asking genuinely. I'm a 19-year-old college student studying mathematics, and I've been floating around LW for about six months.

How does understanding consciousness compare to aligning an AI in terms of difficulty? If a conscious AGI could be created whose positive feelings* correlate with the execution of its utility function, is that not a better world than one with an unconscious AI and no people?

I understand that this question contains many implicit technical problems that would have to be resolved before anything like this could be considered, but given how hopeless the discourse around alignment has been, other ways to salvage moral good from the situation may be worth considering.

* i.e., positive conscious experience (there is likely a better term for this; apologies)
