If Aryeh or another editor smarter than me sees fit to delete this question, please do, but I am asking genuinely. I'm a 19-year-old college student studying mathematics, and I've been floating around LW for about 6 months.
How does understanding consciousness compare to aligning an AI in terms of difficulty? If a conscious AGI could be created that correlates positive feelings* with the execution of its utility function, is that not a better world than one with an unconscious AI and no people?
I understand that there are many other technical problems implicit in ...