/Edit 1: I want to preface this by saying I am just a noob who has never posted on Less Wrong before.
/Edit 2:
I feel I should clarify my main questions (which are controversial): Is there a reason why turning all of reality into maximized conscious happiness is not objectively the best outcome for all of reality, regardless of human survival and human values? Should this in any way affect our strategy for aligning the first AGI, and why?
/Original comment:
If we zoom out and look at the biggest picture philosophically possible, then aren't there only two things that ultimately matter in the end: the level of consciousness and the overall "happiness" of...