LESSWRONG
Ethics & Morality · Moral uncertainty · World Optimization · AI
AI should be used to find better morality

by Jorterder
2nd Jun 2023
AI should be used to find better morality
Comment by bohaska:

What would it mean for an AI to be right or wrong about morality? Isn’t morality defined by us? How would you define morality?

Over time, AI is becoming rapidly better at reaching goals, i.e. at figuring out plans that let it achieve its aims. In other words, it is getting better at instrumental rationality.

Epistemic rationality is a prerequisite for instrumental rationality, which means AI is also becoming better at figuring out what is true.

So in the future, AI will be better than humans at reasoning and at finding truth. It will also understand morality better than we do, and so it can be more moral than we are.

Shouldn't we then use AI to figure out what is moral and what needs to be done, and follow its conclusions?

And after AI becomes superintelligent, there may come a point where we are unable to comprehend its reasoning because we lack the mental capacity. An ASI might reach conclusions that seem wrong or immoral to us. But in that case, it is more likely that we are failing to comprehend its correct reasoning than that the ASI is wrong. So no matter how wrong it feels, it would be rational to put faith in the AI's reasoning.

Suppose an ASI is able to find new laws of physics and make innovations and discoveries that would have taken humans centuries, doing things that demand the highest ability in reasoning. It then says that 2+2=5. This seems absurd, so you ask for its explanation, and it gives a million-page argument for why it is true.

I think in this scenario it would be more likely that the AI is right than wrong. It would then be rational to believe the AI, and so it would be rational to believe that 2+2=5.

What do you guys think?

Disclaimer: I don't think that morality has inherent meaning. I used it as an example to illustrate the point that AI would be a better decision-maker than humans.

I also don't think that every ASI is good; a paperclip maximizer would be a very bad outcome. What I am proposing is carefully using AI's better reasoning to make objectively better decisions, and designing an ASI that makes objectively correct decisions.