LESSWRONG
Ethics & Morality · Internal Alignment (Human) · Nootropics & Other Cognitive Enhancement · AI · World Optimization
Frontpage

Artificial Moral Advisors: A New Perspective from Moral Psychology

by David Gross
28th Aug 2022
1 min read
This is a linkpost for https://dl.acm.org/doi/pdf/10.1145/3514094.3534139

Comments

Phil Tanny:

Tired: can humans solve artificial intelligence alignment?

Wired: can artificial intelligence solve human alignment?

 

Apologies that I haven't read the article (I'm not an academic), but I just wanted to cast my one little vote that I enjoy this point, and the clever way you put it.

Briefly, it's my sense that most of the self-inflicted problems which plague humanity (war, for example) arise out of the nature of thought, that which we are all made of psychologically. They're built in.

I can see how AI, like computing and the Internet, could have a significant impact upon the content of thought, but not the nature of thought.

Genetic engineering seems a more likely candidate for editing the nature of thought, but I'm not at all optimistic that this could happen any time soon, or perhaps ever.



“Philosophers have recently put forward the possibility of achieving moral enhancement through artificial intelligence… proposing various forms of ‘artificial moral advisor’ (AMA) to help people make moral decisions without the drawbacks of human cognitive limitations. In this paper, we provide a new perspective on the AMA, drawing on empirical evidence from moral psychology to point out several challenges to these proposals that have been largely neglected by AI ethicists.”