There is an unspoken presumption behind the alignment problem.

The unspoken presumption is this: building an AI with a sound, robust epistemology and leaving it to observe the world through digital information is insufficient to produce a morally good AI.

In other words, knowing the Truth is not a sufficient condition for moral goodness.

What is the basis for this presumption?
