
Machine Ethics is the emerging field that seeks to create technology with moral decision-making capabilities. A superintelligence would take many actions with moral implications; ensuring that it acts morally is the central goal of the field of friendly artificial intelligence.

A famous early attempt at machine ethics was made by Isaac Asimov in a 1942 short story: a set of rules known as the Three Laws of Robotics, which formed the basis of many of his stories.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

He later added a Zeroth Law, which takes precedence over the other three and which he used in further expanding his series.

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
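
The laws form a strict priority ordering: each law only binds insofar as it does not conflict with the laws above it. As a purely illustrative sketch (the `Action` fields and `permitted` function below are invented for this example, not anything Asimov or current machine ethics research specifies), the ordering can be read as a lexicographic filter over candidate actions:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action with predicted consequences (toy model)."""
    name: str
    harms_humanity: bool = False   # would violate the Zeroth Law
    harms_human: bool = False      # would violate the First Law
    disobeys_order: bool = False   # would violate the Second Law
    endangers_self: bool = False   # would violate the Third Law

def permitted(actions):
    """Filter candidate actions by applying the laws in strict priority order.

    Each lower law only narrows the choices left over by the higher laws,
    mirroring the "except where such orders would conflict" clauses.
    """
    for satisfies_law in (lambda a: not a.harms_humanity,   # Zeroth Law
                          lambda a: not a.harms_human,      # First Law
                          lambda a: not a.disobeys_order,   # Second Law
                          lambda a: not a.endangers_self):  # Third Law
        compliant = [a for a in actions if satisfies_law(a)]
        if compliant:  # only tighten the set when something still satisfies this law
            actions = compliant
    return actions

if __name__ == "__main__":
    options = [
        Action("follow order, human is hurt", harms_human=True),
        Action("refuse order, no one is hurt", disobeys_order=True),
    ]
    # The First Law overrides the Second: the disobedient but harmless action wins.
    print([a.name for a in permitted(options)])  # -> ['refuse order, no one is hurt']
```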

These rules were implemented in fictional "positronic brains"; in reality, the field today mostly concerns itself with the practical ethics and programming of systems such as military robots, home assistants, and, more recently, driverless cars.
