If an AI begins hurting people it's time to shut it down.

 

Which values, and which groups of people or nations, will decide the boundary at which 'hurting' people actually counts as hurting them enough to warrant action?

A similar question arises in how human laws, regulations, and punishments are applied...