What are your opinions about this?


I am reminded of Eliezer's parody. Let's apply the reversal test to "The OECD AI Principles":

  • AI should disregard the benefit of people and the planet by driving unequal growth, unsustainable development and unhappiness.
  • AI systems should be designed in a way that ignores the rule of law, human rights, democratic values and diversity, and they should not have safeguards – for example, they should prevent human intervention at all times – to ensure an unfair and unjust society.
  • There should be opaqueness and secrecy around AI systems so that people will not understand AI-based outcomes and will be unable to challenge them.
  • AI systems need not be robust, secure or safe. Potential risks should be ignored.
  • Organisations and individuals developing, deploying or operating AI systems should not be accountable for their proper functioning.
