The race to integrate AI means many decision makers and leaders are making choices about AI with only superficial knowledge and consideration. I believe the largest risk in AI is human mismanagement. The human brain runs on human logic, with its characteristic human biases. Just as it took us a long time to even identify fallacies and learn how not to fall for them, we will first need to define AI fallacies, and human fallacies toward AI, and then teach the public these condensed lessons so that we can easily identify 90% of bad uses of AI before they happen. There are no established best practices yet for operating AIs successfully. You will need to be one of the first to figure that out if you want to succeed in the new world to come.
