MIRI's March newsletter links to this post, which argues against prioritizing AI safety on the grounds that we have not yet reached a number of "canaries in the coal mine" of AI. The post lists:

  • The automatic formulation of learning problems
  • Self-driving cars
  • AI doctors
  • Limited versions of the Turing test

What other sources identify warning signs for the development of AGI?

Daniel Kokotajlo

Apr 01, 2020

AI Impacts has a list, with sources, of reasons people give for why current methods won't lead to human-level AI. It's not exactly what you are looking for, but it's close, because most of these could be inverted and used as warning signs for AGI. For example, "Current methods can't build good, explanatory causal models" becomes "When we have AI which can build good, explanatory causal models, that's a warning sign."