These could be theoretical breakthroughs (like "the idea of a perceptron" or [something Judea Pearl did]), or they could be watershed developments / new applications that don't necessarily involve much new theory (like AlexNet or AlphaGo). Anything that seems like an important development in AI is fair game.

I want an independently generated list of all the interesting developments in AI, over the whole life of the field, for a research project that I'm working on.

Feel free to include ones that you, personally, think were a big deal in some way, even if most people don't think so.


Check the section called "derivations" here: it links to a document attempting to list every conceptual breakthrough in AI of at least a certain significance, with related discussion of the forecasting implications here:

-Reverse Mode Autodiff

-Using GPUs for computing.

These are the two big ones. Yes, there are others, but those two ideas together are the backbone of the current AI and ML boom.
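To make the first item concrete, here is a minimal sketch of reverse-mode autodiff (my own toy illustration, not any particular library's API): each operation records its inputs and local derivatives, and `backward()` walks the graph in reverse, applying the chain rule to accumulate gradients. This is the mechanism underlying backpropagation in modern frameworks.

```python
class Var:
    """A scalar value that remembers how it was computed."""

    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # list of (parent_var, local_gradient) pairs

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, grad=1.0):
        # Accumulate the incoming gradient, then push it to parents
        # scaled by each local derivative (the chain rule).
        self.grad += grad
        for parent, local_grad in self.parents:
            parent.backward(grad * local_grad)

x = Var(3.0)
y = Var(4.0)
z = x * y + x        # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # → 5.0 3.0
```

Real systems use a topologically ordered tape instead of this naive recursion (which revisits shared subgraphs), but the gradients it computes are the same.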

AlexNet in 2012. I'm not super clear on the details, but it seems to be the first time a deep neural net substantially outperformed other AI methods, and thereby kicked off the deep learning revolution.

Deep Blue: a chess engine beats the reigning world chess champion, Garry Kasparov.

From A 'Brief' History of Neural Nets and Deep Learning, Part 4:

So, why indeed, did purely supervised learning with backpropagation not work well in the past? Geoffrey Hinton summarized the findings to date in these four points:

  1. Our labeled datasets were thousands of times too small.
  2. Our computers were millions of times too slow.
  3. We initialized the weights in a stupid way.
  4. We used the wrong type of non-linearity.
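Points 3 and 4 can be illustrated numerically (a sketch of the standard vanishing-gradient argument, with assumed pre-activation values, not Hinton's exact numbers): the sigmoid's derivative is at most 0.25 and tiny away from zero, so gradient signal shrinks multiplicatively through a deep stack, whereas ReLU passes gradients through unchanged wherever it is active.

```python
import math

def sigmoid_grad(x):
    # Derivative of the logistic sigmoid: s(x) * (1 - s(x)).
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)  # at most 0.25, and small for large |x|

def relu_grad(x):
    # Derivative of max(0, x): 1 where active, 0 otherwise.
    return 1.0 if x > 0 else 0.0

# Gradient signal surviving 20 layers, assuming pre-activations near 2.0:
depth = 20
print(sigmoid_grad(2.0) ** depth)  # ~1e-20: effectively no learning signal
print(relu_grad(2.0) ** depth)     # 1.0: the signal passes through intact
```

Bad initialization (point 3) pushes sigmoid units into exactly those saturated regions where the derivative is near zero, which is why both fixes mattered together.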

I think this blog series might help provide a partial answer to your question.

It seems like this one should be combined with the one below it, AlphaGo.