The problem with this argument is that it is very similar to arguments made in the late 1950s in a previous wave of AI optimism.
Why is that a problem? Many technologies are eventually developed after earlier dreams of building them failed.
Yes, but when it does finally succeed, SOMETHING must be different.
That is what I go on to discuss. That something, of course, is the invention of DL. So my claim is that if DL is really no better than symbol systems, then the argument will come to the same inglorious end this time.
Hmm, so is your argument basically "human-level intelligence is so hard, machines will not get there in the foreseeable future, so there is no need to worry about AI alignment"? Or is there something else?
No, I don't think it is. AI systems can influence decisions even in their fairly primitive state, and we must think carefully about how we use them. But my position is that we don't need to worry about these machines developing extremely sophisticated behaviours any time soon, which keeps the alignment problem somewhat in check.
First, we can all ignore LeCun, because despite his born-again claims, the guy wants to solve all the DL problems he has been pushing (and winning awards for) with more DL (absolutely not neuro-symbolic, despite borrowing related terms).
Second, I made the case that the amazing progress in ML is only good for more ML, and that AGI will come from a different direction. Finding that out seems to have been the point of this contest, but the distribution of posts indicates strong confirmation bias toward more, faster, sooner, danger!
Third, I think most people understand your position, but you have to understand the audience. Even if there is no AGI by 2029, on a long enough time scale we don't just reach AGI; a likely outcome is that intelligent machines exist for tens of thousands of years longer than the human race (and they will be the ones to make first contact with another race of intelligent machines from halfway across the galaxy; and yes, it's interesting to consider future AIs contemplating the Fermi Paradox long after we have died off).
I love it! Ignore LeCun. Unfortunately, he is pushing roughly the same line as Bengio, and is actually less extreme than Hinton. The heavyweights are on his side.
So yes, maybe from some direction, one day we will have intelligent machines. But for a funding agency it is not nearly clear enough what that direction is. It is certainly not the kind of DL responsible for the current success, transformers for example.
I have learned a lot since my posts on the EA and LessWrong forums in response to the call from the Future Fund Worldview Prize. I received many insightful comments from people who just didn't want to give up on the idea of AGI. I decided to briefly summarise the main arguments from the comments as well as some related posts, because I feel this has really clarified the essential points.
The announcement is quite specific that they want to call into question "Future Fund's fundamental assumptions about the future of AI", namely that "artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century". These are bold beliefs considering the possibility that "all of this AI stuff is a misguided sideshow". It was my intention to show that in fact it IS a misguided sideshow, which calls into question the size of the impact AI will have on society. But this led many people to misinterpret my views on whether or not AI might pose dangers for us in the future. My feeling on this is exactly what is stated in the call: "AI is already posing serious challenges: transparency, interpretability, algorithmic bias ....". I agree. The real question I was concerned with is whether or not a more profound version of AI is coming soon, one which could be called AGI or, even more dramatically, Superintelligence (Nick Bostrom). This is where I think the claims and predictions are unrealistic.
There are many arguments people advance in support of their belief that there will be such a change. As far as I can tell, most of these boil down to a kind of extrapolation, a view that is eloquently captured by another post in this stream: "We've already captured way too much of intelligence with way too little effort. Everything points towards us capturing way more of intelligence with very little additional effort." The problem with this argument is that it is very similar to arguments made in the late 1950s, in a previous wave of AI optimism. For example, Licklider (1960) points out that "Even now, Gelernter's IBM-704 program for proving theorems in plane geometry proceeds at about the same pace as Brooklyn high school students ..... there are, in fact, several theorem-proving, problem-solving, chess-playing, and pattern-recognizing programs ... capable of rivaling human intellectual performance in restricted areas..", and the U.S. Air Force "estimated that it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving of military significance." We all know where those estimates ended up. One possible saving grace lies in Nick Bostrom's argument that the computational power in those days was not sufficient, so we should not have expected the algorithms to really achieve these goals, and more powerful hardware might reverse the failure. Nevertheless, Nick also concedes that advances in an alternative "bottom-up" form of learning and cognition are probably also required.
This is the second prong in the argument for AGI: that we are now firmly entrenched in a "bottom-up" paradigm that will overcome the limitations of past approaches. The argument is that Deep Learning (DL) has not only given us models that can perform impressive tasks in the areas of visual perception and language, but has given us a new scientific paradigm which better approximates the properties of thinking and learning that humans have. In fact, the need for a paradigm shift is now acknowledged by both critics and advocates of AI. In this exchange Gary Marcus and Yann LeCun both paraphrase the old parable: "Okay, we built this ladder, but we want to go to the moon, and there's no way this ladder is going to get us there" (LeCun). What we need is a rocket ship, and Browning and LeCun argue that this rocket ship must jettison old ideas like Newell and Simon's "Physical Symbol System Hypothesis", which states that "A physical symbol system has the necessary and sufficient means for intelligent action." This, then, is the often unstated reason for the optimism that this time the current successes will continue all the way to AGI.
My argument defeats the Browning and LeCun position that DL and gradient descent learning supply an alternative that can dispense with the Physical Symbol System Hypothesis. This undermines the main reason to believe that DL approaches will deliver on their promises any more than the systems of the 1950s did. Many people misunderstood the argument and thought I was proving that AGI is impossible. But this is not true. The argument undermines the claim that the DL systems that are currently the object of research are sufficient to achieve AGI. It leaves open the possibility that in 500 years we might finally get there using some new advances that we cannot yet imagine.
Some people thought my argument was that AI had to discover exactly the same algorithms that the human mind uses, but this is not the case. We know even from computer programming that similar functions can be implemented in many different ways, and even in different programming languages. The point is that certain fundamental properties of the implementation must be present, such as the use of variables and control structures, a syntax of expressions that can be used by a compiler/interpreter, and so on. This is an analogy of course, but my claim is that some cognitive processes seem to involve symbol systems, that DL has given us no reason to believe this is false, and that DL therefore cannot eliminate the need for these systems. Another misunderstanding is that neural networks can't implement symbol systems. The fact is that they can, as the classic paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" by McCulloch and Pitts shows. The point is that such neural networks are severely limited, and there is at the moment no serious effort to implement systems with complex symbol-manipulating abilities. In fact this is precisely the kind of effort that Browning and LeCun discourage.
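To make the McCulloch and Pitts point concrete, here is a minimal sketch (in Python) of threshold units computing Boolean functions. The weights, thresholds, and function names are my own illustration, not taken from the paper, and the simple weighted-sum-with-threshold rule stands in for their original excitatory/inhibitory formulation.

```python
# McCulloch-Pitts style threshold units: a "neuron" fires (outputs 1) exactly
# when the weighted sum of its binary inputs reaches its threshold.
# (Illustrative simplification, not the paper's original notation.)

def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Basic logic gates, each realised by a single threshold unit.
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

def NOT(a):
    return mp_neuron([a], [-1], threshold=0)  # negative weight acts as inhibition

# Composing units gives more complex logical functions, e.g. XOR,
# which no single threshold unit can compute on its own.
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  XOR={XOR(a, b)}")
```

Since any finite Boolean function can be built up from such gates, networks of simple units can in principle implement logical, symbol-like computation, which is the sense in which McCulloch and Pitts showed that neural networks can implement symbol systems.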
So there it is. People who believe that AGI is imminent do so because the prevailing winds are saying that we are finally onto something that is a closer match to human cognition than anything we have ever tried. The winds sound like a storm, but they are really just a whimper.