I disagree with your argument here, but I think it was good that you wrote a more concise post articulating the argument more clearly. I've upvoted the post, mostly because it seemed reasonable for this one to have positive karma.
So this is the boiled-down version of your argument? You clearly state that GOFAI failed and that classic symbolic AI won't help. You also seem to be in favor of DL methods, yet, somewhat confusingly, you say they don't go far enough (and then refer to others who have said as much). But I don't understand the leap to AGI, where you only seem to say we can't get there, without really connecting the two ideas. Is neurosymbolic AI not enough?
I think you're saying DL captures how the brain works without being a model for cognitive functions, but you don't expand on the link between human cognition and AGI. And while I've thrown some shade on the group dynamics here (moderators can surface the posts they like while letting others sink, i.e. discovery issues), you have to understand that this 'AGI is near' mentality is based on accelerating progress in ML/DL (and thus the safety concerns are based on the many problems with ML/DL systems). At best, we can shift attention onto more realistic milestones (though they will call that goalpost moving). If your conclusion is that this AGI stuff is unlikely to ever pan out, then yes, all of your arguments will be rejected.
Just to clarify, I am responding to the proposition "AGI will be developed by January 1, 2100". The safety issues are orthogonal because we already know that existing AI technologies are dangerous and are being used to profile people and influence elections.
I have added a paragraph before the points, which might clarify the thrust of my argument. I am undermining the reason so many people believe that DL-based AI will achieve AGI when GOFAI didn't.
This seems like a reasonable point, but why should the looming success of neurosymbolic approaches make anyone expect AGI not to happen?
This is the third post about my argument, which aims to convince the Future Fund Worldview Prize judges that "all of this AI stuff is a misguided sideshow". My first post was an extensive argument that unfortunately confused many people.
(The probability that Artificial General Intelligence will be developed)
My second post was much more straightforward, but it ended up focusing mostly on revealing the reaction that some "AI luminaries" have shown to my argument.
(Don't expect AGI anytime soon)
Now, having answered many excellent questions that exposed the confusion my argument caused, I believe I am in a position to give a very clear and brief summary of the argument in point form.
To set the scene, the Future Fund is interested in predicting when we will have AI systems that can match human-level cognition: "This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs." This is a pretty tall order. It means systems with advanced planning and decision-making capabilities. But this is not the first time people have predicted that we will have such machines. In my first article I referenced a 1960 paper which states that the US Air Force predicted such a machine by 1980. That prediction was based on the same "look how much progress we have made, so AGI can't be too far away" argument we see today. There must be a new argument or belief if today's AGI predictions are to bear more fruit than they did in 1960. My argument identifies this new belief, and then shows why the belief is wrong.
Part 1
Part 2