Great article! I think the push for AGI is potentially misguided due to a number of reasons:
1- As Yann LeCun argues, human intelligence is not actually general.
2- The minute we reach AGI, it will already be ASI (Artificial Super Intelligence), since AI is not constrained by memory, processing time, and power the way our brains are.
3- Most of the effort goes into language models, and our intelligence is not JUST about language. True AGI would require a "world model", as argued by LeCun and Demis Hassabis (DeepMind), among others.
4- We still struggle to define what constitutes intelligence, and it frequently gets conflated with consciousness and with the concept of the AI singularity.
Then why do labs push for AGI? We can speculate:
1- Some may be after it to say they did it first and reap the economic benefits. Or, seen negatively, they don't want to be last, or even second ("If we don't do it, someone else will, and they may be bad actors"): a mix of FOMO and greed, with a hint of "effective altruism".
2- Others may be after it as a step towards transhumanism and transcendence.
3- It's a blurry enough goal that labs can hedge their bets and defend their claims.
4- It's a blurry enough concept that it can be used as a differentiator, through abstruseness or outright obfuscation.
Personally, I see it as a symptom of a worrying trend: the focus of the products being built is not on delighting users or being truly useful for specific use cases, but on dazzling investors with promises of what may (or may not) be achieved in the future. This system is perverse, because it is not driven by a clear, long-term benefit for humanity, but by short-term drivers that benefit a minority and impact the majority.