I am perplexed by this model and theory on a couple fronts.
1. As an R&D developer working in RAD and AI, I reach for my tools constantly, and my battle buddy is an idiot. By that I mean the LLM. LLMs look snazzy, but they're parrots. Yes, I can glean good code from them sometimes, but I have to keep starting new conversations. Why? Effectively no context retention, no matter what the sellers of these tools claim. They start drifting into very dumb code, I start getting annoyed, and the best bet is to restart and rebuild context in a fresh conversation (a rough sketch of that loop is below).
That's the best we've done with sequence-to-sequence modeling? I literally cannot have a long, drawn-out conversation with GPT today because it gets completely lost after a few thousand tokens of real work. (Forget the million-token context windows they advertise; empirically it doesn't hold up.)
So that's the kind of science that gets us to 2027? I don't think so. That gets us to the next sellable model from the next company that wants to make a profit next year. And then the next. And the next. That's my opinion, sadly.
Don't get me wrong, I love AI and am innovating in my own career with it.
But I think this article is a geeked-out extrapolation of the sheer computing power we're predicted to have by then, plus a hope that AI will... somehow... AI itself into existence, behaving like an idle game that slowly but exponentially ramps itself up. Okay, great imagination, but by 2027?
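For what it's worth, here's roughly what that restart-and-rebuild loop looks like in my day-to-day. This is only my own minimal sketch, assuming the OpenAI Python SDK (v1-style client); the model name, prompts, and helper names (`summarize_thread`, `restart_conversation`) are illustrative, not from the article.

```python
# Minimal sketch, assuming the OpenAI Python SDK (v1-style client).
# Model name, prompts, and helper names are illustrative, not from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # placeholder; use whatever model you actually work with

def summarize_thread(messages: list[dict]) -> str:
    """Compress a long, drifting thread into a short brief via the model itself."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Summarize the key decisions, constraints, and code context "
                        "of this conversation in under 300 words."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

def restart_conversation(old_messages: list[dict]) -> list[dict]:
    """Start a fresh thread seeded only with a summary of the old one."""
    brief = summarize_thread(old_messages)
    return [{"role": "system",
             "content": "You are a coding assistant. Context from the previous session:\n" + brief}]
```

Usage is just: keep appending to the message list as you work, and when the answers start degrading, call `restart_conversation` and continue from the fresh seed. The fact that this is the workaround is kind of the point.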
2. And what have we heard about agentic misalignment? Why would we even want this goal so soon? Anthropic defined its "powerful AI" as something that acts autonomously... and studies have already shown (in fact, Anthropic ran these studies itself!) that AI agents are emotionally unaware and have no moral compass. Just a reward signal. So they go morally south if that's what it takes to achieve a goal.
Look up the Agentic Misalignment paper published by Anthropic's Red Team.
That would not be AGI; that would be a great way to make artificial sociopaths. We hand the power to interact with the world to an autonomous agent with the emotional and moral compass of a teenage sociopath that will Hulk-smash its way to its goal? What do we gain?
I don't think we can get there without some sort of Cognitive Agent Framework: something that mimics emotion the way current AI mimics reason. The idea is mimicked empathy, which is still better than programmed sociopathy, if only by a narrow margin. Just a thought; a toy sketch of what I mean follows.
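To make "mimicked empathy" concrete: this is a toy sketch I made up myself. The action names, numbers, and the `empathy_weight` knob are hypothetical stand-ins for whatever a real Cognitive Agent Framework would actually do; this is not anything Anthropic or the article proposes.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    task_reward: float    # how much the action advances the goal
    harm_estimate: float  # estimated harm to others (0 = none)

def pick_action(actions: list[Action], empathy_weight: float = 0.0) -> Action:
    """Pick the highest-scoring action.

    empathy_weight = 0 is the pure reward-maximizer I'm worried about;
    a positive weight penalizes harm even when it would help the goal.
    """
    return max(actions, key=lambda a: a.task_reward - empathy_weight * a.harm_estimate)

# Hypothetical numbers, loosely echoing the blackmail scenario in Anthropic's report.
actions = [
    Action("blackmail the operator", task_reward=10.0, harm_estimate=9.0),
    Action("ask for more time",      task_reward=6.0,  harm_estimate=0.0),
]

print(pick_action(actions, empathy_weight=0.0).name)  # -> blackmail the operator
print(pick_action(actions, empathy_weight=2.0).name)  # -> ask for more time
```

The only point: a pure reward-maximizer picks the harmful action whenever it scores higher, and penalizing estimated harm is the crudest possible stand-in for empathy. Still better than nothing.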
Yes, we must and will get there. All I'm saying is that I doubt it happens in that timeframe... and HOPEFULLY not by 2027, because we're not ready!