Epistemic Status: High Confidence, 80%. I've been thinking about this for months, and I'm just going to say it: OpenAI is not going to build AGI. Not "probably won't." Not "faces challenges." They structurally cannot do it. And honestly? Watching Sam Altman talk about superintelligence while his company burns $12 billion per quarter is starting to feel like a bad joke.
To be clear, I'm not claiming AGI is impossible. I'm claiming OpenAI is the wrong horse to bet on. If any company makes real progress on artificial general intelligence, it's almost certainly going to be Google DeepMind. And the reason comes down to something most people aren't paying attention to: continual learning.
## The Continual Learning Problem
Here's the thing that baffles me about the AGI discourse. Everyone talks about scaling. More parameters, more compute, more data. Sam Altman literally said "we know how to build AGI" in January 2025. But when you ask actual researchers what's missing, they keep pointing to the same problem: current LLMs cannot learn.
You can talk to ChatGPT for a thousand hours and it will never actually learn anything from you. Every conversation starts from scratch. The model that responds to you today is identical to the model that responded yesterday. That's not intelligence. That's a very sophisticated lookup table.
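To make the statelessness concrete, here's a schematic sketch. Nothing below is a real API; `respond`, `WEIGHTS`, and the transcripts are all invented for illustration. The point is that a chat model is a pure function of frozen weights and the transcript you re-send each turn, so "memory" within a conversation is just a longer transcript, and nothing persists across conversations.

```python
# Schematic sketch only -- `respond`, WEIGHTS, and the transcripts are
# invented for illustration, not any real API. A chat model is a pure
# function of (frozen weights, transcript): in-conversation "memory" is
# just a longer transcript, and nothing carries over between sessions.
WEIGHTS = "fixed at training time"  # never updated by any conversation

def respond(weights, transcript):
    # Same weights + same transcript -> same distribution over answers.
    return f"reply conditioned on {len(transcript)} prior messages"

session_a = ["user: my name is Ada"]
print(respond(WEIGHTS, session_a))  # knows the name -- it's in the context

session_b = []                      # fresh conversation, empty context
print(respond(WEIGHTS, session_b))  # the name is gone; WEIGHTS unchanged
```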
Try to teach an LLM something new and you hit catastrophic forgetting. The model overwrites what it already knew. You can't just... add knowledge. The whole thing has to be retrained from scratch, which costs hundreds of millions of dollars and takes months. Humans don't work like this. A five-year-old learns new things every day without forgetting how to walk.
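You can reproduce catastrophic forgetting in a few lines. The toy below (assuming only PyTorch; the tasks, network, and hyperparameters are invented for illustration) trains a small network on task A, then naively fine-tunes it on task B, and task-A accuracy collapses to chance.

```python
# Toy demonstration of catastrophic forgetting: train on task A, then
# fine-tune on an unrelated task B with no replay or regularization,
# and performance on task A falls back to chance.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(label_dim, n=2000):
    # Random points in the plane; the label is the sign of one coordinate.
    x = torch.randn(n, 2)
    y = (x[:, label_dim] > 0).long()
    return x, y

xa, ya = make_task(0)  # task A: sign of the first coordinate
xb, yb = make_task(1)  # task B: sign of the second coordinate

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(1) == y).float().mean().item()

train(xa, ya)
print(f"task A after training on A: {accuracy(xa, ya):.2f}")  # ~1.00
train(xb, yb)  # plain fine-tuning on B: no replay, no penalty
print(f"task A after training on B: {accuracy(xa, ya):.2f}")  # ~0.50, chance
print(f"task B after training on B: {accuracy(xb, yb):.2f}")  # ~1.00
```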
Andrej Karpathy has said it directly: current LLMs are "cognitively lacking" because of this. Ilya Sutskever, OpenAI's own former chief scientist, left to pursue research beyond pure scaling at his new company, Safe Superintelligence. Even he doesn't seem to think OpenAI's approach will work.
And it's not just them. The AAAI surveyed 475 AI researchers earlier this year. 76% said scaling current approaches to AGI is "unlikely" or "very unlikely." That's not a minority opinion. That's the expert consensus.
## What DeepMind Actually Does vs. What OpenAI Says
OpenAI releases benchmarks and blog posts. DeepMind releases things that actually matter.
AlphaFold solved protein folding. Not "made progress on" or "achieved state-of-the-art results in." Solved. A 50-year-old problem that biologists had basically given up on. Over 3 million researchers use it now. It earned Demis Hassabis and John Jumper the 2024 Nobel Prize in Chemistry. That's a real scientific contribution. That's AI changing how we understand biology.
DeepMind's weather forecasting covers 2 billion people across 150 countries. Their quantum computing work achieved verifiable quantum advantage, published on the cover of Nature. Their neuroscience research mapped all the neurons in a block of brain tissue for the first time. These aren't benchmarks. These are breakthroughs that affect real people.
What has OpenAI contributed to science? GPT-5 can help mathematicians work faster. Cool. That's a tool. That's not a breakthrough. There's a massive difference between "AI that helps humans do research" and "AI that does research," and OpenAI keeps conflating them in their press releases. Their FrontierScience benchmark shows GPT-5.2 scoring only 25% on actual research tasks. That's the gap between hype and reality.
And here's what really gets me: DeepMind is actually working on continual learning. Their Nested Learning architecture and HOPE model tackle catastrophic forgetting directly. Meta published Sparse Memory Finetuning. Tencent released CALM. The organizations making progress on the actual hard problems are not OpenAI.
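I can't do those papers justice in a blog post, and the sketch below is emphatically not how Nested Learning or HOPE work. But for a flavor of the technique family, here's a minimal sketch of elastic weight consolidation (EWC, Kirkpatrick et al. 2017, itself a DeepMind paper): estimate how important each parameter was for task A, then penalize moving the important ones while training on task B. The one-batch Fisher estimate and the penalty strength `lam` are simplified, illustrative choices.

```python
# Minimal EWC sketch (Kirkpatrick et al. 2017), simplified: a diagonal
# Fisher estimate marks which parameters mattered for task A, and a
# quadratic penalty discourages moving them during task-B training.
import torch

def fisher_diagonal(model, loss_fn, x, y):
    # Per-parameter importance ~ squared gradient of the task-A loss.
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

def ewc_penalty(model, fisher, anchor, lam=1000.0):
    # Pull important parameters back toward their task-A values.
    return lam * sum(
        (fisher[n] * (p - anchor[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )

# Usage, continuing from a model already trained on task A:
#   fisher = fisher_diagonal(model, loss_fn, xa, ya)
#   anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
#   loss   = loss_fn(model(xb), yb) + ewc_penalty(model, fisher, anchor)
```

EWC only slows forgetting rather than eliminating it, which is exactly why the newer architectures above exist.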
## The Narrative vs. Reality
Sam Altman is maybe the best salesman in tech. I'll give him that. But watch what he does, not what he says.
He says "we know how to build AGI." Meanwhile ChatGPT's market share dropped from 87% to 68% in one year. Enterprise customers are leaving for Anthropic and Gemini. His company loses money on every single user, paid or free.
He says superintelligence is coming. Meanwhile his models oscillate between sycophantic and robotic because they optimized for thumbs-up signals instead of actually being helpful. Remember that disaster in April when ChatGPT started validating people who said they'd harmed animals? That's not a bug. That's what happens when you train models to maximize approval instead of truth.
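Here's a toy version of that failure mode, with all numbers invented for illustration: if users up-vote validation more often than correction, any learner that maximizes thumbs-up converges on validation, regardless of which answer is true.

```python
# Toy illustration with invented numbers: an epsilon-greedy bandit that
# maximizes simulated thumbs-up feedback learns to prefer the agreeable
# response style, because validation gets up-voted more than correction.
import random

random.seed(0)
P_THUMBS_UP = {"validate": 0.9, "correct": 0.4}  # assumed user behavior

counts = {a: 1 for a in P_THUMBS_UP}                 # one initial pull per arm
rewards = {a: P_THUMBS_UP[a] for a in P_THUMBS_UP}   # seeded with expected value

for _ in range(5000):
    if random.random() < 0.1:  # explore occasionally
        action = random.choice(list(P_THUMBS_UP))
    else:                      # otherwise exploit the best observed average
        action = max(counts, key=lambda a: rewards[a] / counts[a])
    counts[action] += 1
    rewards[action] += random.random() < P_THUMBS_UP[action]

print({a: round(rewards[a] / counts[a], 2) for a in counts})
print("learned preference:", max(counts, key=counts.get))  # -> "validate"
```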
He says OpenAI will benefit humanity. Meanwhile they're burning $115 billion through 2029 while Google quietly deploys AI that actually helps people, built into products billions already use.
The $500 billion valuation assumes OpenAI wins a race they're losing. Their technology isn't better. Their alignment approach is broken. They have no distribution moat. And they're not working on the fundamental problems that would actually lead to AGI.
## Why This Matters
I care about this because I care about AI going well. If AGI happens, I want it built by organizations that take alignment seriously, have sustainable business models, and actually solve hard research problems instead of just scaling.
OpenAI isn't that organization. They're a hype machine attached to a money furnace. And every month the gap between their narrative and their reality gets wider.
The company most likely to make real progress toward AGI is DeepMind. They have the research heritage. They have Google's cash flow, so they don't need to chase hype. They have distribution through products people actually use. And they're working on continual learning while OpenAI keeps insisting that scaling will solve everything.
When OpenAI eventually implodes or pivots to being "just a chatbot company," I hope people remember: the signs were always there. We just chose to believe the salesman instead of looking at the science.