"If the Superintelligence were near, OpenAI wouldn't be selling ads."
"If the Superintelligence were near, OpenAI wouldn't be adding adult content to ChatGPT."
"If the Superintelligence were near, OpenAI wouldn't be taking ecommerce referral fees."
"If the Superintelligence were near and about to automate software development, Anthropic wouldn't have a dozen of open roles for software developers."
"If the Superintelligence were near, OpenAI wouldn't be trying to take a cut of scientific innovations created with OpenAI models."
"If the Superintelligence were near, OpenAI employees wouldn't be selling OpenAI equity in the secondary market."
"If the Superintelligence were near, OpenAI wouldn't be doing acquisitions such as io, Roi, Torch, Sky, and Neptune."
"If the Superintelligence were near, OpenAI wouldn't be spending compute with Studio Ghibli or the Sora app."
"If the Superintelligence were near, Anthropic wouldn't be rumored to have hired lawyers for a 2026 IPO."
"If the Superintelligence were near, Google wouldn't be selling and renting TPUs to Anthropic."
"If the Superintelligence were near, Trump would know that and he wouldn't allow H200 sales to China."
"If the Superintelligence were near, Ilya wouldn't have left OpenAI to create his own underfunded AI Lab."
"If the Superintelligence were near, Mira Murati and John Schulman wouldn't have left OpenAI to create their own underfunded AI Lab."
"If the Superintelligence were near, Anthropic wouldn't be cheap and would allow us to use Claude Max subscription inside of OpenCode."
I will keep updating the list above over time.
I believe the public has been using very bad heuristics to decide how much they should care about the field of artificial intelligence. The goal of this essay is to try to explain why having a world model of imminent Superintelligence isn't in conflict with the way the Labs behave.
The audience I expect for this text is LessWrong readers; my hope is that people who are much better communicators than I am will repackage the argument for normies.
The capitalist class treats AI as normal technology
The reality is that the entire capitalist class, with some rare exceptions (like Masayoshi Son, who was ASI-pilled back in 2010), looks at revenue, not capabilities. And for a variety of reasons, revenue is an extremely lagging indicator of AI capabilities:
It takes time for people to discover what they can do with AI.
The labs are capacity constrained.
The labs allocate substantial amounts of their compute budget towards training.
It takes time to build GW-scale data centers.
If a given AI Lab wants to get to the Superintelligence, and to get there first, they expect exponentially growing training costs along the way. And even though they could fund those growing training costs out of their exponentially growing revenue, they know they'd lose to some other lab willing to also run exponentially growing losses, funded by capitalists.
What happens is that capitalists will want the labs to beat the very rosy expectations the labs themselves set when, for example, leaking financials to The Information.
Capitalists can and do look ahead, but they will always have a hard time paying attention to the exponential. But if the AI Lab CFO says things such as:
"We will convert free-user to premium-user at half the rate Spotify does."
"We will monetize free-users through ads at half the rate Facebook does."
"Inference costs will drop by half and we will be able to manage costs for free users."
Capitalists can pencil out some math and invest in OpenAI at a $500B valuation, or in Anthropic at a $300B valuation, or something like that.
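To make this concrete, here is a minimal sketch of the kind of back-of-envelope math a capitalist might pencil out from those CFO-style claims. Every input below is a made-up placeholder, not an actual OpenAI or Anthropic figure.

```python
# A toy model of how an investor might "pencil out" a lab's consumer revenue
# from CFO-style guidance. All inputs are hypothetical placeholders.

free_users = 800e6                 # assumed weekly free-user base
premium_conversion = 0.05          # "half the rate Spotify does" (assuming Spotify ~10%)
premium_arpu = 240                 # assumed $20/month paid subscription
ads_arpu = 20                      # "half the rate Facebook does" (assuming ~$40/user/year)
revenue_multiple = 15              # assumed forward revenue multiple

subscription_revenue = free_users * premium_conversion * premium_arpu
ads_revenue = free_users * (1 - premium_conversion) * ads_arpu
total_revenue = subscription_revenue + ads_revenue

print(f"Subscriptions: ${subscription_revenue / 1e9:.0f}B/year")
print(f"Ads:           ${ads_revenue / 1e9:.0f}B/year")
print(f"Implied valuation at {revenue_multiple}x: ${total_revenue * revenue_multiple / 1e9:.0f}B")
```

With made-up inputs like these you land somewhere in the $300-500B range, which is roughly the arithmetic that gets a fund to wire the money.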
Even if internally your goal is to create the Superintelligence, ask it to create 100 new world-changing drugs, patent them, and get unbelievably rich, you can't tell the capitalists that. Or if you tell them, they won't believe you. You need to tell them you'll take a cut of eCommerce sales.
But capitalists are smart: if you tell them you'll put ads in ChatGPT, you need to actually add ads to ChatGPT one year later; otherwise they'll question your execution, and your revenue will disappoint their expectations.
Because creating the Superintelligence is really expensive and might require AI Labs to raise hundreds of billions, if not trillions, of dollars of equity capital from society, they will need to increasingly play this game.
Adding monetization that will be meaningless once the Superintelligence arrives is a cost the AI Labs are willing to pay to create the Superintelligence.
The Labs want to be prepared if AI is a normal technology
If xAI ends up creating universal high income: great! If xAI ends up killing everyone: not great, but who will be left to care? But the worlds where AI ends up being normal technology are precisely the worlds where it makes the most sense to have prepared for AI being normal technology.
In reality, being prepared for AI being normal technology is easy. If you are Sam Altman and you are busy securing compute, going on podcasts, talking to your researchers, and broadly enabling everyone to create AGI, you might think "Gosh, how pointless it is to spend time creating health care features for ChatGPT when in 2 years GPT-6.5-CODEX-xhigh will be able to one-shot it." But in the grand scheme of things, the cost of hiring a couple hundred engineers and putting Fidji Simo in charge of creating ChatGPT Health and putting ads in ChatGPT isn't immense, and you can pay them in equity anyway. Imagine if the Scaling Laws hit a wall and you didn't do these things and you lost to Google!
More importantly, many of the decisions that have shaped people's views were made over the past eighteen months, when it was much less clear than it is today how much line of sight the labs have to creating the Superintelligence. Sam Altman has recently said:
"We are planning to dramatically slow down how quickly we grow because we think we'll be able to do so much more with fewer people."
Some AI bears will have it both ways: heads, AI doesn't work because the labs are hiring people; tails, AI doesn't work because Sam Altman needs to cut costs since his company is unprofitable.
Some other reasons why the labs want to be prepared if AI is normal technology:
People inside labs might have varying views about the timeline of the arrival of the Superintelligence.
Leaders want to be diligent with their stakeholders and not provide disappointing economics if they don't create the Superintelligence.
People are weird. Elon Musk is as scaling-law-pilled as the next guy and he believes in super abundance. But he somehow thought that reducing the U.S. federal debt by a couple of hundred billion dollars was worth his time during the most pivotal period in history. I think his beliefs were inconsistent, until he thought more about it and left the administration.
The Labs want to be prepared if the Superintelligence doesn't deliver superior economics
Given how much the labs talk about their fears in public (see Dario Amodei's The Adolescence of Technology), I find it underdiscussed that the Superintelligence might end up being a commodity.
The debate around the Superintelligence almost entirely assumes you need to "escape the permanent underclass," or follows contributions like Dwarkesh Patel and Phillip Trammell's Capital in the 22nd Century. Dwarkesh and Phillip's implied view is that one hundred years post-singularity, there will still exist enough interesting things for capital to compound at accelerated rates, instead of the forces of competition pushing all prices to zero because there is no longer much left that people want.[1]
The labs' business model is predicated on there always being demand for SOTA, similarly to TSMC. But unlike with TSMC, Chinese SOTA is 6 months behind, do-it-yourself AI is 18 months behind, and the rate of change is 4x faster. I assign a probability higher than 50% that in 2028 I will be using an older open-source model instead of paying market prices for the State of the Art.
As selling proto-AGI through an API becomes commoditized, it's likely that the labs will need to transition to doing science themselves, filing patents themselves, and keeping internally built AI they don't share with the public.
The labs obviously know this, but the transition could be far from perfect. The one best prepared is Google. Isomorphic Labs already exists and is already patenting and building the muscle to make money off AI-created science. Even then, I doubt Isomorphic Labs will be considered even a top-20 pharma company in 2030. At the same time, while I think I'll have use for a model with a time-horizon 100x longer than today's in three years, I don't know if I will have use for a model with a time-horizon 10,000x longer than today's in six years. I might prefer AI that is cheaper and faster. We could hit bumps in the monetization road.
Once again, I don't pretend to have answers.
The key point is that it makes sense for the labs to hedge. The cost of hedging is small in the grand scheme of things. But it creates apparently inconsistent behavior.
The Labs think they desperately need to win
The AI Labs deeply distrust each other, and they distrust China, and so forth. Anthropic was created because the founders didn't trust OpenAI. xAI was created because Elon asked for AI to be paused and no one listened (and he doesn't trust OpenAI). Meta Superintelligence Labs was created because Mark doesn't trust Google. OpenAI was created because Elon didn't trust Google and Page. Safe Superintelligence was created because Ilya didn't like OpenAI's research path (and likely he also doesn't trust OpenAI).[2]
And all the Lab leaders wholeheartedly believe they are about to create the Superintelligence and that the prize is only there for whoever gets there first (assuming singularity/recursive self-improvement).
Anthropic is right now betting the ranch that they'll get there. Our Effective Altruist overlords at Anthropic, quite likely, would prefer that we could slow down the development of the Superintelligence so society has time to get ready. Dario Amodei has said that he would coordinate with Google DeepMind if the race were only between the two.
Because the EAs at Anthropic are leading the AI race, they get a seat at the table on how the Department of War deploys their proto-AGI, despite how much the administration dislikes Anthropic.
From the Labs' perspective, no cost is too high if it increases the likelihood that they will be the ones creating the Superintelligence and getting to control it.
Hypothetical All-knowing rationalist chief of staff at OpenAI: "Sam, we think we can increase our probability of winning the AI race from 42% to 42.7% if we add porn to ChatGPT, because the increased revenue means we will be able to better match Google DeepMind compute capacity."
Hypothetical Sam Altman: "I wish we could keep our brand clean, but creating the Artificial General Intelligence for the benefit of all of humanity is our ultimate goal, and if it helps us to achieve the goal instead of Google, who will only create the Artificial General Intelligence for the benefit of Larry Page and Sergey Brin, we are more than happy to make that tradeoff."[3]
The costs of creating the Superintelligence are increasing exponentially
This point is obvious to anyone who knows a thing or two about the scaling laws. See Gwern, Leopold, and Dwarkesh.
The costs of scaling state-of-the-art artificial intelligence are increasing by 10x every two years, with no end in sight. Last year, OpenAI raised something like $40B. This year they are raising 2.5x that in just the first month of the year, and they plan the largest initial public offering in history later this year. That's because the costs of creating the Superintelligence are increasing to the point that soon even Google DeepMind will have difficulty funding it.
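A toy extrapolation makes the compounding vivid. The starting cost and the 10x-every-two-years figure are the only inputs, and both are rough assumptions rather than sourced estimates:

```python
# Toy extrapolation of frontier training costs, assuming costs keep growing
# ~10x every two years. Starting point is an assumed ~$1B frontier run in 2024.

start_year = 2024
start_cost = 1e9
growth_per_two_years = 10

for year in range(start_year, 2033, 2):
    cost = start_cost * growth_per_two_years ** ((year - start_year) / 2)
    print(f"{year}: ~${cost / 1e9:,.0f}B per frontier training run")
```

By the early 2030s the toy numbers exceed the market cap of almost every company on Earth, which is exactly why the fundraising game described above is unavoidable.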
The implications are two-fold:
Even if the Superintelligence is imminent and you have line of sight to get there, you still need to fundraise more than the Gross Domestic Product of most countries to build it. Creating it is not an inevitability; you need to actually build it. This helps you model the behavior of the Labs.
The immense necessity of capital means the Labs are forced to play the capitalist class game.
If investment in artificial intelligence stays constant instead of increasing, AI timelines are much longer than most people at the labs and on this website expect.[4]
In reality, every year that we 3x the compute used to train state-of-the-art models and the slot machine prints intelligence, society will come back to fund more intelligence, either through the capitalist system or through state intervention.
The fallacy is caused by past misguided techno-optimism (and lies)
The problem the AI Safety community needs to overcome is the System 1 reflex many tech observers have developed: fade most, if not all, claims made by tech people.
Elon Musk has been saying for 10 years that Full Self-Driving is imminent. He's now, once again, saying that. No one believes him. But FSD is imminent.
Vinod Khosla said, in 2017, that within five years radiologists would be obsolete. Not only are radiologists not obsolete, but employment in Diagnostic Imaging Centers in the United States has outpaced overall employment growth in the economy. But AI that can make radiologists obsolete is imminent.
The heuristic many people have created for themselves is: "Tech bros often lie about capabilities and about the future because they are trying to sell you something or to raise money from you. I can skip understanding the underlying details of technology XYZ and just look at how they behave."
This is a fallacy. I am calling it the "If the Superintelligence were near" fallacy.
What is needed is to push the public to look at the straight lines on a log (or log-log) chart.
What is needed is to explicitly call out the fallacy in the public discourse.
What is needed is to tell people they don't ever need to listen to Sam Altman. All they need to do is understand the benchmarks and use AI, in its best form, for themselves every 6 months.
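As a concrete illustration of the log-chart point above, here is a minimal sketch with synthetic data: anything that doubles on a fixed cadence looks explosive on a linear axis but is just a straight line on a log axis.

```python
# Synthetic capability series that doubles on a fixed cadence, plotted on a
# linear axis and on a log axis. The numbers are illustrative only.

import numpy as np
import matplotlib.pyplot as plt

years = np.linspace(2019, 2026, 50)
doubling_time = 0.6                              # assumed doubling time in years
capability = 2.0 ** ((years - years[0]) / doubling_time)

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(9, 3.5))
ax_lin.plot(years, capability)
ax_lin.set_title("Linear axis: flat, flat, flat, vertical")
ax_log.semilogy(years, capability)
ax_log.set_title("Log axis: a straight line the whole time")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("Year")
plt.tight_layout()
plt.show()
```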
The hardest thing about AI Safety is that it is simultaneously an extremely optimistic and an extremely pessimistic view of the future. Most people don't get it, and we need to be extremely candid about that.
I hope that by documenting the "If the Superintelligence were near" fallacy, we can start to have better conversations.
[1] There will always be arguments over how much people want and the nature of the things that are still left to create. I would argue a compressed 21st century, like Dario Amodei describes in Machines of Loving Grace, is possible; a compressed 3rd millennium is unlikely.
[2] Lots of this lacks sources! If I were writing for the NYT, I wouldn't write these things. It's hard to justify stating these uncertain things as confidently as I am stating them. Don't quote me on it. This is just me reading between the lines!
[3] You could argue that this is how all the big atrocities in History started: someone thought their cause was just and no price was too high to get there. I would argue many AI leaders are victims of thinking they will singlehandedly solve History. But that's not the point of this essay.
[4] Recursive self-improvement is a dark horse here! Anthropic seems to think they can get there by investing tens of billions, not hundreds of billions, of dollars. And RSI is the base expectation of many really good AI observers.