I often feel that people don't get how the sucking up thing works. Not only does it not matter that it is transparent; that is part of the point. There is simultaneously common knowledge of the sucking up, and common knowledge that those in the inner party don't acknowledge the sucking up; that's part of what inner party membership consists of. People outside can accuse the insiders of nakedly sucking up, and the insiders can just politely smile at them while carrying on. Sucking up can be what deference networks look like from the outside when we don't particularly like any of the people involved or what they are doing. But their hierarchy visibly produces their own aims, so more fools we.
There was a comedy routine a few years ago. I believe it was by Hannah Gadsby. She brought up a painting, and looked at some details. The details weren’t important in and of themselves. If an AI had randomly put them there, we wouldn’t care.
Except an AI didn’t put them there. And they weren’t there at random.
A human put them there. On purpose. Or, as she put it:
THAT was a DECISION.
This is the correct way to view decisions around a $500 billion AI infrastructure project, announced right after Trump takes office, having it be primarily funded by SoftBank, with all the compute intended to be used by OpenAI, and calling it Stargate.
The Announcement
Note that ‘intends to invest’ does not mean ‘has the money to invest’ or ‘definitely will invest.’ Intends is not a strong word. The future is unknown and indeed do many things come to pass.
One of these things is not like the others. Secure American leadership in AI, generate massive economic benefit for the entire world, provide strategic capability to allies, sure, fine, makes sense, support reindustrialization is a weird flex but kinda, yeah.
And then… jobs? American… jobs? Um, Senator Blumenthal, that is not what I meant.
If you listen to Altman’s announcement, he too highlights these ‘hundreds of thousands of jobs.’ It’s so absurd. Remember when Altman tried to correct this error?
If you want to spend way too much money on a technology project, and give the people investing the money a remarkably small share of the enterprise, you definitely want to be giving Masayoshi Son and SoftBank a call.
“Sam Altman, you are not crazy enough. You need to think bigger.”
This proves there is real activity; it is also a tell that some of this is not new.
Increased consumption of compute is different from Azure being the sole compute provider. It seems OpenAI expects plenty of compute needs to go around.
Can’t stop, won’t stop, I suppose. ‘Enable creative people to elevate humanity’ continues to miss the point of the whole enterprise, but not as much as talking ‘jobs.’
Is That a Lot?
Certainly $500 billion for this project sounds like a lot. It’s a lot, right?
Microsoft is investing $80 billion a year in Azure, which is $400 billion over 5 years, and I’d bet that their investment goes up over time and they end up spending over $500 billion during that five year window.
The $500 billion is distributed across many sites and physical projects. If it does indeed happen, and it is counterfactual spending, then it’s a lot. But it’s not a sea change, and it’s not obvious that the actual spending should be surprising. Investments on this scale were already very much projected and already happening.
It’s also not that much when compared to the compute needs anticipated for the scaling of top end training runs, which very much continue to be a thing.
Where $500 billion is a lot is to the extent that all of this is dedicated specifically and exclusively to OpenAI, as opposed to Microsoft’s $80 billion, which is for everyone. But it’s not a lot compared to the anticipated future needs of a frontier lab.
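As a back-of-the-envelope sketch of the comparison above (the $80 billion a year and $500-billion-over-four-years figures are the headline numbers quoted in this section; the annualized framing is my own, not anything from the announcements):

```python
# Rough comparison of headline AI capex figures cited above.
# These are announced intentions, not audited spending.

microsoft_per_year = 80   # $B/year Microsoft says it is putting into Azure
stargate_total = 500      # $B headline Stargate figure
stargate_years = 4        # "over the next four years"

# Microsoft over a five-year window, at the stated run rate
microsoft_five_years = microsoft_per_year * 5   # 400

# Stargate annualized, if the full amount materializes
stargate_per_year = stargate_total / stargate_years  # 125.0

print(f"Microsoft over 5 years: ${microsoft_five_years}B")
print(f"Stargate per year:      ${stargate_per_year}B")
```

So on an annualized basis the headline Stargate number is larger than Microsoft's stated run rate, but within the same order of magnitude, which is the sense in which it is big without being a sea change.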
One thing to think about is that OpenAI recently raised money at a valuation of approximately $170 billion, presumably somewhat higher now with o3 and agents, but also potentially lower because of DeepSeek. Now we are talking about making investments dedicated to OpenAI of $500 billion.
There is no theoretical incompatibility. Perhaps OpenAI is mining for gold and will barely recoup its investment, while Stargate is selling pickaxes and will rake it in.
It does still seem rather odd to presume that is how the profits will be distributed.
The reason OpenAI is so unprofitable today is that they are spending a ton on increasing capabilities, and not serving enough inference to make it up on their unit economics, and also not yet using their AI to make money in other ways.
And yes, the equilibrium could end up being that compute providers have margins and model providers mostly don’t have margins. But OpenAI, if it succeeds, should massively benefit from economies of scale here, and its economics should improve. Thus, if you take Stargate seriously, it is hard to imagine OpenAI being worth only a fraction of $500 billion.
There is a solution to this puzzle. When we say OpenAI is worth $170 billion, we are not talking about all of OpenAI. We are talking about the part that takes outside investment. All the dramatic upside potential? That is for now owned by the non-profit, and not (or at least not fully) part of the valuation.
And that is the part that has the vast majority of the expected net present value of future cash flows of OpenAI. So OpenAI the entire enterprise can be worth quite a lot, and yet ‘OpenAI’ the corporate entity you can invest in is only worth $170 billion.
This should put into perspective that the move to a for-profit entity truly is in the running for the largest theft in the history of the world.
What Happened to the Microsoft Partnership?
Didn’t they have an exclusive partnership?
Microsoft is one of the providers here. Reports are that the Microsoft partnership has now been renegotiated, to allow OpenAI to also seek other providers, since Altman needs moar compute. Hence Stargate. Microsoft will retain right of first refusal (ROFR), which seems like the right deal to make here. The question is, how much of the non-profit’s equity did Altman effectively promise in order to get free from under the old deal?
Where’s Our 20%?
Remember that time Altman promised 20% of compute would go to superalignment, rather than blowing up a sun?
So that’s a no, then.
I do expect Roon to push for more compute. I don’t expect him to get anything like 20%.
Show Me the Money
If you take the companies collectively, they absolutely have the money, or at least the ability to get the money. This is Microsoft and Nvidia. I have no doubt that Microsoft is, as Satya Nadella affirmed, ‘good for its $80 billion.’
That doesn’t mean SoftBank has the money, and SoftBank explicitly is tasked with providing the funding for Stargate.
Nor does the first site in Texas prove anything either way on this.
Remember the wording on the announcement: “which intends to invest $500 billion over the next four years.”
That does not sound like someone who has the money.
That sounds like someone who intends to raise the money. And I presume SoftBank has every expectation of being able to do so, with the aid of this announcement. And of working out the structure. And the financing.
But that’s why you say ‘intend to invest’ rather than ‘will invest.’
Things between Musk and Altman did not stop there, as we all took this opportunity to break open the International Popcorn Reserve.
Musk’s not exactly wrong about that. He also said and retweeted other… less dignified things.
It was not a good look for either party. Elon Musk is, well, being Elon Musk. Altman is trying to throw in performative ‘look at me taking the high road’ statements that should fool no one, not only the one above but also:
There’s a subtle art to doing this sort of thing well. The Japanese especially are very good at it. All of this is, perhaps, the exact opposite of that.
Altman, you made it weird. Also gauche. Let’s all do better.
Trump world is, as you would expect, not thrilled with what Musk has been up to, with Trump saying he is ‘furious’ and that Musk ‘got over his skis.’ My guess is that Trump ‘gets it’ at heart, because he knows what it’s like to hate and never let something go, and that this won’t be that big a deal for Musk’s long term position, but there is high variance. I could easily be wrong about that. If I was Musk I would not have gone with this strategy, but that statement is almost always true and why I’m not Musk.
It Never Hurts to Suck Up to the Boss
This particular Rule of Acquisition is somewhat imprecise. It’s not always true.
But Donald Trump? Yeah. It definitely never hurts to suck up to that particular boss.
Altman does admit this is a rather big change. Anyone remember when Altman said “More terrifying than Trump intentionally lying all the time is the possibility that he actually believes it all” or when he congratulated Reid Hoffman for helping keep Trump out of power? Or “Back to work tomorrow on a new project to stop Trump?” He was rather serious about wanting to stop Trump.
You can guess what I think he saw while watching Trump to make Altman change his mind.
So they announced this $500 billion deal, or at least a $100 billion deal with intent to turn it into $500 billion, right after Trump’s inauguration, with construction already underway, with a press conference on the White House lawn.
And the funds are all private. Which is great, but all this together also raises the obvious question: Does Trump actually have anything to do with this?
Trump says that it is all about him, of course:
I presume that in addition to completely missing the point, this particular jobs claim is, technically speaking, not true. But numbers don’t have to be real in politics. And of course, if this is going to create those jobs ‘almost immediately’ it had to have been in the works for a long time.
I agree with Dylan Matthews that the board’s assessment of Altman as deeply dishonest and unreliable has very much been vindicated, and Altman’s actions here only confirm that once again. But that doesn’t mean that Trump has nothing to do with the fact that this project is going forward, with this size.
So how much does this project depend on Trump being president instead of Harris?
I think the answer is actually a substantial amount.
In order to build AI infrastructure in America, you need three things.
Does Son have many good reasons to pretend that this is all because of Trump? Yes, absolutely. He would find ways to praise the new boss either way. But I do think that Trump mattered here, even if you don’t think that there is anything corrupt involved in all this.
Look at Trump’s executive orders, already signed, about electrical power plants and transmission lines being exempt from NEPA, and otherwise being allowed to go forward. They can expect more similar support in the future if they run into roadblocks, and fewer other forms of regulatory trouble and everything bagel requirements across the board.
Also, I totally believe that Son came to Trump and promised $100 billion, and Trump said go for $200 billion, and Son now is at $500 billion, and I think that plausibly created a lot of subsequent investment. It may sound stupid, but that’s Grade-A handling of Masayoshi Son, and exactly within Trump’s wheelhouse. Tell the man who thinks big he’s not thinking big enough. Just keep him ramping up. Don’t settle for a big win when you can go for an even bigger win. You have to hand it to him.
What’s in a Name
It is so absurd that these people, with a straight face, decided to call this Stargate.
They wanted to call it the Enterprise, but their lawyers wouldn’t let them.
Was Skynet still under copyright?
Is this where I give you 10,000 words on the history of Stargate SG-1 and Stargate Atlantis and all the different ways Earth and often also everyone else would have been enslaved or wiped out if it wasn’t for narrative causality and plot armor, and what would have been reasonable things to do in that situation?
No, and I am sad about that, despite yes having watched all combined 15 seasons, because alas we do not currently have that kind of time. Maybe later I’ll be able to spend a day doing that, it sounds like fun.
But in brief about that Stargate plan. Was it a good plan? What were the odds?
As is pointed out in the thread (minor spoilers for the end of season 1), the show actually answers this question, as there is crossover between different Everett branches, and we learn, even relatively early on – before most of the different things that almost kill us have a chance to almost kill us – that most branches have already lost. Which was one of the things that I really liked about the show, that it realized this. The thread also includes discussions of things like ‘not only did we not put a nuclear bomb by the Stargate and use a secondary gate to disguise our location, we wore Earth’s gate code on our f***ing uniforms.’
To be fair, there is a counterargument, which is that (again, minor spoilers) humanity was facing various ticking clocks. There was one in particular that was ticking in ways Earth did not cause, and then there were others that were set in motion rapidly once we had a Stargate program, and in general we were on borrowed time. So given what was happening we had little choice but to go out into the galaxy and try to develop superior technology and find various solutions before time ran out on us, and it would have been reasonable to expect we were facing a ticking clock in various ways given what Earth knew at the time.
There’s also the previous real life Project Stargate, a CIA-DIA investigation of the potential for psychic phenomena. That’s… not better.
There are also other ways to not be thrilled by all this.
Taking a beat sounds like a good idea.
Just Think of the Potential
What does Trump actually think AI can do?
That’s not how I view the clip at the link. Trump is selling the project. It makes sense to highlight medical advances, which are a very real and valuable upside. It certainly makes a lot more sense than highlighting job creation.
Altman I don’t see hesitating, I see him trying to be precise while also going with the answer, and I don’t like his previous emphasis on jobs (again, no doubt, following Trump’s and his political advisor’s lead) but on the medical question I think he does well and it’s not obvious what a better answer would have been.
The hilarious part of this is the right wing faction that says ‘you want to use this to make mRNA vaccines, wtf I hate AI now,’ leaving the rest of us trying to figure out what to do with people whose worldviews are that hopelessly inverted.
That moment when you say ‘look at how this could potentially cure cancer’ and your hardcore supporters say ‘And That’s Terrible.’
And also when you somehow think ‘Not Again!’
There’s a certain kind of bullshit that these folks and many other folks are deeply tired of hearing. This is one of those places where I very much agree that it does hurt to suck up to the boss, both because the boss will see through it and because the whole strategy involves not doing things like that, and also have you seen or heard the boss.
My prediction and hope is that we will continue to see those worried about AI killing everyone continue to not embrace these kinds of crazy arguments of convenience. That doesn’t mean not playing politics at all or being some sort of suicidal purist. It does mean we care about whether our arguments are true, rather than treating them as soldiers for a cause.
Whereas we have learned many times, most recently with the fight over SB 1047 and then the latest round of jingoism, that many (#NotAllUnworried!) of those who want to make sure others do not worry about AI killing everyone, or at least want to ensure that creating things smarter than humans faces fewer regulatory barriers than a barber shop, care very little whether the arguments made on their behalf, by themselves or by others, are true or correspond to physical reality. They Just Didn’t Care.
The flip side is the media, which is, shall we say, not situationally aware.
As noted elsewhere I don’t think this merits ‘Manhattan Project’ for various reasons but yes, it is kind of weird to announce a $500 billion investment in artificial general intelligence and then have only one question about it in a 30 minute Q&A.
I’m not saying that primarily from an existential risk perspective – this is far more basic even than that. I’m saying, maybe this is a big deal that all this is happening, maybe ask some questions about it?
I Believe Toast is an Adequate Description
Remember when Altman was talking about how we have to build AGI now because he was worried about a compute overhang? Yes, well.
Between the $500 billion of Stargate, the full-on jingoistic rhetoric from all sides including Anthropic, and the forcing function of DeepSeek with v3 and r1, it is easy to see how one could despair over our prospects for survival.
Unless something changes, we are about to create smarter than human intelligence, entities more capable and competitive than we are across all cognitive domains, and we are going to do so as rapidly as we can and then put them in charge of everything, with essentially zero margin to ensure that this goes well despite it obviously by default getting everyone killed.
Even if we are so fortunate that the technical and other barriers in front of us are highly solvable, that is exactly how we get everyone killed anyway.
I work hard to find the joy and the gallows humor in it all, to fight the good fight, to say the odds are against us and the situation is grim, sounds like fun. One must imagine Buffy at the prom, and maintain Scooby Gang Mindset. Also necessary is the gamer mindset, which says you play to win the game, and in many ways it’s easiest to play your best game with your back against the wall.
And in a technical sense, I have hope that the solutions exist, and that there are ways to at least give ourselves a fighting chance.
But yeah, weeks like this do not make it easy to keep up hope.
The Lighter Side