This is the full text of a post first published on Obsolete, a Substack where I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.
This morning, the Delaware and California attorneys general conditionally signed off on OpenAI’s plan to restructure as a for-profit public benefit corporation (PBC), seemingly closing the book on a fiercely contested legal fight over the company’s future.
Microsoft, OpenAI’s earliest investor and another party with power to block the restructuring, also said today it would sign off in exchange for changes to its partnership terms and a $135 billion stake in the new PBC.
With these stakeholders mollified, OpenAI has now cleared its biggest obstacles to a potential IPO — aside from its projected $115 billion cash burn through 2029.
While the news initially seemed like a total defeat for the many opponents of the restructuring effort, the details of the AGs’ announcements show that the new plan includes some modest but meaningful governance protections — even as it eliminates the profit caps that might have ultimately delivered trillions to the nonprofit.
Some of these protections are now enshrined in the charter for OpenAI’s new PBC, which Obsolete obtained and made available here.
Overall, this seems like a relative win on governance compared to the previous proposal, but still an enormous loss on the profit caps — only slightly mitigated by some additional equity the nonprofit will get if the company does very well.
OpenAI did not immediately reply to a request for comment.
Board chair Bret Taylor presents the restructuring as a closed case, writing that “OpenAI has completed its recapitalization, simplifying its corporate structure. The nonprofit remains in control of the for-profit, and now has a direct path to major resources before AGI arrives.” And CEO Sam Altman expressed gratitude to “the Delaware and California AGs, our partners at Microsoft, all our investors, and especially to our tireless team for their work in getting to a good place here.”
In reality, the announcements are better understood as the culmination of months of high-stakes, acrimonious negotiations between OpenAI and the parties who could block the restructuring.
The AGs, for instance, sent a scorching letter to the board last month following reports of ChatGPT encouraging suicides and murderous delusions:
The recent deaths are unacceptable. They have rightly shaken the American public’s confidence in OpenAI and this industry. OpenAI – and the AI industry – must proactively and transparently ensure AI’s safe deployment. Doing so is mandated by OpenAI’s charitable mission, and will be required and enforced by our respective offices.
Yesterday, OpenAI announced changes to ChatGPT intended to make it behave more appropriately with users experiencing mental health crises.
These measures — along with private negotiations — appear to have convinced the AGs not to challenge the restructuring, provided OpenAI meets their twenty-paragraph list of demands.
The AGs’ statements make clear that their non-objection to the restructuring expressly relies on the conditions being met, and, if a dispute arises, they reserve the right to seek court intervention.
Former OpenAI employee Page Hedley, who helped organize the Not for Private Gain letters urging the AGs to block the restructuring, highlighted two “silver linings”: PBC directors can consider only the mission when making safety and security decisions, and the Safety and Security Committee (SSC) — run by the nonprofit — will have the authority to require mitigation measures, or even halt deployments.
Hedley noted that the other big power the nonprofit board is given — its ability to hire and fire PBC directors — is significantly undermined by the fact that the boards are currently identical, save for Carnegie Mellon professor Zico Kolter, who serves exclusively on the nonprofit side and leads the Safety and Security Committee.
Todor Markov, another ex-OpenAI employee, called this outcome better than he expected and noted that the board overlap problem is somewhat mitigated by the fiduciary duty the nonprofit directors have to OpenAI’s mission — giving the AGs an ongoing enforcement lever.
If your main concern is OpenAI recklessly pushing ahead with risky AI development, the new structure at least puts some formal governance checks in place.
But these measures are still weaker than the nonprofit’s former level of control, when — at least in theory — decisions weren’t subject to any profit pressure. (OpenAI’s string of major scandals while nominally being under nonprofit control shows the limits of relying on corporate governance alone.)
One of the California AG’s conditions is that the PBC board be “composed of a majority of independent directors,” defined as non-employees, who “in the determination of the PBC Board, will have no relationship or interest that could compromise their judgment — ensuring strong, objective oversight that reinforces accountability and mission alignment.”
In its structure page, OpenAI lists the following as independent directors of the nonprofit:
Bret Taylor (Chair), Adam D’Angelo, Dr. Sue Desmond-Hellmann, Dr. Zico Kolter, Retired U.S. Army General Paul M. Nakasone, Adebayo Ogunlesi, Nicole Seligman, and Larry Summers — as well as CEO Sam Altman.
As an employee, Altman wouldn’t qualify as an independent director under the California AG’s definition. Additionally, the OpenAI Files, a project from watchdog nonprofits The Midas Project and the Tech Oversight Project, has documented potential conflicts of interest between OpenAI and Taylor, Ogunlesi, and D’Angelo, who each run businesses that “are customers of OpenAI or stand to benefit from OpenAI’s commercial activity.” (Fidji Simo also served on the nonprofit board as it pursued the restructuring before being appointed OpenAI’s CEO of Applications.)
That leaves Desmond-Hellmann, Kolter, Nakasone, Seligman, and Summers. And Kolter, as noted above, serves only on the nonprofit board.
So, four of the eight board members plausibly don’t qualify as independent, but the PBC has determined that they are — a determination the AGs are apparently respecting, so long as a director is neither an employee nor a member of management.
The second big thing at stake with the restructuring was the profit caps. When OpenAI created a for-profit arm in 2019, it famously capped the profits investors could make, and company president Greg Brockman wrote that “If we succeed, we believe we’ll create orders of magnitude more value than any existing company — in which case all but a fraction is returned to the world.”
This plan, like the proposal before it, does away with the caps, compensating the nonprofit with a 26 percent stake in the for-profit PBC, with some additional equity of an unstated amount promised if OpenAI’s value grows more than ten-fold over the next 15 years. The Information reported that if the company reaches a $5 trillion valuation, “the foundation could receive shares worth hundreds of billions of dollars,” citing “a person who has been involved in the restructuring discussions.” (No company has ever been worth $5 trillion, though Nvidia’s market cap is awfully close.)
That’s an improvement over just removing the profit caps, but — in the scenarios where OpenAI really wins big — it’s still dramatically less valuable to the nonprofit (and the public) than if the profit caps had stayed in place.
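For rough scale, the stated 26 percent stake can be priced at a few hypothetical valuations. This is a back-of-envelope sketch: the roughly $500 billion baseline is an inference from the ten-fold trigger and the $5 trillion scenario, not a reported figure.

```python
# Back-of-envelope: nominal value of the nonprofit's 26 percent PBC stake
# at several hypothetical valuations. The ~$500B baseline is an assumption
# inferred from the "more than ten-fold" trigger and the $5T scenario
# reported by The Information, not a disclosed number.

NONPROFIT_STAKE = 0.26  # stake stated in the restructuring plan

def stake_value(company_valuation: float) -> float:
    """Nominal value of the nonprofit's stake at a given company valuation."""
    return NONPROFIT_STAKE * company_valuation

for valuation in (500e9, 1e12, 5e12):
    print(f"${valuation / 1e12:.1f}T valuation -> stake worth "
          f"${stake_value(valuation) / 1e9:,.0f}B")
```

Any additional equity from the ten-fold trigger would come on top of these figures, and its size is unstated.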
This is at the core of why Zvi Mowshowitz, a prominent rationalist blogger, calls the restructuring the greatest theft in human history. In his view, the value of controlling the for-profit alone (known as the control premium) should entitle the nonprofit to 20-40 percent of the PBC — and that’s before even considering the value of unlimited profits beyond the old caps.
When announcing the removal of the profit caps in May, Altman wrote:
Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
But as Obsolete previously observed, these caps only bite if OpenAI does very, very well. So why fight to get rid of them? The only reason to spend political capital on this is if investors now see a real chance of OpenAI actually hitting those caps — something that seems a lot more plausible now than it did back in 2019.
UVA economist Anton Korinek has used standard economic models to estimate that AGI could be worth anywhere from $1.25 quadrillion to $71 quadrillion globally. Applying Korinek’s assumptions about OpenAI’s share would put the company’s value at $30.9 trillion. In this scenario, under the old capped-profit structure, Microsoft would walk away with less than one percent of the total, with the overwhelming majority flowing to the nonprofit.
It’s tempting to dismiss these numbers as fantasy. But it’s a fantasy constructed in large part by OpenAI, when it wrote lines like, “it may be difficult to know what role money will play in a post-AGI world,” or when Altman said that if OpenAI succeeded at building AGI, it might “capture the light cone of all future value in the universe.” That, he said, “is for sure not okay for one group of investors to have.”
OpenAI presents the new Foundation as “one of the best-resourced nonprofits ever.” But The Midas Project sees it differently, writing:
From the public’s perspective, OpenAI may be one of the worst financially performing nonprofits in history, having voluntarily transferred more of the public’s entitled value to private interests than perhaps any charitable organization ever.
In May, I made four predictions in Obsolete about how OpenAI’s restructuring would go. Here’s how they held up:
I’ve covered this story extensively for a year, and the recurring theme from my conversations with legal experts was that the law should not allow OpenAI to do this without proving that the restructuring would advance its mission to “ensure AGI benefits humanity.”
But, I kept thinking, isn’t this ultimately a political question? The AGs were the key potential blockers, and both are elected officials. OpenAI has become one of the most powerful organizations in the world, with up to $1.5 trillion in deals struck over the past year and an army of lobbyists with deep ties to California politics.
This afternoon, Altman tweeted:
California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued.
This promise is at odds with what OpenAI executives were telling the Wall Street Journal behind the scenes: that the company might exit California if it didn’t get its way on the restructuring, which was cast as existential for the cash-hungry startup.
In May, Obsolete first reported on a letter OpenAI wrote to the California AG, in which the company said that “many potential investors in OpenAI’s recent funding rounds declined to invest” due to its nonprofit governance structure.
If the company went poof, there’s a strong case that the US stock market would crash, and maybe the economy with it. But it’s far from clear that OpenAI couldn’t have continued raising capital and growing without the restructuring. Still, that was the narrative advanced by the company, reinforced by aggressive deadlines on deca-billion-dollar investments.
Not everyone is buying it. Luigi Zingales, a critic of the restructuring and professor at the University of Chicago Booth School of Business, previously argued that:
The current structure, which caps returns at 100x the capital invested, does not really constrain its ability to raise funds. So, what is the need to transfer the control to a for-profit? To overrule the mandate that AI should be used for the benefit of humanity.
OpenAI also navigated the complexity of the situation to its great benefit. The final plan to “keep the nonprofit in control” largely defaulted to what OpenAI wanted in its original effort to sideline the nonprofit entirely.
But the media and public framed it as a huge win for opponents of the restructuring. Even OpenAI employees told me they were happy the nonprofit would stay in control — despite how little had actually changed.
And again, the law says the restructuring should only have been permitted if it was shown to advance OpenAI’s nonprofit mission better than the status quo. Previously for Obsolete, I sketched at least one outcome that could plausibly satisfy this condition:
As strong a claim as OpenAI has to leadership of the AI industry, it’s only one company. If it slows down for the sake of safety, others could overtake it. So perhaps the OpenAI nonprofit would better advance its mission if it were spun out into a truly independent entity with $150 billion and the mission to lobby for binding domestic and international safeguards on advanced AI systems.
If this sounds far-fetched, then so should the idea that the nonprofit board that initiated this conversion is genuinely representing the public interest.
In the end, it was always hard to see any outcome but OpenAI and its investors getting their way. The Elon Musk lawsuit trying to block the restructuring is the last real unknown, with a trial set for next year. But so far, investors don’t seem concerned enough to hold back their money.
Notably, the judge made clear that if Musk had standing, blocking the restructuring would have been within her powers. The attorneys general have that power — and chose not to use it.
If you enjoyed this post, please subscribe to Obsolete.