Even though the nine members are very friendly to Altman, they are also sufficiently professional class people, Responsible Authority Figures of a type, that one would expect the board to have real limits…
Maybe I'm making some mistake, since I haven't followed it closely, but this very same board was on track to sign off on the original deal that you found completely unacceptable, was it not? These exact same "Professional class Responsible Authority Figures" were about to give up control, do obvious damage to the nonprofit's mission, and go off and do "generic nonprofit" things while Altman and investors ran everything? The deal couldn't have happened without a majority of that board?
If that's true, I don't see how you could possibly imagine that they're suitable people to oversee anything that would demand they show any backbone. They've already capitulated once, or at least signaled every intention of doing so.
And, to be honest, Professional class Responsible Authority Figures are usually not the people you want when a boat may have to be rocked.
The current OpenAI structure is bizarre and complex. It does important good things, some of which this new arrangement will break. But the current structure also made OpenAI far less investable, which means giving away more of the company to profit maximizers, and causes a lot of real problems.
That was almost entirely good though, no? Myopic profit-maximization made OpenAI stray from the "beeline to AGI" suicidal course in favor of trying to R&D products people would buy, because if they did not do so, the money would dry up. If money instead comes from big investors buying into Sam Altman's promises of making them god-kings, OpenAI is no longer "distracted" by needing to make their products short-term useful, and the actually existentially dangerous research (instead of "o3 lied to someone" kind of "dangerous") is accelerated.
Simultaneously, the company's safety culture is already dismantled beyond recovery, so there's no reason to think they'd step up on safety just because the rat race's pressures have lessened. Worst of both worlds.
IMO, profit caps making OpenAI less appealing for investment is/was the only actual feature of OpenAI's structure that meaningfully hampers them; the rest is flavor text, particularly post-2023.
One thing that's not clear to me (and you may have discussed this in the previous posts, I don't remember) is: was the previous structure even legally valid/enforceable? Can you write into the structure of a for-profit LLC that it has to act in accordance with some goal other than profit? Because as I understand it, a board member has a fiduciary duty to the company regardless of their own interests, or those of the organization or process that made them a board member. Someone recently highlighted to me some examples of cases (in normal for-profit startups) where this gives you behavior like board members approving some measure, and then the same individuals, now acting as shareholders where they can do as they please, voting against it.
Maybe the original OpenAI structure included a clever and enforceable way around this. But if not, then maybe it's possible the switch to a PBC closes a loophole whereby investors could have sued the board for acting according to the nonprofit's interests instead of their own.
I think Sam Altman restructuring OpenAI is not a power coup because he already has dictatorial power. He mostly wants to get rid of pesky profit caps so he can secure more investment at a better valuation.
The OpenAI board members already have very little real power: the previous board tried to fire Sam Altman, and learned the hard way that Altman and all the employees can simply threaten to jump ship to Microsoft or another company. Altman effectively used this threat to fire all the board members he disliked.
Because everyone joins the side they think will win during a coup, his earlier victory against the previous board guarantees Altman absolute power over the new board.
Sam Altman isn't fighting for power because he already has it. He probably wants the board to have more power over the PBC's investors, because the board is more afraid of him than the investors are and won't act against him. The only reason he acts like he wants investors to have more power is that he knows that giving investors a lot of power on paper won't actually matter much (look at how Tesla investors can't do anything to rein in Elon Musk's behaviours). It only encourages certain kinds of investors to invest more.
I think certain investors care a lot about removing the profit caps, because they hope to rake in the sweet AGI money. And certain activists care a lot about keeping the profit caps, because they hope the sweet AGI money will go to humanity (as promised).
But I don't think the fight is as pivotally important as it looks, because I'm skeptical of the risk that "rich people will take all the AGI money and poor people will all starve to death." Rich people are not that uniformly evil; there will be a small fraction of them who have enough of a heart to keep the rest of humanity living comfortably, assuming AGI actually is powerful enough to automate the entire economy.
Again, whether the decision to release the new AI model technically falls to the nonprofit board or to investors isn't that important, in my opinion, because Sam Altman will have the de facto power either way. The board members are afraid of him and have no real power. The investors won't be able to do anything either, since PBC investors are even weaker than normal investors. Even Tesla investors can't rein in Elon Musk.
But I may be totally wrong. I wrote a lot here but I never actually read much about this.
Your voice has been heard. OpenAI has ‘heard from the Attorney Generals’ of Delaware and California, and as a result the OpenAI nonprofit will retain control of OpenAI under their new plan, and both companies will retain the original mission.
Technically they are not admitting that their original plan was illegal and one of the biggest thefts in human history, but that is how you should in practice interpret the line ‘we made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.’
Another possibility is that the nonprofit board finally woke up and looked at what was being proposed and how people were reacting, and realized what was going on.
The letter ‘Not For Private Gain’ that was recently sent to those Attorneys General plausibly was a major causal factor in any or all of those conversations.
The question is, what exactly is the new plan? The fight is far from over.
The Mask Stays On?
As previously intended, OpenAI will transition their for-profit arm, currently an LLC, into a PBC. They will also be getting rid of the capped profit structure.
However they will be retaining the nonprofit’s control over the new PBC, and the nonprofit will (supposedly) get fair compensation for its previous financial interests in the form of a major (but suspiciously unspecified, other than ‘a large shareholder’) stake in the new PBC.
The rest of the post is a letter from Sam Altman, and sounds like it; you are encouraged to read the whole thing.
Your Offer is (In Principle) Acceptable
I find the structure of this solution not ideal but ultimately acceptable.
The current OpenAI structure is bizarre and complex. It does important good things, some of which this new arrangement will break. But the current structure also made OpenAI far less investable, which means giving away more of the company to profit maximizers, and causes a lot of real problems.
Thus, I see the structural changes, in particular the move to a normal profit distribution, as potentially a fair compromise to enable better access to capital – provided it is implemented fairly, and isn't a backdoor to further shifts.
The devil is in the details. How is all this going to work?
What form will the nonprofit’s control take? Is it only that they will be a large shareholder? Will they have a special class of supervoting shares? Something else?
This deal is acceptable if and only if the nonprofit:
Remember that in these situations, the ratchet only goes one way. The commercial interests will constantly try to wrestle greater control and ownership of the profits away from us. They will constantly cite necessity and expedience to justify this. You’re playing defense, forever. Every compromise improves their position, and this one definitely will compared to doing nothing.
Or: This deal is getting worse and worse all the time.
Or, from Leo Gao:
There’s also the issue of the extent to which Altman controls the nonprofit board.
The reason the nonprofit needs control is to impact key decisions in real time. It needs control of a form that lets it do that. Because that kind of lever is not ‘standard,’ there will constantly be pressure to get rid of that ability, with threats of mild social awkwardness if these pressures are resisted.
So with love, now that we have established what you are, it's time to haggle over the price.
The Skeptical Take
He had an excellent thread explaining the attempted conversion, and he has another good explainer on what this new announcement means, as well as an emergency 80,000 Hours podcast on the topic that should come out tomorrow.
Consider this the highly informed and maximally skeptical and cynical take. Which, given the track records here, seems like a highly reasonable place to start.
The central things to know about the new plan are indeed:
It’s an improvement, but it might not effectively be all that much of one?
We need to stay vigilant. The fight is far from over.
Tragedy in the Bay
Roon says the quiet part out loud. We used to think it was possible to do the right thing and care about whether AI killed everyone. Now, those with power say, we can’t even imagine how we could have been so naive, let’s walk that back as quickly as we can so we can finally do some maximizing of the profits.
I do not think that the capped profit requires strong assumptions about a singleton to make sense. It only requires that there be an oligopoly where the players are individually meaningful. If you have close to perfect competition and the players have no market power and their products are fully fungible, then yes, of course being a capped profit makes no sense. Although it also does no real harm, since your profits were already rather capped in that scenario.
More than that, we have largely lost our ability to actually ask what problems humanity will face, and then ask what would actually solve those problems, and then try to do that thing. We are no longer trying to backward chain from a win. Which means we are no longer playing to win.
At best, we are creating institutions that might allow the people involved to choose to do the right thing, when the time comes, if they make that decision.
The Spirit of the Rules
For several reasons, recent developments do still give me hope, even if we get a not-so-great version of the implementation details here.
The first is that this shows that the right forms of public pressure can still work, at least sometimes, for some combination of getting public officials to enforce the law and causing a company like OpenAI to compromise. The fight is far from over, but we have won a victory that was at best highly uncertain.
The second is that this will give the nonprofit at least a much better position going forward, and the ‘you have to change things or we can’t raise money’ argument is at least greatly weakened. Even though the nine members are very friendly to Altman, they are also sufficiently professional class people, Responsible Authority Figures of a type, that one would expect the board to have real limits, and we can push for them to be kept more in-the-loop and be given more voice. De facto I do not think that the nonprofit was going to get much, if any, additional financial compensation in exchange for giving up its stake.
The third is that, while OpenAI likely still has the ability to ‘weasel out’ of most of its effective constraints and obligations here, this preserves its ability to decide not to. As in, OpenAI and Altman could choose to do the right thing, even if they haven’t had the practice, with the confidence that the board would back them up, and that this structure would protect them from investors and lawsuits.
This is very different from saying that the board will act as a meaningful check on Altman, if Altman decides to act recklessly or greedily.
It is easy to forget that in the world of VCs and corporate America, in many ways it is not only that you have no obligation to do the right thing. It is that you have an obligation, and will face tremendous pressure, to do the wrong thing, in many cases merely because it is wrong, and certainly to do so if the wrong thing maximizes shareholder value in the short term.
Thus, the ability to fight back against that is itself powerful. Altman, and others in OpenAI leadership, are keenly aware of the dangers they are leading us into, even if we do not see eye to eye on what it will take to navigate them or how deadly are the threats we face. Altman knows, even if he claims in public to actively not know. Many members of the technical staff know. I still believe most of those who know do not wish for the dying of the light, and want humanity and value to endure in this universe, that they are normative and value good over bad and life over death and so on. So when the time comes, we want them to feel as much permission, and have as much power, to stand up for that as we can preserve for them.
It is the same as the Preparedness Framework, except that in this case we have only ‘concepts of a plan’ rather than an actually detailed plan. If everyone involved with power abides by the spirit of the Preparedness Framework, it is a deeply flawed but valuable document. If those involved with power discard the spirit of the framework, it isn’t worth the tokens that compose it. The same will go for a broad range of governance mechanisms.
Have Altman and OpenAI been endlessly disappointing? Well, yes. Are many of their competitors doing vastly worse? Also yes. Is OpenAI getting passing grades so far, given that reality does not grade on a curve? Oh, hell no. And it can absolutely be, and at some point will be, too late to try and do the right thing.
The good news is, I believe that today is not that day. And tomorrow looks good, too.