I feel like what I want here is a campaign to make sure history remembers the specific people who let this happen – names of board members, Attorney General Kathy Jennings, etc.
It feels achievable and correct to me for this to be a thing where, if you're going to do this, a lot of people associate your name with enabling the theft.
I feel ambivalent and complicated about this. In some objective sense, I think that the Attorneys General enabled a huge theft and (I think more importantly) made humanity a lot less safe than it could have been if they had acted in a different way that was also totally within their power. So in an objective sense they enabled great harm.
On the other hand, I get the sense that they did a lot more than they had to, and more than most people who are knowledgeable about this kind of thing expected them to, and the negotiation seems complicated enough that it seems like they at least tried to engage with the issue (an area they were probably unfamiliar with and not well-staffed to adjudicate) in a pretty deep way. They were probably under enormous pressure. I also get the sense that Attorney General Jennings is less susceptible to pressure from companies and more concerned with the rule of law than most attorneys general. And so in a relative sense, I think that it's possible that they did a pretty good job.
I feel worse about the board members, both because I think this was much more directly their responsibility, and because I generally get the sense that they allow or even encourage a lot of egregious behavior from OpenAI that's contrary to OpenAI's mission. Compared to the reference class of nonprofit board members, I think they come off much worse than Jennings does compared to the reference class of attorneys general.
Even for the attorneys general, I think you could make a case that there ought to be some sort of social punishment, even if the way that they acted was in some sense normal or above-average. That could be both because we want to change the norm / incentivize better behavior in the future and for decision theory reasons (even if what they did was normal or above-average compared to how most attorneys general handle most cases, we might want it to be the case that people think that they'll be remembered badly by history if they act so suboptimally in such important circumstances).
I think this rhetoric will just confuse people. Who stole from whom? Well, one part of OpenAI (the for-profit part) "stole" the money from another part of OpenAI (the non-profit part). And what did they "steal" - equity? a share of future profits? In other words, stock-market funny-money and enormous projected income that hasn't happened yet. Maybe it's meaningful to people who follow corporate finance, but for normal people, this is just more shuffling around of billions of dollars of the kind that governments and corporations and super-rich do all the time, with the rights and wrongs of it being very opaque to outsiders, but probably evil just because it involves enormous amounts of money. The idea that OpenAI is stealing from itself especially sounds weird.
I think the idea is that non-profit money is much closer to public property, especially when the non-profit has this kind of charter. So the complaints about a potentially unfair deal are legitimate.
But I am not sure that the non-profit ends up with less expected future money as a result, even before taking the as-yet-undisclosed warrants into account.
On one hand, it presumably has a smaller share than before (it’s tricky to know exactly, with capped profits for other investors and such; one really needs to calculate more precisely and not just presume). On the other hand, the restructuring is expected to increase OpenAI’s future market share by enabling it to expand faster. So, in expectation, this is, presumably, a smaller share of a larger pie. Whether that smaller share of a larger pie is smaller in expectation than the pre-existing arrangement is not clear.
The material effect of this restructuring is non-obvious: one needs to do some quantitative modeling, to take into account the undisclosed terms of the warrants, and so on in order to figure this out.
(It’s probably not an accident that Microsoft’s equity in OpenAI is not far from 10x what they invested, their original profit cap was 100x, and the warrants require another 10x growth in valuation to start kicking in. It is likely that the board was trying to formulate a fair replacement for that 100x profit cap when formulating the warrants (though it’s really annoying that the terms of those warrants don’t seem to be disclosed; or might they actually be disclosed somewhere deep in the filed documents?).)
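To make those magnitudes concrete, here is a rough back-of-envelope sketch. The warrant terms are not public, so the investment figure and the "another 10x" trigger below are assumptions for illustration, not disclosed numbers:

```python
# Illustrative back-of-envelope only; the actual terms (especially the warrants)
# are not public, and every figure below is an assumption or a rounded report.

msft_invested = 13e9         # assumed: roughly $13B total Microsoft investment (as widely reported)
openai_valuation = 500e9     # assumed: ~$500B current valuation
msft_stake = 0.27            # Microsoft's reported post-restructuring share

msft_equity_value = msft_stake * openai_valuation
multiple = msft_equity_value / msft_invested
print(f"Microsoft equity: ~${msft_equity_value/1e9:.0f}B, ~{multiple:.1f}x what it invested")

old_cap_multiple = 100       # the original capped-profit structure for early investors
print(f"A 100x cap would have limited returns to ~${old_cap_multiple * msft_invested/1e9:.0f}B")

# One reading of 'the warrants require another 10x growth in valuation':
print(f"Warrant trigger under that reading: ~${10 * openai_valuation/1e12:.0f}T valuation")
```

Under those assumptions the warrants would only start paying the nonprofit once investors have already realized returns on the order of the old cap, which is roughly the "fair replacement for the 100x cap" logic suggested above.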
Health and curing diseases. The OpenAI Foundation will fund work to accelerate health breakthroughs so everyone can benefit from faster diagnostics, better treatments, and cures. This will start with activities like the creation of open-sourced and responsibly built frontier health datasets, and funding for scientists.
The first seems like a generally worthy cause that is highly off mission. There’s nothing wrong with health and curing diseases, but pushing this now does not advance the fundamental mission of OpenAI. They are going to start with, essentially, doing AI capabilities research and diffusion in health, and funding scientists to do AI-enabled research. A lot of this will likely fall right back into OpenAI and be good PR.
Again, that’s a net positive thing to do, happy to see it done, but that’s not the mission.
I don't think that's correct. The mission is to ensure that AGI benefits all humanity. There are various facets of it, but dealing with health, diseases(, and aging) is one of the main ways smarter and smarter AI systems are expected to benefit all humanity.
AI systems are strong enough already to start contributing in this sense, so it's time for OpenAI to start pushing explicitly in this direction. Also it would be good if AIs see that we actually value this direction.
But going deeper into that is probably not for this comment. In your previous post you wrote:
Sam Altman: We have a safety strategy that relies on 5 layers: Value alignment, Goal alignment, Reliability, Adversarial robustness, and System safety. Chain-of-thought faithfulness is a tool we are particularly excited about, but it is somewhat fragile and requires drawing a boundary and a clear abstraction.
All five of these are good things, but I notice (for reasons I will not attempt to justify here) that I do not expect he who approaches the problem in this way to have a solution that scales to true automated AI researchers. The Tao is missing.
That's certainly correct. None of what they have been saying sheds any light on how to scale this safety strategy to the situation when one has true automated AI researchers. We should be discussing various aspects of this fundamental problem more.
Still, value alignment is fundamental, and the importance of taking care of health issues of humans is an important part of value alignment, so it's a good thing for them to start emphasizing that.
AI systems are strong enough already to start contributing in this sense, so it's time for OpenAI to start pushing explicitly in this direction.
I'm not sure that follows. Does diverting resources in that direction now help more than spending those same resources on making AGI development go better, in order to help more later? I expect anything AI can do now, it will be able to do vastly better and cheaper in a future with AGI.
Note: If we could somehow get all the AI labs to slow down the push for AGI and divert resources to 1) alignment work and 2) these kinds of good causes, I'd find that to be a more compelling argument.
I think those “lines in the sand” are very artificial. That’s especially true about AGI, because the road to superintelligence goes not via human equivalence, but around it.
So at any point in time we have AI systems which are still deficient compared to humans along some dimensions, too much so to be called “true AGI”, but also strongly superhuman along a larger and larger number of dimensions. At the point in time when all important deficiencies compared to humans are gone and we can call a system “AGI” without reservations, it’s already wildly superhuman along many dimensions (including many capabilities related to biomedical research).
But also we expect continuous progress, we don’t expect saturation, so at any given point in time any given task remains easier to accomplish in the future. But that’s not a good reason to postpone, because we usually need the solution ASAP. People are dying now, more than a million each week, and the sooner we can start to meaningfully decrease this number, the better.
In any case, AIs need to get better at biomedical research in order to be helpful with this, and it takes time. I doubt there is a generic intelligence capability from which everything follows automatically and super rapidly. The direction is towards artificial research assistants, then to artificial researchers, then to very superhuman artificial researchers, but one still needs to push it for any given application field. (Of course, people prioritize AI research first, for obvious reasons, and that’s also where the most formidable existential safety challenges come from, because artificial AI researchers do mean straightforward non-saturating recursive self-improvement, so safety-wise we should talk about that aspect first. But it’s good that they are pushing towards research help in more applied areas too, when those applied areas are urgent. It grounds the whole thing in the right values and the right priorities to some extent. If it slows down the rush to superintelligence a bit, it might be a positive thing too. Although I don’t really expect a slowdown from that, I think AI practitioners and AIs themselves will learn a lot from those “biomedical exercises”.)
I agree with a lot of what you're saying, and it made me realize I left out some of my reasoning that's maybe more central than I realized.
Namely, what is the rate-limiting step in getting improved outcomes for people, health-wise? I would say the limiter is regulatory, in ways I don't see current or near term AI significantly altering. In other words, under OpenAI's own claimed timelines, I wouldn't expect AI-assisted health innovation to generate real world results before close-enough-to-AGI-to-be-really-dangerous gets developed. Of course we should be using AI to advance medicine faster as soon as we can do so. But I don't see why we need a non-profit to fund that, when it will also be very profitable to the companies that will use it. Conversely, an additional $25B invested in making future AI safer doesn't have a whole lot of other funders lining up to make it happen.
Yes, in this sense you are right. In many countries, regulatory barriers are all-important. Although, a good chunk of the world can start adopting fast (and medical tourism does exist).
I think the main body of OpenAI will be dealing with the key safety issues, not even the whole main body, but the "core group". They have to: the key safety problems are of such a nature that they can't be dealt with from the outside, and the non-profit is "the outside" in this sense; it can only direct/advise/assent/review the plans, but it can't do more than that, it just doesn't know how. We got a glimpse of OpenAI's current thinking on "core safety" from Jakub Pachocki during the latest livestream (that's who they now have instead of Ilya). It sounded good modulo the main difficulty, and we don't know if they are well prepared to address the main difficulty: maintaining invariant properties through the accelerating recursive self-improvement provided by artificial AI researchers, so not letting those properties diverge, tightening the delta between what those properties ideally should be and what they currently are, and making sure that the probability of a big disaster per unit of time not only does not grow, but diminishes fast enough that the accumulated probability of a big disaster remains moderate in the infinite limit.
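To make that last condition concrete, here is a toy sketch (an illustration only, treating periods as independent; the numbers are assumptions, not anything from OpenAI's plans): whether the accumulated risk stays bounded depends on the per-period risk shrinking fast enough, not merely holding steady.

```python
# Toy illustration: cumulative disaster probability under a constant per-period
# hazard vs. a geometrically shrinking one. All figures are arbitrary assumptions.

def cumulative_risk(hazards):
    """P(disaster by the end) = 1 - prod(1 - p_t), treating periods as independent."""
    p_safe = 1.0
    for p in hazards:
        p_safe *= (1 - p)
    return 1 - p_safe

periods = 1000
constant = [0.01] * periods                          # 1% risk every period, forever
decaying = [0.01 * 0.9**t for t in range(periods)]   # risk shrinks geometrically

print(f"constant hazard: {cumulative_risk(constant):.3f}")  # -> ~1.000, disaster nearly certain
print(f"decaying hazard: {cumulative_risk(decaying):.3f}")  # -> ~0.1, bounded well below 1
```

With a constant hazard, disaster becomes nearly certain given enough time; only a hazard that shrinks fast enough keeps the accumulated probability bounded, which is the property the take-off process would need to maintain.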
The other big project led by the non-profit, the cybersecurity improvements, shows that the non-profit is ready to lead on externalities, on systemic safety problems downstream of AI development. They are better equipped to do that: they have connections across the industry, and this kind of work requires systemic action and a lot of coordination.
(I presume their biomedical project will also try to quietly (or not so quietly) include prevention of artificial pandemics, which is another big downstream safety externality of AI development. The non-profit is capable of driving that.)
But with the core safety of self-modifying, self-improving systems, one can't split safety and capability; it has to be the same group of people: a group of leading AI researchers who need to be strongly mindful of existential safety, to have a correct approach to collaborating on that set of issues with AI systems, and to drive a take-off jointly with collaborating AI systems (I don't know if OpenAI has the right group of people in this sense these days).
Your argument that OpenAI stole money here is poorly thought-out.
OpenAI's ~$500b valuation priced in a very high likelihood of it becoming a for-profit.
If it wasn't going to be a for-profit its valuation would be much lower.
And if it wasn't going to be a for-profit the odds of it having any control whatsoever over the creation of ASI would be very much reduced.
It seems likely the public gained billions from this.
A merger negotiation is one of the best opportunities, if not the best, for improving safety practices and focusing on the competitive benefits of a public-safety-centered strategy. This event right here is very similar. A truly competent negotiating team could and should have extracted a dramatically stronger safety strategy in exchange for this much equity. If the game was even somewhat fair, that was completely doable. This did not happen; the opportunity was largely wasted.
OpenAI is now set to become a Public Benefit Corporation, with its investors entitled to uncapped profit shares. Its nonprofit foundation will retain some measure of control and a 26% financial stake, in sharp contrast to its previous stronger control and much, much larger effective financial stake. The value transfer is in the hundreds of billions, thus potentially the largest theft in human history.
I say potentially largest because I realized one could argue that the events surrounding the dissolution of the USSR involved a larger theft. Unless you really want to stretch the definition of what counts, this seems to be in the top two.
I am in no way surprised by OpenAI moving forward on this, but I am deeply disgusted and disappointed they are being allowed (for now) to do so, including this statement of no action by Delaware and this Memorandum of Understanding with California.
Many media and public sources are calling this a win for the nonprofit, such as this from the San Francisco Chronicle. This is mostly them being fooled. They’re anchoring on OpenAI’s previous plan to far more fully sideline the nonprofit. This is indeed a big win for the nonprofit compared to OpenAI’s previous plan. But the previous plan would have been a complete disaster, an all but total expropriation.
It’s as if a mugger demanded all your money, you talked them down to taking only half, and you called that exchange a ‘change that recapitalized you.’
OpenAI Calls It Completing Their Recapitalization
As in, they claim OpenAI has ‘completed its recapitalization’ and the nonprofit will now only hold equity OpenAI claims is valued at approximately $130 billion (as in 26% of the company, which, to be fair, is actually worth substantially more than that if they get away with this), as opposed to its previous status of holding the bulk of the profit interests in a company valued at (when you include the nonprofit interests) well over $500 billion, along with a presumed gutting of much of the nonprofit’s highly valuable control rights.
They claim this additional clause, which presumably means the foundation is getting warrants, though they don’t offer the details here:
We don’t know what ‘significant’ additional equity means; there’s some sort of unrevealed formula going on, but given the nonprofit got expropriated last time I have no expectation that these warrants will get honored. We will be lucky if the nonprofit meaningfully retains the remainder of its equity.
Sam Altman’s statement on this is here, also announcing his livestream Q&A that took place on Tuesday afternoon.
How Much Was Stolen?
There can be reasonable disagreements about exactly how much. It’s a ton.
There used to be a profit cap, where in Greg Brockman’s own words, ‘If we succeed, we believe we’ll create orders of magnitude more value than any existing company — in which case all but a fraction is returned to the world.’
Well, so much for that.
I looked at this question in The Mask Comes Off: At What Price a year ago.
If we take seriously that OpenAI is looking to go public at a $1 trillion valuation, then consider that Matt Levine estimated the old profit cap as only going up to about $272 billion, and that OpenAI is still a bet on extreme upside.
I guess Altman is okay with that now?
Obviously you can’t base your evaluations on a projection that puts the company at a value of $30.9 trillion, and that calculation is deeply silly, for many overdetermined and obvious reasons, including decreasing marginal returns to profits.
It is still true that most of the money OpenAI makes in possible futures, it makes as part of profits in excess of $1 trillion.
I think Levine’s estimate was low at the time, and you also have to account for equity raised since then or that will be sold in the IPO, but it seems obvious that the majority of future profit interests were, prior to the conversion, still in the hands of the non-profit.
Even if we thought the new control rights were as strong as the old, we would still be looking at a theft in excess of $250 billion, and a plausible case can be made for over $500 billion. I leave the full calculation to others.
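For a sense of how one gets numbers that large, here is a deliberately crude sketch. It treats the ~$500 billion figure as pricing only the non-nonprofit interests (as noted above) and leaves the nonprofit's prior share of total profit interests as an assumed parameter, since the actual capped-profit waterfall is not public:

```python
# Crude illustration only: the real profit-interest waterfall is not public,
# so the nonprofit's prior share f is an assumed parameter, not a known number.

for_profit_valuation = 500e9                        # ~$500B for the non-nonprofit interests
nonprofit_new_value = 0.26 * for_profit_valuation   # the new 26% stake, ~$130B

for f in (0.40, 0.50, 0.60):
    # If the nonprofit previously held a fraction f of all future profit interests,
    # the implied total value is for_profit_valuation / (1 - f).
    implied_total = for_profit_valuation / (1 - f)
    prior_value = f * implied_total
    loss = prior_value - nonprofit_new_value
    print(f"prior share {f:.0%}: prior value ~${prior_value/1e9:.0f}B, transfer ~${loss/1e9:.0f}B")
```

On those assumed parameters, a prior share of 40% already implies a transfer above $200 billion, and the 'majority of future profit interests' framing above pushes it to $370 billion or more, which is the sense in which $250 billion reads like a floor rather than a ceiling.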
The vote in the board was unanimous.
I wonder exactly how and by whom they will be sued over it, and what will become of that. Elon Musk, at a minimum, is trying.
They say behind every great fortune is a great crime.
The Nonprofit Still Has Lots of Equity After The Theft
Altman points out that the nonprofit could become the best-resourced non-profit in the world if OpenAI does well. This is true. There is quite a lot they were unable to steal. But it is beside the point, in that it does not make taking the other half, including changing the corporate structure without permission, not theft.
There’s no perhaps on that last clause. On this level, whether or not you agree with the term ‘theft,’ it isn’t even close; this is the largest transfer. Of course, if you take the whole of OpenAI’s nonprofit from inception, performance looks better.
Yes, it is true that the nonprofit, after the theft and excluding control rights, will have an on-paper valuation only slightly lower than the on-paper value of all of Anthropic.
The $500 billion valuation excludes the non-profit’s previous profit share, so even if you think the nonprofit was treated fairly and lost no control rights you would then have it be worth $175 billion rather than $130 billion, so yes slightly less than Anthropic, and if you acknowledge that the nonprofit got stolen from, it’s even more.
If OpenAI can successfully go public at a $1 trillion valuation, then depending on how much of that is new shares they will be selling, the nonprofit could be worth up to $260 billion.
What about some of the comparable governance structures here? Coursera does seem to be a rather straightforward B-corp. The others don’t?
Patagonia has the closely held Patagonia Purpose Trust, which holds 2% of shares and 100% of voting control, and The Holdfast Collective, which is a 501(c)(4) nonprofit with 98% of the shares and profit interests. The Chouinard family has full control over the company, and 100% of profits go to charitable causes.
Does that sound like OpenAI’s new corporate structure to you?
Change.org’s nonprofit owns 100% of its PBC.
Does that sound like OpenAI’s new corporate structure to you?
Anthropic is a PBC, but also has the Long Term Benefit Trust. One can argue how meaningfully different this is from OpenAI’s new corporate structure, if you disregard who is involved in all of this.
What the new structure definitely is distinct from is the original intention:
The Theft Was Unnecessary For Further Fundraising
Would OpenAI have been able to raise further investment without withdrawing its profit caps for investments already made?
When you put it like that it seems like obviously yes?
I can see the argument that to raise funds going forward, future equity investments need to not come with a cap. Okay, fine. That doesn’t mean you hand past investors, including Microsoft, hundreds of billions in value in exchange for nothing.
One can argue this was necessary to overcome other obstacles, that OpenAI had already allowed itself to be put in a stranglehold another way and had no choice. But the fundraising story does not make sense.
The argument that OpenAI had to ‘complete its recapitalization’ or risk being asked for its money back is even worse. Investors who put in money at under $200 billion are going to ask for a refund when the valuation is now at $500 billion? Really? If so, wonderful, I know a great way to cut them that check.
How Much Control Will The Nonprofit Retain?
I am deeply disappointed that both the Delaware and California attorneys general found this deal adequate on equity compensation for the nonprofit.
I am however reasonably happy with the provisions on control rights, which seem about as good as one can hope for given the decision to convert to a PBC. I can accept that the previous situation was not sustainable in practice given prior events.
The new provisions include an ongoing supervisory role for the California AG, and extensive safety veto points for the NFP and the SSC committee.
If I was confident that these provisions would be upheld, and especially if I was confident their spirit would be upheld, then this is actually pretty good, and if it is used wisely and endures it is more important than their share of the profits.
The nonprofit will indeed retain substantial resources and influence, but no I do not expect the public safety mission to dominate the OpenAI enterprise. Indeed, contra the use of the word ‘ongoing,’ it seems clear that it already had ceased to do so, and this seems obvious to anyone tracking OpenAI’s activities, including many recent activities.
What is the new control structure?
OpenAI did not say, but the Delaware AG tells us more and the California AG has additional detail. NFP means OpenAI’s nonprofit here and throughout.
This is the Delaware AG’s non-technical announcement (for the full list see California’s list below), she has also ‘warned of legal action if OpenAI fails to act in public interest’ although somehow I doubt that’s going to happen once OpenAI inevitably does not act in the public interest:
What did California get?
California also has its own Memorandum of Understanding. It talks a lot in its declarations about California in particular, how OpenAI creates California jobs and economic activity (and ‘problem solving’?) and is committed to doing more of this and bringing benefits and deepening its commitment to the state in particular.
The whole claim via Tweet by Sam Altman that he did not threaten to leave California is raising questions supposedly answered by his Tweet. At this level you perhaps do not need to make your threats explicit.
The actual list seems pretty good, though? Here’s a full paraphrased list, some of which overlaps with Delaware’s announcement above, but which is more complete.
Also, it’s not even listed in the memo, but the ‘merge and assist’ clause was preserved, meaning OpenAI commits to join forces with any ‘safety-conscious’ rival that has a good chance of reaching OpenAI’s goal of creating AGI within a two-year time frame. I don’t actually expect an OpenAI-Anthropic merger to happen, but it’s a nice extra bit of optionality.
This is better than I expected, and as Ben Shindel points out better than many traders expected. This actually does have real teeth, and it was plausible that without pressure there would have been no teeth at all.
It grants the NFP the sole power to appoint and remove directors, and requires them not to consider the for-profit mission in safety contexts. The explicit granting of the power to halt deployments and mandate mitigations, without having to cite any particular justification and without respect to profitability, is highly welcome, if structured in a functional fashion.
It is remarkable how little many expected to get. For example, here’s Todor Markov, who didn’t even expect the NFP to be able to replace directors at all. If you can’t do that, you’re basically dead in the water.
I am not a lawyer, but my understanding is that the ‘no cheating around this’ clauses are about as robust as one could reasonably hope for them to be.
It’s still, as Garrison Lovely calls it, ‘on paper’ governance. Sometimes that means governance in practice. Sometimes it doesn’t. As we have learned.
The distinction between the boards still means the NFP is an additional level removed from the PBC. In a fast-moving situation, this makes a big difference, and the NFP likely would have to depend on its enumerated additional powers being respected. I would very much have liked them to include the power to appoint or fire the CEO directly.
Whether this overall ‘counts as a good deal’ depends on your baseline. It’s definitely a ‘good deal’ versus what our realpolitik expectations projected. One can argue that if the control rights really are sufficiently robust over time, that the decline in dollar value for the nonprofit is not the important thing here.
The counterargument to that is both that those resources could do a lot of good over time, and also that giving up the financial rights has a way of leading to further giving up control rights, even if the current provisions are good.
Will These Control Rights Survive And Do Anything?
Similarly to many issues of AI alignment, if an entity has ‘unnatural’ control, or ‘unnatural’ profit interests, then there are strong forces that continuously try to take that control away. As we have already seen.
Unless Altman genuinely wants to be controlled, the nonprofit will always be under attack, fighting at every move to hold its ground. On a long enough time frame, that becomes a losing battle.
Right now, the OpenAI NFP board is essentially captured by Altman, and also identical to the PBC board. They will become somewhat different, but no matter what it only matters if the PBC board actually tries to fulfill its fiduciary duties rather than being a rubber stamp.
One could argue that all of this matters little, since the boards will both be under Altman’s control and likely overlap quite a lot, and they were already ignoring their duties to the nonprofit.
So yes, there is that.
They claim to now be a public benefit corporation, OpenAI Group PBC.
This is a mischaracterization of how PBCs work. It’s more like the flip side of this. A conventional corporation is supposed to maximize profits and can be sued if it goes too far in not doing that. Unlike a conventional corporation, a PBC is allowed to consider those broader interests to a greater extent, but it is not in practice ‘required’ to do anything other than maximize profits.
One particular control right is the special duty to the mission, especially via the safety and security committee. How much will they attempt to downgrade the scope of that?
What About OpenAI’s Deal With Microsoft?
They have an announcement about that too.
Anyone else notice something funky here? OpenAI’s nonprofit has had its previous rights expropriated, and been given 26% of OpenAI’s shares in return. If Microsoft had 32.5% of the company excluding the nonprofit’s rights before that happened, then that should give them 24% of the new OpenAI. Instead they have 27%.
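For concreteness, here is the arithmetic behind that 24% figure (a minimal sketch: the 32.5% and 27% figures are the reported ones above, and pro-rata dilution is the assumption):

```python
# Quick check of the dilution arithmetic above.

msft_pre_share = 0.325       # reported ~32.5% of the company, excluding the nonprofit's rights
nonprofit_new_share = 0.26   # the nonprofit's new stake

# If the nonprofit's 26% diluted everyone else pro rata, Microsoft would hold:
expected = msft_pre_share * (1 - nonprofit_new_share)
print(f"Expected Microsoft share: {expected:.1%}")        # ~24%

actual = 0.27
extra = actual - expected
print(f"Actual: {actual:.0%}, extra: {extra:.1%}")        # roughly 3 points
print(f"Extra value at a $500B valuation: ~${extra * 500e9 / 1e9:.0f}B")
```

Roughly three percentage points, on the order of $15 billion at the current valuation, is the gap in question.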
I don’t know anything nonpublic on this, but it sure looks a lot like Microsoft insisted they have a bigger share than the nonprofit (27% vs. 26%) and this was used to help justify this expropriation and a transfer of additional shares to Microsoft.
In exchange, Microsoft gave up various choke points it held over OpenAI, including potential objections to the conversion, and clarified points of dispute.
Microsoft got some upgrades in here as well.
That’s kind of a wild set of things to happen here.
In some key ways Microsoft got a better deal than it previously had. In particular, AGI used to be something OpenAI seemed like it could simply declare (you know, like war or the Defense Production Act) and now it needs to be verified by an ‘expert panel,’ which implies there is additional language I’d very much like to see.
In other ways OpenAI comes out ahead. An incremental $250B of Azure services sounds like a lot but I’m guessing both sides are happy with that number. Getting rid of the right of first refusal is big, as is having their non-API products free and clear. Getting hardware products fully clear of Microsoft is a big deal for the Jony Ive project.
My overall take here is this was one of those broad negotiations where everything trades off, nothing is done until everything is done, and there was a very wide ZOPA (zone of possible agreement) since OpenAI really needed to make a deal.
What Will OpenAI’s Nonprofit Do Now?
In theory govern the OpenAI PBC. I have my doubts about that.
What they do have is a nominal pile of cash. What are they going to do with it to supposedly ensure that AGI goes well for humanity?
The default, as Garrison Lovely predicted a while back, is that the nonprofit will essentially buy OpenAI services for nonprofits and others, recapture much of the value, and serve as a form of indulgences, marketing, and a way to satisfy critics, which may or may not do some good along the way.
The initial $50 million spend looked a lot like exactly this.
Their new ‘initial focus’ for $25 billion will be in these two areas:
They literally did the meme.
The first seems like a generally worthy cause that is highly off mission. There’s nothing wrong with health and curing diseases, but pushing this now does not advance the fundamental mission of OpenAI. They are going to start with, essentially, doing AI capabilities research and diffusion in health, and funding scientists to do AI-enabled research. A lot of this will likely fall right back into OpenAI and be good PR.
Again, that’s a net positive thing to do, happy to see it done, but that’s not the mission.
Technical solutions for AI resilience could potentially at least be useful AI safety work to some extent. With a presumed ~$12 billion, this is a vast overconcentration of safety effort into things that are worth doing but ultimately don’t seem likely to be determining factors. Note how Altman described it in his tl;dr from the Q&A:
This is now infinitely broad. It could be addressing ‘economic impact’ and be basically a normal (ineffective) charity, or one that intervenes mostly by giving OpenAI services to normal nonprofits. It could be mostly spent on valuable technical safety, and be on the most important charitable initiatives in the world. It could be anything in between, in any distribution. We don’t know.
My default assumption is that this is primarily going to be about mundane safety or even fall short of that, and make the near term world better, perhaps importantly better, but do little to guard against the dangers or downsides of AGI or superintelligence, and again largely be a de facto customer of OpenAI.
There’s nothing wrong with mundane risk mitigation or defense in depth, and nothing wrong with helping people who need a hand, but if your plan is ‘oh we will make things resilient and it will work out’ then you have no plan.
That doesn’t mean this will be low impact, or that what OpenAI left the nonprofit with is chump change.
I also don’t want to knock the size of this pool. The previous nonprofit initiative was $50 million, which can do a lot of good if spent well (in that case, I don’t think it was), but in this context $50 million is chump change.
Whereas $25 billion? Okay, yeah, we are talking real money. That can move needles, if the money actually gets spent in short order. If it’s $25 billion as a de facto endowment spent down over a long time, then this matters and counts for a lot less.
The warrants are quite far out of the money and the NFP should have gotten far more stock than it did, but 26% (worth $130 billion or more) remains a lot of equity. You can do quite a lot of good in a variety of places with that money. The board of directors of the nonprofit is highly qualified if they want to execute on that. It also is highly qualified to effectively shuttle much of that money right back to OpenAI’s for profit, if that’s what they mainly want to do.
It won’t help much with the whole ‘not dying’ or ‘AGI goes well for humanity’ missions, but other things matter too.
Is The Deal Done?
Not entirely. As Garrison Lovely notes, all these sign-offs are provisional, and there are other lawsuits as well as the potential for more. In a world where Elon Musk’s payouts can get clawed back, I wouldn’t be too confident that this conversion sticks. It’s not as if the Delaware AG is the source of most objections to corporate actions anyway.
The last major obstacle is the Elon Musk lawsuit, where standing is at issue but the judge has made clear that the suit otherwise has merit. There might be other lawsuits on the horizon. But yeah, probably this is happening.
So this is the world we live in. We need to make the most of it.