Bret Taylor and Larry Summers (members of the current OpenAI board) have responded to Helen Toner and Tasha McCauley in The Economist.
The key passages:
Helen Toner and Tasha McCauley, who left the board of OpenAI after its decision to reverse course on replacing Sam Altman, the CEO, last November, have offered comments on the regulation of artificial intelligence (AI) and events at OpenAI in a By Invitation piece in The Economist.
We do not accept the claims made by Ms Toner and Ms McCauley regarding events at OpenAI. Upon being asked by the former board (including Ms Toner and Ms McCauley) to serve on the new board, the first step we took was to commission an external review of events leading up to Mr Altman’s forced resignation. We chaired a special committee set up by the board, and WilmerHale, a prestigious law firm, led the review. It conducted dozens of interviews with members of OpenAI's previous board (including Ms Toner and Ms McCauley), OpenAI executives, advisers to the previous board and other pertinent witnesses; reviewed more than 30,000 documents; and evaluated various corporate actions. Both Ms Toner and Ms McCauley provided ample input to the review, and this was carefully considered as we came to our judgments.
The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners.”
Furthermore, in six months of nearly daily contact with the company we have found Mr Altman highly forthcoming on all relevant issues and consistently collegial with his management team. We regret that Ms Toner continues to revisit issues that were thoroughly examined by the WilmerHale-led review rather than moving forward.
Ms Toner has continued to make claims in the press. Although perhaps difficult to remember now, OpenAI released ChatGPT in November 2022 as a research project to learn more about how useful its models are in conversational settings. It was built on GPT-3.5, an existing AI model which had already been available for more than eight months at the time.
we have found Mr Altman highly forthcoming
He was caught lying about the non-disparagement agreements, but I guess lying to the public is fine as long as you don't lie to the board?
Taylor's and Summers' comments here are pretty disappointing—it seems that they have no issue with, and maybe even endorse, Sam's now-publicly-verified bad behavior.
we have found Mr Altman highly forthcoming
That's exactly the line that made my heart sink.
I find it a weird thing to choose to say/emphasize.
The issue under discussion isn't whether Altman hid things from the new board; it's whether he hid things from the old board a long while ago.
Of course he's going to seem forthcoming towards the new board at first. So, the new board having the impression that he was forthcoming towards them? This isn't information that helps us much in assessing whether to side with Altman vs the old board. That makes me think: why report on it? It would be a more relevant update if Taylor or Summers were willing to stick their necks out a little further and say something stronger and more direct, something more in the direction of (hypothetically), "In all our by-now extensive interactions with Altman, we got the sense that he's the sort of person you can trust; in fact, he had surprisingly circumspect and credible things to say about what happened, and he seems self-aware about things that he could've done better (and those things seem comparatively small or at least very understandable)." If they had added something like that, it would have been more interesting and surprising. (At least for those who are currently skeptical or outright negative towards Altman; but also "surprising" in terms of "nice, the new board is really invested in forming their own views here!").
By contrast, this combination of basically defending Altman (and implying pretty negative things about Toner and McCauley's objectivity and their judgment on things that they deem fair to tell the media), but doing so without sticking their necks out, makes me worried that the board is less invested in outcomes and more invested in playing their role. By "not sticking their necks out," I mean the outsourcing of judgment-forming to the independent investigation and the mentioning of clearly unsurprising and not-very-relevant things like whether Altman has been forthcoming to them so far. By "less invested in outcomes and more invested in playing their role," I mean the possibility that the new board maybe doesn't consider it important to form opinions at the object level (on Altman's character and his suitability for OpenAI's mission, and generally having a burning desire to make the best CEO-related decisions). Instead, the alternative mode they could be in would be having in mind a specific "role" that board members play, which includes things like, e.g., "check whether Altman ever gets caught doing something outrageous," "check if he passes independent legal reviews," or "check if Altman's answers seem reassuring when we occasionally ask him critical questions." And then, that's it, job done. If that's the case, I think that'd be super unfortunate. The more important the org, the more it matters to have an engaged/invested board that considers itself ultimately responsible for CEO-related outcomes ("will history look back favorably on their choices regarding the CEO").
To sum up, I'd have much preferred it if their comments had either included them sticking their necks out a little more, or if I had gotten from them more of a sense of still withholding judgment. I think the latter would have been possible even in combination with still reminding the public that Altman (e.g.) passed that independent investigation or that some of the old board members' claims against him seem thinly supported, etc. (If that's their impression, fair enough.) For instance, it's perfectly possible to say something like, "In our duty as board members, we haven't noticed anything unusual or worrisome, but we'll continue to keep our eyes open." That's admittedly pretty similar, in substance, to what they actually said. Still, it would read as a lot more reassuring to me because of its different emphasis. My alternative phrasing would help convey that (1) they don't naively believe that Altman – in worlds where he is dodgy – would have likely already given things away easily in interactions with them, and (2) that they consider themselves responsible for the outcome (and not just the following of common procedures) of whether OpenAI will be led well and in line with its mission.
(Maybe they do in fact have these views, 1 and 2, but didn't do a good job here at reassuring me of that.)
The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners.”
Note that Toner did not make claims regarding product safety, security, the pace of development, OAI's finances, or statements to investors (the board are not investors), customers, or business partners (the board are not business partners). She said he was not honest with the board.
I'm not sure what to make of this omission.
OpenAI's March 2024 summary of the WilmerHale report included:
The firm conducted dozens of interviews with members of OpenAI’s prior Board, OpenAI executives, advisors to the prior Board, and other pertinent witnesses; reviewed more than 30,000 documents; and evaluated various corporate actions. Based on the record developed by WilmerHale and following the recommendation of the Special Committee, the Board expressed its full confidence in Mr. Sam Altman and Mr. Greg Brockman’s ongoing leadership of OpenAI.
[...]
WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal.
I'd guess that telling lies to the board would mandate removal. If that's right, then the summary suggests that they didn't find evidence of this.
It's also notable that Toner and McCauley have not provided public evidence of “outright lies” to the board. We also know that whatever evidence they shared in private during that critical weekend did not convince key stakeholders that Sam should go.
The WSJ reported:
Some board members swapped notes on their individual discussions with Altman. The group concluded that in one discussion with a board member, Altman left a misleading perception that another member thought Toner should leave, the people said.
I really wish they'd publish these notes.
If we presume that Graham’s story is accurate, it still means that Altman took on two incompatible leadership positions, and only stepped down from one of them when asked to do so by someone who could fire him. That isn’t being fired. It also isn’t entirely not being fired.
According to the most friendly judge (e.g. GPT-4o), if it was made clear Altman would get fired from YC if he did not give up one of his CEO positions, then ‘YC fired Altman’ is a reasonable claim. I do think precision is important here, so I would prefer ‘forced to choose’ or perhaps ‘effectively fired.’ Yes, that is a double standard on precision, no I don’t care.
I think that Paul Graham’s remarks today—particularly the “we didn’t want him to leave” part—make it clear that Altman was not fired.
In December 2023, Paul Graham gave a similar account to the Wall Street Journal and said “it would be wrong to use the word ‘fired’”.
Roon has a take.
These are the remarks Zvi was referring to in the post. Also worth noting Graham's consistent choice of the word 'agreed' rather than 'chose', and Altman's failed attempt to transition to chairman/advisor to YC. It sure doesn't sound like Altman was the one making the decisions here.
Altman's failed attempt to transition to chairman/advisor to YC
Of some relevance in this context is that Altman has apparently for years been claiming to be YC Chairman (including in filings to the SEC): https://www.bizjournals.com/sanfrancisco/inno/stories/news/2024/04/15/sam-altman-y-combinator-board-chair.html
Helen Toner went on the TED AI podcast, giving us more color on what happened at OpenAI. These are important claims to get right.
I will start with my notes on the podcast, including the second part where she speaks about regulation in general. Then I will discuss some implications more broadly.
Notes on Helen Toner’s TED AI Show Podcast
This seems like it deserves the standard detailed podcast treatment. By default each note’s main body is description, any second-level notes are me.
Things That Could Have Been Brought To Our Attention Previously
A particular note from Helen Toner’s podcast: The OpenAI board learned about the release of ChatGPT from Twitter. They were not informed in advance.
This was nowhere near as crazy as it now sounds. The launch was relatively quiet and no one saw the reaction coming. I do not think that, on its own, this mistake would be egregious given the low expectations. You still should inform your board of new product launches, even if they are ‘research previews,’ but corners get cut.
As an isolated incident of not informing the board, I would be willing to say this is a serious process failure but ultimately not that big a deal. But this is part of a years-long (by Toner’s account) pattern of keeping the board in the dark and often outright lying to it.
Altman’s continual ‘saying that which was not’ and also ‘failure to say that which was and was also relevant’ included safety issues along with everything else.
It is the pattern that matters, and that is hard to convey to outsiders. As she says in the podcast, any one incident can be explained away, but a consistent pattern cannot. Any one person’s sense of the situation can be written off. A consistent pattern of it, say by two executives plus all the board members who aren’t either Altman or his right-hand man Brockman, should be a lot harder to dismiss. Alas, statements with substance could not be given.
Only now do we understand the non-disparagement and non-disclosure agreements and other tactics used to silence critics, along with other threats and leverage. Indeed, it damn well sure sounds like Toner is holding back a lot of the story.
Thus, one way or another, this all falls under ‘things that could have been brought to our attention yesterday’ on so many levels.
Alas, it is too late now. The new board clearly wants business as usual.
Bret Taylor Responds
The only contradiction of Toner’s claims, so far, has been Paul Graham’s statement that Sam Altman was not fired from YC. Assuming we believe Paul’s story, which I mostly do, that puts whether Altman was effectively fired in a gray area.
Bret Taylor, the current OpenAI board chair, took a different approach.
In response to Toner’s explanations, Taylor did not dispute any of the specific claims, or her account in general. Instead, he made the case that Altman should still be CEO of OpenAI, and that Toner talking was bad for business, so she should cut that out.
Notice the Exact Words here.
So yes. Those are all true statements, and very much things the board chair should say if he has decided he does not want the trouble of firing Altman as CEO.
With one possible exception, none of it in any way contradicts anything said by Toner.
Indeed, this looks awfully close to a corroboration.
Notice that Toner did not make any claims regarding product safety or security, the pace of development, OpenAI’s finances, or any statements to investors, customers or business partners not related to OpenAI having an independent board. And I am happy to believe that those potentially false statements about the board’s independence were not a consideration in the firing of Altman.
Whether or not the company is focused on its ‘mission to ensure AGI benefits all of humanity’ is an open question where I think any reasonable outsider would be highly skeptical at this point given everything we now know, and would treat that as an empty corporate slogan.
I believe that the independent report’s conclusion is technically correct, the best kind of correct. If we are to draw any further conclusion than the exact words? Well, let’s see the report, then.
None of that goes to whether it was wise to respond by firing Altman, or whether the board would have been wise to do so if they had executed better.
How Much Does This Matter?
Is the new information damning for Sam Altman? Opinions vary.
The specific claim that the board was not informed of ChatGPT’s launch does not seem much more damaging, on the margin, than the things we already know. As I have said before, ‘lying to the board about important things’ seems to me the canonical offense that forces the board to consider firing the CEO, and in my book lying in an attempt to control the board is the one that forces you to outright fire the CEO, but we already put that part together.
The additional color does help crystallize and illustrate the situation. It clarifies the claims. The problem is that when there is the sum of a lot of bad incidents, any one of which could be excused as some combination of sloppy, a coincidence, not so bad, not sufficiently proven, or similar, there is a tendency to focus only on the single worst thing, or even to evaluate based on the least bad of all the listed things.
We got explicit confirmation that Altman lied to the board in an attempt to remove Toner from the board. To me, this remains by far the worst offense, on top of other details. We also got the news about Altman hiding his ownership of the AI startup fund. That seems like a potentially huge deal to hide from the board.
Why, people then ask, are you also harping on what is only like the 9th and 11th worst things we have heard about? Why do you ‘keep revisiting’ such issues? Why can’t you understand that you fought power, power won, and now you don’t have any?
Because the idea of erasing our memories, of saying that if you get away with it then it didn’t count, is one of the key ways to excuse such patterns of awful behavior.
If You Come at the King
OpenAI’s Joshua Achiam offered a reasonable take, saying that the board was well meaning and does not deserve to be ‘hated or ostracized,’ but they massively screwed up. Achiam thinks they made the wrong choice firing Altman, the issues were not sufficiently severe, but that this was not obvious, and the decision not so unreasonable.
His other claim, however, is that even if firing had been the right choice, the board then had a duty, if they went through with it, to provide a clear and convincing explanation to all the stakeholders, not only the employees.
Essentially everyone agrees that the board needed to provide a real explanation. They also agree that the board did not do so, and that this doomed the attempt to fire Altman without destroying the company, whether or not it had a shot anyway. If your approach will miss, it does not matter what he has done, you do not come at the king.
And that seems right.
For a vindictive king who will use the attempt to consolidate power? Doubly so.
The wrinkle remains why the board did not provide a better explanation. Why did they not get written statements from the two other executives, and issue additional statements themselves, if only internally or to other executives and key stakeholders? We now know that they considered this step for weeks, and on some level for years. I get that they feared Altman fighting back, but even given that, this was clearly a massive strategic blunder. What gives?
It must be assumed that part of that answer is still hidden.
So That is That
Perhaps we will learn more in the future. There is still one big mystery left to solve. But more and more, the story is confirmed, and the story makes perfect sense.
Altman systematically withheld information from and on many occasions lied to the board. This included lying in an attempt to remove Toner from the board so Altman could appoint new members and regain control. The board quite reasonably could not trust Altman, and had tried for years to institute new procedures without success. Then they got additional information from other executives that things were worse than they knew.
Left with no other options, the board fired Altman. But they botched the firing, and now Altman is back and has de facto board control to run the company as a for-profit startup, whether or not he has a full rubber stamp. And the superalignment team has been denied its promised resources and largely driven out of the company, and we have additional highly troubling revelations on other fronts.
The situation is what it is. The future is still coming. Act accordingly.