New Things Have Come To Light
The Information offers us new information about what happened when the board of OpenAI unsuccessfully tried to fire Sam Altman, which I call
The Battle of the Board.
The Information: OpenAI co-founder Ilya Sutskever shared new details on the internal conflicts that led to Sam Altman’s initial firing, including a memo alleging Altman exhibited a “consistent pattern of lying.”
Liv: Lots of people dismiss Sam’s behaviour as typical for a CEO but I really think we can and should demand better of the guy who thinks he’s building the machine god.
Toucan: From Ilya’s deposition—
• Ilya plotted over a year with Mira to remove Sam
• Dario wanted Greg fired and himself in charge of all research
• Mira told Ilya that Sam pitted her against Daniela
• Ilya wrote a 52 page memo to get Sam fired and a separate doc on Greg
This Really Was Primarily A Lying And Management Problem
Daniel Eth: A lot of the OpenAI boardroom drama has been blamed on EA – but looks like it really was overwhelmingly an Ilya & Mira led effort, with EA playing a minor role and somehow winding up as a scapegoat
Peter Wildeford: It seems troubling that the man doing trillions of dollars of infrastructure spending in order to transform the entire fabric of society also has a huge lying problem.
I think this is like on an extra bad level even for typical leaders.
Charles: I haven’t seen many people jumping to defend Altman with claims like “he doesn’t have a huge lying problem” either, it’s mostly claims that map to “I don’t care, he gets shit done”.
Joshua Achiam (OpenAI Head of Mission Alignment): There is plenty to critique about Sam in the same way there is plenty to critique about any significant leader. But it kills me to see what kind of tawdry, extreme stuff people are willing to believe about him.
When we look back years from now with the benefit of hindsight, it’s my honest belief that the record will show he was no more flawed than anyone, more virtuous than most, and did his best to make the world a better place. I also expect the record will show that he succeeded.
Joshua Achiam spoke out recently about some of OpenAI’s unethical legal tactics, and this is about as full-throated a defense of Altman’s behaviors as I have seen. As with anyone important, no matter how awful they are, some people are going to believe they’re even worse, or worse in particular false ways. And in many ways, as I have consistently said, I find Altman to be well ‘above replacement’ as someone to run OpenAI, and I would not want to swap him out for a generic replacement executive.
I do still think he has a rather severe (even for his peer group) lying and manipulation problem, and a power problem, and that ‘no more flawed than anyone’ or ‘more virtuous than most’ seems clearly inaccurate, as is reinforced by the testimony here.
As I said at the time, The Battle of the Board, as in the attempt to fire Altman, was mostly not a fight over AI safety and not motivated by safety. It was about ordinary business issues.
Ilya Tells Us How It Went Down And Why He Tried To Do It
Ilya had been looking to replace Altman for a year. The witness here is Ilya; here’s the transcript link. If you are interested in the details, consider reading the whole thing.
Here are some select quotes:
Q. So for — for how long had you been planning to propose removal of Sam?
A. For some time. I mean, “planning” is the wrong word because it didn’t seem feasible.
Q. It didn’t seem feasible?
A. It was not feasible prior; so I was not planning.
Q. How — how long had you been considering it?
A. At least a year.
The other departures from the board, Ilya reports, made the math work where it didn’t before. Until then, the majority of the board had been friendly with Altman, which basically made moving against him a non-starter. So that’s why he tried when he did. Note that all the independent directors agreed on the firing.
…
[As Read] Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another. That was clearly your view at the time?
A: Correct.
…
Q. This is the section entitled “Pitting People Against Each Other.”
A. Yes.
Q. And turning on the next page, you see an example that’s offered is “Daniela versus Mira”?
A. Yes.
Q. Is “Daniela” Daniela Amodei?
A. Yes.
Q. Who told you that Sam pitted Daniela against Mira?
A. Mira.
…
Q. In the section below that where it says “Dario versus Greg, Ilya”—
A. Yes.
Q. — you see that?
A. Yes.
Q. The complaint — it says — you say here that:
[As Read] Sam was not taking a firm position in respect of Dario wanting to run all of research at OpenAI and to have Greg fired. Do you see that?
A. I do see that.
Q. And “Dario” is Dario Amodei?
A. Yes.
Q. Why were you faulting Sam for Dario’s efforts?
THE WITNESS: So my recollection of what I wrote here is that I was faulting Sam for not accepting or rejecting Dario’s conditions.
And for fun:
ATTORNEY MOLO: That’s all you’ve done the entire deposition is object.
ATTORNEY AGNOLUCCI: That’s my job. So —
ATTORNEY MOLO: Actually, it’s not.
…
ATTORNEY MOLO: Yeah, don’t raise your voice.
ATTORNEY AGNOLUCCI: I’m tired of being told that I’m talking too much.
ATTORNEY MOLO: Well, you are.
If You Come At The King
Best not miss.
What did Sutskever and Murati think firing Altman meant? Vibes, paper, essays?
What happened here was, it seems, that Ilya Sutskever and Mira Murati came at the king for the very good reasons one might come at a king, combined with Altman’s attempt to use lying to oust Helen Toner from the board.
But those involved (including the rest of the board) didn’t execute well. Because of various fears, both Murati and Sutskever refused during the fight to explain to the employees or the world what they were upset about, then lost their nerve and folded. That refusal to explain, and especially Murati’s refusal to back the move after setting it in motion, was fatal.
Do they regret coming at the king and missing? Yes they do, and they did within a few days. That doesn’t mean they’d regret it if it had worked. And I continue to think that if they’d been forthcoming about the reasons from the start, and otherwise executed well, it would have worked, and Mira Murati could have been OpenAI CEO.
Now, of course, it’s too late, and it would take a ten times worse set of behaviors for Altman to get into this level of trouble again.
Enter The Scapegoats
It really was a brilliant response, to scapegoat Effective Altruism and the broader AI safety movement as the driving force and motivation for the change, thus with this one move burying Altman’s various misdeeds, remaking the board, purging the company and justifying the potentially greatest theft in human history while removing anyone who would oppose the path of commercialization. Well played.
This scapegoating continues to this day. For the record,
Helen Toner (I believe highly credibly) clarifies that Ilya’s version of the events related to the extremely brief consideration of a potential merger was untrue, and unrelated to the rest of events.
And In Summary
The below is terrible writing, presumably from an AI, but yeah this sums it all up:
Pogino (presumably an AI generated Twitter reply): “This reframes the OpenAI power struggle as a clash of personalities and philosophies, not a proxy war for EA ideology.
Ilya’s scientific purism and Mira’s governance assertiveness collided with Altman’s entrepreneurial pragmatism — a tension intrinsic to mission-driven startups scaling into institutions. EA may have provided the vocabulary, but the conflict’s grammar was human: trust, ambition, and control.”