Approximately four GPTs and seven years ago, OpenAI’s founders brought forth on this corporate landscape a new entity, conceived in liberty, and dedicated to the proposition that all men might live equally when AGI is created.

Now we are engaged in a great corporate war, testing whether that entity, or any entity so conceived and so dedicated, can long endure.

What matters is not theory but practice. What happens when the chips are down?

So what happened? What prompted it? What will happen now?

To a large extent, even more than usual, we do not know. We should not pretend that we know more than we do.

Rather than attempt to interpret here or barrage with an endless string of reactions and quotes, I will instead do my best to stick to a compilation of the key facts.

(Note: All times stated here are eastern by default.)

Just the Facts, Ma’am

What do we know for sure, or at least close to sure?

Here is OpenAI’s corporate structure, giving the board of the 501c3 the power to hire and fire the CEO. It is explicitly dedicated to its nonprofit mission, over and above any duties to shareholders of secondary entities. Investors were warned that there was zero obligation to ever turn a profit:

A block diagram of OpenAI's unusual structure, provided by OpenAI.

Here are the most noteworthy things we know happened, as best I can make out.

  1. On Friday afternoon at 3:28pm, the OpenAI board fired Sam Altman, appointing CTO Mira Murati as temporary CEO effective immediately. They did so over a Google Meet that did not include then-chairman Greg Brockman.
  2. Greg Brockman, Altman’s old friend and ally, was removed as chairman of the board but the board said he would stay on as President. In response, he quit.
  3. The board told almost no one. Microsoft got one minute of warning.
  4. Mira Murati is the only other person we know was told, which happened on Thursday night.
  5. From the announcement by the board: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
  6. In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”
  7. OpenAI’s board of directors at this point: OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.
  8. Usually a 501c3’s board must have a majority of members not employed by the company. Instead, OpenAI said that a majority of its board did not have a financial stake in the company, due to Sam Altman holding zero equity.
  9. In response to many calling this a ‘board coup’: “You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” (AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do.) When Sutskever was asked whether “these backroom removals are a good way to govern the most important company in the world?” he answered: “I mean, fair, I agree that there is a not ideal element to it. 100%.”
  10. Other than that, the board said nothing in public. I am willing to outright say that, whatever the original justifications, the removal attempt was insufficiently considered and planned and massively botched. Either they had good reasons that justified these actions and needed to share them, or they didn’t.
  11. There had been various clashes between Altman and the board. We don’t know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional other companies. ChatGPT was a great consumer product, but supercharged AI development counter to OpenAI’s stated non-profit mission.
  12. OpenAI was previously planning an oversubscribed share sale at a valuation of $86 billion that was to close a few weeks later.
  13. Board member Adam D’Angelo said in a Forbes interview in January: “There’s no outcome where this organization is one of the big five technology companies. This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”
  14. Sam Altman on October 16: “4 times in the history of OpenAI––the most recent time was in the last couple of weeks––I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime.” There was speculation that events were driven in whole or in part by secret capabilities gains within OpenAI, possibly from a system called Gobi, perhaps even related to the joking claim ‘AI has been achieved internally’ but we have no concrete evidence of that.
  15. Ilya Sutskever co-leads the Superalignment Taskforce, has very short timelines for when we will get AGI, and is very concerned about AI existential risk.
  16. Sam Altman was involved in starting multiple new major tech companies. He was looking to raise tens of billions from Saudis to start a chip company. He was in other discussions for an AI hardware company.
  17. Sam Altman has stated time and again, including to Congress, that he takes existential risk from AI seriously. He was part of the creation of OpenAI’s corporate structure. He signed the CAIS letter. OpenAI spent six months on safety work before releasing GPT-4. He understands the stakes. One can question OpenAI’s track record on safety, many did including those who left to found Anthropic. But this was not a pure ‘doomer vs. accelerationist’ story.
  18. Sam Altman is very good at power games such as fights for corporate control. Over the years he earned the loyalty of his employees, many of whom moved in lockstep, using strong strategic ambiguity. Hand very well played.
  19. Essentially all of VC, tech, founder, financial Twitter united to condemn the board for firing Altman and for how they did it, as did many employees, calling upon Altman to either return to the company or start a new company and steal all the talent. The prevailing view online was that no matter its corporate structure, it was unacceptable to fire Altman, who had built the company, or to endanger OpenAI’s value by doing so. That it was good and right and necessary for employees, shareholders, partners and others to unite to take back control.
  20. Talk in those circles is that this will completely discredit EA or ‘doomerism’ or any concerns over the safety of AI, forever. Yes, they say this every week, but this time it was several orders of magnitude louder and more credible. The New York Times somehow gets this backwards. Whatever else this is, it’s a disaster.
  21. By contrast, those concerned about existential risk, and some others, pointed out that the unique corporate structure of OpenAI was designed for exactly this situation. They also mostly noted that the board clearly handled decisions and communications terribly, but that there was much unknown, and tried to avoid jumping to conclusions.
  22. Thus we are now answering the question: What is the law? Do we have law? Where does the power ultimately lie? Is it the charismatic leader that ultimately matters? Who you hire and your culture? Can a corporate structure help us, or do commercial interests and profit motives dominate in the end?
  23. Great pressure was put upon the board to reinstate Altman. They were given two 5pm Pacific deadlines, on Saturday and Sunday, to resign. Microsoft’s aid, and that of its CEO Satya Nadella, was enlisted in this. We do not know what forms of leverage Microsoft did or did not bring to that table.
  24. Sam Altman tweets ‘I love the openai team so much.’ Many at OpenAI respond with hearts, including Mira Murati.
  25. Invited by employees including Mira Murati and other top executives, Sam Altman visited the OpenAI offices on Sunday. He tweeted ‘First and last time i ever wear one of these’ with a picture of his visitors pass.
  26. The board does not appear to have been at the building at the time.
  27. Press reported that the board had agreed to resign in principle, but that snags were hit over who the replacement board would be, and over whether or not they would need to issue a statement absolving Altman of wrongdoing, which could be legally perilous for them given their initial statement.
  28. Bloomberg reported on Sunday at 11:16pm that temporary CEO Mira Murati aimed to rehire Altman and Brockman, while the board sought an alternative CEO.
  29. OpenAI board hires former Twitch CEO Emmett Shear to be the new CEO. He issues his initial statement here. I know a bit about him. If the board needs to hire a new CEO from outside that takes existential risk seriously, he seems to me like a truly excellent pick, I cannot think of a clearly better one. The job set for him may or may not be impossible. Shear’s PPS in his note: “Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I’m not crazy enough to take this job without board support for commercializing our awesome models.”
  30. New CEO Emmett Shear has made statements in favor of slowing down AI development, although not a stop. His p(doom) is between 5% and 50%. He has said ‘My AI safety discourse is 100% “you are building an alien god that will literally destroy the world when it reaches the critical threshold but be apparently harmless before that.”’ Here is a thread and video link with more, transcript here or a captioned clip. Here he is tweeting a 2×2 faction chart a few days ago.
  31. Microsoft CEO Satya Nadella posts 2:53am Monday morning: We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett Shear and OAI’s new leadership team and working with them. And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.
  32. Sam Altman retweets the above with ‘the mission continues.’ Brockman confirms. Other leadership joining includes Jakub Pachocki, the GPT-4 lead, along with Szymon Sidor and Aleksander Madry.
  33. Nadella continued in reply: I’m super excited to have you join as CEO of this new group, Sam, setting a new pace for innovation. We’ve learned a lot over the years about how to give founders and innovators space to build independent identities and cultures within Microsoft, including GitHub, Mojang Studios, and LinkedIn, and I’m looking forward to having you do the same.
  34. Ilya Sutskever posts 8:15am Monday morning: I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company. Sam retweets with three heart emojis. Jan Leike, the other head of the superalignment team, Tweeted that he worked through the weekend on the crisis, and that the board should resign.
  35. Microsoft stock was down 1% after hours on Friday, and was back to roughly its previous value on Monday morning and at the open. All priced in. Neither Google nor the S&P 500 made major moves either.
  36. 505 of 700 employees of OpenAI, including Ilya Sutskever, sign a letter telling the board to resign and reinstate Altman and Brockman, threatening to otherwise move to Microsoft to work in the new subsidiary under Altman, which will have a job for every OpenAI employee. Full text of the letter that was posted:

      To the Board of Directors at OpenAI,

      OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.

      The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.

      When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.

      The leadership team suggested that the most stabilizing path forward – the one that would best serve our mission, company, stakeholders, employees and the public – would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

      Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.

      1. Mira Murati
      2. Brad Lightcap
      3. Jason Kwon
      4. Wojciech Zaremba
      5. Alec Radford
      6. Anna Makanju
      7. Bob McGrew
      8. Srinivas Narayanan
      9. Che Chang
      10. Lillian Weng
      11. Mark Chen
      12. Ilya Sutskever
  37. There is talk that OpenAI might completely disintegrate as a result, that ChatGPT might not work a few days from now, and so on.
  38. It is very much not over, and still developing.
  39. There is still a ton we do not know.
  40. This weekend was super stressful for everyone. Most of us, myself included, sincerely wish none of this had happened. Based on what we know, there are no villains in the actual story that matters here. Only people trying their best under highly stressful circumstances with huge stakes and wildly different information and different models of the world and what will lead to good outcomes. In short, to all who were in the arena for this on any side, or trying to process it, rather than spitting bile: ❤.

Later, when we know more, I will have many other things to say, many reactions to quote and react to. For now, everyone please do the best you can to stay sane and help the world get through this as best you can.


The key news today: Altman had attacked Helen Toner (HN, Zvi), which explains everything if you recall board structures and voting.

Altman and the board had been unable to appoint new directors because there was an even balance of power, so during the deadlock/low-grade cold war, the board had attrited down to hardly any people. He thought he had Sutskever on his side, so he moved to expel Helen Toner from the board. He would then be able to appoint new directors of his choice. This would have irrevocably tipped the balance of power towards Altman. But he didn't have Sutskever like he thought he did, and they had, briefly, enough votes to fire Altman before he broke Sutskever (as he did yesterday), and they went for the last-minute hail-mary with no warning to anyone.

As always, "one story is good, until another is told"...

The WSJ has published additional details about the Toner fight, filling in the other half of the story. The NYT merely mentions the OA execs 'discussing' it, but the WSJ reports much more specifically that the exec discussion of Toner was a Slack channel that Sutskever was in, and that approximately 2 days before the firing and 1 day before Mira was informed* (ie. the exact day Ilya would have flipped if they had then fired Altman about as fast as possible to schedule meetings 48h before & vote), he saw them say that the real problem was EA and that they needed to get rid of EA associations.

The specter of effective altruism had loomed over the politics of the board and company in recent months, particularly after the movement’s most famous adherent, Sam Bankman-Fried, the founder of FTX, was found guilty of fraud in a highly public trial.

Some of those fears centered on Toner, who previously worked at Open Philanthropy. In October, she published an academic paper touting the safety practices of OpenAI’s competitor, Anthropic, which didn’t release its own AI tool until ChatGPT’s emergence. “By delaying the release of Clau

... (read more)

Thanks, this makes more sense than anything else I've seen, but one thing I'm still confused about:

If the factions were Altman-Brockman-Sutskever vs. Toner-McCauley-D'Angelo, then even assuming Sutskever was an Altman loyalist, any vote to remove Toner would have been tied 3-3. I can't find anything about tied votes in the bylaws - do they fail? If so, Toner should be safe. And in fact, Toner knew she (secretly) had Sutskever on her side, and it would have been 4-2. If Altman manufactured some scandal, the board could have just voted to ignore it.

So I still don't understand "why so abruptly?" or why they felt like they had to take such a drastic move when they held all the cards (and were pretty stable even if Ilya flipped).

Other loose ends:

  • Toner got on the board because of OpenPhil's donation. But how did McCauley get on the board?
  • Is D'Angelo a safetyist?
  • Why wouldn't they tell anyone, including Emmett Shear, the full story?

I can't find anything about tied votes in the bylaws - do they fail?

I can't either, so my assumption is that the board was frozen ever since Hoffman/Hurd left for that reason.

And there wouldn't've been a vote at all. I've explained it before but - while we wait for phase 3 of the OA war to go hot - let me take another crack at it, since people seem to keep getting hung up on this and seem to imagine that it's a perfectly normal state of a board to be in a deathmatch between two opposing factions indefinitely, and so confused why any of this happened.

In phase 1, a vote would be pointless, and neither side could nor wanted to force it to a vote. After all, such a vote (regardless of the result) is equivalent to admitting that you have gone from simply "some strategic disagreements among colleagues all sharing the same ultimate goals and negotiating in good faith about important complex matters on which reasonable people of goodwill often differ" to "cutthroat corporate warfare where it's-them-or-us everything-is-a-lie-or-fog-of-war fight-to-the-death there-can-only-be-one". You only do such a vote in the latter situation; in the former, you just keep negotiating until you reach a ... (read more)

I note that the articles I have seen have said things like (emphasis mine). If Shear had been unable to get any information about the board's reasoning, I very much doubt that they would have included the word "written".
Daniel:
A 3-3 tie between the CEO founder of the company, the president founder of the company, and the chief scientist of the company vs three people with completely separate day jobs who never interact with rank-and-file employees is not a stable equilibrium. There are ways to leverage this sort of soft power into breaking the formal deadlock, for example: as we saw last week.
I have envisaged a scenario in which the US intelligence community has an interagency working group on AI, and Toner and McCauley were its de facto representatives on the OpenAI board, Toner for CIA, McCauley for NSA. Maybe someone who has studied the history of the board can tell me whether that makes sense, in terms of its shifting factions.

Why would Toner be related to the CIA, and how is McCauley NSA?

If OpenAI is running out of money, and is too dependent on Microsoft, defense/intelligence/government is not the worst place for them to look for money. There are even possible futures where they are partially nationalised in a crisis. Or perhaps they will help with regulatory assessment. This possibility certainly makes the Larry Summers appointment take on a different light, given his ties to not only Microsoft, but also the Government.

David Hornbein:
Toner's employer, the Center for Security and Emerging Technology (CSET), was founded by Jason Matheny. Matheny was previously the Director of the Intelligence Advanced Research Projects Activity (IARPA), and is currently CEO of the RAND Corporation. CSET is currently led by Dewey Murdick, who previously worked at the Department of Homeland Security and at IARPA. Much of CSET's initial staff was former (or "former") U.S. intelligence analysts, although IIRC they were from military intelligence rather than the CIA specifically. Today many of CSET's researchers list prior experience with U.S. civilian intelligence, military intelligence, or defense intelligence contractors. Given the overlap in staff and mission, U.S. intelligence clearly and explicitly has a lot of influence at CSET, and it's reasonable to suspect a stronger connection than that. I don't see it for McCauley though.
Toner's university has a long history of association with the CIA. Just google "georgetown cia" and you'll see more than I can summarize.  As for McCauley, well, I did call this a "scenario"... The movie maker Oliver Stone rivals Chomsky as the voice of an elite political counterculture who are deadly serious in their opposition to what the American deep state gets up to, and whose ranks include former insiders who became leakers, whistleblowers, and ideological opponents of the system. When Stone, already known as a Wikileaks supporter, decided to turn his attention to NSA's celebrity defector Edward Snowden, he ended up casting McCauley's actor boyfriend as the star.  My hunch, my scenario, is that people associated with the agency, or formerly associated with the agency, put him forward for the role, with part of the reason being that he was already dating one of their own. What we know about her CV - robotics, geographic information systems, speaks Arabic, mentored by Alan Kay - obviously doesn't prove anything, but it's enough to make this scenario work, as a possibility. 

We shall see. I'm just ignoring the mainstream media spins at this point.

For those of us who don't know yet, criticizing the accuracy of mainstream Western news outlets is NOT a strong Bayesian update against someone's epistemics, especially on a site like LessWrong (it doesn't matter how many idiots you might remember ranting about "mainstream media" on other sites; the numbers are completely different here).

There is a well-known dynamic called Gell-Mann Amnesia, where people strongly lose trust in mainstream Western news outlets on a topic they are an expert on, but routinely forget about this loss of trust when they read coverage on a topic that they can't evaluate accuracy on. Western news outlets Goodhart readers by depicting themselves as reliable instead of prioritizing reliability.

If you read major Western news outlets, or are new to major news outlets due to people linking to them on LessWrong recently, some basic epistemic prep can be found in Scott Alexander's The Media Very Rarely Lies and, if it's important, the follow-up posts.

Yeah, that makes sense and does explain most things, except that if I was Helen, I don't currently see why I wouldn't have just explained that part of the story early on?* Even so, I still think this sounds very plausible as part of the story.

*Maybe I'm wrong about how people would react to that sort of justification. Personally, I think the CEO messing with the board constitution to gain de facto ultimate power is clearly very bad and any good board needs to prevent that. I also believe that it's not a reason to remove a board member if they publish a piece of research that's critical of or indirectly harmful for your company. (Caveat that we're only reading a secondhand account of this, and maybe what actually happened would make Altman's reaction seem more understandable.) 

Hm, to add a bit more nuance, I think it's okay at a normal startup for a board to be comprised of people who are likely to almost always side with the CEO, as long as they are independent thinkers who could vote against the CEO if the CEO goes off the rails. So, it's understandable (or even good/necessary) for CEOs to care a lot about having "aligned" people on the board, as long as they don't just add people who never think for themselves. It gets more complex in OpenAI's situation where there's more potential for tensions between CEO and the board. I mean, there shouldn't necessarily be any tensions, but Altman probably had less of a say over who the original board members were than a normal CEO at a normal startup, and some degree of "norms-compliant maneuvering" to retain board control feels understandable because any good CEO cares a great deal about how to run things. So, it actually gets a bit murky and has to be judged case-by-case. (E.g., I'm sure Altman feels like what happened vindicated him wanting to push Helen off the board.) 
Ben Pace:
I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo. Pretty interesting that Sutskever/Tasha/D'Angelo would be willing to fire Altman just to prevent Helen from going. They instead could have negotiated someone to replace her. Wouldn't you just remove Altman from the Board, or maybe remove Brockman? Why would they be willing to decapitate the company in order to retain Helen?

They instead could have negotiated someone to replace her.

Why do they have to negotiate? They didn't want her gone, he did. Why didn't Altman negotiate a replacement for her, if he was so very upset about the damages she had supposedly done OA...?

"I understand we've struggled to agree on any replacement directors since I kicked Hoffman out, and you'd worry even more about safety remaining a priority if she resigns. I totally get it. So that's not an obstacle, I'll agree to let Toner nominate her own replacement - just so long as she leaves soon."

When you understand why Altman would not negotiate that, you understand why the board could not negotiate that.

I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo.

Recusal or not, Altman didn't want to bring it to something as overt as a vote expelling her. Power wants to conceal itself and deny the coup. The point here of the CSET paper pretext is to gain leverage and break the tie any way possible so it doesn't look bad or traceable to Altman: that's why this leaking is bad for Altman, it shows him at his least fuzzy and PR-friend... (read more)

I... still don't understand why the board didn't say anything? I really feel like a lot of things would have flipped if they had just talked openly to anyone, or taken advice from anyone. Like, I don't think it would have made them global heroes, and a lot of people would have been angry with them, but every time any plausible story about what happened came out, there was IMO a visible shift in public opinion, including on HN, and the board confirming any story or giving any more detail would have been huge. Instead they apparently "cited legal reasons" for not talking, which seems crazy to me.

Adam Scholl:
I can imagine it being the case that their ability to reveal this information is their main source of leverage (over e.g. who replaces them on the board).
My favorite low-probability theory is that he had blackmail material on one of the board members[1], who initially decided after much deliberation to go forwards despite the blackmail, and then when they realized they got outplayed by Sam not using the blackmail material, backpedaled and refused to dox themselves. And the other 2-3 didn't know what to do afterwards, because their entire strategy was predicated on optics management around said blackmail + blackmail material. 1. ^ Like something actually really bad.

It would be sheer insanity to have a rule that you can't vote on your own removal, I would think, or else a tied board will definitely shrink right away.

mako yass:
Wait, simple majority is an insane place to put the threshold for removal in the first place. Majoritarian shrinking is still basically inevitable if the threshold for removal is 50%; it should be higher than that, maybe 62%. And generally, if 50% of a group thinks A and 50% thinks ¬A, that tells you that the group is not ready to make a decision about A.
It is not clear, under the board structure of a non-profit, that Helen cannot vote on her own removal. The vote to remove Sam may have been some trickery around holding a quorum meeting without notifying Sam or Greg.
I think it was most likely unanimous among the remaining 4, otherwise one of the dissenters would've spoken out by now.
Tristan Wegner:
Here is the paper: Some more recent (Nov/Oct 2023) publications from her are here:
Manifold says 23% (*edit: the link doesn't go directly to that option; it shows up if you search "Helen") on it as "a significant factor for why Sam Altman was fired". It would make sense as a motivation, though it's a bit odd that the board would say that Sam was "not consistently candid" and not "trying to undermine the governance structure of the organization" in that case.

When I read this part of the letter, the authors seem to be throwing it in the face of the board like it is a damning accusation, but actually, as I read it, it seems very prudent and speaks well for the board.

You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

Maybe I'm missing some context, but wouldn't it be better for Open AI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent powerful and very smart entity).

This reminds me a lot of a blockchain project where I served as an ethicist, which was initially a "project" interested in advancing a "movement" and ended up with a bunch of people whose only real goal was to cash big paychecks for a long time (at which point I handled my residual duties to the best of my ability and resigned, with lots of people expressing extreme confusion and asking why I was acting "foolishly" or "incompetently" (except for a tiny number who got angry at me for not causing a BIGGER ex... (read more)

Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?

The problem, I suspect, is that people just can't get out of the typical "FOR THE SHAREHOLDERS" mindset. A company that is literally willing to commit suicide rather than be hijacked for purposes antithetical to its mission, like a cell dying by apoptosis rather than going cancerous, can be a very good thing; if only there were more of this. You can't beat Moloch if you're not willing to precommit to this sort of action. And let's face it, no one involved here is facing homelessness and soup kitchens even if OpenAI crashes tomorrow. They'll be a little worse off for a while, their careers will take a hit, and then they'll pick themselves up. If this were about the safety of humanity, it would be a no-brainer that you should be ready to sacrifice that much.

Michael Thiessen 10d
Sam's latest tweet suggests he can't get out of the "FOR THE SHAREHOLDERS" mindset: "satya and my top priority remains to ensure openai continues to thrive we are committed to fully providing continuity of operations to our partners and customers". This does sound antithetical to the charter and might be grounds to replace Sam as CEO.

I feel like, not unlike the situation with SBF and FTX, the delusion that OpenAI could possibly avoid this trap maps on the same cognitive weak spot among EA/rationalists of "just let me slip on the Ring of Power this once bro, I swear it's just for a little while bro, I'll take it off before Moloch turns me into his Nazgul, trust me bro, just this once".

This is honestly entirely unsurprising. Rivers flow downhill, and companies that are part of a capitalist economy, producing stuff with tremendous potential economic value, converge on making a profit.

The corporate structure of OpenAI was set up as an answer to concerns (about AGI and control over AGIs) which were raised by rationalists. But I don't think rationalists believed that this structure was a sufficient solution to the problem, any more than non-rationalists believed it. The rationalists I have been speaking to were generally quite sceptical about OpenAI.

Oh, I mean, sure, scepticism about OpenAI was already widespread, no question. But in general it seems to me like there have been too many too-clever-by-half attempts from people at least adjacent in ways of thinking to rationalism/EA (like Elon) that go "I want to avoid X-risk but also develop aligned friendly AGI for myself", and the result is almost invariably that it just advances capabilities more than safety. I just think sometimes there's a tendency to underestimate the pull of incentives and how you often can't have your cake and eat it too. I remain convinced that if one wants to avoid X-risk from AGI, the safest road is probably to just strongly advocate for not building AGI, and to put it in the same bin as "human cloning" as a fundamentally unethical technology. It's not a great shot, but it's probably the best one at stopping it. Being wishy-washy doesn't pay off.
Seth Herd 9d
I think you're in the majority in this opinion around here. I notice I'm confused about the lack of enthusiasm for developing alignment methods for the types of AGI that are being developed. Trying to get people to stop building it would be ideal, but I don't see a path to it. The actual difficulty of alignment seems mostly unknown, so it is potentially vastly more tractable. Yet such efforts make up a tiny part of x-risk discussion. This isn't an argument for building AGI, but for aligning the specific AGI others build.
Personally I am fascinated by the problems of interpretability, and I would consider "no more GPTs for you guys until you figure out at least the main functioning principles of GPT-3" a healthy exercise in actual ML science. But I also have to acknowledge that such an understanding would make distillation far more powerful and thus also lead to a corresponding advance in capabilities. I am honestly stumped at what "I want to do something" looks like that doesn't somehow end up backfiring. It may be that the problem is just thinking this way in the first place, and this really is just a, shudder, political problem, which tech/science can only make worse.
Seth Herd 9d
That all makes sense. Except that this is exactly what I'm puzzled by: a focus on solutions that probably won't work ("no more GPTs for you guys" is approximately impossible), instead of solutions that still might - working on alignment, and trading off advances in alignment for advances in AGI. It's like the field has largely given up on alignment, and we're just trying to survive a few more months by making sure to not contribute to AGI at all. But that makes no sense. MIRI gave up on aligning a certain type of AGI for good reasons. But nobody has seriously analyzed prospects for aligning the types of AGI we're likely to get: language model agents or loosely brainlike collections of deep nets. When I and a few others write about plans for aligning those types of AGI, we're largely ignored. The only substantive comments are "well there are still ways those plans could fail", but not arguments that they're actually likely to fail. Meanwhile, everyone is saying we have no viable plans for alignment, and acting like that means it's impossible. I'm just baffled by what's going on in the collective unspoken beliefs of this field.
I'll be real, I don't know what everyone else thinks, but personally I can say I wouldn't feel comfortable contributing to anything AGI-related at this point, because I have very low trust that even aligned AGI would result in a net good for humanity with this kind of governance. I can imagine that maybe, amidst all the bargains with the Devil, there is one that will genuinely pay off and is the lesser evil, but I can't tell which one. I think the wise thing to do would be just not to build AGI at all, but that's not a realistically open path. So yeah, my current position is that literally any action I could take advances the kind of future I would want by an amount that is at best below the error margin of my guesses, and at worst negative. It's not a super nice spot to be in, but it's where I'm at and I can't really lie to myself about it.
Gerald Monroe 10d
In the cancer case, the human body begins with every cell aligned with the body. Anthropically, this has to function until breeding age plus enough offspring to beat losses. And yes, if faulty cells self-destruct instead of continuing, this is good; there are cancer treatments that try to gene-edit in clean copies of the specific genes (p53, as I recall) that mediate this (works in rats...). However, the corporate world/international-competition world has many more actors, and they are adversarial. OAI self-destructing leaves the world's best AI researchers unemployed and removes them from competing in the next round of model improvements; whoever makes a GPT-5 at a competitor will have the best model outright. Coordination is hard. Consider the consequences if an entire town decided to stop consuming fossil fuels. They pay the extra costs and rebuild the town to be less car-dependent. However, the consequence is that this lowers the market price of fossil fuels, so others use more. (Demand elasticity makes the net effect still slightly positive.)
I mean, yes, a company self-destructing doesn't stop much if their knowledge isn't also actively deleted - and even then, it's just a setback of a few months. But also, by going "oh well we need to work inside the system to fix it somehow" at some point all you get is just another company racing with all others (and in this case, effectively being a pace setter). However you put it, OpenAI is more responsible than any other company for how close we may be to AGI right now, and despite their stated mission, I suspect they did not advance safety nearly as much as capability. So in the end, from the X-risk viewpoint, they mostly made things worse.

I agree with all of this in principle, but I am hung up on how opaque it all is. Up until now the board has determinedly remained opaque.

If corporate seppuku is on the table, why not be transparent? How does being opaque serve the mission?

I wrote a LOT of words in response to this, talking about personal professional experiences that are not something I coherently understand myself as having a duty (or timeless permission?) to share, so I have reduced my response to something shorter and more general. (Applying my own logic to my own words, in realtime!)

There are many cases (arguably stupid or counter-productive cases, but cases) that come up more and more when deals and laws and contracts become highly entangling.

It's illegal to "simply" ask people for money in exchange for a transferable right to future dividends on a money-making project, sealed with a handshake. The SEC commands silence sometimes and will put you in a cage if you don't comply.

You get elected to local office and suddenly the Brown Act (which I'd repeal as part of my reboot of the Californian Constitution had I the power) forbids you from talking with your co-workers (other elected officials) about work (the city government) at a party. 

A Confessor is forbidden kinds of information leak.

Fixing <all of this (gesturing at nearly all of human civilization)> isn't something that we have the time or power to do before w... (read more)

This seems to presuppose that there is a strong causal effect from OpenAI's destruction to avoiding creation of an omnicidal AGI, which doesn't seem likely? The real question is whether OpenAI was, on the margin, a worse front-runner than its closest competitors, which is plausible, but then the board should have made that case loudly and clearly, because, entirely predictably, their silence has just made the situation worse.

The most likely explanation I can think of, for what look like about-faces by Ilya and Jan this morning, is realizing that the worst plausible outcome is exactly what we're seeing: Sam running a new OpenAI at Microsoft, free of that pesky charter. Any amount of backpedaling, and even resigning in favor of a less safety-conscious board, is preferable to that.

They came at the king and missed.

Yeah but if this is the case, I'd have liked to see a bit more balance than just retweeting the tribal-affiliation slogan ("OpenAI is nothing without its people") and saying that the board should resign (or, in Ilya's case, implying that he regrets and denounces everything he initially stood for together with the board). Like, I think it's a defensible take to think that the board should resign after how things went down, but the board was probably pointing to some real concerns that won't get addressed at all if the pendulum now swings way too much in the opposite direction, so I would have at least hoped for something like "the board should resign, but here are some things that I think they had a point about, which I'd like to see to not get shrugged under the carpet after the counter-revolution."

It's too late for a conditional surrender now that Microsoft is a credible threat to get 100% of OpenAI's capabilities team; Ilya and Jan are communicating unconditional surrender because the alternative is even worse.

I'm not sure this is an unconditional surrender. They're not talking about changing the charter, just appointing a new board. If the new board isn't much less safety conscious, then a good bit of the organization's original purpose and safeguards are preserved. So the terms of surrender would be negotiated in picking the new board.

AFAICT the only formal power the board has is in firing the CEO, so if we get a situation where whenever the board wants to fire Sam, Sam comes back and fires the board instead, well, it's not exactly an inspiring story for OpenAI's governance structure.

This is a very good point. It is strange, though, that the Board was able to fire Sam without the Chair agreeing to it. Something as big as firing the CEO should have required at least a conversation with the Chair, if not the Chair's affirmative vote. The way this was handled was a big mistake, and there need to be new rules in place to prevent big mistakes like this.

If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place. We were already in the "worst case scenario". Better to be honest about it. Then at least the rest of the organisation doesn't get to keep pointing to the charter and the board as approving their actions when they don't.

The charter that it is the board's duty to enforce doesn't say anything about how the rest of the document doesn't count if investors and employees make dire enough threats, I'm pretty sure.

If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place.

If you pushed for fire sprinklers to be installed, then yelled "FIRE", turned on the fire sprinklers, causing a bunch of water damage, and then refused to tell anyone where you thought the fire was and why you thought that, I don't think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.

Keep in mind that the announcement was not something like

After careful consideration and strategic review, the Board of Directors has decided to initiate a leadership transition. Sam Altman will be stepping down from his role, effective November 17, 2023. This decision is a result of mutual agreement and understanding that the company's long-term strategy and core values require a different kind of leadership moving forward.

Instead, the board announced

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his

... (read more)
The situation is actually even less surprising than this, because the thing people actually initially contemplated doing in response to the board's actions was not even 'taking away your ability to trigger the fire sprinklers' but 'going off and living in a new building somewhere else that you can't flood for lulz'. As I understand the situation, OpenAI's board had and retained the legal right to stay in charge of OpenAI as all its employees left for Microsoft. If they decide they would rather negotiate from their starting point of 'being in charge of an empty building' to 'making concessions', this doesn't mean that the charter didn't mean anything! It means that the charter gave them a bunch of power which they wasted.

If they thought this would be the outcome of firing Sam, they would not have done so.

The risk they took was calculated, but man, are they bad at politics.

I keep being confused by them not revealing their reasons. Whatever they are, there's no way that saying them out loud wouldn't give some ammo to those defending them, unless somehow between Friday and now they swung from "omg this is so serious we need to fire Altman NOW" to "oops looks like it was a nothingburger, we'll look stupid if we say it out loud". Do they think it's a literal infohazard or something? Is it such a serious accusation that it would involve the police to state it out loud?

At this point I'm beginning to wonder if a gag order is involved.
Interesting! Bad at politics is a good way to put it. So you think this was purely a political power move to remove Sam, and they were so bad at projecting the outcomes that all of them thought Greg would stay on board as President and employees would largely accept the change?
No, I don't think the board's motives were power politics; I'm saying that they failed to account for the kind of political power moves that Sam would make in response.
It's hard to know for sure, but I think this is a reasonable and potentially helpful perspective. Some of the perceived repercussions on the state of AI safety might be "the band-aid being ripped off". 

The important question is: why now? Why with so little evidence to back up such an extreme action?

Whatever else, there were likely mistakes from the side of the board, but man does the personality cult around Altman make me uncomfortable. 

It reminds me of the loyalty successful generals like Caesar and Napoleon commanded from their men. The engineers building GPT-X weren't loyal to The Charter, and they certainly weren't loyal to the board. They were loyal to the projects they were building and to Sam, because he was the one providing them resources to build and pumping the value of their equity-based compensation.

They were not loyal to the board, but it is not clear if they were loyal to The Charter since they were not given any concrete evidence of a conflict between Sam and the Charter.
Feels like an apt comparison given that the way we find out now is what happens when some kind of Senate tries to cut to size the upstart general and the latter basically goes "you and what army?".
Tristan Wegner 10d
From your last link: As the company was doing well recently, with ongoing talks about an investment implying a market cap of $90B, this would mean many employees might have hit their 10x already, the highest payout they would ever get. So there is every incentive to cash out now (or as soon as the 2-year lock allows), and zero financial incentive to care about long-term value. This seems worse at aligning employee interest with the long-term interest of the company even compared to regular (unlimited-growth) equity, where each employee might hope that the valuation could get even higher. Also: it seems the growth cap actually encourages short-term thinking, which works against their long-term mission. Do you also understand these incentives this way?
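The capped-return incentive described above can be sketched numerically (a toy model only: the 10x employee cap and the dollar figures here are illustrative assumptions, not OpenAI's actual terms):

```python
def capped_payout(stake: float, growth_multiple: float, cap: float = 10.0) -> float:
    """Value of a capped-profit stake after the company's valuation grows.

    Returns are clipped at `cap` times the original stake, so growth beyond
    the cap adds nothing for the holder. (Toy model; numbers are illustrative.)
    """
    return stake * min(growth_multiple, cap)

# Below the cap, the holder still benefits from further growth...
print(capped_payout(100_000, 8))    # 800000.0
# ...but once the cap is hit, any additional upside goes to someone else,
# so the rational move is to cash out as soon as the lock-up allows.
print(capped_payout(100_000, 10))   # 1000000.0
print(capped_payout(100_000, 50))   # 1000000.0
```

This is the commenter's point in miniature: past the cap, the payoff curve is flat, so holding on has no financial upside.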

It's not even a personality cult. Until the other day Altman was a despicable doomer and decel, advocating for regulations that would clip humanity's wings. As soon as he was fired and the "what did Ilya see" narrative emerged (I don't even think it was all serious at the beginning), the immediate response from the e/acc crowd was to elevate him to the status of martyr in minutes and recast the Board as some kind of reactionary force for evil that wants humanity to live in misery forever rather than bask in the Glorious AI Future.

Honestly even without the doom stuff I'd be extremely worried about this being the cultural and memetic environment in which AI gets developed anyway. This stuff is pure poison.

It doesn't seem to me like e/acc has contributed a whole lot to this beyond commentary. The rallying of OpenAI employees behind Altman is quite plausibly his general popularity + ability to gain control of a situation. 

At least that seems likely if Paul Graham's assessment of him as a master persuader is to be believed (and why wouldn't it?). 

I mean, the employees could be motivated by a more straightforward sense that the firing is arbitrary and threatens the functioning of OpenAI and thus their immediate livelihood. I'd be curious to understand how much of this is calculated self-interest and how much indeed personal loyalty to Sam Altman, which would make this incident very much a crossing of the Rubicon.

I do find it quite surprising that so many who work at OpenAI are so eager to follow Altman to Microsoft - I guess I assumed the folks at OpenAI valued not working for big tech (that's more(?) likely to disregard safety) more than it appears they actually did.

My guess is they feel that Sam and Greg (and maybe even Ilya) will provide enough of a safety net (compared to a randomized Board overlord), but there's also a large dose of self-interest once the move gains steam and you know many of your coworkers will leave.

RE: the board’s vague language in their initial statement

Smart people who have an objective of accumulating and keeping control—who are skilled at persuasion and manipulation—will often leave little trace of wrongdoing. They're optimizing for alibis and plausible deniability. Being around them and trying to collaborate with them is frustrating. If you're self-aware enough, you can recognize that your contributions are being twisted, that your voice is going unheard, and that critical information is being withheld from you, but it's not easy. And when you try to bring up concerns, they are very good at convincing you that those concerns are actually your fault.

I can see a world where the board was able to recognize that Sam's behaviors did not align with OpenAI's mission, while not having a smoking-gun example to pin him on. Being unskilled politicians with only a single lever to pull (and probably morally opposed to other political tactics), the board did the only thing they could think of, after trying to get Sam to listen to their concerns. Did it play out well? No.

It’s clear that EA has a problem with placing people who are immature at politics in key political positions. I also believe there may be a misalignment in objectives between the politically skilled members of EA and the rest of us—politically skilled members may be withholding political advice/training from others out of fear that they will be outmaneuvered by those they advise. This ends up working against the movement as a whole.

Feels sometimes like all of the good EAs are bad at politics and everybody on our side that's good at politics is not a good EA.

Yeah, I'm getting that vibe. EAs keep going "hell yeah, we got an actual competent mafioso on our side, but they're actually on our side!", and then it turns out the mafioso wasn't on their side, any more than any other mafioso in history had ever been on anyone's side.

Ok, but then why the statement implying severe misconduct rather than a generic "the board has decided that the style of leadership that Mr. Altman provides is not what OpenAI needs at this time"?

I'm surprised that nobody has yet brought up the development that the board offered Dario Amodei the CEO position as part of a merger with Anthropic (and Dario said no!).

(There's no additional important content in the original article by The Information, so I linked the Reuters paywall-free version.)

Crucially, this doesn't tell us in what order the board made this offer to Dario and the other known figures (GitHub CEO Nat Friedman and Scale AI CEO Alex Wang) before getting Emmett Shear, but it's plausible that merging with Anthropic was Plan A all along. Moreover, I strongly suspect that the bad blood between Sam and the Anthropic team was strong enough that Sam had to be ousted in order for a merger to be possible.

So under this hypothesis, the board decided it was important to merge with Anthropic (probably to slow the arms race), booted Sam (using the additional fig leaf of whatever lies he's been caught in), immediately asked Dario and were surprised when he rejected them, did not have an adequate backup plan, and have been scrambling ever since.

P.S. Shear is known to be very much on record worrying that alignment is necessary and not likely to be easy; I'm curious what Friedman and Wang are on record as saying about AI x-risk.

Has this one been confirmed yet? (Or is there more evidence than this reporting that something like this happened?)

Having a "plan A" requires detailed advance-planning. I think it's much more likely that their decision was reactive rather than plan-based. They felt strongly that Altman had to go based on stuff that happened, and so they followed procedures – appoint an interim CEO and do a standard CEO search. Of course, it's plausible – I'd even say likely – that an "Anthropic merger" was on their mind as something that could happen as a result of this further down the line. But I doubt (and hope not) that this thought made a difference to their decision. Reasoning: * If they had a detailed plan that was motivating their actions (as opposed to reacting to a new development and figuring out what to do as things go on), they would probably have put in a bit more time gathering more potentially incriminating evidence or trying to form social alliances.  For instance, even just, in the months or weeks before, visiting OpenAI and saying hi to employees, introducing themselves as the board, etc., would probably have improved staff's perception of how this went down. Similarly, gathering more evidence by, e.g., talking to people close to Altman but sympathetic to safety concerns, asking whether they feel heard in the company, etc, could have unearthed more ammunition. (It's interesting that even the safety-minded researchers at OpenAI basically sided with Altman here, or, at the very least, none of them came to the board's help speaking up against Altman on similar counts. [Though I guess it's hard to speak up "on similar counts" if people don't even really know their primary concerns apart from the vague "not always candid."]) * If the thought of an Anthropic merge did play a large role in their decision-making (in the sense of "making the difference" to whether they act on something across many otherwise-similar counterfactuals), that would constitute a bad kind of scheming/plotting. People who scheme like that are probably less likely than baseline to underestimate power pol

"Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."

Yeah, should have put that in the main, forgot. Added now.

Most likely explanation is the simplest fitting one:

  • The Board had been angry about a lack of communication for some time, but with internal disagreement (Greg, Ilya)
  • Things sped up lately. Ilya thought it might be good to change the CEO to someone who would slow down and look more into safety, since Altman says a lot about safety but speeds up anyway. So he gave a green light on his side (accepting the change)
  • Then the Board made the moves that they made
  • Then the new CEO wanted to try to hire Altman back, so they replaced her
  • Then the petition/letter started rolling, because the prominent people saw those moves as harmful to the company and the goal
  • Ilya also saw that the outcome was bad both for the company and for the goal of slowing down, and he saw that if the letter got more signatures it would be even worse, so he changed his mind and also signed

Note the language that Ilya uses. He didn't say they wronged Altman or that the decision was bad. He said that he changed his mind because the consequences were harming the company.

This seems about right. Sam is a bit of a cowboy and probably doesn't bother involving the board more than he absolutely has to.

One thing I've realized more in the last 24h: 

  • It looks like Sam Altman is using a bunch of "tricks" now trying to fight his way back into more influence over OpenAI. I'm not aware of anything I'd consider unethical (at least if one has good reasons to believe one has been unfairly attacked), but it's still the sort of stuff that wouldn't come naturally to a lot of people and wouldn't feel fair to a lot of people (at least if there's a strong possibility that the other side is acting in good faith too).
  • Many OpenAI employees have large monetary incentives on the line and there's levels of peer pressure that are off the charts, so we really can't read too much into who tweeted how many hearts or signed the letter or whatever. 

Maybe the extent of this was obvious to most others, but for me, while I was aware that this was going on, I feel like I underestimated the extent of it. One thing that put things into a different light for me is this tweet

Which makes me wonder, could things really have gone down a lot differently? Sure, smoking-gun-type evidence would've helped the board immensely. But is it their fault that they don't have it? Not necessarily. If they had (1) t... (read more)

The board could (justifiably based on Sam's incredible mobilization the past days**) believe that they have little to no chance of winning the war of public opinion and focus on doing everything privately since that is where they feel on equal footing.

This doesn't explain fully why they haven't stated reasons in private, but it does seem they provided at least something to Emmett Shear as he said he had a reason from the board that wasn't safety or commercialization (PPS of

** Very few fired employees would even consider pushing back, but to be this successful this quickly is impressive. Not taking a side on whether it is good or evil, just stating the fact of his ability to fight back after things seemed gloomy (betting markets were down below 10%).

Well, it seems the board provided zero evidence in private, too! Quite the saga; glad it is over, and I think Larry Summers is a great independent thinker who could help the board make some smart expansion decisions.
I feel like at this point the only truly rational comment is: what the absolute fuck.
This Washington Post article supports the 'Scheming Sam' hypothesis: anonymous reports, mostly from his time at Y Combinator.

He's back. Again. Maybe.

We have reached an agreement in principle for Sam [Altman] to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.

We are collaborating to figure out the details. Thank you so much for your patience through this.

Anyone know how Larry or Bret feel about x-risk?

The Verge article is better; it shows tweets by Toner and Nadella confirming that it wasn't just someone getting access to the OpenAI twitter/x account (unless of course someone acquired access to all the accounts, which doesn't seem likely).
O O 9d
Uh, try this one.

Fun story.

I met Emmett Shear once at a conference, and have read a bunch of his tweeting.

On Friday I turned to a colleague and asked for Shear's email, so that I could email him suggesting he try to be CEO, as he's built a multi-billion company before and has his head screwed on about x-risk.

My colleague declined, I think they thought it was a waste of time (or didn't think it was worth their social capital).

Man, I wish I had done it, that would have been so cool to have been the one to suggest it to him.

Man, Sutskever's back and forth is so odd. Hard to make obvious sense of, especially if we believe Shear's claim that this was not about disagreements on safety. Any chance that it was Annie Altman's accusations towards Sam that triggered this whole thing? It seems strange since you'd expect it to only happen if public opinion built up to unsustainable levels.

My guess: Sutskever was surprised by the threatened mass exodus. Whatever he originally planned to achieve, he no longer thinks he can succeed. He now thinks that falling on his sword will salvage more of what he cares about than letting the exodus happen.

This would be very consistent with the problem being about safety (Altman at MSFT is worse than Altman at OAI for that), but then Shear is lying (understandably, perhaps, for political reasons). Or I suppose it could be anything that involved the survival of OpenAI, which at this point is threatened anyway.

Maybe Shear was lying. Maybe the board lied to Shear, and he truthfully reported what they told him. Maybe "The board did *not* remove Sam over any specific disagreement on safety" but did remove him over a *general* disagreement which, in Sutskever's view, affects safety. Maybe Sutskever wanted to remove Altman for a completely different reason which also can't be achieved after a mass exodus. Maybe different board members had different motivations for removing Altman.

I agree, it's critical to have a very close reading of "The board did *not* remove Sam over any specific disagreement on safety".

This is the kind of situation where every qualifier in a statement needs to be understood as essential—if the statement were true without the word "specific", then I can't imagine why that word would have been inserted.

To elaborate on that, Shear is presumably saying exactly as much as he is allowed to say in public. This implies that if the removal had nothing to do with safety, then he would say "The board did not remove Sam over anything to do with safety". His insertion of that qualifier implies that he couldn't make a statement that broad, and therefore that safety considerations were involved in the removal.

Michael Thiessen:
According to Bloomberg, "Even CEO Shear has been left in the dark, according to people familiar with the matter. He has told people close to OpenAI that he doesn’t plan to stick around if the board can’t clearly communicate to him in writing its reasoning for Altman’s sudden firing." Evidence that Shear simply wasn't told the exact reason, though the "in writing" part is suspicious. Maybe he was told not in writing and wants them to write it down so they're on the record.
He was probably kinda sleep deprived and rushed, which could explain inessential words being added.
I would normally agree with this, except it does not seem to me like the board is particularly deliberate about their communication so far. If they are conscientious enough about their communication to craft it down to the word, why did they handle the whole affair in the way they seem to have so far? I feel like a group of people who did not see fit to provide context or justifications to either their employees or largest shareholder when changing company leadership and board composition probably also wouldn't weigh each word carefully when explaining the situation to a total outsider. We still benefit from a very close reading, mind you; I just believe there's a lot more wiggle room here than we would normally expect from corporate boards operating with legal advice based on the other information we have.
  1. The quote is from Emmett Shear, not a board member.
  2. The board is also following the "don't say anything literally false" policy by saying practically nothing publicly.
  3. Just as I infer from Shear's qualifier that the firing did have something to do with safety, I infer from the board's public silence that their reason for the firing isn't one that would win back the departing OpenAI members (or would only do so at a cost that's not worth paying). 
  4. This is consistent with it being a safety concern shared by the superalignment team (who by and large didn't sign the statement at first) but not by the rest of OpenAI (who view pushing capabilities forward as a good thing, because like Sam they believe the EV of OpenAI building AGI is better than the EV of unilaterally stopping). That's my current main hypothesis.
Ah, oops! My expectations are reversed for Shear; him I strongly expect to be as exact as humanly possible. With that update, I'm inclined to agree with your hypothesis.
That's the part that confuses me most. An NDA wouldn't be a strong enough reason at this point. As you say, safety concerns might be, but that seems pretty wild unless they literally already have AGI and are fighting over what to do with it. The other possibility is anything that, if said out loud, might involve the police, so that revealing the info would itself be an escalation (and possibly mutually assured destruction, if there's criminal liability on both sides). I got nothing.

The facts very strongly suggest that the board is not a monolithic entity. Its inability to tell a sensible story about the reasons for Sam's firing might be because no such single comprehensible story exists: different board members had different motives that let them agree on the firing initially, but ultimately not on a story they could jointly endorse.

There's... too many things here. Too many unexpected steps, somehow pointing at too specific an outcome. If there's a plot, it is horrendously Machiavellian.

(Hinton's quote, which keeps popping into my head: "These things will have learned from us by reading all the novels that ever were and everything Machiavelli ever wrote, that how to manipulate people, right? And if they're much smarter than us, they'll be very good at manipulating us. You won't realise what's going on. You'll be like a two year old who's being asked, do you want the peas or the cauliflower? And doesn't realise you don't have to have either. And you'll be that easy to manipulate. And so even if they can't directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.")

(And Altman: "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes")

If an AI were to spike in capabilities specifically relating to manipulating individuals and groups of people, this is roughly how I would expect the outcome to look l... (read more)

Seth Herd:
I think we can discount it as a real possibility, while accepting Altman's "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes". I think it might be weakly superhuman at persuasion for things like "buy our products", but that doesn't imply being superhuman at working out complex consequences of political maneuvering. Doing that would firmly imply a generally superhuman intelligence, I think. So I think if this has anything to do with internal AI breakthroughs, it's tangential at most.
I mean, this wouldn't be too hard, though. It could be achieved by the simple trick of appearing smarter to some people and then dumber in subsequent interactions with others, scaring the safety-conscious and then making them look insane for being scared. I don't think that's what's going on (why would even an AGI model they made already be so cleverly deceptive and driven? I would expect OAI not to be stupid enough to build the most straightforward type of maximizer), but it wouldn't be particularly hard to think up or do.
Odd anon:
Time for some predictions. If this is actually from AI developing social manipulation superpowers, I would expect:

1. We never find out any real reasonable-sounding reason for Altman's firing.
2. OpenAI does not revert to how it was before.
3. More instances of people near OpenAI's safety people doing bizarre unexpected things that have stranger outcomes.
4. Possibly one of the following:
   1. Some extreme "scissors statements" pop up which divide AI groups into groups that hate each other to an unreasonable degree.
   2. An OpenAI person who directly interacted with some scary AI suddenly either commits suicide or becomes a vocal flat-earther or similar who is weirdly convincing to many people.
   3. An OpenAI person skyrockets to political power, suddenly finding themselves in possession of narratives and phrases which convince millions to follow them.

(Again, I don't think it's that likely, but I do think it's possible.)
Things might be even weirder than that if this is a narrowly superhuman AI that is specifically superhuman at social manipulation, but still has the same inability to form new gears-level models exhibited by current LLMs (e.g. if they figured out how to do effective self-play on the persuasion task, but didn't actually crack AGI).
While I don't think this is true, it's a fun thought (and can also be pointed at Altman himself, rather than an AGI). Neither is true, but both are fun to think about.

I love how short this post is! Zvi, you should do more posts like this (in addition to your normal massive-post fare).

Adam D'Angelo retweeted a tweet implying that hidden information still exists and will come out in the future:

Have known Adam D’Angelo for many years and although I have not spoken to him in a while, the idea that he went crazy or is being vindictive over some feature overlap or any of the other rumors seems just wrong. It’s best to withhold judgement until more information comes out.

#14: If there have indeed been secret capability gains, so that Altman was not joking about reaching AGI internally (it seems likely that he was joking, though given the stakes, it's probably not the sort of thing to joke about), then the way I read their documents, the board should make that determination:

Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

Once they've made that determination, then Microsoft will not have access to the AGI technology. Given the possible consequences, I doubt that Microsoft would have found such a joke very amusing.

Honestly this does seem... possible. A disagreement on whether GPT-5 counts as AGI would have this effect. The most safety-minded would go "ok, this is AGI, we can't give it to Microsoft". The more business-oriented and less conservative would go "no, this isn't AGI yet, it'll make us a fuckton of money though". There would be conflict. But then, seeing that everyone might switch to Microsoft and simply rebuild the thing from scratch there, Ilya despairs and does a 180, because at least this way he gets to supervise the work somehow.

This conflict has inescapably taken place in the context of US-China competition over AI, as leaders in both countries are well known to pursue AI acceleration for applications like autonomous low-flying nuclear cruise missiles (e.g. in contingencies where military GPS networks fail), economic growth faster than the US/China/rest of the world, and information warfare.

I think I could confidently bet against Chinese involvement, that seems quite reasonable. I can't bet so confidently against US involvement; although I agree that it remains largely unclear, i... (read more)

There had been various clashes between Altman and the board. We don't know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional companies. ChatGPT was a great consumer product, but supercharged AI development counter to OpenAI's stated non-profit mission.

Does anyone have proof of the board's unhappiness about speed, lack of safety concern, and disagreement with founding other companies? All seem plausible, but I've seen basically nothing concrete.

The theory that my mind automatically generates on seeing these events is that Ilya was in cahoots with Sam & Greg, and the pantomime was a plot to oust the external members of the board.

However, I like to think I'm wise enough to give this 5% probability on reflection.

Is there any chance that Altman himself triggered this? Did something that he knew would cause the board to turn on him, with knowledge that Microsoft would save him?

I'm 90% sure that the issue here was an inexperienced board, and a Chief Scientist, that didn't understand the human dimension of leadership.

Most independent board members usually have a lot of management experience and so understand that their power on paper is less than their actual power. They don't have day-to-day factual knowledge about the business of the company and don't have a good grasp of relationships between employees. So, they normally look to management to tell them what to do.

Here, two of the board members lacked the organizational exper... (read more)

What about this?

We can definitely say that the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.

If considered reputable (and not a lie), this would significantly narrow the space of possible reasons.

Mira Mutari.

Typo here and below: Murati


Jan Leike, the other head of the superalignment team, Tweeted that he worked through the weekend on the crisis, and that the board should resign.

No link for this one?

Tristan Wegner:

What's the source of that 505 employees letter? I mean the contents aren't too crazy, but isn't it strange that the only thing we have is a screenshot of the first page?

It was covered by Axios, which also links to it as a separate PDF with all 505 signatories.
They now claim it's up to 650 of 770.
That link is broken for me, did you mean to link to this Lilian Weng tweet?
Initially I saw it from Kara Swisher (~1mm views) then I saw it from a BB employee. I presume it is genuine.