Approximately four GPTs and seven years ago, OpenAI’s founders brought forth on this corporate landscape a new entity, conceived in liberty, and dedicated to the proposition that all men might live equally when AGI is created.

Now we are engaged in a great corporate war, testing whether that entity, or any entity so conceived and so dedicated, can long endure.

What matters is not theory but practice. What happens when the chips are down?

So what happened? What prompted it? What will happen now?

To a large extent, even more than usual, we do not know. We should not pretend that we know more than we do.

Rather than attempt to interpret here or barrage with an endless string of reactions and quotes, I will instead do my best to stick to a compilation of the key facts.

(Note: All times stated here are eastern by default.)

Just the Facts, Ma’am

What do we know for sure, or at least close to sure?

Here is OpenAI’s corporate structure, giving the board of the 501c3 the power to hire and fire the CEO. It is explicitly dedicated to its nonprofit mission, over and above any duties to shareholders of secondary entities. Investors were warned that there was zero obligation to ever turn a profit:

A block diagram of OpenAI's unusual structure, provided by OpenAI.

Here are the most noteworthy things we know happened, as best I can make out.

  1. On Friday afternoon at 3:28pm, the OpenAI board fired Sam Altman, appointing CTO Mira Murati as temporary CEO effective immediately. They did so over a Google Meet that did not include then-chairman Greg Brockman.
  2. Greg Brockman, Altman’s old friend and ally, was removed as chairman of the board, but the board said he would stay on as President. In response, he quit.
  3. The board told almost no one. Microsoft got one minute of warning.
  4. Mira Murati is the only other person we know was told in advance; she was informed on Thursday night.
  5. From the announcement by the board: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
  6. In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”
  7. OpenAI’s board of directors at this point: OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.
  8. Usually a 501c3’s board must have a majority of people not employed by the company. Instead, OpenAI said that a majority of its board did not have a stake in the company, due to Sam Altman having zero equity.
  9. In response to many calling this a ‘board coup’: “You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do. When Sutskever was asked whether “these backroom removals are a good way to govern the most important company in the world?” he answered: “I mean, fair, I agree that there is a not ideal element to it. 100%.”
  10. Other than that, the board said nothing in public. I am willing to outright say that, whatever the original justifications, the removal attempt was insufficiently considered and planned and massively botched. Either they had good reasons that justified these actions and needed to share them, or they didn’t.
  11. There had been various clashes between Altman and the board. We don’t know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional companies. ChatGPT was a great consumer product, but it supercharged AI development counter to OpenAI’s stated non-profit mission.
  12. OpenAI was previously planning an oversubscribed share sale at a valuation of $86 billion that was to close a few weeks later.
  13. Board member Adam D’Angelo said in a Forbes interview in January: “There’s no outcome where this organization is one of the big five technology companies. This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”
  14. Sam Altman on October 16: “4 times in the history of OpenAI––the most recent time was in the last couple of weeks––I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime.” There was speculation that events were driven in whole or in part by secret capabilities gains within OpenAI, possibly from a system called Gobi, perhaps even related to the joking claim ‘AGI has been achieved internally,’ but we have no concrete evidence of that.
  15. Ilya Sutskever co-leads the Superalignment Taskforce, has very short timelines for when we will get AGI, and is very concerned about AI existential risk.
  16. Sam Altman was involved in starting multiple new major tech companies. He was looking to raise tens of billions from Saudis to start a chip company. He was in other discussions for an AI hardware company.
  17. Sam Altman has stated time and again, including to Congress, that he takes existential risk from AI seriously. He was part of the creation of OpenAI’s corporate structure. He signed the CAIS letter. OpenAI spent six months on safety work before releasing GPT-4. He understands the stakes. One can question OpenAI’s track record on safety; many did, including those who left to found Anthropic. But this was not a pure ‘doomer vs. accelerationist’ story.
  18. Sam Altman is very good at power games such as fights for corporate control. Over the years he earned the loyalty of his employees, many of whom moved in lockstep, using strong strategic ambiguity. Hand very well played.
  19. Essentially all of VC, tech, founder, financial Twitter united to condemn the board for firing Altman and for how they did it, as did many employees, calling upon Altman to either return to the company or start a new company and steal all the talent. The prevailing view online was that no matter its corporate structure, it was unacceptable to fire Altman, who had built the company, or to endanger OpenAI’s value by doing so. That it was good and right and necessary for employees, shareholders, partners and others to unite to take back control.
  20. Talk in those circles is that this will completely discredit EA or ‘doomerism’ or any concerns over the safety of AI, forever. Yes, they say this every week, but this time it was several orders of magnitude louder and more credible. The New York Times somehow gets this backwards. Whatever else this is, it’s a disaster.
  21. By contrast, those concerned about existential risk, and some others, pointed out that the unique corporate structure of OpenAI was designed for exactly this situation. They also mostly noted that the board clearly handled decisions and communications terribly, but that there was much unknown, and tried to avoid jumping to conclusions.
  22. Thus we are now answering the question: What is the law? Do we have law? Where does the power ultimately lie? Is it the charismatic leader that ultimately matters? Who you hire and your culture? Can a corporate structure help us, or do commercial interests and profit motives dominate in the end?
  23. Great pressure was put upon the board to reinstate Altman. They were given two 5pm Pacific deadlines, on Saturday and Sunday, to resign. Microsoft’s aid, and that of its CEO Satya Nadella, was enlisted in this. We do not know what forms of leverage Microsoft did or did not bring to that table.
  24. Sam Altman tweets ‘I love the openai team so much.’ Many at OpenAI respond with hearts, including Mira Murati.
  25. Invited by employees including Mira Murati and other top executives, Sam Altman visited the OpenAI offices on Sunday. He tweeted ‘First and last time i ever wear one of these’ with a picture of his visitors pass.
  26. The board does not appear to have been at the building at the time.
  27. Press reported that the board had agreed to resign in principle, but that snags were hit over who the replacement board would be, and over whether or not they would need to issue a statement absolving Altman of wrongdoing, which could be legally perilous for them given their initial statement.
  28. Bloomberg reported at 11:16pm Sunday that temporary CEO Mira Murati aimed to rehire Altman and Brockman, while the board sought an alternative CEO.
  29. The OpenAI board hires former Twitch CEO Emmett Shear to be the new CEO. He issues his initial statement here. I know a bit about him. If the board needs to hire a new CEO from outside who takes existential risk seriously, he seems to me like a truly excellent pick; I cannot think of a clearly better one. The job set for him may or may not be impossible. Shear’s PPS in his note: “Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I’m not crazy enough to take this job without board support for commercializing our awesome models.”
  30. New CEO Emmett Shear has made statements in favor of slowing down AI development, although not a stop. His p(doom) is between 5% and 50%. He has said ‘My AI safety discourse is 100% “you are building an alien god that will literally destroy the world when it reaches the critical threshold but be apparently harmless before that.”’ Here is a thread and video link with more, transcript here or a captioned clip. Here he is tweeting a 2×2 faction chart a few days ago.
  31. Microsoft CEO Satya Nadella posts at 2:53am Monday morning: We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett Shear and OAI’s new leadership team and working with them. And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.
  32. Sam Altman retweets the above with ‘the mission continues.’ Brockman confirms. Other leadership to include Jakub Pachocki, the GPT-4 lead, Szymon Sidor, and Aleksander Madry.
  33. Nadella continued in reply: I’m super excited to have you join as CEO of this new group, Sam, setting a new pace for innovation. We’ve learned a lot over the years about how to give founders and innovators space to build independent identities and cultures within Microsoft, including GitHub, Mojang Studios, and LinkedIn, and I’m looking forward to having you do the same.
  34. Ilya Sutskever posts at 8:15am Monday morning: I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company. Sam retweets with three heart emojis. Jan Leike, the other head of the superalignment team, tweeted that he worked through the weekend on the crisis, and that the board should resign.
  35. Microsoft stock was down 1% after hours on Friday, and was back to roughly its previous value on Monday morning and at the open. All priced in. Neither Google nor the S&P 500 made major moves either.
  36. 505 of 700 employees of OpenAI, including Ilya Sutskever, sign a letter telling the board to resign and reinstate Altman and Brockman, threatening to otherwise move to Microsoft to work in the new subsidiary under Altman, which will have a job for every OpenAI employee. Full text of the letter that was posted: To the Board of Directors at OpenAI: OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position. The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI. When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith. The leadership team suggested that the most stabilizing path forward – the one that would best serve our mission, company, stakeholders, employees and the public – would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.” Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman. The first signatories listed: 1. Mira Murati, 2. Brad Lightcap, 3. Jason Kwon, 4. Wojciech Zaremba, 5. Alec Radford, 6. Anna Makanju, 7. Bob McGrew, 8. Srinivas Narayanan, 9. Che Chang, 10. Lillian Weng, 11. Mark Chen, 12. Ilya Sutskever.
  37. There is talk that OpenAI might completely disintegrate as a result, that ChatGPT might not work a few days from now, and so on.
  38. It is very much not over, and still developing.
  39. There is still a ton we do not know.
  40. This weekend was super stressful for everyone. Most of us, myself included, sincerely wish none of this had happened. Based on what we know, there are no villains in the actual story that matters here. Only people trying their best under highly stressful circumstances with huge stakes and wildly different information and different models of the world and what will lead to good outcomes. In short, to all who were in the arena for this on any side, or trying to process it, rather than spitting bile: ❤.

Later, when we know more, I will have many other things to say, many reactions to quote and react to. For now, everyone please do what you can to stay sane and to help the world get through this.

OpenAI: Facts from a Weekend
167 comments
gwern

The key news today: Altman had attacked Helen Toner (https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html; HN, Zvi; excerpts), which explains everything if you recall board structures and voting.

Altman and the board had been unable to appoint new directors because there was an even balance of power, so during the deadlock/low-grade cold war, the board had attrited down to hardly any people. He thought he had Sutskever on his side, so he moved to expel Helen Toner from the board. He would then be able to appoint new directors of his choice. This would have irrevocably tipped the balance of power towards Altman. But he didn't have Sutskever like he thought he did, and they had, briefly, enough votes to fire Altman before he broke Sutskever (as he did yesterday), and they went for the last-minute hail-mary with no warning to anyone.

As always, "one story is good, until another is told"...

gwern

The WSJ has published additional details about the Toner fight, filling in the other half of the story. The NYT merely mentions the OA execs 'discussing' it, but the WSJ reports much more specifically that the exec discussion of Toner was a Slack channel that Sutskever was in, and that approximately 2 days before the firing and 1 day before Mira was informed* (ie. the exact day Ilya would have flipped if they had then fired Altman about as fast as possible to schedule meetings 48h before & vote), he saw them say that the real problem was EA and that they needed to get rid of EA associations.

https://www.wsj.com/tech/ai/altman-firing-openai-520a3a8c (excerpts)

The specter of effective altruism had loomed over the politics of the board and company in recent months, particularly after the movement’s most famous adherent, Sam Bankman-Fried, the founder of FTX, was found guilty of fraud in a highly public trial.

Some of those fears centered on Toner, who previously worked at Open Philanthropy. In October, she published an academic paper touting the safety practices of OpenAI’s competitor, Anthropic, which didn’t release its own AI tool until ChatGPT’s emergence. “By delaying the rele

... (read more)

The NYer has confirmed that Altman's attempted coup was the cause of the hasty firing (excerpts; HN):

...Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought”, the person familiar with the board’s discussions told me. “Things like that had been happening for years.” (A person familiar with Altman’s perspective said that he acknowledges having been “ham-fisted in the way he tried to get a board member removed”, but that he hadn’t attempted to manipulate the board.)

...His tactical skills w

... (read more)

I left a comment over on EAF which has gone a bit viral, describing the overall picture of the runup to the firing as I see it currently.

The summary is: evaluations of the Board's performance in firing Altman generally ignore that Altman made OpenAI and set up all of the legal structures, staff, and the board itself; the Board could, and should, have assumed good faith of Altman because if he hadn't been sincere, why would he have done all that, proving in extremely costly and unnecessary ways his sincerity? But, as it happened, OA recently became such a success that Altman changed his mind about the desirability of all that and now equally sincerely believes that the mission requires him to be in total control; and this is why he started to undermine the board. The recency is why it was so hard for them to realize that change of heart or develop common knowledge about it or coordinate to remove him given his historical track record - but that historical track record was also why if they were going to act against him at all, it needed to be as fast & final as possible. This led to the situation becoming a powder keg, and when proof of Altman's duplicity in the Toner firing became undeniable to the Board, it exploded.

Latest news: Time sheds considerably more light on the board position, in its discouragingly-named piece "2023 CEO of the Year: Sam Altman" (excerpts; HN). While it sounds & starts like a puff piece (no offense to Ollie - cute coyote photos!), it actually contains a fair bit of leaking I haven't seen anywhere else. Most strikingly:

  • claims that the Board thought it had the OA executives on its side, because the executives had approached it about Altman:

    The board expected pressure from investors and media. But they misjudged the scale of the blowback from within the company, in part because they had reason to believe the executive team would respond differently, according to two people familiar with the board’s thinking, who say the board’s move to oust Altman was informed by senior OpenAI leaders, who had approached them with a variety of concerns about Altman’s behavior and its effect on the company’s culture.

    (The wording here strongly implies it was not Sutskever.) This of course greatly undermines the "incompetent Board" narrative, possibly explains both why the Board thought it could trust Mira Murati & why she didn't inform Altman ahead of time (was she one of tho

... (read more)

If you've noticed OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon, it's because another set of leaks has dropped, and they are again unflattering to Sam Altman & consistent with the previous ones.

Today the Washington Post adds to the pile, "Warning from OpenAI leaders helped trigger Sam Altman’s ouster: The senior employees described Altman as psychologically abusive, creating delays at the artificial-intelligence start-up — complaints that were a major factor in the board’s abrupt decision to fire the CEO" (archive.is; HN; excerpts), which confirms the Time/WSJ reporting about executives approaching the board with concerns about Altman, and adds more details - their concerns did not relate to the Toner dispute, but apparently were about regular employees:

This fall, a small number of senior leaders approached the board of OpenAI with concerns about chief executive Sam Altman. Altman---a revered mentor, prodigious start-up investor and avatar of the AI revolution---had been psychologically abusive, the employees said, creating pockets of chaos and de

... (read more)

An elaboration on the WaPo article in the 2023-12-09 NYT: “Inside OpenAI’s Crisis Over the Future of Artificial Intelligence: Split over the Leadership of Sam Altman, Board Members and Executives Turned on One Another. Their Brawl Exposed the Cracks at the Heart of the AI Movement” (excerpts). Mostly a gossipy narrative from both the Altman & D'Angelo sides, so I'll just copy over my HN comment:

  • another reporting of internal OA complaints about Altman's manipulative/divisive behavior, see previously on HN

  • previously we knew Altman had been dividing-and-conquering the board by lying about others wanting to fire Toner; this says specifically that Altman had lied about McCauley wanting to fire Toner; presumably, this was said to D'Angelo.

  • Concerns over Tigris had been mooted, but this says specifically that the board thought Altman had not been forthcoming about it; still unclear if he had tried to conceal Tigris entirely or if he had failed to mention something more specific like who he was trying to recruit for capital.

  • Sutskever had threatened to quit after Jakub Pachocki's promotion; previous reporting had said he was upset about it, but hadn't hinted at him being so a

... (read more)
gwern

The WSJ dashes our hopes for a quiet Christmas by dropping on Christmas Eve a further extension of all this reporting: "Sam Altman’s Knack for Dodging Bullets—With a Little Help From Bigshot Friends: The OpenAI CEO lost the confidence of top leaders in the three organizations he has directed, yet each time he’s rebounded to greater heights", Seetharam et al 2023-12-24 (Archive.is, HN; annotated excerpts).

This article confirms - among other things - what I suspected about there being an attempt to oust Altman from Loopt for the same reasons as YC/OA, adds some more examples of Altman amnesia & behavior (including what is, since people apparently care, being caught in a clearcut unambiguous public lie), names the law firm in charge of the report (which is happening), and best of all, explains why Sutskever was so upset about the Jakub Pachocki promotion.


  • Loopt coup: Vox had hinted at this in 2014 but it was unclear; however, WSJ specifically says that Loopt was in chaos and Altman kept working on side-projects while mismanaging Loopt (so, nearly identical to the much later, unconnected, YC & OA accusations), leading the 'senior employees' to (twice!) appeal to the board

... (read more)
gwern

An OA update: it's been quiet, but the investigation is over. And Sam Altman won. (EDIT: yep.)

To recap, because I believe I haven't been commenting on this since December (this is my last big comment, skimming my LW profile): WilmerHale was brought in to do the investigation. The tender offer, to everyone's relief, went off. A number of embarrassing new details about Sam Altman have surfaced: in particular, about his enormous chip fab plan with substantial interest from giants like Temasek, and how the OA VC Fund turns out to be owned by Sam Altman (his explanation was it saved some paperwork and he just forgot to ever transfer it to OA). Ilya Sutskever remains in hiding and lawyered up (his silence became particularly striking with the release of Sora). There have been increasing reports the past week or two that the WilmerHale investigation was coming to a close - and I am told that the investigators were not offering confidentiality and the investigation was narrowly scoped to the firing. (There was also some OA drama with the Musk lawfare & the OA response, but aside from offering an object lesson in how not to redact sensitive information, it's both irrelevant & unimpo... (read more)

gwern

Ilya Sutskever and Mira Murati will leave OA or otherwise take on some sort of clearly diminished role by year-end (90%, 75%; cf. Murati's desperate-sounding internal note)

Mira Murati announced today she is resigning from OA. (I have also, incidentally, won a $1k bet with an AI researcher on this prediction.)

Nikita Sokolsky
Do you think this will have any impact on OpenAI's future revenues / ability to deliver frontier-level models?
gwern

See my earlier comments on 23 June 2024 about what 'OA rot' would look like; I do not see any revisions necessary given the past 3 months.

As for Murati finally leaving (perhaps she was delayed by the voice shipping delays), I don't think it matters too much as far as I can tell (not like Sutskever or Brockman leaving); she was competent but not critical. Probably the bigger deal is that her leaving is apparently a big surprise to a lot of OAers (maybe I should've taken more bets?), and so will come as a blow to morale and remind people of last year's events.

EDIT: Barret Zoph & Bob McGrew are now gone too. Altman has released a statement, confirming that Murati only quit today:

...When Mira [Murati] informed me this morning that she was leaving, I was saddened but of course support her decision. For the past year, she has been building out a strong bench of leaders that will continue our progress.

I also want to share that Bob [McGrew] and Barret [Zoph] have decided to depart OpenAI. Mira, Bob, and Barret made these decisions independently of each other and amicably, but the timing of Mira’s decision was such that it made sense to now do this all at once, so that we can work t

... (read more)
RobertM
It does not actually make any sense to me that Mira wanted to prevent leaks, and therefore didn't even tell Sam that she was leaving ahead of time. What would she be afraid of, that Sam would leak the fact that she was planning to leave... for what benefit? Possibilities:

  • She was being squeezed out, or otherwise knew her time was up, and didn't feel inclined to make it a maximally comfortable parting for OpenAI. She was willing to eat the cost of her own equity potentially losing a bunch of value if this derailed the ongoing investment round, as well as the reputational cost of Sam calling out the fact that she, the CTO of the most valuable startup in the world, resigned with no notice for no apparent good reason.
  • Sam is lying or otherwise being substantially misleading about the circumstances of Mira's resignation, i.e. it was not in fact a same-day surprise to him. (And thinks she won't call him out on it?)
  • ???
gwern

Of course it doesn't make sense. It doesn't have to. It just has to be a face-saving excuse for why she pragmatically told him at the last possible minute. (Also, it's not obvious that the equity round hasn't basically closed.)

Matthew Barnett
Looks like you were right, at least if the reporting in this article is correct, and I'm interpreting the claim accurately.
gwern

At least from the intro, it sounds like my predictions were on-point: re-appointed Altman (I waffled about this at 60% because while his narcissism/desire to be vindicated requires him to regain his board seat, because anything less is a blot on his escutcheon, and also the pragmatic desire to lock down the board, both strongly militated for his reinstatement, it also seems so blatant a powergrab in this context that surely he wouldn't dare...? guess he did), released to an Altman outlet (The Information), with 3 weak apparently 'independent' and 'diverse' directors to pad out the board and eventually be replaced by full Altman loyalists - although I bet if one looks closer into these three women (Sue Desmond-Hellmann, Nicole Seligman, & Fidji Simo), one will find at least one has buried Altman ties. (Fidji Simo, Instacart CEO, seems like the most obvious one there: Instacart was YC S12.)

gwern

The official OA press releases are out confirming The Information: https://openai.com/blog/review-completed-altman-brockman-to-continue-to-lead-openai https://openai.com/blog/openai-announces-new-members-to-board-of-directors

“I’m pleased this whole thing is over,” Altman said at a press conference Friday.

He's probably right.


As predicted, the full report will not be released, only the 'summary' focused on exonerating Altman. Also as predicted, 'the mountain has given birth to a mouse' and the report was narrowly scoped to just the firing: they bluster about "reviewing 30,000 documents" (easy enough when you can just grep Slack + text messages + emails...), but then admit that they looked only at "the events concerning the November 17, 2023 removal" and interviewed hardly anyone ("dozens of interviews" barely even covers the immediate dramatis personae, much less any kind of investigation into Altman's chip stuff, Altman's many broken promises, Brockman's complainers etc). Doesn't sound like they have much to show for over 3 months of work by the smartest & highest-paid lawyers, does it... It also seems like they indeed did not promise confidentiality or set up any kind of ... (read more)

jacquesthibs
Heh, here it is: https://x.com/miramurati/status/1839025700009030027 
ESRogs
Nitpick: Larry Summers not Larry Sumners
gwern
(Fixed. This is a surname typo I make an unbelievable number of times because I reflexively overcorrect it to 'Sumners', due to reading a lot more of Scott Sumner than Larry Summers. Ugh - just caught myself doing it again in a Reddit comment...)
ESRogs
Yeah I figured Scott Sumner must have been involved.
Zach Stein-Perlman
Source?
Zach Stein-Perlman
@gwern I've failed to find a source saying that Hydrazine invested in OpenAI. If it did, that would be a big deal; it would make this a lie.
gwern
It was either Hydrazine or YC. In either case, my point remains true: he's chosen to not dispose of his OA stake, whatever vehicle it is held in, even though it would be easy for someone of his financial acumen to do so by a sale or equivalent arrangement, forcing an embarrassing asterisk to his claims to have no direct financial conflict of interest in OA LLC - and one which comes up regularly in bad OA PR (particularly by people who believe it is less than candid to say you have no financial interest in OA when you totally do), and a stake which might be quite large at this point*, and so is particularly striking given his attitude towards much smaller conflicts supposedly risking bad OA PR. (This is in addition to the earlier conflicts of interest in Hydrazine while running YC or the interest of outsiders in investing in Hydrazine, apparently as a stepping stone towards OA.)

* if he invested a 'small' amount via some vehicle before he even went full-time at OA, when OA was valued at some very small amount like $50m or $100m, say, and OA's now valued at anywhere up to $90,000m or >900x more, and further, he strongly believes it's going to be worth far more than that in the near-future... Sure, it may be worth 'just' $500m or 'just' $1000m after dilution or whatever, but to most people that's pretty serious money!
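To make the footnote's arithmetic concrete, here is a minimal sketch with purely hypothetical numbers (the size of the early stake, the vehicle, and the dilution are all unknown; these figures only illustrate the order of magnitude being described):

```python
# Hypothetical illustration of the stake arithmetic in the footnote above.
# None of these figures are known; they are assumptions chosen only to show
# how a small early stake could plausibly reach the hundreds of millions.
early_valuation   = 100e6   # assumed early OpenAI valuation (~$100m)
current_valuation = 90e9    # reported ~$90b valuation in late 2023
initial_stake     = 1e6     # assumed "small" early investment via some vehicle
dilution_factor   = 0.5     # assume later rounds halve the original ownership share

growth = current_valuation / early_valuation            # ~900x, as stated above
stake_value = initial_stake * growth * dilution_factor  # rough value of the stake today
print(f"{growth:.0f}x growth -> stake worth ~${stake_value / 1e6:.0f}m")
# 900x growth -> stake worth ~$450m
```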
Rebecca
Why do you think McCauley is likely to be the board member Labenz spoke to? I had inferred that it was someone not particularly concerned about safety, given that Labenz reported them saying they could easily request access to the model if they’d wanted to (and hadn’t). I took the point of the anecdote to be ‘here was a board member not concerned about safety’.
gwern
Because there is not currently any evidence that Toner was going around talking to a bunch of people, whereas this says McCauley was doing so. If I have to guess "did Labenz talk to the person who was talking to a bunch of people in OA, or did he talk to the person who was as far as I know not talking to a bunch of people in OA?", I am going to guess the former.
Rebecca
They weren’t the only non-employee board members though - that’s what I meant by the part about not being concerned about safety: I took it to rule out both Toner and McCauley. (Although if for some other reason you were only looking at Toner and McCauley, then no, I would say the person going around speaking to OAI employees is *less* likely to be out of the loop on GPT-4’s capabilities.)
gwern
The other ones are unlikely. Shivon Zilis & Reid Hoffman had left by this point; Will Hurd might or might not still be on the board at this point but wouldn't be described nor recommended by Labenz's acquaintance as researching AI safety, as that does not describe Hurd or D'Angelo; Brockman, Altman, and Sutskever are right out (Sutskever researches AI safety but Superalignment was a year away); by process of elimination, over 2023, the only board members he could have been plausibly contacting would be Toner and McCauley, and while Toner weakly made more sense before, now McCauley does. (The description of them not having used the model unfortunately does not distinguish either one - none of the writings connected to them sound like they have all that much hands-on experience and would be eagerly prompt-engineering away at GPT-4-base the moment they got access. And I agree that this is a big mistake, but it is, even more unfortunately, an extremely common one - I remain shocked that Altman had apparently never actually used GPT-3 before he basically bet the company on it. There is a widespread attitude, even among those bullish about the economics, that GPT-3 or GPT-4 are just 'tools', which are mere 'stochastic parrots', and have no puzzling internal dynamics or complexities. I have been criticizing this from the start, but the problem is, 'sampling can show the presence of knowledge and not the absence', so if you don't think there's anything interesting there, your prompts are a mirror which reflect only your low expectations; and the safety tuning makes it worse by hiding most of the agency & anomalies, often in ways that look like good things. For example, the rhyming poetry ought to alarm everyone who sees it, because of what it implies underneath - but it doesn't. This is why descriptions of Sydney or GPT-4-base are helpful: they are warning shots from the shoggoth behind the friendly tool-AI ChatGPT UI mask.)
Rebecca
I think you might be misremembering the podcast? Nathan said that he was assured that the board as a whole was serious about safety, but I don’t remember the specific board member being recommended as someone researching AI safety (or otherwise more pro safety than the rest of the board). I went back through the transcript to check and couldn’t find any reference to what you’ve said. “ And ultimately, in the end, basically everybody said, “What you should do is go talk to somebody on the OpenAI board. Don’t blow it up. You don’t need to go outside of the chain of command, certainly not yet. Just go to the board. And there are serious people on the board, people that have been chosen to be on the board of the governing nonprofit because they really care about this stuff. They’re committed to long-term AI safety, and they will hear you out. And if you have news that they don’t know, they will take it seriously.” So I was like, “OK, can you put me in touch with a board member?” And so they did that, and I went and talked to this one board member. And this was the moment where it went from like, “whoa” to “really whoa.”” (https://80000hours.org/podcast/episodes/nathan-labenz-openai-red-team-safety/?utm_campaign=podcast__nathan-labenz&utm_source=80000+Hours+Podcast&utm_medium=podcast#excerpt-from-the-cognitive-revolution-nathans-narrative-001513)
gwern
I was not referring to the podcast (which I haven't actually read yet because from the intro it seems wildly out of date and from a long time ago) but to Labenz's original Twitter thread turned into a Substack post. I think you misinterpret what he is saying in that transcript because it is loose and extemporaneous: "they're committed" could just as easily refer to "are serious people on the board" who have "been chosen" for that (implying that there are other members of the board not chosen for that); and that is what he says in the written-down post:
Rebecca
This quote doesn’t say anything about the board member/s being people who are researching AI safety though - it’s Nathan’s friends who are in AI safety research not the board members. I agree that based on this quote, it could have very well been just a subset of the board. But I believe Nathan’s wife works for CEA (and he’s previously MCed an EAG), and Tasha is (or was?) on the board of EVF US, and so idk, if it’s Tasha he spoke to and the “multiple people” was just her and Helen, I would have expected a rather different description of events/vibe. E.g. something like ‘I googled who was on the board and realised that two of them were EAs, so I reached out to discuss’. I mean maybe that is closer to what happened and it’s just being obfuscated, either way is confusing to me tbh. Btw, by “out of date” do you mean relative to now, or to when the events took place? From what I can see, the tweet thread, the substack post and the podcast were all published the same day - Nov 22nd 2023. The link I provided is just 80k excerpting the original podcast.
gwern

I suspect there is much more to this thread, and it may tie back to Superalignment & broken promises about compute-quotas.

The Superalignment compute-quota flashpoint is now confirmed. Aside from Jan Leike explicitly calling out compute-quota shortages post-coup (which strictly speaking doesn't confirm shortages pre-coup), Fortune is now reporting that this was a serious & longstanding issue:

...According to a half-dozen sources familiar with the functioning of OpenAI’s Superalignment team, OpenAI never fulfilled its commitment to provide the team with 20% of its computing power.

Instead, according to the sources, the team repeatedly saw its requests for access to graphics processing units, the specialized computer chips needed to train and run AI applications, turned down by OpenAI’s leadership, even though the team’s total compute budget never came close to the promised 20% threshold.

The revelations call into question how serious OpenAI ever was about honoring its public pledge, and whether other public commitments the company makes should be trusted. OpenAI did not respond to requests to comment for this story.

...It was a task so important that the company said in it

... (read more)
Wei Dai
There seems to be very little discussion of this story on Twitter. WP's tweet about it got only 75k views and 59 likes as of now, even though WP has 2M followers. (I guess Twitter will hide your tweets even from your followers if the engagement rate is low enough. Not sure what the cutoff is, but 1 like to 100 views doesn't seem uncommon for tweets, and this one is only 1:1000. BTW what's a good article to read to understand Twitter better?)

There's two things going on. First, Musk-Twitter appears to massively penalize external links. Musk has vowed to fight 'spammers' who post links on Twitter to what are other sites (gasp) - the traitorous scum! Substack is only the most abhorred of these vile parasites, but all shall be brought to justice in due course. There is no need for other sites. You should be posting everything on Twitter as longform tweets (after subscribing), obviously.

You only just joined Twitter so you wouldn't have noticed the change, but even direct followers seem to be less likely to see a tweet if you've put a link in it. So tweeters are increasingly reacting by putting the external link at the end of a thread in a separate quarantine tweet, not bothering with the link at all, or just leaving Twitter under the constant silent treatment that high-quality tweeting gets you these days.* So, many of the people who would be linking or discussing it are either not linking it or not discussing it, and don't show up in the WaPo thread or by a URL search.

Second, OAers/pro-Altman tweeters are practicing the Voldemort strategy: instead of linking the WaPo article at all (note that roon, Eigenrobot etc don't sho... (read more)

Wei Dai
Thanks for the explanations, but I'm not noticing a big "external links" penalty on my own tweets. Found some discussion of this penalty via Google, so it seems real but maybe not that "massive"? Also some of it dates to before Musk purchased Twitter. Can you point me to anything that says he increased the penalty by a lot? Ah Musk actually published Twitter's algorithms, confirming the penalty. Don't see anyone else saying that he increased the penalty though. BTW why do you "protect" your account (preventing non-followers from seeing your tweets)?
gwern
'The algorithm' is an emergent function of the entire ecosystem. I have no way of knowing what sort of downstream effects a tweak here or there would cause or the effects of post-Musk changes. I just know what I see: my tweets appear to have plummeted since Musk took over, particularly when I link to my new essays or documents etc. If you want to do a more rigorous analysis, I export my Twitter analytics every few months (thank goodness Musk hasn't disabled that to try to upsell people to the subscription - maybe he doesn't know it's there?) and could provide you my archives. (BTW, there is a moving window where you can only get the last few months, so if you think you will ever be interested in your Twitter traffic numbers, you need to start exporting them every 2-3 months now, or else the historical data will become inaccessible. I don't know if you can restore access to old ones by signing up as an advertiser.) EDIT: I looked at the last full pre-Musk month and my last month, and I've lost ~75% of views/clicks/interactions, despite trying to use Twitter in the same way. As for the 'published' algorithm, I semi-believe it is genuine (albeit doubtless incomplete) because Musk was embarrassed that it exposed how some parts of the new algorithm are manipulating Twitter to make Musk look more popular (confirming earlier reporting that Musk had ordered such changes after getting angry his views were dropping due to his crummy tweets), but that is also why it hasn't been updated in almost half a year, apparently. God knows what the real thing is like by now...
Rebecca
Could you link to some examples of “ OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon”? I don’t have a twitter account so can’t search myself
lc
I've read your explanations of what happened, and it still seems like the board acted extremely incompetently. Call me an armchair general if you want. Specific choices that I take severe issue with:

1. The decision to fire Sam, instead of just ejecting him from the board

Both kicking Sam off the board, and firing him, and kicking Greg off at the same time all at once with no real explanation is completely unnecessary and is also what ultimately gives Sam the casus belli for organizing the revolt to begin with. It's also unnecessary to defend Helen from Sam's attacks. Consider what happens if Sam had just lost his board seat. First, his cost-benefit analysis looks different: Sam still has most of what he had before to lose, namely his actual position at OpenAI, and so probably no matter how mad he is he doesn't hold the entire organization hostage. Second, he is way, way more limited in what he can justifiably publicly do in response. Taking the nuclear actions he did - quitting in protest and moving to Microsoft - in response to losing control over a board he shouldn't have control over in the first place would look disloyal and vindictive. And if/when Sam tries to use his position as CEO to sabotage the company or subvert the board further (this time lacking his own seat), you'll have more ammunition to fire him later if you really need to.

If I had been on the board, my first action after getting the five together is to call Greg and Mira into an office and explain what was going on. Then after a long conversation about our motivations (whether or not they'd agreed with our decision), I immediately call Sam in/over the internet and deliver the news that he is no longer a board member, and that the vote had already been passed. I then overtly and clearly explain the reasoning behind why he's losing the board seat ("we felt you were trying to compromise the integrity of the board with your attacks on Helen and playing of board members against one another"),

Thanks, this makes more sense than anything else I've seen, but one thing I'm still confused about:

If the factions were Altman-Brockman-Sutskever vs. Toner-McCauley-D'Angelo, then even assuming Sutskever was an Altman loyalist, any vote to remove Toner would have been tied 3-3. I can't find anything about tied votes in the bylaws - do they fail? If so, Toner should be safe. And in fact, Toner knew she (secretly) had Sutskever on her side, and it would have been 4-2. If Altman manufactured some scandal, the board could have just voted to ignore it.

So I still don't understand "why so abruptly?" or why they felt like they had to take such a drastic move when they held all the cards (and were pretty stable even if Ilya flipped).

Other loose ends:

  • Toner got on the board because of OpenPhil's donation. But how did McCauley get on the board?
  • Is D'Angelo a safetyist?
  • Why wouldn't they tell anyone, including Emmett Shear, the full story?

I can't find anything about tied votes in the bylaws - do they fail?

I can't either, so my assumption is that the board was frozen ever since Hoffman/Hurd left for that reason.

And there wouldn't've been a vote at all. I've explained it before but - while we wait for phase 3 of the OA war to go hot - let me take another crack at it, since people seem to keep getting hung up on this and seem to imagine that it's a perfectly normal state of a board to be in a deathmatch between two opposing factions indefinitely, and so are confused about why any of this happened.

In phase 1, a vote would be pointless, and neither side could nor wanted to force it to a vote. After all, such a vote (regardless of the result) is equivalent to admitting that you have gone from simply "some strategic disagreements among colleagues all sharing the same ultimate goals and negotiating in good faith about important complex matters on which reasonable people of goodwill often differ" to "cutthroat corporate warfare where it's-them-or-us everything-is-a-lie-or-fog-of-war fight-to-the-death there-can-only-be-one". You only do such a vote in the latter situation; in the former, you just keep negotiating until you reach a ... (read more)

Daniel
A 3-3 tie between the CEO-founder of the company, the president-founder of the company, and the chief scientist of the company vs. three people with completely separate day jobs who never interact with rank-and-file employees is not a stable equilibrium. There are ways to leverage this sort of soft power into breaking the formal deadlock, for example, as we saw last week.
faul_sname
I note that the articles I have seen have said things like (emphasis mine). If Shear had been unable to get any information about the board's reasoning, I very much doubt that they would have included the word "written".
Mitchell_Porter
I have envisaged a scenario in which the US intelligence community has an interagency working group on AI, and Toner and McCauley were its de facto representatives on the OpenAI board, Toner for CIA, McCauley for NSA. Maybe someone who has studied the history of the board can tell me whether that makes sense, in terms of its shifting factions.

Why would Toner be related to the CIA, and how is McCauley NSA?

If OpenAI is running out of money, and is too dependent on Microsoft, defense/intelligence/government is not the worst place for them to look for money. There are even possible futures where they are partially nationalised in a crisis. Or perhaps they will help with regulatory assessment. This possibility certainly makes the Larry Summers appointment take on a different light, with his ties not only to Microsoft but also to the Government.

David Hornbein
Toner's employer, the Center for Security and Emerging Technology (CSET), was founded by Jason Matheny. Matheny was previously the Director of the Intelligence Advanced Research Projects Activity (IARPA), and is currently CEO of the RAND Corporation. CSET is currently led by Dewey Murdick, who previously worked at the Department of Homeland Security and at IARPA. Much of CSET's initial staff was former (or "former") U.S. intelligence analysts, although IIRC they were from military intelligence rather than the CIA specifically. Today many of CSET's researchers list prior experience with U.S. civilian intelligence, military intelligence, or defense intelligence contractors. Given the overlap in staff and mission, U.S. intelligence clearly and explicitly has a lot of influence at CSET, and it's reasonable to suspect a stronger connection than that. I don't see it for McCauley though.
Mitchell_Porter
Toner's university has a long history of association with the CIA. Just google "georgetown cia" and you'll see more than I can summarize.  As for McCauley, well, I did call this a "scenario"... The movie maker Oliver Stone rivals Chomsky as the voice of an elite political counterculture who are deadly serious in their opposition to what the American deep state gets up to, and whose ranks include former insiders who became leakers, whistleblowers, and ideological opponents of the system. When Stone, already known as a Wikileaks supporter, decided to turn his attention to NSA's celebrity defector Edward Snowden, he ended up casting McCauley's actor boyfriend as the star.  My hunch, my scenario, is that people associated with the agency, or formerly associated with the agency, put him forward for the role, with part of the reason being that he was already dating one of their own. What we know about her CV - robotics, geographic information systems, speaks Arabic, mentored by Alan Kay - obviously doesn't prove anything, but it's enough to make this scenario work, as a possibility. 
lc

We shall see. I'm just ignoring the mainstream media spins at this point.

trevor

For those of us who don't know yet, criticizing the accuracy of mainstream Western news outlets is NOT a strong Bayesian update against someone's epistemics, especially on a site like LessWrong (it doesn't matter how many idiots you might remember ranting about "mainstream media" on other sites; the numbers are completely different here).

There is a well-known dynamic called Gell-Mann Amnesia, where people strongly lose trust in mainstream Western news outlets on a topic they are an expert on, but routinely forget about this loss of trust when they read coverage on a topic that they can't evaluate accuracy on. Western news outlets Goodhart readers by depicting themselves as reliable instead of prioritizing reliability.

If you read major Western news outlets, or are new to major news outlets due to people linking to them on LessWrong recently, some basic epistemic prep can be found in Scott Alexander's The Media Very Rarely Lies and, if it's important, the follow-up posts.

Yeah, that makes sense and does explain most things, except that if I was Helen, I don't currently see why I wouldn't have just explained that part of the story early on?* Even so, I still think this sounds very plausible as part of the story.

*Maybe I'm wrong about how people would react to that sort of justification. Personally, I think the CEO messing with the board constitution to gain de facto ultimate power is clearly very bad and any good board needs to prevent that. I also believe that it's not a reason to remove a board member if they publish a piece of research that's critical of or indirectly harmful for your company. (Caveat that we're only reading a secondhand account of this, and maybe what actually happened would make Altman's reaction seem more understandable.) 

Lukas_Gloor
Hm, to add a bit more nuance, I think it's okay at a normal startup for a board to be comprised of people who are likely to almost always side with the CEO, as long as they are independent thinkers who could vote against the CEO if the CEO goes off the rails. So, it's understandable (or even good/necessary) for CEOs to care a lot about having "aligned" people on the board, as long as they don't just add people who never think for themselves. It gets more complex in OpenAI's situation where there's more potential for tensions between CEO and the board. I mean, there shouldn't necessarily be any tensions, but Altman probably had less of a say over who the original board members were than a normal CEO at a normal startup, and some degree of "norms-compliant maneuvering" to retain board control feels understandable because any good CEO cares a great deal about how to run things. So, it actually gets a bit murky and has to be judged case-by-case. (E.g., I'm sure Altman feels like what happened vindicated him wanting to push Helen off the board.) 
Ben Pace
I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo. Pretty interesting that Sutskever/Tasha/D'Angelo would be willing to fire Altman just to prevent Helen from going. They instead could have negotiated someone to replace her. Wouldn't you just remove Altman from the Board, or maybe remove Brockman? Why would they be willing to decapitate the company in order to retain Helen?
[-]gwern9225

They instead could have negotiated someone to replace her.

Why do they have to negotiate? They didn't want her gone, he did. Why didn't Altman negotiate a replacement for her, if he was so very upset about the damage she had supposedly done to OA...?

"I understand we've struggled to agree on any replacement directors since I kicked Hoffman out, and you'd worry even more about safety remaining a priority if she resigns. I totally get it. So that's not an obstacle, I'll agree to let Toner nominate her own replacement - just so long as she leaves soon."

When you understand why Altman would not negotiate that, you understand why the board could not negotiate that.

I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo.

Recusal or not, Altman didn't want to bring it to something as overt as a vote expelling her. Power wants to conceal itself and deny the coup. The point here of the CSET paper pretext is to gain leverage and break the tie any way possible so it doesn't look bad or traceable to Altman: that's why this leaking is bad for Altman, it shows him at his least fuzzy and PR-friend... (read more)

I... still don't understand why the board didn't say anything? I really feel like a lot of things would have flipped if they had just talked openly to anyone, or taken advice from anyone. Like, I don't think it would have made them global heroes, and a lot of people would have been angry with them, but every time any plausible story about what happened came out, there was IMO a visible shift in public opinion, including on HN, and the board confirming any story or giving any more detail would have been huge. Instead they apparently "cited legal reasons" for not talking, which seems crazy to me.

7Adam Scholl
I can imagine it being the case that their ability to reveal this information is their main source of leverage (over e.g. who replaces them on the board).
7Linch
My favorite low-probability theory is that he had blackmail material on one of the board members[1], who initially decided after much deliberation to go forward despite the blackmail, and then, when they realized they got outplayed by Sam not using the blackmail material, backpedaled and refused to dox themselves. And the other 2-3 didn't know what to do afterwards, because their entire strategy was predicated on optics management around said blackmail + blackmail material. 1. ^ Like something actually really bad.
[-]Zvi205

It would be sheer insanity to have a rule that you can't vote on your own removal, I would think, or else a tied board will definitely shrink right away.

4mako yass
Wait, simple majority is an insane place to put the threshold for removal in the first place. Majoritarian shrinking is still basically inevitable if the threshold for removal is 50%; it should be higher than that, maybe 62%. And generally, if 50% of a group thinks A and 50% thinks ¬A, that tells you that the group is not ready to make a decision about A.
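(To see the "shrinking board" dynamic the last two comments gesture at, here is a toy sketch. The recusal rule and the specific thresholds are assumptions chosen for illustration, not a claim about OpenAI's actual bylaws.)

```python
# Toy illustration of "majoritarian shrinking" on a board.
# Assumptions (for illustration only): a removal passes if the votes in favor
# meet the threshold fraction of eligible voters, and the targeted member
# is recused (cannot vote on their own removal).

def can_remove(faction_a: int, faction_b: int, threshold: float) -> bool:
    """Can faction A vote out one member of faction B under these rules?"""
    eligible = faction_a + faction_b - 1   # the target is recused
    return faction_a >= threshold * eligible

def shrink(faction_a: int, faction_b: int, threshold: float) -> tuple[int, int]:
    """Let faction A keep removing opponents for as long as the votes are there."""
    while faction_b > 0 and can_remove(faction_a, faction_b, threshold):
        faction_b -= 1
    return faction_a, faction_b

# A tied 3-3 board with a 50% threshold: the first faction to move can purge the other.
print(shrink(3, 3, 0.5))    # -> (3, 0)

# With a higher threshold (e.g. ~62%), the same tied board cannot shrink itself.
print(shrink(3, 3, 0.62))   # -> (3, 3)
```

Under the 50% rule, a bare majority of the non-recused voters can remove the minority one member at a time; raising the threshold means a merely tied board cannot do this.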
7Chess3D
It is not clear, in a non-profit board structure, that Helen cannot vote on her own removal. The vote to remove Sam may have been some trickery around holding a quorum meeting without notifying Sam or Greg.
4Linch
I think it was most likely unanimous among the remaining 4, otherwise one of the dissenters would've spoken out by now.
5Tristan Wegner
Here is the paper: https://cset.georgetown.edu/publication/decoding-intentions/ Some more recent (Oct/Nov 2023) publications from her are here: https://cset.georgetown.edu/staff/helen-toner/
4faul_sname
Manifold says 23% (*edit: the link doesn't go directly to that option; it shows up if you search "Helen") on it being "a significant factor for why Sam Altman was fired". It would make sense as a motivation, though it's a bit odd that the board would say that Sam was "not consistently candid" and not "trying to undermine the governance structure of the organization" in that case.

When I read this part of the letter, the authors seem to be throwing it in the face of the board like it is a damning accusation, but actually it seems very prudent and speaks well for the board.

You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?

This reminds me a lot of a blockchain project where I served as an ethicist, which was initially a "project" that was interested in advancing a "movement" and ended up with a bunch of people whose only real goal was to cash big paychecks for a long time (at which point I handled my residual duties to the best of my ability and resigned, with lots of people expressing extreme confusion and asking why I was acting "foolishly" or "incompetently" (except for a tiny number who got angry at me for not causing a BIGGER ex... (read more)

[-]dr_s3029

Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?

The problem, I suspect, is that people just can't get out of the typical "FOR THE SHAREHOLDERS" mindset. A company that is literally willing to commit suicide rather than be hijacked for purposes antithetical to its mission, like a cell dying by apoptosis rather than going cancerous, can be a very good thing; if only there were more of this. You can't beat Moloch if you're not willing to precommit to this sort of action. And let's face it, no one involved here is facing homelessness and soup kitchens even if OpenAI crashes tomorrow. They'll be a little worse off for a while, their careers will take a hit, and then they'll pick themselves up. If this was about the safety of humanity, it would be a no-brainer that you should be ready to sacrifice that much.

5Michael Thiessen
Sam's latest tweet suggests he can't get out of the "FOR THE SHAREHOLDERS" mindset: "satya and my top priority remains to ensure openai continues to thrive we are committed to fully providing continuity of operations to our partners and customers". This does sound antithetical to the charter and might be grounds to replace Sam as CEO.
[-]dr_s2918

I feel like, not unlike the situation with SBF and FTX, the delusion that OpenAI could possibly avoid this trap maps onto the same cognitive weak spot among EA/rationalists of "just let me slip on the Ring of Power this once bro, I swear it's just for a little while bro, I'll take it off before Moloch turns me into his Nazgul, trust me bro, just this once".

This is honestly entirely unsurprising. Rivers flow downhill, and companies that are part of a capitalist economy, producing stuff with tremendous potential economic value, converge on making a profit.

[-]Sune113

The corporate structure of OpenAI was set up as an answer to concerns (about AGI and control over AGIs) which were raised by rationalists. But I don’t think rationalists believed that this structure was a sufficient solution to the problem, any more than non-rationalists believed it. The rationalists that I have been speaking to were generally sceptical about OpenAI.

6dr_s
Oh, I mean, sure, scepticism about OpenAI was already widespread, no question. But in general it seems to me like there have been too many attempts to be too clever by half from people at least adjacent in ways of thinking to rationalism/EA (like Elon) that go "I want to avoid X-risk but also develop aligned friendly AGI for myself", and the result is almost invariably that it just advances capabilities more than safety. I just think sometimes there's a tendency to underestimate the pull of incentives and how you often can't just have your cake and eat it. I remain convinced that if one wants to avoid X-risk from AGI, the safest road is probably to just strongly advocate for not building AGI, and to put it in the same bin as "human cloning" as a fundamentally unethical technology. It's not a great shot, but it's probably the best one at stopping it. Being wishy-washy doesn't pay off.
2Seth Herd
I think you're in the majority in this opinion around here. I am noticing I'm confused about the lack of enthusiasm for developing alignment methods for the types of AGI that are being developed. Trying to get people to stop building it would be ideal, but I don't see a path to it. The actual difficulty of alignment seems mostly unknown, so potentially vastly more tractable. Yet such efforts make up a tiny part of x-risk discussion. This isn't an argument for building AGI, but for aligning the specific AGI others build.
3dr_s
Personally I am fascinated by the problems of interpretability, and I would consider "no more GPTs for you guys until you figure out at least the main functioning principles of GPT-3" a healthy exercise in actual ML science to pursue, but I also have to acknowledge that such an understanding would make distillation far more powerful and thus also lead to a corresponding advance in capabilities. I am honestly stumped at what "I want to do something" looks like that doesn't somehow end up backfiring. It may be that the problem is just thinking this way in the first place, and this really is just a (shudder) political problem, and tech/science can only make it worse.
4Seth Herd
That all makes sense. Except that this is exactly what I'm puzzled by: a focus on solutions that probably won't work ("no more GPTs for you guys" is approximately impossible), instead of solutions that still might - working on alignment, and trading off advances in alignment for advances in AGI. It's like the field has largely given up on alignment, and we're just trying to survive a few more months by making sure to not contribute to AGI at all. But that makes no sense. MIRI gave up on aligning a certain type of AGI for good reasons. But nobody has seriously analyzed prospects for aligning the types of AGI we're likely to get: language model agents or loosely brainlike collections of deep nets. When I and a few others write about plans for aligning those types of AGI, we're largely ignored. The only substantive comments are "well there are still ways those plans could fail", but not arguments that they're actually likely to fail. Meanwhile, everyone is saying we have no viable plans for alignment, and acting like that means it's impossible. I'm just baffled by what's going on in the collective unspoken beliefs of this field.
9dr_s
I'll be real, I don't know what everyone else thinks, but personally I can say I wouldn't feel comfortable contributing to anything AGI-related at this point, because I have very low trust that even aligned AGI would result in a net good for humanity, with this kind of governance. I can imagine maybe amidst all the bargains with the Devil there is one that will genuinely pay off and is the lesser evil, but I can't tell which one. I think the wise thing to do would be just not to build AGI at all, but that's not a realistically open path. So yeah, my current position is that literally any action I could take advances the kind of future I would want by an amount that is at best below the error margin of my guesses, and at worst negative. It's not a super nice spot to be in, but it's where I'm at and I can't really lie to myself about it.
2[anonymous]
In the cancer case, the human body has every cell begin aligned with the body. Anthropically this has to function until breeding age, plus enough offspring to beat losses. And yes, if faulty cells self-destruct instead of continuing, this is good; there are cancer treatments that try to gene-edit in clean copies of specific genes (p53, as I recall) that mediate this (works in rats...). However, the corporate world / international competition world has many more actors, and they are adversarial. OAI self-destructing leaves the world's best AI researchers unemployed and removes them from competing in the next round of model improvements - whoever makes a gpt-5 at a competitor will have the best model outright. Coordination is hard. Consider the consequences if an entire town decided to stop consuming fossil fuels. They pay the extra costs and rebuild the town to be less car-dependent. However, the consequence is that this lowers the market price of fossil fuels. So others use more. (Demand elasticity makes the effect still slightly positive.)
8dr_s
I mean, yes, a company self-destructing doesn't stop much if their knowledge isn't also actively deleted - and even then, it's just a setback of a few months. But also, by going "oh well we need to work inside the system to fix it somehow" at some point all you get is just another company racing with all others (and in this case, effectively being a pace setter). However you put it, OpenAI is more responsible than any other company for how close we may be to AGI right now, and despite their stated mission, I suspect they did not advance safety nearly as much as capability. So in the end, from the X-risk viewpoint, they mostly made things worse.

I agree with all of this in principle, but I am hung up on the fact that it is so opaque. Up until now the board has determinedly remained opaque.

If corporate seppuku is on the table, why not be transparent? How does being opaque serve the mission?

I wrote a LOT of words in response to this, talking about personal professional experiences that are not something I coherently understand myself as having a duty (or timeless permission?) to share, so I have reduced my response to something shorter and more general. (Applying my own logic to my own words, in realtime!)

There are many cases (arguably stupid cases or counter-productive cases, but cases) that come up more and more when deals and laws and contracts become highly entangling.

It's illegal to "simply" ask people for money in exchange for giving them a transferable right to future dividends on a project for how to make money, sealed with a handshake. The SEC commands silence sometimes and will put you in a cage if you don't.

You get elected to local office and suddenly the Brown Act (which I'd repeal as part of my reboot of the Californian Constitution had I the power) forbids you from talking with your co-workers (other elected officials) about work (the city government) at a party. 

A Confessor is forbidden from certain kinds of information leak.

Fixing <all of this (gesturing at nearly all of human civilization)> isn't something that we have the time or power to do before w... (read more)

5xpym
This seems to presuppose that there is a strong causal effect from OpenAI's destruction to avoiding creation of an omnicidal AGI, which doesn't seem likely? The real question is whether OpenAI was, on the margin, a worse front-runner than its closest competitors, which is plausible, but then the board should have made that case loudly and clearly, because, entirely predictably, their silence has just made the situation worse.

Whatever else is true, there were likely mistakes on the board's side, but man does the personality cult around Altman make me uncomfortable.

It reminds me of the loyalty successful generals like Caesar and Napoleon commanded from their men. The engineers building GPT-X weren't loyal to The Charter, and they certainly weren't loyal to the board. They were loyal to the projects they were building and to Sam, because he was the one providing them resources to build and pumping the value of their equity-based compensation.

9Sune
They were not loyal to the board, but it is not clear if they were loyal to The Charter since they were not given any concrete evidence of a conflict between Sam and the Charter.
4dr_s
Feels like an apt comparison, given that what we're finding out now is what happens when some kind of Senate tries to cut the upstart general down to size and the latter basically goes "you and what army?".
1Tristan Wegner
From your last link: As the company was doing well recently, with ongoing talks about an investment implying a market cap of $90B, this would mean many employees might have hit their 10x already - the highest payout they would ever get. So: all incentive to cash out now (or as soon as the 2-year lock will allow), and zero financial incentive to care about long-term value. This seems worse at aligning employee interest with the long-term interest of the company even compared to regular (unlimited-growth) equity, where each employee might hope that the valuation could get even higher. Also: So it seems the growth cap actually encourages short-term thinking, which seems against their long-term mission. Do you also understand these incentives this way?
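To make the capped-return arithmetic above concrete, here is a minimal sketch. The stake size, the valuation at grant time, and the 10x cap are assumed numbers chosen only to show the shape of the incentive; they are not OpenAI's actual terms.

```python
# Illustrative sketch of the capped-return argument above.
# All numbers are assumptions for the example, not OpenAI's actual terms.

def payout(stake_fraction: float, grant_valuation: float,
           current_valuation: float, cap_multiple: float) -> float:
    """Value of a capped equity-like stake at a given company valuation.

    The stake's value grows with the valuation but is capped at
    cap_multiple times its value at grant time.
    """
    uncapped = stake_fraction * current_valuation
    cap = cap_multiple * stake_fraction * grant_valuation
    return min(uncapped, cap)

# Hypothetical employee: 0.01% stake granted at an implied $9B valuation, 10x cap.
stake, grant_val, cap_mult = 0.0001, 9e9, 10

for valuation in (9e9, 45e9, 90e9, 180e9):
    value = payout(stake, grant_val, valuation, cap_mult)
    print(f"valuation ${valuation/1e9:>5.0f}B -> payout ${value/1e6:.1f}M")
```

In this toy example the payout rises to $9.0M at a $90B valuation and stays flat afterwards: once the cap binds, further growth adds nothing, which is the commenter's point about the incentive to cash out.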
[-]dr_s193

It's not even a personality cult. Until the other day Altman was a despicable doomer and decel, advocating for regulations that would clip humanity's wings. As soon as he was fired and the "what did Ilya see" narrative emerged (I don't even think it was all serious at the beginning), the immediate response from the e/acc crowd was to elevate him to the status of martyr in minutes and recast the Board as some kind of reactionary force for evil that wants humanity to live in misery forever rather than bask in the Glorious AI Future.

Honestly even without the doom stuff I'd be extremely worried about this being the cultural and memetic environment in which AI gets developed anyway. This stuff is pure poison.

It doesn't seem to me like e/acc has contributed a whole lot to this beyond commentary. The rallying of OpenAI employees behind Altman is quite plausibly due to his general popularity + ability to gain control of a situation.

At least that seems likely if Paul Graham's assessment of him as a master persuader is to be believed (and why wouldn't it?). 

5dr_s
I mean, the employees could be motivated by a more straightforward sense that the firing is arbitrary and threatens the functioning of OpenAI and thus their immediate livelihood. I'd be curious to understand how much of this is calculated self-interest and how much indeed personal loyalty to Sam Altman, which would make this incident very much a crossing of the Rubicon.

I do find it quite surprising that so many who work at OpenAI are so eager to follow Altman to Microsoft - I guess I assumed the folks at OpenAI valued not working for big tech (that's more(?) likely to disregard safety) more than it appears they actually did.

2Chess3D
My guess is they feel that Sam and Greg (and maybe even Ilya) will provide enough of a safety net (compared to a randomized Board overlord), but there's also a large dose of self-interest once the move gains steam and you know many of your coworkers will leave.

The most likely explanation I can think of, for what look like about-faces by Ilya and Jan this morning, is realizing that the worst plausible outcome is exactly what we're seeing: Sam running a new OpenAI at Microsoft, free of that pesky charter. Any amount of backpedaling, and even resigning in favor of a less safety-conscious board, is preferable to that.

They came at the king and missed.

Yeah, but if this is the case, I'd have liked to see a bit more balance than just retweeting the tribal-affiliation slogan ("OpenAI is nothing without its people") and saying that the board should resign (or, in Ilya's case, implying that he regrets and denounces everything he initially stood for together with the board). Like, I think it's a defensible take to think that the board should resign after how things went down, but the board was probably pointing to some real concerns that won't get addressed at all if the pendulum now swings way too much in the opposite direction, so I would have at least hoped for something like "the board should resign, but here are some things that I think they had a point about, which I'd like to see not get swept under the carpet after the counter-revolution."

It's too late for a conditional surrender now that Microsoft is a credible threat to get 100% of OpenAI's capabilities team; Ilya and Jan are communicating unconditional surrender because the alternative is even worse.

I'm not sure this is an unconditional surrender. They're not talking about changing the charter, just appointing a new board. If the new board isn't much less safety conscious, then a good bit of the organization's original purpose and safeguards are preserved. So the terms of surrender would be negotiated in picking the new board.

[-]Linch5343

AFAICT the only formal power the board has is in firing the CEO, so if we get a situation where whenever the board wants to fire Sam, Sam comes back and fires the board instead, well, it's not exactly an inspiring story for OpenAI's governance structure.

2TLK
This is a very good point. It is strange, though, that the Board was able to fire Sam without the Chair agreeing to it. It seems like something as big as firing the CEO should have required at least a conversation with the Chair, if not the affirmative vote of the Chair. The way this was handled was a big mistake. There need to be new rules in place to prevent big mistakes like this.

If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place. We were already in the "worst case scenario". Better to be honest about it. Then at least, the rest of the organisation doesn't get to keep pointing to the charter and the board as approving their actions when they don't.

The charter it is the board's duty to enforce doesn't say anything about how the rest of the document doesn't count if investors and employees make dire enough threats, I'm pretty sure.

If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place.

If you push for fire sprinklers to be installed, then yell "FIRE" and turn on the fire sprinklers, causing a bunch of water damage, and then refuse to tell anyone where you thought the fire was and why you thought that, I don't think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.

Keep in mind that the announcement was not something like

After careful consideration and strategic review, the Board of Directors has decided to initiate a leadership transition. Sam Altman will be stepping down from his/her role, effective November 17, 2023. This decision is a result of mutual agreement and understanding that the company's long-term strategy and core values require a different kind of leadership moving forward.

Instead, the board announced

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his

... (read more)
8aphyer
The situation is actually even less surprising than this, because the thing people actually initially contemplated doing in response to the board's actions was not even "taking away your ability to trigger the fire sprinklers" but "going off and living in a new building somewhere else that you can't flood for lulz". As I understand the situation, OpenAI's board had and retained the legal right to stay in charge of OpenAI as all its employees left to go to Microsoft. If they decide they would rather negotiate from their starting point of "being in charge of an empty building" and make concessions, this doesn't mean that the charter didn't mean anything! It means that the charter gave them a bunch of power which they wasted.

If they had thought this would be the outcome of firing Sam, they would not have done so.

The risk they took was calculated, but man, are they bad at politics.