Approximately four GPTs and seven years ago, OpenAI’s founders brought forth on this corporate landscape a new entity, conceived in liberty, and dedicated to the proposition that all men might live equally when AGI is created.

Now we are engaged in a great corporate war, testing whether that entity, or any entity so conceived and so dedicated, can long endure.

What matters is not theory but practice. What happens when the chips are down?

So what happened? What prompted it? What will happen now?

To a large extent, even more than usual, we do not know. We should not pretend that we know more than we do.

Rather than attempt to interpret here or barrage with an endless string of reactions and quotes, I will instead do my best to stick to a compilation of the key facts.

(Note: All times stated here are eastern by default.)

Just the Facts, Ma’am

What do we know for sure, or at least close to sure?

Here is OpenAI’s corporate structure, giving the board of the 501c3 the power to hire and fire the CEO. It is explicitly dedicated to its nonprofit mission, over and above any duties to shareholders of secondary entities. Investors were warned that there was zero obligation to ever turn a profit:

A block diagram of OpenAI's unusual structure, provided by OpenAI.

Here are the most noteworthy things we know happened, as best I can make out.

  1. On Friday afternoon at 3:28pm, the OpenAI board fired Sam Altman, appointing CTO Mira Murati as temporary CEO effective immediately. They did so over a Google Meet that did not include then-chairman Greg Brockman.
  2. Greg Brockman, Altman’s old friend and ally, was removed as chairman of the board but the board said he would stay on as President. In response, he quit.
  3. The board told almost no one. Microsoft got one minute of warning.
  4. Mira Murati is the only other person we know was told, which happened on Thursday night.
  5. From the announcement by the board: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
  6. In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”
  7. OpenAI’s board of directors at this point: OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.
  8. Usually a 501c3’s board must have a majority of members not employed by the company. OpenAI instead said that a majority of its board did not have a financial stake in the company, due to Sam Altman having zero equity.
  9. In response to many calling this a ‘board coup’: “You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do. When Sutskever was asked whether “these backroom removals are a good way to govern the most important company in the world?” he answered: “I mean, fair, I agree that there is a not ideal element to it. 100%.”
  10. Other than that, the board said nothing in public. I am willing to outright say that, whatever the original justifications, the removal attempt was insufficiently considered and planned and massively botched. Either they had good reasons that justified these actions and needed to share them, or they didn’t.
  11. There had been various clashes between Altman and the board. We don’t know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional companies. ChatGPT was a great consumer product, but it supercharged AI development in ways that ran counter to OpenAI’s stated non-profit mission.
  12. OpenAI was previously planning an oversubscribed share sale at a valuation of $86 billion that was to close a few weeks later.
  13. Board member Adam D’Angelo said in a Forbes interview in January: “There’s no outcome where this organization is one of the big five technology companies. This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”
  14. Sam Altman on October 16: “4 times in the history of OpenAI––the most recent time was in the last couple of weeks––I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime.” There was speculation that events were driven in whole or in part by secret capabilities gains within OpenAI, possibly from a system called Gobi, perhaps even related to the joking claim ‘AGI has been achieved internally’, but we have no concrete evidence of that.
  15. Ilya Sutskever co-leads the Superalignment Taskforce, has very short timelines for when we will get AGI, and is very concerned about AI existential risk.
  16. Sam Altman was involved in starting multiple new major tech companies. He was looking to raise tens of billions from Saudis to start a chip company. He was in other discussions for an AI hardware company.
  17. Sam Altman has stated time and again, including to Congress, that he takes existential risk from AI seriously. He was part of the creation of OpenAI’s corporate structure. He signed the CAIS letter. OpenAI spent six months on safety work before releasing GPT-4. He understands the stakes. One can question OpenAI’s track record on safety; many did, including those who left to found Anthropic. But this was not a pure ‘doomer vs. accelerationist’ story.
  18. Sam Altman is very good at power games such as fights for corporate control. Over the years he earned the loyalty of his employees, many of whom moved in lockstep, using strong strategic ambiguity. Hand very well played.
  19. Essentially all of VC, tech, founder, financial Twitter united to condemn the board for firing Altman and for how they did it, as did many employees, calling upon Altman to either return to the company or start a new company and steal all the talent. The prevailing view online was that no matter its corporate structure, it was unacceptable to fire Altman, who had built the company, or to endanger OpenAI’s value by doing so. That it was good and right and necessary for employees, shareholders, partners and others to unite to take back control.
  20. Talk in those circles is that this will completely discredit EA or ‘doomerism’ or any concerns over the safety of AI, forever. Yes, they say this every week, but this time it was several orders of magnitude louder and more credible. New York Times somehow gets this backwards. Whatever else this is, it’s a disaster.
  21. By contrast, those concerned about existential risk, and some others, pointed out that the unique corporate structure of OpenAI was designed for exactly this situation. They also mostly noted that the board clearly handled decisions and communications terribly, but that there was much unknown, and tried to avoid jumping to conclusions.
  22. Thus we are now answering the question: What is the law? Do we have law? Where does the power ultimately lie? Is it the charismatic leader that ultimately matters? Who you hire and your culture? Can a corporate structure help us, or do commercial interests and profit motives dominate in the end?
  23. Great pressure was put upon the board to reinstate Altman. They were given two 5pm Pacific deadlines, on Saturday and Sunday, to resign. Microsoft’s aid, and that of its CEO Satya Nadella, was enlisted in this. We do not know what forms of leverage Microsoft did or did not bring to that table.
  24. Sam Altman tweets ‘I love the openai team so much.’ Many at OpenAI respond with hearts, including Mira Murati.
  25. Invited by employees including Mira Murati and other top executives, Sam Altman visited the OpenAI offices on Sunday. He tweeted ‘First and last time i ever wear one of these’ with a picture of his visitor’s pass.
  26. The board does not appear to have been at the building at the time.
  27. Press reported that the board had agreed to resign in principle, but that snags were hit over who the replacement board would be, and over whether or not they would need to issue a statement absolving Altman of wrongdoing, which could be legally perilous for them given their initial statement.
  28. Bloomberg reported on Sunday at 11:16pm that temporary CEO Mira Murati aimed to rehire Altman and Brockman, while the board sought an alternative CEO.
  29. OpenAI board hires former Twitch CEO Emmett Shear to be the new CEO. He issues his initial statement here. I know a bit about him. If the board needs to hire a new CEO from outside that takes existential risk seriously, he seems to me like a truly excellent pick; I cannot think of a clearly better one. The job set for him may or may not be impossible. Shear’s PPS in his note: “Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I’m not crazy enough to take this job without board support for commercializing our awesome models.”
  30. New CEO Emmett Shear has made statements in favor of slowing down AI development, although not a stop. His p(doom) is between 5% and 50%. He has said ‘My AI safety discourse is 100% “you are building an alien god that will literally destroy the world when it reaches the critical threshold but be apparently harmless before that.”’ Here is a thread and video link with more, transcript here or a captioned clip. Here he is tweeting a 2×2 faction chart a few days ago.
  31. Microsoft CEO Satya Nadella posts 2:53am Monday morning: We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett Shear and OAI’s new leadership team and working with them. And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.
  32. Sam Altman retweets the above with ‘the mission continues.’ Brockman confirms. Other leadership to include Jakub Pachocki, the GPT-4 lead, Szymon Sidor, and Aleksander Madry.
  33. Nadella continued in reply: I’m super excited to have you join as CEO of this new group, Sam, setting a new pace for innovation. We’ve learned a lot over the years about how to give founders and innovators space to build independent identities and cultures within Microsoft, including GitHub, Mojang Studios, and LinkedIn, and I’m looking forward to having you do the same.
  34. Ilya Sutskever posts 8:15am Monday morning: I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company. Sam retweets with three heart emojis. Jan Leike, the other head of the superalignment team, tweeted that he worked through the weekend on the crisis, and that the board should resign.
  35. Microsoft stock was down 1% after hours on Friday, and was back to roughly its previous value on Monday morning and at the open. All priced in. Neither Google nor the S&P 500 made major moves either.
  36. 505 of 700 employees of OpenAI, including Ilya Sutskever, sign a letter telling the board to resign and reinstate Altman and Brockman, threatening to otherwise move to Microsoft to work in the new subsidiary under Altman, which will have a job for every OpenAI employee. Full text of the letter that was posted: To the Board of Directors at OpenAI: OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position. The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI. When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith. The leadership team suggested that the most stabilizing path forward – the one that would best serve our mission, company, stakeholders, employees and the public – would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.” Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman. Signed (the first twelve names listed): 1. Mira Murati, 2. Brad Lightcap, 3. Jason Kwon, 4. Wojciech Zaremba, 5. Alec Radford, 6. Anna Makanju, 7. Bob McGrew, 8. Srinivas Narayanan, 9. Che Chang, 10. Lillian Weng, 11. Mark Chen, 12. Ilya Sutskever
  37. There is talk that OpenAI might completely disintegrate as a result, that ChatGPT might not work a few days from now, and so on.
  38. It is very much not over, and still developing.
  39. There is still a ton we do not know.
  40. This weekend was super stressful for everyone. Most of us, myself included, sincerely wish none of this had happened. Based on what we know, there are no villains in the actual story that matters here. Only people trying their best under highly stressful circumstances with huge stakes and wildly different information and different models of the world and what will lead to good outcomes. In short, to all who were in the arena for this on any side, or trying to process it, rather than spitting bile: ❤.

Later, when we know more, I will have many other things to say, many reactions to quote and react to. For now, everyone please do the best you can to stay sane and help the world get through this as best you can.

160 comments
[-]gwern5mo10527

The key news today: Altman had attacked Helen Toner: https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html (HN, Zvi; excerpts). Which explains everything if you recall board structures and voting.

Altman and the board had been unable to appoint new directors because there was an even balance of power, so during the deadlock/low-grade cold war, the board had attrited down to hardly any people. He thought he had Sutskever on his side, so he moved to expel Helen Toner from the board. He would then be able to appoint new directors of his choice. This would have irrevocably tipped the balance of power towards Altman. But he didn't have Sutskever like he thought he did, and they had, briefly, enough votes to fire Altman before he broke Sutskever (as he did yesterday), and they went for the last-minute hail-mary with no warning to anyone.

As always, "one story is good, until another is told"...

[-]gwern5mo10314

The WSJ has published additional details about the Toner fight, filling in the other half of the story. The NYT merely mentions the OA execs 'discussing' it, but the WSJ reports much more specifically that the exec discussion of Toner was a Slack channel that Sutskever was in, and that approximately 2 days before the firing and 1 day before Mira was informed* (ie. the exact day Ilya would have flipped if they had then fired Altman about as fast as possible to schedule meetings 48h before & vote), he saw them say that the real problem was EA and that they needed to get rid of EA associations.

https://www.wsj.com/tech/ai/altman-firing-openai-520a3a8c (excerpts)

The specter of effective altruism had loomed over the politics of the board and company in recent months, particularly after the movement’s most famous adherent, Sam Bankman-Fried, the founder of FTX, was found guilty of fraud in a highly public trial.

Some of those fears centered on Toner, who previously worked at Open Philanthropy. In October, she published an academic paper touting the safety practices of OpenAI’s competitor, Anthropic, which didn’t release its own AI tool until ChatGPT’s emergence. “By delaying the rele

... (read more)
[-]gwern5mo285

The NYer has confirmed that Altman's attempted coup was the cause of the hasty firing (excerpts; HN):

...Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought”, the person familiar with the board’s discussions told me. “Things like that had been happening for years.” (A person familiar with Altman’s perspective said that he acknowledges having been “ham-fisted in the way he tried to get a board member removed”, but that he hadn’t attempted to manipulate the board.)

...His tactical skills w

... (read more)
[-]gwern5mo110

I left a comment over on EAF which has gone a bit viral, describing the overall picture of the runup to the firing as I see it currently.

The summary is: evaluations of the Board's performance in firing Altman generally ignore that Altman made OpenAI and set up all of the legal structures, staff, and the board itself; the Board could, and should, have assumed good faith of Altman because if he hadn't been sincere, why would he have done all that, proving in extremely costly and unnecessary ways his sincerity? But, as it happened, OA recently became such a success that Altman changed his mind about the desirability of all that and now equally sincerely believes that the mission requires him to be in total control; and this is why he started to undermine the board. The recency is why it was so hard for them to realize that change of heart or develop common knowledge about it or coordinate to remove him given his historical track record - but that historical track record was also why if they were going to act against him at all, it needed to be as fast & final as possible. This led to the situation becoming a powder keg, and when proof of Altman's duplicity in the Toner firing became undeniable to the Board, it exploded.

[-]gwern5mo312

Latest news: Time sheds considerably more light on the board position, in its discouragingly-named piece "2023 CEO of the Year: Sam Altman" (excerpts; HN). While it sounds & starts like a puff piece (no offense to Ollie - cute coyote photos!), it actually contains a fair bit of leaking I haven't seen anywhere else. Most strikingly:

  • claims that the Board thought it had the OA executives on its side, because the executives had approached it about Altman:

    The board expected pressure from investors and media. But they misjudged the scale of the blowback from within the company, in part because they had reason to believe the executive team would respond differently, according to two people familiar with the board’s thinking, who say the board’s move to oust Altman was informed by senior OpenAI leaders, who had approached them with a variety of concerns about Altman’s behavior and its effect on the company’s culture.

    (The wording here strongly implies it was not Sutskever.) This of course greatly undermines the "incompetent Board" narrative, possibly explains both why the Board thought it could trust Mira Murati & why she didn't inform Altman ahead of time (was she one of tho

... (read more)
[-]gwern5mo500

If you've noticed OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon, it's because another set of leaks has dropped, and they are again unflattering to Sam Altman & consistent with the previous ones.

Today the Washington Post adds to the pile, "Warning from OpenAI leaders helped trigger Sam Altman’s ouster: The senior employees described Altman as psychologically abusive, creating delays at the artificial-intelligence start-up — complaints that were a major factor in the board’s abrupt decision to fire the CEO" (archive.is; HN; excerpts), which confirms the Time/WSJ reporting about executives approaching the board with concerns about Altman, and adds on more details - their concerns did not relate to the Toner dispute, but apparently were about regular employees:

This fall, a small number of senior leaders approached the board of OpenAI with concerns about chief executive Sam Altman. Altman---a revered mentor, prodigious start-up investor and avatar of the AI revolution---had been psychologically abusive, the employees said, creating pockets of chaos and de

... (read more)
[-]gwern5mo350

An elaboration on the WaPo article in the 2023-12-09 NYT: “Inside OpenAI’s Crisis Over the Future of Artificial Intelligence: Split over the Leadership of Sam Altman, Board Members and Executives Turned on One Another. Their Brawl Exposed the Cracks at the Heart of the AI Movement” (excerpts). Mostly a gossipy narrative from both the Altman & D'Angelo sides, so I'll just copy over my HN comment:

  • another reporting of internal OA complaints about Altman's manipulative/divisive behavior, see previously on HN

  • previously we knew Altman had been dividing-and-conquering the board by lying about others wanted to fire Toner, this says that specifically, Altman had lied about McCauley wanting to fire Toner; presumably, this was said to D'Angelo.

  • Concerns over Tigris had been mooted, but this says specifically that the board thought Altman had not been forthcoming about it; still unclear if he had tried to conceal Tigris entirely or if he had failed to mention something more specific like who he was trying to recruit for capital.

  • Sutskever had threatened to quit after Jakub Pachocki's promotion; previous reporting had said he was upset about it, but hadn't hinted at him being so a

... (read more)
[-]gwern4mo9317

The WSJ dashes our hopes for a quiet Christmas by dropping on Christmas Eve a further extension of all this reporting: "Sam Altman’s Knack for Dodging Bullets—With a Little Help From Bigshot Friends: The OpenAI CEO lost the confidence of top leaders in the three organizations he has directed, yet each time he’s rebounded to greater heights", Seetharam et al 2023-12-24 (Archive.is, HN; annotated excerpts).

This article confirms - among other things - what I suspected about there being an attempt to oust Altman from Loopt for the same reasons as YC/OA, adds some more examples of Altman amnesia & behavior (including what is, since people apparently care, being caught in a clearcut unambiguous public lie), names the law firm in charge of the report (which is happening), and best of all, explains why Sutskever was so upset about the Jakub Pachocki promotion.


  • Loopt coup: Vox had hinted at this in 2014 but it was unclear; however, WSJ specifically says that Loopt was in chaos and Altman kept working on side-projects while mismanaging Loopt (so, nearly identical to the much later, unconnected, YC & OA accusations), leading the 'senior employees' to (twice) appeal to the board

... (read more)
[-]gwern2mo17750

An OA update: it's been quiet, but the investigation is over. And Sam Altman won. (EDIT: yep.)

To recap, because I believe I haven't been commenting on this since December (this is my last big comment, skimming my LW profile): WilmerHale was brought in to do the investigation. The tender offer, to everyone's relief, went off. A number of embarrassing new details about Sam Altman have surfaced: in particular, about his enormous chip fab plan with substantial interest from giants like Temasek, and how the OA VC Fund turns out to be owned by Sam Altman (his explanation was it saved some paperwork and he just forgot to ever transfer it to OA). Ilya Sutskever remains in hiding and lawyered up (his silence became particularly striking with the release of Sora). There have been increasing reports the past week or two that the WilmerHale investigation was coming to a close - and I am told that the investigators were not offering confidentiality and the investigation was narrowly scoped to the firing. (There was also some OA drama with the Musk lawfare & the OA response, but aside from offering an object lesson in how not to redact sensitive information, it's both irrelevant & unimpo... (read more)

5Matthew Barnett2mo
Looks like you were right, at least if the reporting in this article is correct, and I'm interpreting the claim accurately.
[-]gwern2mo268

At least from the intro, it sounds like my predictions were on-point: re-appointed Altman (I waffled about this at 60% because while his narcissism/desire to be vindicated requires him to regain his board seat, because anything less is a blot on his escutcheon, and also the pragmatic desire to lock down the board, both strongly militated for his reinstatement, it also seems so blatant a powergrab in this context that surely he wouldn't dare...? guess he did), released to an Altman outlet (The Information), with 3 weak apparently 'independent' and 'diverse' directors to pad out the board and eventually be replaced by full Altman loyalists - although I bet if one looks closer into these three women (Sue Desmond-Hellmann, Nicole Seligman, & Fidji Simo), one will find at least one has buried Altman ties. (Fidji Simo, Instacart CEO, seems like the most obvious one there: Instacart was YC S12.)

[-]gwern2mo7420

The official OA press releases are out confirming The Information: https://openai.com/blog/review-completed-altman-brockman-to-continue-to-lead-openai https://openai.com/blog/openai-announces-new-members-to-board-of-directors

“I’m pleased this whole thing is over,” Altman said at a press conference Friday.

He's probably right.


As predicted, the full report will not be released, only the 'summary' focused on exonerating Altman. Also as predicted, 'the mountain has given birth to a mouse' and the report was narrowly scoped to just the firing: they bluster about "reviewing 30,000 documents" (easy enough when you can just grep Slack + text messages + emails...), but then admit that they looked only at "the events concerning the November 17, 2023 removal" and interviewed hardly anyone ("dozens of interviews" barely even covers the immediate dramatis personae, much less any kind of investigation into Altman's chip stuff, Altman's many broken promises, Brockman's complainers etc). Doesn't sound like they have much to show for over 3 months of work by the smartest & highest-paid lawyers, does it... It also seems like they indeed did not promise confidentiality or set up any kind of ... (read more)

2ESRogs2mo
Nitpick: Larry Summers not Larry Sumners
4gwern2mo
(Fixed. This is a surname typo I make an unbelievable number of times because I reflexively overcorrect it to 'Sumners', due to reading a lot more of Scott Sumner than Larry Summers. Ugh - just caught myself doing it again in a Reddit comment...)
2ESRogs2mo
Yeah I figured Scott Sumner must have been involved.
2Zach Stein-Perlman1mo
Source?
2Zach Stein-Perlman1mo
@gwern I've failed to find a source saying that Hydrazine invested in OpenAI. If it did, that would be a big deal; it would make this a lie.
8gwern1mo
It was either Hydrazine or YC. In either case, my point remains true: he's chosen to not dispose of his OA stake, whatever vehicle it is held in, even though it would be easy for someone of his financial acumen to do so by a sale or equivalent arrangement, forcing an embarrassing asterisk to his claims to have no direct financial conflict of interest in OA LLC - and one which comes up regularly in bad OA PR (particularly by people who believe it is less than candid to say you have no financial interest in OA when you totally do), and a stake which might be quite large at this point*, and so is particularly striking given his attitude towards much smaller conflicts supposedly risking bad OA PR. (This is in addition to the earlier conflicts of interest in Hydrazine while running YC or the interest of outsiders in investing in Hydrazine, apparently as a stepping stone towards OA.) * if he invested a 'small' amount via some vehicle before he even went full-time at OA, when OA was valued at some very small amount like $50m or $100m, say, and OA's now valued at anywhere up to $90,000m or >900x more, and further, he strongly believes it's going to be worth far more than that in the near-future... Sure, it may be worth 'just' $500m or 'just' $1000m after dilution or whatever, but to most people that's pretty serious money!
1Rebecca4mo
Why do you think McCauley is likely to be the board member Labenz spoke to? I had inferred that it was someone not particularly concerned about safety given that Labenz reported them saying they could easily have requested access to the model if they’d wanted to (and hadn’t). I took the point of the anecdote to be ‘here was a board member not concerned about safety’.
2gwern4mo
Because there is not currently any evidence that Toner was going around talking to a bunch of people, whereas this says McCauley was doing so. If I have to guess "did Labenz talk to the person who was talking to a bunch of people in OA, or did he talk to the person who was as far as I know not talking to a bunch of people in OA?", I am going to guess the former.
1Rebecca4mo
They weren’t the only non-employee board members though - that’s what I meant by the part about not being concerned about safety, that I took it to rule out both Toner and McCauley. (Although if for some other reason you were only looking at Toner and McCauley, then no, I would say the person going around speaking to OAI employees is less likely to be out of the loop on GPT-4’s capabilities)
7gwern4mo
The other ones are unlikely. Shivon Zilis & Reid Hoffman had left by this point; Will Hurd might or might not still be on the board at this point but wouldn't be described nor recommended by Labenz's acquaintance as researching AI safety, as that does not describe Hurd or D'Angelo; Brockman, Altman, and Sutskever are right out (Sutskever researches AI safety but Superalignment was a year away); by process of elimination, over 2023, the only board members he could have been plausibly contacting would be Toner and McCauley, and while Toner weakly made more sense before, now McCauley does. (The description of them not having used the model unfortunately does not distinguish either one - none of the writings connected to them sound like they have all that much hands-on experience and would be eagerly prompt-engineering away at GPT-4-base the moment they got access. And I agree that this is a big mistake, but it is, even more unfortunately, an extremely common one - I remain shocked that Altman had apparently never actually used GPT-3 before he basically bet the company on it. There is a widespread attitude, even among those bullish about the economics, that GPT-3 or GPT-4 are just 'tools', which are mere 'stochastic parrots', and have no puzzling internal dynamics or complexities. I have been criticizing this from the start, but the problem is, 'sampling can show the presence of knowledge and not the absence', so if you don't think there's anything interesting there, your prompts are a mirror which reflect only your low expectations; and the safety tuning makes it worse by hiding most of the agency & anomalies, often in ways that look like good things. For example, the rhyming poetry ought to alarm everyone who sees it, because of what it implies underneath - but it doesn't. This is why descriptions of Sydney or GPT-4-base are helpful: they are warning shots from the shoggoth behind the friendly tool-AI ChatGPT UI mask.)
1Rebecca4mo
I think you might be misremembering the podcast? Nathan said that he was assured that the board as a whole was serious about safety, but I don’t remember the specific board member being recommended as someone researching AI safety (or otherwise more pro safety than the rest of the board). I went back through the transcript to check and couldn’t find any reference to what you’ve said. “ And ultimately, in the end, basically everybody said, “What you should do is go talk to somebody on the OpenAI board. Don’t blow it up. You don’t need to go outside of the chain of command, certainly not yet. Just go to the board. And there are serious people on the board, people that have been chosen to be on the board of the governing nonprofit because they really care about this stuff. They’re committed to long-term AI safety, and they will hear you out. And if you have news that they don’t know, they will take it seriously.” So I was like, “OK, can you put me in touch with a board member?” And so they did that, and I went and talked to this one board member. And this was the moment where it went from like, “whoa” to “really whoa.”” (https://80000hours.org/podcast/episodes/nathan-labenz-openai-red-team-safety/?utm_campaign=podcast__nathan-labenz&utm_source=80000+Hours+Podcast&utm_medium=podcast#excerpt-from-the-cognitive-revolution-nathans-narrative-001513)
2gwern4mo
I was not referring to the podcast (which I haven't actually read yet because from the intro it seems wildly out of date and from a long time ago) but to Labenz's original Twitter thread turned into a Substack post. I think you misinterpret what he is saying in that transcript because it is loose and extemporaneous: "they're committed" could just as easily refer to "are serious people on the board" who have "been chosen" for that (implying that there are other members of the board not chosen for that); and that is what he says in the written-down post:
1Rebecca4mo
This quote doesn’t say anything about the board member/s being people who are researching AI safety though - it’s Nathan’s friends who are in AI safety research not the board members. I agree that based on this quote, it could have very well been just a subset of the board. But I believe Nathan’s wife works for CEA (and he’s previously MCed an EAG), and Tasha is (or was?) on the board of EVF US, and so idk, if it’s Tasha he spoke to and the “multiple people” was just her and Helen, I would have expected a rather different description of events/vibe. E.g. something like ‘I googled who was on the board and realised that two of them were EAs, so I reached out to discuss’. I mean maybe that is closer to what happened and it’s just being obfuscated, either way is confusing to me tbh. Btw, by “out of date” do you mean relative to now, or to when the events took place? From what I can see, the tweet thread, the substack post and the podcast were all published the same day - Nov 22nd 2023. The link I provided is just 80k excerpting the original podcast.
4Wei Dai5mo
There seems to be very little discussion of this story on Twitter. WP's tweet about it got only 75k views and 59 likes as of now, even though WP has 2M followers. (I guess Twitter will hide your tweets even from your followers if the engagement rate is low enough. Not sure what the cutoff is, but 1 like to 100 views doesn't seem uncommon for tweets, and this one is only 1:1000. BTW what's a good article to read to understand Twitter better?)
[-]gwern5mo190

There's two things going on. First, Musk-Twitter appears to massively penalize external links. Musk has vowed to fight 'spammers' who post links on Twitter to what are other sites (gasp) - the traitorous scum! Substack is only the most abhorred of these vile parasites, but all shall be brought to justice in due course. There is no need for other sites. You should be posting everything on Twitter as longform tweets (after subscribing), obviously.

You only just joined Twitter so you wouldn't have noticed the change, but even direct followers seem to be less likely to see a tweet if you've put a link in it. So tweeters are increasingly reacting by putting the external link at the end of a thread in a separate quarantine tweet, not bothering with the link at all, or just leaving Twitter under the constant silent treatment that high-quality tweeting gets you these days.* So, many of the people who would be linking or discussing it are either not linking it or not discussing it, and don't show up in the WaPo thread or by a URL search.

Second, OAers/pro-Altman tweets are practicing the Voldemort strategy: instead of linking the WaPo article at all (note that roon, Eigenrobot etc don't sho... (read more)

2Wei Dai5mo
Thanks for the explanations, but I'm not noticing a big "external links" penalty on my own tweets. Found some discussion of this penalty via Google, so it seems real but maybe not that "massive"? Also some of it dates to before Musk purchased Twitter. Can you point me to anything that says he increased the penalty by a lot? Ah Musk actually published Twitter's algorithms, confirming the penalty. Don't see anyone else saying that he increased the penalty though. BTW why do you "protect" your account (preventing non-followers from seeing your tweets)?
5gwern5mo
'The algorithm' is an emergent function of the entire ecosystem. I have no way of knowing what sort of downstream effects a tweak here or there would cause or the effects of post-Musk changes. I just know what I see: my tweets appear to have plummeted since Musk took over, particularly when I link to my new essays or documents etc. If you want to do a more rigorous analysis, I export my Twitter analytics every few months (thank goodness Musk hasn't disabled that to try to upsell people to the subscription - maybe he doesn't know it's there?) and could provide you my archives. (BTW, there is a moving window where you can only get the last few months, so if you think you will ever be interested in your Twitter traffic numbers, you need to start exporting them every 2-3 months now, or else the historical data will become inaccessible. I don't know if you can restore access to old ones by signing up as an advertiser.) As for the 'published' algorithm, I semi-believe it is genuine (albeit doubtless incomplete) because Musk was embarrassed that it exposed how some parts of the new algorithm are manipulating Twitter to make Musk look more popular (confirming earlier reporting that Musk had ordered such changes after getting angry his views were dropping due to his crummy tweets), but that is also why it hasn't been updated in almost half a year, apparently. God knows what the real thing is like by now...
1Rebecca4mo
Could you link to some examples of “ OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon”? I don’t have a twitter account so can’t search myself
5lc5mo
I've read your explanations of what happened, and it still seems like the board acted extremely incompetently. Call me an armchair general if you want. Specific choices that I take severe issue with:

1. The decision to fire Sam, instead of just ejecting him from the board.

Both kicking Sam off the board, and firing him, and kicking Greg off at the same time all at once with no real explanation is completely unnecessary and is also what ultimately gives Sam the casus belli for organizing the revolt to begin with. It's also unnecessary to defend Helen from Sam's attacks.

Consider what happens if Sam had just lost his board seat. First, his cost-benefit analysis looks different: Sam still has most of what he had before to lose, namely his actual position at OpenAI, and so probably no matter how mad he is he doesn't hold the entire organization hostage. Second, he is way, way more limited in what he can justifiably publicly do in response. Taking the nuclear actions he did - quitting in protest and moving to Microsoft - in response to losing control over a board he shouldn't have control over in the first place would look disloyal and vindictive. And if/when Sam tries to use his position as CEO to sabotage the company or subvert the board further (this time lacking his own seat), you'll have more ammunition to fire him later if you really need to.

If I had been on the board, my first action after getting the five together is to call Greg and Mira into an office and explain what was going on. Then after a long conversation about our motivations (whether or not they'd agreed with our decision), I immediately call Sam in/over the internet and deliver the news that he is no longer a board member, and that the vote had already been passed. I then overtly and clearly explain the reasoning behind why he's losing the board seat ("we felt you were trying to compromise the integrity of the board with your attacks on Helen and playing of board members against one another"),

Thanks, this makes more sense than anything else I've seen, but one thing I'm still confused about:

If the factions were Altman-Brockman-Sutskever vs. Toner-McCauley-D'Angelo, then even assuming Sutskever was an Altman loyalist, any vote to remove Toner would have been tied 3-3. I can't find anything about tied votes in the bylaws - do they fail? If so, Toner should be safe. And in fact, Toner knew she (secretly) had Sutskever on her side, and it would have been 4-2. If Altman manufactured some scandal, the board could have just voted to ignore it.

So I still don't understand "why so abruptly?" or why they felt like they had to take such a drastic move when they held all the cards (and were pretty stable even if Ilya flipped).

Other loose ends:

  • Toner got on the board because of OpenPhil's donation. But how did McCauley get on the board?
  • Is D'Angelo a safetyist?
  • Why wouldn't they tell anyone, including Emmett Shear, the full story?
[-]gwern5mo326

I can't find anything about tied votes in the bylaws - do they fail?

I can't either, so my assumption is that the board was frozen ever since Hoffman/Hurd left for that reason.

And there wouldn't've been a vote at all. I've explained it before but - while we wait for phase 3 of the OA war to go hot - let me take another crack at it, since people seem to keep getting hung up on this and seem to imagine that it's a perfectly normal state of a board to be in a deathmatch between two opposing factions indefinitely, and so confused why any of this happened.

In phase 1, a vote would be pointless, and neither side could nor wanted to force it to a vote. After all, such a vote (regardless of the result) is equivalent to admitting that you have gone from simply "some strategic disagreements among colleagues all sharing the same ultimate goals and negotiating in good faith about important complex matters on which reasonable people of goodwill often differ" to "cutthroat corporate warfare where it's-them-or-us everything-is-a-lie-or-fog-of-war fight-to-the-death there-can-only-be-one". You only do such a vote in the latter situation; in the former, you just keep negotiating until you reach a ... (read more)

3faul_sname5mo
I note that the articles I have seen have said things like (emphasis mine). If Shear had been unable to get any information about the board's reasoning, I very much doubt that they would have included the word "written".
2Daniel 5mo
A 3-3 tie between the CEO founder of the company, the president founder of the company, and the chief scientist of the company vs three people with completely separate day jobs who never interact with rank-and-file employees is not a stable equilibrium. There are ways to leverage this sort of soft power into breaking the formal deadlock, for example: as we saw last week.
0Mitchell_Porter5mo
I have envisaged a scenario in which the US intelligence community has an interagency working group on AI, and Toner and McCauley were its defacto representatives on the OpenAI board, Toner for CIA, McCauley for NSA. Maybe someone who has studied the history of the board can tell me whether that makes sense, in terms of its shifting factions. 

Why would Toner be related to the CIA, and how is McCauley NSA?

If OpenAI is running out of money, and is too dependent on Microsoft, defense/intelligence/government is not the worst place for them to look for money. There are even possible futures where they are partially nationalised in a crisis. Or perhaps they will help with regulatory assessment. This possibility certainly makes the Larry Summers appointment take on a different light, given his ties not only to Microsoft but also to the Government.

7David Hornbein5mo
Toner's employer, the Center for Security and Emerging Technology (CSET), was founded by Jason Matheny. Matheny was previously the Director of the Intelligence Advanced Research Projects Activity (IARPA), and is currently CEO of the RAND Corporation. CSET is currently led by Dewey Murdick, who previously worked at the Department of Homeland Security and at IARPA. Much of CSET's initial staff was former (or "former") U.S. intelligence analysts, although IIRC they were from military intelligence rather than the CIA specifically. Today many of CSET's researchers list prior experience with U.S. civilian intelligence, military intelligence, or defense intelligence contractors. Given the overlap in staff and mission, U.S. intelligence clearly and explicitly has a lot of influence at CSET, and it's reasonable to suspect a stronger connection than that. I don't see it for McCauley though.
3Mitchell_Porter5mo
Toner's university has a long history of association with the CIA. Just google "georgetown cia" and you'll see more than I can summarize.  As for McCauley, well, I did call this a "scenario"... The movie maker Oliver Stone rivals Chomsky as the voice of an elite political counterculture who are deadly serious in their opposition to what the American deep state gets up to, and whose ranks include former insiders who became leakers, whistleblowers, and ideological opponents of the system. When Stone, already known as a Wikileaks supporter, decided to turn his attention to NSA's celebrity defector Edward Snowden, he ended up casting McCauley's actor boyfriend as the star.  My hunch, my scenario, is that people associated with the agency, or formerly associated with the agency, put him forward for the role, with part of the reason being that he was already dating one of their own. What we know about her CV - robotics, geographic information systems, speaks Arabic, mentored by Alan Kay - obviously doesn't prove anything, but it's enough to make this scenario work, as a possibility. 
[-]lc5mo2113

We shall see. I'm just ignoring the mainstream media spins at this point.

[-]trevor5mo1110

For those of us who don't know yet, criticizing the accuracy of mainstream Western news outlets is NOT a strong bayesian update against someone's epistemics, especially on a site like Lesswrong (doesn't matter how many idiots you might remember ranting about "mainstream media" on other sites, the numbers are completely different here).

There is a well-known dynamic called Gell-Mann Amnesia, where people strongly lose trust in mainstream Western news outlets on a topic they are an expert on, but routinely forget about this loss of trust when they read coverage on a topic that they can't evaluate accuracy on. Western news outlets Goodhart readers by depicting themselves as reliable instead of prioritizing reliability.

If you read major Western news outlets, or are new to major news outlets due to people linking to them on Lesswrong recently, some basic epistemic prep can be found in Scott Alexander's The Media Very Rarely Lies and, if it's important, the follow-up posts.

Yeah, that makes sense and does explain most things, except that if I was Helen, I don't currently see why I wouldn't have just explained that part of the story early on?* Even so, I still think this sounds very plausible as part of the story.

*Maybe I'm wrong about how people would react to that sort of justification. Personally, I think the CEO messing with the board constitution to gain de facto ultimate power is clearly very bad and any good board needs to prevent that. I also believe that it's not a reason to remove a board member if they publish a piece of research that's critical of or indirectly harmful for your company. (Caveat that we're only reading a secondhand account of this, and maybe what actually happened would make Altman's reaction seem more understandable.) 

8Lukas_Gloor5mo
Hm, to add a bit more nuance, I think it's okay at a normal startup for a board to be comprised of people who are likely to almost always side with the CEO, as long as they are independent thinkers who could vote against the CEO if the CEO goes off the rails. So, it's understandable (or even good/necessary) for CEOs to care a lot about having "aligned" people on the board, as long as they don't just add people who never think for themselves. It gets more complex in OpenAI's situation where there's more potential for tensions between CEO and the board. I mean, there shouldn't necessarily be any tensions, but Altman probably had less of a say over who the original board members were than a normal CEO at a normal startup, and some degree of "norms-compliant maneuvering" to retain board control feels understandable because any good CEO cares a great deal about how to run things. So, it actually gets a bit murky and has to be judged case-by-case. (E.g., I'm sure Altman feels like what happened vindicated him wanting to push Helen off the board.) 
8Ben Pace5mo
I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo. Pretty interesting that Sutskever/Tasha/D'Angelo would be willing to fire Altman just to prevent Helen from going. They instead could have negotiated someone to replace her. Wouldn't you just remove Altman from the Board, or maybe remove Brockman? Why would they be willing to decapitate the company in order to retain Helen?
[-]gwern5mo9225

They instead could have negotiated someone to replace her.

Why do they have to negotiate? They didn't want her gone, he did. Why didn't Altman negotiate a replacement for her, if he was so very upset about the damages she had supposedly done OA...?

"I understand we've struggled to agree on any replacement directors since I kicked Hoffman out, and you'd worry even more about safety remaining a priority if she resigns. I totally get it. So that's not an obstacle, I'll agree to let Toner nominate her own replacement - just so long as she leaves soon."

When you understand why Altman would not negotiate that, you understand why the board could not negotiate that.

I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo.

Recusal or not, Altman didn't want to bring it to something as overt as a vote expelling her. Power wants to conceal itself and deny the coup. The point here of the CSET paper pretext is to gain leverage and break the tie any way possible so it doesn't look bad or traceable to Altman: that's why this leaking is bad for Altman, it shows him at his least fuzzy and PR-friend... (read more)

[-]habryka5mo3729

I... still don't understand why the board didn't say anything? I really feel like a lot of things would have flipped if they had just talked openly to anyone, or taken advice from anyone. Like, I don't think it would have made them global heroes, and a lot of people would have been angry with them, but every time any plausible story about what happened came out, there was IMO a visible shift in public opinion, including on HN, and the board confirming any story or giving any more detail would have been huge. Instead they apparently "cited legal reasons" for not talking, which seems crazy to me.

7Adam Scholl5mo
I can imagine it being the case that their ability to reveal this information is their main source of leverage (over e.g. who replaces them on the board).
7Linch5mo
My favorite low-probability theory is that he had blackmail material on one of the board members[1], who initially decided after much deliberation to go forwards despite the blackmail, and then when they realized they got outplayed by Sam not using the blackmail material, backpedaled and refused to dox themselves. And the other 2-3 didn't know what to do afterwards, because their entire strategy was predicated on optics management around said blackmail + blackmail material. 1. ^ Like something actually really bad.
[-]Zvi5mo205

It would be sheer insanity to have a rule that you can't vote on your own removal, I would think, or else a tied board will definitely shrink right away.

4mako yass5mo
Wait, simple majority is an insane place to put the threshold for removal in the first place. Majoritarian shrinking is still basically inevitable if the threshold for removal is 50%; it should be higher than that, maybe 62%. And generally, if 50% of a group thinks A and 50% thinks ¬A, that tells you that the group is not ready to make a decision about A.
7Chess3D5mo
It is not clear, in the non-profit structure of a board, that Helen cannot vote on her own removal. The vote to remove Sam may have been some trickery around holding a quorum meeting without notifying Sam or Greg.
4Linch5mo
I think it was most likely unanimous among the remaining 4, otherwise one of the dissenters would've spoken out by now.
5Tristan Wegner5mo
Here is the paper: https://cset.georgetown.edu/publication/decoding-intentions/ Some more recent (Nov/Oct 2023) publications from her here: https://cset.georgetown.edu/staff/helen-toner/
4faul_sname5mo
Manifold says 23% (*edit: link doesn't link directly to that option, it shows up if you search "Helen") on this as "a significant factor for why Sam Altman was fired". It would make sense as a motivation, though it's a bit odd that the board would say that Sam was "not consistently candid" and not "trying to undermine the governance structure of the organization" in that case.

When I read this part of the letter, the authors seem to be throwing it in the face of the board like it is a damning accusation, but actually, as I read it, it seems very prudent and speaks well for the board.

You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?

This reminds me a lot of a blockchain project where I served as an ethicist, which was initially a "project" interested in advancing a "movement" and ended up with a bunch of people whose only real goal was to cash big paychecks for a long time (at which point I handled my residual duties to the best of my ability and resigned, with lots of people expressing extreme confusion and asking why I was acting "foolishly" or "incompetently" (except for a tiny number who got angry at me for not causing a BIGGER ex... (read more)

[-]dr_s5mo3029

Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?

The problem, I suspect, is that people just can't get out of the typical "FOR THE SHAREHOLDERS" mindset. A company that is literally willing to commit suicide rather than be hijacked for purposes antithetical to its mission, like a cell dying by apoptosis rather than going cancerous, can be a very good thing, and if only there were more of this. You can't beat Moloch if you're not willing to precommit to this sort of action. And let's face it, no one involved here is facing homelessness and soup kitchens even if OpenAI crashes tomorrow. They'll be a little worse off for a while, their careers will take a hit, and then they'll pick themselves up. If this was about the safety of humanity, it would be a no-brainer that you should be ready to sacrifice that much.

5Michael Thiessen5mo
Sam's latest tweet suggests he can't get out of the "FOR THE SHAREHOLDERS" mindset. "satya and my top priority remains to ensure openai continues to thrive we are committed to fully providing continuity of operations to our partners and customers" This does sound antithetical to the charter and might be grounds to replace Sam as CEO.
[-]dr_s5mo2817

I feel like, not unlike the situation with SBF and FTX, the delusion that OpenAI could possibly avoid this trap maps onto the same cognitive weak spot among EA/rationalists of "just let me slip on the Ring of Power this once bro, I swear it's just for a little while bro, I'll take it off before Moloch turns me into his Nazgul, trust me bro, just this once".

This is honestly entirely unsurprising. Rivers flow downhill, and companies in a capitalist economy producing stuff with tremendous potential economic value converge on making a profit.

[-]Sune5mo113

The corporate structure of OpenAI was set up as an answer to concerns (about AGI and control over AGIs) which were raised by rationalists. But I don't think rationalists believed that this structure was a sufficient solution to the problem, any more than non-rationalists believed it. The rationalists I have been speaking to were generally sceptical about OpenAI.

6dr_s5mo
Oh, I mean, sure, scepticism about OpenAI was already widespread, no question. But in general it seems to me like there have been too many attempts to be too clever by half from people at least adjacent in ways of thinking to rationalism/EA (like Elon) that go "I want to avoid X-risk but also develop aligned friendly AGI for myself", and the result is almost invariably that it just advances capabilities more than safety. I just think sometimes there's a tendency to underestimate the pull of incentives and how you often can't just have your cake and eat it. I remain convinced that if one wants to avoid X-risk from AGI, the safest road is probably to just strongly advocate for not building AGI, and putting it in the same bin as "human cloning" as a fundamentally unethical technology. It's not a great shot, but it's probably the best one at stopping it. Being wishy-washy doesn't pay off.
2Seth Herd5mo
I think you're in the majority in this opinion around here. I am noticing I'm confused about the lack of enthusiasm for developing alignment methods for the types of AGI that are being developed. Trying to get people to stop building it would be ideal, but I don't see a path to it. The actual difficulty of alignment seems mostly unknown, so it's potentially vastly more tractable. Yet such efforts make up a tiny part of x-risk discussion. This isn't an argument for building AGI, but for aligning the specific AGI others build.
3dr_s5mo
Personally I am fascinated by the problems of interpretability, and I would consider "no more GPTs for you guys until you figure out at least the main functioning principles of GPT-3" a healthy exercise in actual ML science to pursue, but I also have to acknowledge that such an understanding would make distillation far more powerful and thus also lead to a corresponding advance in capabilities. I am honestly stumped at what "I want to do something" looks like that doesn't somehow end up backfiring. It may be that the problem is just thinking this way in the first place, and this really is just a (shudder) political problem, and tech/science can only make it worse.
4Seth Herd5mo
That all makes sense. Except that this is exactly what I'm puzzled by: a focus on solutions that probably won't work ("no more GPTs for you guys" is approximately impossible), instead of solutions that still might - working on alignment, and trading off advances in alignment for advances in AGI. It's like the field has largely given up on alignment, and we're just trying to survive a few more months by making sure to not contribute to AGI at all. But that makes no sense. MIRI gave up on aligning a certain type of AGI for good reasons. But nobody has seriously analyzed prospects for aligning the types of AGI we're likely to get: language model agents or loosely brainlike collections of deep nets. When I and a few others write about plans for aligning those types of AGI, we're largely ignored. The only substantive comments are "well there are still ways those plans could fail", but not arguments that they're actually likely to fail. Meanwhile, everyone is saying we have no viable plans for alignment, and acting like that means it's impossible. I'm just baffled by what's going on in the collective unspoken beliefs of this field.
9dr_s5mo
I'll be real, I don't know what everyone else thinks, but personally I can say I wouldn't feel comfortable contributing to anything AGI-related at this point, because I have very low trust that even aligned AGI would result in a net good for humanity with this kind of governance. I can imagine that maybe amidst all the bargains with the Devil there is one that will genuinely pay off and is the lesser evil, but I can't tell which one. I think the wise thing to do would be just not to build AGI at all, but that's not a realistically open path. So yeah, my current position is that literally any action I could take advances the kind of future I would want by an amount that is at best below the error margin of my guesses, and at worst negative. It's not a super nice spot to be in, but it's where I'm at and I can't really lie to myself about it.
2[anonymous]5mo
In the cancer case, the human body has every cell begin aligned with the body. Anthropically this has to function until breeding age, plus enough offspring to beat losses. And yes, if faulty cells self-destruct instead of continuing, this is good; there are cancer treatments that try to gene-edit in clean copies of specific genes (p53, as I recall) that mediate this (works in rats...). However, the corporate world/international competition world has many more actors, and they are adversarial. OAI self-destructing leaves the world's best AI researchers unemployed and removes them from competing in the next round of model improvements - whoever makes a GPT-5 at a competitor will have the best model outright. Coordination is hard. Consider the consequences if an entire town decided to stop consuming fossil fuels. They pay the extra costs and rebuild the town to be less car dependent. However, the consequence is that this lowers the market price of fossil fuels. So others use more. (Demand elasticity makes the effect still slightly positive.)
8dr_s5mo
I mean, yes, a company self-destructing doesn't stop much if their knowledge isn't also actively deleted - and even then, it's just a setback of a few months. But also, by going "oh well we need to work inside the system to fix it somehow" at some point all you get is just another company racing with all others (and in this case, effectively being a pace setter). However you put it, OpenAI is more responsible than any other company for how close we may be to AGI right now, and despite their stated mission, I suspect they did not advance safety nearly as much as capability. So in the end, from the X-risk viewpoint, they mostly made things worse.
[-]ryan_b5mo217

I agree with all of this in principle, but I am hung up on the fact that it is so opaque. Up until now the board has determinedly remained opaque.

If corporate seppuku is on the table, why not be transparent? How does being opaque serve the mission?

I wrote a LOT of words in response to this, talking about personal professional experiences that are not something I coherently understand myself as having a duty (or timeless permission?) to share, so I have reduced my response to something shorter and more general. (Applying my own logic to my own words, in realtime!)

There are many cases (arguably stupid cases or counter-productive cases, but cases) that come up more and more when deals and laws and contracts become highly entangling.

It's illegal to "simply" ask people for money in exchange for giving them a transferable right to future dividends on a money-making project, sealed with a handshake. The SEC commands silence sometimes and will put you in a cage if you don't.

You get elected to local office and suddenly the Brown Act (which I'd repeal as part of my reboot of the Californian Constitution had I the power) forbids you from talking with your co-workers (other elected officials) about work (the city government) at a party. 

A Confessor is forbidden certain kinds of information leaks.

Fixing <all of this (gesturing at nearly all of human civilization)> isn't something that we have the time or power to do before w... (read more)

5xpym5mo
This seems to presuppose that there is a strong causal effect from OpenAI's destruction to avoiding creation of an omnicidal AGI, which doesn't seem likely? The real question is whether OpenAI was, on the margin, a worse front-runner than its closest competitors, which is plausible, but then the board should have made that case loudly and clearly, because, entirely predictably, their silence has just made the situation worse.

Whatever else, there were likely mistakes from the side of the board, but man does the personality cult around Altman make me uncomfortable. 

[-]Daniel 5mo3423

It reminds me of the loyalty successful generals like Caesar and Napoleon commanded from their men. The engineers building GPT-X weren't loyal to The Charter, and they certainly weren't loyal to the board. They were loyal to the projects they were building and to Sam, because he was the one providing them resources to build and pumping the value of their equity-based compensation.

9Sune5mo
They were not loyal to the board, but it is not clear if they were loyal to The Charter since they were not given any concrete evidence of a conflict between Sam and the Charter.
4dr_s5mo
Feels like an apt comparison, given that what we're finding out now is what happens when some kind of Senate tries to cut the upstart general down to size and the latter basically goes "you and what army?".
1Tristan Wegner5mo
From your last link: As the company was doing well recently, with ongoing talks about an investment implying a market cap of $90B, this would mean many employees might have hit their 10x already - the highest payout they would ever get. So there is every incentive to cash out now (or as soon as the 2-year lock will allow), and zero financial incentive to care about long-term value. This seems worse at aligning employee interest with the long-term interest of the company even compared to regular (unlimited-growth) equity, where each employee might hope that the valuation could get even higher. Also: So it seems the growth cap actually encourages short-term thinking, which seems against their long-term mission. Do you also understand these incentives this way?
[-]dr_s5mo193

It's not even a personality cult. Until the other day Altman was a despicable doomer and decel, advocating for regulations that would clip humanity's wings. As soon as he was fired and the "what did Ilya see" narrative emerged (I don't even think it was all serious at the beginning), the immediate response from the e/acc crowd was to elevate him to the status of martyr in minutes and recast the Board as some kind of reactionary force for evil that wants humanity to live in misery forever rather than bask in the Glorious AI Future.

Honestly even without the doom stuff I'd be extremely worried about this being the cultural and memetic environment in which AI gets developed anyway. This stuff is pure poison.

It doesn't seem to me like e/acc has contributed a whole lot to this beyond commentary. The rallying of OpenAI employees behind Altman is quite plausibly due to his general popularity and ability to gain control of a situation.

At least that seems likely if Paul Graham's assessment of him as a master persuader is to be believed (and why wouldn't it be?).

5dr_s5mo
I mean, the employees could be motivated by a more straightforward sense that the firing is arbitrary and threatens the functioning of OpenAI and thus their immediate livelihood. I'd be curious to understand how much of this is calculated self-interest and how much indeed personal loyalty to Sam Altman, which would make this incident very much a crossing of the Rubicon.

I do find it quite surprising that so many who work at OpenAI are so eager to follow Altman to Microsoft - I guess I assumed the folks at OpenAI valued not working for big tech (that's more(?) likely to disregard safety) more than it appears they actually did.

2Chess3D5mo
My guess is they feel that Sam and Greg (and maybe even Ilya) will provide enough of a safety net (compared to a randomized Board overlord), but there's also a large dose of self-interest once the move gains steam and you know many of your coworkers will leave.

The most likely explanation I can think of, for what look like about-faces by Ilya and Jan this morning, is realizing that the worst plausible outcome is exactly what we're seeing: Sam running a new OpenAI at Microsoft, free of that pesky charter. Any amount of backpedaling, and even resigning in favor of a less safety-conscious board, is preferable to that.

They came at the king and missed.

Yeah, but if this is the case, I'd have liked to see a bit more balance than just retweeting the tribal-affiliation slogan ("OpenAI is nothing without its people") and saying that the board should resign (or, in Ilya's case, implying that he regrets and denounces everything he initially stood for together with the board). Like, I think it's a defensible take to think that the board should resign after how things went down, but the board was probably pointing to some real concerns that won't get addressed at all if the pendulum now swings way too much in the opposite direction, so I would have at least hoped for something like "the board should resign, but here are some things that I think they had a point about, which I'd like to see not get swept under the carpet after the counter-revolution."

It's too late for a conditional surrender now that Microsoft is a credible threat to get 100% of OpenAI's capabilities team; Ilya and Jan are communicating unconditional surrender because the alternative is even worse.

I'm not sure this is an unconditional surrender. They're not talking about changing the charter, just appointing a new board. If the new board isn't much less safety conscious, then a good bit of the organization's original purpose and safeguards are preserved. So the terms of surrender would be negotiated in picking the new board.

[-]Linch5mo5343

AFAICT the only formal power the board has is in firing the CEO, so if we get a situation where whenever the board wants to fire Sam, Sam comes back and fires the board instead, well, it's not exactly an inspiring story for OpenAI's governance structure.

2TLK5mo
This is a very good point. It is strange, though, that the Board was able to fire Sam without the Chair agreeing to it. It seems like something as big as firing the CEO should have required at least a conversation with the Chair, if not the affirmative vote of the Chair. The way this was handled was a big mistake. There need to be new rules in place to prevent big mistakes like this.

If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place. We were already in the "worst case scenario". Better to be honest about it. Then at least, the rest of the organisation doesn't get to keep pointing to the charter and the board as approving their actions when they don't.

The charter it is the board's duty to enforce doesn't say anything about how the rest of the document doesn't count if investors and employees make dire enough threats, I'm pretty sure.

If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place.

If you pushed for fire sprinklers to be installed, then yelled "FIRE", and turned on the fire sprinklers, causing a bunch of water damage, and then refused to tell anyone where you thought the fire was and why you thought that, I don't think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.

Keep in mind that the announcement was not something like

After careful consideration and strategic review, the Board of Directors has decided to initiate a leadership transition. Sam Altman will be stepping down from his/her role, effective November 17, 2023. This decision is a result of mutual agreement and understanding that the company's long-term strategy and core values require a different kind of leadership moving forward.

Instead, the board announced

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his

... (read more)
8aphyer5mo
  The situation is actually even less surprising than this, because the thing people actually initially contemplated doing in response to the board's actions was not even 'taking away your ability to trigger the fire sprinklers' but 'going off and living in a new building somewhere else that you can't flood for lulz'. As I'm understanding the situation OpenAI's board had and retained the legal right to stay in charge of OpenAI as all its employees left to go to Microsoft.  If they decide they would rather negotiate from their starting point of 'being in charge of an empty building' to 'making concessions' this doesn't mean that the charter didn't mean anything!  It means that the charter gave them a bunch of power which they wasted.

If they thought this would be the outcome of firing Sam, they would not have done so.

The risk they took was calculated, but man, are they bad at politics.

[-]dr_s5mo3115

I keep being confused by them not revealing their reasons. Whatever they are, there's no way that saying them out loud wouldn't give some ammo to those defending them, unless somehow between Friday and now they swung from "omg this is so serious we need to fire Altman NOW" to "oops looks like it was a nothingburger, we'll look stupid if we say it out loud". Do they think it's a literal infohazard or something? Is it such a serious accusation that it would involve the police to state it out loud?

5faul_sname5mo
At this point I'm beginning to wonder if a gag order is involved.
0Chess3D5mo
Interesting! Bad at politics is a good way to put it. So you think this was purely a political power move to remove Sam, and they were so bad at projecting the outcomes that all of them thought Greg would stay on board as President and employees would largely accept the change.
5orthonormal5mo
No, I don't think the board's motives were power politics; I'm saying that they failed to account for the kind of political power moves that Sam would make in response.
1Amalthea5mo
It's hard to know for sure, but I think this is a reasonable and potentially helpful perspective. Some of the perceived repercussions on the state of AI safety might be "the band-aid being ripped off". 

The important question is, why now? Why with so little evidence to back up what is such an extreme action?

[-]Alex A5mo4314

RE: the board’s vague language in their initial statement

Smart people who have an objective of accumulating and keeping control—who are skilled at persuasion and manipulation —will often leave little trace of wrongdoing. They’re optimizing for alibis and plausible deniability. Being around them and trying to collaborate with them is frustrating. If you’re self-aware enough, you can recognize that your contributions are being twisted, that your voice is going unheard, and that critical information is being withheld from you, but it’s not easy. And when you try to bring up concerns, they are very good at convincing you that those concerns are actually your fault.

I can see a world where the board was able to recognize that Sam’s behaviors did not align with OpenAI’s mission, while not having a smoking gun example to pin him on. Being unskilled politicians with only a single lever to push (who were probably morally opposed to other political tactics) the board did the only thing they could think of, after trying to get Sam to listen to their concerns. Did it play out well? No.

It’s clear that EA has a problem with placing people who are immature at politics in key political positions. I also believe there may be a misalignment in objectives between the politically skilled members of EA and the rest of us—politically skilled members may be withholding political advice/training from others out of fear that they will be outmaneuvered by those they advise. This ends up working against the movement as a whole.

[-]lc5mo2613

Feels sometimes like all of the good EAs are bad at politics and everybody on our side that's good at politics is not a good EA.

Yeah, I'm getting that vibe. EAs keep going "hell yeah, we got an actual competent mafioso on our side, but they're actually on our side!", and then it turns out the mafioso wasn't on their side, any more than any other mafioso in history had ever been on anyone's side.

5faul_sname5mo
Ok, but then why the statement implying severe misconduct rather than a generic "the board has decided that the style of leadership that Mr. Altman provides is not what OpenAI needs at this time"?

I'm surprised that nobody has yet brought up the development that the board offered Dario Amodei the position as part of a merger with Anthropic (and Dario said no!).

(There's no additional important content in the original article by The Information, so I linked the Reuters paywall-free version.)

Crucially, this doesn't tell us in what order the board made this offer to Dario and the other known figures (GitHub CEO Nat Friedman and Scale AI CEO Alex Wang) before getting Emmett Shear, but it's plausible that merging with Anthropic was Plan A all along. Moreover, I strongly suspect that the bad blood between Sam and the Anthropic team was strong enough that Sam had to be ousted in order for a merger to be possible.

So under this hypothesis, the board decided it was important to merge with Anthropic (probably to slow the arms race), booted Sam (using the additional fig leaf of whatever lies he's been caught in), immediately asked Dario and were surprised when he rejected them, did not have an adequate backup plan, and have been scrambling ever since.

P.S. Shear is known to be very much on record worrying that alignment is necessary and not likely to be easy; I'm curious what Friedman and Wang are on record as saying about AI x-risk.

Has this one been confirmed yet? (Or is there more evidence than this reporting that something like this happened?)

7Lukas_Gloor5mo
Having a "plan A" requires detailed advance-planning. I think it's much more likely that their decision was reactive rather than plan-based. They felt strongly that Altman had to go based on stuff that happened, and so they followed procedures – appoint an interim CEO and do a standard CEO search. Of course, it's plausible – I'd even say likely – that an "Anthropic merger" was on their mind as something that could happen as a result of this further down the line. But I doubt (and hope not) that this thought made a difference to their decision. Reasoning: * If they had a detailed plan that was motivating their actions (as opposed to reacting to a new development and figuring out what to do as things go on), they would probably have put in a bit more time gathering more potentially incriminating evidence or trying to form social alliances.  For instance, even just, in the months or weeks before, visiting OpenAI and saying hi to employees, introducing themselves as the board, etc., would probably have improved staff's perception of how this went down. Similarly, gathering more evidence by, e.g., talking to people close to Altman but sympathetic to safety concerns, asking whether they feel heard in the company, etc, could have unearthed more ammunition. (It's interesting that even the safety-minded researchers at OpenAI basically sided with Altman here, or, at the very least, none of them came to the board's help speaking up against Altman on similar counts. [Though I guess it's hard to speak up "on similar counts" if people don't even really know their primary concerns apart from the vague "not always candid."]) * If the thought of an Anthropic merge did play a large role in their decision-making (in the sense of "making the difference" to whether they act on something across many otherwise-similar counterfactuals), that would constitute a bad kind of scheming/plotting. People who scheme like that are probably less likely than baseline to underestimate power pol

https://twitter.com/i/web/status/1726526112019382275

"Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."

4Zvi5mo
Yeah, should have put that in the main, forgot. Added now.

Most likely explanation is the simplest fitting one:

  • The Board had been angry about a lack of communication for some time, but with internal disagreement (Greg, Ilya)
  • Things sped up lately. Ilya thought it might be good to change CEO to someone who would slow down and look more into safety, as Altman says a lot about safety but speeds up anyway. So he gave a green light on his side (acceptance of the change)
  • Then the Board made the moves that they made
  • Then the new CEO wanted to try to hire back Altman, so they replaced her
  • Then that petition/letter started rolling, because the prominent people saw those moves as harmful to the company and the goal
  • Ilya also saw that the outcome was bad both for the company and for the goal of slowing down, and he saw that if the letter got more signatures it would be even worse, so he changed his mind and also signed

Take note of the language that Ilya uses. He didn't say they did wrong by Altman or that the decision was bad. He said that he changed his mind because the consequences were harmful to the company.

4angmoh5mo
This seems about right. Sam is a bit of a cowboy and probably doesn't bother involving the board more than he absolutely has to.

One thing I've realized more in the last 24h: 

  • It looks like Sam Altman is using a bunch of "tricks" now trying to fight his way back into more influence over OpenAI. I'm not aware of anything I'd consider unethical (at least if one has good reasons to believe one has been unfairly attacked), but it's still the sort of stuff that wouldn't come naturally to a lot of people and wouldn't feel fair to a lot of people (at least if there's a strong possibility that the other side is acting in good faith too).
  • Many OpenAI employees have large monetary incentives on the line and there's levels of peer pressure that are off the charts, so we really can't read too much into who tweeted how many hearts or signed the letter or whatever. 

Maybe the extent of this was obvious to most others, but for me, while I was aware that this was going on, I feel like I underestimated the extent of it. One thing that put things into a different light for me is this tweet

Which makes me wonder, could things really have gone down a lot differently? Sure, smoking-gun-type evidence would've helped the board immensely. But is it their fault that they don't have it? Not necessarily. If they had (1) t... (read more)

The board could (justifiably based on Sam's incredible mobilization the past days**) believe that they have little to no chance of winning the war of public opinion and focus on doing everything privately since that is where they feel on equal footing.

This doesn't explain fully why they haven't stated reasons in private, but it does seem they provided at least something to Emmett Shear as he said he had a reason from the board that wasn't safety or commercialization (PPS of https://twitter.com/eshear/status/1726526112019382275)

** Very few fired employees would even consider pushing back, but to be this successful this quickly is impressive. Not taking a side on it being good or evil, just stating the fact of his ability to fight back after things seemed gloomy (betting markets were down below 10%).

2Chess3D5mo
Well, it seems like the board provided zero evidence in private, too! https://twitter.com/emilychangtv/status/1727228431396704557 Quite the saga: glad it is over, and I think that Larry Summers is a great independent thinker who could help the board make some smart expansion decisions.
3dr_s5mo
I feel like at this point the only truly rational comment is: what the absolute fuck.
3Siebe5mo
This Washington Post article supports the 'Scheming Sam' hypothesis: anonymous reports, mostly from his time at Y Combinator.
[-]gilch5mo2310

He's back. Again. Maybe.

https://twitter.com/OpenAI/status/1727205556136579362

We have reached an agreement in principle for Sam [Altman] to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.

We are collaborating to figure out the details. Thank you so much for your patience through this.

Anyone know how Larry or Bret feel about x-risk?

5trevor5mo
The Verge article is better; it shows tweets by Toner and Nadella confirming that it wasn't just someone getting access to the OpenAI twitter/x account (unless of course someone acquired access to all the accounts, which doesn't seem likely).
3gilch5mo
WSJ: https://www.wsj.com/tech/openai-says-sam-altman-to-return-as-ceo-766349a5
1O O5mo
Paywall?
2gilch5mo
Uh, try this one.

Fun story.

I met Emmett Shear once at a conference, and have read a bunch of his tweeting.

On Friday I turned to a colleague and asked for Shear's email, so that I could email him suggesting he try to be CEO, as he's built a multi-billion-dollar company before and has his head screwed on about x-risk.

My colleague declined, I think they thought it was a waste of time (or didn't think it was worth their social capital).

Man, I wish I had done it, that would have been so cool to have been the one to suggest it to him.

[-]dr_s5mo202

Man, Sutskever's back and forth is so odd. Hard to make obvious sense of, especially if we believe Shear's claim that this was not about disagreements on safety. Any chance that it was Annie Altman's accusations towards Sam that triggered this whole thing? It seems strange since you'd expect it to only happen if public opinion built up to unsustainable levels.

My guess: Sutskever was surprised by the threatened mass exodus. Whatever he originally planned to achieve, he no longer thinks he can succeed. He now thinks that falling on his sword will salvage more of what he cares about than letting the exodus happen.

7dr_s5mo
This would be very consistent with the problem being about safety (Altman at MSFT is worse than Altman at OAI for that), but then Shear is lying (understandable that he might have to for political reasons). Or I suppose it could be anything that involved the survival of OpenAI, which at this point is threatened anyway.

Maybe Shear was lying. Maybe the board lied to Shear, and he truthfully reported what they told him. Maybe "The board did *not* remove Sam over any specific disagreement on safety" but did remove him over a *general* disagreement which, in Sutskever's view, affects safety. Maybe Sutskever wanted to remove Altman for a completely different reason which also can't be achieved after a mass exodus. Maybe different board members had different motivations for removing Altman.

I agree, it's critical to have a very close reading of "The board did *not* remove Sam over any specific disagreement on safety".

This is the kind of situation where every qualifier in a statement needs to be understood as essential—if the statement were true without the word "specific", then I can't imagine why that word would have been inserted.

To elaborate on that, Shear is presumably saying exactly as much as he is allowed to say in public. This implies that if the removal had nothing to do with safety, then he would say "The board did not remove Sam over anything to do with safety". His inserting of that qualifier implies that he couldn't make a statement that broad, and therefore that safety considerations were involved in the removal.

7Michael Thiessen5mo
According to Bloomberg, "Even CEO Shear has been left in the dark, according to people familiar with the matter. He has told people close to OpenAI that he doesn’t plan to stick around if the board can’t clearly communicate to him in writing its reasoning for Altman’s sudden firing." Evidence that Shear simply wasn't told the exact reason, though the "in writing" part is suspicious. Maybe he was told not in writing and wants them to write it down so they're on the record.
8DanielFilan5mo
He was probably kinda sleep deprived and rushed, which could explain inessential words being added.
3ryan_b5mo
I would normally agree with this, except it does not seem to me like the board is particularly deliberate about their communication so far. If they are conscientious enough about their communication to craft it down to the word, why did they handle the whole affair in the way they seem to have so far? I feel like a group of people who did not see fit to provide context or justifications to either their employees or largest shareholder when changing company leadership and board composition probably also wouldn't weigh each word carefully when explaining the situation to a total outsider. We still benefit from a very close reading, mind you; I just believe there's a lot more wiggle room here than we would normally expect from corporate boards operating with legal advice based on the other information we have.
  1. The quote is from Emmett Shear, not a board member.
  2. The board is also following the "don't say anything literally false" policy by saying practically nothing publicly.
  3. Just as I infer from Shear's qualifier that the firing did have something to do with safety, I infer from the board's public silence that their reason for the firing isn't one that would win back the departing OpenAI members (or would only do so at a cost that's not worth paying). 
  4. This is consistent with it being a safety concern shared by the superalignment team (who by and large didn't sign the statement at first) but not by the rest of OpenAI (who view pushing capabilities forward as a good thing, because like Sam they believe the EV of OpenAI building AGI is better than the EV of unilaterally stopping). That's my current main hypothesis.
5ryan_b5mo
Ah, oops! My expectations are reversed for Shear; him I strongly expect to be as exact as humanly possible. With that update, I'm inclined to agree with your hypothesis.
5dr_s5mo
That's the part that confuses me most. An NDA wouldn't be strong enough reason at this point. As you say, safety concerns might, but that seems pretty wild unless they literally already have AGI and are fighting over what to do with it. The other thing is anything that if said out loud might involve the police, so revealing the info would be itself an escalation (and possibly mutually assured destruction, if there's criminal liability on both sides). I got nothing.

The facts very strongly suggest that the board is not a monolithic entity. Its inability to tell a sensible story about the reasons for Sam's firing might be due to no single comprehensible story existing, with different board members having different motives that let them agree on the firing initially but ultimately not on a story that they could jointly endorse.

There's... too many things here. Too many unexpected steps, somehow pointing at too specific an outcome. If there's a plot, it is horrendously Machiavellian.

(Hinton's quote, which keeps popping into my head: "These things will have learned from us by reading all the novels that ever were and everything Machiavelli ever wrote, that how to manipulate people, right? And if they're much smarter than us, they'll be very good at manipulating us. You won't realise what's going on. You'll be like a two year old who's being asked, do you want the peas or the cauliflower? And doesn't realise you don't have to have either. And you'll be that easy to manipulate. And so even if they can't directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.")

(And Altman: "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes")

If an AI were to spike in capabilities specifically relating to manipulating individuals and groups of people, this is roughly how I would expect the outcome to look l... (read more)

7Seth Herd5mo
I think we can discount it as a real possibility, while accepting Altman's "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes". I think it might be weakly superhuman at persuasion for things like "buy our products", but that doesn't imply being superhuman at working out complex consequences of political maneuvering. Doing that would firmly imply a generally superhuman intelligence, I think. So I think if this has anything to do with internal AI breakthroughs, it's tangential at most.
6dr_s5mo
I mean, this would not be too hard though. It could be achieved by a simple trick of appearing smarter to some people and then dumber at subsequent interactions with others, scaring the safety conscious and then making them look insane for being scared. I don't think that's what's going on (why would even an AGI model they made be already so cleverly deceptive and driven? I would expect OAI to not be stupid enough to build the most straightforward type of maximizer) but it wouldn't be particularly hard to think up or do.
6Odd anon5mo
Time for some predictions. If this is actually from AI developing social manipulation superpowers, I would expect:
1. We never find out any real reasonable-sounding reason for Altman's firing.
2. OpenAI does not revert to how it was before.
3. More instances of people near OpenAI's safety people doing bizarre unexpected things that have stranger outcomes.
4. Possibly one of the following:
   1. Some extreme "scissors statements" pop up which divide AI groups into groups that hate each other to an unreasonable degree.
   2. An OpenAI person who directly interacted with some scary AI suddenly either commits suicide or becomes a vocal flat-earther or similar who is weirdly convincing to many people.
   3. An OpenAI person skyrockets to political power, suddenly finding themselves in possession of narratives and phrases which convince millions to follow them.
(Again, I don't think it's that likely, but I do think it's possible.)
3faul_sname5mo
Things might be even weirder than that if this is a narrowly superhuman AI that is specifically superhuman at social manipulation, but still has the same inability to form new gears-level models exhibited by current LLMs (e.g. if they figured out how to do effective self-play on the persuasion task, but didn't actually crack AGI).
3Chess3D5mo
While I don't think this is true, it's a fun thought (and can also be pointed at Altman himself, rather than an AGI). Neither is true, but it's fun to think about.

I love how short this post is! Zvi, you should do more posts like this (in addition to your normal massive-post fare).

Adam D'Angelo retweeted a tweet implying that hidden information still exists and will come out in the future:

Have known Adam D’Angelo for many years and although I have not spoken to him in a while, the idea that he went crazy or is being vindictive over some feature overlap or any of the other rumors seems just wrong. It’s best to withhold judgement until more information comes out.

#14: If there have indeed been secret capability gains, so that Altman was not joking about reaching AGI internally (it seems likely that he was joking, though given the stakes, it's probably not the sort of thing to joke about), then the way I read their documents, the board should make that determination:

Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

Once they've made that determination, then Microsoft will not have access to the AGI technology. Given the possible consequences, I doubt that Microsoft would have found such a joke very amusing.

[-]dr_s5mo135

Honestly this does seem... possible. A disagreement on whether GPT-5 counts as AGI would have this effect. The most safety-minded would go "ok, this is AGI, we can't give it to Microsoft". The more business-oriented and less conservative would go "no, this isn't AGI yet, it'll make us a fuckton of money though". There would be conflict. But now, seeing how everyone might switch to Microsoft and simply rebuild the thing from scratch there, Ilya despairs and decides to do a 180, because at least this way he gets to supervise the work somehow.

This conflict has inescapably taken place in the context of US-China competition over AI, as leaders in both countries are well known to pursue AI acceleration for applications like autonomous low-flying nuclear cruise missiles (e.g. in contingencies where military GPS networks fail), economic growth faster than the US/China/rest of the world, and information warfare.

I think I could confidently bet against Chinese involvement, that seems quite reasonable. I can't bet so confidently against US involvement; although I agree that it remains largely unclear, i... (read more)

There had been various clashes between Altman and the board. We don’t know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional other companies. ChatGPT was a great consumer product, but supercharged AI development counter to OpenAI’s stated non-profit mission.


Does anyone have proof of the board's unhappiness about speed, lack of safety concern, and disagreement with founding other companies? All seem plausible, but I have seen basically nothing concrete.

The theory that my mind automatically generates on seeing these happenings is that Ilya was in cahoots with Sam & Greg, and the pantomime was a plot to oust the external members of the board.

However, I like to think I'm wise enough to give this 5% probability on reflection.

[-]nem5mo4-2

Is there any chance that Altman himself triggered this? Did something that he knew would cause the board to turn on him, with knowledge that Microsoft would save him?

I'm 90% sure that the issue here was an inexperienced board with a Chief Scientist that didn't understand the human dimension of leadership.

Most independent board members usually have a lot of management experience and so understand that their actual power is less than their power on paper. They don't have day-to-day factual knowledge about the business of the company and don't have a good grasp of relationships between employees. So, they normally look to management to tell them what to do.

Here, two of the board members lacked the organizational exper... (read more)

[-]dr_s5mo30

What about this?

https://twitter.com/robbensinger/status/1726387432600613127

We can definitely say that the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.

If considered reputable (and not a lie), this would significantly narrow the space of possible reasons.

[-]gh4n5mo30

Mira Mutari.

Typo here and below: Murati

2Zvi5mo
Thanks.


Jan Leike, the other head of the superalignment team, Tweeted that he worked through the weekend on the crisis, and that the board should resign.

No link for this one?

3Tristan Wegner5mo
https://x.com/janleike/status/1726600432750125146?s=20

What's the source of that 505-employee letter? I mean, the contents aren't too crazy, but isn't it strange that the only thing we have is a screenshot of the first page?

3Robert_AIZI5mo
It was covered in Axios, who also link to it as a separate pdf with all 505 signatories.
6Zvi5mo
The claim now is that it's up to 650/770.
3Robert_AIZI5mo
That link is broken for me, did you mean to link to this Lilian Weng tweet?
2Zvi5mo
Initially I saw it from Kara Swisher (~1mm views) then I saw it from a BB employee. I presume it is genuine.