Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Basically just the title; see the OpenAI blog post for more details.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”


EDIT:

Also, Greg Brockman is stepping down from his board seat:

As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

The remaining board members are:

OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.


EDIT 2:

Sam Altman tweeted the following.

i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. 

will have more to say about what’s next later. 

🫡 

Greg Brockman has also resigned.

75 comments

Update: Greg Brockman quit.

Update: Sam and Greg say:

Sam and I are shocked and saddened by what the board did today.

Let us first say thank you to all the incredible people who we have worked with at OpenAI, our customers, our investors, and all of those who have been reaching out.

We too are still trying to figure out exactly what happened. Here is what we know:

- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.

- At 12:19pm, Greg got a text from Ilya asking for a quick call. At 12:23pm, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.

- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.

The outpouring of support has been really nice; thank you, but please don’t spend any time being concerned. We will be fine. Greater things coming soon.

Update: three more resignations including Jakub...

Perhaps worth noting: one of the three resignations, Aleksander Madry, was head of the preparedness team, which is responsible for preventing risks from AI such as self-replication.

Buck:

Note that Madry only just started, iirc.

O O:
Also: Jakub Pachocki, who was the director of research.
Max H:

Also seems pretty significant:

As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

The remaining board members are:

OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.

Has anyone collected their public statements on various AI x-risk topics anywhere?

Adam D'Angelo via X:

Oct 25

This should help access to AI diffuse throughout the world more quickly, and help those smaller researchers generate the large amounts of revenue that are needed to train bigger models and further fund their research.

Oct 25

We are especially excited about enabling a new class of smaller AI research groups or companies to reach a large audience, those who have unique talent or technology but don’t have the resources to build and market a consumer application to mainstream consumers.

Sep 17

This is a pretty good articulation of the unintended consequences of trying to pause AI research in the hope of reducing risk: [citing Nora Belrose's tweet linking her article]

Aug 25

We (or our artificial descendants) will look back and divide history into pre-AGI and post-AGI eras, the way we look back at prehistoric vs "modern" times today.

Aug 20

It’s so incredible that we are going to live through the creation of AGI. It will probably be the most important event in the history of the world and it will happen in our lifetimes.

Has anyone collected their public statements on various AI x-risk topics anywhere?

A bit, not shareable.

Helen is an AI safety person. Tasha is on the Effective Ventures board. Ilya leads superalignment. Adam signed the CAIS statement.

For completeness - in addition to Adam D’Angelo, Ilya Sutskever and Mira Murati signed the CAIS statement as well.

Didn't Sam Altman also sign it?

Yes, Sam has also signed it.

evhub:

Notably, of the people involved in this, Greg Brockman did not sign the CAIS statement, and I believe that was a purposeful choice.

Also, D'Angelo is on the board of Asana, Moskovitz's company (Moskovitz funds Open Phil).

Judging from his tweets, D'Angelo seems significantly unconcerned with AI risk, so I was quite taken aback to find out he was on the OpenAI board. This might be misinterpreting his views based on vibes.

Person:

I couldn't remember where from, but I know that Ilya Sutskever at least takes x-risk seriously. I remember him recently going public about how failing alignment would essentially mean doom. I think it was published as an article on a news site rather than an interview, which is what he usually does. Someone with a way better memory than me could find it.

EDIT: Nevermind, found them.

LawrenceC:
Thanks, edited.
Burny:

"OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing AI safely enough, according to people with knowledge of the situation.

Such disagreements were high on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.

At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns."

Kara Swisher also tweeted:

"More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."

"The developer day and how the store was introduced was in inflection moment of... (read more)

Kara Swisher (@karaswisher): https://twitter.com/karaswisher/status/1725678898388553901

Sources tell me that the profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds. One person on the Sam side called it a “coup,” while another said it was the right move.

Came across this account via a random lawyer I'm following on Twitter (for investment purposes), who commented, "Huge L for the e/acc nerds tonight". Crazy times...

trevor:

I think this makes sense as an incentive for AI acceleration: even if someone is trying to accelerate AI for altruistic reasons, e.g. differential tech development (maybe they calculate that LLMs have better odds of interpretability succeeding because they think in English), they should still lose access to their AI lab shortly after accelerating AI.

They get so much personal profit from accelerating AI that only people prepared to personally lose it all within 3 years are willing to sacrifice enough to do something as extreme as burning the remaining timeline.

I'm generally not on board with leadership shakeups in the AI safety community, because the disrupted alliance webs create opportunities for resourceful outsiders to worm their way in. I worry especially about incentives for the US natsec community to do this. But when I look at it from the game theory/moloch perspective, it might be worth the risk, if it means setting things up so that the people who accelerate AI always fail to be the ones who profit off of it, and therefore can only accelerate because they think it will benefit the world.

Sune:

It seems the sources are supporters of Sam Altman. I have not seen any indication of this from the board's side.

Sune:
Ok, looks like he was invited into OpenAI's office for some reason at least: https://twitter.com/sama/status/1726345564059832609

This seems to suggest a huge blunder

trevor:
This is the market itself, not a screenshot! Click one of the "bet" buttons. An excellent feature.
TheBayesian:
Note:  Those are two different markets. Nathan's market is this one and Sophia Wisdom's market (currently the largest one by far) is this one. 
O O:

I expect investors will take the non-profit status of these companies more seriously going forwards.

I hope Ilya et al. realize what they’ve done.

Edit: I think I've been vindicated a bit. As I expected, money would just flock to for-profit AGI labs, as it is poised to do right now. I hope OpenAI remains a non-profit, but I think Ilya played with fire.

O O:

So, Meta disbanded its responsible AI team. I hope this story reminds everyone about the dangers of acting rashly.

Firing Sam Altman was really a one time use card.

Microsoft probably threatened to pull its investments and compute, which would let Sam Altman's new competitor pull ahead regardless, as OpenAI would be in an eviscerated state in terms of both funding and human capital. This move makes sense if you're at the precipice of AGI, but not before that.

quetzal_rainbow:
Their Responsible AI team was in pretty bad shape after recent lay-offs. I think Facebook just decided to cut costs.
Lukas_Gloor:
It was weird anyway that they had LeCun in charge and a thing called a "Responsible AI team" in the same company. No matter what one thinks about Sam Altman now, compared to LeCun, the things he said about AI risks sounded 100 times more reasonable.
Siebe:
Meta's actions seem unrelated?

Now he’s free to run for governor of California in 2026:

I was thinking about it because I think the state is in a very bad place, particularly when it comes to the cost of living and specifically the cost of housing. And if that doesn’t get fixed, I think the state is going to devolve into a very unpleasant place. Like one thing that I have really come to believe is that you cannot have social justice without economic justice, and economic justice in California feels unattainable. And I think it would take someone with no loyalties to sort of very powerful

...

Aside from obvious questions on how it will impact the alignment approach of OpenAI and whether or not it is a factional war of some sort, I really hope this has nothing to do with Sama's sister. Both options, "she is wrong but something convinced the OpenAI leadership that she's right" and "she is actually right and finally gathered some proof of her claims", are very bad. ...On the other hand, as cynical and grim as that is, sexual harassment probably won't spell a disaster down the line, unlike a power struggle at the top of an AGI-pursuing company.

zby:
Speculation on the available info: They must have questioned him on that. Discovering that he was not entirely candid with them would be a good explanation of this announcement. And shadowbanning would be the most discoverable here.
Ninety-Three:
Surely they would use different language than "not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities" to describe a #metoo firing.
Taisia Terumi:
Yeah, I also think this is very unlikely. Just had to point out the possibility for completeness' sake. In other news, someone on Twitter (a leaker? not sure) said that there probably will be more firings and that this is a struggle between the for-profit and non-profit sides of the company, with Sama representing the for-profit side.
Daniel_Eth:
I think they said that there were more departures to come. I assumed that was referring to people quitting because they disagreed with the decision.
Viliam:
That reminds me of the post we had here a month ago. When I asked how exactly we were supposed to figure out the truth about something that happened in private many years ago, I was told that:
* the OP is conducting research; we should wait for the conclusions (should I keep holding my breath?)
* we should wait to see whether other victims come forward, and update accordingly (did we actually?)
Now I wonder whether Less Wrong was used as a part of a character-assassination campaign designed to make people less likely to defend Sam Altman in case of a company takeover. And we happily played along. (This is unrelated to whether firing Sam Altman was good or bad from the perspective of AI safety.)

How surprising is this to alignment community professionals (e.g. people at MIRI, Redwood Research, or similar)? From an outside view, the volatility/flexibility and the movement away from pure growth and commercialization seem unexpected and could be to alignment researchers' benefit (although it's difficult to see the repercussions at this point). It is surprising to me because I don't know the inner workings of OpenAI, but I'm surprised that it seems similarly surprising to the LW/alignment community as well.

Perhaps the insiders are stil...

Sune:

It seems this was a surprise to almost everyone even at OpenAI, so I don’t think it is evidence that there isn’t much information flow between LW and OpenAI.

DanielFilan:
I'm at CHAI and it's shocking to me, but I'm not the most plugged-in person.

Someone writes anonymously, "I feel compelled as someone close to the situation to share additional context about Sam and company. . . ."

https://www.reddit.com/r/OpenAI/comments/17xoact/comment/k9p7mpv/

[This comment is no longer endorsed by its author]

I read their other comments and I'm skeptical. The tone is wrong.

Ben Pace:
It read like propaganda to me, whether the person works at the company or not.

I wonder what changes will happen after Sam and Greg's exit. I hope they install a better direction towards AI safety.

MiguelDev:
I expect Sam to open up a new AI company.
mishka:
Yeah... On one hand, I am excited about Sam and Greg hopefully trying more interesting things than just scaling Transformer LLMs, especially considering Sam's answer to the last question on Nov. 1 at Cambridge Union, 1:01:45 in https://www.youtube.com/watch?v=NjpNG0CJRMM where he seems to think that more than Transformer-based LLMs are needed for AGI/ASI (in particular, he correctly says that "true AI" must be able to discover new physics, and he doubts LLMs are good enough for that). On the other hand, I was hoping for a single clear leader in the AI race, and I thought that Ilya Sutskever was one of the best possible leaders for an AI safety project. And now Ilya vs. Sam and Greg Brockman are enemies, https://twitter.com/gdb/status/1725736242137182594, and if Sam and Greg find a way to beat OpenAI, would they be able to be sufficiently mindful about safety?

Hmmm. The way Sam behaves, I can't see a path of him leading an AI company towards safety. The way I interpreted his world tour (22 countries?) talking about OpenAI or AI in general is him trying to occupy the mindspace of those countries. The CEO I wish OpenAI had is someone who stays at the offices, ensuring that we are on track to safely steer arguably the most revolutionary tech ever created, not someone promoting the company or the tech; I think a world tour is unnecessary if one is doing AI development and deployment safely.

(But I could be wrong too. Well, let's all see what's going to happen next.)

mishka:
Interesting, how sharply people disagree... It would be good to be able to attribute this disagreement to a particular part of the comment. Is that about me agreeing with Sam about "True AI" needing to be able to do novel physics? Or about me implicitly supporting the statement that LLMs would not be good enough (I am not really sure, I think LLMs would probably be able to create non-LLMs based AIs, so even if they are not good enough to achieve the level of "True AI" directly, they might be able to get there by creating differently-architected AIs)? Or about having a single clear leader being good for safety? Or about Ilya being one of the best safety project leaders, based on the history of his thinking and his qualification? Or about Sam and Greg having a fighting chance against OpenAI? Or about me being unsure of them being able to do adequate safety work on the level which Ilya is likely to provide? I am curious which of these seem to cause disagreement...
MiguelDev:
I did not press the disagreement button but here is where I disagree:
mishka:
Do you mean this in the sense that this would be particularly bad safety-wise, or do you mean this in the sense they are likely to just build huge LLMs like everyone else is doing, including even xAI?
MiguelDev:
I'm still figuring out Elon's xAI. But with regard to how Sam behaves: if he doesn't improve his framing[1] of what AI could be for the future of humanity, I expect the same results.
[1] I think he frames it with him as the main person that steers the tech, rather than an organisation or humanity steering the tech; that's how it feels to me, the way he behaves.
mishka:
They released a big LLM, the "Grok". With their crew of stars I hoped for a more interesting direction, but an LLM as a start is not unreasonable (one does need a performant LLM as a component). Yeah... I thought he deferred to Ilya and to the new "superalignment team" Ilya has been co-leading safety-wise... But perhaps he was not doing that consistently enough...
MiguelDev:
I haven't played around with Grok so I'm not sure how capable or safe it is. But I hope Elon and his team of experts get the safety problem right, as he has created companies with extraordinary achievements. At least Elon has demonstrated his aspirations to better humanity in other fields (internet/satellites, space exploration and EVs), and I hope that translates to xAI and Twitter. I felt different about Ilya co-leading; it seems to me that there's something happening inside OpenAI. When Ilya needed to co-lead the new safety direction, this felt like: "something feels weird inside OpenAI and Ilya needed to co-lead the safety direction." So maybe the announcement today is related to that too. Pretty sure there will be new info from OpenAI next week or two weeks from now. Hoping it favors more safety directions, long term.
mishka:
I expect safety of that to be at zero (they don't think GPT-3.5-level LLMs are a problem in this sense; besides, they market it almost as an "anything goes, anti-censorship LLM"). But that's not really the issue; when a system starts being capable of writing code reasonably well, then one starts getting a problem... I hope when they come to that, to approaching AIs which can create better AIs, they'll start taking safety seriously... Otherwise, we'll be in trouble... I thought he was the appropriately competent person (he was probably the #1 AI scientist in the world). The right person for the most important task in the world... And the "superalignment" team at OpenAI was... not very strong. The original official "superalignment" approach was unrealistic and hence not good enough. I made a transcript of some of his thoughts, https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a, and it was obvious that his thinking was different from the previous OpenAI "superalignment" approach and much better (as in, "actually had a chance to succeed")... Of course, now, since it looks like the "coup" has mostly been his doing, I am less sure that this is the leadership OpenAI and OpenAI safety needs. The manner of it has certainly been too erratic. Safety efforts should not evoke the feel of a "last minute emergency"...
Kaj_Sotala:
At least it refuses to give you instructions for making cocaine.
Thane Ruthenis:
Well. If nothing else, the sass is refreshing after the sycophancy of all the other LLMs.
mishka:
That's good! So, at least a bit of safety fine-tuning is there... Good to know...
MiguelDev:
Yeah, let's see where they will steer Grok. I agree with your analysis of the superalignment agenda; I think it's not a good use of the 20% of compute resources that they have. I even think a 20% resource allocation on AI safety is not deep enough into the problem, as I think a 100% allocation[1] is necessary. I haven't had much time studying Ilya, but I like the way he explains his arguments. I hope they (Ilya, the board, and Mira or a new CEO) will be better at expanding the tech than Sam is. Let's see.
[1] I think the safest AI will be the most profitable technology, as everyone will want to promote and build on top of it.

So I guess OpenAI will keep pushing ahead on both safety and capabilities, but not so much on commercialization? 

Typical speculations: 

  • Annie Altman charges
  • Undisclosed financial interests (AGI, Worldcoin, or YC)
O O:

Potentially relevant information: 

OpenAI insiders seem to also be blindsided and apparently angry at this move.

I personally think there were likely better ways for Ilya's faction to get Sam's faction to negotiate with him, but this firing makes sense based on some reviews of this company having issues with communication as a whole and potentially having a toxic work environment.

 

edit: link source now available in replies

trevor:
The human brain seems to be structured such that:
1. Factional lines are often drawn splitting up large groups like corporations, government agencies, and nonprofits, with the lines tracing networks of alliances, and also retaliatory commitments that are often used to make factions and individuals hardened against removal by rivals.
2. People are nonetheless occasionally purged along these lines rather than more efficient decision theory like values handshakes.
3. These conflicts and purges are followed by harsh rhetoric, since people feel urges to search languagespace and find combinations of words that optimize for retaliatory harm against others.
I would be very grateful for sufficient evidence that the new leadership at OpenAI is popular or unpopular among a large portion of the employees, rather than a small number of anonymous people who might have been allied to the purged people. I think it might be better to donate that info, e.g. message LW mods via the intercom feature in the lower right corner, than to post it publicly.
O O:
There are certainly factions in most large groups, with internal conflict, but this sort of coup is unprecedented. I think in the majority of cases, factions tend to cooperate or come to a resolution. If factions couldn't cooperate, most corporations would be fairly dysfunctional. If the solution was a coup, governments would be even more dysfunctional. This is public information, so is there a particular reason I should not have posted it?
trevor:
Can you please link to it or say what app or website this is?
O O:
Here it is: "Sam Altman’s reputation among OpenAI researchers (Tech Industry)" https://www.teamblind.com/us/s/Ji1QX120
O O:

Can someone from OpenAI anonymously spill the 🍵?

magfrump:
Not from OpenAI but the language sounds like this could be the board protecting themselves against securities fraud committed by Altman.
Holly_Elmore:
What kind of securities fraud could he have committed? 

I'm just a guy but the impression I get from occasionally reading the Money Stuff newsletter is that basically anything bad you do at a public company is securities fraud, because if you do a bad thing and don't tell investors, then people who buy the securities you offer are doing so without full information because of you.

[anonymous]:
I doubt the reason for his ousting was fraud-related, but if it was I think it's unlikely to be viewed as securities fraud simply because OpenAI hasn't issued any public securities. I'm not a securities lawyer, but my hunch is even if you could prosecute Altman for defrauding e.g. Microsoft shareholders, it would be far easier to sue directly for regular fraud.
faul_sname:
MSFT market cap dropped about $40B in a 15 minute period on the news, so maybe someone can argue securities fraud on that basis? I dunno, I look forward to the inevitable Matt Levine article.

A wild (probably wrong) theory: Sam Altman announcing custom GPTs was the thing that pushed the board to fire him.

 

Customizable AI -> users can override RLHF (maybe, probably) -> we are at risk from AIs that have been fine-tuned by bad actors.