(Glances at investor's agreement...)
IMPORTANT
* * Investing in OpenAI Global, LLC is a high-risk investment * *
* * Investors could lose their capital contribution and not see any return * *
* * It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world * *
The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members. See Section 6.4 for additional details.
If it turns out that the investors actually have the ability to influence OpenAI's leadership, it means the structure has failed. That itself would be a good reason for most of its support to disappear, and for its (ideologically motivated) employees to leave. This situation may put the organization in a bit of a bind.
The structure was also supposed to function for some future where OpenAI has a tremendous amount of power, to guarantee in advance that OpenAI would not be forced to use that power for profit. The implication that Microsoft expects to be able to influence the decision is itself a significant hit to OpenAI.
Worth noting that Sam Altman has a history of hostile power plays, e.g. at Reddit.
Paul Graham has written multiple times about Sam's thirst/skill for power:
https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny
It seems like a strategy by investors or even large tech companies to create a self-fulfilling prophecy to create a coalition of OpenAI employees, when there previously was none.
How is this more likely than the alternative, which is simply that this is an already-existing coalition that supports Sam Altman as CEO? Considering that he was CEO until he was suddenly removed yesterday, it would be surprising if most employees and investors didn't support him. Unless I'm misunderstanding what you're claiming here?
Yeah, the impression I get is that if investors are going to the trouble to leak their plot to journalists (even if anonymously), then they are probably hoping to benefit from it acting as a rallying cry/Schelling point. This is a pretty basic thing for intra-org conflicts.
This indicates, at minimum, that they expect the leak to make the coalition they're aiming for stronger than it would have been if they had taken the default action and not gone to journalists. It seems to me more likely that the coalition they're hoping for doesn't exist at all or is too diffuse, and that they're trying to set people up to join a pro-Altman faction by claiming one is already strong and is due to receive outside support (from "anonymous sources familiar with the matter").
That's a good point in theory, but you'd think the board members could also speak to the media and say "no, they are lying; we haven't had the talks with them that they claim to have had with us about reverting the decision."
[Update Nov 20th, seems like you were right and the board maybe didn't have good enough media connections to react to this, or were too busy fighting their fight on another front.]
Eh, it appears that Microsoft and other investors want Sam Altman back, but the employees are the ones making it happen. It appears that Sam was fired over AI safety concerns, but everyone who is really, really concerned about that kind of thing already left and works for Anthropic, so Ilya's "But AI Safety" doesn't come off as very convincing, especially given the way Sam was fired. I suspect that people work at OpenAI because they want to, not because they have to. I mean, what were they expecting?
There's a bunch. Here's one: https://manifold.markets/NealShrestha58d3/sam-altman-will-return-to-openai-by
To the extent that Microsoft pressures the OpenAI board about their decision to oust Altman, won't it be easy to (I think accurately) portray Microsoft's behavior as unreasonable and against the common good?
It seems like the main (I would guess accurate) narrative in the media is that the reason for the board's actions was safety concerns.
Let's say Microsoft pulls whatever investments they can, revokes access to cloud compute resources, and makes efforts to support a pro-Altman faction in OpenAI. What happens if the OpenAI board decides to stall and put out statements to the effect of "we gotta do what we gotta do, it's not just for fun that we made this decision"? I would naively guess the public, the law, the media, and governments would be on the board's side. Unsure how much that would matter though.
To me at least it doesn't seem obviously bad for AI safety if OpenAI collapses or significantly loses employees and market value (would love to hear opinions on this).
Pros:
Cons:
I feel like I'm probably missing some reason Microsoft has more leverage than I'm assuming? Maybe other people are more worried about fragmentation of the AI landscape, and less optimistic about the public's and governments' perceptions of the situation, and the expected value of the actions they might take because of it?
Maybe I'm wrong about how all of this will be painted by the media, and the public / government's perceptions
So far, a lot of the media coverage has framed the issue so that the board comes across as inexperienced and their mission-related concerns come across more like an overreaction than like reasonable criticism of profit- or PR-oriented company decisions putting safety at risk when building the most dangerous technology.
I suspect that this is mostly a function of how things went down and how these differences of vision between them and Altman came into focus, rather than a feature of the current discourse window – we've seen that media coverage about AI risk concerns (and public reaction to it) isn't always negative. So, I think you're right that there are alternative circumstances in which it would look quite problematic for Microsoft to try to interfere with or circumvent the non-profit board structure. Unfortunately, it might be a bit late to change the framing now.
But there's still the chance that the board is sitting on more info and struggled to coordinate its communications amidst all the turmoil so far, or has other reasons for not explaining its side of things in a more compelling manner.
(My comment is operating under the assumption that it's indeed true that Altman isn't the sort of cautious, good leader that one would want for the whole AI thing to go well. I personally think this might well be the case, but I want to flag that my views here aren't very resilient because I have little info, and I acknowledge that seeing the outpouring of support for him is at least moderate evidence of his being a good leader [though one should also be careful not to overupdate on this type of evidence of someone being well-liked in a professional network]. And by "good leader" I don't just mean "can make money for companies" – Elon Musk is also good at making money for companies, but I'm not sure people would come to his support the way they came to Altman's, for instance. Also, I think "being a good leader" is way more important than "having the right view on AI risk," because who knows for sure what the exact right view is – the important thing is that a good leader will incrementally make sizeable updates in the right direction as more information comes in through the research landscape.)
To the extent that Microsoft pressures the OpenAI board about their decision to oust Altman, won't it be easy to (I think accurately) portray Microsoft's behavior as unreasonable and against the common good?
What would this change? Is there any lack of people being outspoken about framing these organizations as against the common good? It seems to me that this is simply the old conflict between capital and commoner.
Very much appreciate the write-up, trevor, especially the substantial quotes. As other commenters have pointed out, the dichotomy between the two apparent 'factions' is quite clear. One side is a faction with a non-profit focus, a tentative approach to development, and a conviction that Large Language Models will climb to Artificial General Intelligence. The other side is a faction looking to capitalize on OpenAI's current advances before they are commonplace, and which has reservations about the capability of Large Language Models to become a form of Artificial General Intelligence.
As to Sam Altman's return, perhaps the only possible cause of such an outcome will be investor/stakeholder pressure. Will/can such pressure be enough? The support that Samuel Altman has thus far received from colleagues and employees seems to me an organic, knee-jerk rally. This signals to me the possible spawning of a new organization by him from a position of strength. Not to say there was no willingness to collaborate and/or to finance before his ousting, just that the circumstances (story) yield an 'under-dog' bystanders would enthusiastically back (bet on).
E/acc seems to be really fired up about this:
https://twitter.com/ctjlewis/status/1725745699046948996
It is really hard to use social media to measure public opinion. Even though Twitter/X doesn't have nearly as much security or influence capability as Facebook/Instagram, botnet accounts run by state-adjacent agencies can still game Twitter's newsfeed algorithm for the human users by emulating human behavior and upvoting specific posts.
Social media has never been an environment that is friendly to independent researchers; even if it were easy, foreign intelligence agencies would run circles around independent researchers in researching advanced strategies to manipulate public opinion (e.g. via their own social media botnets, or merely knowing what to say when their leaders give speeches).
But yes. E/acc seems to be really fired up about this.
Things will get interesting if Sam gets reinstated and ends up attacking the board. Would Sam then fire the OpenAI board for trying to do what they think is right? What are the chances of that happening? I would say that if it really does happen, it will not be a pretty situation for OpenAI.
I feel as though, MiguelDev, S. Altman's return to OpenAI would be brokered by a third party (Microsoft, et alia) with 'stability' as a hard condition to be met by both sides. Likewise, it isn't worth Mr. Altman's time — nor effort — to seek revenge. Not only would such an endeavor cost time and effort, but the vengeance would be exacted at the price of his tarnished character. The immature reaction would be noted by his peers, who would, in turn, react accordingly.
It was not me who thinks it will be brokered by Microsoft; it's this Forbes article outlined in the post:
https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-investors-scramble-to-reinstate-sam-altman-as-ceo/?sh=2dbf6a5060da
Yes, I was informed by and was referencing the same article trevor linked to in his original post. I did not, however, assume you "[think] it will be brokered by Microsoft". Regardless, I'd love to hear any critique and/or disagreement with my original reply — or even an explanation of why you believe I assumed you thought as much.
(Post-Scriptum: I was not the one who downvoted the overall karma of your post.)
The only way there will be no response resembling a vengeful act is if Sam doesn't care about his image at all. Because of this, I disagree with the idea that Sam will by default be "not hostile" when he comes back and will treat what happened as "nothing".
There is a high chance that there will be changes – even some attempt to recover lost influence, image, or glamour – judging again by his choice this year to promote OpenAI, or himself as "the CEO" of a revolutionary technology, in many different countries.
BTW, I do not advocate hostility, but there is pressure on them: with Sam vs. Ilya & the board, simply forgetting what happened is not possible.
It seems like a strategy by investors or even large tech companies to create a self-fulfilling prophecy to create a coalition of OpenAI employees, when there previously was none.
What? I think the developments made it reasonably clear that there were organically two factions in the company. But Ilya's faction strikes me as inexperienced and possibly emotionally immature. I just sincerely doubt there's a conspiracy behind it. I'm willing to bet money the company will remain a capped-profit.
"Anonymous sources" are going to journalists and insisting that OpenAI employees are planning a "counter-coup" to reinstate Altman, some even claiming plans to overthrow the board.
It seems like a strategy by investors or even large tech companies to create a self-fulfilling prophecy to create a coalition of OpenAI employees, when there previously was none.
What's happening here reeks of a cheap, easy move by someone big and powerful. It's important to note that AI investor firms and large tech companies are highly experienced and sophisticated at power dynamics, and can potentially even use the combination of AI with user data to do enough psychological research to wield substantial manipulation capabilities in unconventional environments – possibly already as far as in-person conversations, but likely still limited to manipulation via digital environments like social media.
Companies like Microsoft also have ties to the US Natsec community, and there are potentially risks coming from there as well (my model of the US Natsec community is that they are likely still confused about or disinterested in AI safety, but potentially not at all confused or disinterested any longer, and probably extremely interested in and familiar with the use of AI and the AI industry to facilitate modern information warfare). Counter-moves by random investors seem like the best explanation for now; I just figured that was worth mentioning. It's pretty well known that companies like Microsoft are forces that, ideally, you wouldn't mess with.
If this is really happening, if AI safety really is going mano-a-mano against the AI industry, then these things are important to know.
Most of these articles are paywalled, so I had to paste them into a separate post from the main Altman discussion post, and it seems like there are all sorts of people in all sorts of places who ought to be notified ASAP.
Forbes: OpenAI Investors Plot Last-Minute Push With Microsoft To Reinstate Sam Altman As CEO (2:50 pm PST, paywalled)
Wall Street Journal: OpenAI Investors Trying to Get Sam Altman Back as CEO After Sudden Firing (3:28pm PST, paywalled)
The Verge: OpenAI Board in Discussions with Sam Altman to return as CEO (2:44 pm PST, not paywalled)