You're right that the full story still has never been publicly reported.

That is, unless the current favored cosmology is completely wrong, which is always in the cards.

FWIW, that's why I disagree with one of your minor conclusions: that there is an inherent myopia to superintelligences which renders everything past a certain distance worth "exactly zero". There is a real possibility that one of the many assumptions is wrong, which creates both risk and reward for not being myopic. So the myopia there would not lead to an exactly-zero valuation - it might lead to a valuation that is quite substantially larger than zero.

And since the cost of spitting out colonization starwisps seems to be so low in an absolute sense, per Anders, it wouldn't take much above zero value to motivate tons of colonization anyway.

Indeed, the fundamental epistemological & ontological uncertainties might lead you to the opposite problem of the total valuation being too large: any possibility of being able to break lightspeed or change expansion or any of the other loopholes means both that you are now massively threatened by any other entity which cracks the loopholes, and that you can do the same to the universe - which might then be vastly larger - and now you are in infinite-fanaticism territory, dealing with issues like Pascal's mugging, where the mere possibility that any of the colonized resources might solve the problem leads to investing all resources in colonization in the hopes of one of them getting lucky. (This is analogous to other possible infinite-fanaticism traps: 'what if you can break out of the Matrix into a literally infinite universe? Surely the expected value of even the tiniest possibility of that justifies spending all resources on it?')
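The infinite-fanaticism dynamic is easy to see in a toy expected-value calculation (every number below is an illustrative assumption, not an estimate of real probabilities): even a vanishingly small chance at an unboundedly large payoff swamps any ordinary option.

```python
# Toy Pascal's-mugging arithmetic: expected value of ordinary colonization
# vs. a near-impossible chance at a vastly larger prize.
# All numbers are made-up assumptions for illustration only.

mundane_value  = 1e12   # assumed value of ordinary reachable resources
p_loophole     = 1e-30  # assumed tiny chance a lightspeed/expansion loophole exists
loophole_value = 1e50   # assumed value of the then-reachable, vastly larger universe

ev_mundane  = mundane_value                 # certain payoff
ev_loophole = p_loophole * loophole_value   # ~1e20

# The long shot dominates by ~8 orders of magnitude, so a naive
# expected-value maximizer pours everything into chasing loopholes.
assert ev_loophole > ev_mundane
print(ev_loophole / ev_mundane)  # ratio ≈ 1e8
```

The trap is that no choice of 'reasonable' small probability rescues you: as long as the conjectured payoff grows faster than the probability shrinks, the loophole term dominates.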

(There is also a modest effect from evolution/selection: if there is any variance between superintelligences about the value of blind one-way colonization, then there will be some degree of universe-wide selection for the superintelligences which happen to choose to colonize more blindly. Those colonies will presumably replicate that choice, and then go on to one-way colonize in their own local bubble, and so on, even as the bubbles become disconnected. Not immediately obvious to me how big this effect would be or what it converges to. Might be an interesting use of the Price equation.)
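The selection effect above can be checked numerically with the Price equation: with faithful transmission (colonies replicate their parent's choice exactly), the change in the mean colonization propensity per generation is Cov(w, z) / mean(w). A minimal sketch, where the linear fitness function and all parameters are illustrative assumptions:

```python
# Minimal Price-equation check for selection on colonization propensity.
# z_i = propensity of lineage i for blind one-way colonization;
# w_i = its number of daughter colonies, assumed (illustratively) linear in z_i.
import random

random.seed(0)
n = 10_000
z = [random.uniform(0.0, 1.0) for _ in range(n)]  # parental propensities
w = [10 * zi for zi in z]                         # fitness ∝ propensity (assumption)

w_bar = sum(w) / n
z_bar = sum(z) / n
cov_wz = sum((wi - w_bar) * (zi - z_bar) for wi, zi in zip(w, z)) / n

# Offspring generation: each parent contributes daughters in proportion to w_i,
# and daughters inherit z_i exactly (no mutation, no transmission bias).
z_bar_next = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

# Price equation (faithful transmission): Δz̄ = Cov(w, z) / w̄
assert abs((z_bar_next - z_bar) - cov_wz / w_bar) < 1e-12
print(z_bar, z_bar_next)  # mean propensity drifts upward under selection
```

Note what this does and doesn't show: within any connected bubble, mean propensity ratchets up every generation, but because the bubbles disconnect, there is no global equilibrium to converge to - each bubble just runs its own ratchet.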


There has been some spirited debate on Twitter about it which might be relevant:

It's not obvious that 'uncommon' tokens are good, or that selecting for them is a good approach.

They could also just be unlikely or garbage, and your screening method for filtering for 'uncommon' tokens may ensure that they are garbage. (This is the 'mammogram screening problem': even if you have a good filter, if you run it across trillions of tokens, you will wind up throwing out many good tokens and keeping many bad tokens. There are a number of LLM-related papers about the horrifically bad data you can wind up compiling if you neglect data cleaning, particularly in multilingual translation when you're trying to scrape rare languages off the general Internet.)
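The base-rate arithmetic here is brutal. A sketch with assumed (illustrative) rates, showing how even a 99%-accurate filter yields mostly garbage when genuinely good uncommon tokens are rare:

```python
# The 'mammogram screening problem' applied to data filtering.
# All rates below are illustrative assumptions, not measured values.
total_tokens = 1_000_000_000_000  # 1T tokens scraped
good_rate    = 0.001              # assume 1 in 1,000 is 'uncommon but good'
sensitivity  = 0.99               # P(flagged as uncommon | actually good)
specificity  = 0.99               # P(not flagged | actually garbage)

good = total_tokens * good_rate
bad  = total_tokens - good

true_pos  = good * sensitivity        # good tokens correctly kept
false_pos = bad * (1 - specificity)   # garbage tokens kept anyway

precision = true_pos / (true_pos + false_pos)
print(f"{precision:.1%} of kept 'uncommon' tokens are actually good")
# With these rates, ~91% of what the filter keeps is garbage, and it
# still threw away 1% of the good tokens (~10 billion of them).
```

The filter's accuracy isn't the problem; the prior is. Unless good uncommon tokens are common to begin with, 'uncommon' screening concentrates garbage.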

Nor are good datapoints necessarily made up of uncommon tokens: there are zero uncommon tokens in my 'microwave' example.

(Data pruning & active learning are hard.)

LLMs have turned out more human-like, more oracle-like than we imagined?

They have turned out far more human-like than Amodei suggested, which means they are not even remotely oracle-like. There is nothing in a LLM which is remotely like 'looking things up in a database and doing transparent symbolic-logical manipulations'. That's about the last thing that describes humans, too - it takes decades of training to get us to LARP as an 'oracle', and we still do it badly. Even the things LLMs do which seem transparent, like inner-monologue, are actually just more Bayesian meta-RL agentic behavior: the inner-monologue is a mish-mash of amortized computation and task location, where the model flexibly uses the roleplay as hints, rather than what everyone seems to think it does, which is turn into a little Turing machine mindlessly executing instructions (hence eg. the ability to distill inner-monologue into the forward pass, or to insert errors into few-shot examples or the monologue and still get correct answers).

I can't find anything about tied votes in the bylaws - do they fail?

I can't either, so my assumption is that the board was frozen ever since Hoffman/Hurd left for that reason.

And there wouldn't've been a vote at all. I've explained it before, but - while we wait for phase 3 of the OA war to go hot - let me take another crack at it, since people keep getting hung up on this: they seem to imagine that it is a perfectly normal state for a board to be locked in a deathmatch between two opposing factions indefinitely, and so are confused about why any of this happened.

In phase 1, a vote would be pointless, and neither side could nor wanted to force it to a vote. After all, such a vote (regardless of the result) is equivalent to admitting that you have gone from simply "some strategic disagreements among colleagues all sharing the same ultimate goals and negotiating in good faith about important complex matters on which reasonable people of goodwill often differ" to "cutthroat corporate warfare where it's-them-or-us everything-is-a-lie-or-fog-of-war fight-to-the-death there-can-only-be-one". You only do such a vote in the latter situation; in the former, you just keep negotiating until you reach a consensus or find a compromise that'll leave everyone mad.

That's not a switch to make lightly or lazily. You do not flip the switch from 'ally' to 'enemy' casually, and then do nothing and wait for them to find out and make the first move.

Imagine Altman showing up to the board and going "hi guys I'd like to vote right now to fire Toner - oh darn a tie, never mind" - "dude what the fuck?!"

As I read it, the board still hoped Altman was basically aligned (and it was all headstrongness or scurrilous rumors) right up until the end, when Sutskever defected with the internal Slack receipts revealing that the war had already started and Altman's switch had apparently flipped a while ago.

So I still don't understand "why so abruptly?" or why they felt like they had to take such a drastic move when they held all the cards (and were pretty stable even if Ilya flipped).

The ability to manufacture a scandal at any time is a good way to motivate non-procrastination, pace Dr Johnson on the wonderfully concentrating effects of being scheduled to hang. As I pointed out, it gives Altman a great pretext to push for Toner's resignation at any time while - if their switch has not been flipped, as he still believed it had not - still looking to the board like the good guy who is definitely not doing a coup and is just, sadly and regretfully, breaking the tie because of the emergency scandal that the careless, disloyal Toner has caused them all, just as he had been warning the board all along. (Won't she resign and help minimize the damage, and free herself to do her academic research without further concern? If not, surely D'Angelo or McCauley appreciate how much damage she's done and can now see that, if she's so selfish & stubborn & can't sacrifice herself for the good of OA, she really needs to be replaced right now...?) End result: Toner resigns or is fired. It took far less than that to push out Hoffman or Zillis, after all. And Altman means so well and cares so much about OA's public image, and is so vital to the company, and has a really good point about how badly Toner screwed up, so at least one of you three has to give it to him. And that's all he needs.

(How well do you think Toner, McCauley, and D'Angelo all knew each other? Enough to trust that neither of the other two would ever flip, or be susceptible to leverage, or be scared off, or be talked around?)

Of course, their switch having been flipped at this point, the trio could just vote 'no', deadlocking it 3-3, and tell Altman to go pound sand, adamantly refusing to ever vote to remove Toner... but such an 'unreasonable' response reveals that their switch has been flipped. (And having Sutskever vote alongside them, 4-2, revealing his new loyalty, would be even more disastrous.)

Why wouldn't they tell anyone, including Emmett Shear, the full story?

How do you know they didn't? Note that what they wouldn't provide Shear was a "written" explanation. (If Shear was so unconvinced, why was an independent investigation the only thing he negotiated for aside from the new board? His tweets since then also don't sound like someone who looked behind the curtain, found nothing, and is profoundly disgusted with & hates the old board for their profoundly incompetent malicious destruction.)

'If this is how they treat the CEO, how will they treat me?'

You just explained why it's totally disanalogous. An ordinary employee is not a CEO {{citation needed}}.

Yes, that would be immediately reward-hacked. It's extremely easy to never lose chess: you simply never play. After all, how do you force anyone to play chess...? "I'll give you a billion dollars if you play chess." "No, because I value not losing more than a billion dollars." "I'm putting a gun to your head and will kill you if you don't play!" "Oh, please do, thank you - after all, it's impossible to lose a game of chess if I'm dead!" This is why RL agents have a nasty tendency to learn to 'commit suicide' if you reward-shape badly or the environment is too hard. (Tom7's lexicographic agent famously learns to simply pause Tetris to avoid losing.)
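The 'never lose' objective can be reward-hacked in one line. A toy sketch (all probabilities are illustrative assumptions): give -1 reward for a loss and 0 otherwise, with no positive reward for winning or playing, and any expected-reward comparison picks the policy that refuses to play at all.

```python
# Toy reward-hacked 'never lose chess' objective: reward is -1 for a loss
# and 0 otherwise - no reward for winning, or for playing in the first place.
# Probabilities below are illustrative assumptions.

def expected_reward(p_play: float, p_lose_if_play: float) -> float:
    """Expected reward when the agent plays with probability p_play
    and otherwise declines (declining always scores 0)."""
    return p_play * (p_lose_if_play * -1.0)

policies = {
    "always play (decent player)": expected_reward(1.0, 0.3),
    "play half the time":          expected_reward(0.5, 0.3),
    "never play":                  expected_reward(0.0, 0.3),
}

# Any optimizer over this objective converges on refusing to play -
# the equivalent of Tom7's agent pausing Tetris forever.
best = max(policies, key=policies.get)
print(best)  # → never play
```

Note that the fix is not to threaten bigger penalties for declining; as long as 'not playing' (or dying) scores at least as well as the expected loss from playing, the degenerate policy stays optimal.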

unless they were that pressed for time.

They were because they had an extremely fragile coalition and only a brief window of opportunity.

They certainly did not have the power to tell Altman they were going to fire him in several weeks and expect that to stick. None of them, Sutskever included, have ever struck me as that suicidally naive. And it looks like they had good reason to expect that they had little time given the Slack comments Sutskever saw.

Also, remember that Altman has many, many options available to him. Since people seem to think that the board could've just dicked around and had the luxury of waiting a long time, I will highlight one specific tactic that the board should have been very worried about - a possibility which did not permit any warning or hint to Altman, and which required moving as fast as possible once reality sank in & they decided not to cede control over OA to Altman: (WSJ)

Some OpenAI executives told her [Helen Toner] that everything relating to their company makes its way into the press.

That is, Altman (or those execs) had the ability to deniably manufacture a Toner scandal at any second by calling up a friendly reporter at, say, The Information, to highlight the (public) paper, which about an hour later (depending on local Pacific Time) would then 'prove' him right about it and provide grounds for an emergency board meeting that day to vote on expelling Toner if she was too stubborn to 'resign'. After which, of course, they would need to immediately vote on new board members to fill out a far-too-small board with Toner gone, whether or not that had been on the official agenda; and this new board would, of course, have to approve any prior major decisions like 'firing the CEO'. Now, Altman hadn't done this yet because he didn't want the cost of a public scandal, however much of a tempest-in-a-teapot nothingburger it would be; he was very busy with other things which seemed higher priority and had been neglecting the board; and he didn't think he needed to pay that cost to get Toner off the board. But if he suddenly needed Toner off the board fast, as his #1 priority...

The board did not have 'a few weeks'. (After all, once that complex and overwhelmingly important sale was wrapped up... Altman would be less busy and turning his attention to wrapping up other unfinished business he'd neglected.) They did not have days. For all they knew, they could even have had negative hours if Altman had gotten impatient & leaked an hour ago & the scandal had started while they were still discussing what to do. Regardless of whether Toner realized the implied threat at the time (she may have but been unable to do anything about it), once they had Sutskever, they needed to move as fast as possible.

Even if they had decided to take the risk of delay, the only point would have been to do something that would not alert Altman at all, which would be... what, exactly? What sort of meaningful preparation demanded by the board's critics could have been done under those constraints? (Giving Satya Nadella a heads-up? Altman would know within 10 minutes. Trying to recruit Brockman to stay on? 1 minute.)

So, they decided quickly to remove Altman and gave him roughly the minimum notice required by the bylaws of 48h*, without being able to do much besides talk to their lawyers and write the press release - and here we are.

* you may be tempted to reply 'then Altman couldn't've kicked Toner out that fast because he'd need that 48h notice too'; you are very clever, but note that the next section says they can all waive that required notice at the tap of a button, and if he called an 'emergency meeting' & they still believed in him, then they of course would do so - refusing to do so & insisting on 48h amounts to telling him that the jig is up. Whereas them sending him notice for an 'ordinary' meeting in 48h is completely normal and not suspicious, and he had no clue.

Which means that ~all OpenAI employees oppose the OpenAI Charter.

It was striking seeing how many commenters and OA employees were quoting Toner quoting the OA Charter (which Sam Altman helped write & signed off on) as proof that she was an unhinged, mindless zealot, and that every negative accusation against the board was true.

It would be like a supermajority of Americans having never heard of the First Amendment and, on hearing a presidential candidate say "the government should not abridge freedom of speech or the press", all starting to rail about how 'this is some libertarian moonbat trying to entryist the US government to impose their unprecedentedly extreme ideology about personal freedom, and obviously, totally unacceptable and unelectable. Not abridge speech?! When people abuse their freedom to say so many terrible things, sometimes even criticizing the government? You gotta be kidding - freedom of speech doesn't mean freedom from consequences, like being punished by laws!'

Hard not to see the OA LLC as too fundamentally unaligned with the mission at that point. It seems like at some point, possibly years ago, OA LLC became basically a place that didn't believe in the mission or that AGI risk is a thing, and regarded all that stuff as so much PR kayfabe and not, like, serious (except for a few nuts over in the Superalignment group, who thankfully can be ignored - after all, it's not like the redteaming ever turns up any real problems, right? you'd've heard). At that point, the OA double-structure has failed. Double-structures like Hershey or Mozilla never pit the nonprofit against the for-profit to this extent, and double-structures like Ikea, where it's a tax gimmick, cannot. And it turns out that, pitted that much, the for-profit holds most of the cards.

I don't know how much to fault the board for this. They may well have known how much the employee base had diverged from the mission, but what were they going to do? Fire Altman back in 2020, before he could bring in all the people from Dropbox etc who then hired more like them & backed him, never mind the damage to the LLC? (I'm not sure they ever had the votes to do that for any reason, much less a slippery slope reason.) Leak to the press - the press that Altman has spent 15 years leaking to and building up favors with - to try to embarrass him out? ('Lol. lmao. lel.') Politely notify him that it was open war and he had 3 months to defeat them before being fired? Yeah...

Thus far, I don't think there's much of a post-mortem to this other than 'like Arm China, at some point an entity is so misaligned that you can't stop it from collectively walking out the door and simply ignoring you, no matter how many de jure rights or powers you supposedly have or how blatant the entity's misalignment has become. And the only way to fix that is to not get into that situation to begin with'. But if you didn't do that, then OA at this point would probably have accomplished a lot less in terms of both safety & capability, so the choice looked obvious ex ante.
