When people argue that many AIs competing will keep us safe, Yud often counters that the AIs will coordinate with each other but not with us. This is probably true, but not super persuasive. I think a more intuitive explanation is that offense and defense are asymmetric. An AI defending my home cannot simply wait for attacks to happen and then defend against them (e.g. another AI cuts off the power, or fries my AI's CPU with a laser). To truly defend my home, an AI would have to monitor and, importantly, control a hugely outsized part of the world (possibly the entire world).

In my non-tech circles people mostly complain about AI stealing jobs from artists, companies making money off of other people's work, etc.

People are also just scared of losing their own jobs.

Also, his statements in The Verge are so bizarre to me:

"SA: I learned that the company can truly function without me, and that’s a very nice thing. I’m very happy to be back, don’t get me wrong on that. But I come back without any of the stress of, “Oh man, I got to do this, or the company needs me or whatever.” I selfishly feel good because either I picked great leaders or I mentored them well. It’s very nice to feel like the company will be totally fine without me, and the team is ready and has leveled up."

Two business days away, with the company ready to blow up if you don't come back, and your takeaway is that it can function without you? I get that this is PR spin, but usually there's at least some degree of believability to it.

Maybe these are all attempts to signal to investors that everything is fine, and that even if Sam were to leave it would still all be fine. But at some point, if I'm an investor, I have to wonder whether, given how hard Sam is trying to make it look like everything is fine, things are very much not fine.

Let that last paragraph sink in. The leadership team ex-Greg is clearly ready to run the company without Altman.

I'm struggling to interpret this, so your guesses as to what this might mean would be helpful. It seems he clearly wanted to come back - is he threatening to leave again if he doesn't get his way?

Also note that Ilya is not included in the leadership team.


"While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI."

This statement also really stood out to me - if there really was no ill will, why would they have to discuss how Ilya can continue his work? Clearly there's something more going on here. Sounds like Ilya's getting the knife.

According to Bloomberg, "Even CEO Shear has been left in the dark, according to people familiar with the matter. He has told people close to OpenAI that he doesn’t plan to stick around if the board can’t clearly communicate to him in writing its reasoning for Altman’s sudden firing."

Evidence that Shear simply wasn't told the exact reason, though the "in writing" part is suspicious. Maybe he was told verbally and wants them to write it down so they're on the record.

Sam's latest tweet suggests he can't get out of the "FOR THE SHAREHOLDERS" mindset.

"satya and my top priority remains to ensure openai continues to thrive

we are committed to fully providing continuity of operations to our partners and customers"

This does sound antithetical to the charter and might be grounds to replace Sam as CEO.

I do find it quite surprising that so many who work at OpenAI are so eager to follow Altman to Microsoft. I guess I assumed the folks at OpenAI valued not working for big tech (which is, perhaps, more likely to disregard safety) more than it appears they actually did.

https://twitter.com/i/web/status/1726526112019382275

"Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."

It seems to me that the idea of scalable oversight itself was far easier to generate than to evaluate. If the idea had been generated by an alignment AI rather than by various people independently suggesting similar strategies, would we be confident in our ability to evaluate it? Is there some reason to believe alignment AIs will generate ideas that are easier to evaluate than scalable oversight? What kind of output would we need to see to make an idea like scalable oversight easy to evaluate?
