At least one of them has explicitly indicated they left because of AI safety concerns, and this thread seems to hint at similar concerns - Ilya Sutskever's conspicuous silence has become a meme, and Altman recently said he is uncertain of Ilya's employment status. There still hasn't been any explanation for the boardroom drama last year.

If it were indeed run-of-the-mill office politics and all was well, then a statement to the effect of "our departures were unrelated, don't be so anxious about the world ending, we didn't see anything alarming at OpenAI" would obviously reassure a lot of people and also be a huge vote of confidence for OpenAI.

It seems more likely that there is some (vague?) concern, but that it has been overridden by tremendous legal/financial/peer incentives.

I've been thinking about these allegations often in the context of Altman's firing circus a few months ago. I've known multiple people who suffered early childhood abuse/sexual trauma, and even dated one for a few tumultuous years a decade ago. I had a perfectly normal, happy childhood myself, and the disconnect between who they were most of the time versus who they were under high stress was tremendously unintuitive (and initially intriguing) to me. That disconnect also seemed to facilitate a certain meticulous duplicity and compartmentalization: presenting the required image and confidently saying whatever needed to be said, which often yielded great success.

Elon Musk, as another example, has been quite public about his difficult childhood and how it might have helped him professionally, and there is ample corroboration for this. There are also definite allusions to some psycho-sexual aspects.

I cannot help but see patterns of Extreme Disconnection in Sam, and consequently in OpenAI. There seems to be a clear division between people who are on his side and people who aren't. He was quite literally fired for not being candid with OpenAI's board, and his initial reaction was completely contradictory to the tone and messaging of "benefit for all mankind". The (mostly) seamless transition from a relentlessly vocalized emphasis on the "open", benevolent non-profit with an all-powerful board to whatever OpenAI is now; the selective silence of the board, and especially of Ilya Sutskever, presumably in the face of legal and financial muscle-flexing; Geoffrey Irving's tweet - all of these seem to speak to a world in which many well-meaning, intelligent people, who have never been in actual conflict with him and have massively aligned incentives, would readily believe him to be a certain kind of "good" person X who could never also be a kind of "bad" person Y, not accounting for the unconscious-level disconnection that undergirds this.

I guess I'm wondering if I'm being unreasonably concerned about this in regard to the "future of humanity", or just projecting my own biases and experiences.