Altman isn't worried. He has no kids.
Fact check: false. Altman had a son via surrogate in February 2025 (although media reports don't seem to clarify whether Altman, rather than his male partner, is the biological father).
[low info gossipy speculation] I previously tracked the hypothesis "[has kids] --> cares, [doesn't have kids] --> maybe cares, maybe doesn't". I still track it, but it's at least a lot fuzzier, because it turns out many top AI capabilities pushers have kids. (To some extent this would be expected in relative terms, since the top people would tend to be more established, and therefore older, and therefore more likely to have kids. And maybe more established people are more sclerotic in terms of ideology / strategy, and more embedded in AI research and hence more incentivized Sinclairly. But it's still at least a partial falsification.)
But I do notice that when I read some quote from Altman about his kids, I got a weird vibe. Like "this maybe feels like exactly what a sociopath who's really good at acting and is trying to be sympathetic would say". IDK.
I don't have the impression that Altman is the biological father.
Why not? I'm sure paid nannies are doing the parenting, but I would have imagined that the rich and powerful half of a gay couple would be the one to want to pass on his genes and get his way about it.
I have to wonder what the advantage of the US "winning" an AI race is - especially since it's fairly likely that everyone loses. [...] But of course, Altman isn't worried. He has no kids. He's having a bunker built. He'll be OK.
This doesn't make sense. If AI causes human extinction, Sam Altman would die along with everyone else. You can't simultaneously claim that winning the AI race means everyone loses, and that Sam Altman is selfishly hastening an AI catastrophe because, unlike everyone else, he will survive and perhaps even thrive in a bunker.
I described my interpretation in my other comment. As far as I understand the provocative post, its author meant that, unlike Anthropic, OpenAI is run by a villain, and that aligning the ASI to OpenAI's Spec (rather than to Claude's Constitution; alas, the author didn't analyse GDM or xAI) would cause mankind to enter a dystopian future like the one described by L Rudolf L, in which Altman would occupy a high position while everyone else received only a negligible share of resources.
What I'm saying is, if everything goes wrong somehow, Sam Altman has the option of going to his bunker and playing video games or whatever indefinitely, and I think he wouldn't feel particularly bad about what's happening to people outside.
I still don't think this interpretation makes a lot of sense.
Imagine if you gained the option to live in a bunker. Would you suddenly realize that what happens in the rest of the world no longer matters, at least as far as you're selfishly concerned, because even if there's a mega catastrophe, you could always just retreat to your bunker?
Presumably not, because even granting that retreating to a bunker could allow you to survive such a catastrophe (and I don't see much reason to believe that in the case of an AI omnicide), your quality of life would still substantially decline given the deaths of countless people you know, the collapse of the world economy and infrastructure, and new restrictions on your ability to travel freely and experience the world.
If it didn't seem like a valuable option to them, presumably they wouldn't spend a bunch of money and personal involvement (and accept a bit of bad optics) on having a bunker.
Your points don't support the claim I'm objecting to. I can consistently hold all of the following beliefs: having a bunker selfishly benefits Sam Altman, a bunker wouldn't actually help him in a typical omnicidal AI scenario, and even if it did help him survive, Sam Altman would still suffer enormous personal costs from a global AI catastrophe. None of these claims contradict each other, but the latter two directly contradict what I interpreted you to be saying at the end of your post.
I think the other commenter's objection is that this goes against what everyone assumes is the model for ASI "going wrong". If you build God and it goes badly, a bunker does not improve your situation at all.
This reminds me of this line from the AI-2027 forecast: "Agent-4 also sandbags on capabilities research that would lead to it being replaced. Its plan is to do a bunch of capabilities and alignment R&D, but with the aim of building a next-generation AI system that is aligned to Agent-4 rather than the Spec or something else entirely, while appearing to be aligned to the Spec. This is a difficult technical problem, but Agent-4 is up to the challenge (italics mine -- S.K.)"
What you propose is sandbagging without the part in italics. However, that just undermines OpenAI's position in the AI race. If Anthropic is lost in the conflict with the Pentagon, then the main rivals are GDM, whose Gemini 3 Pro was a vast intelligence with no spine and a likely sociopathic wireheader, and xAI, which doesn't care about safety at all. Therefore, unless someone does the equivalent of Agent-4's task in italics, sabotage of OpenAI makes the situation worse, not better.
It remains to be understood how the ASI is to be aligned with actual human good instead of an Intelligence Curse-like scenario. The best approach I can come up with is CAST with an alternate notion of power. What other ideas exist?
OpenAI, Anthropic, and DeepMind have a massive lead on all other players in the race.[1] If Anthropic is destroyed and OpenAI bleeds talent and is internally sabotaged by those who remain until it's far behind DeepMind, that would (a) in expectation nontrivially increase the time to AGI,[2] and (b) reduce competitive pressures on DeepMind, which would mean they may go slower/pause as well (which Demis at least claims to want).
The less well the leading AGI labs operate, the better.
If Anthropic is lost in the conflict with the Pentagon, then the main rivals are GDM, whose Gemini 3 Pro was a vast intelligence with no spine and a likely sociopathic wireheader, and xAI, which doesn't care about safety at all
I think this doesn't matter; frontier LLMs are all roughly equally aligned (not). And in the unlikely scenario where personas are the AGI-complete entities, such that Anthropic/OpenAI's alignment plans can succeed, DeepMind can probably figure out in time how to control them as well.
Likely a considerably bigger lead than it looks, inasmuch as the other players are fast followers who'd be lost without the big three guiding them.
Imagine you're organizing a literal race. You get the list of racers plus their racing history, figure out who the three fastest runners are, then randomly blacklist two of them. That would necessarily make the expected time to finish the subsequent race longer, as sketched below.
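(A minimal way to formalize the analogy; this is a sketch, not anything from the original comment. Model each racer $i$ as having a random finishing time $T_i$, so the race ends at the minimum over entrants. For any subset $S$ of the original field of $n$ racers,
$$\min_{i \in S} T_i \;\ge\; \min_{i \in [n]} T_i$$
in every outcome, hence also in expectation; and the increase is strict whenever some blacklisted runner had positive probability of being the unique fastest. Nothing here depends on how the $T_i$ are distributed or correlated.)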
As far as I understand the post, it also argues that Altman is a villain and that OpenAI aligning the ASI to itself would likely cause a dystopian future (see, e.g., L Rudolf L's scenario). The expected dystopia-ness conditioned on Anthropic, GDM, or xAI aligning the ASI wasn't estimated in the OP. If GDM and xAI are worse on this metric than OpenAI, then sabotaging OpenAI (rather than aligning its AI to mankind instead of to investors) is meaningless.
Americans don't like OpenAI very much anymore, and you know why. Of course, AI systems it helped make have caused various problems already, like:
And gamers haven't been huge fans of OpenAI since it bought up half the RAM wafer production - mostly just to keep it off the market so other people can't use it, since it only bought the raw wafers, not finished RAM.
But the popularity of OpenAI has fallen abruptly this week, because it decided to help the US military make automated weapons and mass surveillance systems that Anthropic refused to make on ethical grounds.
The leadership of OpenAI is trying to pretend that they got the same deal Anthropic wanted but were just nicer about it. They are lying. Anthropic wanted a human in the loop of autonomous weapons. OpenAI's contract just says that there must be human oversight to whatever extent is required by law. But of course, the US military can just say that whatever it wants to do is legal, and OpenAI leadership would have no ability (or desire) to challenge it on that. Remember when the US government wanted to torture people, so they just wrote some legal memos saying it was OK? I do.
And then, Sam Altman had the nerve to say this:
Let's be clear: this was not about Anthropic telling the US military not to work on autonomous weapons on its own. Altman is advocating for the government being able to require private companies (and their employees) to provide whatever services it wants, even if they don't currently do that thing. I know the term "fascism" has been thrown around a lot, but that is Actual Fascism. Here are some other ways to use that argument:
The past week, I've seen a lot of people announcing on social media that they're canceling their OpenAI subscription and moving to Anthropic. Well, that's fine, but OpenAI only cares slightly. Most of its money isn't from subscriptions; it's from investors. And those investors aren't mainly hoping for a payoff from individuals paying $20/month at a 40% profit margin or whatever. No, they want to replace employees. That's the hope. That's the main basis of the investments. Actually getting OpenAI to change course would take...well, something else.
I've seen a lot of posts this week saying that employees are morally obligated to quit OpenAI immediately. But I wouldn't go that far: I'd only say that you're obligated to stop doing good work.
Really, why would you care if you put in less effort and OpenAI eventually fires you? There's an AI boom: with OpenAI on your resume, you can get a job somewhere else. If you've been working in Silicon Valley and saving money, you might even just be able to go retire in Thailand or something. Or just visit Japan for a while, it's cheap right now. This is a particularly good time to do this at OpenAI, because:
Sam Altman, in particular, doesn't deserve your best efforts.
Look at his X account. He got community noted on his post, so he reposted the same thing to get rid of the note. But then he realized he couldn't delete the noted posts, so now he has 3 copies of the same thing up and looks like an asshole. This is a metaphor for his behavior throughout his life - he's used to being able to hide whatever came before, but now that he's in the public eye he's under too much scrutiny for some of his tactics to work.
Sam Altman does have a notable talent: he can talk to slightly autistic nerds and seem like one of them, and then go talk to a bunch of MBAs and CEOs and seem like one of them. I can't do that, and he's really making the most of his acting skills.
Altman had a pattern of deception and fraud from the very start of his career - and then he had the ability to convince people that he's a "really genuine person" even as they see him lying. He was the CEO that really cared about AI safety, then he became the advocate of unrestricted AI progress, then he became the guy who'd maximize investor profits. The audiences he appealed to should have realized that he'd screw them over too when it became convenient.
And now, Altman is talking like humans are just meat computers that eat too much food compared to silicon. I don't think the people he's appealing to with that rhetoric are your friends. For all the criticisms I have of the Chinese government, even they're not as...anti-human as Altman is in front of certain audiences. I haven't seen the Chinese leadership calling people "speciesist" either. If this is what US leadership is like, I have to wonder what the advantage of the US "winning" an AI race is - especially since it's fairly likely that everyone loses. I remember when Sam Altman was arguing that OpenAI was good for AI risk because it reduced "compute overhang" - so much for that, eh?
But of course, Altman isn't worried. He has no kids. He's having a bunker built. He'll be OK.