In late February, the Trump administration and the Department of Defense took the unprecedented step of declaring Anthropic a supply-chain risk, and cutting the company off from government contracts. I predicted in March that despite the administration’s efforts, Anthropic would not only prevail, but would come out ahead:
Bluntly, Trump needs Anthropic (and by extension, the hyperscalers, VC firms, and sovereign wealth funds [who support Anthropic]) far more than they need him. Even if the courts do side with the administration, Trump still has to contend with all the essential companies who have funded his largesse, and the fact that so many of these donors have a vested interest in Anthropic succeeding. That’s extensity.
No matter how powerful Trump thinks he is, no dictator can afford to piss off the coalition that actually keeps him in power. So, in the end, I don’t think Trump’s theatrics will hurt Anthropic much, because there are far too many big, important essential players who need Anthropic, and who have the leverage and ability to get Trump and Hegseth to blink first.
I mention all this, because on Friday, Dario Amodei met with senior White House officials, in an effort to reach a detente. I thought that was interesting, but what I was missing was the specific leverage point: Claude Mythos. Matija Vidmar explained things nicely.
Suddenly, a few things clicked.

On March 22, I used FeedForward (my internal research & forecasting engine) to run a forecast on whether Anthropic will still be designated a supply chain risk by the US Government on May 31, 2026.[2] Here’s the current assessment, with increasing/decreasing likelihood measures.
Confidence: 5% (updated 2026-04-17)
Increasing likelihood:
Decreasing likelihood:
In short, what we’re witnessing is extensity in action. It’s not just a question of what AI model some agency uses for its internal chatbot. It’s about what happens when a single company becomes so integral that it can push back against the traditional levers and forces (regulatory, political, economic, or otherwise) that govern behavior.
Though some, like Devansh, have called out the media for blindly parroting Anthropic’s press release rather than examining the primary sources (CVEs, exploit code, etc.). He dug into those details and suggested that many of the claimed capabilities might just be hype.
I ran a separate forecast on whether they would still be kept out of government contracts, notwithstanding the supply chain risk designation. My odds of that being the case are much lower than the community’s (43% vs. 75%), and even those odds are decreasing.
I believe Amodei pushed back on the administration’s demands precisely because he knew Mythos presented a risk.
From what's publicly known, it seemed they had a deal that prevented the administration from using Claude as a cyberweapon. The administration asked to use it for all purposes.
Amodei's red lines weren't about using Mythos as a cyberweapon, whether to hack targets or to spread disinformation. He did not push back on either of those. He only pushed back on the model being used to make unsupervised kill decisions and for domestic surveillance.
To the extent that Mythos has dangerous capabilities, they're probably on the cyberwar front, and Amodei failed to push back on that front.
FWIW, I covered more of Anthropic's boundaries (which, if I'm reading you correctly, weren't actually all that substantial) in this post. I didn't rehash that here.
But it's possible he has other boundaries, e.g., hostile nation-state acts, cyberweapons, but those were not explicitly demanded by the DOD/administration. I don't know. I wasn't in the room.
FWIW, I covered more of Anthropic's boundaries
Given that you didn't discuss the boundaries Anthropic has in its actual contract, I think you did a poor job of that. Their "Exceptions to our Usage Policy," along with public statements by Dario, suggest that the current contract has explicit limits prohibiting use for disinformation campaigns, the design or use of weapons, censorship, domestic surveillance, and malicious cyber operations.
Anthropic was one of several AI companies to offer its AI tools at the bargain rate of $1 annually per agency, via the GSA’s 2025 OneGov initiative.
This looks like a misunderstanding of the dynamic. The DOD has a $200 million contract with Anthropic.
Also, there’s a reason the DOD and members of the intelligence community use Claude and not Grok for analysis and warfighting capabilities, and it’s not just the GSA deal. Claude is a superior product for many use cases, which is why Claude was the only LLM designated to handle classified materials until the 28th of February.
Given that the DOD does have a contract with xAI/Grok, that's again misleading. It also suggests to a naive reader that Claude's superiority as a model is the reason, when the fact that Claude runs within the classified AWS environment and Grok doesn't is probably more significant.
But it's possible he has other boundaries, e.g., hostile nation-state acts, cyberweapons, but those were not explicitly demanded by the DOD/administration.
The administration demanded no specific use cases explicitly. They demanded that all lawful use be allowed in the contract. I don't think you need to have been in the room to know that the position of the administration was "all lawful use." Hegseth seemed quite clear to me that he considers all the explicit limits in the current contract bad. This means getting rid of the current explicit limits on disinformation campaigns and malicious cyber operations.
With respect, I'm not sure you fully read the post. I called out the existing contract specifically, though I did not mention the dollar amount.
On February 26, after weeks of negotiation, talks broke down between the Pentagon and Anthropic after Anthropic’s CEO, Dario Amodei, refused to accede to DOD demands for unfettered “lawful”1 uses of its AI tools by the military. Anthropic was generally fine with Claude being used by the military (as it had been since July 2025), and by other strategic military contractors, such as Palantir, Amazon, Oracle, and Lockheed Martin, for everything from supply chain logistics and cyber operations, to foreign surveillance & intelligence gathering.
I also mentioned the fact that Anthropic was unique in that it ran in the classified environment. I quoted Anthropic's language explicitly.
Given that, I don't think we're actually in disagreement here, unless you're objecting to my tone or something not related to the substance of my piece, in which case, there's nothing more I have to say.