Last night, Anthropic was given its preliminary injunction, with a stay of seven days.
Emil Michael is a very angry person right now. So is the Honorable Judge Lin.
We were worried we would draw a judge that had no idea how any of this worked and would give the government absurd deference or buy into nonsense arguments.
That is not how it played out. Judge Lin very much understood the issues in play, which did not require a technical background. She hammered the government in the hearing, and she wrote one of the most forceful, devastating judicial opinions I have ever seen. It was an honor, and it sparked joy, to be able to read it.
If you want the short version and don’t care about the incremental steps, you can skip directly to Judge Lin Drops The Hammer, leaving the rest as a historical document and source for those who need to establish various facts going forward, including in court.
Logistical note: Due to breaking news, AI #161 Part 2 will be published on Monday. Then, if no major news breaks that preempts this, Tuesday and Wednesday will be about Anthropic’s RSPv3.
Anthropic Responds To The DoW’s Brief
First Anthropic presented a brief, and many parties filed amicus briefs. Then the government responded with its own brief, which was covered last time. Next up was Anthropic responding to the Department of War (DoW).
There was also the following important email included as an exhibit.
Roger Parloff: Some Anthropic updates: On 3/4, just hours before Hegseth declared Anthropic a “supply chain risk”—allegedly due to threats of “sabotage” & “data exfiltration”—his under sec’y wrote Anthropic that they were “very close” to a deal, asking to change a prepositional phrase. …
This was also 17 (!) minutes before the Dario memo was leaked via The Information.
Emil Michael (email to Dario Amodei): After reviewing with our attorneys and seeing your last draft (thanks for being fast), I think we are very close here. I was able to take “unlawful” out of the bulk collection section which essentially reverts that clause to your language from last night. Therefore, the only change we would require would be “in accordance with” as opposed to “only as permitted under.” See option 1 in the attached. We believe this goes above and beyond and we have made significant concessions. I hope this work as I am running out of time.
It is extremely hard to square that email with any interpretation of events other than that the parties were negotiating, they were close to a deal, the only remaining issue was technical wording around domestic surveillance, and then the DoW retaliated against Dario Amodei’s use of free speech by declaring Anthropic a supply chain risk.
You see, I am a disciplined Serious News Reporter now, so I say ‘extremely hard to square’ and absolutely am not saying ‘liar liar pants on fire.’
However, I do make this request. If you are reading this, and you are talking to Emil Michael, and you think he has been consistently candid with you, please consider the possibility that you are mistaken about that. Also consider, as evidence of this, the section analyzing his testimony.
This all once again made Emil Michael very upset, as basically every statement by Anthropic does, and many other things also do, and he fired back with this:
Under Secretary of War Emil Michael: Pulling out all the stops to mislead. @SarahKHeck begged to see how @AnthropicAI could fix their behavior. Said she was trying her best to get @DarioAmodei to be rational. Said @ChhabraT was sabotaging talks. Even worse, she was only in 20 minutes of 4 months of negotiations. 15 witnesses were there when Dario said he would consider an exception for an incoming missile attack that would kill millions of Americans. She wasn’t in the room. Can’t wait for the UNDER OATH phase of this. How could she claim that no one at Anthropic said that? Hmmm
I reminded him that these declarations were sworn testimony, thus under oath. He replied that the witnesses have not yet been cross examined.
That’s fair enough, but the typical reading of ‘wait until they are under oath’ is that once under oath, as Emil Michael so far is not, witnesses will have to tell the truth, upon penalty of perjury, here against a highly vindictive government. As in, previous statements have been cheap talk, whereas this is for real. Which it clearly is.
Mostly what Emil Michael said had nothing to do with the statement by Sarah Heck.
The government’s legal defense did not include mention of the supposed missile conversation, let alone a distinct version of events. One presumes this is because, when the time came to give sworn testimony, the description suddenly became rather different, and not so beneficial for their case. Thus Heck did not mention it either, so why does Michael here?
Whereas when you look at what Heck did say, there are some important claims that directly contradict the government’s case in key ways, and that Michael does not seem to be contradicting.
Sarah Heck claims the following (among other things), under oath:
Negotiations between Anthropic and DoW continued until March 4, 2026, even though the SCR designation had been formalized a day prior, as reflected in the email above.
The government’s purported fear that Anthropic might ‘disrupt’ the military was never raised during the actual negotiations.
Anthropic never expressed any interest in having a role approving DoW operational decisions, and indeed offered clarifying language to avoid this:
The language was: “For the avoidance of doubt, [Anthropic] understands that this license does not grant or confer any right to control or veto lawful Department of War operational decision‑making.”
At no point were concerns raised that Anthropic might take technical steps to disrupt the military (which DoW often called ‘pulling the plug on a live military operation’ although she doesn’t use that term).
Doing so would be, as I have said, technically impossible for Anthropic to do, and Anthropic would have been happy to explain this if asked (although I’d add it would be highly negligent for the government not to be aware of this fact anyway).
Anthropic has never objected to the use of its tools in government operations, or attempted to limit their use in an ad hoc manner.
If those facts are true, the government has absolutely no case.
Nor would Michael’s later-submitted testimony contradict Heck’s central claims. Judge Lin would repeatedly point out that Anthropic’s assertions went unchallenged, or in other cases that the government had agreed with those assertions. Michael will claim that Anthropic expressed this interest in #3, or that there were concerns related to #5, but his testimony will make clear that he had no basis for this other than being upset about the course of negotiations, and being upset about Anthropic’s speech. His attempt to contradict #6 only confirms Anthropic’s technical account.
Thiyagu Ramasamy claims the following (among other things), under oath:
No one at Anthropic saw the memorandum or declaration justifying the Supply Chain Risk designation until it was filed in court.
The statements reflect a fundamental misunderstanding of how Anthropic’s tools are deployed in classified systems and otherwise used by the government, and contain multiple factual misstatements Anthropic could have corrected.
The memo claims Anthropic has been an increasingly untrustworthy partner to DoW. This is false; Anthropic has worked to deepen its relationship.
One could say that ‘trustworthy’ is in the eye of the beholder, and in some circumstances ‘has any ethical principles at all’ makes you untrustworthy.
See my entire Moral Mazes sequence for more details on such matters.
Anthropic fine-tuned Claude Sonnet to match the classified environment, to avoid too much discussion of classified materials, using Anthropic employees with security clearances. Later, Anthropic on its own initiative and at its own expense, developed a version tailored to national security needs, called Claude Gov.
Once Anthropic deploys Claude Gov or Claude Sonnet to the third-party cloud provider, Anthropic has no access to, or control over, the model as deployed and used by government customers. Anthropic has no physical way to interfere with use of its models.
It is explained again, in detail, that Anthropic does not want nor does it have an ‘operational veto.’ It cannot physically interfere with government operations. There is no backdoor or kill switch.
There would, for the same reasons, be no possible way for Anthropic to sabotage operations, or to exfiltrate data, or do any other such thing.
Nor could Anthropic ‘update’ its models on its own. That is impossible. Any update would go through the same testing process.
I’m going to pause to appreciate section 18, because it’s great, and because somehow it was necessary, and also the kind of thing you write in order to ensure that the judge who has no idea how AI works can understand it.
Changing model weights or guardrails is analogous to ordering a cake at a restaurant. Once the cake is served, the diner cannot adjust the recipe at the table, and the chef cannot reach into the dining room–secretly or otherwise–to change it either. Even if the cake is missing only a teaspoon of baking soda, it must be made again from scratch with the correct ingredients baked in.
Here, Anthropic is the chef. If changes to model behavior are desired, whether small or large, Anthropic can prepare a new version of the model with those changes built in.
But that new version is not automatically substituted. Before deployment, the third-party cloud provider and DoW security reviewers inspect and approve it to ensure it is safe and appropriate. Only after that approval, and only after the customer confirms the new version performs as expected, can it retire the prior model.
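The cake analogy describes an approval gate: a deployed model is immutable, a new version is only an offer, and the swap happens solely after government-side review and customer confirmation. A minimal sketch of that flow, with all names and the set of required reviewers invented for illustration (the declaration does not specify the actual approval workflow):

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: once baked, a version cannot be edited in place
class ModelVersion:
    name: str
    weights_hash: str


class ClassifiedDeployment:
    """Hypothetical model of the approval gate described in the declaration."""

    REQUIRED = {"cloud_provider", "dow_security", "customer"}

    def __init__(self, initial: ModelVersion):
        self.live = initial
        self.pending: ModelVersion | None = None
        self.approvals: set[str] = set()

    def submit_new_version(self, version: ModelVersion) -> None:
        # The vendor can only *offer* a new version; the live model is untouched.
        self.pending = version
        self.approvals.clear()

    def approve(self, reviewer: str) -> None:
        # Sign-offs happen on the government/customer side, not the vendor side.
        if reviewer not in self.REQUIRED:
            raise ValueError(f"unknown reviewer: {reviewer}")
        self.approvals.add(reviewer)

    def promote(self) -> bool:
        # The swap only happens once every required reviewer has signed off.
        if self.pending is not None and self.REQUIRED <= self.approvals:
            self.live = self.pending
            self.pending = None
            return True
        return False
```

The frozen dataclass mirrors the "cannot adjust the recipe at the table" point: attempting to mutate a deployed version raises an error, and the only path to different behavior is baking and submitting a whole new version that then sits behind the approval gate.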
The government says models may ‘drift’ over time. That’s not a thing.
LLMs are indeed probabilistic and not fully predictable, but Anthropic has less of this problem than other labs, not more.
The CDC ran into Anthropic’s guardrails when using the commercial version of Claude (which I could have told you would happen), and once Anthropic heard about this they worked to let the CDC have access to a version with fewer guardrails. The government’s concerns about this were misplaced.
Concerns about foreign nationals at Anthropic, raised suddenly in court by the government, are not unique to Anthropic, and Anthropic employs extensive precautions here. If anything he could have said stronger things here; see the next section for more details.
Alan Rozenshtein and Others Suggest A Narrow Legal Way Out
Alan Rozenshtein suggested the obvious remedy for Anthropic in Anthropic PBC vs. Department of War. The government should be allowed to cancel direct contracts, but not issue a government-wide ban or a supply chain risk designation, and definitely not allow the government to demand secondary boycotts or forbid commercial relationships.
The government should of course be free to try to unwind its commercial relationships with Anthropic, but must do so using normal procedures. As Alan notes, the alternative is unthinkable in terms of its impacts on the procurement system, allowing arbitrary retaliation against any AI company at will.
I keep seeing this pattern of predicting that Anthropic is unlikely to prevail on First Amendment grounds, or on grounds that the actions are pure retaliation, arbitrary and capricious, but rather on failure to follow procedure. This seems to combine two things: courts prefer the narrow way out that avoids embarrassment and broad rulings, and our Republic is close enough to the brink of collapse that it no longer matters how obviously an action is purely arbitrary and capricious retaliation, because the courts likely no longer care.
I guess I agree, in the sense that this is not where I want to have a test case, but hot damn, if you can’t win this case on the more fundamental grounds then the Republic’s days are not only numbered but there are not very many of them left.
The good news is that Judge Lin ended up in my corner. Yes, the government loses on procedural grounds, but also on all the other grounds, including the First Amendment.
DoW Tried Another Uniquely Ill-Suited Theory Against Anthropic
This argument fared so badly in court that it was not mentioned in the Judge’s ruling, but it is worth noting that they are trying it.
It takes a lot of stupidity to surprise me, but this worked: The Pentagon is saying that Anthropic’s foreign workforce ‘poses security risks’ in its actual official court filing. There is no version of this that isn’t far worse at OpenAI, Google, and xAI, since Anthropic has by far the best procedures in place, the smallest share of foreign and Chinese engineers, and by far the strongest anti-China stance.
The Pentagon argument is literally that they don’t trust Anthropic, because reasons:
Axios: [Pentagon is saying]: ‘The risks with other major U.S. AI companies that use foreign workers are reduced by “the technical and security assurances of the other labs’ leadership, along with their consistently responsible and trustworthy behavior” when working with the Pentagon, the filing states. “Anthropic’s case, however, is different.”
What are those reasons? At best, because Anthropic said no to the contract, and very possibly this was a political or corporate hit from day one. Therefore their foreign workers are risky while others are not. It’s black letter, the Platonic ideal of retaliation and disregard for even the appearance of rule of law.
It’s also directly in conflict with Trump’s agenda for America to win in AI. If you start telling labs you have a problem with foreign or Chinese engineering talent, and actually force such people out? We lose. Period. It’s that simple.
Samuel Hammond, who was quoted in the piece, has provided us with his full comment to the reporter, which is a good thing to have on hand, so I am quoting it in full, although most of you likely don’t need to read it:
Samuel Hammond: I’m quoted in this piece so let me provide my full comment to the reporter:
The most striking thing about the government’s filing are the things it *doesn’t* mention. It doesn’t mention anything about Anthropic hesitating to allow Claude to be used to defend an incoming hypersonic missile, for instance — one of the many bizarre things alleged by @USWREMichael .
The focus on foreign national employees is an indicator of how thin the DoW’s case is. It is also an extremely fraught line of argument to go down.
Every leading US AI company employs a substantial number of foreign nationals. In FY 2025, Amazon, Microsoft, Meta, Google, Apple, Oracle, Cisco, Intel, and IBM all appeared in the top 50 employers by number of granted H-1B visas, ranging from a few hundred to over 6,000. Meta alone had 5,123 approved H-1B petitions in 2025.
(See [here]).
This is an undercount, of course, as there are many other visa pathways as well as greencard holders and dual nationals.
The share is also higher in AI. A large plurality of the core research and engineering talent at every frontier AI lab is foreign, reflecting the global nature of the race for top AI talent. One talent tracker shows Chinese-origin researchers constitute roughly 40% of top AI talent at US institutions, with foreign nationals in total likely constituting 50-65% of research teams specifically. This is certainly true to my experience on the ground.
So the first point is that employing foreign nationals, including Chinese nationals, is not unique to Anthropic. The more important question is what measures are taken to protect against insider threats.
Ironically, within the industry Anthropic is widely considered to be the most serious and proactive about policing insider threats from foreign nationals and otherwise. They were early adopters of operational security techniques like compartmentalization and audit trails, in part because they were early to partner with the IC and DoW, but also as a reflection of their leadership’s strong convictions about the future power of the technology.
They were audited last year on these points: the compliance review found Anthropic employs role-based access control, just-in-time access with approval workflows, multi-factor authentication for all production systems, and quarterly access reviews.
(See [here])
Anthropic is known for its security mindset more generally. Last year they famously disrupted a Chinese espionage effort occurring on their platform, banned the PRC from their services, and worked with the NSA and others to share intel.
I can’t speak to every other company, but the contrast is perhaps most stark with xAI. X employees famously slept in tents to work around the clock, are disproportionately Chinese, and have at least one case of an employee walking out with tons of sensitive data. See [here].
Anthropic is also famous for its remarkable employee retention, which is another important vector for IP theft and security leakages.
It’s important to underscore just how precarious the DoW’s case is, both on the legal merits, and as a potential precedent for the US AI industry. If employing foreign nationals is treated as a prima facie supply chain risk, *no* major US AI company would be eligible to contract with the DoW, along with most of the tech sector.
Insider threats are a genuine and tricky concern. Many defense companies are ITAR restricted, meaning they can *only* hire US citizens. If that were the standard in AI, we would destroy all our frontier companies in an instant, and then scatter that talent around the world for our adversaries to scoop up.
So in short, the DoW’s argument is both ridiculous and playing with fire.
Joshua Achiam (OpenAI): I think these are important and sober considerations. One more I want to add: it may be a serious risk to US national security interests to become sufficiently inhospitable to foreign technical talent that we drive them to go back home. That would significantly decrease the US capacity for making technical progress at the same time as it hands an extraordinary bounty of talent and know-how to our adversaries and other strategic competitors.
The success of the United States in technology is partly safeguarded by being such a powerful talent magnet: every great researcher or engineer who comes to work here is not working for another country. To the extent that we are in a competitive global race, we should be genuinely cautious about the possibility of diminishing our advantage at the critical moment.
It is very much playing with fire to suggest we don’t want Chinese AI talent working for our benefit, or to risk making them feel unwelcome.
Also this:
china232332: I just pull up the macro polo chart every time
Breaking Defense points out that yes, the public statements by DoW very much undermined their case in court. If DoW had gone ahead with the Supply Chain Risk designation, using nominally correct procedures and without announcing it was all retaliation, they could even have classified their justifications. The main thing defending our Republic is that those looking to take it down have strong incentives to keep saying the quiet parts out loud.
Potentially Long Suffering Judge Rita Lin Goes Hard At Hearing
She released her questions for both parties in advance of the Tuesday hearing.
Is it the Department of War (DoW)’s position that Hegseth’s Tweet had no legal effect? Did Hegseth’s Tweet reflect DoW’s intent? Do we agree DoW can’t legally do that? If we so agree, how can plaintiffs face irreparable harm from it?
Does DoW concede that Hegseth’s letter declaring Anthropic a supply chain risk failed to contain ‘a discussion of less intrusive measures and why they were not reasonably available’ as required by law?
If a contractor used Claude Code to write software, would that be cause for termination of that contractor under the supply chain risk designation, despite there being no argument that this would be a security risk?
Does DoW agree nothing Anthropic did or said was an act of sabotage, an introduction of an unwanted function, or a subversion of an IT system, as per the definition of Supply Chain Risk?
In what way would Anthropic have the access or control to do such things after handing over its systems?
Could not most IT vendors update their systems to bury unwanted functions?
Is DoW saying that the sole basis for the SCR designation is that the vendor acted stubbornly or refused to agree to contract terms, causing the DoW to question its trustworthiness?
When was the memorandum from Emil Michael containing the risk assessment completed and signed?
What is the evidence that Anthropic products are used by [various agencies] as required for Anthropic to have relevant standing?
The questions are brutal because the facts are brutal, and Judge Lin is paying attention and asking the relevant questions.
In advance of the hearing, I said the answers were:
No, they can’t legally do that, so DoW has to say this either did or did not reflect intent, and both answers have problems.
The way Anthropic can face irreparable harm is that, unless the Tweet is explicitly struck down, and to some extent even if it is, it creates uncertainty and reflects DoW’s intent, such that anyone who uses Anthropic products or even associates with Anthropic potentially faces retaliation, or could be forced to offboard Anthropic business dealings at any time. Indeed, Anthropic did observe changed behavior in response despite no one thinking that the Tweet had a direct legal effect.
I presume there was no such discussion or she wouldn’t be asking the question. It is possible that the information was sealed (if so, why?) and she just missed it? This alone makes the supply chain risk designation illegal.
Technically yes, the designation would allow this, so DoW has three choices. It can make up additional ad hoc arguments for why that too is risky, argue that it’s unavoidable incidental damage, or claim that its intent here is narrower than that when it plainly isn’t.
The answers are: It was obviously not sabotage, Anthropic is not physically able to do such things, other IT vendors can more easily do such things given they can patch software in ways Anthropic cannot, and I do not know what dodge DoW will try in order to explain the situation without the argument both being obvious nonsense and also applying to OpenAI.
Presumably they have to tell the truth, and it could be extremely awkward given the date was initially missing. Different dates have different issues.
He later would say this happened on March 2, 2026.
Anthropic will likely need to document this somehow if they want full emergency relief to extend to these nine agencies. I don’t know if they have documentation, and don’t know for sure if all of them indeed used the software.
At the hearing, DoW said the Tweet had no legal effect and no reasonable person could think otherwise, and that now that they’ve clarified it in court that ends the matter.
No explanation was given why they said things they supposedly didn’t mean.
This is all rather Obvious Nonsense. Plenty of reasonable people were uncertain about this and it was clearly an attempt to murder Anthropic.
Obviously this is not how any of this works or was created to work, and amounts to a claim that the DoW is above the law.
Taken seriously, this argument says that the Executive is above the law so long as Congress does not sue, since any harm for breaking the law is only harm to Congress. But also Anthropic was very clearly harmed here.
Then DoW said that using Claude Code as a contractor to write software for the department would be fine (?!) but that ‘the Department shouldn’t have to go contract by contract to make sure Claude wouldn’t infect DoW systems.’
This is even more clearly Obvious Nonsense on so many levels, just making things up and saying them.
The first part of this is a reconciliation attempt, but the second part is basically saying ‘nuke them from orbit it’s the only way to be sure’ as a reason why they get to nuke things.
It also is saying that the reason not to use less intrusive measures is that using those measures sure sounds like a bunch of work, and they’re not ‘work’ guys.
DoW claimed that ‘updates’ would be required, which would allow Anthropic access and ability to ‘engage in sabotage’ if it wanted to, admitting that otherwise Anthropic cannot do this. Judge Lin asked if DoW has to accept such updates (the answer is no) and DoW dodged.
Lin pointed out that most vendors can (far more easily) do such updates, so what is this about? Was it the Department’s view that stubbornness in insisting on contracting terms made a vendor a supply chain risk?
DoW said it was about raising concerns to the Department about lawful uses of the software, and their refusal to agree to an all lawful use contract, and this had destroyed trust. So yes, it was about refusal to agree to a contract, and it was about speech, or as Lin said ‘asking annoying questions.’ Whereas DoW says it is concerned Anthropic might in the future install a kill switch, and Lin flat out did not see the connection.
This was answered by Emil Michael’s declaration, so it was skipped.
Anthropic proposed submitting this evidence that day. DoW objected and they argued over exact timing for this and additional arguments.
Lin asked Anthropic whether DoW could prevent contractors from using Claude, and Anthropic said they simply wanted restoration of the February 27 status quo, and DoW could do anything they could have done then via normal procedures.
Lin said to anticipate a ruling within the next few days.
Dean W. Ball: Pete Hegseth tweeted, basically, “this is my ultra-super official determination, and it is absolutely final,” and the government’s lawyers are standing in court with a straight face saying this “had no legal effect and no reasonable person should have concluded that it did.”
Samuel Roland: Anthropic pointing out that 13 million people saw the Hegseth tweet, very few saw the later letter, need a preliminary injunction (or stipulation) to provide a clarifying statement for other Anthropic business relations to cure the harm here.
DoW does not agree to stipulation.
Not agreeing to the stipulation seems like rather atrocious bad faith here. If no reasonable person could take [X] seriously, then why refuse to repudiate [X]?
Emil Michael Tells On Himself
Emil Michael did indeed file two declarations. So what did he say?
I go over it extensively, for those who need details, but essentially Michael clarifies that his words boil down to almost nothing other than that:
Anthropic insisted that its contract carried force of law.
Anthropic insisted its contract include particular restrictions.
Anthropic insisted that if DoW wanted permission to violate the terms of the contract, that DoW ask them for this permission.
An Anthropic executive once had a phone call in which they questioned the propriety of a particular operation, despite it being within the terms of service, meaning Anthropic never accused DoW of violating the terms of service.
Anthropic’s product is an LLM and sometimes they update it to be better.
He and DoW then retaliated against Anthropic for the above.
That’s the entire argument for why Anthropic is a supply chain risk, and why he is paranoid that they might actively be sabotaging the American military.
This matches the statements in court, which agree that DoW’s worries and designation are purely based on their take on the conversations between the parties. As in, they are retaliation for speech, period.
He admits, if you follow the logic, that all other talk boils down to the above.
His summary says that Anthropic’s ‘unreasonableness’ ‘endangers the strategic implementation and technical integrity’ of DoW’s systems, which is false, that it is a ‘direct challenge to the government’s ability to control its own lawful operations,’ which is false, and that it renders them a supply chain risk as per the legal definition, which is false.
He then says that ‘if DoW were to accede to Anthropic’s demands, the sought-after contract language would introduce a vendor-imposed point of failure.’ This is flatly false, since you cannot ‘introduce’ something via language that is already present and signed by the Department of War, and it also represents a misunderstanding of the technical systems involved that, if present, Michael had every opportunity to avoid.
By his own principles, essentially every contractor is a supply chain risk. He then explicitly cites Anthropic’s ‘hostility in negotiations’ as a motivating factor, as if this was not itself an illegal justification.
He also mentions his claims that are presumably related to the Maduro raid. Since this is his version under oath, we should assume he chose his words carefully.
Emil Michael (under oath): During our negotiations, one of Anthropic’s executives questioned the propriety of the potential use of their software for a sensitive military operation abroad despite that being permitted under the existing terms of service.
He does not care to elaborate further, so we should assume that this is a maximally damning version of that conversation, as he believes that it happened, and respond accordingly.
He then claims this led to alarm about whether they would ‘cause their software to stop working or cause some other disastrous action that would put our warfighters in danger.’
That second statement reflects at best a willful misunderstanding of how all of this works, as has been repeatedly explained. Anthropic cannot ‘cause the software to stop working’ and if they were alarmed a simple question would have confirmed this. It also equates one member of a counterparty at one time questioning some potential action’s ‘propriety’ to assuming someone will then engage in sabotage and put servicemen at risk, which is at best highly disingenuous.
Michael’s statements about LLMs experiencing technical ‘drift’ and requiring ‘constant tuning’ are simply false. His entire section 3 here is at best a series of intentional misrepresentations. He claims Anthropic has the ‘motive, opportunity and means’ to introduce ‘vulnerabilities’ into the software, when Anthropic has neither motive nor opportunity, and to the extent it has any of the three so does any other vendor. Michael seems to think he is speaking to Fox News, or to the new right on Twitter, when he is instead speaking to a court of law.
He specifically attributes Anthropic’s supply risk to the negotiations themselves, and in particular to Anthropic’s true and very standard claims that its contract terms carry the force of law.
His statement that Anthropic asserted it has ‘an approval role in the operational decision chain’ is simply false.
All of this is very clearly punishment for speech, even if we believe Michael and all of his false statements.
Not only does his supply chain risk designation fail to articulate a theory under which Anthropic would be a supply chain risk, and fail to explain why less intrusive measures would be insufficient when they would obviously be sufficient even if Michael’s claims were true, Michael has affirmatively testified under oath that this is all retaliation for speech, in some combination of these named actions, and presumably other speech that he has otherwise blamed as part of ‘throughout these negotiations’:
One executive once questioning the ‘propriety’ of some operation.
Anthropic correctly insisting that its contract terms carry force of law.
Anthropic refusing to agree to DoW’s preferred new contract terms.
A bunch of this reiterates previous statements, including reiterating many claims that we know are false. I’ll skip those parts.
He claims Anthropic is wrong to say certain concerns were not raised, because DoW said they needed permission for ‘all lawful use.’ He says that ‘the department need not share its national security concerns with private actors.’
I take this as an admission that those concerns were not raised, except insofar as they demanded ‘all lawful use’ and that Anthropic’s statements were accurate. Thank you.
Now here is the big one, where he clarifies exactly what was supposedly said about requiring ‘real time authorization’:
Emil Michael (under oath): In a meeting on December 7, 2025, Anthropic leadership expressed that DoW would have to call Anthropic in real time to seek authorization for a usage exception to one of their redlines, which are not prohibited by law.
Alarmingly, that statement demonstrated not only that Anthropic demanded an operational veto of DoW’s decision-making, but that Anthropic would seek to exercise that veto, possibly in situations where any delay or disruption to U.S. military operational decisions and execution could endanger American lives and national security.
So in other words, Emil Michael is claiming that:
If action [X] would be in violation of the contract, which has force of law…
…and DoW wanted to take action [X]…
…then they would need to seek Anthropic’s permission in advance, to do that.
Um, yeah? That’s the whole point of having contract terms? What the hell, man?
There is no implication that Anthropic would exercise that veto, or that this would apply to real time emergency military exercises. All this says is, if you want an exception to our deal, then you ask us first.
What does Emil Michael want? To be able to break the deal in any way he likes, without asking for permission first?
In a game of Deal Or No Deal, that is what we like to call No Deal. At all.
Or the deal is, I do what I want, and you like it. That’s it.
Notice that he does not describe the hypersonic missile situation here, because he knows that he cannot make such a detailed claim in a way that would benefit him.
Once again, in an emergency situation, you do the thing anyway, breaking the deal, file it as emergency use, and then ask for permission afterwards, just like with any other military contract. There’s a standard procedure for this.
Okay, what’s the other half?
Second, in both meetings and a publicly disclosed message to Anthropic staff, Anthropic leadership stated that it has two redlines that it will not allow the Department to cross.
If accepted, this would fundamentally insert Anthropic into decision-making related to those issues, and it engenders concern about other redlines the company may have, now or in the future.
So literally Emil Michael is saying that because Anthropic insisted upon any red lines or terms at all, Anthropic was ‘fundamentally inserting itself into decision-making.’ All Michael is accusing Anthropic of is having any principles at all.
In section 8 he reiterates that by ‘desire an approval role in military operations,’ he means that Anthropic wanted to write terms into its contract. In Section 9 he says this insistence on any rules at all ‘injects the company into DoW’s decision-making’ because that might be embedded in the functionality of the system, a ‘per se operational veto.’
Despite Anthropic’s claims that before contract negotiations broke down the parties were near agreement on language that would address Anthropic’s concerns about its technology being used for lethal autonomous warfare, the fact remains that Anthropic would not agree to acquisition of its LLM products with the usage terms and technical and service delivery specifications DoW requires.
‘Despite Anthropic saying we almost agreed upon terms, we had not agreed on terms.’
Yes, that is what ‘almost’ means. Note also that he doesn’t specify that the disagreement was about weapons, which affirms the email’s claim that the only remaining difference, the one ‘required’ by DoW, was about surveillance.
Contrary to Anthropic’s suggestion, any ongoing discussions do not undermine any of the determinations made by DoW. Rather, DoW will consider any information provided by Anthropic that may warrant altering its supply chain risk designation in whole or in part.
There is no explanation given for why ‘the two sides had all but agreed on terms’ does not undermine the concept that Anthropic cannot be allowed to touch the supply chain. He just says things.
In section 13 he says Anthropic has ‘unusual degree of control’ over its model, which is false, and an adversarial posture, which is speech.
In section 14 he clarifies that this ‘control’ was that Anthropic would update its model to keep it at the cutting edge of technology and it has ‘full control over the weights.’ Oh, okay. So test them first.
In section 15 he says these risks also apply when Anthropic is a secondary contractor.
In section 16 he says the above reasons are the reasons DoW took action, and that they are ‘conducting an audit for any malicious or unintended software intrusions.’ This is almost certainly pure grandstanding, but in general the military should presumably be conducting such audits all the time on all of its software and LLMs. Why are we not already doing so (unless we already were)?
In section 17, referring to the incident in which the CDC tried to use commercial Claude, ran into guardrails, and had Anthropic help them remove those guardrails, he claims Anthropic ‘failed to inform the prime contractor and agency up front,’ despite those guardrails being prominently explained in its model cards, and something I was personally fully aware of.
He clarifies that he signed and dated the memorandum on March 2, 2026.
What is also important is what Emil Michael declined to say.
One missing thing is that previously, he had claimed that Sarah Heck, Anthropic’s head of policy, was presenting deeply misleading testimony. But when the time came, his testimony did not contradict hers on the central points of complaint. Thus we conclude her testimony was accurate.
More generally, he has stated here, over and over, that Anthropic is a risk because Anthropic are annoying people who are difficult to negotiate with, had one executive who questioned the propriety of one operation, didn’t want to agree to his contract terms, insisted contract terms have meaning, and he claims they complained to the press. Anything else? Nope, that’s it.
Judge Lin Drops The Hammer
Here is her ruling. Oh my lord is she pissed. I am not a lawyer, but I am rather sure this is what federal judges sound like when there might as well be steam coming out of their ears.
She did not merely grant the preliminary injunction. She ruled against the government in essentially every possible way, ruling Anthropic likely to prevail on all of its theories, contradicting numerous government claims, and shutting off various ways the government could hope to win on appeal.
The Honorable Judge Lin: Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.
The ruling is 43 pages of fun for the whole family. I’m not saying most of you should read it, but I can report that it sparked joy.
I mean, how often do judges get to write things like this?
The Honorable Judge Lin: That designation has never been applied to a domestic company and is directed principally at foreign intelligence agencies, terrorists, and other hostile actors.
These broad measures do not appear to be directed at the government’s stated national security interests. If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic. One of the amicus briefs described these measures as “attempted corporate murder.” They might not be murder, but the evidence shows that they would cripple Anthropic.
The record supports an inference that Anthropic is being punished for criticizing the government’s contracting position in the press. In their announcements, the President and Secretary Hegseth called Anthropic “out of control” and “arrogant,” describing its “sanctimonious rhetoric” as an attempt to “strong-arm” the government. The Department of War’s records show that it designated Anthropic as a supply chain risk because of its “hostile manner through the press.” Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.
Moreover, Defendants’ designation of Anthropic as a “supply chain risk” is likely both contrary to law and arbitrary and capricious. The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur. At oral argument, government counsel suggested that Anthropic showed its subversive tendencies by “questioning” the use of its technology, “raising concerns” about it, and criticizing the government’s position in the press. Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.
There are other serious procedural problems with the government’s actions. Anthropic had no notice or opportunity to respond, which likely violated its due process rights. And the Department of War flouted procedural safeguards required by Congress before entering the supply chain risk designation, including that it consider less intrusive measures.
At bottom, Anthropic has shown that these broad punitive measures were likely unlawful and that it is suffering irreparable harm from them. Numerous amici have also described wide-ranging harm to the public interest, including the chilling of open discussion about important topics in AI safety. The motion for a preliminary injunction is granted.
I couldn’t have said it better myself. She did the work, and is the judge. She can.
Here’s one passage that caught my eye.
On March 4, 2026, Anthropic received two letters on DoW letterhead, signed by Secretary Hegseth and dated one day prior, regarding Anthropic’s designation as a “supply chain risk.”
She says ‘two letters on DoW letterhead’ because they don’t meet the legal requirements of what they are purporting to do, and as she then notes, DoW can’t get its story straight on what the letters even purport to do, and doesn’t explain how any of this would work. They couldn’t be bothered to explain what would happen, or justify any of it.
The entire ruling is like that, only funnier.
Under Secretary Michael’s declaration submitted in sur-reply explains that his reference to “approval rights” related solely to permitting an exception to the usage policies: specifically, he refers to “a contract negotiation meeting on December 4, 2025, [where] Anthropic leadership expressed that DoW would have to call Anthropic in real time to seek authorization for a usage exception to one of their redlines.”
…
In sum, the record shows that, outside of Anthropic’s contractual redlines (which Anthropic proposed it might be willing to waive in specific circumstances) and an unnamed Anthropic executive’s questions to a third-party vendor, Anthropic has not sought to participate in DoW’s operational decision chain. Under Secretary Michael does not dispute Heck’s attestation that negotiations between Anthropic and DoW had remained “cordial and amicable.”
Oh, right. That.
Did it matter? Oh, yes, it matters quite a lot.
Major law firms began alerting their government-contractor clients to “audit[] their Anthropic exposure now” and “prepare to deploy alternatives” to Anthropic. (Dkt No. 6-30 at 4; Dkt No. 6-31 at 3.) Within days of the Challenged Actions, Anthropic has experienced a revenue impact as deals worth hundreds of millions of dollars have been delayed from closing, prospective clients have pulled out of negotiations, and some customers have terminated contracts.
Yes, Anthropic was already growing fast enough that, even losing hundreds of millions of dollars in ‘several days,’ they were probably still growing during that period. Things are utterly insane right now and will only get crazier from here. But that is really quite a lot of lost business.
So in short, yes, a preliminary injunction is an extraordinary remedy that may only be awarded upon a clear showing that the plaintiff is entitled to such relief, as per Winter v. Natural Resources Defense Council, but well, here we are.
There’s no kill like overkill, and Anthropic has at least three distinct ways it will likely prevail, as per Lin: on First Amendment, Fifth Amendment, and APA grounds, with the actions also being arbitrary and capricious and completely outside any legal authority granted by Congress.
Here are some section headings:
1. The Supply Chain Risk Designation Was in Excess of Statutory Authority and Contrary to Law
a. Anthropic’s Conduct Does Not Meet the Definition of a Supply Chain Risk
b. The Supply Chain Designation Failed to Satisfy the Procedural Requirements of Section 3252
2. The Hegseth Directive Was in Excess of Statutory Authority and Contrary to Law
3. The Supply Chain Risk Designation and the Hegseth Directive’s Blacklisting of Anthropic Were Arbitrary and Capricious
Anthropic agreed to allow the government seven days to appeal.
VI. REQUEST FOR BOND AND FOR A STAY
In light of Anthropic’s non-opposition, Defendants’ request for an administrative stay for seven days to allow the United States to seek an emergency, expedited stay from the court of appeals is GRANTED.
So, in conclusion:
VIII. CONCLUSION
For the foregoing reasons, Anthropic’s Motion for Preliminary Injunction (Dkt. No. 6) is GRANTED as modified.
A separate order concerning Anthropic’s request for a Section 705 stay and preliminary injunctive relief will issue, but is hereby stayed for seven days.
IT IS SO ORDERED.
Let Me Count The Ways
Dean W. Ball: A recap of all the theories of the case Judge Lin has ruled Anthropic is “substantially likely” to ultimately prevail on:
1. Unconstitutional government retaliation for first amendment-protected speech—the government’s actions were motivated by Anthropic’s political beliefs and speech.
2. Violation of Anthropic’s 14th Amendment due process rights
3. The Administration acted entirely outside of any legal authority granted to it by Congress
4. Violation of the procedures of 10 USC 3252–to the extent government acted within the bounds of 10 USC 3252, even here it likely violated the plain text of the law, which requires the DoW to evaluate less restrictive options than the “supply chain risk” designation
5. Arbitrary and capricious—the government’s actions violated the arbitrary and capricious standard of the Administrative Procedure Act
Again, this is NOT a ruling on the merits. The judge is not saying Anthropic definitively wins on any of these. She is saying they are “substantially likely” to win. The government will appeal this procedural ruling, which will yield another procedural ruling from an appellate court.
Nonetheless, the “substantial likelihood” standard is not a small hurdle to clear, especially given we are dealing in an area of law (national security procurement) where the executive branch typically gets maximal deference from the courts. And of course, the supply chain risk designation is stopped, at least for now.
There is a long way ahead, and of course the government could escalate even further. But this is a good day for America, and do not let anyone tell you otherwise.
Summarizing the whole ruling, I think this is totally fair:
Dean W. Ball: This is a devastating ruling for the government, finding Anthropic likely to prevail on essentially all of its theories for why the government’s actions were unlawful and unconstitutional. One of the things she mentions is the huge range of amici briefs supporting Anthropic (by the way, 0 supported USG)—so thanks to everyone here who signed on to FAI’s brief, or to one of the many many others. These things do matter. More importantly, you were on the right side of history.
On a personal note: some friends and allies of mine on the right have been angry at me for my own words and actions in all this. Anyone who thinks I spoke out against an administration I served in is crazy. It was a hugely costly decision for me. But Judge Lin’s ruling shows why I did it: this is a staggeringly illegal act by the government. That is why I am particularly honored to have been (implicitly) quoted in the ruling for calling this what it was when Secretary Hegseth initially made his announcement: an attempted act of corporate murder.
The case continues, but Anthropic has scored a very large win here. The real victors, however, are all red-blooded Americans who are, as the founders would have said, “jealous of their liberties.”
Mckay Wrigley: crazy that obvious attempted corporate murder was in fact deemed to be obvious attempted corporate murder. great day.
I did consider signing onto a brief in this case, but it was wisely concluded that my signature would not be net helpful.
Let me take this moment to give thanks to Dean Ball for his extraordinary courage in this matter, as well as all the others who helped get us to this point.
Emil Michael Doubles Down Once Again
A wise person would either make a deal, or pivot and focus on the war against Iran.
Instead, Michael continues to make a fool of himself.
Will Chamberlain: This may be the most outlandish and absurd order I’ve seen from a judge this year and that’s saying something.
Under Secretary of War Emil Michael (March 26, day of ruling, 11:21pm, QTing Will Chamberlain): There are dozens of factual errors in the 42 page judgment rushed out in 48 hours DURING A TIME OF CONFLICT that seeks to upend the @POTUS role as Commander in Chief and disrupt @SecWar full ability to conduct military operations with the partners it chooses. A disgrace.
Chamberlain was quoting Eric W, who does not understand how the government can be barred from taking actions to ban something, yet remain free to simply choose not to buy it. This is deep echo chamber rhetoric.
You have to love the all-caps ‘during a time of conflict’ when they set an arbitrary deadline and then launched all of this hours before starting exactly that conflict of choice.
The timing was almost certainly not a coincidence, to the extent that I have kicked myself for not putting it together. We are lucky Iran also did not realize in advance.
Meanwhile, what are these dozens of factual errors? I did a close reading and could count exactly zero. Emil Michael has pointed out exactly zero.
This is not how you talk when you are hoping a judge will rule in your favor. Perhaps a goal is now to claim that Judge Lin must be biased, and thus to try and provoke her into saying something foolish? More likely is that Michael is talking this way because he has the habit of talking this way, and he is talking to other very different audiences where such talk plays well even when disconnected from reality.
There’s also this grasping at yet another unrelated straw, as if the goal is simply to show that Anthropic is bad in whatever way he can. I’ll cover that story on my normal schedule but it obviously has nothing to do with the current case.
As for what the injunction actually orders: the government is prevented from implementing, applying, or enforcing in any manner the Presidential directive, or any actions taken in response, or doing anything in furtherance of it.
The DoW is restrained from implementing, applying, or enforcing in any manner the Hegseth Directive or the Supply Chain Risk designation, or doing anything in furtherance, or taking any other action for these purposes.
The SCR and Hegseth directive are stayed.
Defendants shall issue a report no later than April 6, 2026 detailing compliance.
Defendants are free to stop using Anthropic’s products, if they want to stop.
Oh, and Anthropic posts a bond of $100.
The government has seven days to appeal, so the legal action moves to the 9th Circuit.
Given the ruling, it will be very difficult to challenge it on appeal. They will presumably try anyway. The government probably loses again, and loses more face, but at this point not appealing would probably seem even worse than losing. It is plausible that they win some narrowing of the injunction.
The case takes a while to play out. It is not 100% that Anthropic wins at trial, even with Judge Lin. But the chances seem very high, given what we now know.
The key variable is, what does the government choose to do about this?
As Dean Ball points out, the main risk is that the government escalates further.
The smart play by DoW would be deescalation. They have indeed done substantial harm to Anthropic, and shown their propensity for retaliation. Given they are involved in a rather serious undeclared war that is dominating headlines, it would seem easy for DoW to basically close the book here and pretend none of this happened. They can continue to litigate the case, but allow it to resolve quietly. If they don’t want to use Anthropic models going forward, that’s America’s loss, but it’s fine.
I notice that this fragment lacks a subject, and there is thus a trivial way to make it technically true. Far more true than it could be otherwise, really.
Last night, Anthropic was given its preliminary injunction, with a stay of seven days.
Emil Michael is a very angry person right now. So is the Honorable Judge Lin.
We were worried we would draw a judge that had no idea how any of this worked and would give the government absurd deference or buy into nonsense arguments.
That is not how it played out. Judge Lin very much understood the issues in play, as they did not require a technical background. She hammered the government in the hearing, and she wrote one of the most forceful, devastating judge opinions I have ever seen. It was an honor and sparked joy to be able to read it.
This post will proceed chronologically, picking up after the events of my last update.
If you want the short version and don’t care about the incremental steps, you can skip directly to Judge Lin Drops The Hammer, leaving the rest as a historical document and source for those who need to establish various facts going forward, including in court.
Logistical note: Due to breaking news, AI #161 Part 2 will be published on Monday. Then, if no major news breaks that preempts this, Tuesday and Wednesday will be about Anthropic’s RSPv3.
Anthropic Responds To The DoW’s Brief
First Anthropic presented a brief and many provided amicus briefs. Then the government responded with its own brief. That was covered last time. So next up was Anthropic responding to the Department of War (DoW).
Roger Parloff covers Anthropic’s additional testimonies from Friday from Head of Policy Sarah Heck and Head of Public Sector Thiyagu Ramasamy.
That was reported in TechCrunch based on the public record.
There was also the following important email included as an exhibit.
This was also 17 (!) minutes before the Dario memo was leaked via The Information.
It is extremely hard to square that email with any interpretation of events other than that the parties were negotiating, they were close to a deal, the only remaining issue was technical wording around domestic surveillance, and then the DoW retaliated against Dario Amodei’s use of free speech by declaring Anthropic a supply chain risk.
You see, I am a disciplined Serious News Reporter now, so I say ‘extremely hard to square’ and absolutely am not saying ‘liar liar pants on fire.’
However, I do make this request. If you are reading this, and you are talking to Emil Michael, and you think he has been consistently candid with you, please consider the possibility that you are mistaken about that. Also consider, as evidence of this, the section analyzing his testimony.
This all once again made Emil Michael very upset, as basically every statement by Anthropic does, and many other things also do, and he fired back with this:
I reminded him that these declarations were sworn testimony, thus under oath. He replied that the witnesses have not yet been cross examined.
That’s fair enough, but the typical reading of ‘wait until they are under oath’ is that once under oath, as Emil Michael so far is not, witnesses will have to tell the truth, upon penalty of perjury, here against a highly vindictive government. As in, previous statements have been cheap talk, whereas this is for real. Which it clearly is.
Mostly what Emil Michael said had nothing to do with the statement by Sarah Heck.
The government’s legal defense did not include mention of the supposed missile conversation, let alone a distinct version of events. One presumes this is because, when the time came to give sworn testimony, the description suddenly became rather different, and not so beneficial to their case. Thus, Heck did not mention it either, so why does Michael here?
Whereas when you look at what Heck did say, there are some important claims that directly contradict the government’s case in key ways, and that Michael does not seem to be contradicting.
Sarah Heck claims the following (among other things), under oath:
If those facts are true, the government has absolutely no case.
Nor would Michael’s later submitted testimony contradict Heck’s central claims. Judge Lin would repeatedly point out that Anthropic’s assertions went unchallenged, or in other cases that the government had agreed with those assertions. Michael will claim that Anthropic expressed this interest in #3, or that there were concerns related to #5, but his testimony will make clear that he had no basis for this other than being upset about the course of negotiations, and being upset about Anthropic’s speech. His attempt to contradict #6 only confirms Anthropic’s technical account.
Thiyagu Ramasamy claims the following (among other things), under oath:
I’m going to pause to appreciate section 18, because it’s great, because somehow it was necessary, and also because it is the kind of thing you write in order to ensure that a judge who has no idea how AI works can understand it.
Emil Michael stated on Twitter that he intends to submit a declaration of his own. I look forward to what is in it, and also to what is not in it.
Alan Rozenshtein and Others Suggest A Narrow Legal Way Out
Alan Rozenshtein suggested the obvious remedy for Anthropic in Anthropic PBC v. Department of War. The government should be allowed to cancel direct contracts, but not issue a government-wide ban or a supply chain risk designation, and definitely should not be allowed to demand secondary boycotts or forbid commercial relationships.
The government should of course be free to try to unwind its commercial relationships with Anthropic, but must do so using normal procedures. As Alan notes, the alternative is unthinkable in terms of its impacts on the procurement system, allowing arbitrary retaliation against any AI company at will.
I see this pattern of saying that Anthropic is unlikely to prevail on first amendment grounds, or on grounds that the actions are pure retaliation, arbitrary and capricious, but rather on failure to follow procedure. This seems like a combination of courts preferring a narrow way out that avoids embarrassment or a broad ruling, and a basic statement that our Republic is so close to the brink of collapse that it no longer matters how obviously an action is purely arbitrary and capricious retaliation, because the courts likely no longer care.
I guess I agree, in the sense that this is not where I want to have a test case, but hot damn, if you can’t win this case on the more fundamental grounds then the Republic’s days are not only numbered but there are not very many of them left.
The good news is that Judge Lin ended up in my corner. Yes, the government loses on procedural grounds, but also on all the other grounds, including the First Amendment.
DoW Tried Another Uniquely Ill-Suited Theory Against Anthropic
This argument fared so badly in court that it was not mentioned in the Judge’s ruling, but it is worth noting that they tried it.
It takes a lot of stupidity to surprise me, but this worked: The Pentagon is saying that Anthropic’s foreign workforce ‘poses security risks’ in its actual official court filing. There is no version of this that isn’t far worse at OpenAI and Google and xAI, since Anthropic has by far the best procedures in place, has the smallest share of foreign and Chinese engineers, and has by far the strongest anti-China stance.
The Pentagon argument is literally that they don’t trust Anthropic, because reasons:
What are those reasons? At best, because Anthropic said no to the contract, and very possibly this was a political or corporate hit from day one. Therefore their foreign workers are risky while others are not. It’s black letter, the Platonic ideal of retaliation and disregard for even the appearance of rule of law.
It’s also directly in conflict with Trump’s agenda for America to win in AI. If you start telling labs you have a problem with foreign or Chinese engineering talent, and actually force such people out? We lose. Period. It’s that simple.
Samuel Hammond, who was quoted in the piece, has provided us with his full comment to the reporter, which is a good thing to have on hand, so I am quoting it in full, although most of you likely don’t need to read it:
It is very much playing with fire to suggest we don’t want Chinese AI talent working for our benefit, or to risk making them feel unwelcome.
Also this:
Other Views About The Situation From Back Then
Scott Aaronson described the situation as he saw it. Seems largely correct.
Breaking Defense points out that yes, the public statements by DoW very much undermined their case in court. If DoW had gone ahead with the Supply Chain Risk designation, using nominally correct procedures and without announcing it was all retaliation, they could even have classified their justifications. The main thing defending our Republic is that those looking to take it down have strong incentives to keep saying the quiet parts out loud.
Potentially Long-Suffering Judge Rita Lin Goes Hard At Hearing
She released her questions for both parties in advance of the Tuesday hearing.
It went hard. Here is a summary of the questions.
The questions are brutal because the facts are brutal, and Judge Lin is paying attention and asking the relevant questions.
In advance of the hearing, I said the answers were:
We then got various summaries of the hearing. Here is a writeup, here’s a partial tweetstorm and here is another. What were the answers?
As an easter egg, Judge Lin quoted Dean Ball’s line of ‘attempted corporate murder’ in the hearing. That probably wasn’t a great sign for the government’s case.
State of play, among other things:
Not agreeing to the stipulation seems like rather atrocious bad faith here. If no reasonable person could take [X] seriously, then why refuse to repudiate [X]?
Emil Michael Tells On Himself
Emil Michael did indeed file two declarations. So what did he say?
I go over it extensively, for those who need details, but essentially Michael clarifies that his words boil down to almost nothing other than that:
That’s the entire argument for why Anthropic is a supply chain risk, and why he is paranoid that they might actively be sabotaging the American military.
This matches the statements in court, which agree that DoW’s worries and designation are purely based on their take on the conversations between the parties. As in, they are retaliation for speech, period.
He admits, if you follow the logic, that all other talk boils down to the above.
The first declaration is brief and begins on page six here.
His summary says that Anthropic’s ‘unreasonableness’ ‘endangers the strategic implementation and technical integrity’ of DoW’s systems, which is false, that it is a ‘direct challenge to the government’s ability to control its own lawful operations,’ which is false, and that it renders them a supply chain risk as per the legal definition, which is false.
He then says that ‘if DoW were to accede to Anthropic’s demands, the sought-after contract language would introduce a vendor-imposed point of failure.’ This is flatly false, since you cannot ‘introduce’ something via language that is already present and signed by the Department of War, and it also represents a misunderstanding of the technical systems involved that, if present, Michael had every opportunity to avoid.
By his own principles, essentially every contractor is a supply chain risk. He then explicitly cites Anthropic’s ‘hostility in negotiations’ as a motivating factor, as if this was not itself an illegal justification.
He also mentions his claims that are presumably related to the Maduro raid. Since this is his version under oath, we should assume he chose his words carefully.
He does not care to elaborate further, so we should assume that this is a maximally damning version of that conversation, as he believes that it happened, and respond accordingly.
He then claims this led to alarm about whether they would ‘cause their software to stop working or cause some other disastrous action that would put our warfighters in danger.’
That second statement reflects at best a willful misunderstanding of how all of this works, as has been repeatedly explained. Anthropic cannot ‘cause the software to stop working’ and if they were alarmed a simple question would have confirmed this. It also equates one member of a counterparty at one time questioning some potential action’s ‘propriety’ to assuming someone will then engage in sabotage and put servicemen at risk, which is at best highly disingenuous.
Michael’s statements about LLMs experiencing technical ‘drift’ and requiring ‘constant tuning’ are simply false. His entire section 3 here is at best a series of intentional misrepresentations. He claims Anthropic has the ‘motive, opportunity and means’ to introduce ‘vulnerabilities’ into the software, when Anthropic has neither motive nor opportunity, and to the extent it has any of the three so does any other vendor. Michael seems to think he is speaking to Fox News, or to the new right on Twitter, when he is instead speaking to a court of law.
He specifically attributes Anthropic’s supply risk to the negotiations themselves, and in particular to Anthropic’s true and very standard claims that its contract terms carry the force of law.
His statement that Anthropic asserted it has ‘an approval role in the operational decision chain’ is simply false.
All of this is very clearly punishment for speech, even if we believe Michael and all of his false statements.
Not only does his supply chain risk designation fail to articulate a theory under which Anthropic would be a supply chain risk, and fail to explain why less intrusive measures would be insufficient when they would obviously be sufficient even if Michael’s claims were true, Michael has affirmatively testified under oath that this is all retaliation for speech, in some combination of these named actions, and presumably other speech that he has otherwise blamed as part of ‘throughout these negotiations’:
The second declaration is here.
A bunch of this reiterates previous statements, including reiterating many claims that we know are false. I’ll skip those parts.
He claims Anthropic is wrong to say certain concerns were not raised, because DoW said they needed permission for ‘all lawful use.’ He says that ‘the department need not share its national security concerns with private actors.’
I take this as an admission that those concerns were not raised, except insofar as they demanded ‘all lawful use’ and that Anthropic’s statements were accurate. Thank you.
Now here is the big one, where he clarifies exactly what was supposedly said about requiring ‘real time authorization’:
So in other words, Emil Michael is claiming that:
Um, yeah? That’s the whole point of having contract terms? What the hell, man?
There is no implication that Anthropic would exercise that veto, or that this would apply to real time emergency military exercises. All this says is, if you want an exception to our deal, then you ask us first.
What does Emil Michael want? To be able to break the deal in any way he likes, without asking for permission first?
In a game of Deal Or No Deal, that is what we like to call No Deal. At all.
Or the deal is, I do what I want, and you like it. That’s it.
Notice that he does not describe the hypersonic missile situation here, because he knows that he cannot make such a detailed claim in a way that would benefit him.
Once again, in an emergency situation, you do the thing anyway, breaking the deal, file it as emergency use, and then ask for permission afterwards, just like with any other military contract. There’s a standard procedure for this.
Okay, what’s the other half?
So literally Emil Michael is saying that because Anthropic insisted upon any red lines or terms at all, that meant Anthropic was ‘fundamentally inserting itself into decision-making.’ All Michael is accusing Anthropic of, at all, is having any principles at all.
In section 8 he reiterates that by ‘desire an approval role in military operations,’ he means that Anthropic wanted to write terms into its contract. In section 9 he says this insistence on any rules at all ‘injects the company into DoW’s decision-making’ because that might be embedded in the functionality of the system, a ‘per se operational veto.’
‘Despite Anthropic saying we almost agreed upon terms, we had not agreed on terms.’
Yes, that is what ‘almost’ means. Note also that he doesn’t specify that the disagreement was about weapons, which affirms the email’s claim that the only remaining difference, the one ‘required’ by DoW, was about surveillance.
There is no explanation given for why ‘the two sides had all but agreed on terms’ does not undermine the concept that Anthropic cannot be allowed to touch the supply chain. He just says things.
In section 13 he says Anthropic has an ‘unusual degree of control’ over its model, which is false, and an adversarial posture, which is speech.
In section 14 he clarifies that this ‘control’ was that Anthropic would update its model to keep it at the cutting edge of technology and it has ‘full control over the weights.’ Oh, okay. So test them first.
In section 15 he says these risks also apply when Anthropic is a secondary contractor.
In section 16 he says the above reasons are the reasons DoW took action, and that they are ‘conducting an audit’ for ‘any malicious or unintended software intrusions.’ This is almost certainly pure grandstanding, but in general the military should presumably be conducting such audits all the time on all of its software and LLMs. Why are we not already doing so (unless we already were)?
In section 17, referring to the incident at the CDC in which the CDC tried to use commercial Claude and ran into guardrails so Anthropic helped them remove the guardrails, he claims Anthropic ‘failed to inform the prime contractor and agency up front,’ despite those guardrails being something I was personally fully aware of, and that were prominently explained in its model cards.
He clarifies that he signed and dated the memorandum on March 2, 2026.
What is also important is what Emil Michael declined to say.
One missing thing is that previously, he had claimed that Sarah Heck, Anthropic’s head of policy, was presenting deeply misleading testimony. But when the time came, his testimony did not contradict hers on the central points of complaint. Thus we conclude her testimony was accurate.
More generally, he has stated here, over and over, that Anthropic is a risk because Anthropic are annoying people who are difficult to negotiate with, had one executive who questioned the propriety of one operation, didn’t want to agree to his contract terms, insisted contract terms have meaning, and he claims they complained to the press. Anything else? Nope, that’s it.
Judge Lin Drops The Hammer
Here is her ruling. Oh my lord is she pissed. I am not a lawyer, but I am rather sure this is what Federal Judges sound like when there might as well be steam coming out of their ears.
She did not merely grant the preliminary injunction. She ruled against the government in essentially every possible way, ruling Anthropic likely to prevail on all of its theories, contradicting numerous government claims, and shutting off various ways the government could hope to win on appeal.
The ruling is 43 pages of fun for the whole family. I’m not saying most of you should read it, but I can report that it sparked joy.
I mean, how often do judges get to write things like this?
I couldn’t have said it better myself. She did the work, and is the judge. She can.
Here’s one passage that caught my eye.
She says ‘two letters on DoW letterhead’ because they don’t meet the legal requirements of what they are purporting to do, and as she then notes DoW can’t get its story straight on what the letters even purport, and doesn’t explain how any of this will work. They couldn’t be bothered to explain what would happen, or justify any of it.
The entire ruling is like that, only funnier.
Oh, right. That.
Did it matter? Oh, yes, it matters quite a lot.
Yes, Anthropic was already growing fast enough that hundreds of millions of dollars in ‘several days’ means they were probably still growing during that period, and yes, things are utterly insane right now and will only get crazier from here, but that is really quite a lot of lost business.
So in short, yes, a preliminary injunction is an extraordinary remedy that may only be awarded upon a clear showing that the plaintiff is entitled to such relief, as per Winter v. Nat. Res. Def. Council, but well, here we are.
There’s no kill like overkill, and Anthropic has at least three distinct ways it will likely prevail, as per Lin, on 1st Amendment, 5th Amendment and also APA terms, and also because it was all arbitrary and capricious and completely outside any legal authority granted by Congress.
Here are some section headings:
Anthropic agreed to allow the government seven days to appeal.
So, in conclusion:
Let Me Count The Ways
Summarizing the whole ruling, I think this is totally fair:
I did consider signing onto a brief in this case, but it was wisely concluded that my signature would not be net helpful.
Let me take this moment to give thanks to Dean Ball for his extraordinary courage in this matter, as well as all the others who helped get us to this point.
Emil Michael Doubles Down Once Again
A wise person would either make a deal, or pivot and focus on the war against Iran.
Instead, Michael continues to make a fool of himself.
Chamberlain was quoting Eric W, who does not understand how you can be barred from taking actions to ban something, but are free to simply choose not to buy it. This is deep echo chamber rhetoric.
You have to love the all-caps ‘during a time of conflict’ when they set an arbitrary deadline and then started all this hours before starting exactly that conflict of choice.
The timing was almost certainly not a coincidence, to the extent that I have kicked myself for not putting it together. We are lucky Iran also did not realize in advance.
Meanwhile, what are these dozens of factual errors? I did a close reading and could count exactly zero. Emil Michael has pointed out exactly zero.
This is not how you talk when you are hoping a judge will rule in your favor. Perhaps a goal is now to claim that Judge Lin must be biased, and thus to try and provoke her into saying something foolish? More likely is that Michael is talking this way because he has the habit of talking this way, and he is talking to other very different audiences where such talk plays well even when disconnected from reality.
He reminds us that the Supply Chain Risk designation is ‘in full force and effect,’ because the order was stayed for seven days to allow for appeal. As in, presumably he wants to ensure Anthropic is hurt as much as possible in these next seven days?
There’s also this grasping at yet another unrelated straw, as if the goal is simply to show that Anthropic is bad in whatever way he can. I’ll cover that story on my normal schedule but it obviously has nothing to do with the current case.
What Happens Now
The injunction can be viewed here.
Oh, and Anthropic posts a bond of $100.
The government has seven days to appeal, so the legal action moves to the 9th Circuit.
Given the ruling, it will be very difficult to challenge it on appeal. They will presumably try anyway. The government probably loses again, and loses more face, but at this point not appealing would probably seem even worse than losing. It is plausible that they win some narrowing of the injunction.
The case takes a while to play out. It is not 100% that Anthropic wins at trial, even with Judge Lin. But the chances seem very high, given what we now know.
The key variable is, what does the government choose to do about this?
As Dean Ball points out, the main risk is that the government escalates further.
The smart play by DoW would be deescalation. They have indeed done substantial harm to Anthropic, and shown their propensity for retaliation. Given they are involved in a rather serious undeclared war that is dominating headlines, it would seem easy for DoW to basically close the book here and pretend none of this happened. They can continue to litigate the case, but allow it to resolve quietly. If they don’t want to use Anthropic models going forward, that’s America’s loss, but it’s fine.