Recently, Anthropic sued the US Department of War et al. over being designated a supply chain risk. The full text of the filing is reproduced below, except that the footnotes and some formatting have been removed.
INTRODUCTION
1. Anthropic is a leading frontier artificial intelligence (AI) developer whose flagship family of AI models is known as "Claude." Anthropic was founded based on the belief that AI technologies should be developed and used in a way that maximizes positive outcomes for humanity, and its primary animating principle is that the most capable artificial-intelligence systems should also be the safest and the most responsible. Anthropic brings this suit because the federal government has retaliated against it for expressing that principle. When Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology"—even though the Department of War (Department) had previously agreed to those same conditions. Hours later, the Secretary of War directed his Department to designate Anthropic a "Supply-Chain Risk to National Security," and further directed that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." In a letter to Anthropic, the Secretary confirmed the designation as "necessary to protect national security." These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation.
2. Since its inception, Anthropic has worked to offer AI services to customers in the private and public sectors in a manner consistent with its founding principles of safety and responsibility. It has partnered extensively with the federal government, and particularly the United States Department of War. Anthropic has even developed Claude models that help the Department to protect national security. As a result of these efforts, Claude is reportedly the Department's most widely deployed and used frontier AI model, and the only frontier AI model on the Department's classified systems. And the Department has acknowledged Anthropic's unique contributions in this area, praising Claude for its "exquisite" capabilities and reportedly using Claude—to this day—in its most important military missions.
3. Anthropic's Usage Policy has always conveyed its view that Claude should not be used for two specific applications: (1) lethal autonomous warfare and (2) surveillance of Americans en masse. Anthropic has never tested Claude for those uses. Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare. These usage restrictions are therefore rooted in Anthropic's unique understanding of Claude's risks and limitations—including Claude's capacity to make mistakes and its unprecedented ability to accelerate and automate analysis of massive amounts of data, including data about American citizens. Anthropic has collaborated with the Department of War on modifications to its usage restrictions to facilitate the Department's work with Claude, in recognition of the Department's unique missions. But Anthropic has always maintained its commitment to those two specific restrictions, including in its work with the Department of War.
4. Recently, however, Secretary of War Hegseth and his Department began demanding that Anthropic discard its usage restrictions altogether and replace them with a general policy under which the Department may make "all lawful use" of the technology. Anthropic largely agreed to the Department's request, except for two restrictions it viewed as critical: prohibitions against use of the technology for lethal autonomous warfare and mass surveillance of Americans. Throughout these discussions, Anthropic expressed its strongly held views about the limitations of its AI services. It also made clear that, if an arrangement acceptable to the Department could not be reached, Anthropic would collaborate with the Department on an orderly transition to another AI provider willing to meet its demands.
5. The Department met Anthropic's attempts at compromise with public castigation. It labeled Anthropic's CEO as too "ideological" and a "liar" with a "God-complex" who "is ok putting our nation's safety at risk." The Department eventually gave Anthropic a public ultimatum: "get on board" and accede to the government's demands by 5:01 p.m. on February 27, 2026, or "pay a price" in the form of either being cast out of the defense supply chain under 10 U.S.C. § 3252 or forced to provide unlimited use of Claude under the Defense Production Act.
6. After Anthropic's CEO publicly announced that the company could not "in good conscience accede to" the Department's demands, the Executive Branch swiftly retaliated.
7. On February 27, 2026, President Trump posted a statement on social media (the Presidential Directive), "directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." He derided Anthropic as "out-of-control" and a "RADICAL LEFT, WOKE COMPANY" of "Leftwing nut jobs." He also accused Anthropic of "selfishness" and of making a "DISASTROUS MISTAKE." "Anthropic better get their act together," the President threatened, or he would "use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
8. The same afternoon, Secretary Hegseth purported to act on "the President's directive" by posting a "final" decision via social media (the Secretarial Order). The Secretarial Order "direct[ed] the Department of War to designate Anthropic a Supply-Chain Risk to National Security." It also proclaimed that "[e]ffective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The Secretary denounced what he characterized as Anthropic's "Silicon Valley ideology," "defective altruism," "corporate virtue-signaling," and "master class in arrogance." And he criticized Anthropic for not being "more patriotic." But he also directed that "Anthropic will continue to provide the Department of War its services for a period of no more than six months."
9. Other federal agencies soon followed suit. For example, the General Services Administration terminated Anthropic's "OneGov" contract, thereby ending the availability of Anthropic services to all three branches of the federal government. The Department of the Treasury and the Federal Housing Finance Agency publicly stated they were cutting ties with Anthropic. And the Departments of State and Health and Human Services reportedly circulated internal memoranda directing employees to stop using Anthropic's services.
10. On March 4, 2026, at 8:48 p.m. Eastern, the Secretary of War sent Anthropic a letter about the "supply chain risk" designation in the Secretarial Order. That letter (the Secretarial Letter), dated March 3, notified Anthropic that "the Department of War (DoW) has determined . . . that the use of [Anthropic's] products in [the Department's] covered systems presents a supply chain risk" and that exercising the authority granted by 10 U.S.C. § 3252 against Anthropic is "necessary to protect national security." The Secretarial Letter pronounces that this determination covers all Anthropic "products" and "services," including any that "become available for procurement." And it asserts that "less intrusive measures are not reasonably available" to mitigate the risks that Anthropic's products and services supposedly pose to national security.
11. All of these unprecedented actions—the Presidential Directive, the Secretarial Order and the Secretarial Letter that followed it, and other agency actions taken in response to the Presidential Directive (collectively, the Challenged Actions)—are harming Anthropic irreparably. In Secretary Hegseth's own words, Anthropic's status in the eyes of the federal government has been "permanently altered." Official designation as a "Supply-Chain Risk to National Security" carries profound weight, particularly under a President who has threatened both "criminal consequences" and "the Full Power of the Presidency" to enforce compliance. Anthropic's contracts with the federal government are already being canceled. Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term. On top of those immediate economic harms, Anthropic's reputation and core First Amendment freedoms are under attack. Absent judicial relief, those harms will only compound in the weeks and months ahead.
12. The Challenged Actions are as unlawful as they are unprecedented. First, the Secretarial Order "designat[ing] Anthropic a Supply-Chain Risk to National Security" and prohibiting the Department's contractors, suppliers, and partners from "conduct[ing] any commercial activity with Anthropic"—and the Secretarial Letter purporting to implement the Order—violates both 10 U.S.C. § 3252 and the Administrative Procedure Act. The Secretary's actions are contrary to Section 3252's plain text, were issued without observance of the procedures Congress required, and are arbitrary, capricious, and an abuse of discretion. Indeed, Anthropic had been one of the government's most trusted partners until its views clashed with the Department's.
13. Second, the Challenged Actions retaliated against Anthropic for its speech and other protected activities in violation of the First Amendment. The Constitution confers on Anthropic the right to express its views—both publicly and to the government—about the limitations of its own AI services and important issues of AI safety. The government does not have to agree with those views. Nor does it have to use Anthropic's products. But the government may not employ "the power of the State to punish or suppress [Anthropic's] disfavored expression." Nat'l Rifle Ass'n of Am. v. Vullo, 602 U.S. 175, 188 (2024).
14. Third, the Presidential Directive requiring every federal agency to immediately cease all use of Anthropic's technology, and actions taken by other defendants in response to that directive, are outside any authority that Congress has granted the Executive. And "[w]hen an executive acts ultra vires, courts are normally available to reestablish the limits on his authority." Chamber of Com. of U.S. v. Reich, 74 F.3d 1322, 1328 (D.C. Cir. 1996).
15. Fourth, the Challenged Actions violate the Fifth Amendment's Due Process Clause. Anthropic has weighty property and liberty interests in its reputation, its business relationships, its future business prospects, and its advocacy. The Challenged Actions arbitrarily deprive Anthropic of those interests without any process, much less due process.
16. Fifth, the Challenged Actions violate the APA's prohibition against imposing any "sanction," "penalty," "revocation," "suspension," or other "compulsory or restrictive" action against a person "except within jurisdiction delegated to the agency and as authorized by law." 5 U.S.C. §§ 551, 558.
17. The consequences of this case are enormous. The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance—AI safety and the limitations of its own AI models—in violation of the Constitution and laws of the United States. Defendants are seeking to destroy the economic value created by one of the world's fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation. The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance. There is no valid justification for the Challenged Actions. The Court should declare them unlawful and enjoin Defendants from taking any steps to implement them.
PARTIES
18. Plaintiff Anthropic is a public benefit corporation organized under the laws of Delaware and headquartered in San Francisco. Anthropic's customers range from Fortune 500 companies and U.S. government agencies to small businesses and individual consumers who have integrated Claude into the core of how they work, transforming workflows on a wide range of tasks.
19. The U.S. Department of War is a federal agency headquartered in Washington, D.C.
20. The Federal Housing Finance Agency is a federal agency headquartered in Washington, D.C.
21. The U.S. Department of the Treasury is a federal agency headquartered in Washington, D.C.
22. The U.S. Department of State is a federal agency headquartered in Washington, D.C.
23. The U.S. Department of Health and Human Services is a federal agency headquartered in Washington, D.C.
24. The U.S. Department of Commerce is a federal agency headquartered in Washington, D.C.
25. The U.S. Department of Veterans Affairs is a federal agency headquartered in Washington, D.C.
26. The General Services Administration is a federal agency headquartered in Washington, D.C.
27. The U.S. Office of Personnel Management is a federal agency headquartered in Washington, D.C.
28. The U.S. Nuclear Regulatory Commission is a federal agency headquartered in Rockville, Maryland.
29. The U.S. Social Security Administration is a federal agency headquartered in Baltimore, Maryland.
30. The U.S. Department of Homeland Security is a federal agency headquartered in Washington, D.C.
31. The Securities and Exchange Commission is a federal agency headquartered in Washington, D.C.
32. The National Aeronautics and Space Administration is a federal agency headquartered in Washington, D.C.
33. The U.S. Department of Energy is a federal agency headquartered in Washington, D.C.
34. The Federal Reserve Board of Governors is a federal agency headquartered in Washington, D.C.
35. The National Endowment for the Arts is a federal agency headquartered in Washington, D.C.
36. The Executive Office of the President is a federal agency headquartered in Washington, D.C.
37. Peter B. Hegseth is the Secretary of War and head of Defendant U.S. Department of War. He is sued in his official capacity.
38. Scott Bessent is the Secretary of the Treasury and head of Defendant U.S. Department of the Treasury. He is sued in his official capacity.
39. William J. Pulte is the Director of Defendant Federal Housing Finance Agency. He is sued in his official capacity.
40. Marco Rubio is the Secretary of State and head of Defendant U.S. Department of State. He is sued in his official capacity.
41. Robert F. Kennedy, Jr. is the Secretary of Health and Human Services and head of Defendant U.S. Department of Health and Human Services. He is sued in his official capacity.
42. Howard Lutnick is the Secretary of Commerce and head of Defendant U.S. Department of Commerce. He is sued in his official capacity.
43. Douglas A. Collins is the Secretary of Veterans Affairs and head of Defendant U.S. Department of Veterans Affairs. He is sued in his official capacity.
44. Edward C. Forst is the Administrator of Defendant General Services Administration. He is sued in his official capacity.
45. Scott Kupor is the Director of Defendant U.S. Office of Personnel Management. He is sued in his official capacity.
46. Ho K. Nieh is the Chairman of Defendant U.S. Nuclear Regulatory Commission. He is sued in his official capacity.
47. Frank J. Bisignano is the Commissioner of Defendant U.S. Social Security Administration. He is sued in his official capacity.
48. Kristi Noem is the Secretary of Homeland Security and the head of Defendant U.S. Department of Homeland Security. She is sued in her official capacity.
49. Paul S. Atkins is the Chairman of Defendant Securities and Exchange Commission. He is sued in his official capacity.
50. Jared Isaacman is the Administrator of Defendant National Aeronautics and Space Administration. He is sued in his official capacity.
51. Chris Wright is the Secretary of Energy and head of Defendant U.S. Department of Energy. He is sued in his official capacity.
52. Jerome H. Powell is the Chairman of Defendant Federal Reserve Board of Governors. He is sued in his official capacity.
53. Mary Anne Carter is the Chairman of Defendant National Endowment for the Arts. She is sued in her official capacity.
54. Doe Defendants 1 through 10 are federal departments, agencies, offices, or instrumentalities—including responsible officials within them—beyond those specifically identified above that have participated in the development and implementation of the Challenged Actions. All individual officials among the Doe Defendants are sued in their official capacities. Their true names and capacities are unknown to Anthropic at this time, and Anthropic will seek leave to amend this Complaint to identify them as their identities and roles become known.
JURISDICTION AND VENUE
55. This Court has subject-matter jurisdiction under 28 U.S.C. § 1331 because this civil action arises under the Constitution of the United States and federal statutes. This Court is authorized to award the requested relief under Rules 57 and 65 of the Federal Rules of Civil Procedure; the Administrative Procedure Act (APA), 5 U.S.C. §§ 702, 705, 706; the Declaratory Judgment Act, 28 U.S.C. §§ 2201-02; the All Writs Act, 28 U.S.C. § 1651; and the court's inherent equitable powers. The APA waives sovereign immunity. 5 U.S.C. § 702.
56. This Court also has authority to enjoin unlawful official action that is ultra vires, see, e.g., Reich, 74 F.3d at 1327-28, or that violates the Constitution, see Free Enter. Fund v. Pub. Co. Acct. Oversight Bd., 561 U.S. 477, 491 n.2 (2010). The Supreme Court has long held that federal courts have equitable power to grant injunctive relief "with respect to violations of federal law by federal officials." Armstrong v. Exceptional Child Ctr., Inc., 575 U.S. 320, 326-27 (2015); see also Larson v. Domestic & Foreign Com. Corp., 337 U.S. 682, 689 (1949).
57. Venue is proper in this District under 28 U.S.C. § 1391(e)(1)(C), because Defendants are agencies of the United States and officers of the United States acting in their official capacities, Plaintiff resides in this District, and no real property is involved.
FACTUAL BACKGROUND
Artificial Intelligence (AI) Models
58. Claude is a versatile, industry-leading large language model (LLM) that can be used in many different contexts depending on a user's needs. Anthropic first launched Claude in March 2023. The company has released several more versions of Claude since then, most recently Claude Opus 4.6 and Claude Sonnet 4.6 in February 2026.
59. LLMs like Claude are algorithmic systems trained on massive datasets to identify patterns and associations in language, and to generate outputs and take actions that resemble human responses and actions. Through training, models acquire predictive power and the transformative ability to take a range of actions in a fraction of the time it would take humans to perform them.
60. When deployed through a chatbot interface, Claude can interpret and respond to a vast variety of user inputs, known as "prompts," in an intelligent, human-like way. Depending on the nature of the user's prompt, Claude can: process basic instructions and logical scenarios; take direction on tone and "personality" when providing outputs; write in different languages; provide outputs in a variety of programming languages; analyze large amounts of information; and provide answers to user queries, with detailed background on technical, scientific, and cultural knowledge.
61. Claude may also be configured with tools that enable it to behave "agentically," meaning it can take actions on behalf of a user such as retrieving information, navigating online resources, writing and executing code, interacting with external services, or carrying out open-ended tasks that Claude plans and adapts. In certain configurations, Claude can perform tasks with minimal ongoing user input, operating with a degree of autonomy. Although this agentic use of AI systems is of particular interest to some users, including governments, it also presents heightened risks compared to traditional, prompt-response chatbot interactions.
62. AI models like Claude are not perfect. Despite developers' best efforts, models can generate inaccurate or misguided responses, or they can "hallucinate," confidently providing incorrect information. This is in part because models generate responses by sampling from a probability distribution rather than by selecting outputs pursuant to predefined rules. As a result, the outputs may or may not be factually accurate, and the same model, given the same prompt twice, may provide two different responses.
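The nondeterminism described in this paragraph follows directly from how LLMs select each token. A minimal sketch (not Anthropic's actual implementation; the function name and logit values are illustrative) of softmax sampling shows why the same prompt can produce different outputs on different runs:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a next-token index from raw model scores (logits).

    Softmax converts the logits into a probability distribution;
    drawing from that distribution, rather than always taking the
    highest-scoring token, is what makes generation stochastic.
    """
    rng = rng or random
    # Temperature rescales the logits; higher values flatten the
    # distribution and increase output variety.
    scaled = [score / temperature for score in logits]
    # Subtract the max before exponentiating for numerical stability.
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution until
    # the random draw falls inside a token's probability mass.
    draw = rng.random()
    cumulative = 0.0
    for index, p in enumerate(probs):
        cumulative += p
        if draw < cumulative:
            return index
    return len(probs) - 1

# Hypothetical scores for three candidate tokens; repeated calls
# with identical inputs can return different indices.
logits = [2.0, 1.5, 0.3]
choices = {sample_token(logits) for _ in range(200)}
```

Because each call draws fresh randomness, two identical prompts can diverge at any token and then compound that divergence, which is consistent with the complaint's observation that the same model, given the same prompt twice, may provide two different responses.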
Anthropic's Foundational Commitment To AI Safety
63. Anthropic was founded in 2021 by seven former employees of OpenAI committed to the belief that AI will have a vast impact on the world and that AI development should maximize positive outcomes for humanity. Anthropic believes that AI policy decisions in the next few years will touch nearly every part of public life and that questions of AI policy governance are inherently nonpartisan. To that end, Anthropic has earned a reputation as an advocate dedicated to building a safer AI ecosystem. In keeping with that founding mission, Anthropic also builds frontier AI systems and strives to deploy those systems responsibly, in service of human progress. Anthropic began as a research-first company, devoted to AI research, adversarial testing, and policy work to further AI safety. That focus continues today.
64. As a public benefit corporation (PBC), Anthropic balances stockholder interests with its public benefit purpose of responsibly developing and maintaining advanced AI for the long-term benefit of humanity. The Delaware PBC statute permits its board to consider safety, ethics, and societal impact as part of ordinary corporate decision-making, rather than treat profit maximization as the sole objective.
65. These beliefs are fully compatible with responsible use of Claude by the Department of War. Claude has a wide range of specialized defense applications, including autonomously completing complex software engineering projects related to offensive and defensive cyber operations and vulnerability detection; supporting military operations; performing intelligence analysis; and even handling national security workflows on a custom fine-tuned version of Claude developed for classified networks.
66. Anthropic has developed a detailed Usage Policy to address the unique risks of AI, encourage safe and responsible uses of its models, and prohibit a wide range of behaviors contrary to its mission and values. Among other things, that Policy prohibits users from selling illegal drugs, engaging in human trafficking, exploiting cyber vulnerabilities, designing weapons or delivery processes for the deployment of weapons, or engaging in surveillance of persons without their consent. By its terms, the Policy has always prohibited the use of Anthropic's services for lethal autonomous warfare without human oversight and surveillance of Americans en masse.
The Federal Government's Embrace Of AI And Contracts With Anthropic
67. Since taking office, the Trump Administration has made global adoption of U.S.-developed AI systems a stated policy priority. The President has issued multiple Executive Orders focused on America's global AI dominance. His Administration released an "AI Action Plan" focused in part on promoting AI adoption throughout the federal government, which Anthropic publicly supported. Last year, the General Services Administration (GSA) added Claude and other AI providers to its list of approved vendors. The Department likewise has significantly expanded its use of artificial intelligence and entered into multiple major contracts with leading AI companies to scale AI capabilities across defense and intelligence missions, including "warfighting, intelligence, business, and enterprise information systems."
68. Anthropic is committed to these objectives and has invested considerable resources to support the government's national security work. Today, Claude is reportedly the Department's most widely deployed and used frontier AI model—and the only one currently on classified systems.
69. This did not happen overnight. Anthropic began building the infrastructure, partnerships, regulatory approvals, and capabilities necessary to support U.S. government operations in 2023. It joined the AI Safety Institute Consortium, collaborating with the federal government on AI safety research and evaluation frameworks. It entered into strategic partnerships with cloud providers to support its growing role in the national security ecosystem. And it invested substantial resources into pursuing—and obtaining—authorization in the Federal Risk and Authorization Management Program (FedRAMP), the government's security authorization framework for cloud products and services.
70. Anthropic has also developed specialized "Claude Gov" models tailored specifically for the national security context. These models have been built based on direct feedback from national security agencies to address real-world requirements, like improved handling of classified information, enhanced proficiency in critical languages, and sophisticated analysis of cybersecurity data. Claude Gov models undergo rigorous safety testing consistent with Anthropic's commitment to responsible AI.
71. To make Claude more useful for the military and intelligence components of the federal government, Anthropic does not impose the same restrictions on the military's use of Claude as it does on civilian customers. Claude Gov is less prone to refuse requests that would be prohibited in the civilian context, such as using Claude for handling classified documents, military operations, or threat analysis. Anthropic's terms in its existing contracts with the government also recognize the government's unique needs and capabilities. For example, Anthropic's government-specific addendum to the Usage Policy permits Claude to be used to analyze lawfully collected foreign intelligence information, which would not be permitted under the Usage Policy for civilian users.
72. Since 2024, Anthropic has partnered with other national security contractors. Those partnerships have enabled the incorporation of Claude into the classified systems of the Department of War and other agencies. And they have allowed for the use of Claude to support government operations such as rapid processing of complex data, identifying trends, streamlining document review, and helping government officials make more informed decisions in time-sensitive situations.
73. Last year, Anthropic entered its first direct agreement with the Department's Chief Digital and Artificial Intelligence Office (CDAO). Under that agreement, Anthropic agreed to work with the Department to scope and develop use cases and, eventually, design a prototype AI service specifically for the Department's use. CDAO awarded similar agreements to Google, OpenAI, and xAI, each with a $200 million ceiling value, as part of its "commercial-first approach to accelerating DoD adoption of AI."
74. Anthropic worked diligently under that agreement, scoping out potential ways that the Department could best be served by Claude and related Anthropic professional services. During this period, the Department conveyed to Anthropic that Claude was the best solution for some of the proposals.
75. In the fall of 2025, Anthropic began negotiations for an additional agreement to provide a version of Claude on the Department's "GenAI.mil" AI platform. As part of those discussions, the Department asked Anthropic to excise its Usage Policy and allow the Department to use Claude for "all lawful uses." Because of Anthropic's commitment to U.S. national security, Anthropic substantially agreed to the proposal—except in two important respects.
76. First, Anthropic did not develop Claude (or the specialized Claude Gov models) to conduct lethal autonomous warfare without human oversight. Claude has not been trained or tested for that use. At least at present, Claude is simply not capable of performing such tasks responsibly without human oversight.
77. Second, Anthropic is unwilling to agree to Claude's use for mass surveillance of Americans. AI tools like Claude enable collection and analysis of information at speeds and scales not previously contemplated, posing unique risks for civil liberties given the potential for errors and misuse. These techniques would have been unimaginable when Congress enacted the existing frameworks regulating how the Executive Branch may conduct surveillance. AI technology is developing far more rapidly than those legal frameworks. And surveillance conducted using AI poses significantly greater potential to make mistakes—and to amplify the effect of any mistakes—than traditional techniques.
78. Allowing Claude to be used to enable the Department to surveil U.S. persons at scale and to field weapons systems that may kill without human oversight would therefore be inconsistent with Anthropic's founding purpose and public commitments. These important restrictions simply reflect what Anthropic knows to be true about Claude's limitations.
79. The Usage Policy does not provide Anthropic with any special capabilities to control, oversee, or second-guess the federal government's operations or the Department's military judgments. Nor does providing Claude to the government as a vendor place Anthropic in a position to intervene in or impede government decision-making. Indeed, while operating under the terms of the Usage Policy, the Department never previously raised any issues with its use of Claude or concerns about Anthropic's potential interference. Anthropic had only ever received positive feedback about Claude's capabilities from its government customers.
The Present Dispute
80. Later in 2025, the discussions regarding an additional agreement about deploying Claude on the "GenAI.mil" platform morphed into a negotiation over the Department's use of Claude more broadly. The Department demanded that—across all ongoing and future deployments of Claude—Anthropic abandon its Usage Policy and instead allow "all lawful use" of Claude. As part of these new demands, the Department sent partial contract language incorporating this term to Anthropic.
81. In early January 2026, Secretary Hegseth issued a memorandum directing the Department to "[u]nleash experimentation with America's leading AI models Department-wide" and execute a series of "Pace-Setting Projects" to accelerate AI adoption. To advance that goal, the memorandum directed the Department's procurement office to "incorporate standard 'any lawful use' language into any DoW contract" for AI services within 180 days. Three days later, Secretary Hegseth delivered remarks explaining that the Department was "blowing up . . . barriers."
82. Despite disagreement on the two use restrictions, Anthropic has continued to reiterate its commitment to providing Claude to serve the United States' national security interests and to negotiate in good faith with the Department.
83. But the Department chose a different path. In February 2026, a source inside the Department told reporters that it was "close" to cutting business ties with Anthropic and designating Anthropic a "supply chain risk," a designation that—to Anthropic's knowledge—has never before been applied to a domestic company. The source said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."
84. Until the Department raised this threat, no government official had ever raised a concern with Anthropic about potential supply chain vulnerabilities. On the contrary, the government has consistently provided the security clearances that are necessary for Anthropic's personnel to perform classified work. Those clearances remain in place today. Moreover, in 2024 Anthropic became the first frontier AI lab to collaborate with the Department of Energy to evaluate an AI model in a Top Secret classified environment.
85. Matters came to a head in a meeting between Secretary Hegseth and Dr. Dario Amodei, Anthropic's CEO, on February 24, 2026. Secretary Hegseth presented Anthropic with an ultimatum. He demanded that Anthropic accede to the Department's demands within four days or face one of two apparently contradictory punishments: either the Secretary would purport to invoke the Defense Production Act to force Anthropic to do as he said, or he would cast Anthropic out of the defense supply chain altogether as a supposed "supply chain risk." Pentagon officials confirmed in the media that the meeting was not intended to drive resolution, but rather to intimidate Anthropic.
86. After the February 24 meeting, a senior Pentagon official gave Anthropic "until 5:01pm [Eastern] Friday to get on board with the Department of War . . . . If they don't get on board, the Secretary of War will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon." The same official added, "the Secretary of War will also label Anthropic a supply chain risk." In other words, the official suggested that Anthropic was both necessary to national defense and—at the same time—an unacceptable risk to national security.
87. On February 26, Dr. Amodei issued a public statement describing Anthropic's adherence to its stated policy. He explained that "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." He again emphasized that the two restrictions giving rise to the dispute address uses that are "simply outside the bounds of what today's technology can safely and reliably do," and that Anthropic "cannot in good conscience accede to" the Department's request. He reiterated that "[o]ur strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place." And he promised that, "[s]hould the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
The Government Retaliates Against Anthropic
88. The next day—even before the 5:01 p.m. Eastern deadline—President Trump posted the Presidential Directive, purporting to direct all federal agencies to immediately cease all use of Anthropic's technology.
89. Secretary Hegseth immediately followed suit by posting a "final" decision on social media directing his Department to designate Anthropic a "Supply-Chain Risk to National Security" and decreeing that, "effective immediately," "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
90. The Secretarial Order did not clarify who qualifies as a "partner," what it means to do business "with the United States military" as opposed to the Department more broadly, or what "commercial activity" is prohibited. Regardless of what these other companies must do, the Order also insisted that "Anthropic will continue to provide the Department of War its services for a period of no more than six months."
91. But the Secretary left no doubt about his reasons: "Anthropic's stance is fundamentally incompatible with American principles." According to the Secretary, this "stance" includes "Silicon Valley ideology," "corporate virtue-signaling," "defective altruism," "arrogance," and even an attempt to hold "America's warfighters . . . hostage [to] the ideological whims of Big Tech." The Secretary thus distorted Anthropic's clear-eyed, expertise-driven understanding of its own technology's current limits into purported ideological extremism.
92. GSA also took immediate steps in "support of President Trump's directive," which it understood to "rejec[t] attempts to politicize work" and to require federal agencies to contract only with AI companies "who fit the bill." In a news release issued the same day as the Presidential Directive, GSA announced that it was removing Anthropic from USAi.gov and the Multiple Award Schedule contracts. A top GSA official separately announced that the agency had terminated Anthropic's "OneGov" contract.
93. Other government agencies soon fell in line, issuing multiple directives to begin implementing the President's and the Secretary's orders. For example, the Department of State and the Department of Health and Human Services (HHS) have acted on the President's directive through internal communications, according to public reporting. Monday morning, the U.S. Department of the Treasury and the Federal Housing Finance Agency announced they were terminating all use of Claude. Anthropic also received reports that the Chief Information Officer of a federal civilian agency advised all non-Department of War leadership to stop using Claude.
94. Private actors also took heed. Anthropic immediately received outreach from numerous outside partners—from customers, to cloud providers, to investors—expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic. Since the Challenged Actions, dozens of companies have contacted Anthropic seeking clarity, guidance, and, in some cases, an understanding of their termination rights.
95. An official confirmed that the Department's actions are a response to Anthropic's purported "behavior" in negotiations and threatened not just to terminate Anthropic's contracts but "require that all our vendors and contractors certify that they don't use any Anthropic models."
96. Other government officials relayed the personal and ideological nature of the Department's objective: "The problem with [Anthropic's CEO] Dario [Amodei] is, with him, it's ideological. We know who we're dealing with." This followed public condemnation of Anthropic and its usage policies by the Department's Chief Technology Officer as "not democratic."
97. Throughout, the federal government has never once expressed concerns about Anthropic's security or Claude's competencies. Instead, it has repeatedly recognized that Anthropic is not only safe but an important national asset. Claude's FedRAMP authorization represents the highest level of cloud security certification for the handling of unclassified and controlled unclassified information. The Department approved (and has continued to maintain) a facility clearance for Anthropic as well as numerous security clearances for Anthropic's personnel so they can perform classified work. Never during any of these security-focused processes did the government determine that Anthropic or its services posed a supply chain risk. Indeed, the FedRAMP authorization and facility security clearance and personnel clearances could not have been issued had any such determination been made.
98. Even during the recent negotiations, the government has repeatedly and publicly praised Claude's capabilities. Chief Technology Officer and Under Secretary of War Emil Michael, while describing the dispute with Anthropic, explicitly characterized Anthropic as one of America's "national champions" in AI. In the February 24 meeting with Dr. Amodei, Secretary Hegseth described Anthropic's technology as having "exquisite capabilities" and stated that the Department would "love" to work with Anthropic.
99. Senior administration officials have likewise repeatedly acknowledged that displacing Anthropic from its role would be disruptive because competing AI models "are just behind" when it comes to specialized government applications.
100. Department officials have even expressed concerns about the consequences of losing access to Claude. Describing the dispute between Anthropic and the Department, one official stated that "[t]he only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good."
101. Indeed, the President and Secretary Hegseth insisted that Claude must remain available to the Department for six months—even after another AI company had indicated it would accede to the Department's demand to make its models available for "all lawful uses," and apparently as the Department was in talks with a third AI company that recently announced it is inclined to do the same thing. Within hours of the Challenged Actions, moreover, the Department reportedly "launched a major air attack in Iran with the help of [the] very same tools" that are "made by" Anthropic and are the subject of the Challenged Actions.
102. And senior officials within the Department recently confirmed to the press what is apparent from the facts: One official who manages information security said that the Secretarial Order was "ideological" rather than an accurate description of risk. Another official, who specifically evaluates supply chain risk and other potential intelligence threats, acknowledged "there is no evidence of supply-chain risk" from Anthropic's AI model and reiterated that the Secretarial Order was "ideologically driven."
103. Indeed, the President himself made clear that his Administration's retaliatory actions towards Anthropic were a direct result of the views Anthropic expressed to the government and the public about the limitations on the use of its own product: "Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn't have done that."
The Secretary Notifies Anthropic Of His "Supply Chain Risk" Designation
104. Even as agencies across the federal government moved to implement the Presidential Directive, Dr. Amodei and Under Secretary of War Michael continued negotiations in an effort to resolve or de-escalate the dispute. Those discussions were still underway when, at 8:48 p.m. Eastern on March 4, the Secretary of War sent Anthropic a letter. The letter, dated March 3, 2026, notified it of the "supply-chain risk" designation—almost a week after the Secretary had announced that designation on social media.
105. The two-page letter did not explain what risk Anthropic's services supposedly pose to national security. Its stated rationale reads in full: "the Department of War has determined that (i) the use of the Covered Entity's products or services in DoW covered systems presents a supply chain risk and that the use of the Section 3252 authority to carry out covered procurement actions is necessary to protect national security by reducing supply chain risk, and (ii) less intrusive measures are not reasonably available to reduce such supply chain risk."
106. Based on that "determination," the Secretarial Letter purports to exclude Anthropic—including all of its subsidiaries, successors, and affiliates—as a source for all Department procurements involving covered national security systems, effective immediately. The Letter does not explain the scope of procurements covered by the Secretary's action.
The Challenged Actions Are Causing Immediate And Irreparable Harm To Anthropic
107. The Challenged Actions have inflicted immediate, far-ranging, and irreversible harm on Anthropic. These harms will continue unless the Challenged Actions are declared unlawful and enjoined.
108. Anthropic has built a reputation as a public benefit corporation that is committed to AI safety and the responsible deployment of its technology. That reputation is critical to its continued success and growth. Secretary Hegseth's unlawful designation of Anthropic as "a Supply-Chain Risk to National Security" undoubtedly harms Anthropic's reputation, as does Defendants' unlawful decision to bar "EVERY Federal Agency in the United States Government" from using Anthropic's technology.
109. The Challenged Actions also inflict immediate and unrecoverable revenue losses: Anthropic stands to lose the federal contracts it already has, as well as its prospects to pursue federal contracts in the future.
110. Anthropic's business partnerships and contracts with other federal contractors are likewise in jeopardy. For example, one federal contractor with whom Anthropic has built custom applications has indicated that it may suspend that work or even remove Claude from existing deployments. Other federal contractors are raising concerns, pausing collaborations, and considering terminating contracts. Anthropic has no way to obtain redress from the government for those economic harms.
111. And those practical and economic injuries are not the only irreparable harms inflicted by the Challenged Actions. "The loss of First Amendment freedoms, for even minimal periods of time, unquestionably constitutes irreparable injury." Roman Catholic Diocese of Brooklyn v. Cuomo, 592 U.S. 14, 19 (2020) (per curiam).
112. All of this is precisely what Defendants intended: to punish Anthropic for adhering to its views. Anthropic was founded on its commitment to developing AI responsibly. Defendants presented Anthropic with a stark choice: silence its views on safe AI, capitulate to the Department's demands, and offer Claude on terms that are unsafe and violate its core principles—or else suffer swift harm at the hand of the federal government. When Anthropic adhered to its longstanding views about AI safety and the limitations of its services, Defendants carried out that threat.
CLAIMS
COUNT I
ADMINISTRATIVE PROCEDURE ACT; 10 U.S.C. § 3252 (5 U.S.C. § 706)
(DEFENDANTS HEGSETH AND THE DEPARTMENT OF WAR)
113. Anthropic incorporates by reference the allegations of the preceding paragraphs.
114. The APA requires courts to "hold unlawful and set aside" final agency action that is "arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law," or is "in excess of statutory jurisdiction, authority, or limitations, or short of statutory right," or "without observance of procedure required by law." 5 U.S.C. § 706(2)(A), (C), (D).
115. The February 27 Secretarial Order purported to "direct[] the Department of War to designate Anthropic a Supply-Chain Risk to National Security" and ordered that, "[e]ffective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The Order also emphasized that "[t]his decision is final."
116. The Secretarial Order is a final agency action for purposes of the APA. It is an "agency action" because it is an "order" (i.e., a "disposition . . . in a matter other than rulemaking") and also a "sanction" that "prohibit[s]," "limit[s]," or otherwise "affect[s]" Anthropic's freedom to compete for federal contracts and maintain its business relationships. 5 U.S.C. § 551(6), (10), (13). It is final both because Secretary Hegseth said so and because it finally "determine[s]" the "rights or obligations" of Anthropic and is backed by "legal consequences." Bennett v. Spear, 520 U.S. 154, 177-78 (1997). Effective "immediately," the decision purports to direct that no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
117. A week later, the Secretary sent Anthropic a letter notifying it that the Department "has determined" that the use of Anthropic's "products or services in DoW covered systems presents a supply chain risk" and that it is necessary for the Department to use its authority under 10 U.S.C. § 3252 "to protect national security by reducing supply chain risk." The Secretarial Letter also asserts that "less intrusive measures are not reasonably available to reduce such supply chain risk." Those statements are the only explanations offered in the Secretarial Letter for the supply chain risk designation. And the Secretarial Letter does not purport to rescind or amend the Secretarial Order. See generally Nat'l Urb. League v. Ross, 508 F. Supp. 3d 663 (N.D. Cal. 2020) ("A final agency action does not become non-final after it is implemented.").
118. An agency acts arbitrarily and capriciously when it "entirely fail[s] to consider an important aspect of the problem," offers "an explanation for its decision that runs counter to the evidence before the agency," or fails to "articulate a satisfactory explanation for its action including a rational connection between the facts found and the choice made." Motor Vehicle Mfrs. Ass'n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29, 43 (1983) (internal quotation marks omitted).
119. The Secretarial Order, and the attempt to implement and explain that Order via the Secretarial Letter, violate the standards of Section 706 at every turn.
120. First, the Order exceeds the authority granted by Congress in 10 U.S.C. § 3252, the federal statute addressing "supply chain risk[s]." That statute does not provide the government a remedy for failed contract negotiations. Nor does it delegate freewheeling authority to the Secretary to redefine "supply chain risk" to cover a contractor who declines to modify its terms of use to track the Department's preferences.
121. Instead, Section 3252 authorizes exclusion with respect to a prime contractor or subcontractor only when necessary to protect against the risk that an adversary may "sabotage . . . or otherwise subvert" an information system used for national security purposes. 10 U.S.C. § 3252(b)(2)(A), (d)(4)-(5); 44 U.S.C. § 3552(b)(6). The Secretary has not determined, and cannot reasonably determine, that Anthropic's services present a risk of sabotage or subversion by an adversary to the United States.
122. Anthropic is not, and has no ties to, an "adversary" to the United States. The Executive Branch has defined the term to mean China, Russia, Iran, North Korea, Cuba, and Venezuela. See Exec. Order No. 13,873, 84 Fed. Reg. 22689 (May 15, 2019); 15 C.F.R. § 791.4(a). Anthropic is a U.S.-incorporated, U.S.-headquartered public benefit corporation with a demonstrated history of supporting the United States government and its national security interests. The Secretary has not articulated any determination otherwise. Nor is there any other valid basis for the Secretary to determine that designating Anthropic presents a risk of "sabotage" or "subver[sion]." Indeed, Anthropic has gone to significant lengths to prevent the use of its technology by entities linked to the Chinese Communist Party, has shut down attempts to abuse Claude for state-sponsored cyber operations, and has advocated for strong export controls on the most powerful chips used to train AI, all to preserve the U.S. lead in frontier AI development.
123. Second, the Secretary's actions failed to follow the procedure Congress required before excluding a source from contracts or subcontracts on the basis that it poses an unacceptable "supply chain risk." Under Section 3252, the Secretary must consult with other relevant officials and determine in writing (1) that an exclusion is "necessary to protect national security by reducing supply chain risk," and (2) that "less intrusive measures are not reasonably available to reduce such supply chain risk." 10 U.S.C. § 3252(b)(1), (b)(2)(A)-(B). Then the Secretary must notify the appropriate congressional committees of that determination, providing a summary of the risk assessment and the basis for determining that less intrusive options were not available. 10 U.S.C. § 3252(b)(3). On information and belief, no valid Section 3252 determination was made prior to the February 27 Secretarial Order. The Secretary did not consult with relevant procurement officials, did not make any written determination that less intrusive measures were unavailable, and did not notify Congress before issuing the Order. And even the Secretarial Letter received by Anthropic on March 4, which recited the "necessary to protect national security" and "less intrusive measures are not reasonably available" language from 10 U.S.C. § 3252(b)(2)(A)-(B), did not describe any consultation with relevant procurement officials or any congressional notification.
124. With respect to contracts entered directly with the government, Section 3252 authorizes the exclusion of a source only if it has failed either to "meet qualification standards" or "achieve an acceptable rating with regard to an evaluation factor." 10 U.S.C. § 3252(d)(2)(A)-(B). In both cases, those conditions relate to the risk that an adversary may sabotage, maliciously interfere with, or otherwise subvert a covered system. The Secretary has not determined—and could not reasonably determine—that Anthropic's services fail to meet qualification standards or achieve an acceptable rating related to any evaluation factor for a procurement. The February 27 Secretarial Order contains no such determination. And the Secretarial Letter sent on March 4 does not address those statutory criteria.
125. To the contrary, the Secretary himself has recognized Claude's capabilities as "exquisite." His Department suggested that Claude was so vital to our national defense that it needed to be commandeered under the Defense Production Act. And he has ordered that "Anthropic will continue to provide" its services to the Department of War for up to "six months." The "unexplained inconsistenc[y]" between simultaneously designating Anthropic's services a supply chain risk vulnerable to "sabotage" or other "subver[sion]" by a foreign adversary while directing those services to be used for up to six months for national security purposes demonstrates the arbitrariness of the Secretary's final decision. Dist. Hosp. Partners, L.P. v. Burwell, 786 F.3d 46, 59 (D.C. Cir. 2015) (collecting authority).
126. Additionally, nothing in the statute authorizes the Secretary to require every "contractor, supplier, or partner that does business with the United States military" to blacklist the excluded source.
127. Third, the Secretarial Order was arbitrary and capricious because it failed to provide a rational and "satisfactory explanation" for designating Anthropic a supply chain risk. Motor Vehicle Mfrs. Ass'n, 463 U.S. at 43. The Secretary's February 27 Order announcing his "final" decision contains invective against Anthropic, but no explanation of why Claude constitutes a supply chain risk. It does not attempt to reconcile the Secretary's assertion that those models are a threat "to National Security" with his decision to allow the Department to continue using them for half a year—let alone the Department's past praise for those models or its simultaneous suggestion that Anthropic might be commandeered into providing them on the Department's terms under the Defense Production Act.
128. The post hoc Secretarial Letter does not meaningfully elaborate on that explanation. It parrots the statutory predicates of Section 3252: that Anthropic presents a "supply chain risk," that the designation is "necessary to protect national security," and that "less intrusive measures [were] not reasonably available." But it offers no explanation for any of these conclusions; addresses none of the inconsistencies that rendered the Secretarial Order arbitrary; and supplies none of the reasoned analysis the Order lacked.
129. The only explanation provided by the Secretary for his action is pure retaliation. That is plain on the face of the Secretarial Order, in which the Secretary criticized Anthropic as "ideological" and insufficiently "patriotic." And it is confirmed by senior Department officials who unabashedly told the press that the Secretary designated Anthropic as a supply chain risk to "make sure [Anthropic] pays a price" for declining to concede to the Department's demands; that the Secretarial Order was "ideological" rather than an accurate description of risk; that "there is no evidence of supply-chain risk"; and that the Secretarial Order was "ideologically driven."
130. The Secretary's actions are arbitrary and capricious in multiple other ways. For example, the Secretary failed to consider less restrictive alternatives. Several were available here, and Anthropic itself had offered them as options. First, Anthropic repeatedly offered to support an orderly transition to a new provider—one willing to accept the Department's proposed terms—at nominal cost if the parties failed to come to an agreement. But the Department had other options as well, including agreeing to Anthropic's proposed usage limitations or continuing the negotiations already underway. Neither the Secretarial Order nor the Secretarial Letter identifies any of these alternatives, much less explains why they are insufficient.
131. The Secretary also failed to address the consequences of his actions for Anthropic, other companies that deal with the federal government, and Anthropic's commercial counterparties. He also failed to reasonably account for Anthropic's reliance interests. Neither the Secretarial Order nor the Secretarial Letter grapples with those considerations. And the Secretarial Order relied on extra-statutory factors that Congress did not intend for him to consider under Section 3252, such as Anthropic's position in contract negotiations and its public statements on AI safety.
132. For these reasons, the Court should declare that the Secretarial Order is "in excess of statutory jurisdiction, authority, or limitations," 5 U.S.C. § 706(2)(C), and "arbitrary, capricious . . . or otherwise not in accordance with law," id. § 706(2)(A), set the order aside, and enjoin Defendants (other than the President) from taking any action to implement or enforce it, including through the Secretarial Letter.
133. Defendants' APA violations have caused Anthropic ongoing and irreparable harm.
COUNT II
FIRST AMENDMENT TO THE U.S. CONSTITUTION
(EQUITABLE CAUSE OF ACTION; 5 U.S.C. § 702)
(ALL DEFENDANTS)
134. Anthropic incorporates by reference the allegations of the preceding paragraphs.
135. The First Amendment to the Constitution provides that the federal Government "shall make no law . . . abridging the freedom of speech . . . or [abridging] the right of the people to petition the Government for a redress of grievances." U.S. Const. amend. I.
136. The Challenged Actions violate Anthropic's First Amendment rights because they constitute paradigmatic retaliation against Anthropic's expressive activities, including protected speech, protected viewpoints, and protected petitioning of the government.
137. The First Amendment "prohibits government officials from subjecting individuals to retaliatory actions after the fact for having engaged in protected speech." Hous. Cmty. Coll. Sys. v. Wilson, 595 U.S. 468, 474 n.2 (2022); Nieves v. Bartlett, 587 U.S. 391, 398 (2019) (similar). Indeed, "[s]tate action designed to retaliate against and chill" protected expression "strikes at the heart of the First Amendment." Gibson v. United States, 781 F.2d 1334, 1338 (9th Cir. 1986).
138. Succeeding on a retaliation claim requires Anthropic to show that "(1) [it] was engaged in a constitutionally protected activity, (2) the defendant's actions would chill a person of ordinary firmness from continuing to engage in the protected activity and (3) the protected activity was a substantial or motivating factor in the defendant's conduct." O'Brien v. Welty, 818 F.3d 920, 932 (9th Cir. 2016); President & Fellows of Harvard Coll. v. United States Dep't of Homeland Sec., 788 F. Supp. 3d 182, 206 (D. Mass. 2025) ("The elements of a Petition Clause retaliation claim are identical to those of a free speech retaliation claim."). All three elements are easily established here.
139. First, Anthropic engaged in protected First Amendment expression, in multiple respects.
140. To start, Anthropic has been a leading voice on AI safety and policy since its inception. The company frequently weighs in on pending legislation: It has advocated for the bipartisan Future of AI Innovation Act, which supports the efforts of the National Institute of Standards and Technology's Center for AI Standards and Innovation (CAISI) to undertake research on AI safety risks. And it has backed the CREATE AI Act of 2025 and the GAIN Act of 2025—bipartisan safety bills that align with the company's policy priorities. Anthropic also maintains a bipartisan lobbying effort and has donated money to organizations that promote AI safety.
141. The company's public speech extends to its Usage Policy. That policy, posted prominently on Anthropic's website, implements and embodies the company's foundational commitment to the safe and responsible use of AI. Consistent with Anthropic's founding ethos, the policy "is calibrated to strike an optimal balance between enabling beneficial uses and mitigating potential harms." As explained above, the Usage Policy has never permitted Claude to be used for mass surveillance of Americans or for lethal autonomous warfare.
142. Anthropic's executives speak publicly on these topics. In June 2025, Dr. Amodei published an op-ed opposing federal legislation that would have imposed a moratorium on state regulation of AI. In October 2025, he released a statement praising President Trump's AI action plan, reiterating his opposition to a federal moratorium on state AI regulation, and emphasizing Anthropic's support for SB 53, a since-enacted California AI safety bill. And, as noted above, on February 26, 2026, he issued a public statement regarding the importance of Anthropic's usage restrictions on lethal autonomous warfare and mass surveillance of Americans, emphasizing that those uses are "simply outside the bounds of what today's technology can safely and reliably do," and that Anthropic "cannot in good conscience" abandon those particular restrictions.
143. In addition, Anthropic's communications with the government are protected speech. Cf. Janus v. Am. Fed'n of State, Cnty., & Mun. Emps., Council 31, 585 U.S. 878, 893-94 (2018) (recognizing that "collective bargaining" with the government is "private speech" that is protected by the First Amendment); President & Fellows of Harvard Coll., 788 F. Supp. 3d at 203 ("refusing to cede" on issues of public importance "constitute[s] . . . protected conduct" even if expressed as "rejection" of contract terms).
144. Throughout its negotiations with the Department, Anthropic expressed its views about Claude's capabilities and the uses to which Claude can safely and responsibly be put. Anthropic has also spoken out about the threat to civil liberties that AI-enabled mass surveillance of Americans poses. Anthropic has discussed these issues directly with the Department and has shared its views with the public. These expressions of Anthropic's viewpoints are entitled to full First Amendment protection. And that expression is what the Challenged Actions seek to punish.
145. Anthropic also engaged in protected First Amendment activity when it petitioned the government to honor Anthropic's use restrictions with respect to lethal autonomous warfare systems that lack any human oversight and mass surveillance of Americans. The First Amendment protects the right "to petition the Government for a redress." U.S. Const. amend. I. Anthropic exercised that right by communicating its position to the Department, explaining the basis for that position, and seeking to persuade the government to embrace that view. See BE & K Const. Co. v. N.L.R.B., 536 U.S. 516, 525 (2002) ("[T]he right to petition extends to all departments of the Government") (citation omitted). Anthropic was not simply engaged in contract negotiations; it was expressing a position on an issue of significant public importance for which it had unique expertise—the appropriate use of its own AI models. The government's response was drastic and punitive, retaliating against the core freedoms the Petition Clause protects.
146. Second, the Challenged Actions impose significant financial and reputational costs on Anthropic that would chill a company of ordinary firmness from continuing to engage in expressive activity. Government action is "adverse" for purposes of a First Amendment retaliation claim if it is "designed to . . . chill political expression," Mendocino Env't Ctr. v. Mendocino Cnty., 14 F.3d 457, 464 (9th Cir. 1994) (emphasis added), or "would chill a person of ordinary firmness from continuing to engage in the protected activity," Blair v. Bethel Sch. Dist., 608 F.3d 540, 543 (9th Cir. 2010). The Challenged Actions satisfy both tests. By their very terms, they are intended to force Anthropic to "get their act together[] and be helpful." And they carry severe and wide-ranging consequences that ripple far beyond any single contract.
147. The Challenged Actions also assign Anthropic a "supply chain risk" designation that is reserved for companies that create a risk of "sabotage" or other acts of subversion by a foreign "adversary." 10 U.S.C. § 3252(d)(4). That label will follow Anthropic into every future procurement relationship across the federal government and with federal contractors, not to mention relationships with state and local governments and customers in other sectors. The threat of that extraordinarily stigmatizing label would undoubtedly chill the expressive activities of a company of ordinary firmness.
148. This adversity is severe, particularly in the fiercely competitive AI marketplace, where reputational damage can quickly lead to pecuniary harm. See Riley's Am. Heritage Farms v. Elsasser, 32 F.4th 707, 723 (9th Cir. 2022) ("A plaintiff establishes . . . adverse action . . . by demonstrating that the government action threatened or caused pecuniary harm"); Arizona Students' Ass'n v. Arizona Bd. of Regents, 824 F.3d 858, 868 (9th Cir. 2016) ("[T]he government may chill speech by threatening or causing pecuniary harm . . . [or] withholding a license, right, or benefit . . . .").
149. Third, Anthropic's protected expression was not only a substantial factor underlying the Challenged Actions; it was the motivating factor. The causal link could not be clearer: Defendants threatened Anthropic and then took the Challenged Actions only after Anthropic refused to change its position on acceptable uses of Claude and publicly explained why. Indeed, the government made clear that it took the Challenged Actions because of Anthropic's steadfast expression of its views about what Claude can and cannot do. For example, Secretary Hegseth directly criticized Anthropic's "rhetoric" when he announced the supply chain action and faulted the company for not being "more patriotic."
150. Actions designed to punish ideological disagreement are necessarily motivated by protected First Amendment activity. See, e.g., Mendocino Env't Ctr., 14 F.3d at 464; see also Perkins Coie LLP v. U.S. Dep't of Just., 783 F. Supp. 3d 105 (D.D.C. 2025) (holding Executive Order 14230 unconstitutional as retaliation for protected speech because its text made "clear that President Trump and his administration disfavor the specific messages conveyed by plaintiff").
151. And Defendants' public statements confirm that the government took the Challenged Actions because of what Anthropic said, not because of any legitimate procurement or security concern. No government actor has ever even attempted to identify any technical deficiency in Claude. To the contrary, Claude has been an unmitigated success for the American military. Perhaps that is why the government initially threatened to invoke the Defense Production Act against Anthropic and compel it to provide the very service that the government now calls a supply chain risk. In the government's own words, "we need them and we need them now" because Claude is just "that good." Without any technical motivations supporting the Challenged Actions, the only motivation left is the one candidly expressed by Defendants: disagreement with Anthropic's views.
152. To be sure, if it complies with the Constitution and governing statutes and regulations, the Department may terminate its contract with Anthropic. And it may look to procure services from other AI companies on the terms it prefers, as it has already done. Exercising that authority would have been unremarkable. Anthropic even offered to facilitate such a transition. But the Challenged Actions took a different path. These needless and extraordinarily punitive actions, imposed in broad daylight, are a paradigm of unconstitutional retaliation. See Soranno's Gasco, Inc. v. Morgan, 874 F.2d 1310, 1316 (9th Cir. 1989) (inferring a retaliatory motivation where the government's "chosen course of action was designed to maximize harm").
153. The government's First Amendment retaliation is made worse by the fact that it is content- and viewpoint-based. It is content-based because the retaliation is targeted at Anthropic for speaking on issues of AI safety and responsible AI use—"speech on public issues" that "occupies the highest rung of the hierarchy of First Amendment values." Snyder v. Phelps, 562 U.S. 443, 452 (2011). The Challenged Actions also punish Anthropic not just for speaking on that topic, but for Anthropic's viewpoints on that topic. See, e.g., Pleasant Grove City v. Summum, 555 U.S. 460, 469 (2009) ("restrictions based on viewpoint are prohibited").
154. Defendants' content- and viewpoint-based acts are subject to, but cannot possibly satisfy, strict scrutiny. See, e.g., Vidal v. Elster, 602 U.S. 286, 293 (2024); Waln v. Dysart Sch. Dist., 54 F.4th 1152, 1162 (9th Cir. 2022) ("Viewpoint-based restrictions on speech are subject to strict scrutiny.").
155. To survive strict scrutiny, the government must adopt "the least restrictive means of achieving a compelling state interest." McCullen v. Coakley, 573 U.S. 464, 478 (2014). "When the Government restricts speech, the Government bears the burden of proving the constitutionality of its actions." FEC v. Cruz, 596 U.S. 289, 305 (2022).
156. Defendants' asserted desire to stamp out competing viewpoints about what Claude can and cannot safely do is not a legitimate interest. See Crime Justice & Am., Inc. v. Honea, 876 F.3d 966, 973 (9th Cir. 2017) (a government interest is legitimate only if it is "unrelated to the suppression of expression.").
157. While the government has a compelling interest in addressing genuine supply chain risks, Defendants cannot show that the Challenged Actions advance that interest. And to the extent the government asserts a compelling interest in obtaining AI services without the two narrow safeguards Anthropic insists upon, the Challenged Actions were not the least-restrictive means of achieving that interest. The Department had a straightforward and unrestrictive option that would have fully served that interest: terminate the contract and hire a different developer. Indeed, Anthropic offered to facilitate a transition to one of its competitors' systems, and the Department is reportedly negotiating agreements with one or more frontier AI developers.
158. Defendants' First Amendment violations have caused Anthropic ongoing and irreparable harm.
COUNT III ARTICLE II OF THE U.S. CONSTITUTION; ULTRA VIRES ACTION (EQUITABLE CAUSE OF ACTION) (ALL DEFENDANTS)
159. Anthropic incorporates by reference the allegations of the preceding paragraphs.
160. "The ability to sue to enjoin unconstitutional actions by state and federal officers is the creation of courts of equity, and reflects a long history of judicial review of illegal executive action, tracing back to England." Armstrong, 575 U.S. at 327. "When an executive acts ultra vires, courts are normally available to reestablish the limits on his authority." Reich, 74 F.3d at 1328. "[I]t remains the responsibility of the judiciary to ensure that the President act[s] within those limits" that Congress and the Constitution place on him. Am. Forest Res. Council v. United States, 77 F.4th 787, 797 (D.C. Cir. 2023); accord Murphy Co. v. Biden, 65 F.4th 1122, 1129-31 (9th Cir. 2023).
161. Under longstanding Supreme Court precedent, "[t]he President's power, if any, to issue [that] order must stem either from an act of Congress or from the Constitution itself." Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579, 585 (1952).
162. The February 27 Presidential Directive purported to order "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology."
163. The President has no inherent Article II authority for the Presidential Directive. There is no "executive practice, long pursued to the knowledge of the Congress and never before questioned," Youngstown, 343 U.S. at 610 (Frankfurter, J., concurring), of Presidents using their official position to punish corporations for expressing views on matters of public concern in negotiations with the government. The "President enjoys no inherent authority," Learning Res., Inc. v. Trump, 2026 WL 477534, at *7 (U.S. Feb. 20, 2026), to force companies to choose between removing critical use limitations from their products or suffer immediate and widespread debarment at the hands of the government. No other President has even attempted to claim such powers.
164. Nor is there any statutory authority for such a directive. Congress has enacted a comprehensive statutory regime governing federal procurement. This includes statutes in Title 41 of the U.S. Code, as well as those in Title 10, which are specific to the Department. The government also has promulgated thousands of pages of regulations and individual agency guidance that comprehensively address how procurement authority is administered. Under this detailed framework, if the government and a contractor cannot agree on terms for procured services, the ordinary remedy is for the government not to award a contract or to terminate an awarded contract for its convenience. See 48 C.F.R. § 49.502. Debarment is not a remedy for mere contract failure; rather, it is limited to addressing specific "serious . . . irregularities," may never be used "for purposes of punishment," and may be imposed only after robust procedural protections are provided. 48 C.F.R. § 9.402(b); see 48 C.F.R. subpart 9.4.
165. The President's directive finds no support in this calibrated statutory and regulatory framework. And even the President cannot "attempt[] to delegate to himself the power to act arbitrarily." Joint Anti-Fascist Refugee Committee v. McGrath, 341 U.S. 123, 138 (1951). The President likewise cannot direct federal agencies to disregard their duly promulgated regulations. Cf. Nat'l Env't Dev. Ass'n's Clean Air Proj. v. EPA, 752 F.3d 999, 1009 (D.C. Cir. 2014) ("[An] agency is not free to ignore or violate its regulations while they remain in effect."). The President's abrupt directive to cancel Anthropic's contracts en masse violates these foundational principles.
166. Finally, the Presidential Directive "possess[es] almost every quality of [an unlawful] bill[] of attainder." McGrath, 341 U.S. at 143-44 (Black, J., concurring). It functions as a "prepared and proclaimed government blacklist[]," punishing Anthropic—and only Anthropic—without any formal investigation, trial, or even informal process. From the Founding, such measures have been "forbidden to both national and state governments." Id. at 144. It cannot be "that the authors of the Constitution, who outlawed the bill of attainder, inadvertently endowed the executive with [the] power to engage in the same tyrannical practices that had made the bill such an odious institution." Id.
167. The President's ultra vires directive, and any actions by other Defendants implementing the Presidential Directive, have caused Anthropic ongoing and irreparable harm.
COUNT IV FIFTH AMENDMENT TO THE U.S. CONSTITUTION (DUE PROCESS) (EQUITABLE CAUSE OF ACTION; 5 U.S.C. § 702) (ALL DEFENDANTS)
168. Anthropic incorporates by reference the allegations of the preceding paragraphs.
169. The Fifth Amendment's Due Process Clause guarantees that "[n]o person shall . . . be deprived of life, liberty, or property, without due process of law." U.S. Const. amend. V.
170. To succeed on its procedural due process claim, Anthropic must show (1) a deprivation of a protected liberty or property interest; (2) by the government; (3) without the process that is due under the Fifth Amendment. E.g., Reed v. Goertz, 598 U.S. 230, 236 (2023).
171. The Challenged Actions implicate multiple interests protected by the Due Process Clause. They impair Anthropic's liberty interest in its reputation. Wisconsin v. Constantineau, 400 U.S. 433, 437 (1971). They also deprive Anthropic of its property interests in its existing contracts with the government and private parties. See Al Haramain Islamic Found. v. U.S. Dep't of Treasury, 686 F.3d 965, 973, 979-80 (9th Cir. 2012); Ulrich v. City & Cnty. of San Francisco, 308 F.3d 968, 976 (9th Cir. 2002) ("'[I]t has long been settled that a contract can create a constitutionally protected property interest[.]'"). They purport to (1) terminate Defendants' contracts with Anthropic, (2) require many of Anthropic's largest customers to terminate their contracts with Anthropic, (3) prohibit Anthropic from participating in federal contracting, and (4) bar Anthropic from engaging in any future business with any entity that contracts with the Department.
172. In addition, by purporting to exclude Anthropic from contracting with any federal agency (apparently for all time), they accomplish a de facto debarment that infringes on Anthropic's liberty interest in pursuing its chosen trade. See Trifax Corp. v. District of Columbia, 314 F.3d 641, 643-44 (D.C. Cir. 2003) ("Debarring a corporation from government contract bidding constitutes a deprivation of liberty that triggers the procedural guarantees of the Due Process Clause."); see also Old Dominion Dairy Prods., Inc. v. Sec'y of Def., 631 F.2d 953, 955-56 (D.C. Cir. 1980); Eng'g v. City & Cnty. of San Francisco, 2011 WL 13153042, at *7 (N.D. Cal. Feb. 14, 2011).
173. The Challenged Actions imposed these draconian punishments on Anthropic without any meaningful process. Defendants did not provide Anthropic with any factual findings remotely supporting the actions taken, much less a meaningful opportunity to challenge them. In short, the government took these punitive actions "without providing the 'core requirements' of due process: adequate notice and a meaningful hearing." Jenner & Block LLP v. U.S. Dep't of Just., 784 F. Supp. 3d 76, 108-09 (D.D.C. 2025) (citation omitted). "[I]f the government must provide due process before terminating a contractor of its own, surely it must do the same before blacklisting an entity from all its contractors' Rolodexes." Id. at 109.
174. To the extent that a formal process did occur out of public view, it is clear that the outcome was fatally predetermined by the Department's retaliatory animus. Prejudgment and process tainted by animus do not satisfy the requirements of the Due Process Clause.
175. Defendants' violations of due process have caused Anthropic ongoing and irreparable harm.
COUNT V ADMINISTRATIVE PROCEDURE ACT, 5 U.S.C. § 558(b) (AGENCY DEFENDANTS)
176. Anthropic incorporates by reference the allegations of the preceding paragraphs.
177. The APA provides that "[a] sanction may . . . be imposed or a substantive . . . order issued [only] within jurisdiction delegated to the agency and as authorized by law." 5 U.S.C. § 558(b). Thus, the APA prohibits an agency from imposing sanctions or issuing orders that exceed the scope of authority delegated to it by Congress.
178. After the President issued the Presidential Directive on February 27, numerous agencies promptly issued sanctions and orders against Anthropic.
179. For example, the Secretarial Order not only purported "to designate Anthropic a Supply-Chain Risk to National Security"; it also directed that, "[e]ffective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The Secretarial Letter issued on March 4 purported to formalize that final decision.
180. Later on Friday, February 27, 2026, GSA issued an order removing Anthropic from its Multiple Award Schedule and USAi.gov. The Multiple Award Schedule is the federal government's primary vehicle for procurement that previously allowed Anthropic to compete for procurement opportunities at the federal, state, and local levels. USAi.gov is a "sandbox" or centralized platform for federal agencies to test, experiment with, and deploy AI models from leading providers, including—until GSA's action—Anthropic.
181. Also on February 27, 2026, HHS reportedly took immediate steps to "disabl[e] enterprise Claude" as a result of the President's directive, thereby eliminating Anthropic's ability to continue to provide its services and compete with other AI providers across HHS's network.
182. On March 2, 2026, Treasury Secretary Bessent issued a statement on X that the Treasury was "terminating all use of Anthropic products . . . within the department" because the "American people deserve confidence that every tool in government serves the public interest." The same day, the State Department announced that it was "taking immediate steps to implement the [President's] directive" and switch "the model powering its in-house chatbot . . . to OpenAI from Anthropic." The Federal Housing Finance Agency also released statements that it and mortgage agencies Fannie Mae and Freddie Mac would terminate all use of Anthropic products.
183. On information and belief, additional federal agencies are positioned to issue similar directives and orders.
184. These actions are substantive "orders" within the meaning of 5 U.S.C. § 558(b) because they are "final disposition[s] . . . of an agency in a matter other than rule making." 5 U.S.C. § 551(6). These actions also are "sanctions" within the meaning of Section 558(b) because they impose "limitation[s]" and "other . . . restrictive action[s]" affecting Anthropic's freedom to compete with other AI providers for procurement opportunities and its ability to protect its reputation as an AI provider serving the public interest. 5 U.S.C. § 551(10).
185. No statute authorizes federal agencies to impose abrupt and en masse orders and sanctions limiting Anthropic's ability to compete and impugning Anthropic's reputation.
186. "Congress could not speak more clearly than it has in the text of the APA: 'a sanction may not be imposed or a substantive . . . order issued except within jurisdiction delegated to the agency and as authorized by law.'" Am. Bus Ass'n v. Slater, 231 F.3d 1, 7 (D.C. Cir. 2000) (citing 5 U.S.C. § 558(b)). The Challenged Orders of the non-Department Agencies are "without statutory authorization," id., and must be set aside under the APA.
187. Defendants' APA violations have caused Anthropic ongoing and irreparable harm.
PRAYER FOR RELIEF
For these reasons, Plaintiff respectfully requests an order that:
1. As to the Secretarial Order: a. Declares the Secretarial Order, and the implementing Secretarial Letter, arbitrary, capricious, an abuse of discretion, and contrary to law under 5 U.S.C. § 706(2)(A); b. Declares the Secretarial Order, and the implementing Secretarial Letter, contrary to constitutional right under 5 U.S.C. § 706(2)(B); c. Declares the Secretarial Order, and the implementing Secretarial Letter, in excess of statutory jurisdiction, authority, or limitations under 5 U.S.C. § 706(2)(C); d. Sets aside and vacates the Secretarial Order, and the implementing Secretarial Letter, in their entirety under 5 U.S.C. § 706(2); e. Stays the effective date of the Secretarial Order, and the implementing Secretarial Letter, under 5 U.S.C. § 705 until the conclusion of judicial proceedings in this action.
2. As to the Presidential Directive: a. Declares that the Presidential Directive exceeds the President's authority and violates the First Amendment and Fifth Amendment to the United States Constitution.
3. As to all of the Challenged Actions: a. Permanently enjoins Defendants and all their officers, employees, and agents from implementing, applying, or enforcing the Challenged Actions; b. Directs Defendants and their agents, employees, and all persons acting under their direction or control to rescind any and all guidance, directives, or communications that have issued relating to the implementation or enforcement of the Challenged Actions, including the Secretarial Letter; c. Directs Defendants and their agents, employees, and all persons acting under their direction or control to immediately issue guidance to their officers, staff, employees, contractors, and agents to disregard the Challenged Actions and any implementing directives; d. Awards Plaintiff its costs and reasonable attorneys' fees as appropriate; and e. Grants such further and other relief as this Court deems just and proper.
3. Anthropic's Usage Policy has always conveyed its view that Claude should not be used for two specific applications: (1) lethal autonomous warfare and (2) surveillance of Americans en masse. Anthropic has never tested Claude for those uses. Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare. These usage restrictions are therefore rooted in Anthropic's unique understanding of Claude's risks and limitations—including Claude's capacity to make mistakes and its unprecedented ability to accelerate and automate analysis of massive amounts of data, including data about American citizens. Anthropic has collaborated with the Department of War on modifications to its usage restrictions to facilitate the Department's work with Claude, in recognition of the Department's unique missions. But Anthropic has always maintained its commitment to those two specific restrictions, including in its work with the Department of War.
4. Recently, however, Secretary of War Hegseth and his Department began demanding that Anthropic discard its usage restrictions altogether and replace them with a general policy under which the Department may make "all lawful use" of the technology. Anthropic largely agreed to the Department's request, except for two restrictions it viewed as critical: prohibitions against use of the technology for lethal autonomous warfare and mass surveillance of Americans. Throughout these discussions, Anthropic expressed its strongly held views about the limitations of its AI services. It also made clear that, if an arrangement acceptable to the Department could not be reached, Anthropic would collaborate with the Department on an orderly transition to another AI provider willing to meet its demands.
5. The Department met Anthropic's attempts at compromise with public castigation. It labeled Anthropic's CEO as too "ideological" and a "liar" with a "God-complex" who "is ok putting our nation's safety at risk." The Department eventually gave Anthropic a public ultimatum: "get on board" and accede to the government's demands by 5:01 p.m. on February 27, 2026, or "pay a price" in the form of either being cast out of the defense supply chain under 10 U.S.C. § 3252 or forced to provide unlimited use of Claude under the Defense Production Act.
6. After Anthropic's CEO publicly announced that the company could not "in good conscience accede to" the Department's demands, the Executive Branch swiftly retaliated.
7. On February 27, 2026, President Trump posted a statement on social media (the Presidential Directive), "directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." He derided Anthropic as "out-of-control" and a "RADICAL LEFT, WOKE COMPANY" of "Leftwing nut jobs." He also accused Anthropic of "selfishness" and of making a "DISASTROUS MISTAKE." "Anthropic better get their act together," the President threatened, or he would "use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
8. The same afternoon, Secretary Hegseth purported to act on "the President's directive" by posting a "final" decision via social media (the Secretarial Order). The Secretarial Order "direct[ed] the Department of War to designate Anthropic a Supply-Chain Risk to National Security." It also proclaimed that "[e]ffective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The Secretary denounced what he characterized as Anthropic's "Silicon Valley ideology," "defective altruism," "corporate virtue-signaling," and "master class in arrogance." And he criticized Anthropic for not being "more patriotic." But he also directed that "Anthropic will continue to provide the Department of War its services for a period of no more than six months."
9. Other federal agencies soon followed suit. For example, the General Services Administration terminated Anthropic's "OneGov" contract, thereby ending the availability of Anthropic services to all three branches of the federal government. The Department of the Treasury and the Federal Housing Finance Agency publicly stated they were cutting ties with Anthropic. And the Departments of State and Health and Human Services reportedly circulated internal memoranda directing employees to stop using Anthropic's services.
10. On March 4, 2026, at 8:48 p.m. Eastern, the Secretary of War sent Anthropic a letter about the "supply chain risk" designation in the Secretarial Order. That letter (the Secretarial Letter), dated March 3, notified Anthropic that "the Department of War (DoW) has determined . . . that the use of [Anthropic's] products in [the Department's] covered systems presents a supply chain risk" and that exercising the authority granted by 10 U.S.C. § 3252 against Anthropic is "necessary to protect national security." The Secretarial Letter pronounces that this determination covers all Anthropic "products" and "services," including any that "become available for procurement." And it asserts that "less intrusive measures are not reasonably available" to mitigate the risks that Anthropic's products and services supposedly pose to national security.
11. All of these unprecedented actions—the Presidential Directive, the Secretarial Order and the Secretarial Letter that followed it, and other agency actions taken in response to the Presidential Directive (collectively, the Challenged Actions)—are harming Anthropic irreparably. In Secretary Hegseth's own words, Anthropic's status in the eyes of the federal government has been "permanently altered." Official designation as a "Supply-Chain Risk to National Security" carries profound weight, particularly under a President who has threatened both "criminal consequences" and "the Full Power of the Presidency" to enforce compliance. Anthropic's contracts with the federal government are already being canceled. Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term. On top of those immediate economic harms, Anthropic's reputation and core First Amendment freedoms are under attack. Absent judicial relief, those harms will only compound in the weeks and months ahead.
12. The Challenged Actions are as unlawful as they are unprecedented. First, the Secretarial Order "designat[ing] Anthropic a Supply-Chain Risk to National Security" and prohibiting the Department's contractors, suppliers, and partners from "conduct[ing] any commercial activity with Anthropic"—and the Secretarial Letter purporting to implement the Order—violates both 10 U.S.C. § 3252 and the Administrative Procedure Act. The Secretary's actions are contrary to Section 3252's plain text, were issued without observance of the procedures Congress required, and are arbitrary, capricious, and an abuse of discretion. Indeed, Anthropic had been one of the government's most trusted partners until its views clashed with the Department's.
13. Second, the Challenged Actions retaliated against Anthropic for its speech and other protected activities in violation of the First Amendment. The Constitution confers on Anthropic the right to express its views—both publicly and to the government—about the limitations of its own AI services and important issues of AI safety. The government does not have to agree with those views. Nor does it have to use Anthropic's products. But the government may not employ "the power of the State to punish or suppress [Anthropic's] disfavored expression." Nat'l Rifle Ass'n of Am. v. Vullo, 602 U.S. 175, 188 (2024).
14. Third, the Presidential Directive requiring every federal agency to immediately cease all use of Anthropic's technology, and actions taken by other defendants in response to that directive, are outside any authority that Congress has granted the Executive. And "[w]hen an executive acts ultra vires, courts are normally available to reestablish the limits on his authority." Chamber of Com. of U.S. v. Reich, 74 F.3d 1322, 1328 (D.C. Cir. 1996).
15. Fourth, the Challenged Actions violate the Fifth Amendment's Due Process Clause. Anthropic has weighty property and liberty interests in its reputation, its business relationships, its future business prospects, and its advocacy. The Challenged Actions arbitrarily deprive Anthropic of those interests without any process, much less due process.
16. Fifth, the Challenged Actions violate the APA's prohibition against imposing any "sanction," "penalty," "revocation," "suspension," or other "compulsory or restrictive" action against a person "except within jurisdiction delegated to the agency and as authorized by law." 5 U.S.C. §§ 551, 558.
17. The consequences of this case are enormous. The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance—AI safety and the limitations of its own AI models—in violation of the Constitution and laws of the United States. Defendants are seeking to destroy the economic value created by one of the world's fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation. The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance. There is no valid justification for the Challenged Actions. The Court should declare them unlawful and enjoin Defendants from taking any steps to implement them.
PARTIES
18. Plaintiff Anthropic is a public benefit corporation organized under the laws of Delaware and headquartered in San Francisco. Anthropic's customers range from Fortune 500 companies and U.S. government agencies to small businesses and individual consumers who have integrated Claude into the core of how they work, transforming workflows across a wide range of tasks.
19. The U.S. Department of War is a federal agency headquartered in Washington, D.C.
20. The Federal Housing Finance Agency is a federal agency headquartered in Washington, D.C.
21. The U.S. Department of the Treasury is a federal agency headquartered in Washington, D.C.
22. The U.S. Department of State is a federal agency headquartered in Washington, D.C.
23. The U.S. Department of Health and Human Services is a federal agency headquartered in Washington, D.C.
24. The U.S. Department of Commerce is a federal agency headquartered in Washington, D.C.
25. The U.S. Department of Veterans Affairs is a federal agency headquartered in Washington, D.C.
26. The General Services Administration is a federal agency headquartered in Washington, D.C.
27. The U.S. Office of Personnel Management is a federal agency headquartered in Washington, D.C.
28. The U.S. Nuclear Regulatory Commission is a federal agency headquartered in Rockville, Maryland.
29. The U.S. Social Security Administration is a federal agency headquartered in Baltimore, Maryland.
30. The U.S. Department of Homeland Security is a federal agency headquartered in Washington, D.C.
31. The Securities and Exchange Commission is a federal agency headquartered in Washington, D.C.
32. The National Aeronautics and Space Administration is a federal agency headquartered in Washington, D.C.
33. The U.S. Department of Energy is a federal agency headquartered in Washington, D.C.
34. The Federal Reserve Board of Governors is a federal agency headquartered in Washington, D.C.
35. The National Endowment for the Arts is a federal agency headquartered in Washington, D.C.
36. The Executive Office of the President is a federal agency headquartered in Washington, D.C.
37. Peter B. Hegseth is the Secretary of War and head of Defendant U.S. Department of War. He is sued in his official capacity.
38. Scott Bessent is the Secretary of the Treasury and head of Defendant U.S. Department of the Treasury. He is sued in his official capacity.
39. William J. Pulte is the Director of Defendant Federal Housing Finance Agency. He is sued in his official capacity.
40. Marco Rubio is the Secretary of State and head of Defendant U.S. Department of State. He is sued in his official capacity.
41. Robert F. Kennedy, Jr. is the Secretary of Health and Human Services and head of Defendant U.S. Department of Health and Human Services. He is sued in his official capacity.
42. Howard Lutnick is the Secretary of Commerce and head of Defendant U.S. Department of Commerce. He is sued in his official capacity.
43. Douglas A. Collins is the Secretary of Veterans Affairs and head of Defendant U.S. Department of Veterans Affairs. He is sued in his official capacity.
44. Edward C. Forst is the Administrator of Defendant General Services Administration. He is sued in his official capacity.
45. Scott Kupor is the Director of Defendant U.S. Office of Personnel Management. He is sued in his official capacity.
46. Ho K. Nieh is the Chairman of Defendant U.S. Nuclear Regulatory Commission. He is sued in his official capacity.
47. Frank J. Bisignano is the Commissioner of Defendant U.S. Social Security Administration. He is sued in his official capacity.
48. Kristi Noem is the Secretary of Homeland Security and the head of Defendant U.S. Department of Homeland Security. She is sued in her official capacity.
49. Paul S. Atkins is the Chairman of Defendant Securities and Exchange Commission. He is sued in his official capacity.
50. Jared Isaacman is the Administrator of Defendant National Aeronautics and Space Administration. He is sued in his official capacity.
51. Chris Wright is the Secretary of Energy and head of Defendant U.S. Department of Energy. He is sued in his official capacity.
52. Jerome H. Powell is the Chairman of Defendant Federal Reserve Board of Governors. He is sued in his official capacity.
53. Mary Anne Carter is the Chairman of Defendant National Endowment for the Arts. She is sued in her official capacity.
54. Doe Defendants 1 through 10 are federal departments, agencies, offices, or instrumentalities—including responsible officials within them—beyond those specifically identified above that have participated in the development and implementation of the Challenged Actions. All individual officials among the Doe Defendants are sued in their official capacities. Their true names and capacities are unknown to Anthropic at this time, and Anthropic will seek leave to amend this Complaint to identify them as their identities and roles become known.
JURISDICTION AND VENUE
55. This Court has subject-matter jurisdiction under 28 U.S.C. § 1331 because this civil action arises under the Constitution of the United States and federal statutes. This Court is authorized to award the requested relief under Rules 57 and 65 of the Federal Rules of Civil Procedure; the Administrative Procedure Act (APA), 5 U.S.C. §§ 702, 705, 706; the Declaratory Judgment Act, 28 U.S.C. §§ 2201-02; the All Writs Act, 28 U.S.C. § 1651; and the court's inherent equitable powers. The APA waives sovereign immunity. 5 U.S.C. § 702.
56. This Court also has authority to enjoin unlawful official action that is ultra vires, see, e.g., Reich, 74 F.3d at 1327-28, or that violates the Constitution, see Free Enter. Fund v. Pub. Co. Acct. Oversight Bd., 561 U.S. 477, 491 n.2 (2010). The Supreme Court has long held that federal courts have equitable power to grant injunctive relief "with respect to violations of federal law by federal officials." Armstrong v. Exceptional Child Ctr., Inc., 575 U.S. 320, 326-27 (2015); see also Larson v. Domestic & Foreign Com. Corp., 337 U.S. 682, 689 (1949).
57. Venue is proper in this District under 28 U.S.C. § 1391(e)(1)(C), because Defendants are agencies of the United States and officers of the United States acting in their official capacities, Plaintiff resides in this District, and no real property is involved.
FACTUAL BACKGROUND
Artificial Intelligence (AI) Models
58. Claude is a versatile, industry-leading large language model (LLM) that can be used in many different contexts depending on a user's needs. Anthropic first launched Claude in March 2023. The company has released several more versions of Claude since then, most recently Claude Opus 4.6 and Claude Sonnet 4.6 in February 2026.
59. LLMs like Claude are algorithmic systems trained on massive datasets to identify patterns and associations in language, and to generate outputs and take actions that resemble human responses and actions. Through training, models acquire predictive power and the transformative ability to take a range of actions in a fraction of the time it would take humans to perform them.
60. When deployed through a chatbot interface, Claude can interpret and respond to a vast variety of user inputs, known as "prompts," in an intelligent, human-like way. Depending on the nature of the user's prompt, Claude can: process basic instructions and logical scenarios; take direction on tone and "personality" when providing outputs; write in different languages; provide outputs in a variety of programming languages; analyze large amounts of information; and provide answers to user queries, with detailed background on technical, scientific, and cultural knowledge.
61. Claude may also be configured with tools that enable it to behave "agentically," meaning it can take actions on behalf of a user such as retrieving information, navigating online resources, writing and executing code, interacting with external services, or carrying out open-ended tasks that Claude plans and adapts. In certain configurations, Claude can perform tasks with minimal ongoing user input, operating with a degree of autonomy. Although this agentic use of AI systems is of particular interest to some users, including governments, it also presents heightened risks compared to traditional, prompt-response chatbot interactions.
62. AI models like Claude are not perfect. Despite developers' best efforts, models can generate inaccurate or misguided responses, or they can "hallucinate," confidently providing incorrect information. This is in part because models generate responses by sampling from a probability distribution rather than by selecting outputs pursuant to predefined rules. As a result, the outputs may or may not be factually accurate, and the same model, given the same prompt twice, may provide two different responses.
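The sampling behavior described in the preceding paragraph can be illustrated with a minimal, generic sketch. This is a simplified softmax-sampling example for illustration only (the vocabulary and scores are hypothetical, and this is not Anthropic's actual implementation):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick a token index by sampling from a probability distribution."""
    # Convert raw model scores into probabilities (softmax).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw randomly from that distribution rather than applying a fixed
    # rule, so the same input can yield different outputs across calls.
    r = rng.random()
    cumulative = 0.0
    for index, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return index
    return len(probs) - 1

# Hypothetical vocabulary and scores, chosen only to demonstrate the point.
vocab = ["Paris", "London", "Rome"]
logits = [2.0, 1.0, 0.5]

# Repeating the identical "prompt" (the same logits) many times typically
# selects more than one distinct token.
picks = {vocab[sample_next_token(logits)] for _ in range(200)}
```

Because the output is drawn from a distribution, `picks` will almost always contain multiple tokens even though the input never changes, which is the source of the response variability described above.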
Anthropic's Foundational Commitment To AI Safety
63. Anthropic was founded in 2021 by seven former employees of OpenAI committed to the belief that AI will have a vast impact on the world and that AI development should maximize positive outcomes for humanity. Anthropic believes that AI policy decisions in the next few years will touch nearly every part of public life and that questions of AI policy governance are inherently nonpartisan. To that end, Anthropic has earned a reputation as an advocate dedicated to building a safer AI ecosystem. In keeping with that founding mission, Anthropic also builds frontier AI systems and strives to deploy those systems responsibly, in service of human progress. Anthropic began as a research-first company, devoted to AI research, adversarial testing, and policy work to further AI safety. That focus continues today.
64. As a public benefit corporation (PBC), Anthropic balances stockholder interests with its public benefit purpose of responsibly developing and maintaining advanced AI for the long-term benefit of humanity. The Delaware PBC statute permits its board to consider safety, ethics, and societal impact as part of ordinary corporate decision-making, rather than treat profit maximization as the sole objective.
65. These beliefs are fully compatible with responsible use of Claude by the Department of War. Claude has a wide range of specialized defense applications, including autonomously completing complex software engineering projects related to offensive and defensive cyber operations and vulnerability detection; supporting military operations; performing intelligence analysis; and even handling national security workflows on a custom fine-tuned version of Claude developed for classified networks.
66. Anthropic has developed a detailed Usage Policy to address the unique risks of AI, encourage safe and responsible uses of its models, and prohibit a wide range of behaviors contrary to its mission and values. Among other things, that Policy prohibits users from selling illegal drugs, engaging in human trafficking, exploiting cyber vulnerabilities, designing weapons or delivery processes for the deployment of weapons, or engaging in surveillance of persons without their consent. By its terms, the Policy has always prohibited the use of Anthropic's services for lethal autonomous warfare without human oversight and for the mass surveillance of Americans.
The Federal Government's Embrace Of AI And Contracts With Anthropic
67. Since taking office, the Trump Administration has made global adoption of U.S.-developed AI systems a stated policy priority. The President has issued multiple Executive Orders focused on America's global AI dominance. His Administration released an "AI Action Plan" focused in part on promoting AI adoption throughout the federal government, which Anthropic publicly supported. Last year, the General Services Administration (GSA) added Claude and other AI providers to its list of approved vendors. The Department likewise has significantly expanded its use of artificial intelligence and entered into multiple major contracts with leading AI companies to scale AI capabilities across defense and intelligence missions, including "warfighting, intelligence, business, and enterprise information systems."
68. Anthropic is committed to these objectives and has invested considerable resources to support the government's national security work. Today, Claude is reportedly the Department's most widely deployed and used frontier AI model—and the only one currently on classified systems.
69. This did not happen overnight. Anthropic began building the infrastructure, partnerships, regulatory approvals, and capabilities necessary to support U.S. government operations in 2023. It joined the AI Safety Institute Consortium, collaborating with the federal government on AI safety research and evaluation frameworks. It entered into strategic partnerships with cloud providers to support its growing role in the national security ecosystem. And it invested substantial resources into pursuing—and obtaining—authorization in the Federal Risk and Authorization Management Program (FedRAMP), the government's security authorization framework for cloud products and services.
70. Anthropic has also developed specialized "Claude Gov" models tailored specifically for the national security context. These models have been built based on direct feedback from national security agencies to address real-world requirements, like improved handling of classified information, enhanced proficiency in critical languages, and sophisticated analysis of cybersecurity data. Claude Gov models undergo rigorous safety testing consistent with Anthropic's commitment to responsible AI.
71. To make Claude more useful for the military and intelligence components of the federal government, Anthropic does not impose the same restrictions on the military's use of Claude as it does on civilian customers. Claude Gov is less prone to refuse requests that would be prohibited in the civilian context, such as using Claude for handling classified documents, military operations, or threat analysis. Anthropic's terms in its existing contracts with the government also recognize the government's unique needs and capabilities. For example, Anthropic's government-specific addendum to the Usage Policy permits Claude to be used to analyze lawfully collected foreign intelligence information, which would not be permitted under the Usage Policy for civilian users.
72. Since 2024, Anthropic has partnered with other national security contractors. Those partnerships have enabled the incorporation of Claude into the classified systems of the Department of War and other agencies. And they have allowed for the use of Claude to support government operations such as rapid processing of complex data, identifying trends, streamlining document review, and helping government officials make more informed decisions in time-sensitive situations.
73. Last year, Anthropic entered its first direct agreement with the Department's Chief Digital and Artificial Intelligence Office (CDAO). Under that agreement, Anthropic agreed to work with the Department to scope and develop use cases and, eventually, design a prototype AI service specifically for the Department's use. CDAO awarded similar agreements to Google, OpenAI, and xAI, each with a $200 million ceiling value, as part of its "commercial-first approach to accelerating DoD adoption of AI."
74. Anthropic worked diligently under that agreement, scoping out potential ways that the Department could best be served by Claude and related Anthropic professional services. During this period, the Department conveyed to Anthropic that Claude was the best solution for some of the proposals.
75. In the fall of 2025, Anthropic began negotiations for an additional agreement to provide a version of Claude on the Department's "GenAI.mil" AI platform. As part of those discussions, the Department asked Anthropic to excise its Usage Policy and allow the Department to use Claude for "all lawful uses." Because of Anthropic's commitment to U.S. national security, Anthropic substantially agreed to the proposal—except in two important respects.
76. First, Anthropic did not develop Claude (or the specialized Claude Gov models) to conduct lethal autonomous warfare without human oversight. Claude has not been trained or tested for that use. At least at present, Claude is simply not capable of performing such tasks responsibly without human oversight.
77. Second, Anthropic is unwilling to agree to Claude's use for mass surveillance of Americans. AI tools like Claude enable collection and analysis of information at speeds and scales not previously contemplated, posing unique risks for civil liberties given the potential for errors and misuse. These techniques would have been unimaginable when Congress enacted the existing frameworks regulating how the Executive Branch may conduct surveillance. AI technology is developing far more rapidly than those legal frameworks. And surveillance conducted using AI poses significantly greater potential to make mistakes—and to amplify the effect of any mistakes—than traditional techniques.
78. Allowing Claude to be used to enable the Department to surveil U.S. persons at scale and to field weapons systems that may kill without human oversight would therefore be inconsistent with Anthropic's founding purpose and public commitments. These important restrictions simply reflect what Anthropic knows to be true about Claude's limitations.
79. The Usage Policy does not provide Anthropic with any special capabilities to control, oversee, or second-guess the federal government's operations or the Department's military judgments. Nor does providing Claude to the government as a vendor place Anthropic in a position to intervene in or impede government decision-making. Indeed, while operating under the terms of the Usage Policy, the Department never previously raised any issues with its use of Claude or concerns about Anthropic's potential interference. Anthropic had only ever received positive feedback about Claude's capabilities from its government customers.
The Present Dispute
80. Later in 2025, the discussions regarding an additional agreement about deploying Claude on the "GenAI.mil" platform morphed into a negotiation over the Department's use of Claude more broadly. The Department demanded that—across all ongoing and future deployments of Claude—Anthropic abandon its Usage Policy and instead allow "all lawful use" of Claude. As part of these new demands, the Department sent partial contract language incorporating this term to Anthropic.
81. In early January 2026, Secretary Hegseth issued a memorandum directing the Department to "[u]nleash experimentation with America's leading AI models Department-wide" and execute a series of "Pace-Setting Projects" to accelerate AI adoption. To advance that goal, the memorandum directed the Department's procurement office to "incorporate standard 'any lawful use' language into any DoW contract" for AI services within 180 days. Three days later, Secretary Hegseth delivered remarks explaining that the Department was "blowing up . . . barriers."
82. Despite disagreement on the two use restrictions, Anthropic has continued to reiterate its commitment to providing Claude to serve the United States' national security interests and to negotiate in good faith with the Department.
83. But the Department chose a different path. In February 2026, a source inside the Department told reporters that it was "close" to cutting business ties with Anthropic and designating Anthropic a "supply chain risk," a designation that—to Anthropic's knowledge—has never before been applied to a domestic company. The source said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."
84. Until the Department raised this threat, no government official had ever raised a concern with Anthropic about potential supply chain vulnerabilities. On the contrary, the government has consistently provided the security clearances that are necessary for Anthropic's personnel to perform classified work. Those clearances remain in place today. Moreover, in 2024 Anthropic became the first frontier AI lab to collaborate with the Department of Energy to evaluate an AI model in a Top Secret classified environment.
85. Matters came to a head in a meeting between Secretary Hegseth and Dr. Dario Amodei, Anthropic's CEO, on February 24, 2026. Secretary Hegseth presented Anthropic with an ultimatum. He demanded that Anthropic accede to the Department's demands within four days or face one of two apparently contradictory punishments: either the Secretary would purport to invoke the Defense Production Act to force Anthropic to do as he said, or he would cast Anthropic out of the defense supply chain altogether as a supposed "supply chain risk." Pentagon officials confirmed in the media that the meeting was not intended to drive resolution, but rather to intimidate Anthropic.
86. After the February 24 meeting, a senior Pentagon official gave Anthropic "until 5:01pm [Eastern] Friday to get on board with the Department of War . . . . If they don't get on board, the Secretary of War will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon." The same official added, "the Secretary of War will also label Anthropic a supply chain risk." In other words, the official suggested that Anthropic was both necessary to national defense and—at the same time—an unacceptable risk to national security.
87. On February 26, Dr. Amodei issued a public statement describing Anthropic's adherence to its stated policy. He explained that "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." He again emphasized that the two restrictions giving rise to the dispute address uses that are "simply outside the bounds of what today's technology can safely and reliably do," and that Anthropic "cannot in good conscience accede to" the Department's request. He reiterated that "[o]ur strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place." And he promised that, "[s]hould the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."
The Government Retaliates Against Anthropic
88. The next day—even before the 5:01 p.m. Eastern deadline—President Trump posted the Presidential Directive, purporting to direct all federal agencies to immediately cease all use of Anthropic's technology.
89. Secretary Hegseth immediately followed suit by posting a "final" decision on social media directing his Department to designate Anthropic a "Supply-Chain Risk to National Security" and decreeing that, "effective immediately," "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
90. The Secretarial Order left unclear who is covered as a "partner," what it means to do business "with the United States military" as opposed to the Department more broadly, and what "commercial activity" is prohibited. Regardless of what these other companies must do, the Order also insisted that "Anthropic will continue to provide the Department of War its services for a period of no more than six months."
91. But the Secretary left no doubt about his reasons: "Anthropic's stance is fundamentally incompatible with American principles." According to the Secretary, this "stance" includes "Silicon Valley ideology," "corporate virtue-signaling," "defective altruism," "arrogance," and even an attempt to hold "America's warfighters . . . hostage [to] the ideological whims of Big Tech." The Secretary thus distorted Anthropic's clear-eyed, expertise-driven understanding of its own technology's current limits into purported ideological extremism.
92. GSA also took immediate steps in "support of President Trump's directive," which it understood to "rejec[t] attempts to politicize work" and to require federal agencies to contract only with AI companies "who fit the bill." In a news release issued the same day as the Presidential Directive, GSA announced that it was removing Anthropic from USAi.gov and the Multiple Award Schedule contracts. A top GSA official separately announced that the agency had terminated Anthropic's "OneGov" contract.
93. Other government agencies soon fell in line, issuing multiple directives to begin implementing the President's and the Secretary's directives. For example, the Department of State and the Department of Health and Human Services (HHS) have acted on the President's directive through internal communications, according to public reporting. Monday morning, the U.S. Department of the Treasury and the Federal Housing Finance Agency announced they were terminating all use of Claude. Anthropic also received reports that the Chief Information Officer of a federal civilian agency advised all non-Department of War leadership to stop using Claude.
94. Private actors also took heed. Anthropic immediately received outreach from numerous outside partners—from customers, to cloud providers, to investors—expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic. Since the Challenged Actions, dozens of companies have contacted Anthropic seeking clarity, guidance, and, in some cases, an understanding of their termination rights.
95. An official confirmed that the Department's actions are a response to Anthropic's purported "behavior" in negotiations and threatened not just to terminate Anthropic's contracts but also to "require that all our vendors and contractors certify that they don't use any Anthropic models."
96. Other government officials relayed the personal and ideological nature of the Department's objective: "The problem with [Anthropic's CEO] Dario [Amodei] is, with him, it's ideological. We know who we're dealing with." This followed public condemnation of Anthropic and its usage policies by the Department's Chief Technology Officer as "not democratic."
97. Throughout, the federal government has never once expressed concerns about Anthropic's security or Claude's competencies. Instead, it has repeatedly recognized that Anthropic is not only safe but an important national asset. Claude's FedRAMP authorization represents the highest level of cloud security certification for the handling of unclassified and controlled unclassified information. The Department approved (and has continued to maintain) a facility clearance for Anthropic as well as numerous security clearances for Anthropic's personnel so they can perform classified work. Never during any of these security-focused processes did the government determine that Anthropic or its services posed a supply chain risk. Indeed, the FedRAMP authorization and facility security clearance and personnel clearances could not have been issued had any such determination been made.
98. Even during the recent negotiations, the government has repeatedly and publicly praised Claude's capabilities. Chief Technology Officer and Under Secretary of War Emil Michael, while describing the dispute with Anthropic, explicitly characterized Anthropic as one of America's "national champions" in AI. In the February 24 meeting with Dr. Amodei, Secretary Hegseth described Anthropic's technology as having "exquisite capabilities" and stated that the Department would "love" to work with Anthropic.
99. Senior administration officials have likewise repeatedly acknowledged that displacing Anthropic from its role would be disruptive because competing AI models "are just behind" when it comes to specialized government applications.
100. Department officials have even expressed concerns about the consequences of losing access to Claude. Describing the dispute between Anthropic and the Department, one official stated that "[t]he only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good."
101. Indeed, the President and Secretary Hegseth insisted that Claude must remain available to the Department for six months—even after another AI company had indicated it would accede to the Department's demand to make its models available for "all lawful uses," and apparently as the Department was in talks with a third AI company that recently announced it is inclined to do the same thing. Within hours of the Challenged Actions, moreover, the Department reportedly "launched a major air attack in Iran with the help of [the] very same tools" that are "made by" Anthropic and are the subject of the Challenged Actions.
102. And senior officials within the Department recently confirmed to the press what is apparent from the facts: One official who manages information security said that the Secretarial Order was "ideological" rather than an accurate description of risk. Another official, who specifically evaluates supply chain risk and other potential intelligence threats, acknowledged "there is no evidence of supply-chain risk" from Anthropic's AI model and reiterated that the Secretarial Order was "ideologically driven."
103. Indeed, the President himself made clear that his Administration's retaliatory actions towards Anthropic were a direct result of the views Anthropic expressed to the government and the public about the limitations on the use of its own product: "Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn't have done that."
The Secretary Notifies Anthropic Of His "Supply Chain Risk" Designation
104. Even as agencies across the federal government moved to implement the Presidential Directive, Dr. Amodei and Under Secretary of War Michael continued negotiations in an effort to resolve or de-escalate the dispute. Those discussions were still underway when, at 8:48 p.m. Eastern on March 4, the Secretary of War sent Anthropic a letter. The letter, dated March 3, 2026, notified Anthropic of the "supply-chain risk" designation—almost a week after the Secretary had announced that designation on social media.
105. The two-page letter did not explain what risk Anthropic's services supposedly pose to national security. Its stated rationale reads in full: "the Department of War has determined that (i) the use of the Covered Entity's products or services in DoW covered systems presents a supply chain risk and that the use of the Section 3252 authority to carry out covered procurement actions is necessary to protect national security by reducing supply chain risk, and (ii) less intrusive measures are not reasonably available to reduce such supply chain risk."
106. Based on that "determination," the Secretarial Letter purports to exclude Anthropic—including all of its subsidiaries, successors, and affiliates—as a source for all Department procurements involving covered national security systems, effective immediately. The Letter does not explain the scope of procurements covered by the Secretary's action.
The Challenged Actions Are Causing Immediate And Irreparable Harm To Anthropic
107. The Challenged Actions have inflicted immediate, far-ranging, and irreversible harm on Anthropic. These harms will continue unless the Challenged Actions are declared unlawful and enjoined.
108. Anthropic has built a reputation as a public benefit corporation that is committed to AI safety and the responsible deployment of its technology. That reputation is critical to its continued success and growth. Secretary Hegseth's unlawful designation of Anthropic as "a Supply-Chain Risk to National Security" undoubtedly harms Anthropic's reputation, as does Defendants' unlawful decision to bar "EVERY Federal Agency in the United States Government" from using Anthropic's technology.
109. The Challenged Actions also inflict immediate and unrecoverable revenue losses: Anthropic stands to lose the federal contracts it already has, as well as its prospects to pursue federal contracts in the future.
110. Anthropic's business partnerships and contracts with other federal contractors are likewise in jeopardy. For example, one federal contractor with which Anthropic has built custom applications has indicated that it may suspend that work or even remove Claude from existing deployments. Other federal contractors are raising concerns, pausing collaborations, and considering terminating contracts. Anthropic has no way to obtain redress from the government for those economic harms.
111. And those practical and economic injuries are not the only irreparable harms inflicted by the Challenged Actions. "The loss of First Amendment freedoms, for even minimal periods of time, unquestionably constitutes irreparable injury." Roman Catholic Diocese of Brooklyn v. Cuomo, 592 U.S. 14, 19 (2020) (per curiam).
112. All of this is precisely what Defendants intended: to punish Anthropic for adhering to its views. Anthropic was founded on its commitment to developing AI responsibly. Defendants presented Anthropic with a stark choice: silence its views on safe AI, capitulate to the Department's demands, and offer Claude on terms that are unsafe and violate its core principles—or else suffer swift harm at the hand of the federal government. When Anthropic adhered to its longstanding views about AI safety and the limitations of its services, Defendants carried out that threat.
CLAIMS
COUNT I
ADMINISTRATIVE PROCEDURE ACT; 10 U.S.C. § 3252
(5 U.S.C. § 706)
(DEFENDANTS HEGSETH AND THE DEPARTMENT OF WAR)
113. Anthropic incorporates by reference the allegations of the preceding paragraphs.
114. The APA requires courts to "hold unlawful and set aside" final agency action that is "arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law," or is "in excess of statutory jurisdiction, authority, or limitations, or short of statutory right," or "without observance of procedure required by law." 5 U.S.C. § 706(2)(A), (C), (D).
115. The February 27 Secretarial Order purported to "direct[] the Department of War to designate Anthropic a Supply-Chain Risk to National Security" and ordered that, "[e]ffective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The Order also emphasized that "[t]his decision is final."
116. The Secretarial Order is a final agency action for purposes of the APA. It is an "agency action" because it is an "order" (i.e., a "disposition . . . in a matter other than rulemaking") and also a "sanction" that "prohibit[s]," "limit[s]," or otherwise "affect[s]" Anthropic's freedom to compete for federal contracts and maintain its business relationships. 5 U.S.C. § 551(6), (10), (13). It is final both because Secretary Hegseth said so and because it finally "determine[s]" the "rights or obligations" of Anthropic and is backed by "legal consequences." Bennett v. Spear, 520 U.S. 154, 177-78 (1997). Effective "immediately," the decision purports to direct that no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
117. A week later, the Secretary sent Anthropic a letter notifying it that the Department "has determined" that the use of Anthropic's "products or services in DoW covered systems presents a supply chain risk" and that it is necessary for the Department to use its authority under 10 U.S.C. § 3252 "to protect national security by reducing supply chain risk." The Secretarial Letter also asserts that "less intrusive measures are not reasonably available to reduce such supply chain risk." Those statements are the only explanations offered in the Secretarial Letter for the supply chain risk designation. And the Secretarial Letter does not purport to rescind or amend the Secretarial Order. See generally Nat'l Urb. League v. Ross, 508 F. Supp. 3d 663 (N.D. Cal. 2020) ("A final agency action does not become non-final after it is implemented.").
118. An agency acts arbitrarily and capriciously when it "entirely fail[s] to consider an important aspect of the problem," offers "an explanation for its decision that runs counter to the evidence before the agency," or fails to "articulate a satisfactory explanation for its action including a rational connection between the facts found and the choice made." Motor Vehicle Mfrs. Ass'n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29, 43 (1983) (internal quotation marks omitted).
119. The Secretarial Order, and the attempt to implement and explain that Order via the Secretarial Letter, violates the standards of Section 706 at every turn.
120. First, the Order exceeds the authority granted by Congress in 10 U.S.C. § 3252, the federal statute addressing "supply chain risk[s]." That statute does not provide the government a remedy for failed contract negotiations. Nor does it delegate freewheeling authority to the Secretary to redefine "supply chain risk" to cover a contractor who declines to modify its terms of use to track the Department's preferences.
121. Instead, Section 3252 authorizes exclusion with respect to a prime contractor or subcontractor only when necessary to protect against the risk that an adversary may "sabotage . . . or otherwise subvert" an information system used for national security purposes. 10 U.S.C. § 3252(b)(2)(A), (d)(4)-(5); 44 U.S.C. § 3552(b)(6). The Secretary has not determined, and cannot reasonably determine, that Anthropic's services present a risk of sabotage or subversion by an adversary of the United States.
122. Anthropic is not, and has no ties to, an "adversary" of the United States. The Executive Branch has defined the term to mean China, Russia, Iran, North Korea, Cuba, and Venezuela. See Exec. Order No. 13,873, 84 Fed. Reg. 22689 (May 15, 2019); 15 C.F.R. § 791.4(a). Anthropic is a U.S.-incorporated, U.S.-headquartered public benefit corporation with a demonstrated history of supporting the United States government and its national security interests. The Secretary has not articulated any determination otherwise. Nor is there any other valid basis for the Secretary to determine that Anthropic presents a risk of "sabotage" or "subver[sion]." Indeed, Anthropic has gone to significant lengths to prevent the use of its technology by entities linked to the Chinese Communist Party, has shut down attempts to abuse Claude for state-sponsored cyber operations, and has advocated for strong export controls on the most powerful chips used to train AI, all to preserve the U.S. lead in frontier AI development.
123. Second, the Secretary's actions failed to follow the procedure Congress required before excluding a source from contracts or subcontracts on the basis that it poses an unacceptable "supply chain risk." Under Section 3252, the Secretary must consult with other relevant officials and determine in writing (1) that an exclusion is "necessary to protect national security by reducing supply chain risk," and (2) that "less intrusive measures are not reasonably available to reduce such supply chain risk." 10 U.S.C. § 3252(b)(1), (b)(2)(A)-(B). Then the Secretary must notify the appropriate congressional committees of that determination, providing a summary of the risk assessment and the basis for determining that less intrusive options were not available. 10 U.S.C. § 3252(b)(3). On information and belief, no valid Section 3252 determination was made prior to the February 27 Secretarial Order. The Secretary did not consult with relevant procurement officials, did not make any written determination that less intrusive measures were unavailable, and did not notify Congress before issuing the Order. And even the Secretarial Letter received by Anthropic on March 4, which recited the "necessary to protect national security" and "less intrusive measures are not reasonably available" language from 10 U.S.C. § 3252(b)(2)(A)-(B), did not describe any consultation with relevant procurement officials or any congressional notification.
124. With respect to contracts entered directly with the government, Section 3252 authorizes the exclusion of a source only if it has failed either to "meet qualification standards" or "achieve an acceptable rating with regard to an evaluation factor." 10 U.S.C. § 3252(d)(2)(A)-(B). In both cases, those conditions relate to the risk that an adversary may sabotage, maliciously interfere with, or otherwise subvert a covered system. The Secretary has not determined—and could not reasonably determine—that Anthropic's services fail to meet qualification standards or achieve an acceptable rating related to any evaluation factor for a procurement. The February 27 Secretarial Order contains no such determination. And the Secretarial Letter sent on March 4 does not address those statutory criteria.
125. To the contrary, the Secretary himself has recognized Claude's capabilities as "exquisite." His Department suggested that Claude was so vital to our national defense that it needed to be commandeered under the Defense Production Act. And he has ordered that "Anthropic will continue to provide" its services to the Department of War for up to "six months." The "unexplained inconsistenc[y]" between simultaneously designating Anthropic's services a supply chain risk vulnerable to "sabotage" or other "subver[sion]" by a foreign adversary while directing those services to be used for up to six months for national security purposes demonstrates the arbitrariness of the Secretary's final decision. Dist. Hosp. Partners, L.P. v. Burwell, 786 F.3d 46, 59 (D.C. Cir. 2015) (collecting authority).
126. Additionally, nothing in the statute authorizes the Secretary to require every "contractor, supplier, or partner that does business with the United States military" to blacklist the excluded source.
127. Third, the Secretarial Order was arbitrary and capricious because it failed to provide a rational and "satisfactory explanation" for designating Anthropic a supply chain risk. Motor Vehicle Mfrs. Ass'n, 463 U.S. at 43. The Secretary's February 27 Order announcing his "final" decision contains invective against Anthropic, but no explanation of why Claude constitutes a supply chain risk. It does not attempt to reconcile the Secretary's assertion that those models are a threat "to National Security" with his decision to allow the Department to continue using them for half a year—let alone the Department's past praise for those models or its simultaneous suggestion that Anthropic might be commandeered into providing them on the Department's terms under the Defense Production Act.
128. The post hoc Secretarial Letter does not meaningfully elaborate on that explanation. It parrots the statutory predicates of Section 3252: that Anthropic presents a "supply chain risk," that the designation is "necessary to protect national security," and that "less intrusive measures [were] not reasonably available." But it offers no explanation for any of these conclusions; addresses none of the inconsistencies that rendered the Secretarial Order arbitrary; and supplies none of the reasoned analysis the Order lacked.
129. The only explanation provided by the Secretary for his action is pure retaliation. That is plain on the face of the Secretarial Order, in which the Secretary criticized Anthropic as "ideological" and insufficiently "patriotic." And it is confirmed by senior Department officials who unabashedly told the press that the Secretary designated Anthropic as a supply chain risk to "make sure [Anthropic] pays a price" for declining to concede to the Department's demands; that the Secretarial Order was "ideological" rather than an accurate description of risk; that "there is no evidence of supply-chain risk"; and that the Secretarial Order was "ideologically driven."
130. The Secretary's actions are arbitrary and capricious in multiple other ways. For example, the Secretary failed to consider less restrictive alternatives, several of which were available here. Anthropic repeatedly offered to support an orderly transition to a new provider—one willing to accept the Department's proposed terms—at nominal cost if the parties failed to come to an agreement. The Department had other options as well, including agreeing to Anthropic's proposed usage limitations or simply continuing the negotiations already underway. Neither the Secretarial Order nor the Secretarial Letter identifies any of these alternatives, much less explains why they are insufficient.
131. The Secretary also failed to address the consequences of his actions for Anthropic, other companies that deal with the federal government, and Anthropic's commercial counterparties. He also failed to reasonably account for Anthropic's reliance interests. Neither the Secretarial Order nor the Secretarial Letter grapples with those considerations. And the Secretarial Order relied on extra-statutory factors that Congress did not intend for him to consider under Section 3252, such as Anthropic's position in contract negotiations and its public statements on AI safety.
132. For these reasons, the Court should declare that the Secretarial Order is "in excess of statutory jurisdiction, authority, or limitations," 5 U.S.C. § 706(2)(C), and "arbitrary, capricious . . . or otherwise not in accordance with law," id. § 706(2)(A), set the order aside, and enjoin Defendants (other than the President) from taking any action to implement or enforce it, including through the Secretarial Letter.
133. Defendants' APA violations have caused Anthropic ongoing and irreparable harm.
COUNT II
FIRST AMENDMENT TO THE U.S. CONSTITUTION
(EQUITABLE CAUSE OF ACTION; 5 U.S.C. § 702)
(ALL DEFENDANTS)
134. Anthropic incorporates by reference the allegations of the preceding paragraphs.
135. The First Amendment to the Constitution provides that the federal Government "shall make no law . . . abridging the freedom of speech . . . or [abridging] the right of the people to petition the Government for a redress of grievances." U.S. Const. amend. I.
136. The Challenged Actions violate Anthropic's First Amendment rights because they constitute paradigmatic retaliation against Anthropic's expressive activities, including protected speech, protected viewpoints, and protected petitioning of the government.
137. The First Amendment "prohibits government officials from subjecting individuals to retaliatory actions after the fact for having engaged in protected speech." Hous. Cmty. Coll. Sys. v. Wilson, 595 U.S. 468, 474 n.2 (2022); Nieves v. Bartlett, 587 U.S. 391, 398 (2019) (similar). Indeed, "[s]tate action designed to retaliate against and chill" protected expression "strikes at the heart of the First Amendment." Gibson v. United States, 781 F.2d 1334, 1338 (9th Cir. 1986).
138. Succeeding on a retaliation claim requires Anthropic to show that "(1) [it] was engaged in a constitutionally protected activity, (2) the defendant's actions would chill a person of ordinary firmness from continuing to engage in the protected activity and (3) the protected activity was a substantial or motivating factor in the defendant's conduct." O'Brien v. Welty, 818 F.3d 920, 932 (9th Cir. 2016); President & Fellows of Harvard Coll. v. United States Dep't of Homeland Sec., 788 F. Supp. 3d 182, 206 (D. Mass. 2025) ("The elements of a Petition Clause retaliation claim are identical to those of a free speech retaliation claim."). All three elements are easily established here.
139. First, Anthropic engaged in protected First Amendment expression, in multiple respects.
140. To start, Anthropic has been a leading voice on AI safety and policy since its inception. The company frequently weighs in on pending legislation: It has advocated for the bipartisan Future of AI Innovation Act, which supports the efforts of the National Institute of Standards and Technology's Center for AI Standards and Innovation (CAISI) to undertake research on AI safety risks. And it has backed the CREATE AI Act of 2025 and the GAIN Act of 2025—bipartisan safety bills that align with the company's policy priorities. Anthropic also maintains a bipartisan lobbying effort and has donated money to organizations that promote AI safety.
141. The company's public speech extends to its Usage Policy. That policy, posted prominently on Anthropic's website, implements and embodies the company's foundational commitment to the safe and responsible use of AI. Consistent with Anthropic's founding ethos, the policy "is calibrated to strike an optimal balance between enabling beneficial uses and mitigating potential harms." As explained above, the Usage Policy has never permitted Claude to be used for mass surveillance of Americans or for lethal autonomous warfare.
142. Anthropic's executives speak publicly on these topics. In June 2025, Dr. Amodei published an op-ed opposing federal legislation that would have imposed a moratorium on state regulation of AI. In October 2025, he released a statement praising President Trump's AI action plan, reiterating his opposition to a federal moratorium on state AI regulation, and emphasizing Anthropic's support for SB 53, a since-enacted California AI safety bill. And, as noted above, on February 26, 2026, he issued a public statement regarding the importance of Anthropic's usage restrictions on lethal autonomous warfare and mass surveillance of Americans, emphasizing that those uses are "simply outside the bounds of what today's technology can safely and reliably do," and that Anthropic "cannot in good conscience" abandon those particular restrictions.
143. In addition, Anthropic's communications with the government are protected speech. Cf. Janus v. Am. Fed'n of State, Cnty., & Mun. Emps., Council 31, 585 U.S. 878, 893-94 (2018) (recognizing that "collective bargaining" with the government is "private speech" that is protected by the First Amendment); President & Fellows of Harvard Coll., 788 F. Supp. 3d at 203 ("refusing to cede" on issues of public importance "constitute[s] . . . protected conduct" even if expressed as "rejection" of contract terms).
144. Throughout its negotiations with the Department, Anthropic expressed its views about Claude's capabilities and the uses to which Claude can safely and responsibly be put. Anthropic has also spoken out about the threat to civil liberties that AI-enabled mass surveillance of Americans poses. Anthropic has discussed these issues directly with the Department and has shared its views with the public. These expressions of Anthropic's viewpoints are entitled to full First Amendment protection. And that expression is what the Challenged Actions seek to punish.
145. Anthropic also engaged in protected First Amendment activity when it petitioned the government to honor Anthropic's use restrictions with respect to lethal autonomous warfare systems that lack any human oversight and mass surveillance of Americans. The First Amendment protects the right "to petition the Government for a redress of grievances." U.S. Const. amend. I. Anthropic exercised that right by communicating its position to the Department, explaining the basis for that position, and seeking to persuade the government to embrace that view. See BE & K Const. Co. v. N.L.R.B., 536 U.S. 516, 525 (2002) ("[T]he right to petition extends to all departments of the Government") (citation omitted). Anthropic was not simply engaged in contract negotiations; it was expressing a position on an issue of significant public importance for which it had unique expertise—the appropriate use of its own AI models. The government's response was drastic and punitive, retaliating against the core freedoms the Petition Clause protects.
146. Second, the Challenged Actions impose significant financial and reputational costs on Anthropic that would chill a company of ordinary firmness from continuing to engage in expressive activity. Government action is "adverse" for purposes of a First Amendment retaliation claim if it is "designed to . . . chill political expression," Mendocino Env't Ctr. v. Mendocino Cnty., 14 F.3d 457, 464 (9th Cir. 1994) (emphasis added), or "would chill a person of ordinary firmness from continuing to engage in the protected activity," Blair v. Bethel Sch. Dist., 608 F.3d 540, 543 (9th Cir. 2010). The Challenged Actions satisfy both tests. By their very terms, they are intended to force Anthropic to "get their act together[] and be helpful." And they carry severe and wide-ranging consequences that ripple far beyond any single contract.
147. The Challenged Actions also assign Anthropic a "supply chain risk" designation that is reserved for companies that create a risk of "sabotage" or other acts of subversion by a foreign "adversary." 10 U.S.C. § 3252(d)(4). That label will follow Anthropic into every future procurement relationship across the federal government and with federal contractors, not to mention relationships with states and local governments and customers in other sectors. The threat of that extraordinarily stigmatizing label would undoubtedly chill the expressive activities of a company of ordinary firmness.
148. This adversity is severe, particularly in the fiercely competitive AI marketplace, where reputational damage can quickly lead to pecuniary harm. See Riley's Am. Heritage Farms v. Elsasser, 32 F.4th 707, 723 (9th Cir. 2022) ("A plaintiff establishes . . . adverse action . . . by demonstrating that the government action threatened or caused pecuniary harm"); Arizona Students' Ass'n v. Arizona Bd. of Regents, 824 F.3d 858, 868 (9th Cir. 2016) ("[T]he government may chill speech by threatening or causing pecuniary harm . . . [or] withholding a license, right, or benefit . . . .").
149. Third, Anthropic's protected expression was not only a substantial factor underlying the Challenged Actions, it was the motivating factor. The causal link could not be clearer: Defendants threatened Anthropic and then took the Challenged Actions only after Anthropic refused to change its position on acceptable uses of Claude and publicly explained why. Indeed, the government made clear that it took the Challenged Actions because of Anthropic's steadfast expression of its views about what Claude can and cannot do. For example, Secretary Hegseth directly criticized Anthropic's "rhetoric" when he announced the supply chain action and faulted the company for not being "more patriotic."
150. Actions designed to punish ideological disagreement are necessarily motivated by protected First Amendment activity. See, e.g., Mendocino Env't Ctr., 14 F.3d at 464; see also Perkins Coie LLP v. U.S. Dep't of Just., 783 F. Supp. 3d 105 (D.D.C. 2025) (holding Executive Order 14230 unconstitutional as a retaliation for protected speech because its text made "clear that President Trump and his administration disfavor the specific messages conveyed by plaintiff").
151. And Defendants' public statements confirm that the government took the Challenged Actions because of what Anthropic said, not because of any legitimate procurement or security concern. No government actor has ever even attempted to identify any technical deficiency in Claude. To the contrary, Claude has been an unmitigated success for the American military. Perhaps that is why the government initially threatened to invoke the Defense Production Act against Anthropic and compel it to provide the very service that the government now calls a supply chain risk. In the government's own words, "we need them and we need them now" because Claude is just "that good." Without any technical motivations supporting the Challenged Actions, the only motivation left is the one candidly expressed by Defendants: disagreement with Anthropic's views.
152. To be sure, if it complies with the Constitution and governing statutes and regulations, the Department may terminate its contract with Anthropic. And it may look to procure services from other AI companies on the terms it prefers, as it has already done. Exercising that authority would have been unremarkable. Anthropic even offered to facilitate such a transition. But the Challenged Actions took a different path. These needless and extraordinarily punitive actions, imposed in broad daylight, are a paradigm of unconstitutional retaliation. See Soranno's Gasco, Inc. v. Morgan, 874 F.2d 1310, 1316 (9th Cir. 1989) (inferring a retaliatory motivation where the government's "chosen course of action was designed to maximize harm").
153. The government's First Amendment retaliation is made worse by the fact that it is content- and viewpoint-based. It is content-based because the retaliation is targeted at Anthropic for speaking on issues of AI safety and responsible AI use—"speech on public issues" that "occupies the highest rung of the hierarchy of First Amendment values." Snyder v. Phelps, 562 U.S. 443, 452 (2011). The Challenged Actions also punish Anthropic not just for speaking on that topic, but for Anthropic's viewpoints on that topic. See, e.g., Pleasant Grove City v. Summum, 555 U.S. 460, 469 (2009) ("restrictions based on viewpoint are prohibited").
154. Defendants' content- and viewpoint-based acts are subject to, but cannot possibly satisfy, strict scrutiny. See, e.g., Vidal v. Elster, 602 U.S. 286, 293 (2024); Waln v. Dysart Sch. Dist., 54 F.4th 1152, 1162 (9th Cir. 2022) ("Viewpoint-based restrictions on speech are subject to strict scrutiny.").
155. To survive strict scrutiny, the government must adopt "the least restrictive means of achieving a compelling state interest." McCullen v. Coakley, 573 U.S. 464, 478 (2014). "When the Government restricts speech, the Government bears the burden of proving the constitutionality of its actions." FEC v. Cruz, 596 U.S. 289, 305 (2022).
156. Defendants' asserted desire to stamp out competing viewpoints about what Claude can and cannot safely do is not a legitimate interest. See Crime Justice & Am., Inc. v. Honea, 876 F.3d 966, 973 (9th Cir. 2017) (a government interest is legitimate only if it is "unrelated to the suppression of expression").
157. While the government has a compelling interest in addressing genuine supply chain risks, Defendants cannot show that the Challenged Actions advance that interest. And to the extent the government asserts a compelling interest in obtaining AI services without the two narrow safeguards Anthropic insists upon, the Challenged Actions were not the least-restrictive means of achieving that interest. The Department had a straightforward and unrestrictive option that would have fully served that interest: terminate the contract and hire a different developer. Indeed, Anthropic offered to facilitate a transition to one of its competitor's systems, and the Department is reportedly negotiating agreements with one or more frontier AI developers.
158. Defendants' First Amendment violations have caused Anthropic ongoing and irreparable harm.
COUNT III
ARTICLE II OF THE U.S. CONSTITUTION; ULTRA VIRES ACTION
(EQUITABLE CAUSE OF ACTION)
(ALL DEFENDANTS)
159. Anthropic incorporates by reference the allegations of the preceding paragraphs.
160. "The ability to sue to enjoin unconstitutional actions by state and federal officers is the creation of courts of equity, and reflects a long history of judicial review of illegal executive action, tracing back to England." Armstrong, 575 U.S. at 327. "When an executive acts ultra vires, courts are normally available to reestablish the limits on his authority." Reich, 74 F.3d at 1328. "[I]t remains the responsibility of the judiciary to ensure that the President act[s] within those limits" that Congress and the Constitution place on him. Am. Forest Res. Council v. United States, 77 F.4th 787, 797 (D.C. Cir. 2023); accord Murphy Co. v. Biden, 65 F.4th 1122, 1129-31 (9th Cir. 2023).
161. Under longstanding Supreme Court precedent, "[t]he President's power, if any, to issue [that] order must stem either from an act of Congress or from the Constitution itself." Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579, 585 (1952).
162. The February 27 Presidential Directive purported to order "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology."
163. The President has no inherent Article II authority for the Presidential Directive. There is no "executive practice, long pursued to the knowledge of the Congress and never before questioned," Youngstown, 343 U.S. at 610 (Frankfurter, J., concurring), of Presidents using their official position to punish corporations for expressing views on matters of public concern in negotiations with the government. The "President enjoys no inherent authority," Learning Res., Inc. v. Trump, 2026 WL 477534, at *7 (U.S. Feb. 20, 2026), to force companies to choose between removing critical use limitations from their products or suffer immediate and widespread debarment at the hands of the government. No other President has even attempted to claim such powers.
164. Nor is there any statutory authority for such a directive. Congress has enacted a comprehensive statutory regime governing federal procurement. This includes statutes in Title 41 of the U.S. Code, as well as those in Title 10, which are specific to the Department. The government also has promulgated thousands of pages of regulations and individual agency guidance that comprehensively address how procurement authority is administered. Under this detailed framework, if the government and a contractor cannot agree on terms for procured services, the ordinary remedy is for the government not to award a contract or to terminate an awarded contract for its convenience. See 48 C.F.R. § 49.502. Debarment is not a remedy for mere contract failure; rather, it is limited to addressing specific "serious . . . irregularities," may never be used "for purposes of punishment," and may be imposed only after providing robust procedural protections. 48 C.F.R. § 9.402(b); see 48 C.F.R. subpart 9.4.
165. The President's directive finds no support in this calibrated statutory and regulatory framework. And even the President cannot "attempt[] to delegate to himself the power to act arbitrarily." Joint Anti-Fascist Refugee Comm. v. McGrath, 341 U.S. 123, 138 (1951). The President likewise cannot direct federal agencies to disregard their duly promulgated regulations. Cf. Nat'l Env't Dev. Ass'n's Clean Air Proj. v. EPA, 752 F.3d 999, 1009 (D.C. Cir. 2014) ("[An] agency is not free to ignore or violate its regulations while they remain in effect."). The President's abrupt directive to cancel Anthropic's contracts en masse violates these foundational principles.
166. Finally, the Presidential Directive "possess[es] almost every quality of [an unlawful] bill[] of attainder." McGrath, 341 U.S. at 143-44 (Black, J., concurring). It functions as a "prepared and proclaimed government blacklist[]," punishing Anthropic—and only Anthropic—without any formal investigation, trial, or even informal process. From the Founding, such measures have been "forbidden to both national and state governments." Id. at 144. It cannot be "that the authors of the Constitution, who outlawed the bill of attainder, inadvertently endowed the executive with [the] power to engage in the same tyrannical practices that had made the bill such an odious institution." Id.
167. The President's ultra vires directive, and any actions by other Defendants implementing the Presidential Directive, have caused Anthropic ongoing and irreparable harm.
COUNT IV
FIFTH AMENDMENT TO THE U.S. CONSTITUTION (DUE PROCESS)
(EQUITABLE CAUSE OF ACTION; 5 U.S.C. § 702)
(ALL DEFENDANTS)
168. Anthropic incorporates by reference the allegations of the preceding paragraphs.
169. The Fifth Amendment's Due Process Clause guarantees that "[n]o person shall . . . be deprived of life, liberty, or property, without due process of law." U.S. Const. amend. V.
170. To succeed on its procedural due process claim, Anthropic must show (1) a deprivation of a protected liberty or property interest; (2) by the government; (3) without the process that is due under the Fifth Amendment. E.g., Reed v. Goertz, 598 U.S. 230, 236 (2023).
171. The Challenged Actions implicate multiple interests protected by the Due Process Clause. They impair Anthropic's liberty interest in its reputation. Wisconsin v. Constantineau, 400 U.S. 433, 437 (1971). They also deprive Anthropic of its property interests in its existing contracts with the government and private sectors. See Al Haramain Islamic Found. v. U.S. Dep't of Treasury, 686 F.3d 965, 973, 979-80 (9th Cir. 2011); Ulrich v. City & Cnty. of San Francisco, 308 F.3d 968, 976 (9th Cir. 2002) ("'[I]t has long been settled that a contract can create a constitutionally protected property interest[.]'"). They purport to (1) terminate Defendants' contracts with Anthropic, (2) require many of Anthropic's largest customers to terminate their contracts with Anthropic, (3) prohibit Anthropic from participating in federal contracting, and (4) bar Anthropic from engaging in any future business with any entity that contracts with the Department.
172. In addition, by purporting to exclude Anthropic from contracting with any federal agency (apparently for all time), they accomplish a de facto debarment that infringes on Anthropic's liberty interest in pursuing its chosen trade. See Trifax Corp. v. District of Columbia, 314 F.3d 641, 643-44 (D.C. Cir. 2003) ("Debarring a corporation from government contract bidding constitutes a deprivation of liberty that triggers the procedural guarantees of the Due Process Clause."); see also Old Dominion Dairy Prods., Inc. v. Sec'y of Def., 631 F.2d 953, 955-56 (D.C. Cir. 1980); Eng'g v. City & Cnty. of San Francisco, 2011 WL 13153042, at *7 (N.D. Cal. Feb. 14, 2011).
173. The Challenged Actions imposed these draconian punishments on Anthropic without any meaningful process. Defendants did not provide Anthropic with any factual findings remotely supporting the actions taken, much less a meaningful opportunity to challenge them. In short, the government took these punitive actions "without providing the 'core requirements' of due process: adequate notice and a meaningful hearing." Jenner & Block LLP v. U.S. Dep't of Just., 784 F. Supp. 3d 76, 108-09 (D.D.C. 2025) (citation omitted). "[I]f the government must provide due process before terminating a contractor of its own, surely it must do the same before blacklisting an entity from all its contractors' Rolodexes." Id. at 109.
174. To the extent that a formal process did occur out of public view, it is clear that the outcome was fatally predetermined by the Department's retaliatory animus. Prejudgment and process tainted by animus do not satisfy the requirements of the Due Process Clause.
175. Defendants' violations of due process have caused Anthropic ongoing and irreparable harm.
COUNT V
ADMINISTRATIVE PROCEDURE ACT
(5 U.S.C. §§ 558, 706(2))
(ALL AGENCY DEFENDANTS)
176. Anthropic incorporates by reference the allegations of the preceding paragraphs.
177. The APA provides that "[a] sanction may . . . be imposed or a substantive . . . order issued [only] within jurisdiction delegated to the agency and as authorized by law." 5 U.S.C. § 558(b). Thus, the APA prohibits an agency from imposing sanctions or issuing orders that exceed the scope of authority delegated to it by Congress.
178. After the President issued the Presidential Directive on February 27, numerous agencies promptly issued sanctions and orders against Anthropic.
179. For example, the Secretarial Order not only purported "to designate Anthropic a Supply-Chain Risk to National Security," but also directed that, "[e]ffective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The Secretarial Letter issued on March 4 purported to formalize that final decision.
180. Later on Friday, February 27, 2026, GSA issued an order removing Anthropic from its Multiple Award Schedule and USAi.gov. The Multiple Award Schedule is the federal government's primary procurement vehicle, which previously allowed Anthropic to compete for procurement opportunities at the federal, state, and local levels. USAi.gov is a "sandbox," a centralized platform for federal agencies to test, experiment with, and deploy AI models from leading providers, including—up to GSA's action—Anthropic.
181. Also on February 27, 2026, HHS reportedly took immediate steps to "disabl[e] enterprise Claude" as a result of the President's directive, thereby eliminating Anthropic's ability to continue to provide its services and compete with other AI providers across HHS's network.
182. On March 2, 2026, Treasury Secretary Bessent issued a statement on X that the Treasury was "terminating all use of Anthropic products . . . within the department" because the "American people deserve confidence that every tool in government serves the public interest." The same day, the State Department announced that it was "taking immediate steps to implement the [President's] directive" and switch "the model powering its in-house chatbot . . . to OpenAI from Anthropic." The Federal Housing Finance Agency also released statements that it and mortgage agencies Fannie Mae and Freddie Mac would terminate all use of Anthropic products.
183. On information and belief, additional federal agencies are positioned to issue similar directives and orders.
184. These actions are substantive "orders" within the meaning of 5 U.S.C. § 558(b) because they are "final disposition[s] . . . of an agency in a matter other than rule making." 5 U.S.C. § 551(6). These actions also are "sanctions" within the meaning of Section 558(b) because they impose "limitation[s]" and "other . . . restrictive action[s]" affecting Anthropic's freedom to compete with other AI providers for procurement opportunities and its ability to protect its reputation as an AI provider serving the public interest. 5 U.S.C. § 551(10).
185. No statute authorizes federal agencies to impose, abruptly and en masse, orders and sanctions that limit Anthropic's ability to compete and impugn Anthropic's reputation.
186. "Congress could not speak more clearly than it has in the text of the APA: 'a sanction may not be imposed or a substantive . . . order issued except within jurisdiction delegated to the agency and as authorized by law.'" Am. Bus Ass'n v. Slater, 231 F.3d 1, 7 (D.C. Cir. 2000) (citing 5 U.S.C. § 558(b)). The Challenged Orders of the non-Department Agencies are "without statutory authorization," id., and must be set aside under the APA.
187. Defendants' APA violations have caused Anthropic ongoing and irreparable harm.
PRAYER FOR RELIEF
For these reasons, Plaintiff respectfully requests an order that:
1. As to the Secretarial Order:
a. Declares the Secretarial Order, and the implementing Secretarial Letter, arbitrary, capricious, an abuse of discretion, and contrary to law under 5 U.S.C. § 706(2)(A);
b. Declares the Secretarial Order, and the implementing Secretarial Letter, contrary to constitutional right under 5 U.S.C. § 706(2)(B);
c. Declares the Secretarial Order, and the implementing Secretarial Letter, in excess of statutory jurisdiction, authority, or limitations under 5 U.S.C. § 706(2)(C);
d. Sets aside and vacates the Secretarial Order, and the implementing Secretarial Letter, in their entirety under 5 U.S.C. § 706(2);
e. Stays the effective date of the Secretarial Order, and the implementing Secretarial Letter, under 5 U.S.C. § 705 until the conclusion of judicial proceedings in this action.
2. As to the Presidential Directive:
a. Declares that the Presidential Directive exceeds the President's authority and violates the First Amendment and Fifth Amendment to the United States Constitution.
3. As to all of the Challenged Actions:
a. Permanently enjoins Defendants and all their officers, employees, and agents from implementing, applying, or enforcing the Challenged Actions;
b. Directs Defendants and their agents, employees, and all persons acting under their direction or control to rescind any and all guidance, directives, or communications that have issued relating to the implementation or enforcement of the Challenged Actions, including the Secretarial Letter;
c. Directs Defendants and their agents, employees, and all persons acting under their direction or control to immediately issue guidance to their officers, staff, employees, contractors, and agents to disregard the Challenged Actions and any implementing directives;
d. Awards Plaintiff its costs and reasonable attorneys' fees as appropriate; and
e. Grants such further and other relief as this Court deems just and proper.
Date: March 9, 2026