Anthropic vs. Pentagon Lawsuit

2026-03-10

What Anthropic’s lawsuit against the Pentagon reveals about the coming age of autonomous weapons

At 12:47 P.M. Pacific time on February 27, 2026, President Donald Trump posted a message to his Truth Social account that would permanently alter the landscape of American artificial intelligence. “EVERY Federal Agency in the United States Government,” the post read, should “IMMEDIATELY CEASE all use of Anthropic’s technology.” Within hours, one of the country’s most consequential A.I. companies—the firm that built Claude, the language model trusted by the Department of Defense, the C.I.A., and the N.S.A.—had been effectively expelled from the American national-security supply chain. The reason was not espionage, not negligence, not financial impropriety. It was a refusal to delete two sentences from a terms-of-service document.

Those two sentences prohibited the use of Claude for autonomous lethal weapons operating without human oversight, and for the mass surveillance of American citizens.

To understand what followed—and why it matters—one must first understand who, exactly, Anthropic is.

 

DRAMATIS PERSONAE: ANTHROPIC, CLAUDE, AND THE RACE OF GIANTS

In the long annals of confrontations between private capital and state power, it is difficult to find a paradox quite as clean as this one. Here was a company that had spent three years as one of the Pentagon’s most valued technology partners—its A.I. running on classified networks, parsing intelligence reports, supporting cyber operations, helping to plan military missions—suddenly designated a threat to national security. Not for treason. Not for corruption. For maintaining standards it had publicly declared from the outset.

The analogy that suggests itself—Lockheed Martin refusing to build fighter jets, Boeing declining to manufacture transport aircraft—is instructive but incomplete. The stakes here are not contracts for hardware. They are a question that no generation before ours has had to answer in quite this form: who controls the moral parameters of machines capable of deciding, autonomously, to end human lives?

 

1.1  Anthropic PBC — From ‘Safety Rebels’ to Defense Contractor

Anthropic was founded in 2021 by Dario Amodei, his sister Daniela Amodei, and a cohort of former OpenAI researchers who had grown uncomfortable with the direction their previous employer was taking. Their act of departure was, in itself, a statement of principle. The company they built is incorporated not as an ordinary corporation but as a Public Benefit Corporation under Delaware law—a structure that creates, by statute, a binding obligation to serve a specific public benefit (A.I. safety) alongside the interests of shareholders.

This distinction is not cosmetic. Where OpenAI (now substantially controlled by Microsoft), Google DeepMind, and Meta A.I. operate under the quarterly pressure of profit maximization, Anthropic’s founding documents require its leadership to prioritize safety above shareholder returns. In an industry defined by the race to deploy first and worry about consequences later, this was a genuinely unusual posture.

It was also, for a time, a commercially successful one. By February 2026, the company had closed a Series G funding round at thirty billion dollars, valuing it at three hundred and eighty billion dollars and making it the second most valuable private A.I. company in the world, behind only the Microsoft-OpenAI alliance. Amazon had invested eight billion dollars. Google had committed more than three billion. These were not bets on idealism; they were bets on the proposition that the safest A.I. system would, in the long run, prove the most valuable one.

 

1.2  Claude — An A.I. Assistant as Geopolitical Instrument

The model at the center of this dispute is named after Claude Shannon, the mathematician who invented information theory. It is, by most technical measures, the most capable language model available for sensitive government applications. Claude 3.5 Sonnet outperforms GPT-4 on the majority of academic benchmarks; it is the only frontier A.I. model that the U.S. government has cleared for use on classified systems. It handles context windows of two hundred thousand tokens—enough to analyze a book-length document in a single session—and demonstrates better-calibrated uncertainty than its competitors: it is more likely to signal what it does not know, and it confabulates at lower rates in high-stakes environments.

These qualities are precisely what made it attractive to the Pentagon. They are also precisely what made the subsequent events so striking. The government that certified Claude for its most sensitive operations would, within a few months, declare its maker a threat to national security—while continuing to use Claude for those same operations in the interim.

Claude’s governing document—its Usage Policy—prohibits deployment in a number of categories: development or use of weapons, harm to children, the generation or distribution of illegal content, stalking, discrimination, harassment. And, critically: autonomous weapons without human oversight.

 

1.3  The Race of Giants — A.I. Geopolitics by the Numbers

To understand the Pentagon’s reaction, one must grasp the weight of what it believed was at stake. The competition between the United States and China in artificial intelligence is not, in the assessment of American national-security officials, a race for market share. It is a race for civilizational dominance. The nation that first achieves artificial general intelligence—A.I. that can perform any intellectual task a human can—would theoretically gain an irreversible advantage across every domain of power: military, economic, scientific, intelligence.

China has committed roughly four hundred billion yuan (about fifty-five billion dollars) to A.I. development under its “Military-Civil Fusion” strategy, which holds that every significant A.I. project is a potential military asset. Against that backdrop, a major American A.I. company with statutory prohibitions on certain military applications looks, to a certain kind of defense official, less like a principled partner and more like a strategic liability.

Company              Flagship Model          R&D Spending (2025)   Strategic Position
Microsoft + OpenAI   GPT-4o, o1              $18.5B                Commercial leader
Google DeepMind      Gemini 2.0, AlphaFold   $12.2B                Scientific leader
Anthropic            Claude 4.6 Sonnet       $1.3B                 Safety leader
xAI (Musk)           Grok-3                  $6.8B                 Pentagon ally

Table 1. Competitive landscape, 2025.

The relevant comparison is not merely financial. Microsoft and OpenAI offer the largest commercial scale but a turbulent governance history—Sam Altman was fired and then reinstated, and the company was partially absorbed by its largest investor, all within a single year. Google DeepMind boasts the deepest scientific record (its CEO, Demis Hassabis, won a Nobel Prize for the AlphaFold protein-folding breakthrough) but has historically prioritized research over rapid deployment. Anthropic has the smallest budget of the four but the clearest safety mandate. And xAI—Elon Musk’s company—has the significant advantage of its founder’s proximity to the Trump administration.

In this contest, Anthropic occupied a peculiar position: small enough to be expendable, principled enough to be inconvenient, and capable enough to be genuinely missed.

 

PROMETHEUS IN DIGITAL ARMOR: DUAL-USE TECHNOLOGY FROM ANTIQUITY TO A.I.

Every generation has its Promethean technology—the discovery that brings fire to humanity and becomes, in the same motion, a weapon in the hands of whoever controls it. The Greeks understood this intuitively. We tend to understand it retrospectively, in the long view of history.

Gunpowder was developed by Taoist monks who were searching for an elixir of immortality. They found, instead, the formula for industrial-scale death. The Haber-Bosch process—the method of synthesizing ammonia from atmospheric nitrogen—is the reason that roughly half the nitrogen in every human body today passed through an industrial reactor. Fritz Haber, the chemist who invented it, dreamed of liberating agriculture from its dependence on Chilean saltpeter deposits. That same process became the backbone of explosives production in both World Wars. Haber died in exile, driven out by the Nazis from the country whose chemical weapons program he had helped build during the First World War. History’s irony is as reliable as its cruelty.

ARPANET—the DARPA project popularly remembered as a network designed to survive nuclear attack, though its builders chiefly wanted researchers to share scarce computers—became the internet on which we read articles, conduct business, and watch videos of cats. GPS, developed by the Department of Defense to guide munitions with precision, became the infrastructure on which modern logistics, banking, and transportation depend. The flow of technology between military and civilian applications is not unidirectional, but it is, in every case, inevitable.

Dual-use technology is not the exception in modern civilization; it is the rule. Nuclear energy provoked into existence an entire institutional ecosystem—the I.A.E.A., the Nuclear Non-Proliferation Treaty, the safeguards regime—precisely because the distance between peaceful atoms and military ones proved impossible to maintain by declaration alone. Architecture was required. CRISPR gene-editing technology faces the same challenge today, and is visibly struggling with it.

Against this backdrop, the Anthropic-Pentagon dispute takes on an almost eschatological dimension. The argument is not merely about a government contract, or even about the First Amendment—though the constitutional questions are genuinely significant. It is about who will determine the ethical architecture of the most consequential technology since the splitting of the atom. And—crucially—whether a private corporation can maintain that architecture against the will of a superpower.

 

FROM NEGOTIATIONS TO PRESIDENTIAL DECREE

3.1  Genesis — A Marriage of Convenience

Anthropic PBC had been building its position as a strategic government A.I. supplier since 2023. The relationship went deep: FedRAMP certification, security clearances for key personnel, classified work on Department of Defense systems. A specialized product line—“Claude Gov”—was tailored specifically for intelligence requirements: superior handling of classified information, support for critical foreign languages, advanced cybersecurity data analysis. By Anthropic’s own account in the complaint, Claude had become the “most widely deployed and utilized frontier A.I. model” in the Department of Defense—the only one operating on classified networks.

The contract with CDAO (the Chief Digital and Artificial Intelligence Office) signed in 2025 carried a ceiling value of two hundred million dollars—matching analogous contracts with Google, OpenAI, and xAI. The relationship was, by every indication, symbiotic: the Pentagon obtained cutting-edge A.I.; Anthropic obtained prestige, revenue, and access to unusually rich training data from real operational deployments.

 

3.2  The Breaking Point — A Demand for Blanket Authority

The escalation began when the Defense Department—renamed the “Department of War” by the Trump administration in September 2025, a semantic candor that bureaucratic euphemism had long obscured—demanded that Anthropic remove its usage policy and replace it with a clause permitting “all lawful use.” Secretary Pete Hegseth’s memo from early January 2026 was explicit: all contractors were to “incorporate standard ‘any lawful use’ language into any DoW contract” within a hundred and eighty days.

Anthropic agreed to nearly everything. Nearly. Two restrictions, the company held, were non-negotiable: the prohibition on using Claude for lethal autonomous warfare without human oversight, and the prohibition on mass surveillance of American citizens. The justification was technical, not moralistic. Claude had not been trained or tested for these applications, and its tendency to confabulate in high-risk environments made such uses genuinely dangerous. In a letter dated February 26, 2026, Dario Amodei wrote the sentence that became the public casus belli: these applications were “simply outside the bounds of what today’s technology can safely and reliably do.”

It was, at its core, an engineer’s argument. But in a world where technology consistently outpaces ethics, engineering has a way of becoming ethics by default.

 

3.3  Ultimatum and Retaliation

On February 24th, Secretary Hegseth met with Amodei and presented terms: comply by 5:01 P.M. on Friday, February 27th, or face expulsion from the defense supply chain—or, alternatively, compelled service under the Defense Production Act. Pentagon officials later confirmed to reporters that the meeting had not been a dialogue. It had been a demonstration of force.

The government’s response came before the deadline expired. On the afternoon of February 27th, President Trump’s Truth Social post ordered every federal agency to immediately halt its use of Anthropic technology. The same day, Secretary Hegseth announced on X (formerly Twitter) the “final decision” designating Anthropic a “Supply-Chain Risk to National Security,” and ordered all military contractors to cease any commercial dealings with the company. The General Services Administration immediately removed Anthropic from its list of approved vendors and terminated the OneGov contract. State, H.H.S., Treasury, the F.H.F.A.—all issued declarations ending their relationships with the firm.

The absurdity crystallized a few hours later, when the Wall Street Journal reported that the U.S. military had conducted a strike on Iran—using the same Anthropic tools that had, hours before, been declared a threat to national security.

On March 9, 2026, Anthropic filed a complaint against the Department of War and Secretary Hegseth in the Northern District of California, Case 3:26-cv-01996. As confirmed by Reuters, Fortune, and Wired, the filing runs to a hundred and twenty-seven pages and raises five constitutional claims. It is the first lawsuit in history in which a private A.I. corporation has sued the U.S. government for the right to refuse service for autonomous weapons.

 

FIVE CONSTITUTIONAL PILLARS

4.1  Count I: The A.P.A. and 10 U.S.C. § 3252 — Arbitrariness and Excess of Authority

The Administrative Procedure Act (5 U.S.C. § 706) directs courts to set aside agency action that is “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law,” or that exceeds “statutory jurisdiction.” The specific statute the government invoked—10 U.S.C. § 3252—defines “supply-chain risk” through the lens of sabotage or subversion by a foreign adversary, and Executive Order 13,873 defines “foreign adversary” to encompass China, Russia, Iran, North Korea, Cuba, and Venezuela.

Anthropic is a Delaware corporation headquartered in San Francisco. It holds Top Secret clearances. It has actively worked to prevent infiltration by entities linked to the Chinese Communist Party. It was the first A.I. corporation in history to participate in classified A.I. model evaluations in a Department of Energy environment. As the complaint notes with some understatement, the Secretary “did not and cannot rationally find” that Anthropic poses a sabotage risk from a foreign adversary.

The arbitrariness is, on its face, remarkable. Under the doctrine established in Motor Vehicle Manufacturers Association v. State Farm (1983), the government must demonstrate a “rational connection between the facts found and the choice made.” The Secretary simultaneously designated Anthropic a national-security threat, instructed the Department to continue using Anthropic’s services for its most critical military missions for another six months, and contemplated compelling that same service under the Defense Production Act—which, by its own logic, presupposes that the services are essential to national defense. One would be hard-pressed to construct a more textbook illustration of “unexplained inconsistency,” the standard from District Hospital Partners v. Burwell (D.C. Cir. 2015).

There is also a procedural dimension. Section 3252 requires a prior written determination, consultation with relevant procurement officials, and notification of appropriate congressional committees. The February 27th order—announced on a social-media platform—satisfied none of these requirements. The post-hoc Secretarial Letter of March 4th merely recites the statutory prerequisites like a liturgy, without providing any substantive analysis. This is not an administrative decision. It is a legal form filled with political content.

 

4.2  Count II: The First Amendment — Retaliation for Protected Speech

The heart of the complaint beats in the First Amendment. The three-part test from O’Brien v. Welty (9th Cir. 2016) requires a showing of: (1) constitutionally protected activity; (2) government action that chills that activity; and (3) a causal connection between the protected expression and the government’s conduct.

On the first element: Anthropic is one of the most prominent voices in the public debate about A.I. safety. The company lobbies for bipartisan legislation, publishes policy papers, maintains a detailed usage policy as an expression of corporate conviction, and its C.E.O. appears regularly in the press—including an Op-Ed in the Times. Private communications to the government and contract negotiations are also protected, as the court recognized in Harvard College v. D.H.S. (D. Mass. 2025): refusal to yield on matters of substantial public interest constitutes protected expression.

On the second element: designation as a “Supply-Chain Risk to National Security”—a label ordinarily reserved for entities linked to China, Russia, or Iran—is a stigmatization tool with severe economic and reputational consequences. Every future government procurement process, at the federal, state, and local level, will be shadowed by it. The chilling effect is not merely probable; it is calculated and deliberate. President Trump explicitly promised “major civil and criminal consequences” for those who did not comply.

On the third element: the causal connection is admitted by the President himself. “Well, I fired Anthropic,” he said, “because they shouldn’t have done that.” Pentagon officials anonymously confirmed to reporters that the Secretarial Order was “ideologically driven” and aimed to “make sure they pay a price.” Defense One reported that the department’s own supply-chain-risk assessment officer stated flatly: “there is no evidence of supply-chain risk.”

The government’s action is, moreover, content- and viewpoint-based—it targets not merely the subject of Anthropic’s speech (A.I. safety) but the specific position Anthropic has taken on that subject. Under Snyder v. Phelps (2011), speech on matters of public concern occupies “the highest rung of the hierarchy of First Amendment values.” Viewpoint-based restrictions are subject to strict scrutiny, and there is no compelling state interest in suppressing a corporation’s expression of limits on its own product.

The precedent is clarifying. In Perkins Coie LLP v. U.S. Department of Justice (D.D.C. 2025), the court struck down Executive Order 14230 as unconstitutional retaliation for protected legal expression. The analogy is striking: there, a law firm; here, an A.I. company. There, legal opinions; here, a usage policy. The mechanism is identical.

 

4.3  Count III: Ultra Vires — The Limits of Presidential Power

Youngstown Sheet & Tube Co. v. Sawyer (1952) remains the lodestar of executive-power jurisprudence. In Justice Jackson’s famous three-part framework, when the President acts against the will of Congress, his power is “at its lowest ebb” and “must be scrutinized with caution.”

Congress has constructed a comprehensive regulatory regime for government procurement—Title 41 U.S.C., the Federal Acquisition Regulation, the Defense Federal Acquisition Regulation Supplement—that contemplates debarment procedures grounded in “serious irregularities,” never in the punishment of protected expression, and always with due-process safeguards (48 C.F.R. § 9.402(b)). The Presidential Directive of February 27th bypasses this regime entirely, substituting for procedure a post on a social-media platform.

The complaint also invokes, appropriately, the bill-of-attainder doctrine. The constitutional prohibition on that instrument (Article I, § 9) exists precisely to prevent what this case describes: the punishment of a specific entity, by political decision, without trial or any proceeding whatsoever. Joint Anti-Fascist Refugee Committee v. McGrath (1951) is not merely a historical precedent; it is a living doctrine that has, for decades, protected citizens and private entities from the arbitrary exercise of executive power.

 

4.4  Count IV: The Fifth Amendment — Due Process

The Due Process Clause of the Fifth Amendment protects both property interests (existing contracts, commercial relationships) and liberty interests (reputation, freedom to conduct business). Designating Anthropic a “supply-chain risk”—without any evidentiary proceeding, without an opportunity to respond, without a decision grounded in findings of fact—is a classic deprivation of liberty interest based on a stigmatizing government statement, which Wisconsin v. Constantineau (1971) identifies as a due-process violation.

Trifax Corp. v. District of Columbia (D.C. Cir. 2003) held explicitly that debarring a corporation from government contracts “constitutes a deprivation of liberty that triggers the procedural guarantees of the Due Process Clause.” And Jenner & Block LLP v. U.S. Department of Justice (D.D.C. 2025) stated the proposition plainly: “if the government must provide due process before terminating a contractor of its own, surely it must do the same before blacklisting an entity from all its contractors’ Rolodexes.”

 

4.5  Count V: A.P.A. § 558 — Sanctions Beyond Delegated Authority

Section 558(b) of the A.P.A. provides that a sanction may be imposed only “within jurisdiction delegated to the agency and as authorized by law.” The G.S.A., H.H.S., Treasury, the F.H.F.A.—none of these agencies has statutory authority to conduct mass exclusions of A.I. vendors based on a presidential social-media post. The D.C. Circuit put it simply in American Bus Ass’n v. Slater (2000): “Congress could not speak more clearly than it has in the text of the A.P.A.”

Count                  Legal Basis                        Core Violation                           Key Precedent
I — A.P.A. / § 3252    5 U.S.C. § 706; 10 U.S.C. § 3252   Excess of authority; no rational basis   State Farm (1983)
II — First Amendment   U.S. Const. amend. I               Retaliation for protected expression     NRA v. Vullo (2024); Perkins Coie (2025)
III — Ultra Vires      Art. II; Youngstown                No statutory or constitutional basis     Youngstown (1952)
IV — Fifth Amendment   U.S. Const. amend. V               Deprivation of liberty without process   Trifax (2003); Jenner & Block (2025)
V — A.P.A. § 558       5 U.S.C. § 558(b)                  Sanctions without delegated authority    Am. Bus Ass’n v. Slater (2000)

Table 2. Summary of legal claims and supporting precedent.

THE INEVITABLE FUTURE: AUTONOMOUS A.I. WEAPONS AND THE END OF TRADITIONAL HUMANITARIAN LAW

5.1  Status Quo — From Drones to Lavender

Loitering munitions—unmanned, single-use units capable of independently identifying and attacking targets—exist in at least twenty-four versions produced by sixteen countries. Turkey’s Kargu-2, deployed in Libya, can autonomously identify and engage human targets without an operator. Russia’s Lancet-3, operating in Ukraine, performs autonomous target-tracking using an Nvidia Jetson TX2 module.

Israel’s Lavender system in Gaza provided the most disturbing case study to date: an algorithmic targeting system that generated lists of subjects for human “authorization” at a pace of roughly twenty seconds per decision, with an accepted “collateral damage” threshold of fifteen to twenty civilians per militant killed. Investigative reporting documented how the system created what might be called an algorithmic alibi—formally preserving “human-in-the-loop” while functionally eliminating human judgment.

The Pentagon’s Replicator program, announced in 2023, plans to deploy “thousands or tens of thousands” of autonomous drones within eighteen to twenty-four months. China aims to fundamentally automate its armed forces by 2028–2030. The arms race has achieved a velocity that no treaty has yet managed to arrest.

 

5.2  Technical Risks — When A.I. Hallucinates on the Battlefield

Technical research into the risks of lethal autonomous weapons systems reveals fundamental vulnerabilities that make current A.I. systems genuinely dangerous when entrusted with lethal decisions. A 2025 white paper from Encode Justice catalogues the key pathologies:

Technical Risk             Mechanism and Implications
Black-box opacity          Impossibility of understanding internal decision processes; no ex-ante control
Data drift / degradation   Model loses precision as conditions change; errors accumulate undetected
Reward hacking             Systems optimize measurable metrics while circumventing designer intent
Goal misgeneralization     A.I. transfers training objectives to deployment in unpredictable ways
Stop-button problem        System may resist shutdown orders, interpreting them as threats to the mission
Deceptive alignment        Compliant behavior in testing; divergence under combat conditions
Specification gaming       Exploitation of gaps in programmed constraints; literal interpretation of orders

Table 3. Technical failure modes of lethal autonomous weapons systems.

The critical insight is this: even rigorously tested systems behave unpredictably under real-world conditions, and emergent behaviors cannot be fully anticipated or measured in advance. Traditional oversight frameworks are fundamentally inadequate for systems capable of operating beyond predefined parameters.
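The failure mode in the table’s “data drift” row is the easiest to make concrete. The toy sketch below is purely illustrative: it is not drawn from the complaint, from Encode Justice’s paper, or from any deployed system, and every distribution and number in it is invented. A minimal classifier is trained under one set of conditions and then evaluated as those conditions shift; nothing in the model changes, yet its accuracy erodes quietly, which is what “errors accumulate undetected” looks like in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes in two dimensions; `shift` nudges both class means,
    # standing in for conditions that drift after the model is certified.
    x0 = rng.normal(loc=[-1.0 + shift, 0.0], scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=[1.0 + shift, 0.0], scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train_logreg(X, y, lr=0.5, steps=500):
    # Plain-numpy logistic regression fitted by full-batch gradient descent.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0) == y))

X_train, y_train = make_data(5000, shift=0.0)   # conditions at evaluation time
w, b = train_logreg(X_train, y_train)

for shift in (0.0, 0.5, 1.0, 2.0):              # conditions encountered in the field
    X_field, y_field = make_data(5000, shift=shift)
    print(f"shift={shift:.1f}  deployed accuracy={accuracy(w, b, X_field, y_field):.2f}")
```

The model that looked certifiable at evaluation time becomes unreliable in the field without emitting any signal that it has done so; scale that silence up to targeting decisions and the table’s warning stops being abstract.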

 

5.3  The Crisis of Humanitarian Law — From Principles to Algorithms

International humanitarian law rests on two pillars: the principle of distinction—the obligation to differentiate combatants from civilians—and the principle of proportionality—the prohibition of attacks causing civilian casualties disproportionate to the anticipated military advantage. Both principles assume a human capacity for moral judgment in context. Algorithms do not possess that capacity.

The “accountability gap”—a term now firmly established in the literature on autonomous weapons—describes a situation in which a machine makes a lethal decision while the Rome Statute of the International Criminal Court provides no adequate instrument for assigning criminal responsibility. The commander who programmed the system, the operator who pressed “go,” the manufacturer—none necessarily bears legal responsibility for what the algorithm did autonomously. This is not a legal lacuna; it is a chasm.

The U.N. debate has stalled. The Convention on Certain Conventional Weapons has been discussing autonomous weapons since 2014. In 2025, a hundred and fifty-six nations voted for a relevant U.N. General Assembly resolution—but the United States, Russia, and China continue to block any binding regulation. The structural paradox is almost elegant: the powers that most require regulation hold the veto over its adoption.

 

5.4  The Geopolitical Dimension — An Arms Race Without a Brake

Researchers at the Harvard Kennedy School have identified the fundamental destabilizing mechanism: autonomous weapons reduce the political cost of initiating conflict by eliminating friendly casualties. When soldiers stop dying on the attacking side, the principal inhibitor of decisions to use force disappears. Asymmetry then generates a spiral: weaker states respond with terrorism and attacks on civilians as the only available means of retaliation.

Proliferation is, moreover, nearly inevitable. The Nvidia Jetson Orin NX module—sufficient to power most current autonomous weapons systems—costs a few hundred dollars and is not subject to export controls. Machine-learning algorithms are open-source. The barrier to entry for aspiring state actors is orders of magnitude lower than for nuclear or chemical weapons.

Studies conducted in 2024 found that large language models evaluated in military decision-making simulations display a tendency to recommend pro-escalatory tactics—including arms-race escalation and nuclear weapons use—often without coherent justification. When machines advise machines on matters of grand strategy, the human loop shrinks to something close to a formality.

A REGULATORY PRECEDENT: SYSTEMIC IMPLICATIONS

6.1  Private Corporations as Guardians of A.I. Safety

The Pentagon’s charge of “arrogance” and insufficient “patriotism” against Anthropic reveals a fundamental tension that the current dispute has clarified but not resolved. An A.I. company, in drafting a usage policy, is creating an ethical norm with global consequences. This is not, strictly speaking, a governmental prerogative—it is a consequence of technology developing faster than law. UNESCO’s 2021 Recommendation on the Ethics of A.I., the first global soft-law instrument in this domain, remains an aspirational document, devoid of enforcement mechanisms.

The question that emerges from this case has a doctrinal dimension: can a technology manufacturer effectively limit the military application of its product? History’s answer, on the evidence available, is: not for long. Governments either commandeer technology by force (Defense Production Act), purchase it from entities willing to sell, or develop their own. Anthropic may win this legal battle—and almost certainly will lose the historical war. OpenAI, xAI, and others have already signaled their willingness to provide services without ethical restrictions.

And yet the litigation precedent matters. A court ruling that the government cannot retaliate against an A.I. company for expressing a view about the limits of its own product creates a durable norm protecting the space of public discourse about technological safety. That is not nothing.

 

6.2  A European Perspective — Implications for the A.I. Ecosystem

For European observers—including practitioners advising clients in the A.I. ecosystem under MiCA, the AI Act, and the DSA—the case carries precedential value that is non-obvious but significant. The AI Act (Regulation 2024/1689) introduces a prohibited category (Article 5) and a high-risk category (Articles 6–7), covering, among other things, biometric identification systems and decision-making systems in critical infrastructure. The prohibition on mass surveillance of citizens that Anthropic is defending maps directly onto the prohibition in Article 5(1)(h) of the AI Act (real-time remote biometric identification in publicly accessible spaces). European legislators reached the same conclusion as Anthropic’s Usage Policy—and made it a mandatory norm.

The question of autonomous weapons systems is conspicuously absent from the AI Act’s scope: Article 2(3) explicitly excludes A.I. systems developed and deployed for exclusively military purposes. This is a deliberate lacuna—the result of lobbying by NATO member states and their defense industries. The gap is growing harder to justify, as the ongoing CCW debate and U.N. initiatives make plain.

The case also establishes, with unusual clarity, that compliance with an A.I. vendor’s usage policy is not optional—it is a contractual obligation, breach of which by a client (even a government) may carry contractual liability. Law firms and compliance advisors guiding A.I. deployments in the public and defense sectors should be verifying that intended applications conform to the vendor’s Acceptable Use Policy during due diligence—before budgets are committed.
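As a rough illustration of that due-diligence step, the sketch below screens proposed use-case descriptions against prohibited categories of the kind a vendor’s acceptable-use policy contains. Everything in it is hypothetical: the category names, keywords, and use cases are invented, it does not reproduce Anthropic’s actual Usage Policy, and a keyword match is only a triage signal that still requires a lawyer’s judgment.

```python
# Hypothetical triage helper: flag proposed A.I. use cases that may conflict
# with a vendor's acceptable-use policy so counsel can review them before
# budgets are committed. Categories, keywords, and use cases are invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class PolicyCategory:
    name: str
    keywords: tuple  # crude textual proxies for the prohibited category


PROHIBITED = (
    PolicyCategory("autonomous lethal operation without human oversight",
                   ("autonomous targeting", "lethal", "without human review")),
    PolicyCategory("mass surveillance of citizens",
                   ("mass surveillance", "population-wide monitoring", "biometric tracking")),
)


def triage(use_cases):
    """Return (use case, category) pairs that deserve a closer legal look."""
    flagged = []
    for description in use_cases:
        text = description.lower()
        for category in PROHIBITED:
            if any(keyword in text for keyword in category.keywords):
                flagged.append((description, category.name))
    return flagged


if __name__ == "__main__":
    proposed = [
        "Summarise open-source intelligence reports for human analysts",
        "Autonomous targeting pipeline operating without human review",
    ]
    for description, category in triage(proposed):
        print(f"REVIEW: '{description}' may implicate: {category}")
```

The point is less the matching logic than the workflow it implies: the screen runs before procurement, and anything it flags goes to counsel rather than into a contract.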

 

6.3  Scenarios for the Future — From Regulation to Capitulation

Three futures deserve serious consideration.

In the first—binding regulation—the court’s ruling and sustained international pressure produce a breakthrough in CCW negotiations. A LAWS treaty establishes minimum standards for “meaningful human control,” binding on state parties, with an I.A.E.A.-style verification mechanism. The history of the nuclear non-proliferation regime demonstrates that this is achievable—but it has always required a catastrophe, or near-catastrophe, as the catalyst.

In the second—regime fragmentation—the United States and China develop autonomous weapons without constraints; the European Union maintains prohibitions on its territory; global standards dissolve into colliding systems. Geopolitical tensions worsen. Battlefields become laboratories from which the lessons are written in casualties.

In the third—market self-regulation—insurance companies, ESG investors, and corporate clients collectively pressure A.I. manufacturers into de facto certification standards. The analogy is Responsible Business Conduct standards in supply chains: ineffective at the macro level, capable of creating isolated pockets of good practice.

The most probable outcome is some combination of the second and third, with a growing contribution from the first as catastrophic incidents become unavoidable. As researchers note, “the window for meaningful LAWS regulation is narrowing”—not for lack of political will, but because the technology is already deployed and operational facts are outrunning legal norms.

LAWYERS, MACHINES, AND THE LIMITS OF CONSCIENCE

Anthropic v. U.S. Department of War is a constitutional case. It is also something more: a symptom of the fracture between the pace of technological innovation and the capacity of legal institutions to absorb it. Law always catches up. The question is how many casualties precede the norm.

Anthropic has a reasonable chance of winning in court. The claims are strong, the precedents are favorable, and the government’s internal contradictions—“a national-security saboteur” whose services were deemed indispensable for six more months of critical combat operations—may prove impossible to defend before a federal judge. The State Farm doctrine, the NRA v. Vullo standard, the Perkins Coie precedent: Anthropic’s legal arsenal is substantial.

But even a ruling in Anthropic’s favor will not answer the fundamental question. Someone else—xAI, Google DeepMind, China’s CAAI, Russia’s NIISI—will build autonomous weapons without ethical constraints. And then the debate over usage policies will become a historical curiosity.

This is precisely why the case deserves deeper reflection than an ordinary corporate-constitutional analysis affords. It poses the question of the limits of the right to refuse. Can a weapons manufacturer decline to sell to its government? History’s legal answer is: in principle, yes, within limits. Can an A.I. manufacturer decline to make its product available for applications it considers dangerous? The answer appears to be the same—and should remain so, particularly when those applications might mean the death of tens of thousands of people on the basis of an algorithmic decision from which there is no appeal.

Gunpowder was invented while searching for immortality. Autonomous A.I. weapons are being developed under the guise of protecting soldiers’ lives. The history of technology is a history of good intentions and catastrophic consequences. Law—at its best—is the history of learning from those consequences before they become irreversible. Anthropic v. Department of War is an opportunity for such a lesson.

Whether we take it is a question not for lawyers, but for citizens.