When the Pentagon Declared War on Its Own AI

The federal government's confrontation with Anthropic last week was not a procurement dispute. It was a stress test — for the AI industry, for constitutional law, and for the question of whether private companies can keep ethical guardrails in place when the government is the customer and orders them removed. The results were instructive, and not particularly comforting.


A Week That Moved Fast and Broke Things

The timeline is worth laying out plainly, because the speed of it is part of the story.

On February 27, Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei and issued an ultimatum: provide the Pentagon with an AI model "free from usage policy constraints that may limit lawful military applications," or be designated a supply chain risk to national security and potentially be subject to compelled cooperation under the Defense Production Act. 2 Amodei refused, publishing a public letter citing constitutional, ethical, and technical objections to two specific prohibitions embedded in Anthropic's usage policy: AI-enabled mass domestic surveillance of American citizens, and fully autonomous weapons systems — meaning systems that could execute lethal force according to their own judgment, without a human being in the decision loop. 1

On February 28 — a Friday evening, as these things tend to happen — President Trump, posting on Truth Social, ordered every federal agency to immediately cease using Anthropic's technology, calling the company "Leftwing nut jobs" attempting to "STRONG-ARM the Department of War." Hegseth followed with a formal supply chain risk designation, declaring that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." 1

Hours later, OpenAI CEO Sam Altman announced a Pentagon deal, claiming it included the same two protections Anthropic had fought for. 3

Three days. One ultimatum. One blacklisting. One rushed replacement deal. The pace alone should give everyone pause.



The Contradiction at the Heart of It All

The most glaring problem with the government's position is not political — it is logical.

The Pentagon had, earlier in that same week, threatened to invoke the Defense Production Act to compel Anthropic's cooperation, according to Lawfare's account of the February 27 meeting. The Defense Production Act is a Korean War-era statute designed to mobilize private industry in moments of genuine national emergency — to treat a private company, in effect, like a wartime munitions factory. That threat was abandoned in favor of something more dramatic: a formal supply chain risk designation issued the following day. 2

Mark Dalton, senior director of technology and innovation at the R Street Institute, put it plainly in a statement quoted by Reason: the Pentagon was guilty of "consider[ing] Anthropic's technology so vital to national defense that they thought that invoking the Defense Production Act was justified to retain access," and then "suddenly [designating the company] a supply chain risk." 1 The contradiction is not subtle. And it has downstream consequences — Dalton warned that "the next time the designation is applied to a company with actual ties to a foreign adversary, the credibility to make that case will be diminished."

To underscore the absurdity: the designation came with a six-month transition period during which the Pentagon will continue using Claude. The company is simultaneously a national security threat and the AI model the Pentagon needs six months to phase out of its operations.

It is worth noting what the Pentagon has not done: it has not articulated a substantive operational rationale for why Anthropic's specific restrictions — on mass domestic surveillance and fully autonomous weapons — are unworkable in practice. The government's stated position, as relayed through Hegseth and Trump, is principally that private companies cannot use terms of service to constrain "lawful" government activities. That is a real argument about contractor leverage in defense procurement. But it is a different argument from claiming the restrictions themselves endanger troops or degrade military capability — and the administration has largely conflated the two without providing supporting detail. The Pentagon did not respond to requests for comment on the operational case against Anthropic's specific prohibitions.


A Designation Built on Legal Sand

The political theater aside, the legal underpinnings of the designation are, by most expert accounts, a mess. The following analysis draws substantially on Lawfare's detailed review of the statutory framework. 2

Hegseth invoked 10 U.S.C. § 3252, a rarely used Defense Department procurement statute. According to Lawfare, the statute was built to address foreign adversary threats to the IT supply chain — not to sanction a domestic company over a contract dispute. There is exactly one known prior use of comparable authority: a September 2025 designation by the Office of the Director of National Intelligence against Acronis AG, a Swiss cybersecurity firm with reported Russian ties, limited to intelligence community contracts. Anthropic is a Delaware-incorporated American company, and no domestic company is known to have been previously designated under either § 3252 or its civilian-agency counterpart, 41 U.S.C. § 4713.

The problems compound from there. Section 3252 is a Defense Department procurement statute — it does not reach other federal agencies. A government-wide ban requires separate legal authority: specifically, the Federal Acquisition Supply Chain Security Act (FASCSA, 41 U.S.C. §§ 1321–1328 and 4713), which mandates an interagency council process, 30-day notice to the targeted company, and an opportunity to respond before any exclusion order issues. According to Lawfare's analysis, none of that appears to have occurred. Trump's Truth Social directive to "EVERY Federal Agency" to cease using Anthropic's technology has, as Lawfare notes, "no apparent statutory basis." Agencies complying with it are acting on a presidential social media post — and any contract terminations undertaken on that basis would be independently challengeable. 2

Even within § 3252's narrower scope, the designation appears procedurally deficient. The statute requires the Secretary of Defense to consult with procurement and other relevant officials, and then make a written determination containing three mandatory findings: that exclusion is necessary to protect national security, that less intrusive measures are not reasonably available, and that any limitation on disclosure is justified. Congressional notification is also required. A single day from ultimatum to designation leaves little room for any of that. Lawfare notes these defects are probably curable on remand — they do not go to the core legality of the designation — but they reinforce the picture of an action taken without the deliberation the statute contemplates. 2

Perhaps most damaging to the government's litigation posture is a structural feature of § 3252 itself. The statute contains a judicial review bar — a provision that would ordinarily shield the designation from challenge in federal court. But that bar is conditional: it only triggers when the government limits disclosure of its determination for national security reasons. The logic is that courts should defer to the executive when classified information is at stake. Hegseth did the opposite. He publicly broadcast his rationale in vivid terms — "arrogance and betrayal," "duplicity," "corporate virtue-signaling," "defective altruism." By voluntarily publicizing his reasoning rather than restricting it, Hegseth appears to have forfeited the very mechanism that would have made the designation court-proof. As Lawfare concludes, the designation "won't survive first contact with the legal system." 2


OpenAI's Compromise: Pragmatism or Capitulation?

The OpenAI deal deserves scrutiny, because the administration and Altman have both framed it as proof that reasonable accommodation was always possible. The framing is convenient. The substance is murkier.

Altman acknowledged the negotiations were "definitely rushed" — begun only after the Pentagon publicly reprimanded Anthropic. OpenAI published a limited excerpt of its contract, which references existing laws and directives: the Fourth Amendment, the Foreign Intelligence Surveillance Act, a 2023 Pentagon directive on autonomous weapons systems design and testing (which does not prohibit autonomous weapons but establishes guidelines for them), and the Posse Comitatus Act. The published language states that the Pentagon "may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols." Altman described OpenAI's approach as "cit[ing] applicable laws, which we felt comfortable with," rather than seeking explicit contractual prohibitions. 3

Jessica Tillipman, associate dean for government procurement law studies at George Washington University, analyzed the published language and found that it "does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use." 4 The operative standard remains "all lawful purposes."

This is precisely what Anthropic refused. The company sought explicit carve-outs from "lawful use" — contractual prohibitions on mass domestic surveillance and fully autonomous weapons that would hold regardless of how the government chose to define legality at any given moment. To be precise about what those terms mean: "mass domestic surveillance" refers to bulk collection and monitoring of American citizens' private information without individualized suspicion; "fully autonomous weapons" refers to systems that select and engage targets without meaningful human oversight of the lethal decision. The distinction matters, because the surveillance practices exposed by Edward Snowden were, at the time, deemed legal by the agencies conducting them — and were ruled unlawful only after drawn-out legal battles. MIT Technology Review put the point succinctly: "an assumption that federal agencies won't break the law is little assurance to anyone who remembers" that history. Legality is not a fixed point. 3

OpenAI's implicit position — as MIT Technology Review characterized it, and as OpenAI has not publicly disputed — is that it trusts the government to follow applicable law, and that citing those laws in the contract is sufficient protection. OpenAI claims a second line of defense: that it maintains control over the safety rules embedded in its models and will not provide a version stripped of those controls. Boaz Barak, an OpenAI employee Altman deputized to speak on the issue, wrote that the company can "embed our red lines — no mass surveillance and no directing weapons systems without human involvement — directly into model behavior." This is meaningful in theory. How it will be enforced in a classified military setting, on a rushed timeline, with no public oversight mechanism, remains entirely unspecified. 3

Tillipman's analysis offers a useful counterweight to the framing that OpenAI simply capitulated. She notes that restating legal requirements in a contract may not change what the law requires, but it can change remedies — OpenAI could frame government noncompliance as a breach of its own agreement and terminate the contract. That is a real, if limited, form of leverage. It is also worth noting that Anthropic's approach would have placed a private contractor in the position of deciding which otherwise-lawful government uses were off-limits — a posture the administration found constitutionally unacceptable, and one that reasonable people can disagree about on the merits. 4


What This Actually Means for AI Companies

The Anthropic dispute has clarified something that was previously theoretical: the federal government is willing to use procurement law as a punitive instrument against domestic AI companies that maintain usage restrictions it dislikes. The designation of Anthropic as a supply chain risk — a designation designed for foreign adversaries — signals that the administration views ethical guardrails as, at minimum, a negotiating problem and, at maximum, a national security threat.

The chilling effect on the broader AI industry is real. Politico characterized Trump's threats as "attempted corporate murder." Dean Ball, senior fellow at the Foundation for American Innovation, called it "the most damaging policy move I have ever seen USG try to take," as quoted by Reason. 1

For AI companies with government contracts or aspirations of them, the message is legible: the path of least resistance runs through "any lawful use." The path of principled resistance runs through federal court.

Anthropic has vowed to challenge the designation. The legal arguments available to it are substantial — ultra vires claims under the theory that § 3252 does not authorize the designation of a domestic company over a contract dispute, due process challenges to deprivation of contracting rights without notice, First Amendment retaliation claims given the viewpoint-based nature of the punishment, and the government's own procedural failures. Whether those arguments prevail, and on what timeline, is a separate question from whether the pressure campaign succeeds in the interim.



Conclusion

The Anthropic-Pentagon standoff is not, at its core, about AI safety. It is about who gets to decide what constraints exist on the use of powerful technology when the government is the customer. Anthropic said: we do, in part. The administration said: you don't, at all. OpenAI found a middle lane that satisfies the contract requirement while leaving the harder questions unanswered.

According to MIT Technology Review, Claude was the only AI model the Pentagon actively used in classified operations at the time of the ban — which is precisely why Hegseth felt compelled to grant a six-month transition period rather than an immediate cutoff. The same outlet reports that Claude was still in use in some classified settings in the hours after the ban was issued, illustrating how deeply embedded it had become in Pentagon systems and why a clean break was never going to be simple. 3 The legal challenge is coming. And the AI industry is watching very carefully to see what the cost of principle turns out to be.

Footnotes

  1. Anthropic Labeled a Supply Chain Risk, Banned from Federal Government Contracts

  2. Pentagon's Anthropic Designation Won't Survive First Contact with the Legal System

  3. OpenAI's "compromise" with the Pentagon is what Anthropic feared

  4. What rights do AI companies have in government contracts?