Anthropic vs. The Pentagon: Three Words at Stake

The Fight Over "Any Lawful Use"

There is a sentence buried inside a Pentagon memo from January that may determine the future of one of the most valuable artificial intelligence companies on earth. It is not a long sentence. It is not a complicated one. It demands that all AI procurement contracts include language permitting "any lawful use." Three words. Anthropic has refused to accept them, and the consequences of that refusal are now unfolding in real time.

Today, February 25, 2026, Anthropic CEO Dario Amodei walked out of a meeting at the Pentagon with Defense Secretary Pete Hegseth — a meeting that an unnamed Defense official described, with characteristic Washington bluntness, as a "shit-or-get-off-the-pot meeting." 1 Hegseth has given Anthropic until Friday at 5:00 PM (22:00 GMT) to accept unrestricted use terms or face contract termination and a designation that could effectively exile the company from the entire defense industrial ecosystem. Amodei reportedly left the meeting without budging.

What is happening here is not simply a contract dispute. It is a collision between two fundamentally incompatible visions of what artificial intelligence should be allowed to do — and who gets to decide.


A $380 Billion Company With Two Red Lines

Anthropic is, by most measures, an extraordinary commercial success. Earlier this month, the company closed a $30 billion funding round at a $380 billion valuation. 2 Its AI model, Claude, was the first to be approved for use on classified military networks. Major defense contractors — AWS, Palantir, Anduril — have built Claude into their Pentagon work. The company occupies a position of remarkable strategic importance.

And yet Anthropic has drawn two lines it will not cross.

The first is fully autonomous kinetic operations — lethal weapons systems that identify and engage targets without a human in the decision loop. The second is mass domestic surveillance of American citizens. A source familiar with the negotiations told The Verge that on the surveillance issue, the concern is that "the laws haven't caught up to what AI can do," and that such use may infringe on civil liberties. On autonomous weapons, the assessment is starker: the technology "isn't there yet for fully autonomous weapons with no humans in loop." 1

These are not arbitrary ethical preferences invented by a company wishing to seem virtuous. Hamza Chaudhry of the Future of Life Institute pointed out to The Verge that Anthropic's red lines directly mirror existing, unrepealed government policy. DoD Directive 3000.09 requires that autonomous weapon systems be designed so that commanders can "exercise appropriate levels of human judgment over the use of force." DoD Directive 5240.01 prohibits intelligence components from collecting information on U.S. persons except under specific legal authorities such as FISA. 1

In other words, Anthropic is refusing to agree to terms that the Department of Defense has itself, in principle, already committed to.


The Man Driving the Threats

Negotiating on behalf of the Pentagon is Emil Michael, the Undersecretary of Defense for Research and Engineering — a position functionally equivalent to the Pentagon's chief technology officer. Michael is a Trump appointee and a former top executive at Uber, where he built a reputation for aggressive tactics and once bragged about conducting opposition research on journalists. He was pushed out of Uber in 2017 following a board investigation into the company's culture of sexual harassment. 1

Michael has made his position clear and personal. A source familiar with the matter told The Verge that Michael is genuinely aggrieved that a private company would attempt to constrain the government's use of its own purchased technology. "This is truly a matter of principle for Emil," the source said. At a summit in Florida earlier this month, Michael stated flatly: "If any one company doesn't want to accommodate that, that's a problem for us." 2

The threats Michael has deployed are extraordinary by any historical standard. The Pentagon is threatening to designate Anthropic a "supply chain risk" — a classification typically reserved for foreign adversaries, hostile state actors, and malicious cyber threats. Geoffrey Gertz, a senior fellow at the Center for a New American Security, noted that under current federal regulations, the Pentagon could have made this designation privately, without public disclosure or stated justification. Instead, it chose to issue the threat openly, in press statements and public forums. 1

The public nature of the threat is itself the threat. If the designation is made official, every defense contractor that uses Anthropic's technology — and there are many — would be required to certify its removal from their systems. Anthropic's existing $200 million Pentagon contract would be the least of its losses.


The Ghost of Caracas

The political pressure on Anthropic is not occurring in a vacuum. On February 14, it was revealed that Claude had been used in the January 3, 2026 U.S. special forces operation that abducted Venezuelan President Nicolás Maduro in Caracas. The operation resulted in 83 deaths, including 47 Venezuelan soldiers. 3

An unnamed Anthropic official told The Wall Street Journal that any use of Claude — by private sector or government actors — must comply with the company's usage policies. Those policies explicitly prohibit use for surveillance, weapons development, or inciting violence.

How Claude was used in the Caracas operation remains unclear. AI tools can be used to control drones, analyze imagery, and summarize intercepted communications. What is clear is that the operation occurred in apparent violation of Anthropic's stated terms — and that this fact became public knowledge less than two weeks before Amodei sat down across from Hegseth.

The timing is not incidental. It is the context in which Anthropic's insistence on usage restrictions must now be understood: not as theoretical ethics, but as a policy that was, by the company's own account, already being circumvented.


The Competitive Pressure

The Trump administration has not been subtle about its preferred outcome. David Sacks, the venture capitalist serving as the administration's AI and crypto czar, has publicly accused Anthropic of promoting "woke AI" because of its regulatory stance. 2 Hegseth's January memo declared that the Department of Defense would become an "AI-first warfighting force" operating "without ideological constraints." 1

Meanwhile, the Pentagon has made clear that Anthropic's competitors have already fallen into line. OpenAI and xAI have reportedly agreed to unrestricted use terms. The evening before Amodei's meeting with Hegseth, the Pentagon announced it had signed an agreement to deploy Grok — Elon Musk's AI model — on classified systems. The message was unmistakable. 1

The competitive framing is designed to make Anthropic's position look like stubbornness, or worse, disloyalty. Whether it succeeds may depend on how much Anthropic's investors, customers, and the broader public value the distinction the company is trying to maintain.


A Question Congress Has Not Answered

Beneath the negotiating tactics and political theater lies a genuine constitutional question that nobody in power seems particularly eager to resolve. Legal analysts have argued — persuasively — that neither the Pentagon nor a private AI company should be the entity setting the rules for military use of artificial intelligence. 4 That is a job for Congress, which has not done it.

The law has not kept pace with what these systems can do. The Defense Production Act, which the Pentagon has threatened to invoke, gives the President broad authority to direct private companies to prioritize national security needs — but it was not written with large language models in mind. The supply chain risk designation framework was designed for foreign hardware threats, not domestic software ethics disputes.

What Anthropic is defending, at considerable financial risk, is the principle that some uses of AI should require deliberate human authorization — and that a contract clause should not be sufficient to erase that principle. Whether the company holds that line past Friday remains to be seen.

What is certain is that the outcome will matter well beyond this particular negotiation. The terms that govern how AI is used in warfare and domestic surveillance will be shaped, in part, by what happens between now and 5:00 PM on Friday, February 27, 2026.

That is a remarkable amount of weight for three words to carry.

Footnotes

  1. Inside Anthropic's existential negotiations with the Pentagon

  2. Anthropic is clashing with the Pentagon over AI use. Here's what each side wants

  3. Anthropic vs the Pentagon: Why AI firm is taking on Trump administration

  4. Congress—Not the Pentagon or Anthropic—Should Set Military AI Rules