
Where things stand with Anthropic and the Department of War
On March 5, 2026, three major developments converged in the Anthropic-Pentagon standoff: a formal supply-chain risk designation, reopened negotiations, and a legal challenge. Here is where things actually stand.
What Happened
Three developments converged on March 5, and the convergence is the story.
The US Department of Defense formally notified Anthropic that it has been designated a supply-chain risk — the first time that label has been applied to an American company. It is a classification historically reserved for foreign adversaries suspected of embedding backdoors in their technology. Defense contractors and vendors are now required to certify they do not use Anthropic's models. Defense Secretary Pete Hegseth — who has rebranded the Department of Defense the "Department of War" — ordered a six-month phase-out of Anthropic products across Pentagon systems.
On the same day, Anthropic CEO Dario Amodei reopened direct negotiations with Emil Michael, the Pentagon's under-secretary of defense for research and engineering. Amodei told reporters: "We have much more in common than we have differences. We are trying to deescalate the situation." Anthropic simultaneously announced it would challenge the designation in federal court, calling it "legally unsound."
The entire dispute traces back to one phrase in the proposed contract: "analysis of bulk acquired data." In a memo to staff reported by the Financial Times and confirmed by Bloomberg, Amodei wrote that the Pentagon's final offer carried a single condition: delete that phrase, which he described as "a line that exactly matched the scenario we were most worried about." Anthropic's interpretation: the phrase amounted to a license to use its AI for mass domestic surveillance. The Pentagon's position, in Emil Michael's words: "You can't have an AI company sell AI to the Department of War and not let it do Department of War things."
Why It Matters
This is the first time the US government has used a supply-chain risk designation as leverage in a policy disagreement with a domestic company — and legal experts say its legal foundation is shaky.

The authority invoked — Title 10, Section 3252 — requires evidence of actual technical vulnerabilities or backdoors. There are none alleged here. An anonymous defense official told Defense One there is "no evidence of supply-chain risk" and called the designation "ideologically driven." Jessica Tillipman of George Washington University Law School was direct: "Everyone looks at this and goes, 'This is so legally dubious.' The government's outlandish legal theories designed to inflict damage will chill tech industry investment." Attorney Anthony Kuhn of Tully Rinckey added that the statute "deals with potential sabotage or backdoors" — not a contractual disagreement over permitted use cases.
There is a further irony the Pentagon has not explained: despite designating Anthropic a supply-chain risk, the DoD continues to use Anthropic's AI in Iran-related operations. The designation bars contractors from using Anthropic's products — but the government itself has not stopped.
OpenAI moved within hours of the designation to fill the void, announcing its own Pentagon deal. It did not land cleanly. Amodei publicly called OpenAI's framing of its deal "straight up lies" and labeled it "safety theater" in a staff memo. Hundreds of OpenAI and Google employees signed a public letter urging the DoD to reverse the Anthropic designation. ChatGPT uninstalls rose 295% day-over-day, and Claude hit number one in the App Store. Sam Altman subsequently acknowledged his company "shouldn't have rushed" the deal.
The Center for American Progress has used the dispute to call on Congress to establish statutory limits on how the military can deploy commercial AI — a recognition that this confrontation is unfolding in a legal vacuum that the current administration is filling on its own terms.
Three outcomes remain open. A negotiated compromise is possible — Amodei and Michael are talking. A court ruling could vacate the designation or uphold the government's authority. Or the two sides could reach permanent separation, with the Pentagon relying on OpenAI and Google while Anthropic loses access to the federal defense market. Whichever way it resolves, the precedent will apply to every AI company that eventually faces the same pressure.