
Defense contractors quietly remove Anthropic's AI to protect Pentagon contracts
Major defense firms are purging Claude from their operations following the Trump administration's supply chain ban — not because they are legally required to, but because the alternative is risking their share of a trillion-dollar federal budget. The compliance cascade reveals how political pressure can outpace the law. Anthropic says it plans to fight the designation in court.
What Happened
Days after the Trump administration declared Anthropic a national security "supply chain risk," the consequences are rolling downhill. Lockheed Martin — one of the largest defense contractors in the world — confirmed it will remove Anthropic's Claude AI tools from its operations, issuing a statement that managed to say everything and nothing at once: "We will follow the president's and the Department of War's direction. We expect minimal impacts."
The underlying dispute traces to February 27, 2026, when Defense Secretary Pete Hegseth gave Anthropic a deadline: drop the two conditions protecting Claude's use in military settings — no fully autonomous weapons, no mass domestic surveillance of U.S. citizens — or lose its Pentagon contract. Anthropic, which had signed that contract in summer 2025 for up to $200 million and whose Claude was the first AI model brought into classified military networks, refused. Trump signed the ban the same evening.
Hegseth's directive did not stop at federal agencies. It extended the prohibition to private defense contractors — a move legal experts describe as legally aggressive at best, and likely to be overturned in court. Contractors have been given a six-month transition window to wind down their Claude usage.
Why It Matters
Here is what makes this story different from the Treasury ban we reported last week: the federal government can restrict its own procurement. That is settled law. What Hegseth did — directing private companies like Lockheed to stop using a commercial vendor on their own systems — is something else entirely.

Contract law specialists at Mayer Brown flagged the directive as a "highly aggressive" exercise of authority for which the Pentagon likely lacks statutory grounding. The Department of Defense does not generally have the power to dictate which commercial software vendors a private company uses internally. Anthropic plans to contest the supply-chain designation in court, and the administration's decision to grant a six-month wind-down window suggests even the White House anticipates a legal challenge.
And yet: Lockheed is complying anyway.
That gap — between legal obligation and actual behavior — is the real story. Defense contractors depend on Pentagon contracts for their existence. No general counsel is going to recommend testing a cabinet secretary's patience over an AI vendor relationship, even when that secretary's authority to issue the directive is legally uncertain. Compliance is cheaper than litigation, and infinitely cheaper than a jeopardized contract renewal.
The contrast with OpenAI's response makes the picture sharper. Within hours of the Anthropic ban, OpenAI signed its own Pentagon deal. Even Sam Altman later acknowledged the move looked bad. "We were genuinely trying to de-escalate things and avoid a much worse outcome," Altman told staff, "but I think it just looked opportunistic and sloppy." OpenAI subsequently revised the contract to include a domestic surveillance prohibition — a quiet acknowledgment that the limits Anthropic refused to abandon had merit all along.
Anthropic holds its ethical lines. The contractors calculate the cost of doing the same, and quietly fall in line.
Sources
- Reuters
- CNBC
- Mayer Brown
- DevDiscourse