
The Pentagon just labeled an American AI company a supply-chain risk

The U.S. Department of Defense has formally labeled Anthropic a supply-chain risk — the first American company to receive a designation previously reserved for foreign adversaries like Huawei — after the AI lab refused to remove safety guardrails preventing Claude from being used for mass surveillance or autonomous weapons. The move is part of a broader pattern of U.S. government assertion of control over AI, alongside draft chip export rules that would require permits for Nvidia and AMD sales to any country in the world.

Verified · Confidence: 80%

What Happened

On March 5, 2026, the Pentagon formally notified Anthropic that the company and its AI products have been designated a supply-chain risk to the U.S. government — making Anthropic the first American company ever to receive a label previously applied only to foreign entities, most notably Chinese telecom giant Huawei.

The dispute has been building for months. Anthropic, which had been the only AI company with models deployed on the Pentagon's classified networks, refused to remove two explicit prohibitions from its contracts: Claude cannot be used for mass surveillance of Americans, and Claude cannot be used in fully autonomous weapons systems. The Pentagon demanded Claude be available for "all lawful purposes" without written restrictions.

CEO Dario Amodei stated plainly: "We have these two red lines. We've had them from day one. We are still advocating for those red lines." Pentagon CTO Emil Michael offered the military's counter: "At some level, you have to trust your military to do the right thing." A senior Pentagon official went further, stating that "the military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability."

The immediate consequences are significant. Federal agencies must stop using Claude immediately, and the Defense Department has six months to phase out the technology entirely. Defense contractors must certify that they are not using Anthropic's models in Pentagon work. Meanwhile, OpenAI reportedly agreed to the Pentagon's terms without similar guardrails and secured a military contract in Anthropic's place, according to Fortune.

Why It Matters

This is not just a dispute between one company and one government client. It is the opening round of a fight over a fundamental question the AI industry has so far managed to avoid answering in public: when a company builds an AI system with ethical limits, who gets to decide where those limits sit?

[Illustration: semiconductor wafer in a clean-room laboratory, representing U.S. chip export control policy]

Until now, supply-chain-risk designations were a geopolitical tool — a way to keep adversary technology out of sensitive government systems. Applying the same label to an American company because it refused to waive safety guardrails is, as law firm Mayer Brown noted, genuinely unprecedented. Anthropic has called the designation "legally unsound" and warned it sets a dangerous precedent for any American company that negotiates with the government. A major Big Tech industry coalition echoed that concern directly to Defense Secretary Hegseth, signaling that alarm about the ruling extends well beyond Anthropic alone. Experts at Northeastern University warn the designation could chill innovation across the broader AI industry, as companies weigh whether holding ethical lines is commercially viable when government contracts are at stake.

The Anthropic story does not stand alone in today's news. Bloomberg is also reporting that U.S. officials have drafted rules requiring government approval for AI chip shipments to any country in the world — not just adversaries. Under the proposed regulations, Nvidia and AMD would need a permit for global sales of advanced AI chips, giving Washington broad authority over which nations can build AI infrastructure. Taken together, these two developments reveal the same underlying direction: the U.S. government is asserting control over the AI supply chain at every level, from the hardware that runs AI to the policies governing how it is used.

There is a third story today that offers a useful contrast. Netflix acquired InterPositive, an AI filmmaking startup co-founded by actor and director Ben Affleck. The company builds AI tools that analyze a production's own footage to assist with post-production tasks — color grading, relighting, visual effects — without generating synthetic performances. It is a model built around the filmmaker's authority, not against it. Outside the military-industrial context, AI development is also moving toward frameworks that center human creative control. That tension — between AI as a tool subject to human judgment and AI as a capability that governments and militaries want unconstrained — is the defining argument of this moment.

If the current pattern holds, safety-focused AI labs face a structural disadvantage in government markets. Whether that disadvantage reshapes which companies lead in sensitive applications, or whether Anthropic's threatened legal challenge rebalances the terms, is the question to watch.
