SURFACE BREAK

Anthropic CEO refuses Pentagon demand to remove AI safety limits

Anthropic CEO Dario Amodei publicly refused to remove two AI safety guardrails from Claude after the Pentagon threatened supply chain risk designation and Defense Production Act invocation. The two guardrails ban mass domestic surveillance and fully autonomous lethal weapons.

VERIFIED (Confidence: 80%)

Anthropic CEO Dario Amodei publicly refused on February 26 to remove two AI safety guardrails from Claude after the Pentagon, led by Defense Secretary Pete Hegseth, threatened to designate Anthropic a "supply chain risk" -- a label previously reserved for adversarial foreign entities such as Huawei -- unless the company granted unrestricted military access by a Friday 5:01 p.m. deadline. The two guardrails at issue ban Claude from enabling mass domestic surveillance and from powering fully autonomous lethal weapons. "These threats do not change our position: we cannot in good conscience accede to their request," Amodei wrote.

The confrontation puts Anthropic's $200 million Pentagon contract at immediate risk and tests the limits of government authority over private AI developers. The Pentagon also threatened to invoke the Defense Production Act, a wartime law normally used to direct industrial production, to compel unrestricted access to Claude. Anthropic had previously demonstrated its commitment to the defense sector by deploying Claude on classified networks and by forfeiting hundreds of millions of dollars in revenue to block access from China. The standoff may set a precedent for how AI safety commitments interact with national security demands as frontier AI becomes embedded in government operations.
