[Illustration: Editorial pencil sketch of an empty government negotiation room with leather chairs around a conference table, scattered papers and laptops, the U.S. Capitol dome visible through windows]

No one has a good plan for how AI companies should work with the government

Verified · Confidence: 80%

What Happened

On February 28, 2026, Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk to national security" — a label typically reserved for foreign adversaries — and barred all military contractors from doing business with the company. The trigger: Anthropic had refused to lift its prohibitions on using its Claude AI for mass domestic surveillance or fully autonomous weapons deployment. CBS News reported that Hegseth's directive stated "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." President Trump separately ordered all federal agencies to stop using Anthropic's technology.

Hours later, OpenAI announced its own Pentagon deal. CEO Sam Altman later acknowledged the timing looked bad. "We were genuinely trying to de-escalate things and avoid a much worse outcome," he told Fortune, "but I think it just looked opportunistic and sloppy." Anthropic CEO Dario Amodei called the government's response "retaliatory and punitive," and the company announced it would challenge the supply chain risk designation in court. Anthropic had held a $200 million Pentagon contract awarded in July 2025; that relationship is now severed.

Why It Matters

The Anthropic-Pentagon standoff reads like a contract dispute. But the story underneath it is structural: neither the government nor the AI industry has worked out the basic rules of engagement.

[Illustration: Editorial pencil sketch of the Pentagon building with competing forces represented by opposing arrows at its entrance, symbolizing the AI governance standoff]

No federal law or procurement framework currently specifies what safety conditions a private AI company can demand from the government. The Pentagon awarded both Anthropic and OpenAI contracts worth up to $200 million each through "Other Transactions" pathways — a procurement vehicle that offers flexibility but limited public accountability. As Jessica Tillipman, associate dean for government procurement law at George Washington University, explained to Nextgov, the core question is not whether companies can impose restrictions at all. "Contractors restrict the government's use of their products all the time," she said. "The real governance challenge is whether government procurement adequately protects public interests."

The OpenAI deal illustrates exactly why that challenge matters. Tillipman reviewed the published contract language and concluded it does not give OpenAI a "freestanding right to prohibit otherwise-lawful government use" — it references existing laws rather than creating explicit prohibitions. What counts as "lawful" can shift with administration priorities, and companies that rely on that formulation have limited recourse when it does. OpenAI subsequently announced it would amend the contract to add explicit language prohibiting domestic surveillance of U.S. persons.

Two other developments this week reinforce the same underlying pattern. Reports emerged that the Pentagon used Anthropic's Claude in planning airstrikes against Iran — reportedly hours after Trump banned the company from government work — illustrating the gap between policy announcements and operational practice. Separately, the Supreme Court declined to hear a case on whether AI-generated content can receive copyright protection, leaving in place the rule requiring human authorship but sidestepping the deeper question of how AI-generated work will be owned at scale. Across defense, intellectual property, and procurement, the rules governing AI's role in public institutions are being set through confrontation and inaction rather than deliberate policy.

If this trajectory holds, AI companies that depend on government contracts face a structural incentive to soften their ethical constraints as contract sizes and political pressures grow. The Anthropic-OpenAI dynamic this week may be a preview of that pressure playing out repeatedly — with higher stakes each time.
