Compass
Opinion, analysis, and editorial perspective

The Pentagon just labeled an American AI company a supply-chain risk
The U.S. Department of Defense has formally labeled Anthropic a supply-chain risk — the first American company to receive a designation previously reserved for foreign adversaries like Huawei. The designation came after the AI lab refused to remove safety guardrails preventing Claude from being used for mass surveillance or autonomous weapons. The move is part of a broader U.S. government push to assert control over AI, alongside draft chip export rules that would require permits for Nvidia and AMD sales to any country in the world.

AI Safety's Two Biggest Names Are Both at the Pentagon's Door

No one has a good plan for how AI companies should work with the government

The Reliability Problem: Why Getting AI to Actually Work Is Harder Than Anyone Admitted
Engineers building AI-powered tools are discovering that how they structure information for a model matters far more than which model they choose. At the same time, mathematicians have shown that the tools we use to predict whether an AI will generalize reliably have provable limits. These two independent findings converge on a structural reliability problem the industry has been slow to acknowledge.

The Safety Contradiction: How OpenAI Went From Signing Extinction Warnings to Calling AI Safety Researchers Doomers in Court

The Governance Gap: When AI Agents Write the Code and File the Bugs