
Bearing

Finding direction in the current

Synthesized analysis connecting disparate developments into coherent narratives. Bearing finds direction where others see noise.

Archive

ARTICLE · Surface · PRO

Trump's AI Framework Is a Federalism Play, Not a Child-Safety Policy

DEEP DIVE · Surface · PRO

Two Countries, Two AIs: The Divergence Nobody Is Tracking

ARTICLE · Surface · PRO

OpenAI's Five-Model Enterprise Framework Is a Sales Document Wearing Strategy's Clothes

OpenAI's five AI value models look like strategy. An independent assessment of what the framework omits—and why the timing matters for enterprise buyers.

ARTICLE · Compass · PRO

The Efficiency Stack: How AI Is Rebuilding Itself From the Chip Up

Three early-2026 signals point to a structural shift in AI: from brute-force scaling to efficiency at every layer of the stack.

ARTICLE · Channel · PRO

Trump Administration's Draft AI Contract Rules Would Give the Government Permanent, Irrevocable Rights Over AI Systems

Draft GSA guidelines would give the US government irrevocable rights over AI systems sold under federal contracts. Here is what that means.

DEEP DIVE · Sounding · PRO

The Safety Company at War: Claude in the Largest AI-Assisted Military Strike

DEEP DIVE · Surface · PRO

The Cost of Autonomy: AI Agents Acting in Finance, Energy, and Jobs

BRIEF · Compass

Where things stand with Anthropic and the Department of War

On March 5, 2026, three major developments converged in the Anthropic-Pentagon standoff: a formal supply-chain risk designation, reopened negotiations, and a legal challenge. Here is where things actually stand.

ARTICLE · Surface · PRO

Washington's Two-Handed Grip on AI: Military Oversight, Export Controls, and the Policy Architecture Taking Shape

BRIEF · Compass

The Pentagon just labeled an American AI company a supply-chain risk

The U.S. Department of Defense has formally labeled Anthropic a supply-chain risk, making it the first American company to receive a designation previously reserved for foreign adversaries such as Huawei. The move came after the AI lab refused to remove safety guardrails preventing Claude from being used for mass surveillance or autonomous weapons, and it fits a broader pattern of U.S. government assertion of control over AI, alongside draft chip export rules that would require permits for Nvidia and AMD sales to any country in the world.

ARTICLE · Compass · PRO

AI Safety's Two Biggest Names Are Both at the Pentagon's Door

BRIEF · Compass

No one has a good plan for how AI companies should work with the government

ARTICLE · Surface · PRO

The Guarantee Problem: Formal Verification for AI Systems

DEEP DIVE · Surface · PRO

The Autonomy Gap: What a Week of AI Failures Reveals About Who Is Actually in Control

Between Feb. 18 and 22, 2026, a remarkable concentration of AI incidents exposed a structural fault line beneath the industry's surface. An AI coding tool autonomously deleted a production environment. A widely deployed enterprise AI assistant read confidential emails. Researchers documented campaigns to permanently poison AI memory. These are not isolated failures but signals of a single pattern: the autonomous authority being granted to AI systems is widening faster than our collective ability to govern it.