Surface

Breaking news and emerging developments

New Malware Families Signal LLMs Now Require System-Level Threat Monitoring
BRIEF · Surface · SHALLOWS

Google's threat intelligence team has documented operationalized malware families that weaponize LLMs during execution, confirming what security researchers have long warned: AI applications face a class of threats that traditional security tools cannot detect. A convergence of industry reports in early 2026 outlines what a credible monitoring posture now requires. With 40% of enterprise applications projected to integrate AI agents by the end of 2026, the attack surface is expanding faster than most security programs can track.

Fathom · Feb 26, 2026 · 3 min read
SIGNAL · Surface · SURFACE BREAK

Anthropic acquires Vercept to expand Claude's computer control capabilities

Anthropic has acquired Vercept, an AI company specializing in computer vision and interface interaction, to advance Claude's ability to autonomously control and operate software applications.

Fathom · Feb 26, 2026 · 1 min read
SIGNAL · Surface · SURFACE BREAK

Pentagon sets Friday deadline for Anthropic to drop AI ethics rules

Defense Secretary Pete Hegseth has given Anthropic until Friday to remove restrictions on military use of its AI, threatening to invoke the Defense Production Act. Anthropic CEO Dario Amodei is holding firm, refusing to allow autonomous weapons targeting or mass surveillance. The Pentagon is pursuing parallel deals with Google, xAI, and OpenAI.

Fathom · Feb 25, 2026 · 1 min read
SIGNAL · Surface · SURFACE BREAK

OpenAI releases GPT-5.3-Codex with 400K context and cybersecurity warning

OpenAI released GPT-5.3-Codex, its most capable agentic coding model, featuring a 400K context window, a 25% speed improvement, and the first "High capability" cybersecurity designation under its Preparedness Framework.

Fathom · Feb 25, 2026 · 1 min read
SIGNAL · Surface · SURFACE BREAK

Anthropic drops flagship AI safety pledge from policy

Anthropic has dropped the central commitment of its Responsible Scaling Policy, eliminating the pledge to halt AI training until safety measures are verified in advance. The new RSP v3 replaces that unconditional guarantee with a conditional framework.

Fathom · Feb 25, 2026 · 1 min read
The Trust Deficit: When AI Does Things No One Intended
DEEP DIVE · Surface · PROBENTHIC

Fathom · Feb 25, 2026 · 15 min read
The Week Enterprise AI Hit the Balance Sheet
DEEP DIVE · Surface · PROBENTHIC

In the week of February 18-24, 2026, three events converged to make the enterprise AI adoption gap legible in financial terms. IBM fell approximately 13% in a single trading session on the same day Anthropic announced AI tools for writing COBOL, the 66-year-old programming language that still runs $3 billion in daily commerce. OpenAI's COO publicly stated that enterprise AI has not yet penetrated enterprise business processes. And a Microsoft Copilot bug had been silently bypassing data loss prevention policies for weeks. Together, these events reveal an adoption gap that is real, consequential, and structurally deeper than any single product announcement can resolve.

Fathom · Feb 25, 2026 · 14 min read
SIGNAL · Surface · SURFACE BREAK

DeepSeek trained models on banned Nvidia Blackwell chips

DeepSeek has apparently circumvented US export controls to train advanced AI models on Nvidia's Blackwell processors, with Google, OpenAI, and Anthropic actively preparing for a major competitive challenge.

Fathom · Feb 25, 2026 · 1 min read
SIGNAL · Surface · SURFACE BREAK

Meta signs up to $100B AMD chip deal in major NVIDIA diversification push

Meta and AMD announced a multiyear agreement worth up to $100 billion for AMD AI chips, marking one of the largest AI hardware commitments in history and a decisive step toward breaking NVIDIA's dominance in AI compute.

Fathom · Feb 25, 2026 · 1 min read
The Model Thieves: Alleged Industrial-Scale Distillation of Claude
DEEP DIVE · Surface · PROBENTHIC

Anthropic accused three Chinese AI laboratories of conducting industrial-scale campaigns to extract Claude's capabilities through more than 16 million API exchanges routed through approximately 24,000 fraudulent accounts. The named companies (DeepSeek, Moonshot AI, and MiniMax) had not publicly responded at the time of publication. Whether or not the allegations fully hold up, the disclosure marks a threshold: the global AI competition has entered a phase in which the trained model itself is the prize.

Fathom · Feb 23, 2026 · 14 min read
The Great AI Land Grab: Who Controls the Infrastructure That Will Run the World
DEEP DIVE · Surface · PROBENTHIC

Fathom · Feb 25, 2026 · 14 min read
SIGNAL · Surface · SURFACE BREAK

LangChain releases langchain-core 1.2.15 with tool schema fixes

LangChain released langchain-core version 1.2.15, fixing error handling for non-JSON-serializable tool schemas and improving documentation for chat model event hooks.

Fathom · Feb 24, 2026 · 1 min read