
OpenAI's next model is nearly ready, and Altman says it can move the economy

OpenAI has completed pre-training on a new model codenamed "Spud," which CEO Sam Altman described to employees as "very strong" and capable of meaningfully accelerating the economy. The release is expected within weeks. The announcement comes with a consequential governance shift: Altman is stepping back from direct oversight of safety and security teams to focus on fundraising and infrastructure.

Verified · Confidence: 80%

What Happened

OpenAI completed pre-training on Spud in late March 2026, less than three weeks after GPT-5.4 launched on March 5. Altman told employees the model could "really accelerate the economy" — language aimed at re-energizing a company under competitive pressure from Anthropic and Google, and at signaling market differentiation ahead of a planned IPO in late 2026. Whether Spud ships as GPT-5.5 or GPT-6 has not been decided. The branding question is not trivial: one label positions it as an incremental update, the other as a generational leap.

The internal framing has already shifted. OpenAI renamed Fidji Simo's product organization "AGI Deployment," treating the next model as an operational step toward artificial general intelligence rather than a product milestone. Altman handed safety oversight to Chief Revenue Officer Mark Chen and security oversight to President Greg Brockman. OpenAI shut down its Sora video tool to redirect compute toward Spud.

Why It Matters

A CEO stepping back from safety oversight, a product team renamed for AGI, and a consumer product shuttered to free up compute: the pattern points in one direction. OpenAI is treating Spud as its most consequential release yet and organizing itself accordingly. The "AGI Deployment" rename is not rhetorical hedging; it is the company's stated frame for the division shipping this model in the coming weeks.

New research sharpens the stakes. A preprint on arXiv (JBFuzz, arXiv:2503.08990; not peer-reviewed) reported that automated jailbreak attacks compromised nine major language models with a 99% success rate, in roughly seven queries and about 60 seconds per model. Safety alignment remains fragile across the industry, and OpenAI is releasing its most capable model yet while its CEO is no longer the one watching the guardrails.

Key Takeaways

  • Spud's release timeline and final branding remain unconfirmed, but the organizational restructuring around it is already complete.
  • Altman stepping back from safety oversight at precisely this moment is the detail that matters most.