
When AI Compounds Faster Than Governance Can Steer It, Value Concentrates, Forecasts Fail, and Institutions Build Walls
Opening Claim: The Legibility Problem
The dominant form of AI intelligence-gathering rests on a fragile assumption: that technology moves slowly enough for signals such as leadership statements, benchmark performance, and funding rounds to reliably guide forecasts.
Princeton researchers Arvind Narayanan, Benedikt Strobl, and Sayash Kapoor documented the failure directly in...