The AI market has officially flipped from the glamour of training to the industrial reality of inference. In a staggering escalation for B2B software, Anthropic has reportedly hit a $30 billion annualized revenue run-rate as of early April 2026 — a jump of nearly $11 billion in a single month, vaulting the startup past OpenAI’s estimated $25 billion pace.
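For readers unfamiliar with the metric, an annualized run-rate simply extrapolates one month's revenue to a full year. The sketch below shows the arithmetic; the monthly figures are back-calculated assumptions implied by the reported run-rates, not disclosed data.

```python
# Illustrative arithmetic only: a run-rate annualizes one month's revenue.
def run_rate(monthly_revenue_b: float) -> float:
    """Annualized run-rate in $ billions, from one month's revenue."""
    return monthly_revenue_b * 12

# Implied monthly revenue (assumptions, not company disclosures):
current_monthly = 30 / 12         # ~$2.5B/month implies the $30B run-rate
prior_monthly = (30 - 11) / 12    # ~$1.6B/month implies the prior ~$19B pace

print(run_rate(current_monthly))  # 30.0
print(run_rate(prior_monthly))    # 19.0
```

The point of the metric is sensitivity: because it multiplies a single month by 12, one strong month moves the headline number dramatically, which is how an $11 billion run-rate jump can happen in thirty days.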
The Shift from Training to Usage
For years, the “AI race” was measured by GPU clusters and the capital intensity of training massive models. We are now entering the Age of Inference, where economic value is captured not by who builds the smartest model, but by who serves intelligence at planetary scale. Anthropic’s explosive growth is driven by enterprise demand for autonomous workflows and the success of its Claude Code agent, which has doubled its count of customers spending over $1 million per year to more than 1,000 in recent months.
Market Implications
This revenue explosion confirms that AI is no longer a speculative “hype” cycle but a massive recurring revenue engine. The beneficiaries are broadening beyond the model labs to the inference infrastructure layer. While mega-cap tech leadership has thinned, semiconductor and hardware players like AMD, Marvell, and Dell are hitting new 52-week highs, reflecting the urgent need for the chips and servers that power high-volume, real-time intelligence.
The “Inference Moat” is now the primary strategic objective. Companies that can deliver reliable, low-latency intelligence at the lowest cost per token will dominate the next decade of the cognitive economy.
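To make "cost per token" concrete, here is a hedged back-of-envelope sketch deriving serving cost per million tokens from hourly hardware cost and sustained throughput. Both inputs are hypothetical round numbers chosen for illustration, not vendor or lab figures.

```python
# Back-of-envelope serving economics. All inputs are hypothetical.
def cost_per_million_tokens(hourly_cost_usd: float, tokens_per_sec: float) -> float:
    """USD cost to serve one million tokens at a sustained throughput."""
    tokens_per_hour = tokens_per_sec * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# e.g. a $4/hour accelerator sustaining 2,000 tokens/sec
print(round(cost_per_million_tokens(4.0, 2000), 4))  # 0.5556
```

The structure of the formula shows why the moat is defensible: cost per token falls linearly with throughput, so batching, quantization, and custom silicon compound directly into margin for whoever serves at the largest scale.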