The most important shift in AI is not that models are getting smarter, but that cloud vendors are racing to become the control layer through which software, data, security and automated work all pass.
The easiest way to misunderstand the current AI race is to think it is still mostly about model rankings. Model quality matters, of course, and frontier benchmarks still influence capital flows, hiring and public prestige. But the deeper commercial shift happening right now is that AI is moving from a model contest to a systems contest. The winning companies may not be the ones with the most dazzling demos. They may be the ones that become the indispensable operating environment for automated work.
That is why Google’s Cloud Next ’26 package matters more than a routine product keynote. Buried inside the announcement set was a coherent theory of where value is going: into the layer that connects models to data, identity, governance, tooling, inference hardware and enterprise budgets. Google said nearly 75% of Google Cloud customers are already using its AI products, that 330 customers processed more than a trillion tokens each over the last year, and that direct API traffic has risen to more than 16 billion tokens per minute from 10 billion last quarter. Those numbers do not merely imply rising AI usage. They imply that cloud platforms are becoming transaction engines for machine labor.
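As a back-of-envelope check on the scale those figures imply (a sketch using only the numbers quoted above; the annualization is my own extrapolation, not a Google statistic):

```python
# Scale check on the token-throughput figures cited in the article.
# 16e9 and 10e9 tokens/minute are the reported values; annualizing
# simply multiplies the current rate by minutes per year.

TOKENS_PER_MIN_NOW = 16e9      # reported current direct API traffic
TOKENS_PER_MIN_PRIOR = 10e9    # reported figure one quarter earlier
MINUTES_PER_YEAR = 60 * 24 * 365

qoq_growth = (TOKENS_PER_MIN_NOW - TOKENS_PER_MIN_PRIOR) / TOKENS_PER_MIN_PRIOR
annual_tokens = TOKENS_PER_MIN_NOW * MINUTES_PER_YEAR

print(f"quarter-over-quarter growth: {qoq_growth:.0%}")        # 60%
print(f"implied annual run rate: {annual_tokens:.2e} tokens")  # ~8.41e+15
```

In other words, a 60% quarter-over-quarter jump in metered machine output, running at a rate of roughly eight quadrillion tokens a year, which is why it reads as transaction volume rather than experimentation.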
The new Gemini Enterprise Agent Platform is best understood in that context. On paper, it is a developer platform for building, governing and optimizing autonomous agents. In practice, it is an attempt to own the execution environment for enterprise AI. If your agents are built there, audited there, connected to enterprise data there and deployed to employees there, the cloud vendor stops being a neutral host and becomes the operating system of institutional decision-making.
This matters because “agentic AI” changes what enterprises are buying. The first wave of generative AI spending was exploratory. Companies paid for access to models, copilots and sandbox tools. The second wave is about persistent systems that retrieve context, call other applications, trigger workflows and maintain state. Those systems demand much more than raw intelligence. They require permissions, observability, error handling, policy controls, connectors, logging and guardrails. Whoever supplies that stack captures much more durable revenue than a model provider that sits one layer away from the workflow.
The infrastructure side of Google’s message reinforces the point. Its new eighth-generation TPUs split training and inference into specialized tracks, which reflects the economics of the new era. Training still confers prestige, but inference is where agents live. If enterprises deploy thousands of task-running systems that are active all day, the long-run prize may be less about occasional frontier training runs and more about the recurring economics of production inference. The cloud vendor that controls the chip roadmap, the serving layer and the software environment can defend margin in ways that a pure-model vendor cannot.
At the same time, AI demand is becoming visibly physical. Taiwan’s latest export-order data, circulated by OCAC/CNA, showed record March orders of US$91.12 billion, up 65.9% year over year, with especially sharp gains in information and communications technology products and electronics. In its report on Taiwan’s export orders, Reuters described that as the fastest growth in more than sixteen years. This is crucial context. If AI demand is now lifting semiconductor and server orders at that scale, then the cloud layer is no longer an abstract software venue. It is the command center for industrial demand.
That is also why enterprise AI will look more concentrated than many enthusiasts expect. Businesses do not want twenty disjointed agents from twenty different vendors, each with separate compliance models and data exposures. They want a controlled environment where procurement, security and technical teams can standardize. In previous software cycles, those pressures favored operating systems, databases, hyperscalers and ERP vendors. There is little reason to think the agent era will be different. AI may sound radically new, but buying behavior often ends up looking familiar.
The implications for competition are stark. Open models can flourish, and specialist tools can still build strong businesses, but the structural power is moving toward firms that can bundle model access with identity, storage, security, networking and hardware procurement. This helps explain why the most aggressive AI companies are also trying to become infrastructure companies, and why infrastructure companies are racing to look like AI companies. The categories are collapsing.
It also reframes geopolitics. Once cloud platforms become the operating systems of firms, they begin to matter like utilities and strategic infrastructure. Export controls, data-localization rules, energy policy and industrial subsidies all become part of the AI market. This is not an accidental overlap. The more work software agents perform, the more cloud providers become economic governors rather than software vendors.
The big mistake would be to keep talking about AI as if it were mostly a content or productivity feature. The real contest is over control of institutional execution. That is what Google, and its rivals, are trying to win. In the next phase of AI, the decisive question is not whose model can answer best. It is whose cloud becomes the place where businesses allow machines to act.