Nicholas Mitsakos

AI Noise and a Far-off Signal

The productivity flywheel hasn’t started spinning. Here’s what’s happening and what it means.

Is AI Productive?

Anthropic’s Claude Cowork, autonomous shopping agents from Amazon and Google, coding agents that write, test, and deploy… the demos are impressive and the announcements relentless. But how much economic value is any of this generating?

Almost all AI-related spending is capital expenditure. Companies are buying chips, building data centers, and scaling up cloud capacity. That is spending on AI infrastructure, not productivity from AI deployment.

The productivity flywheel that equity markets are pricing in hasn’t started spinning.

We are in the infrastructure build-out phase, not the value-capture phase. For context, Chinese labs are confronting a compressed version of the same problem — with less capital and tighter compute constraints. But the core structural tension is identical on both sides of the Pacific: massive spending, minimal returns, and a market that has decided to price the dream rather than the earnings.

Price-to-Dream

What exactly are markets pricing? Anthropic closed a $30 billion funding round in February 2026 at a $380 billion valuation. As of April 2026, the company reports approximately $30 billion in annualized revenue — a 3x increase from $9 billion at the end of 2025. Claude Code alone has reached $2.5 billion in annualized revenue. At roughly 13x revenue, Anthropic’s valuation is expensive but at least anchored to real, rapidly compounding income. The company is reportedly targeting a $60 billion-plus IPO in October at a $400–500 billion valuation.

Anthropic now exceeds OpenAI’s approximately $25 billion in ARR. The lab that prioritized safety architecture and alignment research, and was often dismissed as too cautious, has moved to the front of the revenue race.

While “caution” may be good publicity, fundamentally, Anthropic is taking the lead because it offers a better, more fully integrated product. Better economics and business principles will ultimately prevail.

The Chinese comparison sharpens the picture. MiniMax generated $79 million in revenue in 2025 while recording $512 million in losses in just the first nine months. It currently trades at a market cap of approximately $40 billion – roughly 500x trailing revenue. Zhipu did approximately $105 million in 2025 revenue with nearly $700 million in full-year losses, yet commands a $44 billion market cap. Shares are up nearly 600% since its January listing.

For reference, JD.com has a market cap of roughly $40 billion, a trailing P/E of approximately 16x, and over $150 billion in annual revenue. These Chinese AI labs are valued at the same level as a profitable e-commerce giant, yet generate only a fraction of its revenue and none of its earnings. There is no P/E ratio because there are no earnings.
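The valuation gap in the figures above can be made explicit with a quick back-of-envelope calculation. All inputs are the numbers cited in this piece; the multiples are simple division, nothing more:

```python
# Price-to-revenue multiples implied by the figures cited above.
# All values in billions of USD (market cap or valuation, then revenue).
companies = {
    "Anthropic": (380, 30),     # $380B valuation, ~$30B annualized revenue
    "MiniMax":   (40, 0.079),   # ~$40B market cap, $79M 2025 revenue
    "Zhipu":     (44, 0.105),   # ~$44B market cap, ~$105M 2025 revenue
    "JD.com":    (40, 150),     # ~$40B market cap, $150B+ annual revenue
}

for name, (cap, rev) in companies.items():
    multiple = cap / rev
    print(f"{name:10s} {multiple:8.1f}x revenue")
```

Anthropic lands near 13x, the Chinese labs in the 400–500x range, and JD.com below 1x, which is the whole argument in one loop.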

This is a price-to-dream market, and the dream has a burn rate that gives many of these companies 12–18 months of runway before they require fresh capital or a strategic lifeline.
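A back-of-envelope check of what that runway implies. The cash figures here are my inference from the loss numbers cited above, not reported balances:

```python
# Runway implied by Zhipu's reported ~$700M in full-year losses.
# Cash on hand is not reported in this piece; this only shows what a
# 12-18 month runway would require at the current burn rate.
annual_burn_musd = 700                    # ~$700M full-year losses
monthly_burn_musd = annual_burn_musd / 12  # ~$58M/month

def implied_cash(runway_months: float) -> float:
    """Cash (in $M) needed to fund the current burn for runway_months."""
    return monthly_burn_musd * runway_months

print(f"monthly burn: ~${monthly_burn_musd:.0f}M")
print(f"12-month runway implies ~${implied_cash(12):,.0f}M cash")
print(f"18-month runway implies ~${implied_cash(18):,.0f}M cash")
```

At this burn, surviving 12 to 18 months without new funding requires roughly $700 million to $1 billion of cash, which is why "fresh capital or a strategic lifeline" is the operative phrase.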

The Hyperscaler Advantage

The central dynamic in U.S. AI is the relationship between frontier labs and hyperscalers. Microsoft, Google, Amazon, and Meta are not passive infrastructure providers. They are the distribution layer, the compute substrate, and, increasingly, the monetization engine for AI capabilities. Understanding this relationship is essential to understanding where value accrues.

Microsoft’s integration of OpenAI models across Azure, Copilot, and the enterprise stack gives it a structural advantage in converting AI capability into recurring revenue.

Google’s vertical integration, from TPU silicon through Gemini to Google Workspace, positions it to capture margin at every layer of the stack.

Amazon’s strategy is platform-agnostic while ensuring AWS remains the default compute layer for enterprise AI workloads.

Meta’s open-weighted approach with Llama commoditizes the model layer, preventing rivals from building proprietary moats, while Meta captures value through advertising intelligence.

These are not equal strategies.

Microsoft and Google are executing integration plays that compound with scale. Amazon is executing a platform play that benefits from fragmentation. Meta is executing a market structure play that serves its core business model. Each is rational given its competitive position. What they share is the ability to sustain losses in AI deployment while extracting value from adjacent businesses. No standalone AI lab can do this.

The Chinese analogs — Alibaba, Tencent, ByteDance — follow a similar structural logic but operate under tighter compute constraints and a more contested regulatory environment. Alibaba’s Qwen is facing the same question every hyperscaler-affiliated lab must answer. If cloud revenue cannot cover AI infrastructure investment, and open-source model development does not translate into revenue, the economics do not work. The open-source window may be closing in China faster than most observers expect.

The Agents Win

For standalone labs, the default monetization strategy is API sales. It is also a race to the bottom. DeepSeek’s API pricing runs at approximately $0.028 per million tokens — roughly 1/180th of equivalent GPT pricing. When token pricing is the competitive axis, no one wins on margin. At these price points, standalone API revenue cannot cover compute costs, talent acquisition, or frontier research.
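To make the 180x price gap concrete, here is a small sketch. The 200-million-token daily agent workload is a hypothetical illustration, not a figure from the text:

```python
# The API price gap described above, applied to a hypothetical
# long-running agent. Token volume is an illustrative assumption.
deepseek_per_m = 0.028             # $ per million tokens (as cited)
gpt_per_m = deepseek_per_m * 180   # ~180x more expensive, per the text

agent_tokens_per_day = 200_000_000  # hypothetical: 200M tokens/day

cost_deepseek = agent_tokens_per_day / 1_000_000 * deepseek_per_m
cost_gpt = agent_tokens_per_day / 1_000_000 * gpt_per_m
print(f"DeepSeek-priced agent: ${cost_deepseek:.2f}/day")
print(f"GPT-priced agent:      ${cost_gpt:.2f}/day")
```

The same workload costs a few dollars a day at subsidized pricing and about a thousand at frontier pricing, which is why token price becomes the competitive axis and why margins collapse.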

The counterargument is that, as subsidized infrastructure contracts, a supply crunch emerges. Users who relied on artificially cheap inference will need to self-host, adopt open-source alternatives, or pay unsubsidized market rates. That transition could benefit labs that have built efficient inference infrastructure, regardless of geography. It is a plausible thesis. But it assumes the transition is manageable and that these labs survive long enough to benefit from it.

It won’t work.

AI agents that execute multi-step tasks autonomously over extended time horizons are the most defensible surface and the hardest to commoditize.

In other words, the winners will be the companies that own agents, not the companies that sell tokens.

Security

Anthropic’s Claude Mythos is so capable at discovering zero-day software vulnerabilities that the company partially withheld it from public release. Project Glasswing deploys Mythos defensively on critical infrastructure.

The security implications of frontier capability concentration in U.S. labs are significant and underappreciated by most investors. If the most capable models reside within a small number of U.S.-based labs connected to government facilities and infrastructure, vulnerability expands exponentially and unpredictably.

Fewer labs mean fewer points of failure. But a single successful compromise would be catastrophic. Open-source ecosystems distribute this risk.

Open-source models can be jailbroken for malicious use with no regulatory constraints. The gap between capability availability and governance infrastructure is the central tension in AI security policy today.

Academics from both the U.S. and China have suggested that the most realistic path to cooperation may be a joint response to a major security incident. The fact that such cooperation requires a crisis to materialize is itself a signal about the inadequate state of international AI governance.

Compute

The first generation of models trained natively on Grace Blackwell architecture is now entering deployment. The capability gap between labs with access to leading-edge U.S. compute and those without is widening. U.S. export controls have made this structural. The question of whether Huawei chips can substitute for Nvidia at scale remains open, but the performance differential is material, and the software ecosystem is orders of magnitude less mature.

For U.S. hyperscalers, compute access is a structural moat. Microsoft’s $80 billion data center commitment in fiscal 2025, Google’s sustained TPU investment, Amazon’s Trainium program, and Meta’s custom silicon efforts are not just infrastructure spending. They are capability barriers that compound over time. A lab without hyperscaler-level compute cannot sustain frontier research. That is a permanent disadvantage.

The industry is structurally shifting from a training-dominated era to an inference-dominated one. The bottleneck is now serving tokens at scale, cheaply and fast enough for agents that run for hours or days.

That shift favors the hyperscalers with efficient inference infrastructure and deep integration into enterprise workflows.

The Diffusion Problem

The U.S.’s leadership in AI infrastructure and frontier model development does not translate into leadership in AI adoption. Smaller, highly digitized economies such as South Korea are outpacing the U.S. on diffusion metrics.

This matters because diffusion is where economic value is ultimately created. Infrastructure investment and model capability are necessary, but not sufficient, conditions. The productivity gains that justify AI valuations can only materialize when AI is embedded in workflows at scale. Right now, that embedding is happening more slowly in the U.S. than the market narrative suggests.

The employment implications compound this concern. If AI diffusion concentrates among hyperscalers rather than spreading across the broader economy, the productivity benefits narrow.

The question of whether AI destroys jobs faster than the economy can absorb and create new ones has direct implications for fiscal revenues, political stability, and the social license that AI development currently enjoys. Governments cannot defend AI investment to their populations if the returns are not showing up in employment and household income.

People Matter

What differentiates the leading models? Architecture, scale, and compute are necessary, but not sufficient. The differentiating variable is talent. People matter most.

Anthropic’s revenue growth is not solely a function of model capability. It reflects organizational design, research culture, and the ability to attract and retain researchers capable of operating at the frontier. OpenAI’s persistent leadership in deployment scale reflects similar organizational advantages.

The talent concentration in U.S. labs stems from immigration policy, compensation structures, and proximity to compute infrastructure. It is a competitive advantage that does not appear on any balance sheet but is arguably the most durable moat in the industry. It also explains why the capability gap between U.S. frontier labs and their international counterparts is widening rather than converging, despite significant international talent and investment.

Consolidation

Consolidation is the eventual fate of every new technology and emerging industry, whether in automobiles, power generation, computers, semiconductors, or artificial intelligence.

The endgame for standalone AI labs is consolidation. Labs without hyperscaler anchors cannot sustain frontier research on API revenue alone. The open-source window is closing as the compute requirements for competitive models exceed what any independent organization can fund without institutional backing.

The Microsoft-OpenAI relationship was the template: a frontier research lab providing capability, a hyperscaler providing distribution, compute, and monetization infrastructure. That model is fragmenting. Microsoft does not wish to place all its bets on one frontier model maker and is also developing relationships with Anthropic.

This hybrid model, in which strategic alliances between model makers and hyperscalers are essential, will serve as the blueprint for the industry and drive consolidation. Which labs survive long enough to be worth acquiring? Which hyperscalers move first? At what valuations does the math work for both parties?

Winning

Winning? Well, it depends.

If winning is raw compute and energy capacity, the U.S. hyperscalers hold a structural lead that widens with every dollar of data center investment. If winning is frontier model capability, Anthropic and OpenAI are pulling ahead. If winning is diffusion – getting AI into the most hands, applications, and workflows – the U.S. is underperforming relative to its infrastructure.

It’s Economic Value

Diffusion without productivity gain is adoption without return. The companies best positioned for the next phase are those solving the token supply chain: efficient architectures, scalable inference infrastructure, and agent orchestration that enables every token to produce real output.

Economic Reality

A single thread runs through all of these questions: can any of this translate into economic reality before the capital runs out and political patience expires?

Infrastructure, capability, and revenue growth are happening. Productivity is developing but has not fully arrived. A significant gap still exists between infrastructure investment and return on that investment. AI is risky, but these investments are not irrational. Whether they are long-term winners is still unclear.
