The Datacenter CPU Renaissance
AI doesn't just need GPUs. Reinforcement learning, agentic inference, and vibe coding are creating an unprecedented demand surge for datacenter CPUs. Here's where the money flows.
The Core Thesis
For five years, the semiconductor narrative has been GPU, GPU, GPU. But a critical inflection is underway: datacenter CPUs are experiencing a demand resurgence that most investors have not priced in.
Three converging forces are driving this:
1. Reinforcement Learning (RL) Training Loops — Modern AI training isn't just matrix multiplication on GPUs. RL requires massive parallel CPU clusters for code compilation, verification, physics simulation, and reward calculation; GPU clusters sit idle without sufficient CPU horsepower. Microsoft's "Fairwater" datacenter design reveals a ~48MW CPU/storage building supporting a 295MW GPU cluster, a roughly 1:6 CPU-to-GPU power ratio that few demand models accounted for.
2. Agentic AI Inference — AI agents don't just generate text; they make API calls, run database queries, execute code, and orchestrate multi-step workflows. Each agent interaction can generate 10-100x more general-purpose compute demand than a simple chat response. As agents move from demos to production, CPU demand scales with them.
3. Vibe Coding — AI-assisted code generation is creating a surge in compilation, testing, and CI/CD workloads. Every line of AI-generated code still needs to be compiled and tested on CPUs.
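The power split in point 1 can be checked with back-of-envelope arithmetic. The 48MW and 295MW figures are the Fairwater numbers cited above; the 5GW buildout used for extrapolation is a purely hypothetical illustration, not a figure from any disclosure:

```python
# Back-of-envelope check on the CPU:GPU power split implied by the
# Fairwater figures (48 MW CPU/storage building vs 295 MW GPU cluster).
cpu_storage_mw = 48.0
gpu_cluster_mw = 295.0

gpu_watts_per_cpu_watt = gpu_cluster_mw / cpu_storage_mw  # ~6.1, i.e. roughly 1:6
print(f"CPU:GPU power ratio ~= 1:{gpu_watts_per_cpu_watt:.1f}")

# Hypothetical illustration: CPU/storage power implied by a 5 GW GPU buildout
# at the same ratio (5 GW is an assumed figure, not from the article).
gpu_buildout_mw = 5_000.0
implied_cpu_mw = gpu_buildout_mw * cpu_storage_mw / gpu_cluster_mw
print(f"Implied CPU/storage power: {implied_cpu_mw:.0f} MW")
```

At that ratio, every gigawatt of GPU capacity drags roughly 160MW of CPU and storage power along with it, which is the demand vector the thesis hinges on.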
Intel's Q4 2025 earnings revealed an unexpected CPU demand uptick, prompting increased 2026 capex and production shifts from PC to server wafers. Frontier AI labs are now competing directly with cloud providers for x86 CPU allocation.
The 2026 CPU Competitive Landscape
AMD Venice (EPYC 7th Gen) — Clear Leader
256 cores / 512 threads across 8 TSMC N2 CCDs. 16 DDR5 memory channels delivering ~1.64 TB/s of bandwidth via MRDIMM-12800. AMD claims >1.7x performance per watt versus prior-gen Turin in SPECrate integer benchmarks. AMD's chiplet architecture, refined over 7 generations since 2017, is inherently more scalable than Intel's approach: each small CCD is identical, cheap to manufacture, and requires only one tapeout per generation. The new SP8 (8-channel) variant fills the exact gap Intel is abandoning.
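The ~1.64 TB/s figure falls out of standard peak-bandwidth arithmetic (transfer rate times bus width times channel count), assuming the usual 64-bit data path per DDR5 channel:

```python
# Sanity check of the ~1.64 TB/s bandwidth figure: peak theoretical DDR
# bandwidth = transfer rate x bus width x channel count. Assumes a standard
# 64-bit (8-byte) data path per DDR5 channel.
transfers_per_sec = 12_800e6  # MRDIMM-12800: 12,800 MT/s
bytes_per_transfer = 8        # 64-bit channel width
channels = 16

bandwidth_tb_s = transfers_per_sec * bytes_per_transfer * channels / 1e12
print(f"Peak theoretical bandwidth: {bandwidth_tb_s:.2f} TB/s")  # 1.64 TB/s
```

Real-world sustained bandwidth lands below this peak, but the quoted spec is internally consistent.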
INTC Diamond Rapids — Architecturally Ambitious, Commercially Hamstrung
192 enabled cores (256 printed) using 4 CBB dies on Intel 18A-P, hybrid-bonded onto Intel 3 base dies. Impressive engineering, but no SMT, meaning 192 cores = 192 threads versus AMD's 512. Expected to be only ~40% faster than 128-core Granite Rapids despite far greater complexity. Critically, Intel cancelled the mainstream SP (8-channel) variant, leaving zero Intel competition in the volume server market until 2028 or later.
ARM Phoenix — The Inflection Point
ARM's first full chip design (not just IP licensing). 128 Neoverse V3 cores, two TSMC 3nm dies, 12-channel DDR5. First customers: Meta, OpenAI (Stargate), Cloudflare. ARM datacenter royalties doubled YoY. Over 1 billion Neoverse cores deployed. The transition from IP licensor to chip designer represents a fundamental business model expansion.
NVDA Vera CPU — The Ecosystem Lock-in
Custom Olympus ARM cores with SMT (88c/176t). The real story is the 1.8 TB/s NVLink-C2C connection to NVIDIA GPUs — an interconnect moat no competitor can match. Every AI cluster needs a host CPU, and NVIDIA wants it to be theirs.
Hyperscaler Custom Silicon
AMZN Graviton5 (192 cores, 3nm, head node for Trainium3), MSFT Cobalt 200 (132 cores, ~50% faster than Cobalt 100), GOOGL Axion (migrating Gmail/YouTube). All three hyperscalers are building custom ARM CPUs to reduce dependence on Intel/AMD and gain cost advantages.
The AMD Bull Case
AMD is the single best-positioned company in the datacenter CPU market for 2026-2027:
- No mainstream Intel competition until 2028+. Diamond Rapids SP is cancelled. AMD Venice SP8 owns the volume market unchallenged.
- 7 generations of chiplet maturity. The CCD model is proven, scalable, and yields are excellent. Intel is attempting 3D stacking for the first time with predictable teething problems.
- Memory bandwidth leadership. 16-channel MRDIMM-12800 at 1.64 TB/s — critical for AI host CPU workloads where data movement is the bottleneck.
- "Strong double digits" TAM growth for 2026, per AMD management guidance.
- TSMC N2 process. Best-in-class manufacturing on the most advanced node, with a partner (TSMC) that is executing flawlessly.
Risks: AMD has a history of stumbling after gaining share (see 2006 ATI acquisition). Execution must remain flawless. ARM custom silicon from hyperscalers is a long-term share risk, though it primarily displaces Intel today.
The Supply Chain: Picks and Shovels
Regardless of who wins the CPU architecture war, certain companies benefit from every chip shipped.
Manufacturing
TSM TSMC — Manufactures for AMD, NVIDIA, ARM, AWS, Google, Microsoft. Every winning chip in this thesis except Intel's runs on TSMC. CoWoS advanced packaging is capacity-constrained. Monopoly position on leading-edge nodes.
ASML ASML — Sole supplier of EUV and High-NA EUV lithography machines. Both TSMC and Intel are customers. $350M+ per High-NA machine. Absolute monopoly.
Memory (The Bottleneck)
A DRAM shortage is already impacting CPU configurations and pricing. 16-channel CPUs (Venice, Diamond Rapids, Graviton5) double the number of DIMMs per server versus 8-channel platforms. MRDIMM is a new premium product with higher ASPs.
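The doubling claim is simple multiplication. A minimal sketch; the 2-socket configuration and one DIMM per channel are illustrative assumptions for a typical server, not figures from the text:

```python
# Illustrative DIMM-count math behind the "doubling" claim. The 2-socket
# server and 1 DIMM per channel are assumed, typical configurations.
def dimms_per_server(sockets: int, channels_per_socket: int,
                     dimms_per_channel: int) -> int:
    return sockets * channels_per_socket * dimms_per_channel

eight_channel = dimms_per_server(2, 8, 1)     # prior-gen 8-channel platform
sixteen_channel = dimms_per_server(2, 16, 1)  # Venice-class 16-channel platform
print(eight_channel, sixteen_channel)         # 16 32
```

Twice the DIMM slots per server, filled with premium MRDIMMs, is the mechanism by which CPU channel counts feed directly into DRAM demand.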
MU Micron — Only US-based DRAM manufacturer. Benefits from both DDR5 volume growth and HBM for GPUs.
Packaging Equipment
BESI Besi — Leader in hybrid bonding equipment. Every 3D-stacked chip (Intel Diamond Rapids, Clearwater Forest, HBM4) needs their tools. Classic picks-and-shovels play at the heart of the packaging revolution.
Testing
TER Teradyne — More dies per package = more testing required. Revenue scales directly with chip complexity.
Next-Generation Disruptions
Beyond the current CPU war, several technology shifts create additional investment opportunities:
Optical Interconnects (Replacing Copper)
Copper-based chip-to-chip communication is hitting bandwidth and power limits. Silicon photonics — using light instead of electrons — offers dramatically lower power per bit.
AVGO Broadcom — Co-packaged optics leader. COHR Coherent — 800G/1.6T optical transceivers. LITE Lumentum — Partnering with NVIDIA on optical NVLink.
Liquid Cooling (Mandatory at 350W+)
Air cooling cannot handle 256-core, 350W CPUs. The transition to liquid cooling is not optional — it's physics.
VRT Vertiv — Largest pure-play datacenter cooling/power company. Already $7B+ revenue and growing fast.
Power Delivery (GaN Revolution)
Gallium nitride (GaN) power devices switch far more efficiently than silicon, sharply cutting conversion losses in voltage regulation. Critical as CPU power envelopes expand.
NVTS Navitas Semi (GaN leader). VICR Vicor (48V power modules, used by NVIDIA).
Glass Substrates
Intel announced a shift from organic to glass substrates — 10x better wiring density. GLW Corning is a major technology partner. Early stage but could be transformative for advanced packaging.
Nuclear Power for Datacenters
Datacenter power demand is overwhelming electrical grids. Nuclear is the only scalable, carbon-free baseload solution.
CEG Constellation Energy (Microsoft deal for Three Mile Island restart). OKLO Small modular reactors. SMR NuScale (first NRC-approved SMR design).
AI Networking
The Ultra Ethernet Consortium is building an open alternative to NVIDIA's proprietary InfiniBand for AI clusters.
ANET Arista Networks — Leading AI datacenter Ethernet switch vendor. Direct beneficiary.
Risk Factors
- DRAM supply constraints could limit server shipments regardless of CPU availability. Memory pricing impacts server OEM margins.
- NVIDIA BlueField-4 "Context Memory Storage" platform may offload KV-cache and memory tasks from general-purpose CPUs, reducing CPU demand per AI cluster.
- CPU performance per watt is improving far more slowly than GPU performance per watt. Future RL workloads may demand even higher CPU:GPU power ratios than today's 1:6, straining datacenter power budgets.
- Hyperscaler custom silicon (Graviton, Cobalt, Axion) reduces TAM for merchant CPU vendors (AMD, Intel) over time.
- China export controls limit the addressable market for advanced CPUs. Huawei's Kunpeng is building domestic alternatives.
- Cyclicality. Server CPU markets have historically been cyclical. A capex pause by hyperscalers would impact all players.
- AMD valuation. Much of the share gain story may already be priced in. Execution risk remains.
Investment Picks by Conviction
| Ticker | Company | Theme | Conviction | Timeframe | Why |
|---|---|---|---|---|---|
| AMD | AMD | CPU share gains | HIGH | 1-2 years | Venice best-in-class, Intel ceded mainstream, "strong double digits" growth |
| TSM | TSMC | Foundry monopoly | HIGH | 1-3 years | Makes chips for every winner except Intel. CoWoS constrained. |
| ASML | ASML | Equipment monopoly | HIGH | 1-5 years | Sole EUV supplier. Both TSMC and Intel buy. Impossible to compete with. |
| ARM | ARM Holdings | Datacenter royalty growth | HIGH | 2-3 years | Royalties 2x YoY, Phoenix chip, Meta/OpenAI customers |
| MU | Micron | DRAM shortage | HIGH | 1-2 years | 16-ch CPUs double DIMM demand. DRAM shortage = pricing power. |
| VRT | Vertiv | Liquid cooling | HIGH | 1-3 years | 350W CPUs require liquid cooling. $7B+ revenue, growing fast. |
| AVGO | Broadcom | Optical + custom silicon | MEDIUM | 2-3 years | Co-packaged optics + custom AI chip design for hyperscalers |
| ANET | Arista Networks | AI networking | MEDIUM | 1-3 years | Leading AI datacenter switch vendor. Ultra Ethernet beneficiary. |
| COHR | Coherent | Optical transceivers | MEDIUM | 2-3 years | 800G/1.6T transceivers. Silicon photonics leader. |
| BESI | Besi | Hybrid bonding equipment | MEDIUM | 2-4 years | Every 3D-stacked chip needs their tools. Picks-and-shovels. |
| GLW | Corning | Glass substrates | SPECULATIVE | 3-5 years | 10x density improvement. Intel partnership. Early stage. |
| NVTS | Navitas Semi | GaN power delivery | SPECULATIVE | 2-4 years | GaN sharply cuts conversion losses vs silicon. Small cap, high-risk. |
| CEG | Constellation Energy | Nuclear for datacenters | SPECULATIVE | 3-5 years | Microsoft nuclear deal. Grid constraints = nuclear tailwind. |
| INTC | Intel | Contrarian / turnaround | SPECULATIVE | 2-3 years | Inventory depletion + price increases. If 18A works, massive upside. If not, existential risk. |
Verdict
The datacenter CPU market is entering a demand supercycle that most investors are underweighting.
The narrative has been "GPUs are all that matters for AI." The reality is that every GPU cluster needs a proportional CPU infrastructure for RL training, agentic inference, and general-purpose workloads. Microsoft's 1:6 CPU-to-GPU power ratio in new datacenter designs quantifies a demand vector that isn't in most models.
AMD is the highest-conviction single-stock play — Venice is the best server CPU shipping in 2026, and Intel has voluntarily exited the volume market. TSMC and ASML are the infrastructure monopolies that win regardless of who designs the best chip. ARM represents the most significant architectural shift, with royalties doubling and first-party silicon shipping to Meta and OpenAI.
The supply chain tells an equally compelling story: DRAM shortage (Micron), liquid cooling (Vertiv), optical interconnects (Coherent, Broadcom), and advanced packaging (Besi) are all bottlenecks that create pricing power for well-positioned companies.
This is not a one-quarter trade. The CPU demand supercycle is structural, driven by the fundamental compute requirements of AI systems that are still in early deployment. The investment window is now, before the market fully reprices CPU relevance.
Key Takeaways
- RL training and agentic inference are creating unprecedented CPU demand alongside GPUs.
- Venice dominates, with Intel's mainstream server line cancelled until 2028.
- ARM royalties are doubling, with the Phoenix chip winning hyperscaler adoption.
- 16-channel CPUs double DIMM demand amid a supply shortage.
- 350W chips force liquid cooling adoption, and grid constraints push nuclear.