INVESTMENT THESIS · February 2026 · THEMATIC DEEP DIVE

The Datacenter CPU Renaissance

AI doesn't just need GPUs. Reinforcement learning, agentic inference, and vibe coding are creating an unprecedented demand surge for datacenter CPUs. Here's where the money flows.

Key figures:
  • 1:6 CPU-to-GPU power ratio in AI datacenters
  • 2x ARM datacenter royalties YoY growth
  • 256-core AMD Venice (a new core-count record)

Themes: CPU Demand Supercycle · AMD Share Gains · ARM Datacenter Inflection · DRAM Shortage · Intel Mainstream Gap

The Core Thesis

For five years, the semiconductor narrative has been GPU, GPU, GPU. But a critical inflection is underway: datacenter CPUs are experiencing a demand resurgence that most investors have not priced in.

Three converging forces are driving this:

1. Reinforcement Learning (RL) Training Loops — Modern AI training isn't just matrix multiplication on GPUs. RL requires massive parallel CPU clusters for code compilation, verification, physics simulation, and reward calculation; GPU clusters sit idle without sufficient CPU horsepower. Microsoft's "Fairwater" datacenter design reveals a ~48MW CPU/storage building supporting a 295MW GPU cluster — a roughly 1:6 CPU-to-GPU power ratio that most capacity models were not capturing (a back-of-envelope sketch follows this list).

2. Agentic AI Inference — AI agents don't just generate text; they make API calls, run database queries, execute code, and orchestrate multi-step workflows. Each agent interaction can generate 10-100x more general-purpose compute demand than a simple chat response, and as agents move from demos to production that demand compounds.

3. Vibe Coding — AI-assisted code generation is creating a surge in compilation, testing, and CI/CD workloads. Every line of AI-generated code still needs to be compiled and tested on CPUs.
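
To make the Fairwater numbers concrete, here is a minimal back-of-envelope sketch. The 48 MW / 295 MW split is taken from the design cited above; the GPU cluster sizes in the loop are illustrative assumptions, not figures from this thesis.

```python
# Implied CPU/storage power for an AI campus, scaled from the ~1:6 ratio
# in Microsoft's Fairwater figures (48 MW CPU/storage vs 295 MW GPU).
FAIRWATER_CPU_MW = 48
FAIRWATER_GPU_MW = 295

CPU_PER_GPU_WATT = FAIRWATER_CPU_MW / FAIRWATER_GPU_MW  # ~0.16, roughly 1:6

def implied_cpu_power_mw(gpu_cluster_mw: float) -> float:
    """Scale the Fairwater ratio to a hypothetical GPU cluster size."""
    return gpu_cluster_mw * CPU_PER_GPU_WATT

for gpu_mw in (100, 500, 1000):  # illustrative cluster sizes
    print(f"{gpu_mw} MW of GPUs -> ~{implied_cpu_power_mw(gpu_mw):.0f} MW of CPU/storage")
```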

Intel's Q4 2025 earnings revealed an unexpected CPU demand uptick, prompting increased 2026 capex and production shifts from PC to server wafers. Frontier AI labs are now competing directly with cloud providers for x86 CPU allocation.

The 2026 CPU Competitive Landscape

AMD Venice (EPYC 7th Gen) — Clear Leader

256 cores / 512 threads across 8 TSMC N2 CCDs. 16 DDR5 memory channels delivering ~1.64 TB/s bandwidth via MRDIMM-12800. Claims >1.7x performance/watt vs prior-gen Turin in SPECrate int benchmarks. AMD's chiplet architecture — refined over 7 generations since 2017 — is inherently more scalable than Intel's approach. Each small CCD is identical, cheap to manufacture, and only requires one tapeout per generation. The new SP8 (8-channel) variant fills the exact gap Intel is abandoning.
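
The ~1.64 TB/s bandwidth figure follows directly from the channel count and transfer rate. A minimal sketch of the arithmetic, assuming the standard 64-bit DDR5 data path per channel (the bus-width assumption is not stated in this thesis):

```python
# Theoretical peak DRAM bandwidth for a 16-channel MRDIMM-12800 configuration.
channels = 16
transfer_rate_mt_s = 12_800   # MRDIMM-12800: mega-transfers per second
bytes_per_transfer = 8        # 64-bit data bus per channel (assumption)

peak_gb_s = channels * transfer_rate_mt_s * bytes_per_transfer / 1_000
print(f"Theoretical peak: {peak_gb_s / 1_000:.2f} TB/s")  # ~1.64 TB/s
```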

INTC Diamond Rapids — Architecturally Ambitious, Commercially Hamstrung

192 enabled cores (256 printed) using 4 CBB dies on Intel 18A-P with hybrid bonding onto Intel 3 base dies. Impressive engineering, but no SMT — meaning 192 cores = 192 threads vs AMD's 512. Expected to be only ~40% faster than the 128-core Granite Rapids despite far more complexity. Critically, Intel cancelled the mainstream SP (8-channel) variant, leaving zero Intel competition in the volume server market until 2028+.

ARM Phoenix — The Inflection Point

ARM's first full chip design (not just IP licensing). 128 Neoverse V3 cores, two TSMC 3nm dies, 12-channel DDR5. First customers: Meta, OpenAI (Stargate), Cloudflare. ARM datacenter royalties doubled YoY. Over 1 billion Neoverse cores deployed. The transition from IP licensor to chip designer represents a fundamental business model expansion.

NVDA Vera CPU — The Ecosystem Lock-in

Custom Olympus ARM cores with SMT (88c/176t). The real story is the 1.8 TB/s NVLink-C2C connection to NVIDIA GPUs — an interconnect moat no competitor can match. Every AI cluster needs a host CPU, and NVIDIA wants it to be theirs.

Hyperscaler Custom Silicon

AMZN Graviton5 (192 cores, 3nm, head node for Trainium3), MSFT Cobalt 200 (132 cores, ~50% faster than Cobalt 100), GOOGL Axion (migrating Gmail/YouTube). All three hyperscalers are building custom ARM CPUs to reduce dependence on Intel/AMD and gain cost advantages.

The AMD Bull Case

AMD is the single best-positioned company in the datacenter CPU market for 2026-2027:

  • No mainstream Intel competition until 2028+. Diamond Rapids SP is cancelled. AMD Venice SP8 owns the volume market unchallenged.
  • 7 generations of chiplet maturity. The CCD model is proven, scalable, and yields are excellent. Intel is attempting 3D stacking for the first time with predictable teething problems.
  • Memory bandwidth leadership. 16-channel MRDIMM-12800 at 1.64 TB/s — critical for AI host CPU workloads where data movement is the bottleneck.
  • "Strong double digits" TAM growth guided by AMD management for 2026.
  • TSMC N2 process. Best-in-class manufacturing on the most advanced node, with a partner (TSMC) that is executing flawlessly.

Risks: AMD has a history of stumbling after gaining share (see 2006 ATI acquisition). Execution must remain flawless. ARM custom silicon from hyperscalers is a long-term share risk, though it primarily displaces Intel today.

The Supply Chain: Picks and Shovels

Regardless of who wins the CPU architecture war, certain companies benefit from every chip shipped.

Manufacturing

TSM TSMC — Manufactures for AMD, NVIDIA, ARM, AWS, Google, Microsoft. Every winning chip in this thesis except Intel's runs on TSMC. CoWoS advanced packaging is capacity-constrained. Monopoly position on leading-edge nodes.

ASML ASML — Sole supplier of EUV and High-NA EUV lithography machines. Both TSMC and Intel are customers. $350M+ per High-NA machine. Absolute monopoly.

Memory (The Bottleneck)

A DRAM shortage is already shaping CPU configurations and pricing. 16-channel CPUs (Venice, Diamond Rapids, Graviton5) double the number of DIMMs per server, and MRDIMM is a new premium product with higher ASPs.
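
A hypothetical sketch of the DIMM math behind that claim. The dual-socket configuration, one-DIMM-per-channel population, and 64 GB module size below are illustrative assumptions, not figures from this thesis:

```python
# DIMM count per server as memory channels per socket increase.
def dimms_per_server(channels_per_socket: int, sockets: int = 2,
                     dimms_per_channel: int = 1) -> int:
    return channels_per_socket * sockets * dimms_per_channel

for channels in (8, 12, 16):
    dimms = dimms_per_server(channels)
    print(f"{channels}-channel CPU: {dimms} DIMMs/server "
          f"(~{dimms * 64} GB at 64 GB per DIMM)")
```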

MU Micron — Only US-based DRAM manufacturer. Benefits from both DDR5 volume growth and HBM for GPUs.

Packaging Equipment

BESI Besi — Leader in hybrid bonding equipment. Every 3D-stacked chip (Intel Diamond Rapids, Clearwater Forest, HBM4) needs their tools. Classic picks-and-shovels play at the heart of the packaging revolution.

Testing

TER Teradyne — More dies per package = more testing required. Revenue scales directly with chip complexity.

Next-Generation Disruptions

Beyond the current CPU war, several technology shifts create additional investment opportunities:

Optical Interconnects (Replacing Copper)

Copper-based chip-to-chip communication is hitting bandwidth and power limits. Silicon photonics — using light instead of electrons — offers dramatically lower power per bit.

AVGO Broadcom (co-packaged optics leader). COHR Coherent (800G/1.6T optical transceivers). LITE Lumentum (partnering with NVIDIA on optical NVLink).

Liquid Cooling (Mandatory at 350W+)

Air cooling cannot handle 256-core, 350W CPUs. The transition to liquid cooling is not optional — it's physics.

VRT Vertiv — Largest pure-play datacenter cooling/power company. Already $7B+ revenue and growing fast.

Power Delivery (GaN Revolution)

Gallium Nitride power chips are 10x more efficient than silicon for voltage regulation. Critical as CPU power envelopes expand.

NVTS Navitas Semi (GaN leader). VICR Vicor (48V power modules, used by NVIDIA).

Glass Substrates

Intel announced a shift from organic to glass substrates — 10x better wiring density. GLW Corning is a major technology partner. Early stage but could be transformative for advanced packaging.

Nuclear Power for Datacenters

Datacenter power demand is overwhelming electrical grids. Nuclear is the only scalable, carbon-free baseload solution.

CEG Constellation Energy (Microsoft deal for Three Mile Island restart). OKLO Oklo (small modular reactors). SMR NuScale (first NRC-approved SMR design).

AI Networking

The Ultra Ethernet Consortium is building an open alternative to NVIDIA-dominated InfiniBand networking for AI clusters.

ANET Arista Networks — Leading AI datacenter Ethernet switch vendor. Direct beneficiary.

Risk Factors

  • DRAM supply constraints could limit server shipments regardless of CPU availability. Memory pricing impacts server OEM margins.
  • NVIDIA BlueField-4 "Context Memory Storage" platform may offload KV-cache and memory tasks from general-purpose CPUs, reducing CPU demand per AI cluster.
  • CPU performance/watt is improving far slower than GPU. Future RL workloads may demand even higher CPU:GPU power ratios than today's 1:6, straining datacenter power budgets.
  • Hyperscaler custom silicon (Graviton, Cobalt, Axion) reduces TAM for merchant CPU vendors (AMD, Intel) over time.
  • China export controls limit the addressable market for advanced CPUs. Huawei's Kunpeng is building domestic alternatives.
  • Cyclicality. Server CPU markets have historically been cyclical. A capex pause by hyperscalers would impact all players.
  • AMD valuation. Much of the share gain story may already be priced in. Execution risk remains.

Investment Picks by Conviction

Ticker | Company | Theme | Conviction | Timeframe | Why
AMD | AMD | CPU share gains | HIGH | 1-2 years | Venice best-in-class, Intel ceded mainstream, "strong double digits" growth
TSM | TSMC | Foundry monopoly | HIGH | 1-3 years | Makes chips for every winner except Intel. CoWoS constrained.
ASML | ASML | Equipment monopoly | HIGH | 1-5 years | Sole EUV supplier. Both TSMC and Intel buy. Impossible to compete with.
ARM | ARM Holdings | Datacenter royalty growth | HIGH | 2-3 years | Royalties 2x YoY, Phoenix chip, Meta/OpenAI customers
MU | Micron | DRAM shortage | HIGH | 1-2 years | 16-ch CPUs double DIMM demand. DRAM shortage = pricing power.
VRT | Vertiv | Liquid cooling | HIGH | 1-3 years | 350W CPUs require liquid cooling. $7B+ revenue, growing fast.
AVGO | Broadcom | Optical + custom silicon | MEDIUM | 2-3 years | Co-packaged optics + custom AI chip design for hyperscalers
ANET | Arista Networks | AI networking | MEDIUM | 1-3 years | Leading AI datacenter switch vendor. Ultra Ethernet beneficiary.
COHR | Coherent | Optical transceivers | MEDIUM | 2-3 years | 800G/1.6T transceivers. Silicon photonics leader.
BESI | Besi | Hybrid bonding equipment | MEDIUM | 2-4 years | Every 3D-stacked chip needs their tools. Picks-and-shovels.
GLW | Corning | Glass substrates | SPECULATIVE | 3-5 years | 10x density improvement. Intel partnership. Early stage.
NVTS | Navitas Semi | GaN power delivery | SPECULATIVE | 2-4 years | GaN 10x more efficient than silicon. Small cap, high-risk.
CEG | Constellation Energy | Nuclear for datacenters | SPECULATIVE | 3-5 years | Microsoft nuclear deal. Grid constraints = nuclear tailwind.
INTC | Intel | Contrarian / turnaround | SPECULATIVE | 2-3 years | Inventory depletion + price increases. If 18A works, massive upside. If not, existential risk.

Verdict

The datacenter CPU market is entering a demand supercycle that most investors are underweighting.

The narrative has been "GPUs are all that matters for AI." The reality is that every GPU cluster needs a proportional CPU infrastructure for RL training, agentic inference, and general-purpose workloads. Microsoft's 1:6 CPU-to-GPU power ratio in new datacenter designs quantifies a demand vector that isn't in most models.

AMD is the highest-conviction single-stock play — Venice is the best server CPU shipping in 2026, and Intel has voluntarily exited the volume market. TSMC and ASML are the infrastructure monopolies that win regardless of who designs the best chip. ARM represents the most significant architectural shift, with royalties doubling and first-party silicon shipping to Meta and OpenAI.

The supply chain tells an equally compelling story: DRAM shortage (Micron), liquid cooling (Vertiv), optical interconnects (Coherent, Broadcom), and advanced packaging (Besi) are all bottlenecks that create pricing power for well-positioned companies.

This is not a one-quarter trade. The CPU demand supercycle is structural, driven by the fundamental compute requirements of AI systems that are still in early deployment. The investment window is now, before the market fully reprices CPU relevance.

Disclaimer: This is not financial advice. This thesis is for educational and informational purposes only. The author may hold positions in securities mentioned. Always do your own research before making investment decisions. Past performance does not guarantee future results. All investments carry risk of loss.

Thesis Themes

  • CPU Demand Supercycle: RL training + agentic inference creating unprecedented CPU demand alongside GPUs
  • AMD Share Gains: Venice dominates with Intel mainstream cancelled until 2028
  • ARM Datacenter Inflection: Royalties doubling, Phoenix chip, hyperscaler adoption
  • DRAM Bottleneck: 16-ch CPUs double DIMM demand amid supply shortage
  • Cooling & Power Crisis: 350W chips force liquid cooling adoption, grid constraints push nuclear