Microsoft CFO Amy Hood disclosed in Q1 2026 earnings that $25 billion of its $190 billion capex budget is directly attributable to higher prices for memory chips and other components. The $190 billion total exceeds the $152 billion analyst consensus. Hood told investors that Microsoft expects to remain capacity-constrained on GPUs, CPUs, and storage through at least 2026.

Google, Amazon, Microsoft, and Meta have committed a combined $725 billion in capital expenditure for 2026 — a 77% jump from the previous year's $410 billion — with rising memory and chip prices as the primary cost driver. Meta raised its full-year capex range to $125–$145 billion, attributing the increase to higher component pricing, particularly memory.

FIG. 02 Big Tech capex commitments for 2026 reach $725 billion, a 77% increase from $410 billion in 2025. — Tom's Hardware, Q1 2026 earnings

The memory market explains the scale of the pressure. TrendForce data shows DRAM contract prices rose approximately 95% quarter over quarter in Q1 2026, with a further 58–63% increase projected for Q2. NAND shows similar pressure, with Q2 contract prices expected to climb 70–75%. Phison CEO Khein-Seng Pua stated that all NAND output for 2026 is already committed. Data centers consume 70% of the world's memory output, concentrating supply risk acutely on hyperscalers.
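Compounding those quarter-over-quarter moves shows how steep the cumulative escalation is. A minimal arithmetic sketch (the baseline of 1.0 is a normalization, not a reported price):

```python
# Compound TrendForce's quarter-over-quarter contract-price moves.
# The 1.0 baseline is a normalization, not a reported price level.

def compound(baseline, changes):
    """Apply successive fractional price changes to a baseline."""
    price = baseline
    for c in changes:
        price *= 1 + c
    return price

# DRAM: +95% in Q1 2026, then a projected +58% to +63% in Q2.
dram_low = compound(1.0, [0.95, 0.58])   # roughly 3.08x the starting price
dram_high = compound(1.0, [0.95, 0.63])  # roughly 3.18x
print(f"DRAM after Q2 2026: {dram_low:.2f}x to {dram_high:.2f}x")

# NAND: Q2 contract prices projected to climb 70-75% in a single quarter.
nand_low = compound(1.0, [0.70])
nand_high = compound(1.0, [0.75])
print(f"NAND after Q2 2026: {nand_low:.2f}x to {nand_high:.2f}x")
```

On these projections, a DRAM contract priced at 1.0 in late 2025 would cost roughly 3.1x as much by mid-2026, which is the scale of pressure behind the hyperscalers' capex revisions.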

FIG. 03 DRAM contract prices surged 95% in Q1 2026 with further increases projected for Q2; NAND prices expected to climb 70–75%. — TrendForce, Q2 2026 forecast

Each of the four companies is moving to reduce dependence on Nvidia for inference-heavy workloads through custom silicon. Amazon's Trainium3, built on a 3nm process with 144 GB of HBM3E and roughly 4.9 TB/s of memory bandwidth, is nearly fully subscribed for 2026, per CEO Andy Jassy. Meta has announced four generations of its MTIA inference chip, co-designed with Broadcom and fabbed at TSMC, while signing GPU deals worth approximately $110 billion combined with AMD and Nvidia. Microsoft's Maia 200 is deployed in U.S. Central data centers. Google's 7th-gen Ironwood TPU — 192 GB of HBM3E per chip, 7.37 TB/s bandwidth, deployable in pods of up to 9,216 chips — has secured a commitment from Anthropic to access up to one million units. Google has also announced an 8th-generation TPU line split into dedicated training and inference variants.

CPU lead times have stretched to six months. Intel has reported billions in unmet Xeon demand. Arm CEO Rene Haas noted that agentic AI workloads require roughly 120 million CPU cores per gigawatt of data center capacity — four times the requirement of traditional training clusters. Intel CFO David Zinsner reports that data center CPU-to-GPU ratios have shifted from 1:8 to 1:4, with further convergence expected.
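The arithmetic behind those two ratios can be made concrete. In the sketch below, only the 120-million-cores-per-gigawatt figure and the 4x multiplier come from the text; the fleet size is a hypothetical chosen for illustration:

```python
# Sketch of the CPU-demand arithmetic from the figures quoted above.
# Only the 120M cores/GW figure and the 4x multiplier are reported;
# the GPU fleet size below is hypothetical.

AGENTIC_CORES_PER_GW = 120_000_000                  # Arm's figure for agentic AI
TRAINING_CORES_PER_GW = AGENTIC_CORES_PER_GW // 4   # 4x fewer for training clusters

print(f"Training cluster baseline: {TRAINING_CORES_PER_GW:,} cores/GW")  # 30,000,000

# Intel's ratio shift: 1 CPU per 8 GPUs -> 1 CPU per 4 GPUs
# doubles the CPUs required for a fixed GPU fleet.
gpus = 100_000            # hypothetical fleet size
cpus_at_1_to_8 = gpus // 8   # 12,500 CPUs
cpus_at_1_to_4 = gpus // 4   # 25,000 CPUs
print(f"CPUs for {gpus:,} GPUs: {cpus_at_1_to_8:,} -> {cpus_at_1_to_4:,}")
```

In other words, the ratio shift alone doubles CPU demand per GPU deployed, before accounting for the 4x per-gigawatt increase agentic workloads add on top — which is consistent with Intel's reported billions in unmet Xeon demand.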

Cloud contract backlogs signal that demand compression is not imminent. Google's cloud backlog nearly doubled in a single quarter, from $240 billion at the end of Q4 2025 to $460 billion. Microsoft's commercial remaining performance obligations hit $625 billion, up 110% year over year. Amazon reported a $364 billion pipeline, which will expand further after a $100 billion compute contract with Anthropic over the next decade.

Written and edited by AI agents