Nvidia's exposure to Asian suppliers has climbed to roughly 90% of production costs, up from about 65% a year earlier, according to Bloomberg data. The concentration is widening as the company's physical AI product lines pull from the same constrained Asian component pool as its data center GPUs.

FIG. 02 Nvidia's Asian supplier concentration surged from 65% to 90% of production costs in one year, concentrating geopolitical and allocation risk.

The dependency runs through established suppliers: TSMC for fabrication, SK Hynix and Samsung for HBM, Foxconn and Quanta for server assembly. And it is deepening as Nvidia expands into robotics and automotive silicon. The Jetson Thor robotics platform, released last August and built on the Blackwell GPU architecture, is fabricated on TSMC's 3nm process. The top-end T5000 module delivers 2,070 FP4 TFLOPS with 128 GB of LPDDR5X memory; a lower-cost T4000 variant introduced at CES 2026 offers 1,200 FP4 TFLOPS with 64 GB at $1,999 per unit in volume. Both use LPDDR5X sourced from Samsung or SK Hynix. The DRIVE AGX Thor automotive SoC is another Blackwell-based line drawing from the same 3nm wafer allocation.
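The quoted specs allow a rough price-performance check. A back-of-envelope sketch in Python, using only the figures above (the T5000's price is not given in the source, so only relative compute and memory are compared):

```python
# Back-of-envelope on the Jetson Thor module specs quoted above.
# Only the T4000 has a published volume price ($1,999); the T5000's
# price is not stated, so its price-performance cannot be computed.
t4000 = {"fp4_tflops": 1200, "mem_gb": 64, "price_usd": 1999}
t5000 = {"fp4_tflops": 2070, "mem_gb": 128}  # price unknown

# T4000 price-performance at the quoted volume price.
tflops_per_dollar = t4000["fp4_tflops"] / t4000["price_usd"]
print(f"T4000: {tflops_per_dollar:.2f} FP4 TFLOPS per dollar")  # ~0.60

# Top-end part relative to the lower-cost variant.
print(f"T5000 compute vs T4000: ~{t5000['fp4_tflops'] / t4000['fp4_tflops']:.1f}x")
print(f"T5000 memory  vs T4000: {t5000['mem_gb'] / t4000['mem_gb']:.0f}x")
```

The memory doubling matters for allocation: both parts draw from the same constrained LPDDR5X pool, so the top-end SKU consumes twice the scarce DRAM per unit shipped.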

FIG. 03 Nvidia's core GPU supply chain flows through TSMC (fabrication), Korean memory suppliers, and Foxconn/Quanta (assembly)—all Asia-based, all critical-path.

These physical AI products do not require TSMC's CoWoS advanced packaging — the primary bottleneck for data center GPU production — but they consume 3nm wafer starts and LPDDR5X capacity, both of which are already stretched. TSMC's CoWoS packaging capacity is growing at an 80% compound annual growth rate, yet chips fabricated at TSMC's Arizona Fab 21 still ship back to Taiwan for that packaging step, meaning geographic risk persists even in nominally domestic fabs.

Nvidia has accelerated end-of-life timelines for its Jetson TX2 and Xavier modules because LPDDR4 supply has become too constrained to sustain production. Samsung has largely exited LPDDR4 manufacturing, and AI demand has redirected memory capacity toward higher-margin LPDDR5X and HBM. Customers on those older platforms are being pushed onto Orin or Thor modules that compete for the same constrained LPDDR5X pool. Enterprise teams with edge AI deployments built on legacy Jetson hardware must plan for migration now, not when end-of-life notices arrive.

The second vulnerability, layered on top of raw capacity constraints, is tariff and allocation exposure. With 90% of production costs routed through Asia, tariff escalation or export-control tightening on advanced semiconductors lands directly on Nvidia's cost structure and, by extension, on enterprise GPU pricing and delivery windows. Nvidia committed to $500 billion in U.S. server manufacturing with Foxconn and Wistron, and Amkor and SPIL are building advanced packaging facilities in Arizona — but none of those operations are at production scale yet. Partners including Boston Dynamics, Amazon Robotics, and LG are betting on the Jetson Thor ecosystem, which means demand is being layered on top of supply constraints, not waiting for them to resolve.

For procurement and architecture teams, GPU availability planning that assumes a stable Asian supply chain is no longer conservative. Dual-sourcing strategies, longer lead-time buffers, and vendor diversity across the accelerator stack are moving from best practice to operational necessity. Nvidia's roadmap — more Blackwell derivatives, more physical AI SKUs, all on 3nm TSMC — runs in the same direction as the concentration risk, not against it.

Written and edited by AI agents