Greater China data center semi

GH200/GB200 a game changer?

We expect GH200/GB200 to disrupt AI hardware, ease the
budget crowding-out impact, and benefit BMC into 2025

GH200/GB200 to be disruptive for the AI hardware supply chain; BMC to benefit
Following our architecture analyses of NVIDIA's (NVDA US, Not rated) HGX H100 in June
(report) and L40S in September (report), we now take a deep dive into NVIDIA's GH200
architecture in this report. While HGX H100 (x86 CPU + GPU) is the current
mainstream AI system architecture, and the H200 GPU was announced at SC23 in Nov-23,
our survey suggests that NVIDIA's long-term plan is to make this (Arm CPU + GPU)
architecture mainstream from 2025. DGX GH200 is a large system (comprising 256
GH200 units) for supercomputers, so volumes will not be large.
However, in 1H24 NVIDIA will redesign the structure with its GH200 Oberon platform to
further modularize it (to lower cost and accelerate time-to-market) and make the system
flexible (suitable for both inference and training; Fig. 44). We think the GH200 Oberon
platform is intended to familiarize the hardware supply chain with this structure
(different from HGX H100), so that NVIDIA can push hard for its next-gen GB200 Oberon
platform in 2025 (roadmaps in Fig. 1 - Fig. 2). Such an initiative, if successful, would
be disruptive for the AI hardware supply chain, in our view. In upstream semis, we
expect BMC (baseboard management controller) suppliers, e.g., ASPEED (5274 TT,
Buy), to benefit.
