On a recent Arete investor webinar call, AMD Chief Technology Officer Mark Papermaster hinted that the MI300 accelerator will support HBM3E memory.
According to Tom’s Hardware, AMD uses 24GB HBM3 stacks (12-high, built from 16Gb dies) on the Instinct MI300X accelerator, and 16GB HBM3 stacks (8-high, also from 16Gb dies) on the Instinct MI300A data center APU.
Mark Papermaster’s related remarks are as follows:
Therefore, we have built an architecture for the future: we now support 8-high stacked HBM, and we have also designed for 12-high stacked HBM. Our MI300 ships with HBM3 memory, and we have also architected for HBM3E.
So we understand memory: we have strong relationships and architectural expertise, which gives us real mastery of the required skills.
In terms of delivery and supply chain, we have a deep history of cooperation not only with memory suppliers, but also with TSMC and other substrate suppliers, as well as the OSAT community.
The Instinct MI300 series uses 8 HBM stacks, for a total memory capacity of 192GB on the MI300X and 128GB on the MI300A. Compared to HBM3, HBM3E further improves speed: taking SK Hynix products as an example, its HBM3 reaches a per-pin transfer rate of 6.4Gbps, while HBM3E raises this to 9.2Gbps. Switching to HBM3E can also increase total memory capacity: Samsung and SK Hynix have both announced single HBM3E stacks with a capacity of 36GB.
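The capacity and bandwidth figures above can be checked with back-of-the-envelope arithmetic. The sketch below uses only the numbers from the article (16Gb dies, 12-high and 8-high stacks, 8 stacks per package, 6.4Gbps vs 9.2Gbps per pin) plus the standard 1024-bit HBM interface per stack; these are theoretical peaks, and shipping parts may clock memory below these rates.

```python
# Back-of-the-envelope HBM math for an 8-stack MI300-style package.
# Assumes the standard 1024-bit data bus per HBM stack.

STACKS = 8
BUS_BITS_PER_STACK = 1024

def capacity_gb(die_gbit: int, dies_per_stack: int, stacks: int = STACKS) -> int:
    """Total capacity in GB across all stacks (8 bits per byte)."""
    return die_gbit * dies_per_stack * stacks // 8

def bandwidth_gbs(pin_rate_gbps: float, stacks: int = STACKS) -> float:
    """Peak aggregate bandwidth in GB/s across all stacks."""
    return pin_rate_gbps * BUS_BITS_PER_STACK * stacks / 8

print(capacity_gb(16, 12))   # MI300X (12-high, 16Gb dies): 192 GB
print(capacity_gb(16, 8))    # MI300A (8-high, 16Gb dies):  128 GB
print(bandwidth_gbs(6.4))    # HBM3 at 6.4Gbps/pin:  6553.6 GB/s peak
print(bandwidth_gbs(9.2))    # HBM3E at 9.2Gbps/pin: 9420.8 GB/s peak
```

The same arithmetic shows where the rumored 36GB HBM3E stacks come from: moving to denser dies and/or taller stacks raises per-stack capacity without changing the 8-stack package layout.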
Recently, the source @Kepler-L2 claimed that AMD will launch a revised version of the MI300 using HBM3E to undercut its competitor, NVIDIA's B100, on price. Mark Papermaster's statement lends indirect credibility to this rumor. The redesigned MI300 accelerator is expected to give users a more cost-effective option while delivering a substantial performance upgrade.