JEDEC launches its HBM4 memory standard
HBM4 has arrived, and it will deliver a huge bandwidth boost to future AI accelerators
It’s official: JEDEC has launched its HBM4 memory standard, delivering higher bandwidth and improved power efficiency for next-generation hardware.
With this standard, JEDEC promises HBM users more bandwidth, larger memory capacities, and greater power efficiency. HBM4 doubles the interface width to 2048 bits and supports per-pin speeds of up to 8 Gb/s, giving each stack a peak bandwidth of 2 TB/s. That’s everything the AI market needs to push performance to the next level.
To ease the transition from HBM3, chips can be designed with memory controllers that support both HBM3 and HBM4. This lets chipmakers launch products with HBM3 memory first and then release upgraded versions once next-generation memory becomes available. I would be very surprised if the next generation of AI accelerators didn’t support both memory types.
One key upgrade in the fourth-generation HBM standard is its support for 24Gb and 32Gb DRAM dies in 4-high, 8-high, 12-high, and 16-high TSV stacks. At the top end, sixteen 32Gb dies yield a 512Gb (64GB) stack, as the quick calculation below shows. This paves the way towards AI accelerators and other HBM devices with enormous memory capacities, which is perfect for users working with large data sets. Even so, not every user will need that much memory.
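As a sanity check on those capacity figures, here’s a minimal Python sketch of the arithmetic: capacity per stack is simply die density multiplied by stack height. The densities and stack heights are the JEDEC figures quoted above; the script itself is purely illustrative.

```python
DIE_DENSITIES_GBIT = [24, 32]    # per-layer DRAM die densities (gigabits)
STACK_HEIGHTS = [4, 8, 12, 16]   # supported TSV stack heights

for density in DIE_DENSITIES_GBIT:
    for height in STACK_HEIGHTS:
        total_gbit = density * height   # total stack capacity in gigabits
        total_gbyte = total_gbit // 8   # 8 bits per byte
        print(f"{height}-high stack of {density}Gb dies: "
              f"{total_gbit}Gb ({total_gbyte}GB)")

# The largest configuration, 16 x 32Gb, gives the 512Gb (64GB)
# stack mentioned above.
```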
(HBM memory is a vital part of today’s high-end AI accelerators)
Currently, HBM memory is commonplace on high-end accelerators for AI and GPU compute. Sadly, no current-generation consumer GPUs utilise HBM; for now, GDDR memory is more cost-effective. However, those economics could change as chip packaging technologies advance. HBM offers users insane levels of bandwidth, as the rough numbers below illustrate, and it would be exciting to see that level of performance on consumer-grade products.
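To put that bandwidth in perspective, here’s a short sketch of the peak-bandwidth arithmetic (interface width multiplied by per-pin data rate). The HBM4 figures are the 2048-bit, 8 Gb/s numbers quoted above; the GDDR7 comparison point, a 256-bit bus at 32 Gb/s, is an illustrative assumption rather than anything taken from the standard.

```python
def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width x per-pin rate, divided by 8 bits/byte."""
    return bus_width_bits * pin_rate_gbps / 8

# One HBM4 stack: 2048-bit interface at 8 Gb/s per pin -> 2048 GB/s (2 TB/s).
print(f"HBM4 stack:        {peak_bandwidth_gbs(2048, 8.0):.0f} GB/s")

# Hypothetical GDDR7 setup: 256-bit bus at 32 Gb/s -> 1024 GB/s.
print(f"256-bit GDDR7 bus: {peak_bandwidth_gbs(256, 32.0):.0f} GB/s")
```

Even a single HBM4 stack outpaces a wide GDDR7 bus on paper, and high-end accelerators typically carry several stacks.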
You can join the discussion on JEDEC releasing their HBM4 memory standard on the OC3D Forums.