Supercharged speeds – Micron ships HBM4 memory to “key customers”

Micron’s new HBM4 memory provides a 60% speed boost over its last-generation memory

Micron has confirmed that it has started shipping its next-generation HBM4 memory to “key customers” ahead of its production ramp in 2026. With its new 12-high 36GB HBM4 modules, Micron has delivered 60% more performance than its last-generation HBM3E memory modules. This uplift will fuel the next generation of AI hardware, delivering both higher performance and better power efficiency.

With 2.0 TB/s of memory bandwidth per stack, a single HBM4 module can deliver more memory bandwidth than an entire RTX 5090 graphics card. Nvidia’s GeForce RTX 5090 uses sixteen GDDR7 memory chips across a 512-bit bus to deliver 1,792 GB/s of memory bandwidth. With HBM4, Micron can deliver more bandwidth and higher memory capacity from a single stack.
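For readers who want to sanity-check that comparison, the sketch below shows the arithmetic: peak bandwidth is bus width multiplied by per-pin data rate, divided by eight. The RTX 5090 figures (512-bit bus, 28 Gbps GDDR7) and the 2048-bit HBM4 interface width are drawn from public specifications rather than from Micron’s announcement, so treat this as a rough illustration.

```python
# Rough bandwidth arithmetic behind the comparison above.
# GDDR7 figures are for the RTX 5090 (512-bit bus, 28 Gbps per pin);
# the HBM4 figure is Micron's quoted ~2.0 TB/s per stack.

def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) * per-pin rate (Gbps) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

rtx_5090 = peak_bandwidth_gb_s(512, 28)          # sixteen 32-bit GDDR7 chips
print(f"RTX 5090 GDDR7: {rtx_5090:.0f} GB/s")    # -> 1792 GB/s

hbm4_per_stack = 2000                            # Micron: ~2.0 TB/s per stack
print(f"HBM4 (one stack): {hbm4_per_stack} GB/s")

# Implied per-pin rate for a 2048-bit HBM4 interface (JEDEC interface width):
print(f"Implied HBM4 pin rate: {hbm4_per_stack * 8 / 2048:.1f} Gbps")
```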

With HBM4, Micron also boasts a 20% improvement in power efficiency compared to its HBM3E products. This is great news for data centres, as increased performance per watt translates into lower power and cooling costs.
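As a back-of-the-envelope illustration of why that matters at data centre scale, the sketch below works out the relative power draw if a stack delivers 60% more bandwidth while its bandwidth-per-watt improves by 20%. Interpreting Micron’s 20% figure as a bandwidth-per-watt gain is an assumption here, not the company’s published methodology.

```python
# Back-of-the-envelope: relative power draw when bandwidth rises 60% while
# efficiency (bandwidth per watt) improves 20%. The 20% interpretation is
# an assumption for illustration only.
bandwidth_gain = 1.60      # HBM4 vs HBM3E bandwidth
efficiency_gain = 1.20     # assumed bandwidth-per-watt improvement

relative_power = bandwidth_gain / efficiency_gain
print(f"Relative power per stack: {relative_power:.2f}x")  # ~1.33x for 1.6x bandwidth
```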

The increased data throughput of HBM4 memory will help accelerate the inference performance of new AI accelerators. With a 60% boost in bandwidth, HBM4 promises a major leap in performance for the hardware that adopts it. Unfortunately, HBM is used almost exclusively in enterprise and data centre products, which means consumer-grade GPUs are unlikely to benefit from this technology anytime soon.

You can join the discussion on Micron’s super-fast HBM4 memory shipping to customers on the OC3D Forums.

Mark Campbell

A Northern Irish father, husband, and techie that works to turn tea and coffee into articles when he isn’t painting his extensive minis collection or using things to make other things.
