Nvidia teases a huge performance leap for their upcoming Blackwell AI accelerators
Nvidia confirms that their next-generation B100 Blackwell AI processor is launching in 2024
During the company’s SC23 Special Address, Nvidia teased the performance of their upcoming B100 Blackwell GPU. This GPU is focused on AI and compute performance, and it is due to be launched in 2024.
In the graph below, Nvidia revealed that their B100 Blackwell accelerator will give customers a huge leap in AI performance over its predecessor. The test used a GPT-3 model with 175 billion parameters, showcasing the capabilities of Nvidia’s next-generation AI hardware with large models.
Nvidia has revealed nothing concrete about the computational capabilities of their next-generation Blackwell B100 AI accelerator. However, the graph below suggests that it will feature larger memory capacity and higher memory bandwidth. A large performance boost is expected from Nvidia’s next-generation accelerators, though the result shown is probably from a memory-limited workload rather than a compute-limited one.
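To illustrate that distinction, here is a rough back-of-envelope sketch, not Nvidia’s methodology. The parameter count, the FP8 weight assumption, and the approximate H100 throughput and bandwidth figures are illustrative assumptions, but they show why low-batch inference on a 175-billion-parameter model tends to be limited by memory bandwidth rather than raw compute:

```python
# Back-of-envelope roofline check: is low-batch GPT-3-class inference
# memory-bound or compute-bound on an H100-class accelerator?
# All figures are approximate public numbers used purely for
# illustration; they are assumptions, not Nvidia's test setup.

PARAMS = 175e9           # GPT-3 parameter count
BYTES_PER_PARAM = 1      # assuming FP8 weights
PEAK_FLOPS = 1979e12     # approx. H100 SXM dense FP8 throughput (FLOP/s)
PEAK_BW = 3.35e12        # approx. H100 SXM HBM3 bandwidth (bytes/s)

def per_token_times(batch_size: int) -> tuple[float, float]:
    """Return (compute_time, memory_time) in seconds for one decode step."""
    flops = 2 * PARAMS * batch_size          # ~2 FLOPs per parameter per token
    bytes_moved = PARAMS * BYTES_PER_PARAM   # weights are re-read every step
    return flops / PEAK_FLOPS, bytes_moved / PEAK_BW

for batch in (1, 32, 256, 1024):
    t_compute, t_memory = per_token_times(batch)
    bound = "memory-bound" if t_memory > t_compute else "compute-bound"
    print(f"batch {batch:5d}: compute {t_compute*1e3:6.2f} ms, "
          f"memory {t_memory*1e3:6.2f} ms -> {bound}")
```

Under these assumed numbers, small batch sizes spend far longer streaming weights from HBM than doing the maths, which is why extra memory bandwidth and capacity can deliver outsized gains in this kind of test.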
While Nvidia did not specifically talk about Blackwell during their presentation, the company has clearly shown that stronger products are on the horizon. Rumour has it that Nvidia’s next generation of gaming GPUs, their RTX 50 series, will also be based on the company’s Blackwell GPU architecture.
One of the critical bottlenecks for AI processors is memory performance. Data access times and data transfer speeds are critical for AI workloads, and Nvidia are working hard to ensure that they remain on top of the latest memory advancements. Nvidia’s Hopper-based H200 accelerators are essentially H100 accelerators with faster HBM3E memory, and that change alone delivers a substantial performance increase. With their next-generation accelerators, we can expect Nvidia to utilise more and faster memory to further accelerate workflows.
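As a rough illustration of how far faster memory alone can go in a bandwidth-limited regime, the sketch below compares per-token weight-streaming time using approximate, publicly quoted bandwidth figures for the H100 and H200. These numbers are assumptions for illustration, not Nvidia’s benchmark data:

```python
# In a memory-bound regime, per-token time is roughly weight bytes
# divided by memory bandwidth, so speedup tracks the bandwidth ratio.
# Bandwidth figures are approximate public specs, assumed for this sketch.

WEIGHT_BYTES = 175e9 * 1       # 175B parameters at 1 byte/param (FP8), illustrative

accelerators = {
    "H100 (HBM3, ~3.35 TB/s)": 3.35e12,
    "H200 (HBM3E, ~4.8 TB/s)": 4.8e12,
}

baseline = None
for name, bandwidth in accelerators.items():
    ms_per_token = WEIGHT_BYTES / bandwidth * 1e3
    baseline = baseline or ms_per_token
    print(f"{name}: ~{ms_per_token:.1f} ms/token, "
          f"~{baseline / ms_per_token:.2f}x vs H100")
```

By this simple estimate, the H200’s faster HBM3E alone is worth roughly a 1.4x uplift in memory-bound work before any architectural changes are considered.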
You can join the discussion on Nvidia’s Blackwell AI performance teaser on the OC3D Forums.