Nvidia's B100 Blackwell graphics processor will more than double the performance of the newly announced H200 when it's released in 2024, based on a slide the company showed at SC23, according to Videocardz.
According to a chart comparing performance on the 175-billion-parameter GPT-3 large language model (LLM), the H100 performs 11 times better than the A100, and the H200 18 times better, with next year's B100 set to push performance even higher. The B100 will likely hit the market towards the end of 2024 as Nvidia seeks to double down on its position as the industry leader in graphics processors for AI workloads.
Nvidia to launch a new GPU every year
The architecture behind next year's B100 chip, which will be accompanied by a GB200 superchip likely released the following year, is named after David Harold Blackwell, an American mathematician who made major contributions to game theory and information theory.
Nvidia also claims the B100 will deliver a substantial increase in memory bandwidth, with the forthcoming Blackwell chips set to incorporate an improved version of the HBM3e technology used in the H200.
Because Micron, which supplies HBM3e memory for the H200, won't release the next generation of its high-bandwidth memory, HBM4, until 2025, Nvidia may seek an alternative supplier, likely Samsung, according to speculation by The Guru of 3D.
From 2025 onward, Nvidia will stick to an annual release cycle, launching the X100 and GX200 chips, although the name of that architecture isn't yet known.
Although annual releases sound promising, particularly in light of surging demand for hardware to train and run increasingly sophisticated AI models, not everybody is a fan.
Cerebras Systems CEO Andrew Feldman, for instance, said Nvidia was engaging in deceptive practices by pre-announcing products on a new annual release cycle, branding it a "predatory pre-announcement".