Samsung just announced HBM3E 12H DRAM with advanced TC NCF technology – fans of acronyms must be thrilled reading that, but for everyone else here is what it means. HBM stands for "high bandwidth memory" and it does what it says on the tin.
In October Samsung unveiled HBM3E Shinebolt, an enhanced version of the third generation of HBM that can achieve 9.8Gbps per pin (and 1.2 terabytes per second for the whole package).
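As a quick sanity check, the per-pin and per-package numbers line up if you assume the standard 1,024-bit HBM data interface (the pin count is our assumption, not something spelled out here):

```python
# Back-of-the-envelope check of the Shinebolt figures.
# Assumes the standard 1,024-bit (1,024 data pin) HBM interface.
PINS_PER_STACK = 1024   # data pins per HBM stack (assumed, per the HBM standard)
GBPS_PER_PIN = 9.8      # per-pin speed quoted for Shinebolt

total_gbps = PINS_PER_STACK * GBPS_PER_PIN  # gigabits per second
total_gb_per_s = total_gbps / 8             # gigabytes per second

print(f"{total_gb_per_s:.0f} GB/s")  # ~1254 GB/s, i.e. roughly 1.2 TB/s per package
```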
Next up, 12H. This is simply the number of chips stacked vertically in each module, 12 in this case. Stacking higher is a way to fit more memory into a module, and Samsung has reached 36GB with its 12H design, 50% more than an 8H design. Bandwidth remains at 1.2 terabytes per second, however.
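The capacity math works out if each DRAM die holds 3GB (24Gb), which is what the 36GB total implies:

```python
# Capacity math: same 3GB (24Gb) dies, just more of them per stack.
GB_PER_DIE = 3  # implied by 36GB / 12 dies

cap_8h = 8 * GB_PER_DIE    # 24GB
cap_12h = 12 * GB_PER_DIE  # 36GB
gain = (cap_12h - cap_8h) / cap_8h

print(cap_8h, cap_12h, f"{gain:.0%}")  # 24 36 50%
```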
Finally, TC NCF. This stands for Thermal Compression Non-Conductive Film, i.e. the material layered in between the stacked chips. Samsung has been working on making it thinner, down to 7µm now, so the 12H stack is about the same height as an 8H stack, allowing the same HBM packaging to be used.
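To see why the film thickness matters, here is a toy height model. The die thicknesses below are hypothetical illustrative values (in practice the dies are thinned as well, and the real figures aren't given here); only the 7µm NCF number comes from Samsung:

```python
# Toy model: stack height = dies plus the NCF layers between them.
# Die thicknesses are hypothetical, chosen only to illustrate the idea.
def stack_height_um(n_dies: int, die_um: float, ncf_um: float) -> float:
    """Approximate stack height: n dies plus (n - 1) NCF layers."""
    return n_dies * die_um + (n_dies - 1) * ncf_um

print(stack_height_um(8, die_um=50, ncf_um=10))  # 8H with thicker film: 470µm
print(stack_height_um(12, die_um=33, ncf_um=7))  # 12H with 7µm film:   473µm
```

With a thinner film (and thinner dies), 12 layers can land at roughly the same overall height as 8, which is what lets Samsung reuse the existing packaging.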
An added benefit of TC NCF is improved thermal properties, which help with cooling. Even better, the method used in this new HBM3E 12H DRAM also improves yields.
What will this memory be used for? As if you need to ask – AI is all the hype these days. To be fair, it is an application that requires a lot of RAM. Last year Nvidia added Samsung to its list of suppliers of high-bandwidth memory, and the company builds some insane designs.
The Nvidia H200 Tensor Core GPU has 141GB of HBM3E that runs at a total of 4.8 terabytes per second. That is well beyond what you see on a consumer GPU with GDDR. For example, the RTX 4090 has 24GB of GDDR6X that runs at just 1 terabyte per second.
Anyway, based on reports, the H200 uses six 24GB HBM3E 8H modules from Micron (144GB in total, but only 141GB usable). The same capacity could be achieved with only four 12H modules; alternatively, 216GB could be achieved with six 12H modules.
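The module arithmetic in that paragraph is simply per-stack capacity times stack count:

```python
# Capacity options: stacks on the package times capacity per stack.
GB_8H, GB_12H = 24, 36

print(6 * GB_8H)   # 144GB – today's H200 (141GB of it usable)
print(4 * GB_12H)  # 144GB – the same total from just four 12H stacks
print(6 * GB_12H)  # 216GB – if all six sites get 12H stacks
```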
According to Samsung's estimates, the extra capacity of its new 12H design will speed up AI training by 34% and will allow inference services to handle "more than 11.5 times" the number of users.
The AI boom will keep accelerators like the H200 in high demand, so being the memory supplier is a lucrative business – you can see why companies like Micron, Samsung and SK Hynix want a piece of the pie.