Rambus has announced that it has completed development of its advanced HBM3 memory subsystem, which can hit transfer speeds of up to 8.4 Gbps. The memory solution consists of a fully integrated PHY and digital controller.
Rambus Pushes High-Bandwidth Memory Forward, Announces Development of HBM3 Subsystem With Speeds of Up To 8.4 Gbps & Over 1 TB/s Bandwidth
Currently, HBM2E is the fastest memory option available, with the quickest implementations offering transfer rates of up to 3.6 Gbps. HBM3 aims to more than double that with an insane 8.4 Gbps transfer speed, which also translates into far higher bandwidth: a single HBM2E package tops out at 460 GB/s, while HBM3 will offer up to 1.075 TB/s, more than doubling the bandwidth.
Of course, there will also be more power-efficient variants of HBM3 memory in the works, such as a 5.2 Gbps I/O stack that would offer 665 GB/s of bandwidth. Another difference is that HBM3 will go as high as 16-Hi DRAM stacks on a single package and will be compatible with both 2.5D and 3D vertical stacking implementations.
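As a quick sanity check, the bandwidth figures above follow directly from HBM's 1024-bit per-stack interface: per-pin data rate times bus width, divided by 8 bits per byte. A minimal sketch (the function name is illustrative, not from Rambus):

```python
def hbm_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth in GB/s for a single HBM stack.

    Each HBM stack exposes a 1024-bit-wide interface, so peak bandwidth
    is simply per-pin data rate x bus width / 8 bits-per-byte.
    """
    return data_rate_gbps * bus_width_bits / 8

# HBM3 at 8.4 Gbps per pin -> ~1075 GB/s, i.e. the 1.075 TB/s figure
print(hbm_bandwidth_gbs(8.4))  # ~1075.2
# The lower-power 5.2 Gbps variant -> ~665 GB/s
print(hbm_bandwidth_gbs(5.2))  # ~665.6
```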
“The memory bandwidth requirements of AI/ML training are insatiable with leading-edge training models now surpassing billions of parameters,” said Soo Kyoum Kim, associate vice president, Memory Semiconductors at IDC. “The Rambus HBM3-ready memory subsystem raises the bar for performance enabling state-of-the-art AI/ML and HPC applications.”
Rambus achieves HBM3 operation of up to 8.4 Gbps leveraging over 30 years of high-speed signaling expertise, and a strong history of 2.5D memory system architecture design and enablement. In addition to the fully integrated HBM3-ready memory subsystem, Rambus provides its customers with interposer and package reference designs to speed their products to market.
“With the performance achieved by our HBM3-ready memory subsystem, designers can deliver the bandwidth needed by the most demanding designs,” said Matt Jones, general manager of Interface IP at Rambus. “Our fully-integrated PHY and digital controller solution builds on our broad installed base of HBM2 customer deployments and is backed by a full suite of support services to ensure first-time-right implementations for mission-critical AI/ML designs.”
via Rambus
Benefits of the Rambus HBM3-ready Memory Interface Subsystem:
- Supports up to 8.4 Gbps data rate delivering bandwidth of 1.075 Terabytes per second (TB/s)
- Reduces ASIC design complexity and speeds time to market with fully integrated PHY and digital controller
- Delivers full bandwidth performance across all data traffic scenarios
- Supports HBM3 RAS features
- Includes built-in hardware-level performance activity monitor
- Provides access to Rambus system and SI/PI experts helping ASIC designers to ensure maximum signal and power integrity for devices and systems
- Includes 2.5D package and interposer reference design as part of IP license
- Features LabStation development environment that enables quick system bring-up, characterization, and debug
- Enables the highest performance in applications including state-of-the-art AI/ML training and high-performance computing (HPC) systems
Moving on, in terms of capacity, we expect the first generation of HBM3 memory to be very similar to HBM2E, built from 16 Gb DRAM dies for a total of 16 GB in an 8-Hi stack. But we can expect increased memory densities with HBM3 once the specification is finalized by JEDEC. As for products, we can expect a range of them arriving next year, such as AMD's Instinct accelerators based on its next-gen CDNA architecture, NVIDIA's Hopper GPUs, and Intel's future HPC accelerators based on its next-gen Xe-HPC architecture.
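The capacity figures work the same way as the bandwidth math: die density in gigabits times stack height, divided by 8 to get gigabytes. A short sketch (the function name is illustrative; the 16-Hi figure assumes the stack heights mentioned above):

```python
def hbm_stack_capacity_gb(die_density_gbit: int = 16, stack_height: int = 8) -> float:
    """Total capacity in GB of one HBM stack.

    Capacity = per-die density (Gbit) x number of stacked dies / 8 bits-per-byte.
    """
    return die_density_gbit * stack_height / 8

# 8-Hi stack of 16 Gb dies -> 16 GB, matching first-gen HBM3 expectations
print(hbm_stack_capacity_gb(16, 8))   # 16.0
# A hypothetical 16-Hi stack of the same dies would double that
print(hbm_stack_capacity_gb(16, 16))  # 32.0
```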
The post Rambus Pushes HBM3 Memory To 8.4 Gbps, Delivering Over 1 TB/s Bandwidth Through a Single DRAM Stack by Hassan Mujtaba appeared first on Wccftech.
source https://wccftech.com/rambus-pushes-hbm3-memory-to-8-4-gbps-delivering-over-1-tb-s-bandwidth-through-a-single-dram-stack/