
Friday, January 28, 2022

JEDEC Publishes HBM3 High Bandwidth Memory Standard: Up To 6.4 Gb/s Data Rate, 819 GB/s Bandwidth, 16-Hi Stacks & 64 GB Capacities Per Stack


JEDEC has just published the HBM3 High Bandwidth Memory standard, which offers a massive uplift over the existing HBM2 and HBM2e standards.

JEDEC HBM3 Published: Up To 819 GB/s Bandwidth, Double The Channels, 16-Hi Stacks With Up To 64 GB Capacities Per Stack

Press Release: JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of the next version of its High Bandwidth Memory (HBM) DRAM standard: JESD238 HBM3, available for download from the JEDEC website. HBM3 is an innovative approach to raising the data processing rate in applications where higher bandwidth, lower power consumption, and capacity per area are essential to a solution’s market success, including graphics processing, high-performance computing, and servers.

Key attributes of the new HBM3 include:

  • Extending the proven architecture of HBM2 towards even higher bandwidth, doubling the per-pin data rate of the HBM2 generation and defining data rates of up to 6.4 Gb/s, equivalent to 819 GB/s per device (see the bandwidth and capacity arithmetic sketched after this list)
  • Doubling the number of independent channels from 8 (HBM2) to 16; with two pseudo channels per channel, HBM3 virtually supports 32 channels
  • Supporting 4-high, 8-high, and 12-high TSV stacks with provision for a future extension to a 16-high TSV stack
  • Enabling a wide range of densities based on 8 Gb to 32 Gb per memory layer, spanning device densities from 4 GB (8 Gb 4-high) to 64 GB (32 Gb 16-high); first-generation HBM3 devices are expected to be based on a 16 Gb memory layer
  • Addressing the market need for high platform-level RAS (reliability, availability, serviceability), HBM3 introduces strong, symbol-based ECC on-die, as well as real-time error reporting and transparency
  • Improving energy efficiency through low-swing (0.4 V) signaling on the host interface and a lower (1.1 V) operating voltage
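
The headline bandwidth and capacity figures above follow from simple arithmetic. Here is a minimal sketch in Python (not part of the standard or the press release), assuming the 1024-bit per-stack interface that HBM generations to date have used, which in HBM3 corresponds to 16 channels of 64 bits each:

```python
# Back-of-the-envelope check of the HBM3 headline figures.
# Assumption (not stated explicitly above): a 1024-bit interface per stack,
# i.e. 16 independent channels x 64 bits each.

PIN_RATE_GBPS = 6.4          # per-pin data rate defined by JESD238, in Gb/s
INTERFACE_WIDTH_BITS = 1024  # assumed per-stack interface width

# Peak per-stack bandwidth: pin rate x interface width, converted to GB/s.
bandwidth_gbs = PIN_RATE_GBPS * INTERFACE_WIDTH_BITS / 8
print(f"Per-stack bandwidth: {bandwidth_gbs:.1f} GB/s")  # 819.2 GB/s

# Stack capacity: per-layer density (Gb) x stack height, converted to GB.
min_capacity_gb = 8 * 4 / 8    # 8 Gb layers, 4-high   -> 4 GB
max_capacity_gb = 32 * 16 / 8  # 32 Gb layers, 16-high -> 64 GB
print(f"Capacity range: {min_capacity_gb:.0f} GB to {max_capacity_gb:.0f} GB per stack")
```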

“With its enhanced performance and reliability attributes, HBM3 will enable new applications requiring tremendous memory bandwidth and capacity,” said Barry Wagner, Director of Technical Marketing at NVIDIA and JEDEC HBM Subcommittee Chair.

Industry Support

“HBM3 will enable the industry to reach even higher performance thresholds with improved reliability and lower energy consumption,” said Mark Montierth, vice president and general manager of High-Performance Memory and Networking at Micron. “In collaborating with JEDEC members to develop this specification, we leveraged Micron’s long history of delivering advanced memory stacking and packaging solutions to optimize market-leading computing platforms.”

“With continued advancements in HPC and AI applications, demands for higher performance and improved power efficiency have been growing more than ever before. With the current release of the HBM3 JEDEC standard, SK Hynix is pleased to be able to provide our customers with a memory that has the highest bandwidth and the best power efficiency available today, with added robustness through the adoption of an enhanced ECC scheme. SK Hynix is proud to be part of JEDEC and is excited to continue building a strong HBM ecosystem together with our industry partners, and to provide both ESG and TCO value to our customers,” said Uksong Kang, Vice President of DRAM Product Planning at SK Hynix.

“Synopsys has been an active contributor to JEDEC for more than a decade, helping to drive the development and adoption of the most advanced memory interfaces such as HBM3, DDR5, and LPDDR5 for a range of emerging applications,” said John Koeter, Senior Vice President of Marketing and Strategy for IP at Synopsys. “The Synopsys HBM3 IP and verification solutions, already adopted by leading customers, accelerate the integration of this new interface into high-performance SoCs and enable the development of multi-die system-in-package designs with maximum memory bandwidth and power efficiency.”

GPU Memory Technology Updates

| Graphics Card Name | Memory Technology | Memory Speed | Memory Bus | Memory Bandwidth | Release |
| --- | --- | --- | --- | --- | --- |
| AMD Radeon R9 Fury X | HBM1 | 1.0 Gbps | 4096-bit | 512 GB/s | 2015 |
| NVIDIA GTX 1080 | GDDR5X | 10.0 Gbps | 256-bit | 320 GB/s | 2016 |
| NVIDIA Tesla P100 | HBM2 | 1.4 Gbps | 4096-bit | 720 GB/s | 2016 |
| NVIDIA Titan Xp | GDDR5X | 11.4 Gbps | 384-bit | 547 GB/s | 2017 |
| AMD RX Vega 64 | HBM2 | 1.9 Gbps | 2048-bit | 483 GB/s | 2017 |
| NVIDIA Titan V | HBM2 | 1.7 Gbps | 3072-bit | 652 GB/s | 2017 |
| NVIDIA Tesla V100 | HBM2 | 1.7 Gbps | 4096-bit | 901 GB/s | 2017 |
| NVIDIA RTX 2080 Ti | GDDR6 | 14.0 Gbps | 384-bit | 672 GB/s | 2018 |
| AMD Instinct MI100 | HBM2 | 2.4 Gbps | 4096-bit | 1229 GB/s | 2020 |
| NVIDIA A100 80 GB | HBM2e | 3.2 Gbps | 5120-bit | 2039 GB/s | 2020 |
| NVIDIA RTX 3090 | GDDR6X | 19.5 Gbps | 384-bit | 936.2 GB/s | 2020 |
| AMD Instinct MI200 | HBM2e | 3.2 Gbps | 8192-bit | 3200 GB/s | 2021 |
| NVIDIA RTX 3090 Ti | GDDR6X | 21.0 Gbps | 384-bit | 1008 GB/s | 2022 |
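
The bandwidth column in this table follows the same relation as above: per-pin speed times bus width, divided by eight bits per byte. A short Python sketch using a few rows copied from the table illustrates the calculation; because the listed per-pin speeds are rounded, some computed values land close to, rather than exactly on, the quoted figures:

```python
# Peak memory bandwidth = per-pin speed (Gb/s) x bus width (bits) / 8.

def peak_bandwidth_gbs(speed_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return speed_gbps * bus_width_bits / 8

# (name, speed in Gbps, bus width in bits) taken from the table above.
cards = [
    ("AMD Radeon R9 Fury X", 1.0, 4096),   # -> 512 GB/s
    ("NVIDIA Tesla P100",    1.4, 4096),   # -> ~717 GB/s (listed: 720 GB/s)
    ("NVIDIA A100 80 GB",    3.2, 5120),   # -> 2048 GB/s (listed: 2039 GB/s)
    ("NVIDIA RTX 3090 Ti",  21.0,  384),   # -> 1008 GB/s
]

for name, speed, width in cards:
    print(f"{name}: {peak_bandwidth_gbs(speed, width):.0f} GB/s")
```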

The post JEDEC Publishes HBM3 High Bandwidth Memory Standard: Up To 6.4 Gb/s Data Rate, 819 GB/s Bandwidth, 16-Hi Stacks & 64 GB Capacities Per Stack by Hassan Mujtaba appeared first on Wccftech.
