SK Hynix Develops HBM3E For Blistering AI Memory Bandwidth

SK Hynix has unveiled its latest DRAM innovation, HBM3E, a product designed specifically for AI applications. HBM (High Bandwidth Memory) is a stack of DRAM chips arranged vertically, which dramatically increases data throughput. HBM DRAM has evolved from the first-generation HBM to the most recent fifth generation, HBM3E, an enhancement of its predecessor, HBM3. HBM3E delivers throughput of up to 1.15 terabytes (TB) per second. Translated into real-world workloads, that's fast enough to process more than 230 Full HD 1080p movies (5GB each) in a single second, which is kind of mind-boggling when put into that perspective. In addition, HBM3E offers 10% better heat dissipation thanks to the advanced Mass Reflow Molded Underfill (MR-MUF) technology. The new memory is also backwards compatible, so it can be used in existing accelerators built around HBM3.
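The "230 movies per second" figure follows directly from the quoted bandwidth. A quick back-of-the-envelope sketch, assuming decimal units (1 TB = 1000 GB) as marketing figures typically use:

```python
# Sanity-check the claim: 1.15 TB/s divided by 5 GB per movie.
bandwidth_tb_per_s = 1.15   # HBM3E peak throughput, in TB/s
movie_size_gb = 5           # one Full HD 1080p movie, in GB

movies_per_second = bandwidth_tb_per_s * 1000 / movie_size_gb
print(round(movies_per_second))  # 230
```

If binary units (1 TiB = 1024 GiB) were intended instead, the count would come out slightly higher, but the order of magnitude is the same either way.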
