AI boom and the politics of HBM memory chips



The high-bandwidth memory (HBM) landscape, steadily growing in importance for its critical pairing with artificial intelligence (AI) processors, is ready to move to its next iteration, HBM3e, which increases the data transfer rate and peak memory bandwidth by 44%. Here, SK hynix, which launched the first HBM chip in 2013, is also the first to complete HBM3e validation for Nvidia’s H200 AI hardware.
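That 44% uplift falls straight out of the per-pin data rates: HBM3 tops out at 6.4 Gb/s per pin across a stack’s 1,024-bit interface, while first-generation HBM3e parts run at 9.2 Gb/s. Here is a minimal sketch of the arithmetic (the bus width and pin rates are published JEDEC/vendor figures, not numbers from this article):

BUS_WIDTH_BITS = 1024  # interface width of a single HBM stack (JEDEC)

def peak_bandwidth_gb_s(pin_rate_gbps: float) -> float:
    # Peak bandwidth (GB/s) = bus width (bits) x per-pin rate (Gb/s) / 8 bits per byte
    return BUS_WIDTH_BITS * pin_rate_gbps / 8

hbm3 = peak_bandwidth_gb_s(6.4)   # ~819 GB/s
hbm3e = peak_bandwidth_gb_s(9.2)  # ~1,178 GB/s
print(f"HBM3: {hbm3:.0f} GB/s, HBM3e: {hbm3e:.0f} GB/s")
print(f"Uplift: {hbm3e / hbm3 - 1:.0%}")  # -> 44%

By the same formula, the 1,280-GB/s single-stack speed SK hynix quotes for its 16-layer part implies a 10-Gb/s per-pin rate.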

HBM is a high-performance memory that stacks chips on top of one another and connects them with through-silicon vias (TSVs) for faster and more energy-efficient data processing. The demand for HBM memory chips has boomed with the growing popularity of generative AI. However, it’s currently facing a supply bottleneck caused by both packaging constraints and the inherently long production cycle of HBM.

Figure 1 SK hynix aims to maintain its lead by releasing an HBM3e device with 16 layers of DRAM and a single-stack speed of up to 1,280 GB/s.

According to TrendForce, 2024 will mark the transition from HBM3 to HBM3e, and SK hynix is leading the pack with HBM3e validation in the first quarter of this year. It’s worth mentioning that SK hynix is currently the primary supplier of HBM3 memory chips for Nvidia’s H100 AI solutions.

Samsung, now fighting back to make up for lost ground, has received certification for AMD’s MI300 series AI accelerators. That’s a significant breakthrough for the Suwon, South Korea-based memory supplier, as shipments of AMD’s AI accelerators are expected to scale up later this year.

Micron, which largely missed the initial HBM opportunity, is also catching up by launching the next iteration, HBM3e, for Nvidia’s H200 GPUs by the end of the first quarter of 2024. Nvidia’s H200 GPUs will start shipping in the second quarter of 2024.

Figure 2 The 8H HBM3e memory offering 24 GB will be part of Nvidia’s H200 Tensor Core GPUs, which will begin shipping in the second quarter of 2024. Source: Micron

When it comes to HBM technology, SK hynix has stayed ahead of its two mega competitors, Micron and Samsung, since 2013, when it introduced HBM memory in partnership with AMD. It took Samsung two years to challenge its South Korean neighbor with an HBM2 device developed in late 2015.

But the rivalry between SK hynix and Samsung is about more than a first-mover advantage. While Samsung chose the conventional non-conductive film (NCF) technology for producing HBM chips, SK hynix switched to the mass reflow molded underfill (MR-MUF) method to address NCF’s limitations. According to a Reuters report, SK hynix has secured yield rates of about 60-70% for its HBM3 production, while Samsung’s HBM3 yields stand at roughly 10-20%.
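Taking the midpoints of those reported ranges shows how quickly the gap compounds at the wafer level. A rough, illustrative comparison; the per-wafer stack count below is a hypothetical placeholder, not a reported figure:

STACKS_PER_WAFER = 100  # hypothetical placeholder; actual per-wafer counts aren't public

sk_hynix_yield = (0.60 + 0.70) / 2  # midpoint of the reported 60-70% range
samsung_yield = (0.10 + 0.20) / 2   # midpoint of the reported 10-20% range

sk_good = STACKS_PER_WAFER * sk_hynix_yield  # 65 sellable stacks
ss_good = STACKS_PER_WAFER * samsung_yield   # 15 sellable stacks
print(f"Output ratio: {sk_good / ss_good:.1f}x")  # -> ~4.3x more good parts per wafer

In other words, at those yields SK hynix would get roughly four times as many sellable stacks out of every wafer, which goes a long way toward explaining its supply lead.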

The MR-MUF process involves injecting and then hardening a liquid material between the stacked silicon layers, which, in turn, improves heat dissipation and production yields. Here, SK hynix teamed up with the Japanese materials engineering firm Namics while sourcing MUF materials from Nagase. SK hynix adopted the mass reflow molded underfill technique ahead of its rivals and subsequently became the first vendor to supply HBM3 chips to Nvidia.

Recent trade media reports suggest Samsung is in contact with MUF material suppliers, though the memory supplier has vowed to stick with its NCF technology for the upcoming HBM3e chips. However, industry observers point out that Samsung’s MUF capability will not be ready until 2025 anyway. So, it’s likely that Samsung will use both NCF and MUF techniques to manufacture the latest HBM3e chips.

Both Micron and Samsung are making strides to narrow the gap with SK hynix as the industry moves from HBM3 to HBM3e memory chips. Samsung, for instance, has announced that it has developed an HBM3e device with 12 layers of DRAM chips, and it boasts the industry’s largest capacity of 36 GB.

Figure 3 The HBM3E 12H delivers a bandwidth of up to 1,280 GB/s and a storage capacity of 36 GB. Source: Samsung

Likewise, Idaho-based Micron claims to have started volume production of its 8-layer HBM3e device offering 24-GB capacity. As mentioned earlier, it’ll be part of Nvidia’s H200 Tensor Core GPUs shipping in the second quarter. Still, SK hynix seems to be ahead of the pack when it comes to the most sought-after AI memory: HBM.
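All of the capacity points quoted above follow one simple relation: stack capacity equals the number of DRAM layers times the per-die density. A minimal sketch, assuming the 24-Gb (3-GB) dies that current HBM3e stacks are generally built from; by this arithmetic, SK hynix’s 16-layer device would land at 48 GB:

DIE_DENSITY_GB = 24 / 8  # a 24-Gb DRAM die holds 3 GB (assumed for current HBM3e)

for layers, part in [(8, "Micron 8H"), (12, "Samsung HBM3E 12H"), (16, "SK hynix 16-layer")]:
    print(f"{part}: {layers} x {DIE_DENSITY_GB:.0f} GB = {layers * DIE_DENSITY_GB:.0f} GB")
# -> 24 GB, 36 GB, and 48 GB, respectively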

SK hynix made all the right moves at the right time and won Nvidia as a customer in late 2019 by pairing its HBM chips with AI accelerators. No wonder engineers at SK hynix now jokingly call HBM “Hynix’s Best Memory”.

