Rambus Preps for HBM3

By Gary Hilson

AI and machine learning designs are accelerating premium memory standard evolution.

Specifications for High Bandwidth Memory 3 (HBM3) have yet to be finalized, but that’s not preventing Rambus from laying the groundwork for its adoption, driven by the memory bandwidth requirements of AI and machine learning model training.

The silicon IP vendor has released its HBM3-ready memory interface consisting of a fully integrated physical layer (PHY) and digital memory controller, the latter drawing on intellectual property from its recent acquisition of Northwest Logic.

The subsystem supports data rates of up to 8.4 Gbps, leveraging the company’s decades of high-speed signaling expertise as well as its 2.5D memory system architecture design and enablement, said Frank Ferro, senior director of product marketing for IP cores. By delivering 1 terabyte per second of bandwidth, Rambus’ HBM3-compliant memory interface is said to double the performance of high-end HBM2E memory subsystems.
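The 1-TB/s figure follows directly from the per-pin data rate and the width of the stack interface. As a rough sanity check, here is a minimal Python sketch, assuming HBM3 retains the 1,024-bit-wide per-stack interface of earlier HBM generations (the final specification was not yet ratified at the time of writing):

# Back-of-envelope check of the ~1-TB/s bandwidth claim.
# Assumption: HBM3 keeps the 1,024-bit-wide per-stack interface
# of earlier HBM generations (unconfirmed at time of writing).
data_rate_gbps = 8.4    # per-pin data rate, in Gbps
bus_width_bits = 1024   # assumed interface width per HBM stack

bandwidth_gb_per_s = data_rate_gbps * bus_width_bits / 8  # bits -> bytes
print(f"Per-stack bandwidth: {bandwidth_gb_per_s:.1f} GB/s")
# -> Per-stack bandwidth: 1075.2 GB/s, i.e. roughly 1 TB/s

Under that assumption, a single stack lands just above 1 TB/s, which also squares with the claim of doubling high-end HBM2E, whose fastest parts run at about 3.6 Gbps, or roughly 460 GB/s per stack.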

The Rambus system architecture for HBM3 memory integrates PHY and memory controller. (Source: Rambus)

The PHY-digital controller combination builds on the company’s installed base of HBM2 customer deployments, Ferro said, and is backed by a suite of support services to aid implementation in AI/ML designs. Integrating the PHY and controller is aimed at reducing the complexity of ASIC designs.

Ferro said advancements in the HBM3 memory subsystem reflect the growing risk that memory becomes the bottleneck in larger systems, a problem driving the need for more raw memory bandwidth. Another driver is the Compute Express Link (CXL) open-standard interconnect for pooling memory and achieving higher utilization.

The HBM3 memory subsystem also anticipates future revisions of the standard, Ferro said, as customers ponder the next iteration, presumably an intermediate update before HBM4.

“HBM has been an extremely popular memory solution in the data center, in networking and high-performance computing, especially around AI training,” he said. However, with design lead times for ASICs used in AI applications running as long as 18 months, Rambus is promoting its current IP so customers can begin production in 2023.

“What customers want is more of a turnkey kind of solution,” said Ferro.

Integrating the PHY and controller makes HBM3 memory subsystems easier to use, added Joe Rodriguez, senior product marketing engineer for IP cores. The new subsystem draws on IP from Northwest Logic, where Rodriguez previously worked. “We’ve been deploying it to customers who want to look at the early view of what their front-end designs [are] all about,” including visibility into preferred features and suggestions on new ones.

The HBM3-ready memory subsystem supports data rates up to 8.4 Gbps. (Source: Rambus)

Rodriguez said the controller benefits not only from the IP acquired by Rambus but also from the Northwest Logic development team, which was retained and combined with Rambus’ own engineering talent, including extensive 2.5D system design experience.

Despite its design experience and customer base, Ferro said challenges remain when moving data at 8.4 Gbps and pushing beyond to 10 Gbps. “The interposer designs are changing for HBM3. They’re getting more layers. They’re getting thicker dielectrics. Some are putting some capacitors that you can build into the interposer.”

Unlike mainstream DRAM, where the transition from DDR4 to DDR5 has been relatively drawn out and preserves backward compatibility, Ferro said HBM technology is turning over faster, with no compatibility between generations. A memory subsystem that performs at up to 8.4 Gbps therefore provides ample margin, “future-proofing” designs against the memory speed upgrades expected in the upcoming HBM3 release.

Initial target applications for the HBM3 memory subsystem are AI/ML training, high-performance computing, advanced data center workloads and graphics. Ferro said hyperscalers exercise substantial influence over HBM designs. “It really boils down to the customers we’re working with are pushing us for bandwidth on these training chips. They just need more bandwidth.”

This article was originally published on EE Times.

Gary Hilson is a freelance writer and editor who has written thousands of words for print and pixel publications across North America. His areas of interest include software, enterprise and networking technology, research and education, sustainable transportation, and community news. His articles have been published by Network Computing, InformationWeek, Computing Canada, Computer Dealer News, Toronto Business Times, Strategy Magazine, and the Ottawa Citizen.

 
