TORONTO — High Bandwidth Memory (HBM), like many other memory technologies, is being adopted for emerging use cases that didn’t exist at its inception because of specific characteristics such as performance, capacity, and power consumption. But it won’t be long before there’s pressure to improve on those characteristics as adoption in newer scenarios takes off.

The Jedec Solid State Technology Association’s most recent update to the JESD235 HBM DRAM standard focuses on meeting the needs of applications in which peak bandwidth, bandwidth per watt, and capacity per area are critical metrics. Such applications include high-performance graphics, network and client applications, and high-performance computing.

The JESD235 standard builds on the first HBM standard, released in November 2015 with input from GPU and CPU developers, with the goal of keeping ahead of the system bandwidth growth curve supported by traditional discrete packaged memory.

In a telephone interview with EE Times, Barry Wagner, HBM task group chair for Jedec, said that the update reflects the decision to add some density range to the HBM2 class of products before moving on to the HBM3 generation of devices.

“This update was really focused on extending the support and the design from an 8 Gb-per-layer definition to a 16 Gb-per-layer,” Wagner said.

An HBM DRAM has a distributed interface tightly coupled to the host compute die and divided into independent channels. Each channel is completely independent of the others and independently clocked, so the channels need not be synchronous with one another. The wide-interface architecture enables high-speed, low-power operation, with each channel interface maintaining a 128-bit data bus operating at double data rate (DDR).

JESD235B includes a legacy mode to support HBM1 and a new pseudo-channel mode in HBM2.

JESD235B adds a new footprint option to accommodate the 16 Gb-per-layer and 12-high configurations for higher-density components and extends the per-pin bandwidth to 2.4 Gbps. Performance-wise, the HBM standard update supports speeds up to 307 GB/s and densities up to 24 GB per device by leveraging wide I/O and TSV technologies. Bandwidth is delivered across a 1,024-bit-wide device interface that is divided into eight independent channels on each DRAM stack.
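The headline bandwidth figure follows directly from the interface width and per-pin rate quoted above. A minimal sketch of that arithmetic (based on the numbers in this article, not on the JESD235B specification text itself):

```python
# Back-of-the-envelope check of the headline JESD235B figures.
interface_width_bits = 1024  # total device interface width
pin_rate_gbps = 2.4          # per-pin data rate in Gb/s
channels = 8                 # independent channels per DRAM stack

# Peak bandwidth: 1,024 pins * 2.4 Gb/s, divided by 8 bits per byte
peak_gb_per_s = interface_width_bits * pin_rate_gbps / 8
print(peak_gb_per_s)                     # 307.2 GB/s

# Each of the 8 channels gets a 128-bit slice of the interface
print(interface_width_bits // channels)  # 128
```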

The standard can support 2-high, 4-high, 8-high, and 12-high TSV stacks of DRAM at full bandwidth to allow systems flexibility on capacity requirements from 1 GB to 24 GB per stack.
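The capacity range scales with the per-layer density and the stack height. A short sketch of how the article's figures combine (using 8 Gb = 1 GB):

```python
# Stack capacity from per-layer density and stack height, a sketch
# using the figures quoted in this article.

def stack_capacity_gb(layer_density_gbit: int, stack_height: int) -> float:
    """Capacity of one HBM stack in GB: layer density (Gb) times layer count."""
    return layer_density_gbit * stack_height / 8

# The new 16 Gb-per-layer devices across the supported stack heights:
for height in (2, 4, 8, 12):
    print(f"{height}-high: {stack_capacity_gb(16, height):g} GB")
# 12-high at 16 Gb per layer yields the 24 GB per-device maximum
```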

Wagner said that it was relatively easy to come to some consensus on what the HBM2 definition should be moving forward, but that it was complicated by restrictions for backward compatibility as needed by Jedec stakeholders. This is reflected by the inclusion of a legacy mode to support HBM1 and a new pseudo-channel (PC) mode in HBM2, for example.

Legacy mode provides 256-bit prefetch per memory read and write access when the burst length is set to 2, while the PC mode divides a channel into two individual sub-channels of 64-bit I/O each, providing 256-bit prefetch per memory read and write access for each pseudo channel. HBM2 devices supporting PC mode require a burst length of four.
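Both modes deliver the same 256-bit prefetch per access; they just arrive at it differently. A quick sketch of the arithmetic above:

```python
# Prefetch per access in the two modes, from the figures above.

# Legacy mode: the full 128-bit channel bus with burst length 2
legacy_prefetch_bits = 128 * 2
print(legacy_prefetch_bits)  # 256

# Pseudo-channel mode: two 64-bit sub-channels, each with burst length 4
pc_prefetch_bits = 64 * 4
print(pc_prefetch_bits)      # 256, per pseudo channel
```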

HBM DRAMs exploit the increase in available signals to provide semi-independent row and column command interfaces for each channel. These interfaces increase command bandwidth, and thus performance, by allowing read and write commands to be issued simultaneously with other commands such as activates and precharges.

Samsung was ahead of the curve when it announced its 8-GB HBM2 with a 2.4-Gbps data transfer speed per pin at 1.2 V.

Wagner said that much of the HBM adoption is occurring in high-performance computing applications and networking chips that need to keep up with faster Ethernet speeds. “A lot of the demand for the high capacity is driven by very large data-set–type applications for high-performance computing,” he said.

As previously noted by Jim Handy, principal analyst with Objective Analysis, HBM is a niche technology for now because it remains relatively expensive due to the TSVs being put on silicon wafers. It’s most commonly used in graphics cards that need a lot of bandwidth to get to the GPU.

One emerging area for HBM is artificial intelligence applications, a use case already being targeted by Samsung. A year ago, Samsung announced mass production of its second-generation technology, dubbed Aquabolt, specifically designed with AI in mind, as well as next-gen supercomputers and graphics systems.

One notable feature of Aquabolt, as well as SK Hynix’s HBM2 memory chips, is that they already support 2.4-Gbps speeds at 1.2 V, so in a sense, the Jedec update formalizes what is already available from a performance perspective.

Wagner said that work is already underway to develop the HBM3 standard, with the overarching goal of increasing bandwidth and offering a range of density while improving performance per watt.