CXL Product Pipeline Gets Flowing

Article By : Gary Hilson

The Compute Express Link (CXL) interconnect standard is poised to gain a lot of traction, especially as 2022 rolls around.

Micron Technology made the news when it abandoned further 3D XPoint development in favor of focusing on the rapidly emerging Compute Express Link (CXL) interconnect standard. But it’s not the first company out of the gate with CXL-related products.

Rambus is corralling its wide-ranging portfolio of intellectual property to address the burgeoning CXL market, including some from recent acquisitions, while Samsung recently announced a high-performance DRAM module.

Samsung’s Double Data Rate 5 (DDR5) DRAM-based memory module is targeted at data-intensive applications such as artificial intelligence (AI) and high-performance computing (HPC) that need server systems that can significantly scale memory capacity and bandwidth. The company has been collaborating with several data center, server, and chipset manufacturers to develop next-generation interface technology since the CXL consortium was formed in 2019.

Samsung opted for DDR5 DRAM for its first CXL memory module because it expects DDR5 to be the most cost-effective solution in terms of bandwidth, capacity expansion, speed and reliability, as well as power efficiency, when CXL really gains widespread traction in 2022. (Courtesy Samsung)

Cheolmin Park, vice president and head of Samsung Electronics’ Datacenter Platform Group, said the company has received a lot of positive feedback and numerous collaboration proposals from major data center and OEM customers who are interested in new applications that require memory capacity and bandwidth expansion. “To improve bandwidth and capacity at the system level, we reached the conclusion that CXL memory expansion based on DRAM would be the best solution.” He expects Samsung’s collaborations with customers regarding CXL and DDR will show meaningful results starting in the second half of this year. “We expect demand for Samsung CXL memory to continue to steadily increase throughout the market after that.”

Helping to move things along is the fact that the technologies used to implement CXL, including the PCIe interface, have already been validated and widely commercialized, Park said. "Similar to how the DDR controller is integrated within the AP or CPU, integrating the DDR interface with the CXL controller should pose little difficulty." However, he said, it may be challenging to stabilize the peer-to-peer communication or direct memory access (DMA) of accelerators and network interface cards that support the CXL interface.

Park said Samsung will provide CXL memory that is optimized for existing PCIe infrastructure by adding a CXL layer to meet customer requirements for memory expansion. It will also offer its CXL Memory Software Development Kit (CMDK) to ensure that customers get the same level of performance, system reliability, software environment, and security as conventional memory systems. He said Samsung opted to use DDR5 DRAM because it expects that server systems won't actually support CXL, and CXL-based DRAM won't become mainstream, until 2022. "We expect DDR5 to be the most cost-effective solution in terms of bandwidth, capacity expansion, speed and reliability, as well as power efficiency."
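The article doesn't detail how the CMDK exposes CXL-attached DRAM to software, but as a rough, hypothetical sketch of what "the same software environment as conventional memory" can mean in practice: on Linux, a CXL memory expander commonly shows up as a CPU-less NUMA node, so an application can place data on it with the same libnuma calls it already uses for socket-local memory. The node ID below is an assumption for illustration; the real ID depends on the platform.

/* Sketch: allocate a buffer on a CXL memory expander exposed as a NUMA node.
 * Assumes the expander appears as node 1; build with: gcc cxl_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return 1;
    }

    int cxl_node = 1;            /* assumed node ID for the CXL expander */
    size_t size = 1UL << 30;     /* 1 GiB region placed on the far-memory node */

    void *buf = numa_alloc_onnode(size, cxl_node);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", cxl_node);
        return 1;
    }

    memset(buf, 0, size);        /* touch the pages so they are faulted in on that node */
    printf("1 GiB allocated on NUMA node %d\n", cxl_node);

    numa_free(buf, size);
    return 0;
}

Because the expander is just another NUMA node, existing placement tools (numactl, for example) and NUMA-aware allocators keep working; the point of the sketch is that no new programming model is required to consume the extra capacity.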

Even though CXL isn't yet mainstream, he said it made sense to begin addressing the market now that server platforms from Intel, AMD, and ARM have begun to support CXL. "Market demand for CXL memory is increasing, so we are confident that now is the best time for host and device manufacturers to work together in building an extensive ecosystem for CXL." He said Samsung will look to expand its use of CXL beyond DRAM to also include NAND and storage class memory (SCM).

Jeff Janukowicz, IDC research vice president for solid state storage and enabling technologies, said there's a lot of interest in expanding memory pools to support next-generation workloads, whether in-memory databases or emerging applications such as AI and machine learning, all of which require higher performance. "That's where something like CXL can certainly come in and offer some advantages." In addition to DRAM, he said, it wouldn't be surprising to see some of the SCM products also make their way over to the CXL interface.

Memory is not the only opportunity for CXL products. CXL consortium member Microchip Technology was quick out of the gate with its XpressConnect CXL 2.0 retimer, which addresses the demands of data center workloads by providing the ultra-low-latency signal transmission required for AI, ML, and other compute-intensive workloads. PCIe retimers are usually implemented as an integrated circuit (IC) placed on a PCB to extend the reach of a PCIe bus. The retimer compensates for the discontinuities caused by interconnects, PCBs, and cable transitions that degrade PCIe signals, outputting a regenerated signal in both directions as if it were a fresh PCIe device.

CXL memory expansion provides more main memory to the host (CPU) for higher performance on high-capacity workloads, while CXL memory pooling ultimately supports disaggregation and composability. (Courtesy Rambus) (Click on the image for a larger view)

For Rambus Inc., CXL isn't just about pooling memory; it's about corralling its IP to fuel its recently announced CXL Memory Interconnect Initiative, which supports the evolving architectures of data centers and the continuing growth and specialization of server workloads. Matt Jones, general manager of IP cores, said the company's acquisitions of CXL and PCIe digital controller provider PLDA and PHY provider AnalogX add products and expertise that complement its established server memory interface chip business. Essentially, Rambus' IP can be split into two halves: the memory side and the chip-to-chip side of the business, including its SerDes interfaces. "The acquisitions fit very nicely into this initiative."

Rambus sees the data center's architecture as moving from the server as the unit of computing to a disaggregated model, said Jones, so that compute resources can be "composed" to meet the needs of varying workloads. Memory expansion and pooling devices are something the company has been investigating for some time. The CXL Memory Interconnect Initiative is Rambus' effort to codify that work, he said, and to pull together a diverse set of building blocks: CXL and PCIe PHYs and controllers to interface with host processors and other devices, DDR memory PHYs and controllers to interface with memory devices, and advanced cryptographic cores and secure protocol engines that enable secure firmware downloads and protect the links against data tampering and physical attacks via Integrity and Data Encryption (IDE).

Rambus fellow Steve Woo said acquisitions going back a decade are playing a role in the company's CXL strategy, as the company has moved toward securing semiconductors and data paths, which will be necessary as data center architectures evolve. Other technologies playing a role are buffer chips that attach to DIMMs and sit between the host processor and the actual DRAMs. "We've got this really nice portfolio of things where we have a chip business, and we have all these nice building blocks for doing things in a buffer between a CPU and an actual memory device."

The emerging data architecture reflects the fact that compute is no longer the bottleneck, said Woo. CXL allows for big pools of memory outside the server chassis that can be provisioned as needed and returned when a workload is done. "It's really about the data and the data movement. People want to look at ways to have memory that isn't so tightly coupled to the CPU."

This article was originally published on EE Times.

Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.

 
