Optane Ecosystem Hints at Broader Persistent Memory Support

Article By : Gary Hilson

Interconnects such as CXL open opportunities for software startup MemVerge, including supporting Optane persistent memory.

With Intel now the only Optane game in town, there’s a need for supporting players to build an ecosystem to accelerate the adoption of the emerging memory.

One of those players is MemVerge, which sees a coming tsunami of opportunities to enhance a wide range of persistent memories in the next few years, said CEO Charles Fan in an interview with EE Times. “We are seeing the rise of more data-centric applications that are demanding data to be both big and fast at the same time, and this is putting pressure on the performance of storage I/O, which is really the data movement between memory and storage.”

Charles Fan

He said the problem is that memory isn’t big enough and isn’t persistent, so storage is necessary. “When you have two buckets for data, you need to move data back and forth.” Fan sees the future as being one bucket, eliminating the need for storage I/O, even if it takes a decade or two.

“Memory will become a bigger part of the data infrastructure. More and more applications, especially data-centric applications, will be run from the memory.” He said Optane signifies a major breakthrough in that memory can be bigger, cheaper, and persistent, but it also requires a memory interconnect, such as CXL, that works with new kinds of media including Optane. “We can easily envision a per-server memory of over a hundred terabytes.”

Fan said the advent of CXL has led to the emergence of a new memory performance pyramid. The hottest data will run in High Bandwidth Memory (HBM) close to the CPU, while hot data will run in different types of memory connected to the DDR bus. Heterogeneous processors and memories will be interconnected in a CXL fabric that’s not quite as fast as DDR and HBM memory.
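This tiered pyramid can be reduced to a simple placement rule: the hotter a page, the closer it sits to the CPU. The toy model below uses the tier names from the article, but the access-count thresholds are invented purely for illustration and do not come from MemVerge or Intel.

```python
# Toy model of the memory performance pyramid: hottest data in HBM,
# warm data on the DDR bus (DRAM or persistent memory), cooler data
# in CXL-attached memory. Thresholds are illustrative placeholders.

TIERS = [
    (1000, "HBM"),           # very hot pages live closest to the CPU
    (100,  "DDR-attached"),  # DRAM or Optane on the memory bus
    (0,    "CXL-attached"),  # pooled memory over the CXL fabric
]

def pick_tier(accesses_per_interval: int) -> str:
    """Return the tier a page should live in, hottest tier first."""
    for threshold, tier in TIERS:
        if accesses_per_interval >= threshold:
            return tier
    return TIERS[-1][1]

print(pick_tier(5000))  # HBM
print(pick_tier(250))   # DDR-attached
print(pick_tier(3))     # CXL-attached
```

A real tiering engine would track recency and bandwidth as well as raw access counts, but the shape of the decision is the same.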

For now, Optane is not the easiest technology to adopt without the support of a good software ecosystem, said Fan. With the hardware alone, Intel allows native access to the Optane persistent memory with App Direct mode, but it requires a new API, which means rewriting existing applications that were written for DRAM. The second mode uses firmware to set Optane to Memory mode, which is compatible with DRAM, but without the persistence feature, and slower than DRAM, he said. “The whole thing is hardware defined and less flexible to address if you have more than one virtual machine or more than one app running on the server.”
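The rewrite burden Fan describes comes from App Direct mode’s programming model: the application stores through a memory mapping and must flush explicitly to guarantee persistence, rather than issuing storage I/O. The sketch below imitates that style with an ordinary memory-mapped file standing in for persistent memory; a real deployment would map a DAX file and use a library such as PMDK, and the file name here is just a placeholder.

```python
# Sketch of the App Direct programming style: write through a memory
# mapping, flush explicitly, and the data survives "restart". An
# ordinary file stands in for real persistent memory here.

import mmap
import os

def persist_then_reread(path: str, payload: bytes) -> bytes:
    with open(path, "wb") as f:
        f.truncate(mmap.PAGESIZE)          # one page of "persistent memory"
    with open(path, "r+b") as f:
        pm = mmap.mmap(f.fileno(), mmap.PAGESIZE)
        pm[0:len(payload)] = payload       # a plain memory store, no write() syscall
        pm.flush()                         # the explicit persistence step
        pm.close()
    with open(path, "rb") as f:            # reopen, as if after a restart
        return f.read(len(payload))

print(persist_then_reread("pmem_demo.bin", b"hello"))  # b'hello'
os.remove("pmem_demo.bin")
```

DRAM-only code has no equivalent of that flush step, which is why existing applications need rewriting to use App Direct mode natively.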

MemVerge sees the advent of CXL leading to the emergence of a new memory performance pyramid. (Courtesy MemVerge)

This is where MemVerge enters the picture, helping applications adopt new memory types much more easily, said Fan. Its Memory Machine software replicates what the hardware does in Memory mode by employing tiering algorithms between various types of memory. “We provide a software-defined, DRAM-compatible interface to the applications.” This negates the need for customers to rewrite their applications.
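One generic way such a software layer can present a DRAM-compatible interface is to keep recently touched pages in the fast tier and demote the least recently used ones to the larger, slower tier. The LRU scheme below is a minimal sketch of that idea, not MemVerge’s actual algorithm; class and tier names are invented for illustration.

```python
# Minimal LRU tiering sketch: hot pages stay in the fast tier (DRAM),
# cold pages are demoted to the big tier (persistent memory). The
# application just calls access() and never sees the tiers.

from collections import OrderedDict

class TieredMemory:
    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()   # page -> data, kept in LRU order
        self.slow = {}              # overflow tier
        self.cap = fast_capacity

    def access(self, page, data=None):
        if page in self.fast:
            self.fast.move_to_end(page)      # refresh recency
        else:
            self.fast[page] = self.slow.pop(page, data)  # promote or admit
            if len(self.fast) > self.cap:
                cold, cold_data = self.fast.popitem(last=False)  # demote LRU
                self.slow[cold] = cold_data
        return self.fast[page]

mem = TieredMemory(fast_capacity=2)
mem.access("a", 1); mem.access("b", 2); mem.access("c", 3)
print(sorted(mem.fast))   # ['b', 'c'] are still hot
print(sorted(mem.slow))   # ['a'] was demoted
```

Production tiering also weighs access frequency, page size, and media wear, but the promote/demote loop is the core of a software-defined memory interface.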

At the same time, access to the Optane persistence feature remains, said Fan. “We are creating various data services on top of memory that we can provide to the applications.” The first is its ZeroIO memory snapshot technology, which eliminates I/O to storage so that terabytes of data can be snapshotted and recovered from persistent memory in a few seconds, rather than taking minutes to hours from storage, he said. “We can do this repeatedly by tracking the change among the pages.” This allows for a series of auto-saves on an application that can be revisited, whether for recovery, security, or workflow purposes.
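“Tracking the change among the pages” describes incremental snapshotting: the first snapshot captures everything, and each later one captures only pages written since the previous snapshot. The sketch below shows that dirty-page bookkeeping in miniature; it is purely illustrative, as ZeroIO snapshots live in persistent memory, not Python dictionaries.

```python
# Incremental snapshots via dirty-page tracking: each snapshot stores
# only the pages written since the last one, so repeated snapshots
# stay cheap even when total memory is large.

class SnapshottedMemory:
    def __init__(self):
        self.pages = {}      # page_id -> bytes
        self.dirty = set()   # pages written since the last snapshot
        self.snapshots = []  # list of {page_id: bytes} deltas

    def write(self, page_id, data):
        self.pages[page_id] = data
        self.dirty.add(page_id)

    def snapshot(self):
        delta = {p: self.pages[p] for p in self.dirty}  # changed pages only
        self.snapshots.append(delta)
        self.dirty.clear()
        return len(delta)    # how many pages this snapshot had to capture

mem = SnapshottedMemory()
mem.write(0, b"aa"); mem.write(1, b"bb")
print(mem.snapshot())     # 2 -- first snapshot captures both pages
mem.write(1, b"BB")
print(mem.snapshot())     # 1 -- only the changed page is captured
```

Recovery replays the deltas in order, which is what makes a series of auto-saves cheap to keep and fast to revisit.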

Eric Burgener, research vice president of IDC’s Infrastructure Systems, Platforms and Technologies Group, said MemVerge’s offering has appeal on two fronts. One is the many new artificial-intelligence-driven workloads that require big data on the back end. “They’re basically data science applications that are using some form of artificial intelligence.” Whether it’s machine learning, deep learning, or even a simple AI application, more and more they’re being done in real time, such as autonomous vehicles, he said. “They really need to be able to have a super low latency for data access as they’re feeding the CPUs that are doing all of the inferencing in these kinds of workloads.”

A MemVerge Memory Machine can be configured for each application so the three major capabilities can be tailored to the specific needs of each one. In addition, four enterprise-class data services are available which are based on ZeroIO Snapshots: Time Travel, AutoSave, Thin Clone and App Migration. (Courtesy MemVerge)

If these workloads can push more data into what looks like main memory rather than an NVMe-based SSD, the latency is reduced by roughly 10 times, said Burgener, although it’s still slower than DRAM. “The persistent memory gives you about a 300- to 400-nanosecond access time.” For block-based Optane SSD applications, the latency won’t be reduced as much, but it’s a performance play for high-transaction apps for financial workloads, he said. “If there’s a technology that a bank can buy to make things run faster, they want that.” Trading applications are also harnessing AI and need to keep CPUs fed with data, too.

There’s also an in-memory database play where main memory is limited but the application requires reloading a new dataset into that main memory, said Burgener. “There’s a time lag as you get that data off of persistent storage. People who run those environments tend to try to put as much memory as they can in their servers so that they can keep as much data as possible close to the CPU.” Combining Optane with DRAM creates a main memory pool that runs only slightly slower than just DRAM, he said, but it’s a lower cost solution that provides the needed performance range.
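The economics Burgener outlines is a weighted average: if tiering keeps most accesses in DRAM, the blended pool’s latency stays close to DRAM while its cost per gigabyte falls well below an all-DRAM configuration. The numbers below are hypothetical placeholders, not real prices or measured latencies; only the arithmetic is the point.

```python
# Back-of-the-envelope math for a blended DRAM + Optane memory pool.
# All figures are hypothetical; the 350 ns value sits in the 300-400 ns
# range quoted above, the rest are invented for illustration.

dram_gb, pmem_gb = 256, 768        # hypothetical pool: 25% DRAM, 75% Optane
dram_ns, pmem_ns = 80, 350         # rough per-access latencies
dram_cost, pmem_cost = 8.0, 3.0    # hypothetical $/GB

total_gb = dram_gb + pmem_gb

# Suppose tiering serves 90% of accesses from DRAM:
avg_latency = 0.9 * dram_ns + 0.1 * pmem_ns
avg_cost = (dram_gb * dram_cost + pmem_gb * pmem_cost) / total_gb

print(round(avg_latency, 1))   # 107.0 ns -- "only slightly slower than just DRAM"
print(round(avg_cost, 2))      # 4.25 $/GB -- well under the all-DRAM 8.0
```

The better the tiering keeps hot data in DRAM, the closer the blended latency gets to pure DRAM, which is the whole argument for the combined pool.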

In the short term, it’s a small market, said Burgener, and almost all Optane is being purchased for performance-intensive environments. However, MemVerge is one of only a few players, and as a software company, bringing in $10 million in its second year would be a pretty good achievement. As Optane moves to higher volumes, it gets lower in price, and then this becomes a viable strategy for a lot more workloads, he said. “MemVerge is going to become more of an interesting option as Optane gets less expensive.”

Although Burgener wouldn’t describe the Optane market as taking off, there’s decent growth happening. “The issue has been that there haven’t been enough applications that could significantly benefit from the better performance of Optane in the older world.” When Optane began shipping as an actual product, this new era of AI-driven workloads was just beginning, he noted, and as these workloads continue to grow, there’s a great future market opportunity for 3D XPoint media because these workloads are starting to drive mission-critical, day-to-day operations.

Fan said MemVerge is preparing for a whole cohort of new memory technologies created for CXL that will lead to a new hierarchy. “Optane is what’s available today. Three to five years down the road, we think there will be multiple alternatives of a new kind of memory class that’s going to become available, and our intention is for our software to support all of them.”

This article was originally published on EE Times.

Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.
