SSDs Risk Over-Segmentation

Article By : Gary Hilson

Form factors and features are flourishing to address diverse workloads, environments.

It was less than a decade ago that flash storage was expensive and precious, reserved for the 10% of data deemed “hot” so that it could be accessed quickly. But today, NAND flash-based storage in the form of solid-state drives (SSDs) is ubiquitous in data centers and even laptops. There are different types of NAND, it can now be stacked three-dimensionally, and there are multiple form factors for different workloads, so many that the lines between an SSD and a flash card are blurring.

Virtium’s customer base is so varied in its needs that developing a solution for a specific use case is much like Michelangelo chipping away at David until the sculpture is revealed, said vice president of marketing Scott Phillips. Unneeded features and functions are stripped away until a solution to the customer’s specific problem emerges. “They’re finding one size doesn’t fit all.” This trend has been accelerated by the COVID-19 pandemic as technology becomes more touchless and automated. He said more customers are looking to deploy different processing units to do machine learning and AI at the edge for data logging and predictive maintenance, among other things, all touchless. “They’re going to basically try to automate everything, so you don’t have to touch it.”

But you can’t put standard PCs everywhere because of the myriad size, weight, power, and air flow considerations, said Phillips. “When they’re going outside, they have to get more and more rugged.” In the meantime, the workloads are becoming even more segmented, everything from streaming video for live surveillance to small-block random workloads such as a boot loader in the telecom space, where Virtium has several customers. One needed the company to build an 8GB NVMe M.2 SSD, which uses the smallest chip density and will last forever, he said. “It didn’t need to do a whole lot of stuff.”

This is where the lines start to blur between what is considered an SSD and other flash devices, such as cards. Problems with embedded eMMC flash devices, like the recent Tesla failure, have made the new JEDEC Crossover Flash Memory (XFM) Embedded and Removable Memory Device (XFMD) standard appealing, Phillips noted. “That came out of a specific need. You don’t have to take the whole board out.” Automotive applications are requiring a whole lot more data storage in different form factors, just as the data center has evolved to contain its own diversity.

It’s not that everyone is trying to create their own standard, he said, but they’re trying to address a problem, whether it’s removability, a capacity constraint, or some other form factor and fit challenge. “It’s too big. It’s too long. It’s too short. It’s too tall and cuts air flow. It’s going to be more and more challenging actually as an SSD provider, especially for the industrial space.” Phillips said Virtium’s business requires it to be flexible and responsive to more diverse needs, whether in form factors or interfaces, everything from SATA to NVMe. “We’re still selling PATA drives to some of the avionics and military customers.” But bigger players don’t need that versatility, he said, which means they can focus their product lines on the latest and greatest form factors and interfaces for larger data center customers.

That means a company such as Micron Technology has a smaller portfolio of SSDs, but that portfolio has become more diverse, because even data centers vary, including those designed to handle AI and machine learning workloads.

Jeremy Werner, general manager of Micron’s storage business unit, which handles all NAND-based products that go into compute applications, said that although processors are an important part of the data center, without access to data, they can’t really do anything, which is why storage as SSDs must keep pace. “We are living at an unprecedented time in human history in terms of the way that we collect data, generate data, the way that we interact and use that data to really solve problems,” he said. “The types of leaps forward that we’re making all rely on massive amounts of data being analyzed very rapidly.”

Raj Hazra, general manager of Micron’s compute and networking business unit, said there’s been a lot of focus on the compute side, but generating insight from data to solve problems requires cooperative work between compute, memory, storage, and networking. “You have to store the data, you have to move the data, and then you have to crunch on the data.” This requires more data centers that can scale up cost effectively, including from a power perspective, and have inherent flexibility to address many different workloads.

Werner said Micron’s Gen4 data center NVMe SSD was designed with flexibility for customers in mind, and it includes its own DRAM, controller and firmware as well as leveraging Micron’s heterogeneous memory-storage engine. The new Micron SSD comes in three EDSFF form factors, which he said recognizes that as flash has become more affordable, many hard drive applications have moved to flash. Optimizing the form factors for flash allows better performance for reduction of the footprint in the data center and reduction in energy consumption.

Micron’s new SSD is also one of the first products compliant with the Open Compute Project (OCP) standard, which defines NVMe SSD requirements for qualified applications. “This is an important development because many of the NVMe SSDs in the market today have disjointed features and not necessarily great compatibility or interchangeability.” Werner said the introduction of the OCP standard will help address that.

Flexibility to serve different workloads through a variety of form factors and NAND flash types is one thing, but there is a danger of fragmentation. Even though customers no longer must worry about endurance for write-heavy applications, there are a lot of SSD options to navigate.

One of the benefits of the “refactoring” exercise the NVMe organization did when releasing the latest iteration of the specification is that new functionalities and features can be explored without being irrevocably intertwined with the core specification, said NVM Express technical work group chair Peter Onufryk. If there’s no market for a feature and the work winds down, there’s no negative impact on the NVMe work overall. The computational storage work going on today is an excellent example. “We’re very bullish on computational storage, we think it’s going to be great. Maybe it won’t work out,” he said. “If we put this in the main spec, it’s there forever. This way, it’s just off to the side and if it happens, great.”

NVMe can arguably be credited with the success of SSDs, as the work was the result of wanting a simple protocol at a time when the available ones were more complex than anyone liked, including “baggage” related to rotational media, said Onufryk. “People associate NVMe with flash-based SSDs, but the goal was all NVM.” That includes other persistent/storage-class memory such as Optane. “You should be able to connect anything to it and be fast and efficient.”

Even though other non-volatile memories could be put into an NVMe-friendly form factor such as an SSD, nothing has come close to the economics of NAND flash. In the meantime, rotational media continues to be useful in data centers, and NVMe 2.0 added support for HDD with updates to features, management capabilities, and other enhancements required for HDD support. The irony is that when SSDs were first deployed in data centers, they were tied to architecture designed for hard drives.

Jim Handy, principal analyst with Objective Analysis, said before SSDs, no one paid attention to the bottleneck created by the software servicing the hard drive. “No matter how slow they made it, it wasn’t anywhere near as slow as the hard drive.” When SSDs came onto the scene, the software was identified as the bottleneck, “and so there was a lot of cleaning up of the software.” He said one of the reasons why Optane SSDs haven’t really taken off is that the interface doesn’t favor the 3D Xpoint technology, even though it is faster than NAND flash. “There’s so much other stuff going on that interface. The Optane didn’t really have a chance to show Optane speed because the disk interface was so slow.”

High-end SSDs that use Toshiba’s or Samsung’s special high-speed NAND chips are keeping Intel Optane SSDs at bay, said Handy. “They aren’t yet mass-market devices.” The bulk of the market comprises everything from high-speed enterprise NVMe SSDs down to low-end consumer SSDs, all built using standard NAND flash. Meanwhile, the physical form factors of NAND flash SSDs have come a long way, looking less like a traditional hard drive and more like a stick of gum, and common concerns such as cost and endurance are rarely spoken about anymore.

The evolution of software has led to more mindfulness about which pieces of software are write-intensive and which are not. “They split those up in such a way that you can now determine certain zones where you need a high write,” said Handy. Software is now written to optimize workloads, which means endurance is much less of a concern, even for write-intensive applications. “You could have a drive that has pretty weak wear specifications, but that’s okay because you’re using it for an application that doesn’t put a lot of wear onto the disk.”
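The zone placement Handy describes can be illustrated with a minimal sketch. This is a hypothetical toy model in Python, not an actual ZNS or vendor API: write-intensive (“hot”) data and rarely rewritten (“cold”) data are directed to separate zones, so erase cycles, and therefore wear, concentrate where they are expected.

```python
# Toy model of zone-based write placement (hypothetical, illustrative only).
# Hot and cold data go to separate append-only zones; wear (erase count)
# accumulates almost entirely on the hot zone.

class Zone:
    def __init__(self, name, capacity_blocks):
        self.name = name
        self.capacity = capacity_blocks
        self.write_pointer = 0   # zones are append-only until reset
        self.erase_count = 0     # proxy for wear on this zone

    def append(self, n_blocks):
        if self.write_pointer + n_blocks > self.capacity:
            self.reset()         # zone full: erase and start over
        self.write_pointer += n_blocks

    def reset(self):
        self.write_pointer = 0
        self.erase_count += 1

class ZonedPlacer:
    """Route each write by its expected update frequency."""
    def __init__(self):
        self.zones = {"hot": Zone("hot", 64), "cold": Zone("cold", 64)}

    def write(self, n_blocks, write_intensive):
        zone = self.zones["hot" if write_intensive else "cold"]
        zone.append(n_blocks)
        return zone.name

placer = ZonedPlacer()
for _ in range(100):                      # frequently rewritten log data
    placer.write(8, write_intensive=True)
placer.write(32, write_intensive=False)   # one-time archival write

# Erases pile up on the hot zone; the cold zone is barely touched.
print(placer.zones["hot"].erase_count, placer.zones["cold"].erase_count)
```

In this sketch the cold zone never erases, which mirrors Handy’s point: a drive with weak wear specifications can still be fine if its workload rarely rewrites data.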

Thanks to better software, smarter controllers, new drive form factors, and advances in NAND itself, including the density from stacking, there’s no shortage of SSD options for different workloads and physical environments. Virtium’s Phillips anticipates the coming year will be a big one for NVMe in the industrial space, but there are still plenty of customers who need drives that run other established interfaces, such as SATA. “It’s going to be more and more challenging actually as an SSD provider, especially for the industrial space.”

A scenario of too much segmentation also concerns Onufryk. As the market becomes more fragmented, it becomes more difficult to have a sustainable market in every SSD segment. “Now everybody wants to do different things,” he said. “Because the market’s fragmenting, feature sets fragment.”

Related Articles:

JEDEC Advances Small, Changeable Flash Standard

NVMe Gets Refactored

Lots of Spin Left for Hard Drives

SAS, SATA Still Satisfy Many SSD Workloads

Micron Puts SSD into AI Mix

This article was originally published on EE Times.

Gary Hilson is a freelance writer and editor who has written thousands of words for print and pixel publications across North America. His areas of interest include software, enterprise and networking technology, research and education, sustainable transportation, and community news. His articles have been published by Network Computing, InformationWeek, Computing Canada, Computer Dealer News, Toronto Business Times, Strategy Magazine, and the Ottawa Citizen.
