Startup banks on triple-level cell NAND advantage
Startup NxGnData has begun making its move, hoping to seize opportunities for technology that puts computational tasks closer to where data resides, while also seeing a role for triple-level cell (TLC) NAND in the enterprise as "cold storage." It recently emerged at the Flash Memory Summit after a year in operation.
In an interview with EE Times, James Fife, VP of business development, said the Irvine, Calif.-based company's design centre set up in Taiwan in December has 30 employees, with 40 on staff in total. He said the company sees itself as a solid-state drive (SSD) company first but also has its own controller in the works. "We feel an SSD player in the enterprise space needs to provide its own controller."
Fife said NxGnData is targeting hyperscale computing customers (the Googles and Amazons of the world) with a low-power controller in the small M.2 form factor that can also address high-capacity storage of as much as 64TB.
In some use cases, Fife said, the company will employ lower-cost TLC NAND, particularly for what has been dubbed cold storage of data, and its variable-code-rate, LDPC-based error-correcting code (ECC) technology can address endurance concerns. However, he believes multi-level cell (MLC) is still the best option for hyperscale applications.
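The idea behind a variable code rate is that the controller can reserve more of each page for parity as the TLC NAND wears, trading usable capacity for correction strength. A minimal sketch of that trade-off, with an entirely hypothetical rate table (NxGnData has not disclosed its actual rates or thresholds):

```python
# Illustrative sketch, not NxGnData's implementation: a variable-code-rate
# ECC scheme selects a stronger (lower-rate) code as TLC NAND wears out,
# trading usable capacity for error-correction strength.

# Hypothetical rate table: (max P/E cycles, code rate = data bits / total bits)
RATE_TABLE = [
    (500, 0.93),   # fresh NAND: light ECC, ~7% parity overhead
    (1500, 0.88),  # mid-life: moderate ECC
    (3000, 0.80),  # worn: heavy ECC, 20% parity overhead
]

def select_code_rate(pe_cycles: int) -> float:
    """Return the code rate to use for a block at this wear level."""
    for max_cycles, rate in RATE_TABLE:
        if pe_cycles <= max_cycles:
            return rate
    return RATE_TABLE[-1][1]  # beyond rated endurance: strongest code

def usable_bytes(page_bytes: int, pe_cycles: int) -> int:
    """Data capacity of a page after reserving space for parity."""
    return int(page_bytes * select_code_rate(pe_cycles))
```

On a nominal 16KB page, this sketch would yield roughly 15.2KB of user data when the NAND is fresh and about 13.1KB near end of life, which is the capacity cost of keeping worn TLC usable.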
Social networking company Facebook has been vocal about wanting a low-cost flash technology, saying at last year's Flash Summit that a relatively low-endurance, poor-performance chip would better serve its need to store some 350 million new photos a day. Not long after, Jim Handy, principal analyst at Objective Analysis, concluded that Facebook would have to settle for a hierarchy of DRAM-flash-HDD for the foreseeable future. TLC might be cheaper and viable for cold storage, but not as cheap as Facebook would like, he said.
From left, NxGnData execs Richard Mataya, co-founder and SVP; Nader Salessi, founder and CEO; Vladimir Alves, co-founder and CTO; James Fife, VP of business development (Source: EE Times/Rick Merritt)
But TLC could make its way into the enterprise soon, following a path similar to MLC's, Handy said in a recent interview about Silicon Motion's latest controller for TLC NAND in client devices.
With regard to NxGnData's technology, Handy said the company is "biting into an awful lot." He said LDPC is very hard to implement workably because it is esoteric and demands a great deal of processing; some vendors do it well and others struggle with it. The other challenging technology NxGnData is implementing, according to Handy, is DSP, which involves signal processing and filters. For SSD makers, it means understanding the environment around every individual bit, and "that's complicated."
He said the company does have a good pedigree in terms of the team it has assembled, with three founders who have collective experience with Western Digital, STEC and Memtech.
What's just as notable about NxGnData, said Handy, is its in-storage computation capability, dubbed In-Situ Processing. As outlined in a presentation by the company at this year's Flash Summit, In-Situ is based on the premise that a computation requested by an application is much more efficient if it is executed near the data it operates on, because that cuts network traffic, increases throughput and improves system performance.
Moving the computation to the data is better than moving the data to the computation, according to NxGnData, especially for big data, including large and unstructured datasets. Traditionally, enterprises have employed expensive, high-performance servers coupled with SAN/NAS storage, but that architecture is ultimately limited by networking bottlenecks.
With In-Situ Processing, code is executed within the storage device (hence in situ) with minimal impact on the application and seamless integration at the system level. Possible use cases include performing functions directly on the SSD, such as running grep remotely, searching large datasets with a hardware-assisted search engine, and performing the map phase of MapReduce.
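The payoff of the remote-grep use case can be sketched in a few lines. This is a conceptual illustration only (the function names are invented, not NxGnData's API): the difference between the two paths is how much data crosses the interconnect, not the answer itself.

```python
# Conceptual sketch of the In-Situ idea (illustrative names, not a real API):
# instead of shipping every record to the host and filtering there, the
# filter runs inside the storage device and only matches cross the wire.

def host_side_grep(device_records, pattern):
    """Traditional path: all records travel to the host, then get filtered."""
    transferred = list(device_records)           # whole dataset over the wire
    matches = [r for r in transferred if pattern in r]
    return matches, len(transferred)             # records moved ~ dataset size

def in_situ_grep(device_records, pattern):
    """In-situ path: the device filters locally; only hits reach the host."""
    matches = [r for r in device_records if pattern in r]  # runs on the SSD
    return matches, len(matches)                 # records moved ~ result size

records = ["error: disk full", "ok", "error: timeout", "ok", "ok"]
hits_a, moved_a = host_side_grep(records, "error")
hits_b, moved_b = in_situ_grep(records, "error")
assert hits_a == hits_b    # same answer either way
assert moved_b < moved_a   # but far less data crosses the network
```

On a multi-terabyte drive the same logic applies at scale: a search that touches the full dataset returns only its hits, which is where the claimed cuts in network traffic and gains in throughput come from.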
Handy noted that a number of vendors have looked at this problem in the past, and some are addressing it in a small way. "This is an interesting twist." He added that some vendors, such as Violin, have built software so computers give directions and the flash array does all of the work. Another approach is Micron's Automata Processor.
Earlier this year, startup A3Cube announced a network interface card, dubbed RONNIEE Express, designed to eliminate the I/O performance gap between CPU power and data access performance for data centres, big data and high-performance computing applications.
Fife said NxGnData's key technology will be available for evaluation by a select group of customers in late 2014, with fully functional FPGA-based samples available in early 2015. Production samples of SoC-based M.2 solutions are expected in late 2015.
- Gary Hilson