Memory sharing facilitates network interconnection
Memory has become a factor in improving application performance and reducing latency: first in storage, and now in network architectures.
Start-up A3Cube has introduced RONNIEE Express, a network interface card it designed to close the I/O performance gap between CPU power and data access for datacentre, big data, and high-performance computing applications. The company said that by turning PCI Express into an intelligent network fabric, it can improve memory latencies and surpass existing networking technologies such as Ethernet, InfiniBand, and Fibre Channel.
However, RONNIEE is not meant to be a datacentre network or an Ethernet substitute; the company described it as a data plane technology, set apart from existing interconnects by its hardware-based shared memory facilities. Although not required, A3Cube's ByOS operating system can leverage the in-memory RONNIEE network to create a parallel computing system; ByOS supports features such as deduplication, compression, and encryption.
As explained to me by A3Cube's CTO and founder Emilio Billi, RONNIEE Express uses A3Cube's In-Memory Network technology to share non-coherent global memory across the entire network. Just as compute resources have become virtualised, A3Cube's network fabric can improve communication with memory regardless of where it is physically located.
RONNIEE Express uses memory as the main communication paradigm at the protocol level, said Billi. By creating a global shared memory container, the architecture allows for direct communication between local and remote CPUs, memory to memory, and local and remote I/O.
In a datacentre environment, this memory is likely to be an SSD, said Billi, but it could be any type of memory. He added that adoption of SSDs in the enterprise has shifted the storage I/O bottleneck from the storage device to the interconnection between storage and the CPU, highlighting the limitations of conventional PCI Express and other flash architectures.
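The memory-as-communication model Billi describes can be illustrated in miniature on a single machine. The sketch below is a generic illustration, not A3Cube's API: two mappings of the same region stand in for the endpoints of a memory-centric fabric, where "sending" is just a memory write and "receiving" is just a memory read, with no packet stack in between.

```python
import mmap
import os
import tempfile

# Back a small region with a file so two mappings can share it,
# standing in for a memory window exported across a fabric.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 64)

tx = mmap.mmap(fd, 64)  # "sender" view of the shared region
rx = mmap.mmap(fd, 64)  # "receiver" view of the same region

tx[:5] = b"hello"            # send: a plain memory write
received = bytes(rx[:5])     # receive: a plain memory read

print(received.decode())

tx.close()
rx.close()
os.close(fd)
os.remove(path)
```

In a real fabric the remote side would sit in another server, but the programming model, direct loads and stores against shared memory, is the same idea scaled across the network.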
Bob Laliberte, senior analyst with Enterprise Strategy Group, said A3Cube's network fabric is an example of a wider push to put memory and storage closer to applications. As the cost of SSDs comes down and more flash is incorporated into storage arrays, said Laliberte, "the cost per IOPs is becoming a new measurement. There's more of a focus of driving more memory and storage closer to the applications that require it."
From a networking perspective, Laliberte said A3Cube's approach is not dissimilar to a 3D torus, a switchless interconnect topology often employed in high-performance computing environments.
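The switchless property of a 3D torus is easy to see in its addressing: each node links directly to six neighbours, wrapping around at the grid edges, so traffic hops node to node rather than climbing through tiers of switches. A minimal sketch (generic, not A3Cube-specific):

```python
def torus_neighbors(x, y, z, dims):
    """Return the six direct neighbours of node (x, y, z) in a 3D torus.

    dims is the (X, Y, Z) size of the grid; the modulo arithmetic
    wraps each axis so edge nodes connect back to the opposite side.
    """
    dx, dy, dz = dims
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),
    ]

# A corner node in a 4x4x4 torus still has six neighbours,
# thanks to the wrap-around links.
print(torus_neighbors(0, 0, 0, (4, 4, 4)))
```

Because every node has the same degree and no central switch, the topology scales by adding nodes rather than switch ports, which is part of its appeal in high-performance computing.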
And just as SANs developed as a way to leverage underutilised storage resources, Laliberte pointed out, A3Cube's fabric allows for better sharing of memory by addressing the challenge of how to connect to it: "Right now, without using the fabric like they are proposing, you are talking about having to go down through a couple tiers of switching and back." This doesn't make sense, particularly in a high-performance environment, he noted.
A3Cube is not the only company to take an architectural approach to either improve communication with available memory or move applications closer to memory. Scale-out memory platform maker Violin Memory has an array that allows applications such as SQL Server, SharePoint, and Exchange, as well as Windows Server Hyper-V virtualisation and Server Message Block (SMB) file services, to access persistent memory directly. Meanwhile, Diablo Technologies' Memory Channel Storage architecture connects NAND flash directly to the CPU through a server's memory bus, so that persistent memory is essentially attached to the host processors of a server or storage array. SanDisk has incorporated the MCS architecture into its ULLtraDIMM technology.
- Gary Hilson