Nvidia's latest DPU reflects that distributed-computing environments are here to stay—and that hardware is key in implementing zero-trust security in the data center and at the edge.
The remote-work era spawned by the pandemic shone a light on the need for robust security when endpoints exponentially proliferate and workloads become more distributed. Nvidia’s latest data processing unit (DPU) reflects that these distributed-computing environments are here to stay—and that hardware has a key role to play in implementing zero-trust security, whether it’s in the data center or at the edge.
Nvidia’s BlueField-2 DPUs will be deployed in Dell PowerEdge systems with the aim of improving the performance of virtualized workloads based on VMware vSphere 8.
The new offering is the result of two years of collaboration with VMware, with a focus on meeting the demands of artificial-intelligence workloads and security services, Kevin Deierling, senior VP of networking at Nvidia, told EE Times. Optimized for the VMware vSphere 8 enterprise workload platform, the Nvidia-Dell combo includes Nvidia BlueField DPUs, Nvidia GPUs, and Nvidia AI Enterprise software.
The DPU is used to offload, isolate, accelerate, and secure data-center infrastructure services, so that CPUs and GPUs are free to focus on running and processing large volumes of workloads for AI and other data center applications.
The ever-increasing number of microservices supporting containerized and virtualized apps spread across data centers is taxing CPUs, Deierling said.
“The CPU capacity is being consumed with security aspects, moving data around, and running massive amounts of east-west traffic to allow these distributed applications to communicate with each other—and actually share all of the data across the entire dataset,” he said.
Modern applications, including AI, are continuing to generate massive amounts of data and processing, and that data is consuming CPU cycles.
Aside from taking pressure off the CPUs and GPUs, the programmability of the DPU plays a role in bolstering security for multi-cloud environments and at the edge, Deierling said. “The increased demand for distributed apps is the other thing that’s happening.”
Instead of a single monolithic application, microservices are spread across the entire data center, and more computing is being done at the edge, all of which needs to be secured.
This is where zero-trust security comes into play.
“Zero-trust security really implies that everything inside of the data center is untrusted,” he said, noting that means all users, devices, and data must be authenticated and validated.
The Nvidia platform takes the approach that devices are the foundation of zero-trust security. All firmware being loaded can be authenticated in the boot and execution environments so that anything running in the data center can be trusted.
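The authenticated-boot idea described above reduces to a measurement check: hash each firmware image and compare it against a known-good digest held by a root of trust. A minimal Python sketch follows; the firmware names and image bytes are hypothetical, and a real implementation would use signed images and tamper-resistant storage rather than an in-memory table.

```python
import hashlib
import hmac

def measure(image: bytes) -> str:
    """Measure a firmware image with SHA-256."""
    return hashlib.sha256(image).hexdigest()

# Provisioning step: record the digest of a known-good ("golden") image.
# The name and bytes below are stand-ins, not real Nvidia firmware.
golden_image = b"\x7fNIC-FW v2.1 stand-in firmware blob"
TRUSTED_DIGESTS = {"nic-fw-v2.1": measure(golden_image)}

def verify_firmware(name: str, image: bytes) -> bool:
    """Authenticate a firmware image before it is allowed to run."""
    expected = TRUSTED_DIGESTS.get(name)
    if expected is None:
        return False  # unknown firmware is untrusted by default
    # Constant-time comparison of measured vs. trusted digest.
    return hmac.compare_digest(measure(image), expected)

assert verify_firmware("nic-fw-v2.1", golden_image)            # intact image
assert not verify_firmware("nic-fw-v2.1", golden_image + b"x")  # tampered
assert not verify_firmware("rogue-fw", golden_image)            # unknown name
```

The deny-by-default behavior for unknown firmware is the essential zero-trust property: nothing runs unless it has been explicitly measured and matched.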
Encryption, of course, is critical for securing hardware. But, as Deierling noted, it is an expensive, CPU-intensive process.
The BlueField-2 DPU can offload and accelerate that encryption and decryption in hardware, making it feasible to encrypt all data, including east-west traffic, both in motion and at rest.
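When no offload is available, that east-west encryption shows up as per-message AEAD work on the host CPU. A hedged sketch using Python's third-party `cryptography` package gives a feel for the software path; the payload and associated-data strings are illustrative, and the DPU performs the equivalent IPsec/TLS-style work in dedicated hardware instead.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Assumption: AES-256-GCM stands in for the line-rate crypto a DPU
# offloads; running this per packet on the host is what burns CPU cycles.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def protect(plaintext: bytes, aad: bytes = b"east-west") -> bytes:
    """Encrypt and authenticate one payload; prepend the per-message nonce."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    return nonce + aead.encrypt(nonce, plaintext, aad)

def unprotect(blob: bytes, aad: bytes = b"east-west") -> bytes:
    """Split off the nonce, then decrypt and verify the payload."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aead.decrypt(nonce, ciphertext, aad)

msg = b"pod-to-pod RPC payload"
assert unprotect(protect(msg)) == msg
```

Because GCM authenticates as well as encrypts, any tampering with the ciphertext causes `unprotect` to raise an exception rather than return corrupted data.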
Other features of the platform include leveraging the GPU and the DPU together to apply AI to detect anomalous behavior, such as rapid entry of passwords beyond what a human can type. He said the combination of the DPU and AI can look at how people are interacting with the data center and detect anomalous behaviors even when the data’s encrypted.
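The simplest behavioral signal mentioned above, password attempts arriving faster than a human can type, can be sketched as a sliding-window rate check. This is a toy stand-in for the GPU/DPU-accelerated models Deierling describes, and the thresholds are assumptions.

```python
from collections import deque

class RateAnomalyDetector:
    """Flag event bursts (e.g. password attempts) faster than human typing."""

    def __init__(self, max_events: int = 5, window_s: float = 2.0):
        self.max_events = max_events  # assumed threshold, not Nvidia's
        self.window_s = window_s
        self.times = deque()

    def observe(self, t: float) -> bool:
        """Record an event at time t (seconds); return True if anomalous."""
        self.times.append(t)
        # Drop events that have aged out of the sliding window.
        while self.times and t - self.times[0] > self.window_s:
            self.times.popleft()
        return len(self.times) > self.max_events

det = RateAnomalyDetector()
human = [det.observe(t) for t in (0.0, 0.8, 1.7, 2.9)]                # paced
bot = [det.observe(t) for t in (3.0, 3.01, 3.02, 3.03, 3.04, 3.05)]   # burst
assert not any(human) and any(bot)
```

Note that this kind of rate signal works on metadata (event timing), which is why such detection can operate even when the payload data itself is encrypted.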
Zero trust as a concept has primarily been the domain of IT managers. And it’s more than just technology; it’s a cybersecurity philosophy that includes best practices and processes. Core to the concept of zero trust is that users should have access to applications, data, and services only as necessary to do their jobs. But as threat actors increasingly set their sights on U.S. industrial control systems (ICS) and target critical infrastructure, especially utilities, securing operational technology (OT) at the hardware level is becoming increasingly important.
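The least-privilege principle at the heart of zero trust reduces to deny-by-default policy checks: a request is refused unless a grant explicitly covers it. A minimal sketch, with hypothetical role and permission names:

```python
# Deny-by-default policy table (assumption: illustrative names only).
POLICY = {
    "db-admin": {"orders-db:read", "orders-db:write"},
    "analyst":  {"orders-db:read"},
}

def allowed(role: str, action: str) -> bool:
    """Grant only what the role explicitly lists; everything else is denied."""
    return action in POLICY.get(role, set())

assert allowed("analyst", "orders-db:read")
assert not allowed("analyst", "orders-db:write")  # not needed for the job
assert not allowed("guest", "orders-db:read")     # unknown role: denied
```

Production systems layer authentication, auditing, and continuous re-validation on top of this check, but the default-deny core is the same.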
Even without the zero-trust moniker, adding security at the device level has been gaining traction, whether it’s memory or network interface cards. Security features in memory were proliferating well before the exploding growth of edge computing, the internet of things, and connected cars: The “S” in SD card stands for “secure,” and electrically erasable programmable read-only memory (E2PROM) is favored for credit cards, SIM cards, and keyless entry systems.
Flash-based SSDs have for years included encryption, although there have been qualms about how it might affect the performance of the drive. Self-encrypting drives, such as those made by Virtium, include dedicated encryption engines using the Advanced Encryption Standard (AES) that do not require software to run on the host. CrossBar recently directed its focus to secure computing with ReRAM and PUF technology.
Hardware-based security features reflect the reality that every system will eventually be connected, and that a single device compromised by a hacker can affect any number of computing platforms, from an autonomous vehicle, which is essentially a server on wheels, to industrial, medical, and IoT devices connected via 5G networking.
Embedding security at the device level also aligns with the concept of DevSecOps, in which developers think about security at the beginning of the software development process rather than bolting it on as an afterthought. That approach also reduces the likelihood that security features will degrade application performance, and Nvidia’s strategy of moving security responsibilities to its DPU so that GPUs and CPUs are less taxed dovetails well with that philosophy.
This article was originally published on EE Times.
Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.