Nvidia's partnership with ODMs isn't about winning the server market; it aims to push the company's GPU-compute chassis into the cloud servers and datacentres that Taiwan-based ODMs are already building.
In the not-so-distant past, Computex Taipei, an international computer expo, was all about PCs. Taiwan’s original design manufacturers (ODMs) grew up with Computex and put Taiwan on the map as a global PC hub. Among the companies that profited most from the Taiwan ODMs’ success were Intel and Microsoft, which played key roles in defining PC technology.
Fast forward to 2017.
Nvidia came to Computex hoping to replicate what Intel and Microsoft achieved decades ago with the PC market. Its sole focus is dominating the new era of accelerated computing.
Nvidia defines accelerated computing as the increased use of a graphics processing unit together with a CPU to accelerate deep learning, analytics and engineering applications.
On May 29, Nvidia unveiled "a partnership program with the world’s leading ODMs—Foxconn, Inventec, Quanta, and Wistron—to more rapidly meet the demands for AI cloud computing."
Through a partner program built around Nvidia’s hyperscale GPU accelerator for AI and cloud computing, Nvidia hopes to provide each ODM with “early access to the Nvidia HGX reference architecture, Nvidia's GPU computing technologies and design guidelines,” according to the company.
The GPU giant's partnership program with Taiwanese ODMs isn’t about winning a server market. Taiwan already builds most of the world's servers, thanks in part to companies such as Intel and Google, which have worked with Taiwanese ODMs for years.
Nvidia wants to push its GPU-compute chassis further into datacentres and cloud computing servers that Taiwanese ODMs are already making.
GPU-accelerated computing offloads the compute-intensive parts of an application to the GPU, while the rest of the code runs on the CPU. Nvidia's goal is to enable accelerated computing “everywhere,” from labs to academia and small and medium businesses, explained Keith Morris, senior director of product management for accelerated computing at Nvidia. This will help “democratise” the use of such applications as deep learning, artificial intelligence, and machine learning, which rely on rapid acceleration of parallel code, he added.
A number of companies have been wooing Taiwanese ODMs for years, said Paul Teich, principal analyst at Tirias Research. “Intel has been working with most of these server ODMs (Foxconn, Inventec, Quanta, and Wistron) for years—they are the leading cloud server ODMs because of their relationship with Intel,” he noted.
“Wistron is also a Gold-level member of the OpenPOWER Foundation, of which Google is a founder and Platinum member,” Teich added.
Nvidia is hardly new to this market. Inventec and Nvidia have been working on Open Compute Project (OCP) boxes since 2014, according to Teich. “Nvidia tapped Quanta as the motherboard provider for Nvidia's DGX-1 platform a couple of years ago. Foxconn is the most recent name on this list.”
Asked what’s new with Nvidia’s partnership program, Teich noted, “I'd say that Nvidia is simply taking advantage of these ODMs' existing relationships.”
The road to standardisation
Fuelling Nvidia’s efforts to drive AI cloud computing is a new HGX-1 hyperscale GPU accelerator, an open-source design released in conjunction with Microsoft’s Project Olympus.
As Morris noted, a standardised GPU-compute chassis in a common enclosure is ideal for giving hyperscale datacentres a fast, flexible path to AI.
Citing the history of the PC world, Nvidia explained that “HGX-1 does for cloud-based AI workloads what ATX—Advanced Technology eXtended—did for PC motherboards when it was introduced more than two decades ago.”
Morris believes that something like HGX-1 establishes “an industry standard” that can be rapidly and efficiently embraced by many cloud-service providers.
The call for standardisation comes from many companies that design datacentre products and those that use them, as seen in the Open Compute Project (OCP). OCP is an organisation founded to design and enable the delivery of the most efficient server, storage, and datacentre hardware designs for scalable computing. Tirias Research’s Teich, however, noted, “If there is one thing that the OCP has taught us, it's that the cloud giants like the idea of a standard stock-keeping unit (SKU), but none of them will buy one. Everyone wants custom.”
That said, he added, “The telco cloud market may be different, but it's still emerging. And I think that Microsoft's Project Olympus contribution to OCP also has a chance of changing that dynamic, which is where [Nvidia’s] HGX-1 comes in.”
Figure 1: Nvidia's GPU compute chassis based on the HGX reference design. (Source: Nvidia)
What’s going for a standard GPU-compute chassis SKU is that it “moves the processors to a separate pizza box, which is exactly what HGX-1 does,” said Teich. “Some folks will want x86, some will want ARM and others will want OpenPOWER. The HGX-1 design makes the processor choice a PCIe cabling decision.”
In the end, the customer can experiment to discover the optimal ratio of CPU to GPU for their applications, he added.
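Why that ratio is worth experimenting with can be sketched with Amdahl's law: only the fraction of an application that can be offloaded benefits from adding GPUs, so a higher GPU-to-CPU ratio pays off quickly for highly parallel workloads and flattens just as quickly for serial-heavy ones. The numbers below are invented for illustration, not benchmarks.

```python
# Hedged illustration: Amdahl's law, speedup = 1 / ((1 - p) + p / s),
# where p is the fraction of runtime offloadable to GPUs and s is the
# accelerator speedup on that fraction. Values are illustrative only.

def amdahl_speedup(p, s):
    """Overall speedup when fraction p is accelerated by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Roughly doubling GPU throughput shows diminishing returns once the
# serial CPU fraction (here, an assumed 10%) starts to dominate:
for s in (2, 4, 8, 16):
    print(f"s={s:2d}  overall speedup={amdahl_speedup(0.9, s):.2f}")
```

With a 90% offloadable fraction, going from 8x to 16x accelerator throughput adds comparatively little, which is why the "right" CPU:GPU ratio is workload-specific rather than a fixed SKU.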