To accelerate the deployment of AI-oriented technologies, Nvidia has teamed up with Microsoft and Ingrasys to develop a "hyperscale" GPU accelerator chassis. The HGX-1, as it's called, is an open-source design for AI and cloud computing, released in conjunction with Microsoft's Project Olympus, the company's contribution to the Open Compute Project.
The HGX-1 is made up of eight Tesla P100 GPUs, all of which utilise the new Pascal architecture. It also features a switching design based on Nvidia's NVLink interconnect technology and the PCIe standard, enabling a CPU to dynamically connect to any number of GPUs. This allows cloud service providers that standardise on the HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations, according to the company.
The HGX-1 provides up to 100x faster deep learning performance than legacy CPU-based servers, at an estimated one-fifth the cost for AI training and one-tenth the cost for AI inferencing, Nvidia said.
The new architecture is designed to meet the exploding demand for AI computing in the cloud—in fields such as autonomous driving, personalised healthcare, superhuman voice recognition, data and video analytics as well as molecular simulations.
"The HGX-1 hyperscale GPU accelerator will do for AI cloud computing what the ATX standard did to make PCs pervasive today. It will enable cloud-service providers to easily adopt NVIDIA GPUs to meet surging demand for AI computing," said Jen-Hsun Huang, founder and chief executive officer of Nvidia.
__Figure 1:__ *HGX-1 is designed to support eight of the latest Pascal-generation NVIDIA GPUs and NVIDIA's NVLink high-speed multi-GPU interconnect technology, and provides high-bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 chassis together. (Source: Microsoft)*
The hardware is part of Microsoft's ongoing Project Olympus initiative, which is designed to give hyperscale data centres a high-performance, flexible path into the machine-learning industry. Nvidia and Microsoft hope that sharing the design with this open-source hardware development consortium will make it easier for enterprises to purchase and deploy.