Biren GPGPU Aims for the Clouds

In the data-center accelerator race, the three-year-old startup has burst from the gate with a chiplet-based design that aims to compete with Nvidia for general-purpose-GPU (GPGPU) cloud computing.

Joseph Byrne
Mike Demler

Three-year-old startup Biren Technology has entered the data-center accelerator race, bursting from the gate with a chiplet-based design that aims to compete with Nvidia for general-purpose-GPU (GPGPU) cloud computing. It's sampling its first product, the BR104, to lead customers; the chip ships on a PCIe card that delivers up to 128 trillion FP32 operations per second (Tflop/s) at a 300 W TDP, nearly tripling the performance of Nvidia's next-generation H100 PCIe. Biren expects the PCIe card to enter volume production in 4Q22. In the same quarter, it plans to sample the BR100, an open accelerator module (OAM) powered by two BR104s.
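A quick back-of-the-envelope calculation from the figures above (128 Tflop/s FP32 at a 300 W TDP) gives the BR104 card's nominal efficiency; this is just arithmetic on the quoted specs, not a measured result.

```python
# Nominal FP32 efficiency of the BR104 PCIe card from the quoted specs.
tflops = 128.0   # peak FP32 throughput, Tflop/s (as quoted)
tdp_w = 300.0    # thermal design power, watts (as quoted)

perf_per_watt = tflops / tdp_w
print(f"{perf_per_watt:.2f} Tflop/s per watt")  # ~0.43
```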

Biren’s target for the BR100 family is general-purpose cloud computing rather than AI acceleration, but the latter is where its first products shine compared with those of leading competitors AMD and Nvidia. Using two compute tiles, the BR100 needs about 20% less power to deliver about the same throughput as Nvidia’s H100 SXM for 8- and 16-bit integer and floating-point matrix math, as Table 1 shows. But since its tensor engines also accelerate FP32 GEMM, the BR100 trounces Hopper’s performance on that task by more than 4x. Most developers choose mixed- or reduced-precision floating point for AI training, however, reducing the value of faster FP32 throughput.
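The mixed-precision recipe referred to above, FP16 storage with FP32 accumulation, can be sketched in NumPy; this is an illustrative emulation of the general technique, not Biren's or Nvidia's actual tensor-engine pipeline.

```python
import numpy as np

# Mixed-precision GEMM as commonly used in AI training: inputs held in
# FP16 to save memory bandwidth, products accumulated in FP32 for accuracy.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float16)
b = rng.standard_normal((64, 64)).astype(np.float16)

# FP16 storage, FP32 accumulation -- the usual "mixed" recipe.
mixed = a.astype(np.float32) @ b.astype(np.float32)

# Higher-precision reference to gauge the rounding error introduced
# by quantizing the inputs to FP16.
ref = a.astype(np.float64) @ b.astype(np.float64)
max_err = float(np.abs(mixed - ref).max())
```

Because the accumulation happens in FP32 regardless of the input format, a device's FP16/BF16 matrix throughput, not its FP32 GEMM rate, is what gates this workload.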

Although the BR104 architecture clones many features of Nvidia’s GPGPUs, it lacks support for the new FP8 format, and unlike Hopper, its use of structured sparsity delivers no performance boost. The compute cores also omit FP64, making the device better for AI than for HPC.
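For context on the structured-sparsity feature the BR104 lacks: Nvidia's tensor cores double throughput on weights pruned to a 2:4 pattern, meaning at most two nonzero values in every group of four. A minimal NumPy sketch of that pruning step (illustrative only, not either vendor's implementation):

```python
import numpy as np

def prune_2_4(w):
    """Zero the two smallest-magnitude values in every group of four,
    producing the 2:4 structured-sparsity pattern."""
    groups = w.reshape(-1, 4).copy()
    # Indices of the two smallest-magnitude entries per group of four.
    smallest = np.argsort(np.abs(groups), axis=1)[:, :2]
    np.put_along_axis(groups, smallest, 0.0, axis=1)
    return groups.reshape(w.shape)

dense = np.array([0.9, -0.1, 0.05, 0.7, 0.2, -0.8, 0.3, 0.01],
                 dtype=np.float32)
sparse = prune_2_4(dense)
# Each group of four keeps only its two largest-magnitude weights.
```

Hardware with this feature skips the zeroed multiplies; on the BR104, the same pruned matrix runs at the dense rate.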
