AlphaICs Begins Sampling Its Deep Learning Co-Processor

Article By : Nitin Dahad

The edge AI silicon startup begins sampling its first co-processor for smart vision applications.

AlphaICs, a startup developing edge AI and learning silicon aimed at smart vision applications, has begun sampling its deep learning co-processor, Gluon, which ships with a software development kit (SDK).

The Gluon co-processor is touted as delivering 8 TOPS of edge AI inference performance in a 16nm FinFET process from Taiwan Semiconductor Manufacturing Co. Figures released by AlphaICs put its frames-per-second-per-watt performance at 32 fps/W for the Yolo-V2 object detection model and 22 fps/W for the VGG-19 classification model. The co-processor is focused on accelerating deep learning neural network models for classification, detection and segmentation in smart vision applications such as surveillance, industrial, retail, automotive and industrial IoT.

The RAP architecture consists of multiple “agents,” each comprising a scalar processor, multiple Tensor processors and dedicated on-chip SRAM memory. (Source: AlphaICs)

The Gluon chip is based on AlphaICs’ proprietary RAP architecture, which is modular and scalable. The framework comprises multiple “agents,” each with a scalar processor, multiple Tensor processors and dedicated on-chip SRAM. The scalar processor handles basic operations (fetch, decode, execute, interfacing) and dispatches AI computations to the Tensor processors. Scaling the design up or down is a matter of changing the number of agents and Tensor processors. Gluon, the first in a family of chips AlphaICs intends to develop, uses a 16×16 configuration: 16 RAP agents with 16 Tensor processors each.
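The agent hierarchy described above can be sketched as a small data model. This is purely illustrative: the class names are invented, the per-agent Tensor-processor count is an assumption based on the article's "16×16" description, and AlphaICs has not disclosed the SRAM size per agent.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Illustrative stand-in for one RAP agent (names are assumptions)."""
    scalar_processors: int = 1     # fetch/decode/execute and dispatch
    tensor_processors: int = 16    # AI compute units driven by the scalar core
    sram_kb: int = 0               # dedicated on-chip SRAM (size not disclosed)

@dataclass
class RapChip:
    agents: list = field(default_factory=list)

    @classmethod
    def configure(cls, n_agents: int, tensors_per_agent: int) -> "RapChip":
        # Scalability comes from varying the agent / Tensor-processor counts.
        return cls(agents=[Agent(tensor_processors=tensors_per_agent)
                           for _ in range(n_agents)])

# Gluon's stated 16x16 configuration, read here as
# 16 agents x 16 Tensor processors per agent (an assumption).
gluon = RapChip.configure(16, 16)
total_tensor_units = sum(a.tensor_processors for a in gluon.agents)
print(total_tensor_units)  # 256
```

Under that reading, a future family member with, say, 8 agents would simply be `RapChip.configure(8, 16)`.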

Gluon also incorporates PCIe and LPDDR4 interfaces to enable high-speed transfers to host processors and DRAM, respectively. The chip benchmarks include:

  • 153 frames per second with Yolo-V2 (416×416×3 image size) at 4.73 W.
  • 79 frames per second with VGG-19 (224×224×3) at 3.6 W.
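These raw figures can be cross-checked against the fps/W numbers quoted earlier. The snippet below simply divides the published values; it is a sanity check on AlphaICs' released data, not an independent measurement.

```python
# Benchmark figures released by AlphaICs (fps and power draw per model).
benchmarks = {
    "Yolo-V2": {"fps": 153, "watts": 4.73},  # 416x416x3 input
    "VGG-19":  {"fps": 79,  "watts": 3.6},   # 224x224x3 input
}

for model, b in benchmarks.items():
    print(f"{model}: {b['fps'] / b['watts']:.1f} fps/W")
# Yolo-V2: 153 / 4.73 = 32.3 fps/W, matching the quoted 32 fps/W
# VGG-19:   79 / 3.6  = 21.9 fps/W, matching the quoted 22 fps/W
```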

Gluon ships with an SDK for deploying neural networks, enabling developers to add AI capability to existing x86/Arm-based systems. The startup has built a software stack around the Gluon processor, including compile and runtime engines that deploy trained models on the chip. The initial version supports the TensorFlow framework, with plans to extend the stack to other frameworks.
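The compile-then-run split described above can be sketched with stand-in functions. Everything here is a mock: none of these function names, parameters, or behaviors come from AlphaICs' actual SDK, which the company has not documented publicly.

```python
def compile_model(framework: str, model_path: str) -> dict:
    """Mock of the SDK's offline compile engine.

    Per the article, the initial release accepts TensorFlow models only.
    """
    if framework != "tensorflow":
        raise ValueError("initial SDK release supports TensorFlow only")
    # Produce a stand-in artifact targeting the co-processor.
    return {"target": "gluon", "source": model_path}

def run_on_gluon(artifact: dict, frames: int) -> int:
    """Mock of the runtime engine on the x86/Arm host.

    In the real system, inference would be offloaded to the co-processor
    over PCIe; here we just pretend every frame was processed.
    """
    assert artifact["target"] == "gluon"
    return frames

artifact = compile_model("tensorflow", "detector.pb")
print(run_on_gluon(artifact, 100))  # 100
```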

AlphaICs CEO Pradeep Vajram said the company is currently demonstrating its Gluon technology to customers. “Our team worked very hard over the last two years to achieve this great milestone.” Targeted AI vision applications include surveillance, retail, industrial and smart cities.

The company has also established a channel partnership with CBC Co. Ltd, a Japanese company specializing in video surveillance products. CBC has been working with AlphaICs for nearly two years, and will serve as its marketing partner in Japan. “Gluon was showcased at Japan AI Expo in October 2021 and generated great interest from Japanese customers for vision applications,” said Kazuhiko Kondo, a CBC executive officer.

This article was originally published on EE Times.

Nitin Dahad is a correspondent for EE Times, EE Times Europe and also Editor-in-Chief of embedded.com. With 35 years in the electronics industry, he’s had many different roles: from engineer to journalist, and from entrepreneur to startup mentor and government advisor. He was part of the startup team that launched 32-bit microprocessor company ARC International in the US in the late 1990s and took it public, and co-founder of The Chilli, which influenced much of the tech startup scene in the early 2000s. He’s also worked with many of the big names—including National Semiconductor, GEC Plessey Semiconductors, Dialog Semiconductor and Marconi Instruments.
