Startup AlphaICs to tape out 8 TOPS edge learning chip next month
Artificial intelligence (AI) startup AlphaICs last week announced an $8 million funding round as it aims to tape out its first edge processor chip next month. Delivering 8 TOPS at 4 W, the chip is, the company said, ideal for carrying out edge inference and edge learning.
We reported on the company's plans back in 2018. Following the latest announcement, EE Times caught up with CEO and executive chairman Pradeep Vajram and co-founder and VP Prashant Trivedi to learn more about the company's technology and ambitions. Vajram highlighted how the company had re-aligned its ambition to focus on two key capabilities: edge inference and edge learning.
Having now raised a total of $11.5 million, the company said it will use the latest funds to tape out its first chip, the 8 TOPS RAP-E AI chip, which it is calling Gluon, to develop the software stack, and to build system solutions for its target markets. The 'RAP' in AlphaICs' device nomenclature stands for "real AI processor;" it is based on a proprietary, modular, and scalable architecture to enable AI acceleration for low-power edge applications.
The chip, which is already fully functional on an FPGA, is scheduled to tape out next month in a TSMC 16nm FinFET process.
The Gluon chip processes 315 images per second with ResNet-50 at a batch size of one, and 100 images per second with Yolo V2 at a batch size of one. The company said these figures are best in class in terms of images per second per watt and images per second per TOPS, "and much better than the competition, even without any parsing."
Typical power consumption is 4W for these performance figures. Supporting AI frameworks such as TensorFlow, PyTorch and Caffe2, AlphaICs said any trained model can be easily deployed on its Gluon chip.
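Dividing the quoted throughput by the quoted power and peak compute gives the efficiency metrics the company is benchmarking against. A quick back-of-envelope check, using only the figures published above (the constant names are illustrative, not from AlphaICs):

```python
# Efficiency derived from AlphaICs' published numbers:
# 315 img/s (ResNet-50) and 100 img/s (Yolo V2), both at batch size 1,
# at a typical 4 W power draw and 8 TOPS peak compute.
POWER_W = 4.0
PEAK_TOPS = 8.0

throughput = {"ResNet-50": 315, "Yolo V2": 100}  # images per second

for model, ips in throughput.items():
    ips_per_watt = ips / POWER_W    # images per second per watt
    ips_per_tops = ips / PEAK_TOPS  # images per second per TOPS
    print(f"{model}: {ips_per_watt:.2f} img/s/W, {ips_per_tops:.2f} img/s/TOPS")
```

For ResNet-50 this works out to roughly 78.75 img/s/W and 39.38 img/s/TOPS; whether that beats a given competitor depends on that competitor's own batch-1 figures, which the article does not quote.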
Many startups are now offering what everyone calls "edge AI," so what is AlphaICs' differentiator? Vajram explained that the company is squarely focused on enabling both inference and learning on the edge: being able to identify and classify both seen and unseen objects at the edge. The current challenges for AI at the edge are the need to train with large sets of labeled data, the cost of labeling that data and of the compute, and the significant drop in accuracy on unseen data.
AlphaICs said the performance it achieves with its architecture means it can identify as well as classify objects with less training data. Vajram cited a proof of concept, built on an FPGA board system, that the company has developed with a leading defense R&D institution. He said details cannot be made public until they have been submitted to the institution, which is likely to happen in the next few months. In essence, though, the project demonstrates that the amount of labeled data needed for training can be reduced, enabling edge learning for image classification and detection. It also demonstrates the ability to classify and detect unseen images as and when they are encountered; this, the company says, is continuous on-device learning with no need for re-training.
With privacy and data availability being driving factors for edge analytics and AI, this type of learning will be vital for reducing the cost of tasks like people identification, object detection, and activity detection at the edge. In addition to facilitating privacy, edge learning also enables automated labeling and facilitates continuous learning of new scenarios. Vajram said, "AlphaICs' innovative architecture will empower system integrators to create AI solutions with a short time-to-market, while staying within the system's cost and thermal constraints. This funding will help us bring our first inference co-processor to the market for vision applications with low latency requirements. We are also working with strategic partners to bring innovative solutions to the industrial, automotive, and surveillance markets."
He commented that AlphaICs is attracting interest from a handful of companies looking to evaluate the technology. While Visteon showed early interest, the Covid-19 pandemic did delay things somewhat. “We worked with Visteon very closely for several months evaluating our technology on our FPGA platform till Q1 2020. Due to the delay in our silicon tape out, the discussions were put on hold and both the teams agreed to reinitiate as and when we are ready with tape out. Given that we have the funding now and tape out is happening in February 2021, we reconnected with Visteon to discuss the progress made and the next steps.”
Vajram said early market opportunities for the company are likely to be in surveillance and retail, plus many video analytics applications. "Longer term, cockpit electronics and driver monitoring systems are also opportunities."
Sateesh Andra, managing director of one of AlphaICs’ lead investors, Endiya Partners, commented, “Edge AI applications in consumer markets like high-end smartphones, wearables as well as enterprise markets like robots, cameras, and sensors will be pervasive in the next few years. AlphaICs RAP accelerates inferencing as well as learning tasks on-device, rather than in a remote data center, delivering benefits like low latency, cost, data privacy, and security. While Nvidia, Google, and startups like Graphcore are poised to dominate data center AI, AlphaICs has the opportunity to be a market leader in enabling AI at the edge.”