STMicroelectronics discusses its AI strategy at MWC and demonstrates three AI solutions
BARCELONA — As expected, AI is the crowd magnet at this year's Mobile World Congress. As Jem Davies, vice president, fellow and general manager of the machine learning group at Arm, quipped during an interview with EE Times, "Machine learning is a bit like fleas. Everyone has got one."
STMicroelectronics, meanwhile, broke its silence and discussed during the company's press conference Tuesday (Feb. 27) how it sees machine learning as a key to "distributed intelligence" in the embedded world. ST envisions a day when a network of tiny MCUs becomes smart enough to detect wear and tear in machines on the factory floor or find anomalies in a building, without constantly reporting sensor data back to data centers.
At its booth, ST demonstrated three tangible AI solutions: a neural network converter and code generator called STM32 CubeMX.AI, ST's own deep-learning SoC (codenamed Orlando V1), and a neural network hardware accelerator (currently under development using an FPGA) that could eventually be integrated into the STM32 microcontroller.
Asked if ST’s embedded AI solutions have been developed in partnerships with Arm’s Project Trillium, ST’s president and CEO Carlo Bozotti replied emphatically, “No. These are internally developed by ST.”
Unlike many smartphone chip vendors developing AI accelerators designed to work with a CPU and a GPU inside a handset, ST focuses on designing machine-learning solutions for embedded processors deployed in connected mesh networks. Gerard Cronin, ST's group vice president, told EE Times that ST already has neural network code that runs in software on any STM32 today. Its drawback, he explained, is that it runs too slowly for sophisticated, processing-intensive applications.
For machine-learning acceleration, ST is designing AI-specific hardware and software architectures. ST unveiled its first test chip, a deep convolutional neural network (DCNN) SoC containing eight reconfigurable DCNN accelerators and 16 DSPs. Manufactured in a 28-nm FD-SOI process, it is "ultra-energy efficient," claimed Bozotti, who described it as a significant achievement for ST's R&D team. "It's a real SoC, running AlexNet at 0.5 TOPS," Bozotti said.
ST has not decided whether the SoC will be launched as is, since the company is already working on its follow-ons. But, delivering 2.9 TOPS per watt at 266 MHz, it can be used as a co-processor for ST's MCUs.
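The quoted figures permit a rough cross-check. Assuming the 0.5-TOPS AlexNet throughput and the 2.9-TOPS/W efficiency describe the same operating point (the article does not say so explicitly), the implied power draw works out to well under a quarter of a watt:

```python
# Back-of-envelope check of the Orlando V1 figures quoted above.
# This assumes throughput and efficiency were measured at the same
# operating point -- a plausibility check, not an ST-published number.

throughput_tops = 0.5        # AlexNet throughput claimed by Bozotti
efficiency_tops_per_w = 2.9  # energy efficiency at 266 MHz

implied_power_w = throughput_tops / efficiency_tops_per_w
print(f"Implied power draw: {implied_power_w * 1000:.0f} mW")
# -> Implied power draw: 172 mW
```

A figure in that range would be consistent with the "ultra-energy efficient" positioning for battery-powered and thermally constrained embedded nodes.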
ST's ultimate AI scenario for the STM32, however, might be integrating a neural network hardware accelerator inside the MCU. The FPGA-based demo showed that it would take only a fraction of the STM32's CPU load to detect how many people are in a scene captured by an infrared camera.
Responding to the market's hunger for AI, Arm is confident it has built a better mousetrap with its CPU and GPU instruction set extensions designed specifically for machine learning. Arm is making these extensions available through an open-source license, and Davies said many companies are already using them.
Arm is planning to launch in mid-2018 what it calls a machine-learning processor capable of 3 TOPS. Davies stressed that this isn't a hardware accelerator to be used with Arm's CPU and GPU. It is, he said, a standalone "machine-learning processor" that is both powerful and energy efficient.
“We have several hardwired blocks to run specific neural networks,” said Davies, “but this is truly a programmable AI processor. There’s no need for dynamic scheduling. Static scheduling can get you what you need.”
Asked about target markets for such an AI processor, Davies said, “Object detection, voice/messaging, and digital TV.”
Similar to ST, Arm also sees the machine-learning trend moving from the cloud to edge devices. "It's simple, it's a law of physics (too many edge devices), law of economics (nobody wants to pay for bandwidth), law of latency (time-critical applications) and law of the land (protection of privacy)," Davies said.
While agreeing on the vision for object detection, ST, as a leading MCU provider, isn’t going to wait for Arm to come up with a standalone AI processor.
Nor is MediaTek waiting for Arm. In an interview with EE Times, MediaTek president Joe Chen told us, “We are extending our NeuroPilot AI platform (bridging CPU, GPU and onboard AI accelerators) to MediaTek’s other consumer products including digital TV.”
Asked about AI in the context of digital TV, Arm's Davies explained that the idea is somewhat similar to how Huawei is using its AI processor, the Kirin 970, for beautification of one's portrait photos. "These DTV guys are planning to use the power of AI for image enhancements in each video frame," he said. "They are really eager to get their hands on the AI processor."
— Junko Yoshida, Chief International Correspondent, EE Times