Wave Computing, among others, pledging to use MIPS' 64-bit multi-threaded cores in AI processors
PARIS — MIPS, a storied but beleaguered RISC processor core company, is coming back to life. Breathing new life into MIPS are a new customer — Wave Computing — and a number of existing clients that include Intel/Mobileye, NetSpeed, Fungible, ThinCI and Denso. All have pledged to use MIPS' 64-bit multi-threaded processor core to handle device management and control functions inside their respective AI processors — many either in development or ready for rollout.
Wave Computing is the designer of a massively parallel dataflow architecture called the Wave Dataflow Processing Unit (DPU) for deep learning. The company, which is getting ready to roll out a beta system based on its first-generation processor in the next few weeks, has decided to use a MIPS 64-bit CPU in its second-generation DPU, Derek Meyer, Wave Computing CEO and a MIPS veteran, told EE Times.
In the first-generation DPU, Wave Computing used a 32-bit RISC processor core developed by Taiwan’s Andes Technology Corp. Replacing it with a 64-bit RISC processor was Wave Computing’s plan all along, said Meyer. The question, however, was which 64-bit RISC core to choose. “Obviously, during our research, we looked at RISC-V, and a whole bunch of others,” Meyer said.
But when the issue comes down to a “RISC processor with hardware multi-threading architecture and cache coherence,” Meyer said, “MIPS is the only one. There are no other RISC processors that can do that today.”
MIPS’ future was uncertain after the company, acquired by Imagination Technologies in 2013, was widely believed to have lost its focus and momentum under the then-new management. By many SoC companies’ standards, choosing MIPS was too risky a decision. This changed when Tallwood Venture Capital bought MIPS late last fall. The deal brought Dado Banatao, Tallwood’s managing partner, into MIPS as chairman of the board.
“With Dado [Banatao] heading the company, we see the stability is coming back to MIPS,” said Meyer. “I’ve always loved MIPS and love it more with Dado involved. He’s a real visionary.” Banatao is an investor in both MIPS and Wave Computing.
Why multithreading and cache coherence are important
Wave Computing’s Meyer sees MIPS’ multithreading technology as the key reason his team wants MIPS. In Wave Computing’s dataflow processing, “when we load, unload and reload data for [a] machine learning agent, hardware multithreading architecture is effective,” said Meyer.
Kevin Krewell, principal analyst at Tirias Research, told us, “Multithreading is a way to efficiently add many threads with a smaller number of cores. It’s very effective for workloads that have a lot of short tasks.”
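Krewell’s point about many short tasks can be illustrated with a rough software analogy. Hardware multithreading interleaves threads on a single core to hide latency, which software threads only approximate, so the sketch below (with made-up task counts and delays) is illustrative rather than a model of the MIPS core:

```python
import concurrent.futures
import time

def short_task(i):
    # Simulate a brief, latency-bound task (e.g., a stalled memory access)
    time.sleep(0.01)
    return i * 2

# Run 100 short tasks on just 4 worker threads; while one thread waits,
# another makes progress -- the latency-hiding idea behind multithreading.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(short_task, range(100)))

print(len(results))  # 100
```

With only four workers, the 100 waits overlap instead of running back-to-back, which is why a small number of multithreaded cores can service many short tasks efficiently.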
Cache coherence is another positive Wave’s team sees in MIPS. “Because our DPU is 64-bit, it only makes sense [that] both MIPS and DPU talk to the same memory in 64-bit address space,” said Meyer.
Paul Teich, principal analyst at Tirias Research, explained, “Cache coherence means that the results of a convolution are available to all other threads on a chip.” He noted, “As a layer of neurons in a model is processed, larger on-chip caches mean more of the layer can stay resident in cache, and maybe even multiple layers. That means fewer latency-inducing accesses to system memory and better performance.”
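Teich’s cache-residency point can be made concrete with a back-of-the-envelope calculation. The cache size and layer sizes below are hypothetical, not figures from Wave or MIPS:

```python
# Hypothetical on-chip cache and per-layer activation sizes (illustrative only)
CACHE_BYTES = 16 * 1024 * 1024  # assume a 16 MB on-chip cache

# Activation footprint in bytes for each layer of an imaginary network
layer_bytes = [8_000_000, 4_000_000, 2_000_000, 1_000_000]

# Count how many consecutive layers can stay resident in cache at once
resident, used = 0, 0
for size in layer_bytes:
    if used + size > CACHE_BYTES:
        break
    used += size
    resident += 1

print(resident)  # layers that fit; fewer resident layers mean more
                 # latency-inducing trips to system memory
```

In this made-up case all four layers (15 MB total) fit in the 16 MB cache; halve the cache and only the first layer would stay resident, forcing the rest to round-trip through system memory.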
Still early in AI market growth
While MIPS rattled off a host of AI processor companies who have adopted MIPS, the AI market is still in an early phase.
Teich told us he sees several different AI accelerator camps. First, there is the GPU gang consisting of Nvidia and AMD, plus Arm, Qualcomm and others for mobile. Krewell added, “Nvidia rules the market from here.”
Then there is an FPGA posse, including Intel and Xilinx.
There is also a DSP camp consisting of Qualcomm, Ceva and a few others.
Finally, there is a team working on new architectures, said Teich. This group includes Arm, Fungible, Mobileye, ThinCI, Wave Computing, and others. Google’s TPU is a member, Teich added.
When all is said and done, Teich concluded, Tirias Research believes AI is going to contribute to most workloads in the future.
“The industry is just at the start of this ride, so there is plenty of upside for the foreseeable future,” he said. “It’s unlikely Nvidia’s competitors will impede its current growth, but we’re still early in AI market growth.”
Teich added, “It is not a zero-sum game.” There will be a lot of market opportunities, and a lot of experimentation will be going on. “MIPS can benefit from that,” Teich said.
Wave Computing updates
Wave Computing’s Meyer told us that although his company developed a unique DPU architecture, it won’t be selling its chips. Instead, it will make systems it hopes to sell into data centers.
Wave Computing has positioned itself as a competitor to Nvidia’s customers such as Dell EMC, HPE, IBM, Lenovo, Cisco, Huawei, Quanta, Inspur, Sugon, Tyan, and Wiwynn, observed Teich. Wave will also directly compete with Nvidia’s DGX line of standalone appliances, Krewell added.
While such a long list looks formidable, Teich observed that customers are all looking for a differentiated competitive edge for their workloads. “That is more difficult when the OEMs and ODMs are all trying to architecturally innovate around a small set of Nvidia’s products,” said Teich. “If Wave Computing can give a cloud service a quantifiable edge for a specific workload, I think they will see some traction.”
Wave Computing plans to disclose soon how much faster its systems can run both training and inference.
Asked how far along Wave Computing is in integrating the MIPS 64-bit processor core, Meyer said, “We’ve been working on it since Dado [Banatao] came to MIPS last November.”
Noting that Wave Computing has a wealth of MIPS veterans including MIPS’ former CTO and VP of engineering, Meyer said, “It’s a small world in Silicon Valley. We are pretty far along when it comes to understanding what we need to do.”
— Junko Yoshida, Chief International Correspondent, EE Times