SAN JOSE, Calif. — A startup in India announced ambitious plans to design and license RISC-V-based processor cores as well as deep-learning accelerators and SoC design tools. InCore Semiconductors will make its first cores available before the end of the year.

The effort marks a small but significant addition to the RISC-V ecosystem. It shows that the initiative's open-source instruction set architecture is gaining global interest as an alternative to offerings from Arm and other traditional suppliers.

InCore spun out of the Shakti processor research team at IIT-Madras, leveraging research in machine learning at its Robert Bosch AI Centre. So far, it is funding itself with revenues from providing commercial support for Shakti cores, according to G. S. Madhusudan, chief executive of InCore and a principal scientist at IIT-Madras.

The startup is developing two families of in-order cores that target edge systems ranging from ultra-low-power IoT to desktops.

At the low end, its E-class cores use three-stage pipelines and come in 32- and 64-bit versions supporting a subset of the RISC-V ISA. They will run at less than 200 MHz and come with ports of FreeRTOS, targeting Arm’s M-class cores.

The high-end, 64-bit C-class cores use a five-stage pipeline and support the full RISC-V ISA and virtualization. They target speeds up to 800 MHz but can be customized to run up to 2 GHz and issue two instructions per cycle.

The C-class cores will support a level-four secure version of Linux and target Arm’s A35/A55 cores. The startup also plans a set of extensions for the C-class cores that enable fault-tolerant functions for automotive and other markets.

Versions of both E and C cores will be available before the end of the year. Superscalar, dual-issue versions will follow before April.

A systolic array is one of the first blocks on the way to an AI accelerator core. (Image: InCore)

AI plan starts with accelerator blocks

To accelerate deep learning in embedded systems, InCore will supply, before the end of the year, blocks that integrate with its cores. The so-called Axon series products are the start of a plan to design accelerator cores for machine learning that will include support for real-time guarantees.

One block will provide a basic systolic array using a data flow architecture and supporting frameworks such as Caffe and TensorFlow. Another offers cache optimizations to enable skipping redundant operations in sparse data sets by using a special address table and register file.
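InCore has not published details of its systolic-array block, but the general technique is well established: a grid of multiply-accumulate cells computes a matrix product as operands stream past in a data-flow fashion. A minimal, time-stepped simulation (generic illustration only, not InCore's design):

```python
# Simulation of an output-stationary systolic array computing C = A @ B.
# The processing element (PE) at grid position (i, j) holds one accumulator
# and adds a[i][s] * b[s][j] at the time step when those operands, skewed
# by their distance from the array edges, arrive at that PE.

def systolic_matmul(a, b):
    n, k = len(a), len(a[0])
    m = len(b[0])
    acc = [[0] * m for _ in range(n)]  # one accumulator per PE

    # Run until the last skewed operand reaches the far-corner PE.
    for t in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                s = t - i - j          # operand index arriving at PE (i, j)
                if 0 <= s < k:
                    acc[i][j] += a[i][s] * b[s][j]
    return acc

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(systolic_matmul(a, b))  # [[19, 22], [43, 50]]
```

The appeal for deep learning is that each operand is fetched once and reused across a whole row or column of PEs, trading memory bandwidth for local data movement.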

A separate Aegis series, due by next June, will deliver hardware-based security functions such as a tagged architecture that prevents common memory attacks. However, it requires software support and is an extension outside of the RISC-V specification.

Separately, InCore aims to release SoC design tools for its cores, preliminary versions of which are already available as open-source code. The tools aim to ease the job of integrating and testing the startup’s cores and blocks with each other using standard interfaces such as AXI and TileLink.

InCore aims to make money through a combination of licensing its intellectual property and providing design services. To date, it has worked mainly with HCL Technologies (Noida, India) to engage foundries. Intel taped out an IIT-Madras Shakti core on its 22-nm node with back-end design by HCL.

“We can go from concept to tape-out on any fab, even at 7-nm nodes,” said Madhusudan in an email exchange. “We like the Intel 22-nm process for IoT and sub-GHz-class devices.”

So far, InCore is not planning a venture investment round. “We can even go GA on one or two cores with our current revenue, but VCs are interested,” said Madhusudan. “Long term, there is enough strategic business in India for us, and we are the only CPU IP player around.”

The company’s chief technologist laid out the startup’s product plans at a RISC-V conference recently in Chennai, India, where InCore is based.

— Rick Merritt, Silicon Valley Bureau Chief, EE Times