Reexen's Hongjie Liu discusses the company’s integrated sensing and mixed-signal in-memory computing architecture.
Reexen, a neuromorphic engineering startup based in Shenzhen, Shanghai, and Chengdu, China, and in Zurich, Switzerland, develops ultra-low-power audio and visual signal-processing chips based on its inference sensing and in-memory computing architecture and has penetrated the markets for true wireless stereo (TWS) headsets and other wearables. The startup targets AR/VR/XR, AIoT, and autonomous driving applications in which its high-efficiency chips will meet strong demand.
In July 2021, Reexen secured about €15 million in a Series A round led by Inno-Chip (an investment firm of Omnivision, the largest fabless company in China), Spinnotec, and Miracleplus. It has also received R&D funding as part of two European Union AI collaborative projects, and it plans to expand its European R&D center in 2022.
Hongjie Liu, co-founder and CEO of Reexen, explained the company’s integrated sensing and mixed-signal in-memory computing architecture in a recent interview with EE Times China.
Inference sensing and in-memory computing
In traditional computing based on the von Neumann architecture, the computing power of the processor is limited by the memory unit in terms of bandwidth and latency. This problem is particularly prominent in edge-computing scenarios such as AR/VR and autonomous driving, which require high bandwidth and near-real-time performance.
The “memory and computing integrated” architecture that has emerged in recent years essentially integrates memory and computing units more closely to reduce unnecessary latency and energy consumption caused by data transfer. For IoT and edge computing applications, meanwhile, smart sensors are required to gather information from the physical world. Because sensing, memory, and computing are essential components of smart IoT devices, adding sensing on top of memory and computing makes sense to get better performance, power consumption, and area (PPA).
Reexen’s technology is based on neuromorphic computing concepts. Its founders were Ph.D. students under ETH Zurich professor Tobi Delbrück, a pioneering researcher in the field of neural perception computing and dynamic vision. Reexen developed the neural perception computing theory into analog preprocessing and in-memory computing technologies, progressing beyond visual processing to a variety of sensor-fusion applications.
Reexen’s CEO told EE Times China that its architecture consists of two parts: analog signal preprocessing (ASP) and analog/mixed-signal computing-in-memory (CIM) (Figure 1).
The front-end ASP directly extracts signal characteristics from the original data to reduce information redundancy, thereby achieving a higher level of effective information extraction. The traditional signal-perception process requires an analog front end (AFE) for processing, then conversion into digital signals by an ADC and processing by a DSP. Analog preprocessing, however, can directly extract signal features in the analog domain, with power consumption only one-tenth that of traditional perception processing.
After preprocessing via the ASP, the characteristic data is converted to digital by the ADC, greatly reducing the ADC’s power-consumption requirements.
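The data-reduction argument can be illustrated with a toy numerical sketch. This is not Reexen's circuit: it simply models "analog feature extraction" as keeping a handful of band energies per audio frame, then counts how many values the ADC would have to convert with and without that step. The sample rate, frame size, and feature count are illustrative assumptions.

```python
import numpy as np

# Toy illustration of the ASP idea (not Reexen's actual circuit):
# instead of digitizing every raw audio sample, extract a few
# frame-level features "in the analog domain" and digitize only those.

fs = 16_000                      # assumed sample rate (Hz)
t = np.arange(fs) / fs           # one second of signal
signal = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(fs)

frame = 256                      # samples per analysis frame (assumed)
n_features = 4                   # features kept per frame (e.g., band energies)

frames = signal[: fs // frame * frame].reshape(-1, frame)
# Stand-in for analog feature extraction: split each frame's spectrum
# into n_features bands and keep only the per-band energies.
spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
bands = np.array_split(spectra, n_features, axis=1)
features = np.stack([b.sum(axis=1) for b in bands], axis=1)

raw_values = frames.size         # values an ADC would digitize per second
feature_values = features.size   # values after feature extraction
print(f"ADC conversions/s: raw={raw_values}, features={feature_values}, "
      f"reduction={raw_values / feature_values:.0f}x")
```

In this toy setup the ADC digitizes 64× fewer values (frame length divided by feature count), which is the mechanism behind the relaxed ADC power requirement the article describes; the actual ratio depends on the feature set.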
The second innovation in the Reexen architecture is mixed-signal CIM. Based on a patented technology developed by Reexen, the CIM architecture integrates sensing, memory, and computing into a unified unit. Performing mixed-signal computing inside the memory breaks through the limitation of traditional signal-sampling frequency and the “memory wall” bottleneck of the von Neumann architecture, thereby improving the area, power, and time efficiencies of computing according to the application requirements.
Signal-processing applications such as voice, vision, and bio-electric sensing increasingly require deep-learning algorithms. If processors based on the traditional von Neumann architecture perform these data operations, more than 70% of the power consumption comes from moving the data.
Reexen’s CIM architecture effectively solves the transmission-speed bottleneck between the logic and memory to improve parallel computing performance, with a 4× to 5× increase in the performance/power-efficiency ratio.
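A crude way to see why in-memory computing helps is to count data movement in a matrix-vector multiply, the core operation of neural-network inference. The sketch below is a conceptual model, not a description of Reexen's circuits: in the von Neumann version every weight crosses the memory bus, while in the CIM version the multiply-accumulates happen where the weights are stored and only the inputs and outputs cross the array boundary.

```python
import numpy as np

# Toy model contrasting data movement in a von Neumann matrix-vector
# multiply with a compute-in-memory (CIM) one. Transfer counts are a
# crude proxy for data-movement energy, not a chip measurement.

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))   # weight matrix "stored in memory"
x = rng.standard_normal(128)         # input activation vector

def von_neumann_mv(W, x):
    """Fetch every weight across the memory bus, one MAC at a time."""
    transfers = 0
    y = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            w = W[i, j]              # weight fetched memory -> processor
            transfers += 1
            y[i] += w * x[j]
    return y, transfers

def cim_mv(W, x):
    """MACs happen inside the array: only x in and y out cross the boundary."""
    y = W @ x                        # accumulation stays in-array
    transfers = x.size + y.size
    return y, transfers

y_vn, t_vn = von_neumann_mv(W, x)
y_cim, t_cim = cim_mv(W, x)
assert np.allclose(y_vn, y_cim)      # same result, very different traffic
print(f"boundary transfers: von Neumann={t_vn}, CIM={t_cim}")
```

For this 64×128 layer, the weight-fetch traffic (8,192 transfers) dwarfs the input/output traffic (192 transfers), which is the bottleneck the CIM approach removes.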
Compared with similar in-memory or near-memory computing architectures on the market, Reexen’s mixed-signal architecture has a higher performance/power-efficiency ratio (at 20 TOPS/W) and performance/area-efficiency ratio (at 4 TOPS/mm2), helping to ensure computing accuracy, according to the company.
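To put the quoted ratios in perspective, a back-of-envelope calculation shows what they would imply for a hypothetical always-on workload; the workload figures below are illustrative assumptions, not Reexen data.

```python
# Back-of-envelope implications of the quoted 20-TOPS/W and 4-TOPS/mm2
# figures. The vision workload is a hypothetical example.

tops_per_watt = 20.0    # quoted power-efficiency ratio
tops_per_mm2 = 4.0      # quoted area-efficiency ratio

# Assumed always-on vision task: 0.1 GOP per frame at 30 fps.
workload_tops = 0.1e9 * 30 / 1e12        # = 0.003 TOPS sustained
power_w = workload_tops / tops_per_watt  # compute power at peak efficiency
area_mm2 = 20.0 / tops_per_mm2           # array area for the 20-TOPS top end

print(f"~{power_w * 1e6:.0f} uW for the vision task, "
      f"~{area_mm2:.1f} mm2 of array for 20 TOPS")
```

At peak efficiency the assumed task would cost on the order of 150 µW of compute power, and the 20-TOPS top of the range would fit in about 5 mm² of array; real figures depend on utilization and the rest of the chip.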
Ultra-low-power mixed-signal processing chips
Reexen’s chip product family comprises three series. The ADA10X series focuses on voice and bio-electric signal recognition and processing, mainly for use in wearables (such as TWS earphones, smartwatches, and health-monitoring bracelets), home health devices, and small IoT monitoring equipment. The first chip for voice signal processing, the ADA100, is in mass production and is claimed to achieve ultra-low power consumption (20 µW for voice-activity detection and 160 µW for keyword spotting) in offline voice wake-up + recognition, external sound field recognition, and in-ear sound field recognition. Its application reference is shown in Figure 2.
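The 20-µW figure becomes tangible when set against a typical TWS battery. The cell capacity below is an assumed, typical value, not a Reexen specification, and the calculation covers the detection function alone.

```python
# Rough battery-life arithmetic for the quoted 20-uW voice-activity
# detection figure. The 40-mAh cell is an assumed, typical TWS value.

battery_mah = 40.0       # assumed earbud cell capacity
battery_v = 3.7          # nominal Li-ion cell voltage
vad_power_w = 20e-6      # quoted voice-activity-detection power

energy_j = battery_mah / 1000 * 3600 * battery_v   # cell energy in joules
hours = energy_j / vad_power_w / 3600
print(f"~{hours:.0f} h of always-on VAD from a 40-mAh cell (detection alone)")
```

The detection function alone would draw the assumed cell down over roughly 7,400 hours, i.e., it is a negligible fraction of the earbud's power budget; in practice the radio, audio path, and other loads dominate battery life.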
The second series, ADA20X, targets ultra-low-power visual signal processing and includes the ADA200 ultra-low-power visual co-processor and the ADA210 medium-computing-power visual system-on-chip (SoC). It can be used in household battery-powered IP cameras, smart-home appliances (such as smart locks/doorbells, air conditioners, refrigerators, and TVs), and personal devices (including AR/VR gear, mobile phones, and smartwatches). Reexen claims ADA20X can achieve ultra-low power consumption (1–3 mW) when performing facial recognition, object recognition, environmental sensing, eye tracking, and other functions, while effectively protecting user privacy. It is expected to reach mass production by the end of 2022.
Using AR/VR as an example, the startup said the chips can reduce power consumption by 4× to 5× compared with existing solutions and can extend the product’s battery life. Its application reference is shown in Figure 3.
In addition, ADA20X can be customized into application-specific chips that tailor compute performance and interfaces to various application needs. Its computing power ranges from 0.3 TOPS to 20 TOPS, while its power consumption can be as low as the microwatt range, meeting the requirements of tablet computers, wearable devices, smart homes, AR/VR, battery-powered IP cameras, ADAS/autonomous driving, and other visual applications. Traditional digital AI vision chips, by contrast, require a large computing area (large arithmetic units) and consume considerable power moving data.
ADA20X consumes only one-tenth the power of a traditional digital chip and therefore can achieve a higher power-efficiency ratio. In AR application scenarios, if high-power–efficiency computing chips are used to perform such functions as gesture recognition and eye tracking in video processing, the operational duration of AR devices can be increased and the user experience enhanced, according to Reexen.
In summary, ADA20X has the following features:
• The computing-in-memory architecture works like the human brain and breaks the system limitation of the memory wall caused by the separation of storage and computing with traditional von Neumann architecture. This is the key breakthrough of neuromorphic computing.
• The architecture offers a high power-efficiency ratio because its power consumption will not increase sharply in tandem with the complexity of the application’s computing tasks. It thereby addresses the dual problems of high computing power consumption and heat dissipation.
• It supports both convolutional and transformer neural networks (CNNs and TNNs), as well as neuromorphic architectures such as spiking neural networks (SNNs). The architecture is flexible, and the efficiency of array computing does not degrade across these network types.
• It can support both the event input of dynamic vision sensors (such as event cameras) and the image input of traditional cameras.
• Configured with an imaging signal processor (ISP) based on in-memory computing, the ADA20X can support event and image preprocessing simultaneously.
The third series, ADA30X, is positioned as a high-performance SoC designed to perform multisensor (LiDAR plus vision) fusion signal processing. It targets AR/VR control, primary autonomous-driving control below Level 3 (L3), and secondary autonomous-driving control above L3. The ADA30X is still in the planning stage and won’t be in mass production until the end of 2023.
EU collaborative research projects
Reexen has a research and development center in Switzerland and actively participates in cooperative EU research projects. It is currently involved in two such programs: StorAIge and IMOCO4.E.
The StorAIge initiative aims to develop high-performance, ultra-low–power, and safe SoC solutions to improve the EU’s competitive advantage in AI commercialization for edge applications. The total budget for the project is estimated at nearly €100 million.
The objective of the second project, IMOCO4.E, is to provide vertically distributed edge-to-cloud intelligence for machines, robots, and other systems with control components. The project’s total budget exceeds €30 million.
For both projects, Reexen has been participating as a technology solution provider. Other EU partners in the projects include research institutions such as Imec (Belgium), Fraunhofer Institute (Germany), and Grenoble University (France); semiconductor manufacturers including STMicroelectronics and X-Fab; and other European technology companies.
International R&D team
Reexen’s competitive strengths as a startup are its chip technology intellectual property and its experienced international R&D team, which includes scientists from Imec, ETH Zurich, Peking University, Tsinghua University, and other research institutions, as well as senior engineers from top semiconductor manufacturers such as Qualcomm, HiSilicon, Samsung, and Intel. Thus far, Reexen has obtained 30+ patents, published more than 20 ISSCC/JSSC papers, and accumulated extensive experience in chip design and mass production.
The nearly €15 million in Series A funding that closed in July will be used primarily for product development and to expand the R&D team. Reexen currently has 60 employees and has five new projects on the agenda for this year. The planned expansion of its Swiss R&D center will increase the number of experienced European engineers and scientists.
According to Hongjie Liu, Reexen’s vision is to become the ADI of the AI age. ADI (Analog Devices) is synonymous with analog and mixed-signal chips, and Reexen’s inference sensing and in-memory computing are a natural extension of traditional analog and mixed-signal processing.