ReRAM May Be Answer to Edge-learning Systems

Article by Gary Hilson

Algorithms may be key to effectively using ReRAM devices in edge-learning systems, turning a ReRAM disadvantage to good use.

Sometimes a problem can become its own solution.

For CEA-Leti scientists, it means that traits of resistive-RAM (ReRAM) devices previously considered “non-ideal” may be the answer to overcoming barriers to developing ReRAM-based edge-learning systems, as outlined in a recent Nature Electronics paper titled “In-situ learning using intrinsic memristor variability via Markov chain Monte Carlo sampling.” The paper describes how ReRAM, or memristor, technology can be used to create intelligent systems that learn locally at the edge, independently of the cloud.

Thomas Dalgaty

Thomas Dalgaty, a CEA-Leti scientist at France’s Université Grenoble, explained how the team was able to navigate the intrinsic non-idealities of ReRAM technology—among them, the fact that the learning algorithms used in current ReRAM-based edge approaches cannot be reconciled with the randomness, or variability, of device programming. In a telephone interview with EE Times, he said the solution was to implement a Markov chain Monte Carlo (MCMC) sampling learning algorithm in a fabricated chip that acts as a Bayesian machine-learning model, actively exploiting memristor randomness.

For the purposes of the research, Dalgaty said it’s important to clearly define what is meant by an edge system. Not only is it unlikely to be connected to an essential cloud computing resource with large memory and labeled data, it also lacks access to a large energy supply. This matters because one of the appeals of using ReRAM at the edge is the memory’s low power consumption, he said. “At the edge, you have lots of unlabeled data that you have to make sense of yourself locally.”

Machine-learning models are normally trained on general-purpose hardware based on a von Neumann architecture, which isn’t well suited to edge learning, said Dalgaty, because an edge-learning system is distributed, energy-constrained and memory-constrained. “The reason ReRAM is interesting for these kinds of systems is because once you start to compute with the analog properties of the devices, you don’t care about storing information in these so-called von Neumann memory sectors and transporting it to processing centers.”

Although there’s a lot of potential to reduce the energy used in these edge systems, he said, ReRAM devices are too random for implementing standard machine-learning algorithms. The memristor variability means you can’t make a specific, deterministic change to the parameters of the learning model, and that variability is what needs to be overcome.

CEA-Leti researchers implemented a Markov chain Monte Carlo (MCMC) algorithm in a fabricated chip to actively exploit memristor randomness in ReRAM for use in edge-learning systems. (Courtesy CEA-Leti)

The researchers had been banging their heads against the wall trying to mitigate this memristor variability so they could take advantage of ReRAM device efficiencies, said Dalgaty, and then realized the answer was to use the variability, which is essentially random, instead of fighting against it. Implementing an MCMC sampling learning algorithm in a fabricated chip harnessed the randomness without resorting to energy-intensive mitigation techniques.
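To illustrate the idea, here is a minimal Python sketch of Metropolis-style MCMC learning in which the proposal step stands in for re-programming a ReRAM cell. This is not CEA-Leti’s implementation: the logistic model, the function names (mcmc_learn, propose) and the Gaussian model of programming noise are illustrative assumptions; in the actual hardware the “proposal” comes for free from the device’s intrinsic cycle-to-cycle conductance spread.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(w, X, y):
    # Bernoulli log-likelihood of a simple logistic model, a stand-in for the
    # Bayesian model whose parameters would be stored as device conductances.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    eps = 1e-9
    return np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def propose(w, sigma=0.1):
    # In hardware, re-programming a cell lands on a new conductance drawn from
    # the device's intrinsic cycle-to-cycle distribution; here that spread is
    # modeled (as an assumption) by Gaussian noise around the current value.
    return w + rng.normal(0.0, sigma, size=w.shape)

def mcmc_learn(X, y, n_samples=2000):
    w = rng.normal(0.0, 0.1, size=X.shape[1])
    ll = log_likelihood(w, X, y)
    samples = []
    for _ in range(n_samples):
        w_new = propose(w)
        ll_new = log_likelihood(w_new, X, y)
        # Metropolis acceptance: keep parameter sets that explain the data
        # better, occasionally accepting worse ones to explore the posterior.
        if np.log(rng.random()) < ll_new - ll:
            w, ll = w_new, ll_new
        samples.append(w.copy())
    return np.array(samples)
```

In this framing, the randomness that prevents precise weight updates becomes the exploration mechanism of the sampler, which is the inversion the researchers describe.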

By leveraging the randomness instead of suppressing it, he said, highly efficient in-situ machine learning is made possible by applying nanosecond voltage pulses to nanoscale ReRAM memory devices. In fact, compared with a standard CMOS implementation of the algorithm, the approach requires five orders of magnitude less energy (the research team is using hafnium dioxide technology, which is CMOS-compatible). Dalgaty said a real-world example of this sort of edge computing system could be an implanted medical device that locally updates its operation based on the evolving state of a patient. The research team has already experimentally applied its ReRAM-based MCMC to train a multilayer Bayesian neural network to detect heart arrhythmias from electrocardiogram recordings, and it achieved a better detection rate than a standard neural network running on a von Neumann computing system.
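A Bayesian model trained this way yields a collection of sampled parameter sets rather than a single trained network. As a hedged continuation of the sketch above (the predict function and burn-in value are illustrative, not from the paper), classification of a new input such as an ECG segment could average the outputs of many sampled models:

```python
import numpy as np

def predict(samples, X_new, burn_in=500):
    # Posterior predictive averaging: each retained sample corresponds to one
    # programmed state of the array; the prediction averages over all of them
    # instead of relying on a single point estimate of the weights.
    kept = samples[burn_in:]
    probs = 1.0 / (1.0 + np.exp(-X_new @ kept.T))  # (n_inputs, n_samples)
    return probs.mean(axis=1)                       # class probability per input
```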

This is one example of an application being looked at, he said, but as with all research of this nature, there is a lot of work to be done before the approach finds commercial applications in the real world, and it’s not yet clear what all of them might be. Ultimately, the hope is that it will enable machine learning at the edge without the large amounts of energy and memory currently required.

ReRAM is seen as a good candidate for artificial intelligence (AI) and machine-learning applications, in part for its potential to mimic how the human brain learns and processes information at the neuron and synapse level. Scaling neuromorphic architectures is also expected to benefit from ReRAM devices because they are significantly smaller and more energy-efficient than the DRAM, flash, and even high-bandwidth memory (HBM) used in today’s AI data centers.

ReRAM makers such as Weebit Nano have devoted time and resources to AI through recent research partnerships, including one with the Non-Volatile Memory Group of the Indian Institute of Technology Delhi (IITD) on a collaborative project that will apply Weebit’s silicon oxide (SiOx) ReRAM technology to computer chips used for AI. More recently, researchers at Politecnico di Milano (the Polytechnic University of Milan) presented joint research with the company in a paper detailing a novel AI self-learning demonstration based on Weebit’s SiOx ReRAM, which showed how a brain-inspired AI system could perform unsupervised learning tasks with high accuracy.

Weebit’s ReRAM cell consists of two metal layers with a silicon oxide (SiOx) layer between them, made of materials that can be used in existing production lines, making it a potentially cost-effective, low-power option for AI and machine-learning architectures. (Courtesy Weebit Nano)

Weebit Nano already has a long-term partnership with CEA-Leti for development of its ReRAM technology, but its research efforts for neuromorphic applications are a lower priority than its embedded ReRAM program, which is critical to driving company revenue, and its focus on responding to customer demand for discrete ReRAM memory components. It is not the only ReRAM maker interested in AI opportunities, however—in 2019, a consortium dubbed SCAiLE (SCalable AI for Learning at the Edge) that included ReRAM maker Crossbar was formed to create AI platforms using ReRAM.

Where memory will reside in AI and machine-learning architectures has been a key area of focus regardless of memory type; big-data applications have already driven the need for architectures that put memory closer to compute resources. AI and machine learning have magnified that need because they perform vast numbers of multiply-accumulate operations over the matrices that make up neural networks. Because machine learning learns by working on the data, there is a strong impetus to bring compute and memory closer together, which will ultimately save power and improve performance.
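This is why ReRAM crossbars are attractive: a weight matrix stored as conductances can perform a whole vector-matrix multiply in place. The following is a minimal idealized sketch of that principle, not any vendor’s design; the function name crossbar_vmm and the example values are assumptions for illustration.

```python
import numpy as np

def crossbar_vmm(voltages, conductances):
    # Idealized ReRAM crossbar: each column current is the sum of
    # voltage * conductance contributions along its rows (Ohm's law plus
    # Kirchhoff's current law), i.e. one analog multiply-accumulate per column.
    # Real arrays add wire resistance, read noise and limited precision.
    return voltages @ conductances

# Example: a 4-input, 3-output layer stored as conductances (arbitrary units).
G = np.array([[1.0, 0.2, 0.5],
              [0.3, 0.8, 0.1],
              [0.6, 0.4, 0.9],
              [0.2, 0.7, 0.3]])
v = np.array([0.1, 0.0, 0.2, 0.1])   # input encoded as read voltages
print(crossbar_vmm(v, G))            # output currents ~ weighted sums
```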

This article was originally published on EE Times.

Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.
