Stanford's Kwabena Boahen discussed his work on brain-like processors after an annual gathering of neuromorphic experts at an AI workshop
Kwabena Boahen believes a better AI is imminent.
The Stanford professor is one of dozens of researchers working on chips modeled on the human brain. These chips promise orders-of-magnitude more computation than today’s processors at a fraction of the power consumption.
Braindrop, his latest chip, beats Nvidia’s Tesla GPUs in energy efficiency and also outpaces similar processors from other academic labs. He is already working to secure funding for a next-generation effort that could do even better, probably using ferroelectric FETs made at GlobalFoundries.
The problem with all the so-called neuromorphic chips is they are missing a key piece of the puzzle. Researchers believe they understand the analog process the brain uses for computing and the spiking neural network technique it uses for efficiently communicating among neurons. What they don’t know is how the brain learns.
It’s a fundamental piece of the algorithm that’s still missing. Researchers like Boahen are optimistic and hot on a trail marked by several good clues, but they still lack the equivalent of back-propagation, aka backprop.
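The spiking scheme these chips share can be illustrated with the simplest textbook model, a leaky integrate-and-fire neuron: it accumulates input, fires a discrete spike when it crosses a threshold, and stays silent otherwise. This is a generic didactic sketch, not the model of any chip named here, and all constants are illustrative.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the textbook unit of a
# spiking neural network. Constants are illustrative, not from any chip.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Integrate input current; emit a spike (1) whenever the membrane
    potential crosses threshold, then reset to v_reset."""
    v = v_reset
    spikes = []
    for i in input_current:
        # Leaky integration: potential decays toward rest, driven by input.
        v += dt * (-v / tau + i)
        if v >= v_thresh:
            spikes.append(1)  # one discrete spike event on the wire
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A constant drive yields a regular spike train: information travels as
# sparse events rather than continuous activations, which is where the
# energy savings come from.
train = simulate_lif([0.1] * 50)
print(sum(train))  # number of spikes in 50 time steps
```

The open question the researchers describe is not this dynamics but how to adjust the weights feeding such neurons, which is what backprop does for conventional deep nets.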
In the related field of deep learning, backprop is the heart of the training process. It’s painfully slow and requires banks of expensive CPUs or GPUs with tons of memory working offline, but it is delivering stellar results on a wide range of pattern recognition problems.
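The training process backprop drives can be shown at toy scale: a single sigmoid neuron fitted to logical OR by the chain rule and gradient descent. The task and all hyperparameters here are illustrative, and real deep-learning training runs this same rule over millions of weights.

```python
# Toy illustration of backprop: one sigmoid neuron trained on logical OR
# by stochastic gradient descent. Purely didactic; task, learning rate,
# and epoch count are arbitrary choices.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: inputs and targets for logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]
b = 0.0
lr = 1.0

for epoch in range(2000):
    for (x1, x2), target in data:
        # Forward pass.
        z = w[0] * x1 + w[1] * x2 + b
        y = sigmoid(z)
        # Backward pass: chain rule from squared error back to weights.
        err = y - target
        dz = err * y * (1 - y)  # d(loss)/dz
        w[0] -= lr * dz * x1
        w[1] -= lr * dz * x2
        b -= lr * dz

# After training, the neuron reproduces OR within rounding.
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # [0, 1, 1, 1]
```

The gradient step depends on smooth, differentiable activations, which is part of why the technique does not transfer directly to the all-or-nothing spikes of neuromorphic hardware.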
The problem with backprop and deep learning in general, say researchers like Boahen, is that it's artificial. It does not use neurons and techniques modeled on the brain, which crunches through supercomputer-like tasks on the equivalent of a 35-W power source.
“There’s a huge opportunity in this space. A lot of applications are not served by deep neural networks that run in the cloud with batch requests that create latency,” said Boahen in an interview with EE Times.
For example, neuromorphic chips could monitor and analyze vibrations on bridges in real time with a few microwatts from an energy harvester, only communicating when a human needs to take action. “We should think about how we can give everything — not just cloud services — a nervous system,” he said.
Boahen’s optimism was echoed at a recent workshop for leading researchers in the field.
“We want to expand the space of the types of computations we can perform. A lot of interesting computations are done in the brain that fall outside deep learning,” said Mike Davies who manages a neuromorphic computing lab at Intel.
“Deep learning uses a crude approximation of a neuron, but it’s useful and got traction thanks to backprop, which enables offline training. It’s not a neural-inspired idea, it’s stochastic gradient descent — but it works really well,” Davies added in a talk at the workshop.
Intel, IBM share progress on brain-like chips
Intel is trying to enlist researchers to work on its 14nm Loihi research chip to refine neuromorphic computing. The mainly digital chip packs 128,000 neurons, and Intel aims to release by June a 5U system packing 768 of the chips.
Davies noted work on three promising approaches to enable training with spiking neural nets. He also expressed optimism that Intel will soon find ways to train a version of the LSTM neural nets it already has running on Loihi.
“We want researchers to use [Loihi] to work on the fundamental problems of the algorithms and their programmability,” he said. Meanwhile, Intel continues work on the research chip to refine its architecture and offset its “relatively high cost because [they] integrate compute and memory,” he said.
Separately, researchers at Heidelberg University used wafer-scale integration to create their BrainScaleS device. They are adding programmable cores to mimic brain plasticity in a second-generation chip due next year.
Long term, spiking neural nets may use an analog coprocessor next to a traditional CPU that handles calibration, control and training, said Johannes Schemmel, a Heidelberg researcher who spoke at the event.
Mukesh Khare, vice president of semiconductor and AI research at IBM, pointed to Big Blue’s work on chips based on analog cores with non-volatile memories storing weights and arranged in crosspoint arrays. “The future of computing is bits, neurons and qubits,” he told attendees.
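The crosspoint idea Khare describes can be sketched numerically: weights are stored as conductances G in a crossbar, input activations are applied as row voltages V, and by Ohm’s and Kirchhoff’s laws each column’s current is the dot product of its conductances with the voltages, so a whole matrix-vector multiply happens in one analog step. The values below are illustrative, not from IBM hardware.

```python
# Sketch of an analog crosspoint (crossbar) multiply: column current
# I[j] = sum_i G[i][j] * V[i]. Here we just compute that sum digitally
# to show the math the analog array performs in one shot.

def crossbar_mvm(G, V):
    """Column currents of a crossbar with conductance matrix G
    (rows x cols) driven by row voltages V."""
    cols = len(G[0])
    return [sum(G[i][j] * V[i] for i in range(len(V))) for j in range(cols)]

G = [[0.5, 0.1],
     [0.2, 0.3]]   # one non-volatile memory cell per crosspoint
V = [1.0, 2.0]     # input activations encoded as voltages

currents = crossbar_mvm(G, V)
print([round(c, 6) for c in currents])  # [0.9, 0.7]
```

Because the weights sit in the array itself, no data shuttles between separate memory and compute, which is the energy argument for this style of hardware.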
Memristor arrays are another alternative under extensive research. Separately, veteran researcher Steve Furber described SpiNNaker, the world’s largest system for spiking neural nets, with a million Arm cores in 11 cabinets at the University of Manchester, one of many systems in Europe’s Human Brain Project.
To date, SpiNNaker has been used mainly for neuroscience applications. However, the challenge of creating realistic models of brain functions is so daunting that no application, apart from its own debug routines, has used more than 10% of the system’s resources.
“The brain remains one of the great frontiers of science. We still fundamentally don’t know the information processing principles going on in our head,” Furber said in a talk.
Stanford’s Boahen sees the situation as a glass not half empty but overflowing.
“We know a lot about what’s going on in the brain. We have been recording neural activity for a century, and we now can do it across tens of thousands of neurons. We are deluged by all this information and it’s a bit overwhelming…We are drowning in information about the brain,” he said.
Boahen shares his plans and his past in computing
Braindrop, the latest chip from Boahen’s lab, packs 4,096 neurons in a 2x2-mm device that runs on as little as 150 microwatts and was made in a 28nm FD-SOI process. Boahen considers it “an intermediate step to learning at the level of spiking neural networks.”
Nevertheless, it was compelling enough to inspire two PhD students who worked on it to form a startup. Femtosense snagged $1.1 million in venture funding in August 2018, and Boahen sits on its board. Still in stealth mode, the company is expected to build a larger chip using the Braindrop concepts, perhaps packing as many as a million neurons to handle spiking and deep-learning networks.
Boahen’s lab is working on both the design and funding for its next research chip. He won’t discuss its architecture, given that papers on it could be as much as five years away, but it will aim to extend current capabilities for automatically synthesizing network models on a chip.
Meanwhile, researchers continue to study ways to use ReRAM and MRAM cells in neuron arrays. They also continue working to hammer out theories for learning on spiking networks.