Can Neurotechnology Help Aphasia Patients?

Article by: Nitin Dahad

A look at possible avenues for neurotechnology to help address aphasia, a speech and communication disorder resulting from stroke or brain injury.

Neurotechnology solutions developed by companies such as BIOS Health could be the key to treating chronic conditions such as aphasia. But adoption remains very much in the future.

Anyone who is a Bruce Willis fan will now be aware of a condition called aphasia, as news emerged yesterday that the actor has been diagnosed with this communication disorder.

The news hits a personal note for me. My family has been trying to come to terms with aphasia for the last few weeks, after my Mum suffered a major stroke in February. It’s painful to see a person who was extremely active in all kinds of social and community networks suddenly unable to express her wants, needs, and desires.

According to the National Aphasia Association (NAA) in the U.S., aphasia affects two million Americans. Aphasia is caused by a brain injury such as stroke or head trauma. In the case of my Mum, it came on suddenly due to the stroke. So, what does it do to a person? According to the NAA, a person with aphasia “may have difficulty producing words, but their intelligence is intact; their ideas, thoughts, and knowledge are still in their head — it’s just communicating those ideas, thoughts, and knowledge that is interrupted”.

Ever since we first met the rehabilitation staff at the hospital, I’ve let my imagination run wild about what kind of technologies might help my Mum get back to whatever a new normal might be, and enable some form of quality of life where she can engage with people better. Searching the depths of my knowledge, all I could come up with were images of Stephen Hawking with his speech synthesis system, and of Elon Musk’s Neuralink, which uses brain implants. I also remembered chatting last year to a company called BIOS Health about reading brain signals, processing them, and writing back to the brain; in its case, to help treat chronic disease.

I looked at those various avenues. First of all, Stephen Hawking’s revived ability to communicate was enabled by ACAT (Assistive Context-Aware Toolkit), an open-source toolkit developed in-house by Intel Labs. Its predictive text functionality is powered by Presage, an intelligent predictive text engine, and the integration with Presage is through Windows Communication Foundation. The toolkit enables users to communicate with others through keyboard simulation, word prediction, and speech synthesis.

From what I understood, this approach requires some kind of switch, such as a blink sensor, to select letters and words and create the desired output.
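To make the scanning-and-prediction idea concrete, here is a minimal Python sketch of how a single binary switch could drive an on-screen keyboard with word completion. It is purely illustrative and is not Intel’s ACAT code: the row layout, the toy completion table (standing in for an engine such as Presage), and the simulated switch presses are all assumptions made for the example.

```python
# Illustrative only: a single-switch scanning keyboard with naive word
# completion. Not Intel's ACAT code; the layout, the toy completion table
# (standing in for a real engine such as Presage) and the simulated switch
# presses are all made up for this sketch.

import itertools

ROWS = ["abcde", "fghij", "klmno", "pqrst", "uvwxyz "]

# Toy prefix-to-word table; a real system would query a predictive engine.
COMPLETIONS = {"hel": "hello", "tha": "thanks", "wat": "water"}


def scan_select(options, switch_events):
    """Highlight each option in turn; return the one highlighted when the
    switch fires. switch_events yields False (no press) or True (press)."""
    for option, pressed in zip(itertools.cycle(options), switch_events):
        if pressed:
            return option
    return None  # the switch input ran out


def type_with_switch(switch_events, max_chars=10):
    """Build text two selections at a time (row, then character) and accept
    a predicted word whenever the typed prefix matches the table."""
    text = ""
    for _ in range(max_chars):
        row = scan_select(ROWS, switch_events)
        if row is None:
            break
        char = scan_select(list(row), switch_events)
        if char is None:
            break
        text += char
        for prefix, word in COMPLETIONS.items():
            if text.endswith(prefix):
                return text[: -len(prefix)] + word  # prediction accepted
    return text


if __name__ == "__main__":
    # Simulated presses: False moves the highlight on, True selects.
    # This sequence spells "hel", which completes to "hello".
    presses = iter([
        False, True,                        # select row "fghij"
        False, False, True,                 # select 'h'
        True,                               # select row "abcde"
        False, False, False, False, True,   # select 'e'
        False, False, True,                 # select row "klmno"
        False, True,                        # select 'l' -> "hel" -> "hello"
    ])
    print(type_with_switch(presses))
```

In a real deployment, the simulated presses would come from a sensor such as the blink switch mentioned above, and the finished text would be handed to a speech synthesizer.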

More recently, brain-computer interfaces and neurotechnology have evolved significantly. Neuralink, for example, uses a neural implant to connect to neurons in the brain, record their activity, and process those signals in real time. The idea is then to decode the signals to infer the user’s intent and send that over Bluetooth to the user’s computer, to deliver useful information or to control external devices.

To me, it seems that reading brain signals is a key part of solving any aphasia issue. One company working on treating chronic conditions by reading and writing electrical signals to the body’s neural network is BIOS Health. Its approach is to use artificial intelligence and machine learning to translate the “language” of the nervous system to help treat chronic diseases.

Emil Hewage, co-founder and CEO of BIOS Health

Great — but could these signals also be used to help interpret speech signals from the brain, I asked myself? Well, I posed the question to Emil Hewage, co-founder and CEO of BIOS Health. He explained, very helpfully, that we’re not quite there yet when it comes to enabling patients to access neural signals and decode them.

Looking at the concept of neurotechnology from a higher level, Hewage said, “Over time, what we’ll start to recognize is that the most valuable part of the emerging neurotechnology landscape are the products and applications that translate the data from whatever biological information there is to that end application.”

He said this comes down to building an information translation tool, or a language translator. He continued, “That’s the value creating aspect at the heart of all these products. That’s the bit that needs to be developed and unlocked. We’re trying to bring more of these specifically seamless and health and quality of life providing innovations to market. So, for example, the models that translate what works today, we would want to make sure they are available, and then as new implantables come to market, we would want to make sure that the higher resolution of data in can lead to higher quality experiences. Hence the more data you can get, the more fully the patient would be able to talk. And the moment when we can interface with the other side of the biology, the muscular side, you could move from a machine voice to a human one.”

He said BIOS Health’s core purpose is to become better at being the reading and writing layer. “From whatever you can afford today for reading, put that through the best models we’ve got, and deploy it through the best forms of writing back.”

He points to something they call the neurotechnology stack framework, which comprises reading and writing tools and a computation layer. The reading tools capture the neural signal data and feed the computation layer, which carries out data processing and analysis and regulates the stimulation outputs; the writing tools then take those outputs and create the desired effect on the nervous system or neural state. The idea is that software developers can then build higher-level applications on this core stack, according to specific use cases and needs.
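As a rough illustration of that layered idea, a read-compute-write pipeline might be sketched as below. This is not BIOS Health’s software or API; the class names, the synthetic signal data, and the stimulation parameters are all invented for the example.

```python
# Purely illustrative sketch of a "reading tools -> computation layer ->
# writing tools" stack as described above. These classes and parameters are
# invented for the example and do not represent BIOS Health's actual API.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class NeuralFrame:
    """One window of raw samples captured from a recording channel."""
    channel: int
    samples: List[float]


class ReadingLayer:
    """Reading tools: capture neural signal data (synthetic here)."""
    def read(self) -> List[NeuralFrame]:
        return [NeuralFrame(channel=c, samples=[0.1 * c * i for i in range(8)])
                for c in range(4)]


class ComputationLayer:
    """Computation layer: process the frames and regulate the stimulation
    output. A real system would run trained ML models; this just averages."""
    def decode(self, frames: List[NeuralFrame]) -> Dict[str, float]:
        mean_activity = sum(sum(f.samples) for f in frames) / max(len(frames), 1)
        # Map decoded activity to a made-up stimulation amplitude, capped at 2 mA.
        return {"amplitude_ma": round(min(mean_activity, 2.0), 3)}


class WritingLayer:
    """Writing tools: apply the computed output back to the nervous system
    (stubbed out with a print statement)."""
    def stimulate(self, params: Dict[str, float]) -> None:
        print(f"stimulating at {params['amplitude_ma']} mA")


def run_application() -> None:
    """A higher-level 'application' built on top of the core stack."""
    frames = ReadingLayer().read()
    params = ComputationLayer().decode(frames)
    WritingLayer().stimulate(params)


if __name__ == "__main__":
    run_application()
```

In a real product, each layer would of course be far more sophisticated: implanted or wearable electrodes on the reading side, trained models in the computation layer, and regulated stimulation hardware on the writing side.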

From this conversation, I could certainly see the potential for neurotechnology solutions to address conditions such as aphasia, but that remains very much in the future. There wasn’t more detail on what caused Bruce Willis’ aphasia. But for today, for him and for my Mum, we’ll need to rely on current tools for augmentative and alternative communication, plus the various apps that have evolved to train patients to speak again. And of course, we will still be relying on real specialists in speech and language therapy. I am open to alternative suggestions and ideas from our readers.

This article was originally published on EE Times.

Nitin Dahad is a correspondent for EE Times, EE Times Europe and also Editor-in-Chief of embedded.com. With 35 years in the electronics industry, he’s had many different roles: from engineer to journalist, and from entrepreneur to startup mentor and government advisor. He was part of the startup team that launched 32-bit microprocessor company ARC International in the US in the late 1990s and took it public, and he was a co-founder of The Chilli, which influenced much of the tech startup scene in the early 2000s. He’s also worked with many of the big names—including National Semiconductor, GEC Plessey Semiconductors, Dialog Semiconductor and Marconi Instruments.
