
Wednesday, October 18, 2017

Stanford scientists seek to speak the brain’s language to heal its disease

OCTOBER 17, 2017  BY NATHAN COLLINS

Brain-machine interfaces now treat neurological disease and change the way people with paralysis interact with the world. Improving those devices depends on getting better at translating the language of the brain.

Image credit: Guo Mong


Since the 19th century at least, humans have wondered what could be accomplished by linking our brains – smart and flexible but prone to disease and disarray – directly to technology in all its cold, hard precision. Writers of the time dreamed up intelligence enhanced by implanted clockwork and a starship controlled by a transplanted brain.
While those visions remain far-fetched, the melding of brains and machines for treating disease and improving human health is now a reality. Brain-machine interfaces that connect computers and the nervous system can now restore rudimentary vision in people who have lost the ability to see, treat the symptoms of Parkinson’s disease and prevent some epileptic seizures. And there’s more to come.
But the biggest challenge in each of those cases may not be the hardware that science-fiction writers once dwelled on. Instead, it’s trying to understand, on some level at least, what the brain is trying to tell us – and how to speak to it in return. Like linguists piecing together the first bits of an alien language, researchers must search for signals that indicate an oncoming seizure or where a person wants to move a robotic arm. Improving that communication in parallel with the hardware, researchers say, will drive advances in treating disease or even enhancing our normal capabilities.
Stanford’s Jaimie Henderson and Krishna Shenoy are part of a consortium working on an investigational brain-machine interface

Listening to the language of the brain

The scientific interest in connecting the brain with machines began in earnest in the early 1970s, when computer scientist Jacques Vidal embarked on what he called the Brain Computer Interface project. As he described in a 1973 review paper, it comprised an electroencephalogram, or EEG, for recording electrical signals from the brain and a series of computers to process that information and translate it into some sort of action, such as playing a simple video game. In the long run, Vidal imagined brain-machine interfaces could control “such external apparatus as prosthetic devices or spaceships.”
There are other very real and pressing needs that brain-machine interfaces can solve.
PAUL NUYUJUKIAN
Assistant Professor of Bioengineering and Neurosurgery (Image Credit: L.A. Cicero)

Although brain-controlled spaceships remain in the realm of science fiction, the prosthetic device is not. Stanford researchers including Krishna Shenoy, a professor of electrical engineering, and Jaimie Henderson, a professor of neurosurgery, are bringing neural prosthetics closer to clinical reality. Over the course of nearly two decades, Shenoy, the Hong Seh and Vivian W. M. Lim Professor in the School of Engineering, and Henderson, the John and Jene Blume–Robert and Ruth Halperin Professor, developed a device that, in a clinical research study, gave people paralyzed by accident or disease a way to move a pointer on a computer screen and use it to type out messages. In similar research studies, people were able to move robotic arms with signals from the brain.
Reaching those milestones took work on many fronts, including developing the hardware and surgical techniques needed to physically connect the brain to an external computer.
But there was always another equally important challenge, one that Vidal anticipated: taking the brain’s startlingly complex language, encoded in the electrical and chemical signals sent from one of the brain’s billions of neurons on to the next, and extracting messages a computer could understand. On top of that, researchers like Shenoy and Henderson needed to do all that in real time, so that when a subject’s brain signals the desire to move a pointer on a computer screen, the pointer moves right then, and not a second later.
One of the people that challenge fell to was Paul Nuyujukian, now an assistant professor of bioengineering and neurosurgery. First as a graduate student with Shenoy’s research group and then as a postdoctoral fellow with the lab jointly led by Henderson and Shenoy, Nuyujukian helped build and refine the software algorithms, termed decoders, that translate brain signals into cursor movements.
Actually, “translate” may be too strong a word – the task, as Nuyujukian put it, was a bit like listening to a hundred people speaking a hundred different languages all at once and then trying to find something, anything, in the resulting din one could correlate with a person’s intentions. Yet as daunting as that sounds, Nuyujukian and his colleagues found some ingeniously simple ways to solve the problem, first in experiments with monkeys. For example, Nuyujukian and fellow graduate student Vikash Gilja showed that they could better pick out a voice in the crowd if they paid attention to where a monkey was being asked to move the cursor.
“Design insights like that turned out to have a huge impact on performance of the decoder,” said Nuyujukian, who is also a member of Stanford Bio-X and the Stanford Neurosciences Institute. In fact, it more than doubled the system’s performance in monkeys, and the algorithm the team developed remains the basis of the highest-performing system to date. Nuyujukian went on to adapt those insights to people in a clinical study – a significant challenge in its own right – resulting in devices that helped people with paralysis type at 12 words per minute, a record rate.
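To make the decoding problem concrete, the sketch below shows the simplest version of the idea: fit a linear map from binned neural firing rates to cursor velocity, then apply it one time bin at a time. The team’s actual decoders are more sophisticated state-space models, and every name, shape and number here is an illustrative assumption rather than their implementation; the design insight described above corresponds, roughly, to fitting against where the subject was asked to move rather than where the cursor actually went.

```python
# A minimal linear decoder: binned spike counts in, cursor velocity out.
# Real systems use richer state-space (Kalman-style) models; shapes and
# numbers here are toy assumptions, not the clinical decoder.
import numpy as np

def fit_linear_decoder(spike_counts, intended_velocity):
    """spike_counts: (T, n_channels) counts per bin; intended_velocity: (T, 2) x/y velocity."""
    X = np.hstack([spike_counts, np.ones((len(spike_counts), 1))])  # add a bias column
    W, *_ = np.linalg.lstsq(X, intended_velocity, rcond=None)       # least-squares fit
    return W

def decode_step(W, counts_now):
    """Turn one bin of spike counts into an (x, y) velocity command."""
    return np.append(counts_now, 1.0) @ W

# Toy usage: 1,000 bins of 96-channel activity paired with intended velocities.
rng = np.random.default_rng(0)
spikes = rng.poisson(5.0, size=(1000, 96)).astype(float)
intended = rng.normal(size=(1000, 2))   # "intended" is where the subject was asked to move
W = fit_linear_decoder(spikes, intended)
print(decode_step(W, spikes[0]))        # velocity command for the first bin
```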
Although there’s a lot of important work left to do on prosthetics, Nuyujukian said he believes “there are other very real and pressing needs that brain-machine interfaces can solve,” such as the treatment of epilepsy and stroke – conditions in which the brain speaks a language scientists are only beginning to understand.

Listening for signs something’s wrong

Indeed, if one brain-machine interface can pick up pieces of what the brain is trying to say and use that to move a cursor on a screen, others could listen for times when the brain is trying to say something’s wrong.
One such interface, called NeuroPace and developed in part by Stanford researchers, does just that. Using electrodes implanted deep inside or lying on top of the surface of the brain, NeuroPace listens for patterns of brain activity that precede epileptic seizures and then, when it hears those patterns, stimulates the brain with soothing electrical pulses.
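In outline, a responsive system like that is a loop: record, compute a feature over a short window, and stimulate when the feature crosses a threshold. The sketch below is a deliberately crude stand-in; NeuroPace’s actual detection algorithms are not described in this article, and the windowed power measure and threshold here are assumptions chosen only to illustrate the loop.

```python
# A toy "listen, then stimulate" loop. The detection feature here (windowed
# signal power) and the threshold are stand-ins, not NeuroPace's algorithm.
import numpy as np

def responsive_loop(recording, fs, window_s=1.0, threshold=5.0, stimulate=None):
    """recording: 1-D array from an implanted electrode; fs: sampling rate in Hz."""
    win = int(window_s * fs)
    for start in range(0, len(recording) - win + 1, win):
        segment = recording[start:start + win]
        power = float(np.mean(segment ** 2))    # crude activity measure for this window
        if power > threshold and stimulate is not None:
            stimulate(start / fs)               # hand off: deliver pulses at this time

# Toy usage: quiet background with a brief high-amplitude, seizure-like burst.
fs = 250
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 1.0, 60 * fs)
signal[30 * fs:32 * fs] *= 6.0                  # simulated burst from second 30 to 32
responsive_loop(signal, fs, stimulate=lambda t: print(f"stimulate at {t:.0f} s"))
```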
We’re developing brain pacemakers that can interface with brain signaling, so they can sense what the brain is doing.
HELEN BRONTE-STEWART
John E. Cahill Family Professor and Professor of Neurology and Neurological Sciences

Learning to listen for – and better identify – the brain’s needs could also improve deep brain stimulation, a 30-year-old technique that uses electrical impulses to treat Parkinson’s disease, tremor and dystonia, a movement disorder characterized by repetitive movements or abnormal postures brought on by involuntary muscle contractions, said Helen Bronte-Stewart, professor of neurology and neurological sciences.
Although the method has proven successful, there is a problem: Brain stimulators are pretty much always on, much like early cardiac pacemakers. Although the consequences are less dire – the first pacemakers “often caused as many arrhythmias as they treated,” Bronte-Stewart, the John E. Cahill Family Professor, said – there are still side effects, including tingling sensations and difficulty speaking. For cardiac pacemakers, the solution was to listen to what the heart had to say and turn on only when it needed help, and the same idea applies to deep brain stimulation, Bronte-Stewart said. To that end, “we’re developing brain pacemakers that can interface with brain signaling, so they can sense what the brain is doing” and respond appropriately.
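The sketch below illustrates what such a sense-and-respond pacemaker could look like in its simplest form: track a feature of the recorded signal and scale stimulation only when that feature is elevated. Beta-band power is a common proxy in the adaptive-stimulation literature, but the article does not commit to a specific feature or control law, so the filter band, gain and cap here are assumptions.

```python
# A sketch of a sense-and-respond stimulator: measure beta-band (13-30 Hz)
# power in the recorded local field potential and stimulate harder only when
# it sits above a baseline. The band, gain and cap are illustrative choices.
import numpy as np
from scipy.signal import butter, filtfilt

def beta_power(lfp, fs):
    """Mean power of the signal after band-passing to 13-30 Hz."""
    b, a = butter(4, [13, 30], btype="bandpass", fs=fs)
    return float(np.mean(filtfilt(b, a, lfp) ** 2))

def stimulation_amplitude(lfp_window, fs, baseline, gain=1.0, max_ma=3.0):
    """Proportional controller: amplitude grows with beta power above baseline, capped at max_ma."""
    excess = max(0.0, beta_power(lfp_window, fs) - baseline)
    return min(max_ma, gain * excess)

# Toy usage: amplitude stays near zero for a quiet window, rises for a beta-heavy one.
fs = 1000
t = np.arange(0, 2, 1 / fs)
quiet = np.random.default_rng(2).normal(0.0, 0.1, t.size)
beta_heavy = quiet + np.sin(2 * np.pi * 20 * t)          # strong 20 Hz rhythm
baseline = beta_power(quiet, fs)
print(stimulation_amplitude(quiet, fs, baseline))        # ~0 mA
print(stimulation_amplitude(beta_heavy, fs, baseline))   # larger, capped at 3 mA
```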
The challenge is much the same as in Nuyujukian’s work, namely, to try to extract useful messages from the cacophony of the brain’s billions of neurons, although Bronte-Stewart’s lab takes a somewhat different approach. In one recent paper, the team focused on one of Parkinson’s more unsettling symptoms, “freezing of gait,” which affects around half of Parkinson’s patients and renders them periodically unable to lift their feet off the ground.
Bronte-Stewart’s question was whether the brain might be saying anything unusual during freezing episodes, and indeed it appears to be. Using methods originally developed in physics and information theory, the researchers found that low-frequency brain waves were less predictable, both in those who experienced freezing compared to those who didn’t, and, in the former group, during freezing episodes compared to normal movement. In other words, although no one knows exactly what the brain is trying to say, its speech – so to speak – is noticeably more random in freezers, the more so when they freeze.
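One standard predictability measure from that information-theory toolbox is sample entropy, which is higher for signals that are harder to predict. The article does not name the exact statistic the team used, so the sketch below should be read as an illustration of the kind of measure involved, not a reproduction of their analysis.

```python
# Sample entropy: one information-theoretic measure of how unpredictable a
# signal is (higher = less predictable). Shown only as an illustration; the
# article does not name the exact measure the team applied.
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy of a 1-D signal with template length m and tolerance r = r_frac * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)

    def matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dists = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(dists <= r) - len(templates)) / 2   # matching pairs, excluding self-matches

    return -np.log(matches(m + 1) / matches(m))

# Toy usage: a clean oscillation is more predictable than the same oscillation plus noise.
t = np.arange(0, 4, 0.01)
regular = np.sin(2 * np.pi * 2 * t)
noisy = regular + np.random.default_rng(3).normal(0.0, 0.3, t.size)
print(sample_entropy(regular), sample_entropy(noisy))     # the noisy signal scores higher
```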
By listening for those signs, well-timed brain stimulation may be able to prevent freezing of gait with fewer side effects than before, and one day, Bronte-Stewart said, more sophisticated feedback systems could treat the cognitive symptoms of Parkinson’s or even neuropsychiatric diseases such as obsessive compulsive disorder and major depression.

Do we need to speak the brain’s language?

Both Nuyujukian’s and Bronte-Stewart’s approaches are notable in part because they do not require researchers to understand very much of the language of the brain, let alone speak that language. Indeed, learning that language and how the brain uses it, while of great interest to researchers attempting to decode the brain’s inner workings, may be beside the point for some doctors and patients whose goal is to find more effective prosthetics and treatments for neurological disease.
A one-way conversation sometimes doesn’t get you very far.
E.J. CHICHILNISKY
John R. Adler Professor, Professor of Neurosurgery and Ophthalmology

But other tasks will require greater fluency, at least according to E.J. Chichilnisky, a professor of neurosurgery and of ophthalmology, who thinks speaking the brain’s language will be essential when it comes to helping the blind to see. Chichilnisky, the John R. Adler Professor, co-leads the NeuroTechnology Initiative, funded by the Stanford Neurosciences Institute, and he and his lab are working on sophisticated technologies to restore sight to people with severely damaged retinas – a task he said will require listening closely to what individual neurons have to say, and then being able to speak to each neuron in its own language.
The problem, Chichilnisky said, is that retinas are not simply arrays of identical neurons, akin to the sensors in a modern digital camera, each of which corresponds to a single pixel. Instead, there are different types of neurons, each of which sends a different kind of information to the brain’s vision-processing system.
“We need to talk to those neurons,” Chichilnisky said. To do that, a brain-machine interface needs to figure out, first, what types of neurons its individual electrodes are talking to and how to convert an image into a language those neurons – not us, not a computer, but individual neurons in the retina and perhaps deeper in the brain – understand. Once researchers can do that, they can begin to have a direct, two-way conversation with the brain, enabling a prosthetic retina to adapt to the brain’s needs and improve what a person can see through the prosthesis.
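As a toy illustration of that two-step idea, the sketch below assumes each electrode has already been labeled with the cell type it reaches and then translates a single image into a different message for each type. The real retina contains many more cell types and a far richer code; the two-type ON/OFF split and the simple mapping here are simplifying assumptions, not the lab’s method.

```python
# A toy "speak each cell's language" encoder: the same image becomes a
# different message depending on whether an electrode reaches an ON cell
# (signals light) or an OFF cell (signals dark). Real retinas have many more
# cell types and a far richer code; this split is a simplifying assumption.
import numpy as np

def encode_for_electrodes(image, electrode_pixels, electrode_cell_types):
    """image: 2-D array scaled to [0, 1]; electrode_pixels: list of (row, col); types: 'ON' or 'OFF'."""
    commands = []
    for (row, col), cell_type in zip(electrode_pixels, electrode_cell_types):
        brightness = image[row, col]
        drive = brightness if cell_type == "ON" else 1.0 - brightness
        commands.append(drive)                 # per-electrode stimulation strength
    return np.array(commands)

# Toy usage: the same bright pixel excites the ON cell's electrode and silences the OFF cell's.
img = np.zeros((8, 8))
img[2, 2] = 1.0
print(encode_for_electrodes(img, [(2, 2), (2, 2), (5, 5)], ["ON", "OFF", "OFF"]))  # [1. 0. 1.]
```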
“A one-way conversation sometimes doesn’t get you very far,” Chichilnisky said.

Bronte-Stewart, Chichilnisky, Henderson and Shenoy are members of Stanford Bio-X and the Stanford Neurosciences Institute.
https://news.stanford.edu/2017/10/17/speaking-the-brains-language-to-treat-disease/
