
Not Science Fiction: Brain Implant May Enable Communication From Thoughts Alone

A team from Duke University has created a speech prosthetic that translates brain signals into speech, aiding individuals with neurological disorders. While still slower than natural speech, the technology, backed by advanced brain sensors and ongoing research, shows promising potential for enhanced communication abilities. (Artist’s concept) Credit: SciTechDaily.com

A prosthetic device deciphers signals from the brain’s speech center to predict what sound a person is trying to say.

A team of neuroscientists, neurosurgeons, and engineers from Duke University has developed a speech prosthetic that can convert brain signals into spoken words.

The new technology, detailed in a recent paper published in the journal Nature Communications, offers hope for individuals with neurological disorders that impair speech, potentially enabling them to communicate via a brain-computer interface.

Addressing Communication Challenges in Neurological Disorders

“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”

Duke Speech Decoder

A device no bigger than a postage stamp (dotted portion within white band) packs 128 microscopic sensors that can translate brain cell activity into what someone intends to say. Credit: Dan Vahaba/Duke University

Imagine listening to an audiobook at half-speed. That’s the best speech decoding rate currently available, which clocks in at about 78 words per minute. People, however, speak around 150 words per minute.

The lag between spoken and decoded speech rates is partially due to the relatively few brain activity sensors that can be fused onto a paper-thin piece of material that lies atop the surface of the brain. Fewer sensors provide less decipherable information to decode.

Enhancing Brain Signal Decoding

To improve on past limitations, Cogan teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultra-thin, and flexible brain sensors.

New Duke Speech Prosthetic

Compared to current speech prosthetics with 128 electrodes (left), Duke engineers have developed a new device that accommodates twice as many sensors in a significantly smaller footprint. Credit: Dan Vahaba/Duke University

For this project, Viventi and his team packed an impressive 256 microscopic brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. Neurons just a grain of sand apart can have wildly different activity patterns when coordinating speech, so it’s necessary to distinguish signals from neighboring brain cells to help make accurate predictions about intended speech.

Clinical Trials and Future Developments

After fabricating the new implant, Cogan and Viventi teamed up with several Duke University Hospital neurosurgeons, including Derek Southwell, M.D., Ph.D., Nandan Lad, M.D., Ph.D., and Allan Friedman, M.D., who helped recruit four patients to test the implants. The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for some other condition, such as treating Parkinson’s disease or having a tumor removed. Time was limited for Cogan and his team to test-drive their device in the OR.

“I like to compare it to a NASCAR pit crew,” Cogan said. “We don’t want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said ‘Go!’ we rushed into action and the patient performed the task.”

The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like “ava,” “kug,” or “vip,” and then spoke each one aloud. The device recorded activity from each patient’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.

Afterward, Suseendrakumar Duraivel, the first author of the new report and a biomedical engineering graduate student at Duke, took the neural and speech data from the surgery suite and fed it into a machine learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.
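The decoding step described above can be illustrated with a toy sketch. The snippet below is not the team's actual algorithm; it trains a simple nearest-centroid classifier on synthetic 256-channel "neural" features (all numbers and labels here are illustrative assumptions) to show the basic idea of predicting a sound label from sensor activity alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 256 sensor channels, 3 sound classes, 60 trials.
n_channels, n_classes = 256, 3
labels = np.repeat(np.arange(n_classes), 20)          # 20 trials per class
n_trials = labels.size

# Synthetic "neural" features: each class has a distinct mean activity
# pattern across channels, plus per-trial noise.
class_means = rng.normal(0.0, 1.0, (n_classes, n_channels))
features = class_means[labels] + rng.normal(0.0, 0.5, (n_trials, n_channels))

# Split trials alternately into training and test sets.
train = np.arange(n_trials) % 2 == 0
test = ~train

# Nearest-centroid decoder: average the training trials for each class,
# then assign each test trial to the closest class centroid.
centroids = np.stack([features[train & (labels == c)].mean(axis=0)
                      for c in range(n_classes)])
dists = np.linalg.norm(features[test][:, None, :] - centroids[None], axis=2)
predictions = dists.argmin(axis=1)

accuracy = (predictions == labels[test]).mean()
print(f"decoding accuracy: {accuracy:.0%}")
```

On this cleanly separable synthetic data the decoder scores near 100%; real intraoperative recordings are far noisier, which is why scores like the 40% reported below still beat chance by a wide margin.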

Kumar Duraivel

In the lab, Duke University Ph.D. candidate Kumar Duraivel analyzes a colorful array of brain-wave data. Each unique hue and line represent the activity from one of 256 sensors, all recorded in real-time from a patient’s brain in the operating room. Credit: Dan Vahaba/Duke University

For some sounds and participants, like /g/ in the word “gak,” the decoder got it right 84% of the time when it was the first sound in a string of three that made up a given nonsense word.

Accuracy dropped, though, as the decoder parsed out sounds in the middle or at the end of a nonsense word. It also struggled if two sounds were similar, like /p/ and /b/.

Overall, the decoder was accurate 40% of the time. That may seem like a humble test score, but it was quite impressive given that similar brain-to-speech technical feats require hours’ or days’ worth of data to draw from. The speech decoding algorithm Duraivel used, however, was working with only 90 seconds of spoken data from the 15-minute test.

Duraivel and his mentors are excited about making a cordless version of the device with a recent $2.4 million grant from the National Institutes of Health.

“We’re now developing the same kind of recording devices, but without any wires,” Cogan said. “You’d be able to move around, and you wouldn’t have to be tied to an electrical outlet, which is really exciting.”

While their work is encouraging, Viventi and Cogan’s speech prosthetic is still a long way from hitting the shelves.

“We’re at the point where it’s still much slower than natural speech,” Viventi said in a recent Duke Magazine piece about the technology, “but you can see the trajectory where you might be able to get there.”

Reference: “High-resolution neural recordings improve the accuracy of speech decoding” by Suseendrakumar Duraivel, Shervin Rahimpour, Chia-Han Chiang, Michael Trumpis, Charles Wang, Katrina Barth, Stephen C. Harward, Shivanand P. Lad, Allan H. Friedman, Derek G. Southwell, Saurabh R. Sinha, Jonathan Viventi and Gregory B. Cogan, 6 November 2023, Nature Communications.
DOI: 10.1038/s41467-023-42555-1

This work was supported by grants from the National Institutes of Health (R01DC019498, UL1TR002553), Department of Defense (W81XWH-21-0538), Klingenstein-Simons Foundation, and an Incubator Award from the Duke Institute for Brain Sciences.

Source: SciTechDaily