Brain implant may enable fast translation of a patient’s thoughts into voiced words
Nov. 07, 2023.
Decoding thoughts at 78 words per minute
A speech prosthetic developed by Duke neuroscientists, neurosurgeons, and engineers can quickly translate a person’s brain signals into voiced words they’re trying to say.
In a paper published Nov. 6 in the open-access journal Nature Communications, the researchers explain how the new technology works and how it might one day help people who are unable to talk due to neurological disorders regain the ability to communicate quickly through a brain-computer interface.
High-density brain sensors
To improve on the limited sensor counts of past devices, Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine and one of the lead researchers on the project, teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultra-thin, flexible brain sensors.
Viventi and his team packed 256 microscopic brain sensors onto a postage-stamp-sized piece of flexible, medical-grade plastic. Neighboring neurons can have wildly different activity patterns when coordinating speech, so it’s necessary to distinguish signals from adjacent brain cells to make accurate predictions about intended speech.
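For a rough sense of that density, here is a back-of-the-envelope sketch of the spacing between sensors if 256 electrodes were laid out as a square grid on a postage-stamp-sized patch. The array dimensions are illustrative assumptions, not specifications from the paper.

```python
# Back-of-the-envelope: electrode spacing for 256 sensors on a
# postage-stamp-sized array. Dimensions are illustrative assumptions,
# not the paper's actual specifications.
import math

array_width_mm = 22.0   # assumed width, roughly a postage stamp
array_height_mm = 25.0  # assumed height, roughly a postage stamp
n_electrodes = 256      # sensor count reported in the article

grid_side = int(math.sqrt(n_electrodes))  # 16 x 16 grid
pitch_w = array_width_mm / grid_side      # ~1.4 mm center-to-center
pitch_h = array_height_mm / grid_side     # ~1.6 mm center-to-center

print(f"{grid_side} x {grid_side} grid, pitch ~ {pitch_w:.1f} x {pitch_h:.1f} mm")
# Millimeter-scale pitch is what lets the array tell apart activity in
# neighboring patches of speech motor cortex rather than blurring them.
```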
Testing implants with patients
After fabricating the new implant, Cogan and Viventi teamed up with several Duke University Hospital neurosurgeons, who helped recruit four patients to test the implants. The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for another condition, such as treatment for Parkinson’s disease or removal of a tumor, leaving only 15 minutes to test-drive the device in the operating room.
The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like “ava,” “kug,” or “vip,” and then spoke each one aloud. The device recorded activity from each patient’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.
The system clocked in at 78 words per minute (a major improvement over current brain-machine systems and enough for real-time conversation). But people speak around 150 words per minute, so this remains a work in progress.
Acquiring speech data in seconds vs. hours
Afterwards, the researchers fed the neural and speech data from the surgery suite into a machine-learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.
Overall, the decoder was accurate 40% of the time, they found. (The speech-decoding algorithm used by Suseendrakumar Duraivel, the study’s first author, had only 90 seconds of spoken data from the 15-minute test to work with.)
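To make the decoding step concrete, here is a minimal sketch of that kind of pipeline: per-trial neural features go into a classifier that predicts which sound was spoken, with accuracy estimated by cross-validation. The synthetic features, the nine stand-in labels, and the logistic-regression model are all assumptions for illustration; the paper’s actual decoder likely differs.

```python
# Minimal sketch of speech decoding from neural recordings, assuming
# one feature vector per trial (e.g., band power per electrode) and a
# label for the sound spoken. Synthetic data; not the paper's decoder.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_electrodes = 120, 256               # ~90 s of data is few trials
X = rng.normal(size=(n_trials, n_electrodes))   # stand-in neural features
y = rng.integers(0, 9, size=n_trials)           # 9 stand-in sound labels

# Cross-validated accuracy. With random features this lands near chance
# (~1/9 here); on real neural data, a decoder scoring 40% from only
# 90 seconds of speech would sit well above such a baseline.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.0%} +/- {scores.std():.0%}")
```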
“We’re at the point where it’s still much slower than natural speech,” Viventi said, “but you can see the trajectory where you might be able to get there.”
“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” Cogan said. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”
Why are they so slow? The lag between spoken and decoded speech rates is partly due to the relatively few brain-activity sensors that can be fused onto the paper-thin material laid atop the surface of the brain in current systems, the researchers explain. Fewer sensors mean less information available to decode.
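A toy simulation illustrates the point: if each electrode contributes a weak, independent piece of the signal, decoding accuracy climbs as channels are added. The numbers below are synthetic and purely illustrative; they are not the study’s data.

```python
# Toy simulation of "fewer sensors means less information": decoding
# accuracy versus number of electrodes kept. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_electrodes, n_classes = 300, 256, 9

# Give each class a weak, distinct spatial pattern across electrodes.
class_means = rng.normal(scale=0.3, size=(n_classes, n_electrodes))
y = rng.integers(0, n_classes, size=n_trials)
X = class_means[y] + rng.normal(size=(n_trials, n_electrodes))  # add noise

for n_keep in (16, 64, 256):
    cols = rng.choice(n_electrodes, size=n_keep, replace=False)
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, cols], y, cv=5).mean()
    print(f"{n_keep:3d} electrodes -> accuracy {acc:.0%}")
# More channels capture more of each class's pattern, so accuracy rises.
```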
Wireless version
Duraivel and his mentors next plan to make a wireless version of the device, funded by a recent $2.4 million grant from the National Institutes of Health, which would allow patients to move around freely.
This work was supported by grants from the National Institutes of Health, the Department of Defense, the Klingenstein-Simons Foundation, and an Incubator Award from the Duke Institute for Brain Sciences.
Citation: “High-resolution Neural Recordings Improve the Accuracy of Speech Decoding,” Suseendrakumar Duraivel, Shervin Rahimpour, Chia-Han Chiang, Michael Trumpis, Charles Wang, Katrina Barth, Stephen C. Harward, Shivanand P. Lad, Allan H. Friedman, Derek G. Southwell, Saurabh R. Sinha, Jonathan Viventi, Gregory B. Cogan. Nature Communications, November 6, 2023. DOI: 10.1038/s41467-023-42555-1 (open access)