Study discovers neurons in the human brain that can predict what you are going to say and help you say it
Feb. 02, 2024.
Findings could be used to develop new treatments for speech and language disorders or machine interfaces capable of producing synthetic speech
By using advanced brain recording techniques, a new study demonstrates how neurons in the human brain work together to allow people to think about what words they want to say and then produce them aloud through speech.
These findings, led by researchers from Massachusetts General Hospital (MGH), provide a detailed map of how speech sounds such as consonants and vowels are represented in the brain well before they are even spoken and how they are strung together during language production.
Treatment for speech and language disorders
The work, published in the journal Nature, reveals insights into the brain’s neurons that enable language production, and could lead to improvements in the understanding and treatment of speech and language disorders.
“Although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech—including coming up with the words we want to say, planning the articulatory movements and producing our intended vocalizations,” says senior author Ziv Williams, MD, an associate professor in Neurosurgery at MGH and Harvard Medical School.
The researchers used a cutting-edge technology called Neuropixels, with probes that record the activity of single neurons in the prefrontal cortex. Williams and his colleagues identified cells that are involved in language production and that may underlie the ability to speak. They also found that there are separate groups of neurons in the brain dedicated to speaking and listening.
By recording individual neurons, the researchers found that certain neurons become active before a given phoneme is spoken aloud. Other neurons reflected more complex aspects of word construction, such as the specific assembly of phonemes into syllables.
Artificial prosthetics or brain-machine interfaces
With their technology, the scientists can predict what combination of consonants and vowels will be produced before the words are actually spoken. This capability could be leveraged to build artificial prosthetics or brain-machine interfaces capable of producing synthetic speech, which could benefit a range of patients.
The researchers hope to expand on their work by studying more complex language processes, investigating how people choose the words they intend to say and how the brain assembles words into sentences that convey an individual's thoughts and feelings to others.
This work was supported by the National Institutes of Health.
Citation: Khanna, A.R., Muñoz, W., Kim, Y.J. et al. Single-neuronal elements of speech production in humans. Nature (2024). https://doi.org/10.1038/s41586-023-06982-w