Brain implant may enable fast translation of a patient’s thoughts into voiced words

A speech prosthetic developed by Duke neuroscientists, neurosurgeons, and engineers can quickly translate a person’s brain signals into voiced words they’re trying to say.

In a paper published Nov. 6 in the open-access journal Nature Communications, the researchers explained how the new technology works and how it might one day help people who are unable to talk because of neurological disorders regain the ability to communicate quickly through a brain-computer interface.

High-density brain sensors

To improve on the limitations of current systems, Gregory Cogan, Ph.D., a professor of neurology at Duke University School of Medicine and one of the lead researchers on the project, teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultra-thin, and flexible brain sensors.

Viventi and his team packed 256 microscopic brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. (Neurons can have wildly different activity patterns when coordinating speech, so it’s necessary to distinguish signals from neighboring brain cells to help make accurate predictions about intended speech.)

Testing implants with patients

After fabricating the new implant, Cogan and Viventi teamed up with several Duke University Hospital neurosurgeons, who helped recruit four patients to test the implants. The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for another condition, such as treatment for Parkinson’s disease or removal of a tumor, which left just 15 minutes to test-drive the device in the operating room.

The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like “ava,” “kug,” or “vip,” and then spoke each one aloud. The device recorded activity from each patient’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.

The best current speech-decoding rate is about 78 words per minute, a major improvement over earlier brain-machine systems but still well short of natural speech: people speak around 150 words per minute. So this is a work in progress.

Acquiring speech data in seconds vs. hours

Afterwards, the researchers fed the neural and speech data from the surgery suite into a machine-learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.

Overall, the decoder was accurate 40% of the time, they found. (The speech decoding algorithm used by first author Suseendrakumar Duraivel had only 90 seconds of spoken data from the 15-minute test to work with.)
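
To make the decoding step concrete, the sketch below shows one way a classifier could be trained to predict which sound is being produced from multi-channel neural features. It is only a minimal illustration, not the authors’ pipeline: the synthetic data, the flattened feature windows, and the random-forest classifier are all assumptions made for the example.

# Minimal sketch of phoneme decoding from multi-channel neural recordings.
# NOT the authors' pipeline: the synthetic data, feature windows, and the
# random-forest classifier are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels, n_timepoints = 200, 256, 20   # e.g., a 256-sensor array
n_phonemes = 9                                       # toy set of sound classes

# Stand-in data: per-trial neural features, shaped trials x channels x time.
X = rng.normal(size=(n_trials, n_channels, n_timepoints))
y = rng.integers(0, n_phonemes, size=n_trials)       # phoneme label per trial

# Flatten each trial's spatiotemporal activity into one feature vector.
X_features = X.reshape(n_trials, -1)

# Chance level for 9 balanced classes is about 11%; the study reports roughly
# 40% overall accuracy from only ~90 seconds of real speech data.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X_features, y, cv=5)
print(f"decoding accuracy: {scores.mean():.1%} (chance ~{1 / n_phonemes:.0%})")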

“We’re at the point where it’s still much slower than natural speech,” Viventi said, “but you can see the trajectory where you might be able to get there.”

“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” Cogan said. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”

Why? Because the lag between spoken and decoded speech rates is partly due to the relatively few brain-activity sensors that can be fused onto the paper-thin material that lies atop the surface of the brain in current systems, the researchers explain. Fewer sensors means less information available to decode.

Wireless version

Duraivel and his mentors next plan to make a wireless version of the device, funded by a recent $2.4 million grant from the National Institutes of Health. A wireless device would allow patients to move around freely.

This work was supported by grants from the National Institutes of Health, the Department of Defense, the Klingenstein-Simons Foundation, and an Incubator Award from the Duke Institute for Brain Sciences.

Citation: “High-resolution Neural Recordings Improve the Accuracy of Speech Decoding,” Suseendrakumar Duraivel, Shervin Rahimpour, Chia-Han Chiang, Michael Trumpis, Charles Wang, Katrina Barth, Stephen C. Harward, Shivanand P. Lad, Allan H. Friedman, Derek G. Southwell, Saurabh R. Sinha, Jonathan Viventi, Gregory B. Cogan. Nature Communications, November 6, 2023. DOI: 10.1038/s41467-023-42555-1 (open-access)


New gene therapy restores natural hearing for deafness at birth

Several Chinese children born deaf have had their natural hearing pathway restored with a new gene therapy, MIT Technology Review reported last week.

“Shu Yilai, a surgeon and scientist at Fudan University in Shanghai, who is leading the experiment, explains that the new treatment applies only to one specific cause of deafness at birth: a defect in a gene that produces a protein called otoferlin. … In an attempt to restore this function, researchers injected harmless viruses carrying DNA for a working copy of the otoferlin gene into the children’s ears.”

Hair cells + otoferlin relay sounds to the brain

“The inner ear contains about 16,000 hair cells, vibrating at different frequencies of sound. Without otoferlin, these cells cannot transmit the chemicals that relay information to the brain,” MIT Technology Review explains.

“The new treatment is designed to add a working copy of the otoferlin gene. Shu injects the loaded viruses deep into a fluid-filled chamber in a part of the ears called the cochlea. … ‘The treatment reaches maybe 60% to 65% of normal hearing,’ says Shu.”

However…

Today (Monday, November 6), MIT Technology Review clarified that “gene therapy remains relatively expensive and inaccessible. … Nova Liu, the founder of Shanghai Refreshgene Therapeutics, which financially backed the Chinese research on this gene therapy, estimated that a commercialized product could cost between $125,000 and $250,000.” 

Also, “this particular gene therapy works only when the deafness is due to a lack of otoferlin. … Only around 1% to 5% of cases of inborn deafness are caused by lack of otoferlin, so many more deaf children aren’t able to benefit from this therapy yet.”


First spaceship-airplane announced today

Tenacity (credit: Sierra Space)

Sierra Space today announced Dream Chaser, billed as the world’s first combined spaceship and airplane.

Dream Chaser can land at compatible commercial airports and deliver cargo (and, in the future, crew) to and from low-Earth orbit (LEO), according to the company.

Described as “the world’s first runway-capable spaceplane,” Dream Chaser is supported by a contract with NASA to resupply the International Space Station. Construction of the first vehicle, Tenacity, is now complete, and it will ship to NASA’s Neil A. Armstrong Test Facility in Ohio for environmental testing in the coming weeks, the company says.

Cargo missions to the ISS

The inaugural Dream Chaser spaceplane will participate in seven initial cargo missions to and from the International Space Station, carrying essential supplies such as food, water and scientific experiments. One of Dream Chaser’s groundbreaking features is its capability to safely return critical cargo to Earth at less than 1.5 g, ensuring the preservation of invaluable payloads, Sierra Space notes.

The launch is planned to occur from NASA’s Kennedy Space Center. Sierra Space’s launch provider, United Launch Alliance (ULA), will carry Dream Chaser into orbit on the second Vulcan Centaur rocket. Tenacity’s maiden resupply mission will culminate with the vehicle’s return, landing at the historic NASA Space Shuttle Landing Facility.

Dream Chaser spaceplane CRS-2 Mission Profile


Physical neural network learns and remembers ‘on the fly’ like neurons for low-energy machine intelligence

Scientists have developed a physical neural network that can learn and remember “on the fly,” much as neurons in a brain do. It opens a pathway for developing efficient, low-energy machine intelligence for more complex, real-world learning and memory tasks.

Published today in Nature Communications, the research is a collaboration between scientists at the University of Sydney and University of California at Los Angeles.

“The findings demonstrate how brain-inspired learning and memory functions using nanowire networks can be harnessed to process dynamic, streaming data,” said lead author Ruomin Zhu, a PhD student from the University of Sydney Nano Institute and School of Physics.

Nanowire networks

The system uses nanowire networks that arrange themselves into patterns mimicking physical neural networks, like those in a brain. The networks can be used to perform specific memory and learning tasks, using simple algorithms that respond to changes in electronic resistance at junctions where the nanowires overlap.

This “resistive memory switching” function is created when electrical inputs encounter changes in conductivity, similar to what happens with synapses in a brain.
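
As a rough intuition for that junction behavior, the toy model below treats a single junction as a conductance that is strengthened by voltage input and relaxes when the input stops. The update rule and every parameter are illustrative assumptions, not the model used in the study.

# Toy model of a single memristive nanowire junction: conductance drifts up
# under voltage input and decays (relaxes) otherwise. The update rule and
# parameters here are illustrative assumptions, not the study's model.
import numpy as np

def simulate_junction(voltage, dt=1e-3, g0=1e-6, alpha=5e-4, beta=0.5,
                      g_min=1e-6, g_max=1e-3):
    """Return the conductance trace for a sequence of voltage samples."""
    g = g0
    trace = []
    for v in voltage:
        dg = alpha * abs(v) - beta * (g - g_min)   # potentiation vs. relaxation
        g = float(np.clip(g + dg * dt, g_min, g_max))
        trace.append(g)
    return np.array(trace)

# A train of voltage pulses strengthens the junction; silence lets it relax,
# giving a crude, synapse-like short-term memory of recent input.
t = np.arange(0, 2.0, 1e-3)
pulses = (np.sin(2 * np.pi * 10 * t) > 0.9).astype(float) * (t < 1.0)
g_trace = simulate_junction(pulses)
print(f"conductance after pulses: {g_trace[999]:.2e} S, after rest: {g_trace[-1]:.2e} S")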

In this study, researchers used the network to recognize and remember sequences of electrical pulses corresponding to images, inspired by the way the human brain processes information.

Benchmark image recognition

The network was used to perform a benchmark image-recognition task, using images from the MNIST database of handwritten digits, a collection of 70,000 small greyscale images used in machine learning. The nanowire network correctly identified 93.4 percent of the test images, demonstrating benchmark machine-learning capability.

The memory task involved recalling sequences of up to eight digits. For both tasks, data was streamed into the network to demonstrate its capacity for online learning and to show how memory enhances that learning.
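
For readers unfamiliar with the term, the sketch below illustrates what online, sample-by-sample learning on streamed digit images looks like in conventional software. It is a standard software baseline for comparison, not the nanowire hardware, and scikit-learn’s small 8×8 digits set stands in for MNIST so the example runs without downloads.

# Software illustration of online (sample-by-sample) learning on streamed
# digit images. This is a conventional baseline, NOT the nanowire network;
# scikit-learn's small 8x8 digits set stands in for MNIST so the example
# is self-contained.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SGDClassifier(random_state=0)
classes = np.unique(y)

# Stream the training set one sample at a time: each image updates the model
# immediately and is then discarded, so no dataset is stored or revisited.
for xi, yi in zip(X_train, y_train):
    clf.partial_fit(xi.reshape(1, -1), [yi], classes=classes)

print(f"online-learning test accuracy: {clf.score(X_test, y_test):.1%}")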

Out-performing conventional artificial neural networks

The nanowire neural network can learn and remember “on the fly,” sample by sample, extracting data online and thus avoiding heavy memory and energy usage.

“If the data is being streamed continuously, such as it would be from a sensor, for instance, machine learning that relied on conventional artificial neural networks would need to have the ability to adapt in real-time, which they are currently not optimized for,” Zhu said.

Citation: Zhu, R., Lilak, S., Loeffler, A., Lizier, J., Stieg, A., Gimzewski, J., & Kuncic, Z. (2023). Online dynamical learning and sequence memory with neuromorphic nanowire networks. Nature Communications, 14(1), 1-12. https://doi.org/10.1038/s41467-023-42470-5 (open-access)


Non-invasive technology maps brain activity to investigate behavior changes in neurological disease

A research team led by Cleveland Clinic and Oregon Health and Science University (OHSU) has developed a new method for mapping how the parts of the brain “speak” to each other. This is critical to understanding behavior changes in patients with neurological disease, according to the researchers.

“Effects on behavior and personality in Alzheimer’s disease and related disorders are caused by changes in brain function,” says Cleveland Clinic’s Hod Dana, PhD. “If we can understand exactly how the changes occur, we may figure out how to slow down the process or to stop it.”

Mapping brain activity noninvasively

Recording the brain activity patterns that underlie behavioral changes is the first step to bridging the gap between changes in brain function and changes in behavior, the researchers say.

The team recently published results in Nature Communications on using a calcium sensor system called CaMPARI (Calcium-modulated photoactivatable ratiometric integrator) to map brain activity in preclinical models while completing cognitive tasks.

Current technologies are unable to map the whole brain while still resolving single cells. The researchers were able to record cortex-wide activity without any head fixation, tethering, or attachment of a miniaturized device to the mouse’s head, and multiple cortical regions were recorded while the mouse performed a battery of behavioral and cognitive tests.

How Alzheimer’s-related genes affect learning and memory

The goal is to see how Alzheimer’s-related genes affect the way neurons signal across the brain during learning and memory. The researchers hope to use what they learn to develop tests and interventions that improve quality of life for patients by providing better treatment options.

“We now have the capability to study the relationship between brain activation and cognitive performance at an unprecedented level,” says Jacob Raber, PhD, an OHSU behavioral neuroscientist.

“These are the first steps in developing strategies to reverse those changes and improve cognitive performance in those affected by neurological conditions. The future of behavioral and cognitive neuroscience looks bright.”

Citation: Das, A., Holden, S., Borovicka, J., Icardi, J., Chaklai, A., Patel, D., Patel, R., Kaech Petrie, S., Raber, J., & Dana, H. (2023). Large-scale recording of neuronal activity in freely-moving mice at cellular resolution. Nature Communications, 14(1), 1-12. https://doi.org/10.1038/s41467-023-42083-y (open-access)


Could AI machines become conscious?

Large language models (LLMs) can produce text that leaves the impression that you may be interacting with a human-like, intelligent conscious agent.

But in a paper in the journal Trends in Neurosciences, three neuroscientists—Jaan Aru of the University of Tartu in Estonia; Matthew Larkum of the Institute of Biology, Humboldt University of Berlin; and James M. Shine of the University of Sydney—argue that present-day LLMs are “text-centric, whereas the phenomenological umwelt [perceived world] of living organisms is multifaceted and integrated.”

Three ways LLMs differ from human brains

Left: a schematic depicting the basic architecture of a large language model, which can have tens or even more than 100 decoder blocks arranged in a feed-forward fashion. Right: a heuristic map of the thalamocortical system, which generates complex activity patterns thought to underlie consciousness. (credit: Mac Shine, Jaan Aru)
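
For readers unfamiliar with that architecture, the sketch below stacks a handful of decoder-style blocks feed-forward in PyTorch. The sizes are arbitrary toy values, orders of magnitude smaller than any real LLM, and the code illustrates only the general layout the caption describes.

# Minimal sketch of the stacked decoder-block architecture the caption
# describes. Sizes are arbitrary toy values; this is an illustration, not any
# specific LLM.
import torch
import torch.nn as nn

vocab_size, d_model, n_heads, n_blocks, seq_len = 1000, 128, 4, 6, 16

embed = nn.Embedding(vocab_size, d_model)
# Decoder-only blocks drop cross-attention, so an encoder layer plus a causal
# mask is a standard way to build one.
block = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                   dim_feedforward=4 * d_model,
                                   batch_first=True)
stack = nn.TransformerEncoder(block, num_layers=n_blocks)  # feed-forward stack
to_logits = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, seq_len))   # one toy token sequence
# Causal mask: each position may attend only to earlier positions.
causal = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
hidden = stack(embed(tokens), mask=causal)
logits = to_logits(hidden)                             # next-token scores
print(logits.shape)                                    # torch.Size([1, 16, 1000])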

“Many theories of the neural basis of consciousness assign a central role to thalamocortical re-entrant processing,” say the neuroscientists. “Currently, such processes are not implemented in large language models (LLMs). The organizational complexity of living systems has no parallel in present-day AI tools.

“The inputs to language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us.

“The evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.”

Citation: Jaan Aru, Matthew Larkum, and James M. Shine. The feasibility of artificial consciousness through the lens of neuroscience. Trends in Neurosciences, October 18, 2023. DOI: https://doi.org/10.1016/j.tins.2023.09.009


Funding for next-generation telescopes to capture light from the Big Bang and search for gravitational waves announced

The National Science Foundation announced Oct. 26 that it has awarded $3.7 million for final designs of next-generation telescopes to capture the cosmic microwave background—microwave light still traveling across the universe from the earliest moments after the Big Bang.

To capture that light, astrophysicists at the University of Chicago and Lawrence Berkeley National Laboratory plan to build telescopes and infrastructure in both Antarctica and Chile.

The researchers will also search for “primordial” gravitational waves—the physical vibrations from the Big Bang—and will investigate the mystery known as dark matter.

Biggest question

The research goal is to answer one of the biggest questions in cosmology: did our universe begin with a burst of expansion at the dawn of time (known as inflation) that stretched minuscule quantum-mechanical fluctuations into the initial seeds of the structure of the universe we see today?

The entire project, known as CMB-S4, is proposed to be jointly funded by the National Science Foundation and the U.S. Department of Energy. It is expected to cost on the order of $800 million and to come fully online in the early 2030s. The collaboration currently involves 450 scientists from more than 100 institutions, spanning 20 countries.

“With these telescopes we will be testing our theory of how our entire universe came to be, but also looking at physics that are at the most extreme scales in a way we simply cannot do with particle physics experiments on Earth,” said John Carlstrom, the Subrahmanyan Chandrasekhar Distinguished Service Professor of Astronomy and Astrophysics and Physics, who serves as the project scientist for CMB-S4.

This is a current snapshot of the cosmic microwave background, the oldest light in our universe, from when the cosmos was just 380,000 years old. The colors of the map represent small temperature fluctuations that ultimately resulted in the galaxies we see today. (Credit: ESA and the Planck Collaboration)

The South Pole advantage

The CMB-S4 project would involve two large telescopes in the mountains of Chile and nine smaller ones in Antarctica. “The South Pole is the only location that allows a telescope to look at one place in the sky continuously, because it’s at the pole where the rest of the Earth spins around,” explained Jeff Zivick, deputy project manager for CMB-S4.

The South Pole telescope (credit: U. of Chicago)

The South Pole Telescope, which has been operating since 2007, is equipped with the most powerful cosmic microwave background (CMB) camera.

CMB-S4 is expected to have nearly 500,000 superconducting detectors, a significant increase over all precursor cosmic microwave background experiments combined. The National Science Foundation portion of the project is led by the University of Chicago; the Department of Energy’s portion is led by Lawrence Berkeley National Laboratory.

Primordial gravitational waves

The project will also look for evidence of “primordial” gravitational waves—the ripples in space-time that would have been created if the universe really did explode into being from a space much smaller than a single subatomic particle. These ripples would interact with the cosmic microwave background, creating a distinct but extremely faint signature.
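
For background (standard cosmology rather than detail from the NSF announcement), the strength of primordial gravitational waves relative to ordinary density fluctuations is usually quoted as the tensor-to-scalar ratio

r \equiv \frac{A_t}{A_s},

where A_t and A_s are the amplitudes of the primordial tensor and scalar power spectra. Inflation models predict a nonzero r, which would appear as the faint, curl-like “B-mode” pattern in the polarization of the cosmic microwave background that these telescopes are designed to measure.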

Primordial gravitational waves would also be evidence to connect the force of gravity with the laws of quantum mechanics, said Jim Strait, a physicist at Lawrence Berkeley National Laboratory and the project director for CMB-S4.

“The mismatch between the two theories—one that applies at the very largest scales in the universe and the other at the very smallest—has been plaguing scientists for decades,” Strait said.


Smart glasses with new ‘acoustic touch’ feature may help people who are blind or have low vision recognize and reach objects

Wearable smart glasses technology is becoming popular in the assistive technology industry. The tech uses computer vision and other sensory information to translate the wearer’s surroundings into computer-synthesized speech.

Acoustic touch

The researchers explored the potential of a new technique known as “acoustic touch.” It is similar to human “echolocation*”—providing a “wearable spatial audio solution for assisting people who are blind in finding objects,” they explain.

Unlike traditional vision-aid systems, the new acoustic touch technique uses smart glasses to “sonify” objects, converting them into distinct auditory icons when they enter the device’s field of view, an approach inspired by human echolocation.*

This innovative approach employs head scanning and auditory icons to represent objects in the user’s field of view. It leverages head tracking and binaural (a type of spatial audio) rendering technology to offer high-quality auditory cues that assist users in pinpointing the location and identity of objects.
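
As a conceptual illustration of that idea, the sketch below maps an object’s bearing, relative to where the head is pointing, to left/right loudness and a repetition-rate cue. It is not the Foveated Audio Device implementation; the field of view, the panning law, and the repetition-rate cue are assumptions made for the example.

# Conceptual sketch of head-tracked auditory cueing: map an object's bearing,
# relative to where the head points, to left/right loudness and a repetition
# rate. NOT the Foveated Audio Device implementation; the field of view,
# panning law, and cue parameters are illustrative assumptions.
import math

def auditory_cue(object_bearing_deg, head_yaw_deg, fov_deg=120):
    """Return (left_gain, right_gain, repeat_hz) for one object, or None if
    the object is outside the assumed field of view."""
    rel = (object_bearing_deg - head_yaw_deg + 180) % 360 - 180  # -180..180
    if abs(rel) > fov_deg / 2:
        return None                      # object not "in view": stay silent
    # Constant-power pan: full left at -fov/2, centered at 0, full right at +fov/2.
    pan = rel / (fov_deg / 2)            # -1 .. +1
    theta = (pan + 1) * math.pi / 4      # 0 .. pi/2
    left_gain, right_gain = math.cos(theta), math.sin(theta)
    # Objects nearer the centre of view repeat their icon faster, guiding the
    # user to turn until the object is straight ahead.
    repeat_hz = 1.0 + 4.0 * (1 - abs(pan))
    return left_gain, right_gain, repeat_hz

# Example: a cup 30 degrees to the user's right while they face straight ahead.
print(auditory_cue(object_bearing_deg=30.0, head_yaw_deg=0.0))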

(It’s conceivable that generative AI methods could further increase the quality of audio representation of visual data.)

Evaluation study

“We developed a wearable ‘Foveated Audio Device’ to study the efficacy and usability of using acoustic touch to search, memorize, and reach items,” say researchers from the University of Technology Sydney and the University of Sydney, together with Sydney start-up ARIA Research.

“Our evaluation study involved 14 participants—7 blind or low-visioned and 7 blindfolded sighted (as a control group) participants,” they said. “We compared the wearable device to two idealized conditions: a verbal clock face description and a sequential audio presentation through external speakers. We found that the wearable device can effectively aid the recognition and reaching of an object.”

The researchers also observed that the device “does not significantly increase the user’s cognitive workload. These promising results suggest that acoustic touch can provide a wearable and effective method of sensory augmentation.”

* Watch How This Blind Man Uses Echo Location To Ride a Bike

Citation: Zhu, H. Y., Hossain, S. N., Jin, C., Singh, A. K., Duc Nguyen, M. T., Deverell, L., Nguyen, V., Gates, F. S., Fernandez, I. G., Melencio, M. V., Bell, R., & Lin, T. (2023). An investigation into the effectiveness of using acoustic touch to assist people who are blind. PLOS ONE, 18(10), e0290431. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0290431

Funding: This work was supported by the Australian Cooperative Research Centres Projects (CRC-P) Round 11 grant CRCPXI000007, ARIA Research, the University of Technology Sydney, and the University of Sydney. Received by C.J., V.N. and C.L. Website: https://business.gov.au/grants-and-programs/cooperative-research-centres-crc-grants


Future soft optical fibers could allow for exploring new treatments to block peripheral nerve pain

Engineers at MIT have developed experimental soft, implantable optical fibers that could deliver light to major nerves to explore causes and potential treatments for peripheral nerve disorders in animal models.

(Peripheral nerve pain—from sciatica, motor neuron disease, and general numbness and pain—can occur when nerves outside the brain and spinal cord are damaged, resulting in tingling, numbness, and pain in affected limbs. Peripheral neuropathy is estimated to affect more than 20 million people in the United States.) 

“Current devices used to study nerve disorders are made of stiff materials that constrain movement, so that we can’t really study spinal cord injury and recovery if pain is involved,” says Siyuan Rao, assistant professor of biomedical engineering at the University of Massachusetts.

Optogenetics

The study is based on optogenetics, an animal research technique (originally developed at MIT) that genetically modifies neurons in the brain to respond to light. Neuroscientists have applied optogenetics in animals to precisely trace the neural pathways underlying a range of brain disorders, information that has led to targeted therapies for these conditions. 

But peripheral nerves experience constant pushing and pulling from the surrounding muscles and tissues. Rigid silicon devices would constrain an animal’s natural movement and potentially cause tissue damage.  

Testing transparent hydrogel fiber

Their new design is a soft, stretchable, transparent fiber made from hydrogel—a rubbery, biocompatible, Jell-O-like mix of polymers and water.

The team tested the optical fibers in mice whose nerves were genetically modified to respond to blue light (to excite neural activity) or yellow light (to inhibit sciatic pain). 

“Now, people have a tool to study the diseases related to the peripheral nervous system, in very dynamic, natural, and unconstrained conditions,” said Xinyue Liu, an assistant professor at Michigan State University (MSU).

“We hope to help dissect mechanisms underlying pain in the peripheral nervous system. With time, our technology may help identify novel mechanistic therapies for chronic pain and other debilitating conditions such as nerve degeneration or injury.”

This research was supported, in part, by the National Institutes of Health, the National Science Foundation, the U.S. Army Research Office, the McGovern Institute for Brain Research, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the K. Lisa Yang Brain-Body Center, and the Brain and Behavior Research Foundation.

Details of the team’s new fibers were reported today (Oct. 19, 2023) in a study appearing in Nature Methods.

Citation: Liu, X., Rao, S., Chen, W., Felix, K., Ni, J., Sahasrabudhe, A., Lin, S., Wang, Q., Liu, Y., He, Z., Xu, J., Huang, S., Hong, E., Yau, T., Anikeeva, P., & Zhao, X. (2023). Fatigue-resistant hydrogel optical fibers enable peripheral nerve optogenetics during locomotion. Nature Methods, 1-8. https://doi.org/10.1038/s41592-023-02020-9


Wearable device could read or write data by just bending your finger

Researchers have invented an experimental wearable device that uses a material that generates power to create and store data when a finger bends—a promising step towards health monitoring and other uses.

Multifunctional devices normally require several layers of different materials, fabricated from nanomaterials with high precision, the researchers note.

The team, led by RMIT University and the University of Melbourne in collaboration with other Australian and international institutions, created the proof-of-concept device using a safe oxide of a low-temperature metal called bismuth.

Senior lead researcher Ali Zavabeti said the invention could be developed to create medical wearables that monitor vital signs and store personal data.

“The innovation was used in our experiments to write, erase and re-write images in nanoscale, so it could feasibly be developed to one day encode bank notes, original art or authentication services,” said Zavabeti, an engineer from RMIT and the University of Melbourne.

The team’s research is published in the journal Advanced Functional Materials.

“We tested natural motion behavior with the device attached to a finger joint, with an average output peak of about 1 volt,” Zavabeti said.

Memory functions

Lead author and PhD student Xiangyang Guo from RMIT said the team can print layers of bismuth oxide in just a few seconds.

“We fundamentally investigated this instant-printing technique for the first time, using low-melting point liquid metals,” said Guo.

The device, which was not worn by a user during these memory experiments, was able to perform the memory functions of “read,” “write” and “erase” for various images, writing and storing a logo and a symbol in a space that could fit 20 times within the width of a human hair.

Other possible uses include sensing and energy harvesting, he said. “The material can act as a semiconductor, meaning it can be used for computation. It is a nanogenerator, meaning it’s energy efficient, with a green-energy supply from environmental vibrations and mechanical movements.”

Guo also said bismuth oxide was likely to cause less skin irritation than silicon, and that it was durable and stretchable, so it can be integrated into wearable technologies.

The researchers plan to adapt their approach to other low-temperature liquid and solid metals and alloys that could be developed for personalized wearables.

The Australian Research Council and the National Computational Infrastructure funded the research. The team included researchers from the University of Toronto, Western Sydney University, University of Sydney, University of New South Wales and Australian National University.

The researchers’ peer-reviewed article, “Multi-Functional Atomically Thin Oxides from Bismuth Liquid Metal,” was published in the journal Advanced Functional Materials.

Citation: Guo, X., Nguyen, C. K., Syed, N., Ravindran, A., Islam, M. A., Filleter, T., Cao, K., Wang, Y., Mazumder, A., Xu, C., Walia, S., Ghasemian, M. B., Kalantar-Zadeh, K., Scholten, S. C., Robertson, I. O., Healey, A. J., Tetienne, P., Lu, T., Liu, Y., . . . Zavabeti, A. Multi-Functional Atomically Thin Oxides from Bismuth Liquid Metal. Advanced Functional Materials, 2307348. https://onlinelibrary.wiley.com/doi/full/10.1002/adfm.202307348 (open access)
