UCLA engineers have invented a soft, thin, stretchy device measuring just over 1 square inch that can be attached to the skin outside the throat to help people with dysfunctional vocal cords regain their voice function.
The development is described in the open-access journal Nature Communications.
The new bioelectric system, developed by Jun Chen, an assistant professor of bioengineering at the UCLA Samueli School of Engineering, and his colleagues, can detect movement in a person’s larynx muscles and, with the assistance of machine-learning technology, translate those signals into audible speech with nearly 95% accuracy.
How it works
The tiny new patch-like device is made up of two components.
A self-powered sensing component detects signals generated by muscle movements and converts them into high-fidelity, analyzable electrical signals, which a machine-learning algorithm then translates into speech signals. An actuation component turns those speech signals into the desired voice output.
The device uses a soft magnetoelastic sensing mechanism, developed by Chen’s team in 2021, to detect changes in a magnetic field caused by mechanical forces, in this case the movement of the laryngeal muscles. Serpentine induction coils embedded in the magnetoelastic layers convert those field changes into high-fidelity electrical signals for sensing.
With double-sided biocompatible tape, the device can easily adhere to an individual’s throat near the location of the vocal cords and can be reused by reapplying tape as needed.
Machine learning
In their experiments, the researchers tested the wearable technology on eight healthy adults. They collected data on laryngeal muscle movement and used a machine-learning algorithm to correlate the resulting signals to certain words. They then selected a corresponding output voice signal through the device’s actuation component.
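The correlation step described above can be illustrated with a toy sketch. The sentence templates, feature values, and the nearest-centroid matching rule below are all invented for illustration; the actual system uses a trained machine-learning model on magnetoelastic sensor signals, not this simple distance comparison.

```python
# Hypothetical sketch: matching laryngeal-signal features to sentences.
# A toy nearest-centroid rule stands in for the real trained model.
import math

# Invented "feature vectors" extracted from muscle-movement signals,
# one template per known sentence (values are illustrative only).
TEMPLATES = {
    "Hi, Rachel, how are you doing today?": [0.9, 0.1, 0.4],
    "I love you!": [0.2, 0.8, 0.6],
}

def classify(features):
    """Return the sentence whose template is closest to `features`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda s: dist(TEMPLATES[s], features))

# A noisy reading near the "I love you!" template still matches it:
print(classify([0.25, 0.75, 0.55]))  # → I love you!
```

Once a sentence is identified this way, the actuation component can play back the corresponding voice signal, which is the role of the output stage described above.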
The research team demonstrated the system’s accuracy by having the participants pronounce five sentences—both aloud and voicelessly—including “Hi, Rachel, how are you doing today?” and “I love you!”
The model’s overall prediction accuracy was 94.68%. The actuation component amplified the participants’ voice signals, demonstrating that the sensing mechanism recognized each laryngeal movement signal and matched it to the sentence the participant wished to say.
The research team plans to continue enlarging the vocabulary of the device through machine learning and to test it in people with speech disorders.
The team previously developed a wearable glove capable of translating American Sign Language into English speech in real time to help users of ASL communicate with those who don’t know how to sign.
The research was funded by the National Institutes of Health, the U.S. Office of Naval Research, the American Heart Association, Brain & Behavior Research Foundation, the UCLA Clinical and Translational Science Institute, and the UCLA Samueli School of Engineering.
Citation: Che, Z., Wan, X., Xu, J. et al. Speaking without vocal folds using a machine-learning-assisted wearable sensing-actuation system. Nat Commun 15, 1873 (2024). https://www.nature.com/articles/s41467-024-45915-7 (open-access)
Thumbnail image credit: Jun Chen Lab/UCLA