Speaking without vocal cords, thanks to a new AI-assisted wearable device

Bioengineers at the University of California, Los Angeles (UCLA) have invented a thin, flexible device that adheres to the neck and translates the muscle movements of the larynx into audible speech. The device is trained through machine learning to recognize which muscle movements correspond to which words. The self-powered technology could serve as a non-invasive tool for people who have lost the ability to speak because of vocal cord problems.

People with voice disorders, including those with pathological vocal cord conditions or those recovering from laryngeal cancer surgery, can often find it difficult or impossible to speak. That may soon change. A team of UCLA engineers has invented a soft, thin, stretchy device measuring just over one square inch that can be attached to the skin outside the throat to help people with dysfunctional vocal cords regain their voice function, according to a report in the journal Nature Communications.

The new bioelectric system, developed by Jun Chen, an assistant professor of bioengineering at the UCLA Samueli School of Engineering, and his colleagues, can detect movement in a person's larynx muscles and translate those signals into audible speech with the assistance of machine learning, at nearly 95 percent accuracy.

The breakthrough is the latest in the team's efforts to help people with disabilities. Chen's group previously developed a wearable glove capable of translating American Sign Language into English speech in real time, helping ASL users communicate with people who don't know how to sign.


The wearable technology is designed to be flexible enough to move with, and capture the activity of, the laryngeal muscles beneath the skin. The tiny patch-like device is made up of two components. The first, a self-powered sensing component, detects the signals generated by muscle movements and converts them into high-fidelity electrical signals, which a machine-learning algorithm then translates into speech signals. The second, an actuation component, turns those speech signals into the desired voice expression.

The sensing component relies on a soft magnetoelastic sensing mechanism developed by Chen's team in 2021, which detects changes in a magnetic field caused by mechanical forces, in this case the movement of laryngeal muscles. Serpentine induction coils embedded in the magnetoelastic layers help generate the high-fidelity electrical signals used for sensing.

Measuring 1.2 inches on each side, the device weighs about seven grams and is just 0.06 inches thick. With double-sided biocompatible tape, it can easily adhere to an individual's throat near the vocal cords and can be reused by reapplying tape as needed.

Voice disorders are prevalent across all ages and demographic groups; research has shown that nearly 30 percent of people will experience at least one such disorder in their lifetime. Yet with current therapeutic approaches, such as surgical interventions and voice therapy, voice recovery can stretch from three months to a year, and some invasive techniques require a significant period of mandatory postoperative voice rest.

Together, the device's two components and five layers allow it to turn muscle movement into electrical signals which, with the help of machine learning, are ultimately converted into speech signals and audible vocal expression. In their experiments, the researchers tested the wearable technology on eight healthy adults.
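The sense-then-speak flow described above can be sketched in code. The sketch below is purely illustrative, not the researchers' implementation: the function names, signal values, and word templates are all hypothetical, and a simple nearest-template matcher stands in for the team's machine-learning model.

```python
# Illustrative sketch of the two-component pipeline: a sensing stage that
# turns laryngeal muscle movement into an electrical signal, a stand-in
# machine-learning stage that maps the signal to a word, and an actuation
# stage that emits the chosen speech output. All names are hypothetical.

def sense(muscle_movement):
    """Magnetoelastic sensing: mechanical force alters the magnetic field,
    and embedded induction coils convert that change into a voltage trace.
    Here we simply scale the movement samples as a placeholder."""
    return [2.0 * m for m in muscle_movement]

# Hypothetical per-word signal templates the model was "trained" on.
TEMPLATES = {
    "hello": [0.0, 2.0, 4.0, 2.0],
    "yes":   [4.0, 4.0, 0.0, 0.0],
}

def classify(signal):
    """Stand-in for the machine-learning step: pick the template closest
    to the sensed signal (squared Euclidean distance)."""
    def distance(template):
        return sum((s - t) ** 2 for s, t in zip(signal, template))
    return min(TEMPLATES, key=lambda word: distance(TEMPLATES[word]))

def actuate(word):
    """Actuation component: emit the speech output for the predicted word."""
    return f"speak:{word}"

def pipeline(muscle_movement):
    return actuate(classify(sense(muscle_movement)))

print(pipeline([0.0, 1.0, 2.0, 1.0]))  # prints "speak:hello"
```

The key point the sketch captures is the separation of concerns: sensing and actuation are physical-layer components, while the word-level mapping lives entirely in the learned model between them.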
They collected data on laryngeal muscle movement and used a machine-learning algorithm to correlate the resulting signals with particular words, then selected a corresponding output voice signal through the device's actuation component. The model's overall prediction accuracy was 94.68 percent: the sensing mechanism recognized each participant's laryngeal movement signal and matched it to the sentence the participant wished to say, which the actuation component then amplified into a voice signal. Going forward, the research team plans to continue enlarging the device's vocabulary through machine learning and to test it in people with speech disorders.
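The evaluation described above, correlating signals to words and then scoring how often the prediction matches the intended word, can be sketched with a minimal stand-in model. Everything here is hypothetical: the data is made up, and a nearest-centroid classifier substitutes for the study's actual algorithm; only the accuracy arithmetic mirrors how a figure like 94.68 percent is computed.

```python
# Illustrative sketch: train a nearest-centroid classifier on hypothetical
# labeled signal traces, then score prediction accuracy on held-out trials.
from collections import defaultdict

def train(labeled_signals):
    """Average the training signals for each word into one centroid."""
    grouped = defaultdict(list)
    for word, signal in labeled_signals:
        grouped[word].append(signal)
    return {
        word: [sum(col) / len(col) for col in zip(*signals)]
        for word, signals in grouped.items()
    }

def predict(centroids, signal):
    """Return the word whose centroid is closest to the signal."""
    def dist(centroid):
        return sum((s - c) ** 2 for s, c in zip(signal, centroid))
    return min(centroids, key=lambda word: dist(centroids[word]))

# Made-up training and test trials (word, signal trace).
train_data = [
    ("yes", [1.0, 3.0, 1.0]), ("yes", [1.2, 2.8, 0.8]),
    ("no",  [3.0, 1.0, 3.0]), ("no",  [2.8, 1.2, 3.2]),
]
test_data = [("yes", [1.1, 2.9, 0.9]), ("no", [2.9, 1.1, 3.1])]

centroids = train(train_data)
correct = sum(predict(centroids, s) == w for w, s in test_data)
accuracy = 100.0 * correct / len(test_data)
print(f"prediction accuracy: {accuracy:.2f} percent")
```

On this toy data both held-out trials are classified correctly; the study's 94.68 percent figure is the same fraction-correct computation over its participants' trials.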

Source: UCLA