One of the challenges facing science fiction writers is coming up with ways for alien species to understand each other. One fictional device is a machine that reads an individual’s thoughts and converts them into audible speech. And now that idea is edging closer to scientific reality.
Many researchers are working on brain-machine interfaces, but most of that work focuses on controlling mechanical objects such as robots or powered prosthetic limbs. Scientists at the University of California San Francisco have taken a different path: a system that converts brain activity into natural-sounding human speech.
Rather than trying to “read the mind” of the subject and detect discrete words, the researchers built a virtual vocal system including the larynx, throat, tongue, and lips. Using sensors to record brain activity while patients read prepared text aloud, they reverse-engineered the vocal-tract movements required to make those sounds and matched them to the brain signals that produced them. From those mappings they then built a synthesizer that recreates the sounds in the patient’s own voice.
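To make the two-stage idea concrete, here is a minimal sketch in Python. It stands in for the published approach only loosely: the real system used recurrent neural networks and intracranial recordings, whereas this sketch uses plain ridge regression on random placeholder arrays, and every name, shape, and feature count is an assumption for illustration.

```python
# Minimal sketch of a two-stage "brain -> articulation -> sound" decoder.
# The UCSF system used recurrent neural networks and a vocal-tract model;
# here, simple ridge regressions and random placeholder arrays stand in.
# All array shapes, channel counts, and variable names are assumptions.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in data recorded while a patient reads text aloud.
n_frames = 1000                                   # time steps
neural = rng.normal(size=(n_frames, 256))         # e.g. 256 recording channels
articulation = rng.normal(size=(n_frames, 33))    # vocal-tract movement features
acoustics = rng.normal(size=(n_frames, 32))       # spectral features of the audio

# Stage 1: decode brain activity into articulatory movements
# (lips, tongue, jaw, larynx).
brain_to_articulation = Ridge(alpha=1.0).fit(neural, articulation)

# Stage 2: map articulatory movements to acoustic features that a
# vocoder could turn back into audible speech in the patient's voice.
articulation_to_sound = Ridge(alpha=1.0).fit(articulation, acoustics)

# At synthesis time, chain the two stages on new brain activity.
new_neural = rng.normal(size=(10, 256))
predicted_movements = brain_to_articulation.predict(new_neural)
predicted_acoustics = articulation_to_sound.predict(predicted_movements)
print(predicted_acoustics.shape)  # (10, 32) frames of synthesizer input
```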
Crowd-sourced volunteers transcribed the synthesized recordings. With a limited vocabulary of just 25 words to choose from, 69% of the transcriptions were perfectly accurate; doubling the list of possible words to 50 dropped the rate of perfect transcription to 43%. Ultimately, this technology could restore speech to patients who have lost the ability to speak because of neurological impairment or other injury.
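As a rough illustration of what “perfectly accurate” means here, the sketch below counts a transcription as correct only when it matches the spoken sentence word for word; the example sentences are invented, not taken from the study.

```python
# Toy version of the closed-vocabulary evaluation: a trial counts only if
# the listener's transcription matches the spoken sentence exactly.
# The sentences below are made up for illustration.

spoken = [
    "the bird is flying",
    "she wants more water",
    "he can see the dog",
]
transcribed = [
    "the bird is flying",     # exact match
    "she wants more water",   # exact match
    "he can see a dog",       # one word wrong -> not a perfect transcription
]

perfect = sum(s == t for s, t in zip(spoken, transcribed))
rate = perfect / len(spoken)
print(f"Perfectly transcribed: {rate:.0%}")  # 67% in this toy example
```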
Our bodies are complex systems, but scientists are making great strides in reconnecting our brains to our bodies in ways that could transform the lives of millions.