Digital sound processing techniques such as beamforming have done wonders to improve hearing aids, allowing them to do more than simply amplify sound for the hearing impaired. But even these new technologies still fall short at times: how is the device supposed to know which person you want to listen to when you’re in a noisy setting?
A research team led by scientists from Columbia University has conducted experiments that could provide a whole new level of assistance. The team has developed a way to use a listener’s neural signals to select a specific person’s voice from a mix of voices and then amplify just that one voice. This differs from beamforming in that it works with a single audio channel and does not require an array of two or more microphones. By recording a subject’s brain activity while the subject listens to a specific voice, the system learns to recognize that pattern and can then switch to extracting the target voice from the mix of sounds. The research was done using invasive neural sensors, but the scientists believe the approach can work with non-invasive brain inputs as well. The system has to be trained for each specific voice it is to recognize, which could result in large data storage requirements, but these could easily be handled by a typical smartphone linked wirelessly to the hearing devices.
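The core idea described above — compare a signal decoded from the listener’s brain against each candidate voice in the mix, then boost the best match — can be sketched in a few lines. This is only an illustrative toy, not the researchers’ method: the `amplify_attended` function, the use of a crude amplitude envelope, and the assumption that the neural recording has already been turned into a reconstructed stimulus envelope are all simplifications for demonstration.

```python
import numpy as np

def amplify_attended(separated_voices, neural_envelope, gain=4.0):
    """Toy attention decoder: pick the separated voice whose amplitude
    envelope best correlates with an envelope reconstructed from neural
    recordings, then boost that voice in the remix.

    separated_voices: list of 1-D arrays (voices already separated from
        the single-channel mix -- the separation step is assumed here)
    neural_envelope: 1-D array standing in for a stimulus envelope
        decoded from brain activity (an assumption for this sketch)
    """
    def envelope(x):
        return np.abs(x)  # crude amplitude envelope

    # Correlate each voice's envelope with the decoded neural envelope.
    scores = [np.corrcoef(envelope(v), neural_envelope)[0, 1]
              for v in separated_voices]
    target = int(np.argmax(scores))

    # Remix: amplify the attended voice, pass the others through.
    remix = sum(gain * v if i == target else v
                for i, v in enumerate(separated_voices))
    return target, remix

# Synthetic demo: two "voices" as modulated noise; the neural envelope
# tracks voice 0, as if the listener were attending to that speaker.
rng = np.random.default_rng(0)
n = 8000
env0 = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * np.arange(n) / n)
env1 = 0.5 + 0.5 * np.cos(2 * np.pi * 5 * np.arange(n) / n)
voice0 = env0 * rng.standard_normal(n)
voice1 = env1 * rng.standard_normal(n)
neural = env0 + 0.1 * rng.standard_normal(n)  # noisy decoded envelope

target, remix = amplify_attended([voice0, voice1], neural)
print("attended voice:", target)
```

A real system would replace both stubs with heavy machinery — a neural network to separate the voices from one channel, and a trained decoder mapping brain recordings to a stimulus representation — but the selection-by-correlation step is the part the listener’s attention controls.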
While this is still a lab experiment, it points to a future in which hearing-impaired users could switch their attention to a specific speaker simply by thinking about it. Coupled with other digital sound processing technologies, this could bring additional clarity and convenience to those who have difficulty hearing in noisy settings.