Prof. Sharon Gannot and colleagues are developing an advanced hearing aid

A Bar Ilan project funded by the Ministry of Science and Technology brings together experts in signal processing, neuroscience, and deep learning in an attempt to build an advanced hearing aid that can focus hearing on whichever speaker the brain is attending to.

Modern living is full of stimuli fighting for our attention and posing significant challenges to the brain’s perceptual systems. Focusing our attention on a single speaker while ignoring competing distractions is a difficult challenge, particularly for people with hearing impairments. Technological solutions that separate speech sources, particularly those using microphone arrays, have advanced significantly in recent years and are already very helpful to people with hearing impairments. But what happens in, say, a cocktail party scenario: several speakers in one room, and we want to direct our attention to one particular speaker? For people with normal hearing, the brain directs attention to the right speaker. But what about people with hearing impairments who use technological aids?

This is the problem currently explored by a research team at Bar Ilan: Prof. Sharon Gannot of the Faculty of Engineering, who specializes in processing speech signals; Prof. Jacob Goldberger, also of the Faculty of Engineering, who specializes in deep learning; and Dr. Elana Zion-Golumbic of the Brain Research Center, who specializes in the relationship between brainwaves and hearing. The goal, says Prof. Gannot, is to develop innovative algorithms that significantly improve comprehension of the desired speaker in complex acoustic environments. “Most hearing aids on the market today try to separate the speaker standing in front of the hearing-aid wearer from the complex of background sounds and noises. Our goal is to understand who the person is listening to and direct the device accordingly. To do that, we have to ‘look into their brain,’” he explains. “This can be done with EEG, which in laboratory conditions uses 64 electrodes to monitor the brain’s electrical activity. There are areas of the brain where hearing and attention are processed; if we can analyze their activity and identify which of the speakers the aid wearer is listening to, we can focus the signal processing algorithms on extracting that speaker.”
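The article does not detail the team’s decoding method, but a widely used approach in the auditory attention decoding literature is stimulus reconstruction: a linear “backward” model maps multi-channel EEG to an estimate of the attended speech envelope, and whichever speaker’s actual envelope correlates best with that estimate is taken to be the attended one. The sketch below is a minimal Python/NumPy illustration of that idea; the function names, the ridge-regularized least-squares fit, and the single-lag decoder are simplifying assumptions, not the project’s actual algorithms.

```python
import numpy as np

def train_decoder(eeg, attended_envelope, reg=1e-3):
    """Fit a linear backward model mapping EEG channels to the speech
    envelope of the attended speaker (ridge-regularized least squares).

    eeg:               (n_samples, n_channels) training EEG
    attended_envelope: (n_samples,) envelope the listener attended to
    """
    X = eeg
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]),
                           X.T @ attended_envelope)

def decode_attended_speaker(eeg, envelopes, decoder):
    """Guess which speaker the listener is attending to.

    eeg:       (n_samples, n_channels) EEG segment
    envelopes: list of (n_samples,) candidate speech envelopes
    decoder:   (n_channels,) backward model from train_decoder
    """
    reconstructed = eeg @ decoder  # EEG-based estimate of the attended envelope
    # The attended speaker typically yields the highest correlation
    # between their envelope and the reconstruction.
    scores = [np.corrcoef(reconstructed, env)[0, 1] for env in envelopes]
    return int(np.argmax(scores))
```

In practice such decoders use multiple time lags per channel and must be trained on data where the attended speaker is known; the project’s use of deep learning presumably replaces this linear stage with a richer, nonlinear interface.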

It is a complex project that requires the development of algorithms that can simultaneously receive audio (from microphones in the hearing aid) and brain information (EEG), decide where the listener wishes to direct their attention, and feed the right information into the ear. “Modern hearing aids consist of a small ear-mounted device with 2-3 microphones, and the wearer will usually have one device in each ear. All in all, we have 4-6 microphones that we can use to design a beamformer capable of extracting the desired speaker while maintaining the spatial information of the positioning of all speakers in the acoustic scene,” says Gannot of his part in the project. “The key question is which of all the speakers to listen to. For that, we need an interface to the information extracted from brainwaves, and we create this interface using deep learning. This allows us to offer a more advanced, sophisticated hearing aid than the ones currently on the market. Ours can do three things: extract the desired speaker (the one the hearing-aid wearer is listening to) from a mixture of speakers so that they are heard loud and clear; keep the other speaker(s) at a lower volume, so attention can easily be switched to them; and maintain the spatial information of all speakers in the scene, meaning that even after processing, a speaker to the right of the hearing-aid wearer remains at the same angle to the right, and likewise on the left. All this has to be executed in a way that allows for rapid changes, so that listening attention can be switched online as required.”
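As a rough illustration of the audio side, the sketch below implements the classic delay-and-sum beamformer, the simplest member of the family Gannot refers to, together with a remix step that keeps the attended speaker dominant while leaving competitors audible. The known per-microphone delays, the fixed background gain, and the function names are assumptions made for illustration; the project’s actual beamformers and binaural processing are considerably more sophisticated.

```python
import numpy as np

def delay_and_sum(mic_signals, delays_sec, fs):
    """Steer the array toward one speaker by aligning and averaging.

    mic_signals: (n_mics, n_samples) time-domain microphone signals
    delays_sec:  per-microphone arrival delays (seconds) for the target
                 direction, assumed known here (e.g., from a localizer)
    fs:          sampling rate in Hz
    """
    n_mics, n_samples = mic_signals.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(mic_signals, axis=1)
    # Undo each microphone's delay with a phase shift, then average:
    # signals from the target direction add coherently, others do not.
    steering = np.exp(2j * np.pi * freqs[None, :] * np.asarray(delays_sec)[:, None])
    return np.fft.irfft((spectra * steering).mean(axis=0), n=n_samples)

def remix(attended, others, background_gain=0.3):
    """Keep the attended speaker dominant but leave the rest audible,
    so attention can still be redirected to them."""
    return attended + background_gain * sum(others)
```

Preserving spatial information, the third requirement, would in practice mean producing separate left and right outputs whose interaural time and level differences match each speaker’s true direction; the single-channel remix above sidesteps that for brevity.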

The project was launched in February of last year and received a three-year grant of NIS 1.8 million from the Ministry of Science and Technology. Prof. Gannot and his collaborators hope to create a highly sophisticated hearing aid that, in addition to microphones, is also equipped with two EEG sensors. “At a later stage,” says Gannot, “we would like to add another improvement that monitors eye movements, that is, the direction in which we are looking.” “At the end of the day,” he adds, “our goal is to create a hearing aid that best imitates natural hearing.”

Last Updated: 14/02/2021