While hearing technology has improved dramatically in the past several decades, some listening situations remain especially challenging. With healthy hearing, the auditory system has a complex, instantaneous ability to prioritize some sounds over others, making it easier to focus on a single conversation or listen for specific sound cues. Noisy environments, by contrast, are still the hardest for hearing aids to adapt to.
When a hearing aid encounters a multitude of simultaneous sounds, the effect for the listener can still be confusing. Parsing and prioritizing individual sounds in a noisy situation requires advanced sound processing, something our brains developed naturally and that scientists are only now catching up to.
What Makes Noisy Places Complex?
While cutting-edge hearing aids are getting better and better at selectively amplifying sounds, noisy environments have been one of the hardest puzzles to crack. A person with healthy hearing may sit down in a busy restaurant and naturally hear a friend's conversation over the clank of dishes and silverware; a person with hearing loss faces a much bigger challenge.
When hearing loss is left untreated, noisy places can be the hardest and most anxiety-producing to visit. Significant speech cues sound muffled and go unrecognized, blending into the overall soundscape and becoming indistinguishable from background noise. This makes it difficult to carry on conversations or pick out distinct sound cues in noisy settings.
Even if a person with hearing loss is using hearing aids, busy sound environments may still be challenging. Although sound processing has improved, hearing aids often amplify insignificant background sounds alongside the speech cues they are trying to prioritize.
New Research in Denmark
At the Audio Analysis Lab at Aalborg University in Denmark, researchers are making new strides in sound processing for noisy environments. Mathew Kavalekalam, a PhD student at the university, has been developing software that gradually learns to prioritize speech and other significant sounds through repeated training.
The basis for the new sound processing software is a programmed model of how the human body generates speech. Kavalekalam created a digital model of speech production, taking into account the way elements like the lungs and diaphragm interact with the tongue, lips, and teeth. With this working computerized understanding of speech, he then sought to expand the software's capacity for speech recognition.
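The article does not publish Kavalekalam's model, but the classic "source-filter" idea behind speech-production models like this can be sketched in a few lines: a pulsed airflow from the lungs and vocal folds (the source) is shaped by resonances of the mouth, tongue, and lips (the filter). The function names, pitch, and formant values below are illustrative assumptions, not the actual research code.

```python
import numpy as np

def glottal_source(f0, fs, dur):
    """Impulse train at the pitch frequency -- a crude stand-in for the
    airflow pulses driven by the lungs through the vocal folds."""
    n = int(fs * dur)
    e = np.zeros(n)
    period = int(fs / f0)
    e[::period] = 1.0
    return e

def vocal_tract_filter(excitation, formants, fs):
    """Shape the excitation with resonances ("formants") modeling the
    mouth cavity, tongue, and lips -- a cascade of two-pole resonators,
    a standard all-pole source-filter approximation."""
    y = excitation
    for f in formants:
        r = 0.97                           # pole radius sets resonance width
        theta = 2 * np.pi * f / fs
        a1, a2 = -2 * r * np.cos(theta), r * r
        out = np.zeros_like(y)
        for n in range(len(y)):            # y[n] = x[n] - a1*y[n-1] - a2*y[n-2]
            out[n] = y[n]
            if n >= 1:
                out[n] -= a1 * out[n - 1]
            if n >= 2:
                out[n] -= a2 * out[n - 2]
        y = out
    return y

fs = 8000
e = glottal_source(f0=120, fs=fs, dur=0.05)                  # 120 Hz pitch, 50 ms
vowel = vocal_tract_filter(e, formants=[700, 1200], fs=fs)   # rough vowel-like formants
```

Chaining a source and a filter this way is what lets such a model describe *any* voice rather than one recorded speaker, which is why it is a useful foundation for recognizing speech in general.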
Kavalekalam's algorithm is first trained on high-priority sounds that are distinctly louder than the background noise. The same sounds are then repeatedly played for the software while the volume and presence of the background noise are increased. Through this process, Kavalekalam gradually “trains” the software to focus on some sounds over others, much the way a healthy auditory system does.
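As a rough illustration of that training schedule, here is a minimal sketch of mixing a clean signal with noise at progressively harder signal-to-noise ratios (SNRs). The stand-in signals, the SNR steps, and the commented-out training call are all assumptions for illustration; the actual software is not described in enough detail here to reproduce.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db,
    then add it to the speech."""
    speech_pow = np.mean(speech ** 2)
    noise_pow = np.mean(noise ** 2)
    target_noise_pow = speech_pow / (10 ** (snr_db / 10))
    scaled = noise * np.sqrt(target_noise_pow / noise_pow)
    return speech + scaled

rng = np.random.default_rng(0)
t = np.arange(8000) / 8000
speech = np.sin(2 * np.pi * 200 * t)      # stand-in for a clean speech clip
noise = rng.standard_normal(8000)         # stand-in for restaurant babble

# Curriculum: start with speech clearly louder than the noise (high SNR),
# then retrain at progressively lower, harder SNRs.
for snr_db in [20, 10, 5, 0]:
    noisy = mix_at_snr(speech, noise, snr_db)
    # model.train_step(noisy, speech)     # hypothetical training call
```

Starting easy and ramping up difficulty this way is a common training strategy, since a model that first learns what clean speech looks like has a reference to hold on to as the noise grows.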
At the end of this training, the audio processing software was able to much more accurately emphasize important speech and sound, possibly being the start of a new chapter in hearing technology.
The new software is just starting to flex its muscles: in a small 10-participant study, the algorithm improved understanding of speech in complex noise environments by 15%.
Kavalekalam tested subjects by asking them to follow simple verbal instructions embedded in dense background noise. Participants followed the directions correctly 15% more often with the sound processing algorithm than without it. The success highlights how adaptable and nuanced sound processing technology is becoming, as well as how our understanding of human hearing is advancing.
A Pressing Challenge
It may be a while before Kavalekalam's work is part of a standard hearing aid. While sound processing that can navigate noisy environments is a pressing concern, hearing aids also need to process sound instantaneously on tiny, power-limited hardware. It is currently unclear whether the new algorithm's processing demands can be made to fit within those constraints.
Developing algorithms that learn how to prioritize sound may nonetheless be a great new direction for hearing technology. With hearing aids at the forefront of smart, efficient microprocessing, advances have happened in leaps and bounds. The encouraging trial results can also pave the way for new funding and development, pushing hearing software toward a solution that may be closer to natural, healthy hearing than was ever thought possible.
Visit Us at Atlanta Hearing Doctor
Difficulty with speech recognition is one of the most common signs of hearing loss. If you have been struggling to keep up with conversation, contact us at Atlanta Hearing Doctor for a hearing test and consultation today.