
AI & DNNs in Hearing Aids

Everything you need to know (but maybe were afraid to ask)

Do you feel like the world has exploded with phrases like Artificial Intelligence, Large Language Models, Deep Neural Networks, Machine Learning and the like? You’re not wrong!

While much of this technology has existed for years, the recent launches of AI tools that can do everything from writing an essay to finding cancer in MRI scans have the news media, governments, companies and everyday people enthralled with the possibilities (and a little concerned about how fast things are changing).

This kind of technology can get extremely sophisticated, but once you have a handle on the basic definitions and concepts, you’ll be able to confidently discuss these topics and explain them to your patients in everyday terms.

 

What's a DNN?

(And does size really matter?)

Deep Neural Networks 101

A Deep Neural Network, or DNN, is a type of Artificial Neural Network (ANN). Similar to individual neurons in the human brain, the nodes in an ANN work together to learn and solve problems based on data inputs.

A DNN has one or more hidden node layers between the input and output layers, meaning that, with the right training, it can learn to recognize patterns and solve complex problems, much as our brains do every day.

A DNN learns by training on data inputs through machine learning, the process a neural network uses to program (or teach) itself. DNNs can be trained on many different types of data, including images, text, videos, sound and more.

To train a DNN, the algorithm is shown many pairs of inputs and labeled outputs. The algorithm then must ‘learn’ for itself what rules can be used to arrive at the correct output for any given input through repeated attempts and continuous feedback about those attempts.
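
To make that attempt-and-feedback loop concrete, here is a minimal training sketch in Python using plain NumPy. The toy task (XOR) and every number in it are invented for illustration and have nothing to do with any real hearing aid DNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "many pairs of inputs and labeled outputs": a toy XOR task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 nodes between the input and output layers.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Attempt: run every input forward through the network.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Feedback: compare the attempt against the correct labels.
    error = output - y

    # Learn: nudge each weight to shrink the error (backpropagation).
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_hid;      b1 -= d_hid.sum(axis=0)

# After enough repetitions, the outputs typically approach the labels 0,1,1,0.
print(output.round(2))
```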

By having a large dataset with plenty of variability, the DNN will eventually be able to generalize—in other words, to accurately handle data it has never seen in training. 

For example, if the dataset includes 10 pictures of horses, the DNN might correctly identify a new picture of a horse 5% of the time. The other 95% of the time it might think it’s looking at a dog or a table. If the training dataset contains 10,000 pictures of horses, then the DNN is likely to identify a new picture of a horse almost 100% of the time.

So, just like a human brain, the more information the DNN has been trained on, the better it is able to recognize patterns and make decisions. 

Are you still with us? Not too complicated, right? So those are the basics. Now to the fun stuff.

 

Why DNN size matters

Yes, bigger IS better

Remember when we learned that a DNN has three types of node layers: the Input layer, the Output layer, and the Hidden layer(s) in between? Those hidden layers are where the magic happens. In general, the more layers and the more nodes per layer, the more calculations or “thinking” the DNN can do between the Input and Output layers. This means the DNN can handle more, and more complex, tasks.
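
As a back-of-the-envelope illustration (all layer sizes here are invented), this short Python sketch counts the connections between consecutive layers; each connection is roughly one multiply-accumulate of “thinking” per input:

```python
def connections(layer_sizes):
    """Number of weights between consecutive layers ~ work per input."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

small = [16, 32, 4]              # input, one narrow hidden layer, output
large = [16, 256, 256, 256, 4]   # same input/output, three wide hidden layers

print(connections(small))  # 640
print(connections(large))  # 136192 -- hundreds of times more "thinking"
```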

The specific way the hidden layers in a DNN are structured and connected with each other also plays a crucial role in how it functions. Because of this, DNN structures can be optimized to perform certain tasks better than others.

You could think of it like the brains of animals: the larger the brain and the more densely packed the neurons, the “smarter” the creature. A small DNN with only a single layer might equate to the brain of a small frog, whereas a large DNN with many layers would be more like a human’s brain.

Which is better equipped to figure out the intent of a listener – the frog or the human? Higher-level thinking and problem solving in a DNN is largely a mathematical outcome of more nodes doing more calculations for better results.

Additionally, we can train larger DNNs on larger datasets, so they can draw on more information to make better decisions. The frog from earlier won’t do nearly as well learning the alphabet as one of us human-brained folks.

A large DNN with an optimal architecture will have the ability to leverage far more data for its calculations than a small DNN with limited capacity. 

DNNs in Hearing Aids

There are currently three ways hearing aids use DNNs.  

The first, and most common, way a hearing aid can use a DNN is to classify the sound scene. The manufacturer trains the sound scene classifier on different sounds to teach the hearing aid when to activate or deactivate features.

To use the example from above, this is teaching the hearing aid to know that a horse is a horse and not a table; or, in a more apt hearing aid example, teaching it to distinguish speech in noise from noise alone.
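
In sketch form, the classifier’s label simply steers which features get switched on. Here is a hypothetical Python sketch; the scene labels and feature names are invented, not any manufacturer’s real API:

```python
def steer_features(scene_label):
    """Map a classified sound scene to a set of feature switches."""
    features = {"directional_mic": False, "noise_reduction": False}
    if scene_label == "speech_in_noise":
        features["directional_mic"] = True   # focus on the talker
        features["noise_reduction"] = True   # and tame the background
    elif scene_label == "noise_only":
        features["noise_reduction"] = True   # comfort; nothing to focus on
    # a "quiet" scene leaves everything off for the most natural sound
    return features

print(steer_features("speech_in_noise"))
```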

Another way hearing aids can use DNNs is for light noise reduction. This is not much different from traditional noise reduction technology: the DNN is taught which sounds are unwanted noise, and the hearing aid tries to reduce them by selectively decreasing gain.
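
A rough sketch of that idea in Python, with invented per-channel SNR numbers and thresholds; real products tune these values very differently:

```python
import numpy as np

# Estimated speech-to-noise ratio (dB) in four frequency channels.
channel_snr_db = np.array([15.0, 8.0, 2.0, -3.0])

# Turn gain down only in channels where the estimate looks noisy,
# capping the reduction at 10 dB so speech cues are not destroyed.
gain_db = np.zeros_like(channel_snr_db)
noisy = channel_snr_db < 5.0
gain_db[noisy] = -np.minimum(10.0, 5.0 - channel_snr_db[noisy])

print(gain_db)  # [ 0.  0. -3. -8.]
```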

The third, and most distinctive, way hearing aids use a DNN is to instantly separate speech from noise. This type of DNN frees the hearing aid from relying on narrow directionality and allows the listener to hear multiple speakers from any direction. It works by identifying and isolating the speech so it can be preserved and reinforced.
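
Conceptually this resembles mask-based source separation. In the hypothetical Python sketch below, speech_mask_dnn is a placeholder standing in for the trained network, which a real system would have learned from data:

```python
import numpy as np

def speech_mask_dnn(mixture):
    """Placeholder for a trained DNN that scores each time-frequency
    cell from 0 (pure noise) to 1 (pure speech)."""
    return np.clip(mixture / (mixture + 1.0), 0.0, 1.0)

# Magnitude spectrogram of the noisy mixture (random stand-in data).
mixture = np.abs(np.random.default_rng(0).normal(size=(128, 64)))

mask = speech_mask_dnn(mixture)    # where is the speech?
speech_estimate = mask * mixture   # keep speech, attenuate the rest
```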

Not all DNNs are the same

Small DNNs that are used primarily to classify sound scenes need to be always on, because that’s how they determine which programs and features to switch to when the user’s environment changes.

The large, uniquely-structured DNN in Infinio Sphere is activated only as needed to do real-time sound cleaning, such as when the user walks into a busy restaurant.
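
The split between an always-on classifier and an on-demand denoiser might look like this in Python; the function names and threshold are invented to show the control flow, not Phonak’s actual implementation:

```python
def process_block(audio_block, classify_scene, big_denoiser, threshold=0.5):
    """One audio block through the two-DNN pipeline."""
    scene = classify_scene(audio_block)    # small DNN: always running
    if scene["noise_level"] > threshold:
        # The large DNN wakes up only when the scene is actually noisy.
        audio_block = big_denoiser(audio_block)
    return audio_block
```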

Other brands also use their small DNNs to help with noise management, but with limited patient benefit, since far fewer nodes are doing the “thinking.”


AI in Hearing Aids

AI in hearing aids isn’t a feature that you can directly measure, but you can look at how AI-powered features offer patient benefits. Anyone can say they have AI and use it in their hearing aids, but the real question to ask is: What kind of benefits does it offer the provider and the patient? 

The answer can be very different among different brands.

How some brands are using AI

  • Sound scene classification/feature steering

    Some manufacturers use small DNNs to classify the listening environment or detect when speech is present. DNNs are one tool to accomplish this task, but they aren’t necessarily better than other approaches.

  • Traditional noise reduction

    Another way some brands use DNNs in hearing aids is to drive digital noise reduction. In this application, the DNN helps the hearing aid to apply channel-specific gain reduction to make listening more comfortable, but this approach cannot effectively isolate speech from noise.

How Phonak uses AI

  • Sound scene classification/feature steering

    AutoSense OS uses AI and machine learning to accurately capture and analyze the sound environment, then precisely blends feature elements from multiple programs in real time to provide a seamless listening experience.

    Phonak hearing aids began using AutoSense OS in 2016, and with each passing year this technology has been further developed and refined. AutoSense 6.0 came to market with the Infinio platform in August 2024, and leverages the incredibly powerful ERA chip to scan the user’s environment 700 times per second – a full scan roughly every 1.4 milliseconds – for real-time adjustments.

  • Instant separation of speech and noise in all directions 

    The DEEPSONIC AI chip uses a large, highly trained DNN to separate and remove background noise from speech for a listening experience that can WOW users. This is how Infinio Sphere handles noise in a unique and groundbreaking way.

    To grasp just how fast DEEPSONIC makes calculations and processes data: it can perform 7.7 billion operations every single second.

    This powerful technology represents a leap forward, putting Infinio Sphere with Spheric Speech Clarity into an entirely new category of hearing solutions beyond anything else available on the market today.

 

Common Myths

Have you heard any of these? 

Look, somebody will always have something to say. But just like Infinio, we will do our best to separate the noise from what you want to hear. 

Myth: “AI in hearing aids is brand new.”

Wrong. Phonak has been innovating in the AI space for over 25 years and released the first AI-based Phonak solution in 1999! AutoSense OS was released in 2016. What is new is how Infinio Sphere uses a dedicated DNN processing chip to deliver a level of clean, clear speech far beyond any existing hearing technology.

Myth: “Chips like DEEPSONIC are nothing new.”

Not really. The industry has used hybrid chips for years, but the DEEPSONIC chip in Audéo Infinio Sphere is a completely different architecture from the hearing aid chips of the past. Developed exclusively by Sonova to enhance speech in noise beyond any other available technology, this dedicated AI chip houses a DNN that provides next-level noise reduction. There is no comparable chip available in any industry.

Myth: “Other hearing aids already offer the same technology.”

Nope. Infinio Sphere is the only hearing aid in the industry using a dedicated AI chip to separate speech from noise. The DNN in the DEEPSONIC chip has an architecture optimized for speech enhancement and has been trained on over 22 million sound samples, which enables the AI to correctly identify and reduce background noise.

Myth: “A dedicated AI chip will drain the battery before the day is over.”

Incorrect. While a dedicated DNN chip does take more power to run than a small DNN integrated into a processing chip, these devices do last a full day for the patient. In fact, our data shows that, on average, Sphere wearers use their hearing aids even longer each day than people wearing our previous platform. And let’s be clear: the benefits of powerful AI-driven technology made possible by a dedicated DNN chip simply cannot be matched by any other existing hearing aid technology.

Myth: “A DNN has to be always on to work.”

Nope. Hearing aids use DNNs for different reasons. When used to identify the listening situation, the DNN needs to always be on. Phonak devices use AutoSense OS, which is driven by machine learning, to identify and automatically adjust to the sound environment. With Sphere, the dedicated AI chip uses a large DNN to instantly separate speech from noise, and this is only active when there is background noise. You wouldn’t want all the lights in your house permanently set to dim if you were using one room that needed bright light, right?

Myth: “Spheric Speech Clarity will run until it drains the battery.”

Wrong. That would not be a great experience for the patient, would it? You and your patient can control the limits on how long Spheric Speech Clarity can be active from a single charge. At default settings, there is enough battery for a full day of multi-use listening for your patient.