AI models show striking likeness to human hearing ability in new study

MIT scientists are developing computer models to mimic how our ears and brain process sound, aiming to enhance hearing aid technology.

Sejal Sharma

Scientists use machine learning to build deep neural networks (DNNs) that perform tasks biological systems also carry out, hoping these models will reproduce human-like behavior and brain responses.

DNNs have already shown success in vision-related tasks, and in auditory modeling, networks trained on a range of tasks and sounds have demonstrated an ability to predict brain responses in the auditory cortex, the part of the brain that processes auditory information.

However, how well they do so varies with the training task and the network architecture.
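
To give a concrete sense of what "predicting brain responses" typically involves, the sketch below fits a regularized linear mapping from one model layer's activations to recorded responses and scores it on held-out sounds. It is a generic illustration with made-up data, not the pipeline used in the MIT study.

```python
# Illustrative only: a common way to ask whether a DNN layer "predicts"
# brain responses is to fit a regularized linear mapping from the layer's
# activations to measured responses and evaluate it on held-out sounds.
# All data and shapes below are hypothetical.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: 200 sounds, a 512-dimensional activation vector per
# sound from one model layer, and responses from 100 auditory-cortex voxels.
layer_activations = rng.standard_normal((200, 512))
voxel_responses = rng.standard_normal((200, 100))

X_train, X_test, y_train, y_test = train_test_split(
    layer_activations, voxel_responses, test_size=0.2, random_state=0
)

# Ridge regression with a cross-validated regularization strength.
mapping = RidgeCV(alphas=np.logspace(-2, 4, 13))
mapping.fit(X_train, y_train)

# The held-out R^2 indicates how well this layer predicts the responses;
# models whose layers score higher are considered better brain matches.
print("held-out prediction score:", mapping.score(X_test, y_test))
```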

Researchers at the Massachusetts Institute of Technology (MIT) have been working on computer models that imitate how our ears and brain process sound so that better hearing aids and other devices can be built in the future.

In their study, the researchers found that recent computer models developed from machine learning are getting closer to replicating the way our brain processes sounds.

The study highlights the role of training data and tasks in shaping accurate auditory cortical representations in DNNs.

DNNs show promise as models of human hearing

“What sets this study apart is it is the most comprehensive comparison of these kinds of models to the auditory system so far,” said Josh McDermott, an associate professor of brain and cognitive sciences at MIT and the study’s senior author.

In their study, the scientists examined nine existing computer models designed to process sound and built 14 more of their own. These models were trained to perform different hearing-related tasks, such as recognizing words or identifying speakers.
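
To make that setup concrete, the sketch below shows one shared audio backbone feeding two separate heads, one for word recognition and one for speaker identification. The architecture, layer sizes, and class counts are hypothetical and are not taken from the authors' models.

```python
# Hypothetical sketch of a multi-task audio DNN: a shared backbone over a
# spectrogram-like input, with separate heads for different hearing tasks.
# Optimizing one head or the other changes what the shared features encode.
import torch
import torch.nn as nn

class AudioTaskModel(nn.Module):
    def __init__(self, n_words=800, n_speakers=400):
        super().__init__()
        # Shared "ear-to-feature" backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads: word recognition and speaker identification.
        self.word_head = nn.Linear(64, n_words)
        self.speaker_head = nn.Linear(64, n_speakers)

    def forward(self, spectrogram):
        features = self.backbone(spectrogram)
        return self.word_head(features), self.speaker_head(features)

model = AudioTaskModel()
dummy_batch = torch.randn(8, 1, 128, 200)   # 8 fake spectrograms
word_logits, speaker_logits = model(dummy_batch)
print(word_logits.shape, speaker_logits.shape)  # (8, 800) and (8, 400)
```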

“Even though the model has seen the exact same training data and the architecture is the same, when you optimize for one particular task, you can see that it selectively explains specific tuning properties in the brain,” said Greta Tuckute, an MIT graduate student and lead author of the study.

When tested on the real-world sounds used in human brain studies, the models that best matched the brain were those trained on multiple tasks and exposed to background noise during training.

The study suggests this kind of training makes these models more similar to how the human brain processes sounds.
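
As a rough illustration of what training with background noise can look like in practice, the snippet below mixes a clean audio clip with noise at a randomly chosen signal-to-noise ratio, a common data-augmentation strategy. The function and numbers are hypothetical, not taken from the study.

```python
# Illustrative noise augmentation: mix clean audio with a noise clip at a
# target signal-to-noise ratio (in dB). Values below are made up.
import numpy as np

def mix_at_snr(clean, noise, snr_db, eps=1e-12):
    """Return the clean signal plus noise scaled to the requested SNR."""
    clean_power = np.mean(clean ** 2) + eps
    noise_power = np.mean(noise ** 2) + eps
    # Choose a scale so 10*log10(clean_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean_clip = rng.standard_normal(16000)   # 1 second of fake audio at 16 kHz
noise_clip = rng.standard_normal(16000)   # fake background noise
snr_db = rng.uniform(-10, 10)             # pick a random SNR per example
noisy_clip = mix_at_snr(clean_clip, noise_clip, snr_db)
```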

“These models that are built with machine learning are able to mediate behaviors on a scale that really wasn’t possible with previous types of models, and that has led to interest in whether or not the representations in the models might capture things that are happening in the brain,” added Tuckute.