Understanding how the brain makes sense of sound


Research News

Research may lead to improved treatments for people with impaired hearing and better sound interpretation by machines

March 15, 2019

For neuroscientists, human hearing is a process full of unanswered questions. How does the brain translate sounds — vibrations that travel through the air — into the patterns of neural activity that we recognize as speech, or laughter, or the footsteps of an approaching friend? And are those same neural processes universal, or do they vary across cultures?

With support from the National Science Foundation’s (NSF) Directorate for Social, Behavioral, and Economic Sciences (SBE), Massachusetts Institute of Technology professor Josh McDermott is leading a research team seeking to answer those questions. Their work lies at the intersection of psychology, neuroscience and engineering.

To identify whether there are aspects of auditory perception that are universal across cultures, McDermott and his team have traveled to places ranging from Boston to remote Amazonia, where they record sounds ranging from the clatter of a noisy diner to the stillness of a woodland path.

McDermott spends much of his time studying how the brain processes sound. A typical day finds him reviewing results from experiments involving human brain imaging, particularly functional magnetic resonance imaging, or fMRI. This line of research has revealed that the human auditory cortex contains neurons that respond selectively to music — not to speech or environmental sounds.

This work has shown that sound processing in the auditory cortex happens in stages, beginning with the analysis of low-level features such as loudness and pitch. Processing then proceeds step by step to higher-level features, such as the category of the sound source (for example, whether a sound is speech) and the identity of the person or thing producing it.
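
For readers curious what "low-level features" means in practice, the short Python sketch below computes two of them: a loudness proxy (root-mean-square amplitude) and a rough pitch estimate taken from the autocorrelation of a short audio frame. It is an illustration only, not the research team's analysis pipeline, and the function names and frequency limits are assumptions made for the example.

import numpy as np

def rms_loudness(frame):
    # Root-mean-square amplitude, a simple stand-in for loudness.
    return float(np.sqrt(np.mean(frame ** 2)))

def autocorrelation_pitch(frame, sample_rate, fmin=50.0, fmax=500.0):
    # Estimate the fundamental frequency (pitch) of a roughly periodic frame
    # by finding the strongest peak in its autocorrelation between fmin and fmax.
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sample_rate / fmax)   # shortest period considered
    lag_max = int(sample_rate / fmin)   # longest period considered
    best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / best_lag

# Example: a 200 Hz tone should yield a pitch estimate near 200 Hz.
sr = 16000
t = np.arange(0, 0.05, 1 / sr)
tone = 0.5 * np.sin(2 * np.pi * 200 * t)
print(rms_loudness(tone), autocorrelation_pitch(tone, sr))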

McDermott’s team has also developed an artificial neural network that can recognize speech and music. The network can identify words in speech and genres of music with the same accuracy as human listeners.
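
A minimal PyTorch sketch of what such a network could look like is shown below. It is a hypothetical architecture for illustration, not the team's published model: a shared convolutional trunk processes a spectrogram-like input, and two separate output heads report word and genre predictions. The class name, layer sizes, and numbers of word and genre categories are all placeholder assumptions.

import torch
import torch.nn as nn

class DualTaskAudioNet(nn.Module):
    def __init__(self, n_words=500, n_genres=40):
        super().__init__()
        # Shared early stages operate on a 1 x frequency x time input.
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        # Task-specific heads branch off the shared representation.
        self.word_head = nn.Linear(64 * 4 * 4, n_words)
        self.genre_head = nn.Linear(64 * 4 * 4, n_genres)

    def forward(self, x):
        shared = self.trunk(x)
        return self.word_head(shared), self.genre_head(shared)

# Example: a batch of 8 spectrogram-like inputs (128 frequency bins x 200 frames).
model = DualTaskAudioNet()
word_logits, genre_logits = model(torch.randn(8, 1, 128, 200))
print(word_logits.shape, genre_logits.shape)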

Because the researchers can “look under the hood” of the software to see how it handles information at every stage of processing, they can compare each stage with the functions of the auditory cortex, as imaged with fMRI. This comparison has shown that certain stages in auditory processing in the computer program are similar to those performed by the brain.
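
One common way to make that kind of comparison is sketched below; this is an assumed approach for illustration, not necessarily the exact procedure used in this work. The idea is to fit a linear mapping from a network stage's activations to each fMRI voxel's responses across a set of sounds, then ask how well that mapping predicts responses to held-out sounds. The array sizes and the random data here are placeholders.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sounds, n_units, n_voxels = 120, 1024, 50                    # placeholder sizes
layer_activations = rng.standard_normal((n_sounds, n_units))   # one network stage
voxel_responses = rng.standard_normal((n_sounds, n_voxels))    # fMRI measurements

X_train, X_test, y_train, y_test = train_test_split(
    layer_activations, voxel_responses, test_size=0.25, random_state=0)

mapping = Ridge(alpha=10.0).fit(X_train, y_train)
predicted = mapping.predict(X_test)

# Per-voxel correlation between predicted and measured responses on held-out
# sounds; higher values mean the network stage better accounts for that voxel.
scores = [np.corrcoef(predicted[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median prediction correlation across voxels: {np.median(scores):.2f}")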

The long-term goals of the research include improving treatments for those with hearing impairments and designing machines that can interpret sound with abilities that rival those of humans.

— Stanley Dambroski, (703) 292-7728, sdambros@nsf.gov

Investigators

Joshua McDermott

Related Institutions/Organizations

Massachusetts Institute of Technology

Related Awards

#1634050 Computational neuroimaging of human auditory cortex

Total Grants

$500,000

Source: NSF News
