Individuals With Hearing Loss Process Sound Differently In Noisy Environments

New research out of the labs at Purdue University provides another crucial piece of the puzzle of just how sound in general, and speech in particular, is detected and coded for transmission to the brain: this finding shows how temporal resolution and processing erode in the presence of noise, which has a bearing on hearing aid settings (more on that later). From The Hearing Review and Purdue University:

Purdue University researchers have found that background chatter causes the ears of those with hearing loss to work differently than those with normal hearing. The findings may lead to improved noise-reduction algorithms in hearing aids.

“When immersed in the noise, the neurons of the inner ear must work harder because they are spread too thin: It’s comparable to turning on a dozen television screens and asking someone to focus on one program. The result can be fuzzy because these neurons get distracted by other information,” explained Kenneth S. Henry, a postdoctoral researcher in Purdue’s Department of Speech, Language and Hearing Sciences, who collaborated with Michael G. Heinz, an associate professor of speech, language and hearing sciences. The work was published as a Brief Communication in Nature Neuroscience and was funded by the National Institutes of Health and the National Institute on Deafness and Other Communication Disorders.

“Previous studies on how the inner ear processes sound have failed to find connections between hearing impairment and degraded temporal coding in auditory nerve fibers, which transmit messages from the inner ear to the brain,” said Heinz, who studies auditory neuroscience. “The difference is that such earlier studies were done in quiet environments, but when the same tests are conducted in a noisy environment, there is a physical difference in how auditory nerve fibers respond to sound…” [Emphasis added: DLS]

“The study confirmed that there is essentially no change, even for those with hearing loss, in terms of how the cochlear neurons are processing the tones in quiet, but once noise was added, we did observe a diminished coding of the temporal structure,” Henry said. [Emphasis added: DLS]

The researchers focused on coding of the temporal fine structure of sound, which involves synchrony of neural discharges to relatively fast fluctuations in sound pressure. Both coding of fast fine structure information and coding of slower envelope fluctuations are critical to perception of speech in everyday listening environments.

“When noise was part of the study, there was a reduction in how synchronized the neurons were with the temporal fine structure,” Henry said. [Emphasis added: DLS]

We’ll pause here, because loss of neural synchrony is a hallmark of auditory neuropathy spectrum disorder (ANSD), which distorts the perception of sound and adversely impacts speech discrimination in quiet, and especially in noise. For a truly frightening simulation of what dys-synchrony sounds like to the sufferer, please listen to this sequence of profound, severe, moderate, mild, and then no ANSD samples (the WMA file opens in your media player).
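While we’re paused: the “synchrony” at issue is conventionally quantified as vector strength, which runs from 1.0 (every neural discharge locked to the same stimulus phase) down to 0.0 (discharge times unrelated to phase). Here is a minimal sketch of ours in Python — not the study’s analysis code, and the 0.2 ms timing jitter is an arbitrary stand-in for the disruption caused by noise:

```python
import numpy as np

# Vector strength: 1.0 = spikes perfectly phase-locked to the stimulus,
# 0.0 = spike times unrelated to stimulus phase.
def vector_strength(spike_times, freq_hz):
    phases = 2 * np.pi * freq_hz * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(0)
f = 1000.0                                        # 1 kHz tone

clean = np.arange(200) / f                        # one spike per cycle, same phase
jittered = clean + rng.normal(0.0, 0.2e-3, 200)   # 0.2 ms timing jitter (assumed)

vs_clean = vector_strength(clean, f)              # near 1.0: tight phase locking
vs_noisy = vector_strength(jittered, f)           # well below 1.0: degraded synchrony
```

Even this small amount of jitter roughly halves the vector strength, which is the flavor of degradation Henry is describing.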

Continuing with The Hearing Review article:

The auditory system filters sound into a number of channels, each tuned to a different frequency. In a normal system, the channels are sharp and focused, but they become broader and more scattered with hearing impairment.
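To make the cost of that broadening concrete, here is a toy calculation of ours (the 130 Hz figure roughly approximates a normal auditory filter bandwidth at 1 kHz; the 400 Hz broadened channel is an illustrative assumption): a 1 kHz tone in white noise is passed through a narrow and a broad bandpass “channel”. The tone gets through either way, but the broader channel admits proportionally more noise, lowering the within-channel S/N.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000
rng = np.random.default_rng(1)
t = np.arange(FS) / FS                     # 1 second of signal
tone = np.sin(2 * np.pi * 1000 * t)        # the signal we care about
noise = rng.normal(0.0, 1.0, FS)           # white background noise

def channel_snr_db(bw_hz, fc=1000.0):
    """Within-channel S/N (dB) for a bandpass 'auditory filter' at fc."""
    sos = butter(4, [fc - bw_hz / 2, fc + bw_hz / 2],
                 btype="bandpass", fs=FS, output="sos")
    s = sosfiltfilt(sos, tone)             # tone passes: it sits at fc
    n = sosfiltfilt(sos, noise)            # noise power grows with bandwidth
    return float(10 * np.log10(np.mean(s**2) / np.mean(n**2)))

narrow = channel_snr_db(130)   # roughly a normal auditory filter at 1 kHz
broad = channel_snr_db(400)    # broadened channel (illustrative)
```

Tripling the channel bandwidth costs about 5 dB of within-channel S/N here, before any neural effects are even considered.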

“Now that we know a major physiological effect from hearing loss is that the auditory nerve fibers are particularly distracted by background noise, this has implications for research and clinical settings,” said Heinz, who also has a joint appointment in biomedical engineering. “For example, most audiology testing, whether it is lab research or hearing-loss screenings, takes place in a quiet environment, but testing noisy, more realistic [and we add: reverberant] backgrounds is necessary to truly understand how the ear is processing sound. This also could influence the design of hearing aids and assistive technologies.

“Designers are often working on improving the temporal coding of the signal, but this research suggests that a primary focus should be on improving noise-reduction algorithms so the hearing aid provides a clean signal to the auditory nerve. Other ways people can reduce background noise include induction-loop hearing systems, which are used in churches and other public settings to provide a cleaner signal by avoiding room noise.”

Next, the researchers plan to expand the study to focus on more real-world noises and coding of slower envelope information in sound…

 

Discussion:

1) We are looking forward to the results on speech envelope coding, as this has an impact on AGC attack & release times: more specifically, the fast speeds of wide dynamic range compression (WDRC) have a tendency to “squash” the speech envelope. Already, we have identified two instances where long (5000 ms) attack & release times are needed: when amplifying for a patient with ANSD, and when working memory is poor (cognition issues). If speech envelope preservation indeed improves speech discrimination in the more general case of sensorineural hearing loss, this could call into question the whole concept of WDRC with its fast attack & release times (or, alternately, argue for a more complex, nonlinear control vector for the AGC characteristics);
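To see the “squashing” numerically, here is a toy compressor sketch of ours: a one-pole attack/release level detector feeding a 3:1 static curve. The threshold, ratio, and 100 ms “syllable” timing are arbitrary assumptions, not any hearing aid’s actual algorithm.

```python
import numpy as np

FS = 16000  # envelope sample rate, Hz

def compress(env_db, attack_ms, release_ms, ratio=3.0, threshold_db=-20.0):
    """Output level (dB) from a one-pole attack/release level detector
    driving a static compression curve (gain reduction above threshold)."""
    a_att = np.exp(-1.0 / (FS * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (FS * release_ms / 1000.0))
    level = env_db[0]
    out = np.empty_like(env_db)
    for i, x in enumerate(env_db):
        coef = a_att if x > level else a_rel     # attack up, release down
        level = coef * level + (1.0 - coef) * x  # smoothed level estimate
        over = max(level - threshold_db, 0.0)    # dB above threshold
        out[i] = x - over * (1.0 - 1.0 / ratio)  # apply gain reduction
    return out

# Input envelope: 100 ms "syllables" alternating between -10 and -20 dB
loud = (np.arange(FS) // 1600) % 2 == 0
env_db = np.where(loud, -10.0, -20.0)

fast = compress(env_db, attack_ms=5.0, release_ms=50.0)
slow = compress(env_db, attack_ms=5000.0, release_ms=5000.0)

def depth(out_db):
    """Surviving modulation depth: median loud level minus median soft level."""
    return float(np.median(out_db[loud]) - np.median(out_db[~loud]))

depth_fast, depth_slow = depth(fast), depth(slow)
```

With the fast time constants, the 10 dB input swing is substantially squashed at the output; with the 5000 ms constants, the gain is essentially frozen and the envelope passes through intact.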

2) These findings have an impact on predicting speech scores using the Speech Intelligibility Index (SII; also called the Articulation Index, AI), such as used by Mead Killion’s Count The Dots audiogram method:

Articulation Index AI (or Speech Intelligibility Index SII) - to - Speech Scores graph


For decades it has been a given that there is a 1:1 correlation between the signal-to-noise ratio (top row) and SII (bottom row), i.e. for each 3 dB reduction in S/N there is a corresponding ten-percentage-point drop in SII. This holds true for people with normal hearing; but when hearing is impaired, this research points to the numbers in the top row shifting to the left: instead of needing +9 dB S/N to reach 50% SII, it may in fact take +12 dB (or even more) S/N when sensorineural hearing loss is present, with the net effect of shifting the intelligibility curves to the right. There is also the possibility that the 1:1 correlation itself may no longer hold, with a “spreading” of the S/N values, i.e. more than a 3 dB increment for each 10% difference in SII.

[It's also worth noting that when auditory neuropathy spectrum disorder (ANSD) is in play, the curves shift downward, which reflects the poor speech discrimination even in quiet.]
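The rule of thumb in this section, plus the two hypothesized departures from it (the rightward shift and the “spreading”), fit in a few lines. This is our sketch; the +3 dB shift and the 4 dB-per-10% slope below are illustrative guesses, not fitted values from the study:

```python
def sii_percent(snr_db, shift_db=0.0, db_per_10pct=3.0):
    """Rule-of-thumb SII (%): 50% at +9 dB S/N for normal hearing, moving
    10 percentage points per `db_per_10pct` dB of S/N. `shift_db` slides
    the curve rightward (the hypothesized sensorineural-loss effect);
    a larger `db_per_10pct` models the hypothesized "spreading"."""
    sii = 50.0 + (snr_db - 9.0 - shift_db) * (10.0 / db_per_10pct)
    return min(max(sii, 0.0), 100.0)       # SII is bounded at 0% and 100%

normal_50 = sii_percent(9.0)                    # ~50: the classic anchor point
impaired = sii_percent(9.0, shift_db=3.0)       # ~40: +9 dB no longer buys 50%
impaired_50 = sii_percent(12.0, shift_db=3.0)   # ~50: now it takes +12 dB
spread = sii_percent(15.0, shift_db=3.0, db_per_10pct=4.0)  # ~57.5, not 60
```

The last line shows the “spreading” case: with 4 dB needed per 10 points, the extra 3 dB of S/N buys fewer percentage points of intelligibility than the classic rule predicts.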

 

Footnotes:

All that is old is new again: Back in the mid-’70s in high school, we used Reference Data for Radio Engineers as our textbook for the Ham Radio Club, where, among many other things, we were introduced to the Articulation Index. Here it is:

Articulation Index chart from Reference Data for Radio Engineers, Electroacoustics (Chapter 37 Page 25);
Copyright ©1975 Howard W. Sams & Co.
Library of Congress Card Catalog Number: 75-28060

Permalink: http://thehearingblog.com/archives/1121 | Short link: http://TinyURL.com/blog1121


About the author

Dan Schwartz

Electrical Engineer, via Georgia Tech

4 Comments

  1. Jeffrey Swartz
    September 30, 2012 at 4:05 pm

    I find this very interesting; audis’ testing environments need to be in a more natural setting. “Now that we know a major physiological effect from hearing loss is that the auditory nerve fibers are particularly distracted by background noise, this has implications for research and clinical settings,” said Heinz, who also has a joint appointment in biomedical engineering. “For example, most audiology testing, whether it is lab research or hearing-loss screenings, takes place in a quiet environment, but testing noisy, more realistic backgrounds is necessary to truly understand how the ear is processing sound. This also could influence the design of hearing aids and assistive technologies.” (http://www.hearingreview.com/news/2012-09-17_03.asp)


  2. Hearing Aids Sydney
    November 2, 2012 at 6:12 am

    Your post plays an important role in the field of our business. I also get a lot of important information. Thanks for sharing this type of blog. :-)
    Hearing Aids Sydney


  3. Art Noxon
    April 17, 2013 at 11:44 am

    It’s a relief to finally see physical evidence being reported which confirms an experience so commonly known. Adding sound fusing, up to 30ms delayed, early treble range reflections to the direct signal improves the S/N ratio for intelligibility. I’m wondering if the sound fusion amplification effect (correlation detection) has been or can be physically measured. If so, is this amplification effect also compromised by or does it remain insensitive to variations in the background noise level.

