Purpose: The objective of the current study was to describe outcomes on physiological and perceptual measures of auditory function in human listeners with and without a history of recreational firearm noise exposure related to hunting. Design: This study assessed the effects of hunting-related recreational firearm noise exposure on audiometric thresholds, oto-acoustic emissions (OAEs), brainstem neural representation of fundamental frequency (F0) in frequency following responses (FFRs), tonal middle-ear muscle reflex (MEMR) thresholds, and behavioral tests of auditory processing in 20 young adults with normal hearing sensitivity. Results: Performance on both physiological (FFR, MEMR) and perceptual (behavioral auditory processing tests) measures of auditory function was largely similar across participants, regardless of hunting-related recreational noise exposure. On both behavioral and neural measures that included multiple listening conditions, performance degraded as the difficulty of the listening condition increased for both nonhunter and hunter participants. A right-ear advantage was observed in tests of dichotic listening for both nonhunter and hunter participants. Conclusion: The null results in the current study could reflect an absence of cochlear synaptopathy in the participating cohort, variability related to participant characteristics and/or test protocols, or an insensitivity of the selected physiological and behavioral auditory measures to noise-induced synaptopathy.
Keywords: auditory processing, cochlear synaptopathy, frequency following response, middle-ear muscle reflex, recreational firearm noise exposure
Continuous and high levels of noise exposure can cause metabolic reactions and mechanical damage in cochlear structures[1],[2],[3],[4] that can result in a range of short- and long-term auditory health consequences in human listeners.[1] Such auditory consequences include temporary threshold shifts,[5],[6] permanent threshold shifts,[5],[6],[7],[8],[9] tinnitus,[10],[11],[12],[13],[14],[15] and suprathreshold speech perception deficits in human listeners.[16] Noise exposure can occur at the workplace, during recreational pursuits, and from environmental sources. The detrimental effects of noise exposure have typically been considered in the context of occupational noise exposure, with federal agencies (e.g., Occupational Safety and Health Administration [OSHA])[17] developing and implementing strict guidelines to regulate workplace noise exposure. However, a number of recreational activities are also associated with excessive noise levels, including but not limited to hunting, concerts, motorsports, and use of personal audio systems (for detailed reviews, see Keppler et al.,[3] Meinke et al.,[4] Neitzel & Fligor[18]). Firearm exposure during noisy recreational activities poses the greatest risk for permanent hearing loss.[19] In recent years, a growing body of research has focused on various aspects related to recreational firearm use, including demographics and statistics related to firearm users, acoustic parameters that define the noise from firearm discharge, auditory risks from firearm exposure, pathological changes to the auditory system following firearm exposure and their audiological manifestations, and types and use of hearing protection among firearm users (see Meinke et al.[4] for a detailed review).
Firearm use is common during recreational pursuits such as hunting, target shooting, reenactment of historical events, and scouting.[4] Per the Small Arms Survey,[20] close to 857 million firearms are owned by civilians around the world, with 120.5 firearms per 100 civilians in the United States. Survey data querying reasons for firearm ownership in the United States from 1972 to 2010 have shown that hunting is one of the primary motivations for owning a firearm.[21] Based on the 2016 National Survey of Fishing, Hunting, and Wildlife-Associated Recreation conducted by the US Fish and Wildlife Services,[22] 8% of males and 1% of females above the age of 16 years engaged in hunting in 2016 in the United States. A total of 11.5 million individuals reported hunting, of which 90% were males and 10% were females. Additionally, the survey estimated that 1.4 million 6- to 15 year olds in the United States engaged in hunting. More recently, the National Hunting License Data released by the United States Fish & Wildlife Service[23] reported as many as 15.2 million paid hunting license owners in the United States in 2021.
When considering recreational pastimes involving firearm use, hunting, specifically waterfowl hunting, demonstrates the most significant risk for noise-induced hearing loss (NIHL).[24] Firearm discharge results in brief duration, high-frequency impulse sounds with maximum sound pressure level (SPL) values ranging from 140 to 175 dB; most recreational firearms produce peak SPLs between 150 and 165 dB (see Meinke et al.[4] for a detailed review). Currently, noise generated by recreational firearms is not subject to any federal regulations. However, the maximum SPLs associated with these devices consistently exceed the international standards for safe listening at venues and events (100 dB SPL) recommended by the World Health Organization (WHO),[25] and workplace noise exposure limits mandated by OSHA[17] and the National Institute for Occupational Safety & Health (NIOSH)[26] (140 dB SPL). Further, the auditory system can sustain greater damage from impulse noise as compared to continuous noise, recovery from which may be only partial and require an extended time.[27],[28],[29],[30] Meinke et al.[4] also provide a detailed review of the characteristic auditory outcomes of recreational firearm noise exposure, which include bilateral asymmetric permanent high-frequency hearing loss[31],[32],[33],[34],[35],[36],[37] and tinnitus.[38],[39],[40] Impulse noise can also result in temporary threshold shifts (TTS).[41] Modeling data based on TTS measured 2 minutes following impulse noise exposure in chinchilla auditory brainstem responses predict a 43-hour recovery period for a 25 dB TTS increasing to a 38-day recovery period for a 50 dB TTS in human listeners.[27]
Although hearing may ostensibly recover when the TTS recovers, evidence from animal physiology studies accrued over the past decade suggests this might not always be the case. In their seminal 2009 paper, Kujawa & Liberman[42] demonstrated that mice exposed to short durations of continuous noise experienced and recovered from a TTS with no damage to cochlear outer hair cells (OHCs), but presented with a permanent and significant loss of synapses between inner hair cells and their innervating auditory nerve fibers (“cochlear synaptopathy”). Kujawa and Liberman’s mouse data have subsequently been replicated in several animal studies (see Kobel et al.[43] & Hickman et al.[44] for detailed reviews). Such synaptopathic changes have been noted following impulse noise exposure as well.[44] It is thought that such noise-induced cochlear synaptopathy has the potential to cause deficits in suprathreshold auditory processing in human listeners.[16] Whether cochlear synaptopathy occurs in noise-exposed human listeners and can account for any auditory perceptual deficits in this population is a matter of much debate with mixed findings reported in the literature.[45],[46],[47],[48],[49],[50],[51],[52],[53],[54],[55],[56],[57],[58],[59],[60] A significant challenge associated with identifying cochlear synaptopathy in human listeners is that it has minimal-to-no effect on auditory thresholds, and is hence not observed on the gold standard of hearing testing, the audiogram. Direct verification of cochlear synaptopathy can only occur through an evaluation of synaptic ribbon count during postmortem analysis of the temporal bone. However, several noninvasive physiological measurements of the auditory system, such as the auditory brainstem response (ABR) wave I amplitude, the middle-ear muscle reflex (MEMR), and the frequency following response (FFR) have emerged as potential indirect measures of cochlear synaptopathy in human listeners. These physiological measurements have been compared with self-reported noise exposure and/or performance on suprathreshold perceptual tasks such as amplitude modulation (AM) detection, word recognition scores in quiet and background noise, speech identification in noise, and time-compressed speech recognition (for a detailed review, see Bramhall et al.[61]).
As reductions in ABR wave I amplitudes have been observed in animals with decreased synaptic ribbon counts consequent to short-term noise exposure,[42],[62],[63] the majority of studies in human listeners have also utilized the ABR wave I amplitude as an indirect metric of noise-induced synaptopathy.[46],[47],[48],[49],[50],[51],[52],[53],[54],[55],[56],[57],[58],[59] However, changes in ABR wave I amplitude could arise from inner hair cell or auditory nerve damage that may be independent of synaptic count and function[65]; they could also reflect a steepened ABR wave I amplitude-intensity function due to OHC loss.[66] Further, ABR wave I is primarily generated by auditory nerve fibers with low thresholds and high spontaneous rates.[67] On the other hand, it is the low spontaneous rate/high threshold auditory nerve fibers that have been implicated in synaptopathy in animal models. For these reasons, Bramhall et al.[65] argue that ABR wave I amplitude may not be the index of choice for identifying synaptopathy in human listeners.
The acoustic middle-ear muscle reflex (MEMR) refers to a reflexive contraction of the stapedius muscle in response to a high-intensity signal, which decreases the compliance of the middle-ear system. The MEMR threshold is the lowest signal intensity that elicits this reflex. In contrast to the ABR wave I, the MEMR arises predominantly from firing of the low spontaneous rate/high threshold auditory nerve fibers[68],[69] that are implicated in synaptopathy. Mouse data suggest that the MEMR threshold is strongly correlated with synaptopathy, and to a larger extent than ABR wave I amplitude.[70],[71] Additionally, as highlighted by Bramhall et al.,[72] the MEMR takes a shorter time to administer than the ABR and, unlike the ABR, is part of routine hearing testing. Elevated MEMR thresholds[73] and reduced MEMR amplitudes[73],[74],[75] have been observed in noise-exposed listeners. MEMR metrics have also been found to correlate with tinnitus and speech perception,[75],[76],[77] which are commonly reported perceptual deficits in synaptopathy.[16] However, Guest et al.[78] found no relationship between MEMR thresholds and speech perception in noise or self-reported lifetime noise exposure. Although the evidence is not unambiguous, its preponderance suggests that the MEMR is a more robust and time-efficient index of cochlear synaptopathy in human listeners than the ABR wave I amplitude.
A third physiological metric that has received considerable attention as a possible indicator of synaptopathy in human listeners is the FFR. The scalp-recorded FFR is generated by sustained phase-locked activity occurring in a neural population in the auditory system and may be elicited by a range of auditory stimuli (for detailed reviews, see Chandrasekaran & Kraus;[79] Krishnan;[80] Krizman & Kraus[81]). The neural activity underlying the FFR reflects phase-locking to the envelope and fine structure elements of the stimulus.[82] The phase-locked response to the stimulus envelope, obtained by summing opposite-polarity FFRs, is referred to as the envelope FFR or the envelope following response (EFR). The phase-locked response to stimulus harmonics is enhanced in the spectral FFR, derived by subtracting opposite-polarity FFRs. Lower EFR amplitudes have been measured in mice exhibiting synaptopathic changes due to aging[83] and noise exposure.[84] The increased sensitivity of the EFR to synaptopathy may be attributed in part to the contributions of the low spontaneous rate/high threshold auditory nerve fibers that are thought to dominate EFR generation at low modulation depths.[45],[85] Further, the EFR is typically elicited by relatively high-intensity stimuli (70–90 dB SPL) and is likely unaffected by any OHC damage that may occur following noise exposure in individuals with normal hearing sensitivity.[86] However, findings in the literature about the sensitivity of the EFR as an index of synaptopathy in human listeners are mixed. For example, Bharadwaj et al.[45] found that EFR amplitudes decreased at a faster rate with decreasing modulation depth in individuals with noise exposure. Similarly, Bramhall et al.[46] reported reduced EFR amplitudes in Veterans with significant noise exposure as compared to non-Veterans. On the other hand, some studies have reported weak or no links between EFR magnitude and recreational noise exposure in young listeners.[49],[50],[53]
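For readers less familiar with the polarity-based decomposition described above, the following minimal sketch (Python; array names are ours and purely illustrative, not code from any cited study) shows how the envelope and spectral FFR components are conventionally derived from averaged responses to opposite stimulus polarities.

```python
import numpy as np

def derive_ffr_components(ffr_condensation, ffr_rarefaction):
    """Illustrative sketch: derive envelope (EFR) and spectral FFR components
    from averaged responses to opposite-polarity presentations of the same
    stimulus. Both inputs are assumed to be 1-D arrays of equal length sampled
    at the same rate (hypothetical data, for illustration only)."""
    pos = np.asarray(ffr_condensation, dtype=float)
    neg = np.asarray(ffr_rarefaction, dtype=float)
    # Summing opposite-polarity responses emphasizes phase-locking to the envelope (EFR).
    envelope_ffr = (pos + neg) / 2.0
    # Subtracting emphasizes phase-locking to the stimulus fine structure/harmonics.
    spectral_ffr = (pos - neg) / 2.0
    return envelope_ffr, spectral_ffr
```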
Several of the synaptopathy studies cited above examine physiological and/or perceptual performance in human listeners while taking into consideration their lifetime noise exposure but do not delineate the relative contributions of recreational and occupational noise exposure. When recreational noise exposure is the focus, the recreational activities are limited to attendance at concerts, loud parties, sporting events, use of earphones, and other forms of music exposure.[45],[52] The literature on physiological and behavioral correlates of auditory processing in listeners with normal hearing sensitivity involved in recreational firearm use remains limited. To our knowledge, there is only one study published to date[46] that discusses cochlear synaptopathy in listeners with recreational firearm use. Although the primary focus of Bramhall et al.[46] was to describe the effects of noise exposure on Veterans, they found that ABR wave I amplitudes were attenuated in non-Veterans with a history of nonoccupational/recreational firearm use as compared to non-Veterans who reported no firearm use. However, Bramhall and colleagues did not measure the FFR and the MEMR in this study, which are thought to be more robust indicators of synaptopathy as compared to the ABR. Although associations between FFR and MEMR metrics, noise exposure history, and auditory perception remain ambiguous in human listeners, it is worth noting that both EFR[65] and MEMR[72] magnitudes have been found to be reduced in amplitude in Veterans with high noise exposure as compared to non-Veteran controls with no firearm exposure. All but two of the participating Veterans in Bramhall et al.[65] and Bramhall et al.[72] reported firearms training. Additionally, the only difference in noise exposure between the non-Veteran groups in Bramhall et al.[46] was the use of recreational firearms. Overall, the data presented by Bramhall and colleagues suggest that firearm exposure may be a risk factor for synaptopathy. Additionally, though Bramhall et al.[46] provide valuable information on the association between self-reported noise exposure and physiological metrics in non-Veterans with recreational firearm use, the study does not describe how these listeners fare on perceptual tasks. Further, although Bramhall and colleagues included non-Veteran listeners with recreational firearm experience in their study, these participants were not specifically queried on whether their firearm use was related to hunting. Here, we aim to describe FFR and MEMR measurements, and performance on commercially available behavioral tests of central auditory processing in listeners with normal hearing sensitivity with and without firearm exposure related to hunting. As far as we know, no study to date has examined this question, which is of central interest given the number of individuals engaged in hunting, the established effects of noise exposure on the auditory system, and the emerging and evolving information on noise-induced cochlear synaptopathy in human listeners. Ultimately, this information may benefit recreational firearm users in the community and clinical audiologists serving them, in terms of awareness, monitoring, counseling, and (re)habilitation.
Materials and methods
Participants
A total of 20 adult participants (age range: 20 to 28 years) with self-reported normal hearing were recruited for the study. Participants were divided into two groups (non-hunters and hunters) based on self-report of recreational firearm use through hunting during the year immediately preceding data collection. Hunter participants consisted of 10 individuals with a history of noise exposure through recreational firearm use while hunting (males = 9, females = 1; age range: 20 to 23 years; M: 21.60; SD: 1.07). The nonhunter group also consisted of 10 individuals, but with no hunting-associated recreational firearm use (males = 5, females = 5; age range: 21 to 28 years; M: 23.40; SD: 3.10). Participant age and sex information is provided in [TABLE 1]. Participation was voluntary and participants were recruited through word of mouth. Inclusion was based on the following criteria: an unremarkable otologic history (no history of ear surgeries, ear infections, or ototoxicity), normal hearing sensitivity defined by pure-tone audiometric thresholds ≤ 20 dB HL at octave frequencies from .25 to 8 kHz, air-bone gaps of ≤ 10 dB, normal middle-ear compliance and pressure as evidenced by Jerger type A tympanograms, native speaker of English, and history (or lack thereof) of noise exposure due to recreational firearm use from hunting. All participants were paid for their participation and provided informed consent in compliance with a protocol approved by the Institutional Review Board at the institution where the study was conducted.
TABLE 1 Age, LAeq, Pure-Tone Average of .5, 1, 2 kHz (PTA), Speech Recognition Threshold (SRT), and Word Recognition Scores (WRS) for Nonhunter and Hunter Participants.
Procedures
Testing was performed over two 2-hour sessions. During the first session, all participants completed a detailed case history, the Noise Exposure Questionnaire (NEQ[87]), a comprehensive diagnostic audiologic evaluation, and a battery of behavioral auditory processing tests. The second session consisted of FFR testing. All testing was performed in sound-treated audiometric booths. All equipment was calibrated to American National Standards Institute (ANSI) standards.[88]
Audiologic History, Noise Exposure Questionnaire, and Audiologic Evaluation
All participants were asked to provide a detailed audiologic history using the institution’s audiology clinic case history form (Appendix A) as well as a comprehensive noise exposure history using the NEQ[87] prior to testing. Annual noise exposure scores (LAeq) were derived using responses on the NEQ to quantify noise exposure for each participant.[87] Immittance testing was conducted using a GSI Tympstar immittance bridge and tympanometry was performed using a 226 Hz probe tone. Pure-tone air conduction audiometry was performed using the GSI Audiostar audiometer via Eartone 3A inserts in octave bands from .25 to 1 kHz, and half octave bands from 2 to 8 kHz, bilaterally. Pure-tone bone conduction testing was performed using a mastoid placement Radioear B71 bone oscillator from .5 to 4 kHz in octave bands, bilaterally. Responses were obtained via a push button. Hearing sensitivity was considered normal if thresholds were ≤ 20 dB HL at all frequencies in both ears. Speech audiometry (Speech Recognition Threshold [SRT] and Word Recognition Score [WRS] testing) was conducted using a Sony CD player. SRTs were obtained via a recorded male voice presenting spondee words in each ear. WRS were obtained via a monitored recorded male voice using the Northwestern University Auditory Test No. 6 (NU-6) word list at 40 dB SL re: SRT, in each ear. Transient Evoked Oto-acoustic Emissions (TEOAEs) and Distortion Product Oto-acoustic Emissions (DPOAEs) were recorded bilaterally using the ILOv6 software (Otodynamics, Ltd.) to characterize cochlear OHC function. TEOAEs were recorded from 1 to 4 kHz using 260 sweeps. DPOAEs were recorded from 1 to 6 kHz with an L1/L2 combination of 65/55 dB SPL and an f2/f1 ratio of 1.22. Signal-to-noise ratios (SNRs) of 6 dB or greater were considered acceptable for oto-acoustic emission testing.
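As an illustration of the DPOAE stimulus arithmetic and the 6 dB acceptance criterion described above, the sketch below computes the primary frequencies from the f2/f1 ratio and applies the SNR check. The function names and example values are ours, and the 2f1 − f2 component is assumed here as the conventionally analyzed distortion product; that detail is not stated in the text.

```python
def dpoae_frequencies(f2_hz, ratio=1.22):
    """Given the f2 primary and the f2/f1 ratio used above, return f1 and the
    2f1 - f2 distortion product frequency (illustrative only)."""
    f1_hz = f2_hz / ratio
    dp_hz = 2 * f1_hz - f2_hz
    return f1_hz, dp_hz

def oae_present(emission_db_spl, noise_floor_db_spl, criterion_db=6.0):
    """Apply the >= 6 dB SNR acceptance criterion used for TEOAE/DPOAE responses."""
    return (emission_db_spl - noise_floor_db_spl) >= criterion_db

# Hypothetical example: f2 = 4 kHz primary, emission at -2 dB SPL, noise floor at -10 dB SPL
f1, dp = dpoae_frequencies(4000.0)
print(f"f1 = {f1:.0f} Hz, 2f1 - f2 = {dp:.0f} Hz, present: {oae_present(-2.0, -10.0)}")
```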
Frequency Following Response (FFR) testing
Stimuli
Previous synaptopathy studies using the FFR have utilized “transposed” tones, consisting of a high-frequency tonal carrier (usually around 4 kHz) amplitude modulated by a lower frequency (e.g., 100 Hz). It is thought that FFRs obtained to such transposed tones would be ideally suited to capture the contribution of low spontaneous rate/high threshold fibers in the 3 to 6 kHz region, thereby reflecting temporal coding deficits in the frequency region known to be most affected by noise exposure. In the current study, a natural version of the vowel /u/ (F0 = 122.1 Hz, F1 = 287–338 Hz, F2 = 1051–1477 Hz), produced by an adult male speaker, was selected from the UT Dallas Vowel database.[88],[89] FFRs were measured in response to the vowel presented in a quiet (“clean”) listening condition, as well as in three SNR conditions (+5 dB, 0 dB, and −5 dB) with competing noise (male four-talker babble). FFRs to speech in noise were selected as the experimental measure in the current study for the following reasons: 1) speech is an ecologically relevant, amplitude-modulated signal with modulations occurring at the fundamental frequency; 2) the vowel fundamental frequency (F0) (122.1 Hz) occurs below the upper bound of neural phase-locking in the brainstem (1500 Hz);[82],[90],[91] 3) the vowel F0 (122.1 Hz) was well above 80 Hz, below which the FFR is thought to be dominated by cortical rather than subcortical generators;[92] 4) speech-evoked FFRs have been effectively utilized in the past to describe brainstem phase-locking to both F0 and stimulus harmonics aligning with formant frequencies; both F0 and formant frequencies are critical for speech perception in challenging listening situations such as background noise and reverberation, a function predicted to be affected in listeners with synaptopathy; 5) both intact and degraded versions of this vowel have been successfully used to elicit FFRs in listeners with normal hearing sensitivity previously;[90],[93],[94],[95] 6) the signal level of 70 dB SPL ensured contributions to the response from low spontaneous rate/high threshold fibers; and 7) valid comparisons may be performed between behavioral and neural metrics if both measurements involve obtaining responses to speech in noise stimuli.
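The sketch below illustrates, in generic terms, how a target SNR between a speech token and multi-talker babble can be realized by RMS-based scaling. It is an assumption-laden illustration (array names and sampling assumptions are hypothetical) and does not reproduce the presentation method described below, in which the babble was looped continuously by the Smart-EP module rather than pre-mixed with the vowel.

```python
import numpy as np

def mix_at_snr(vowel, babble, snr_db):
    """Scale the babble so that the vowel-to-babble ratio (based on RMS levels)
    equals the target SNR in dB, then sum the signals. Assumes both arrays share
    the same sampling rate and the babble is at least as long as the vowel."""
    vowel = np.asarray(vowel, dtype=float)
    babble = np.asarray(babble, dtype=float)[: vowel.size]
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    # Gain so that 20*log10(rms(vowel) / rms(gain * babble)) = snr_db
    gain = rms(vowel) / (rms(babble) * 10 ** (snr_db / 20.0))
    return vowel + gain * babble

# Example: generate +5, 0, and -5 dB SNR mixtures from hypothetical arrays
# mixed = {snr: mix_at_snr(vowel, babble, snr) for snr in (5, 0, -5)}
```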
All stimuli (vowel in quiet, or vowel + four-talker babble) were presented monaurally to the right ear via a magnetically shielded insert earphone (Etymotic ER-3A; 6-8 kHz bandwidth) at an intensity of 70 dB SPL and a rate of 3.13/s. Stimulus duration was 265 ms and all stimuli had 10 millisecond on-off ramps. The vowel was presented in alternating polarity; however, the four-talker babble was presented in rarefaction polarity. Use of rarefaction polarity for the four-talker babble was necessitated because the babble was played using the “continuous loop” feature on the Smart-EP module, which does not permit use of alternating polarity. The “continuous loop” feature was selected so that the babble was not synchronized in time with the vowel stimulus.[96] Two thousand sweeps were presented per trial. Each trial was repeated at least once to ensure replicability of the responses. Responses were amplified by a factor of 200,000 and a band-pass filter (70–3000 Hz) was applied to remove artifact and myogenic background noise from the response. The Smart-EP module of the Intelligent Hearing Systems (IHS, Miami, FL) platform was used for signal presentation and data acquisition. Stimulus presentation order was randomized within and across all participants.
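A minimal sketch of offline processing paralleling the acquisition settings above (sweep averaging followed by 70–3000 Hz band-pass filtering) follows. The sampling rate and array layout are assumptions; in the study, acquisition and filtering were performed by the IHS system, not by this code.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def average_and_filter_sweeps(sweeps, fs_hz, band=(70.0, 3000.0)):
    """Average individual FFR sweeps and apply a zero-phase 70-3000 Hz band-pass
    filter. `sweeps` is assumed to be a 2-D array (n_sweeps x n_samples) and
    `fs_hz` the EEG sampling rate (hypothetical; must exceed twice the upper
    cutoff for the filter design to be valid)."""
    averaged = np.mean(np.asarray(sweeps, dtype=float), axis=0)
    sos = butter(4, band, btype="bandpass", fs=fs_hz, output="sos")
    return sosfiltfilt(sos, averaged)
```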
FFR Protocol
Participants were seated in a comfortable recliner situated in an acoustically and electromagnetically shielded booth. Participants were directed to keep their eyes closed, stay relaxed, and avoid any extraneous body movements to ensure minimal response contamination due to movement artifacts. Most participants fell asleep during FFR acquisition. FFRs were recorded using a two-channel vertical electrode array (Channel 1: Fz [noninverting], A1 & A2 [linked inverting electrodes]; Channel 2: Fz [noninverting], C7 [inverting]; Fpz [common ground]). FFRs measured using such an electrode configuration are considered to reflect predominantly rostral brainstem activity, as the vertical electrode array is aligned similarly to the vertical dipole of the brainstem.[97],[98] In order to minimize response contamination by the cochlear microphonic and stimulus artifact, stimuli were delivered in alternating polarity through transducers that were electromagnetically shielded and separated from electrodes to the extent possible.[99]
FFR Analysis
FFR data measured in the two vertical electrode channels (Fz-linked A1 & A2 and Fz-C7) used in the current study have been previously shown to be strongly correlated with no statistically significant differences.[95] Given this, the current FFR time-waveforms were collapsed across the two measurement channels in each subject and for each test condition in order to increase the response SNR. Thus, this process yielded one FFR time-waveform per participant per stimulus condition. A Fast Fourier Transform (FFT) analysis was conducted on the FFR time-waveforms to obtain the spectral composition of the FFR. The spectral peak magnitude at stimulus F0 (122.1 Hz), as well as the magnitude of the noise floor (calculated by averaging FFT peak magnitudes in a 50 Hz window on either side of 122.1 Hz) were measured for each participant for each stimulus condition. As the magnitude of the noise floor can vary across participants, a ratio of the spectral peak magnitude at F0 to the noise floor magnitude (F0/NF) was calculated for each participant for each stimulus condition, and was used for statistical analyses.
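The F0/NF computation described above can be summarized in a short sketch. The function and variable names below are ours, and the handling of the bins immediately adjacent to F0 reflects one reading of the 50 Hz noise-floor window; it is an illustration rather than the exact analysis code used in the study.

```python
import numpy as np

def f0_to_noise_floor_ratio(ffr_waveform, fs_hz, f0_hz=122.1, window_hz=50.0):
    """Compute the ratio of FFT magnitude at the stimulus F0 (122.1 Hz) to the
    mean magnitude of the noise floor estimated in a 50 Hz window on either
    side of F0, as described above (illustrative sketch)."""
    x = np.asarray(ffr_waveform, dtype=float)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs_hz)

    # Magnitude of the spectral peak at (the bin closest to) F0
    f0_bin = np.argmin(np.abs(freqs - f0_hz))
    f0_magnitude = spectrum[f0_bin]

    # Noise floor: average magnitude within +/- 50 Hz of F0, excluding the F0 bin itself
    in_window = (np.abs(freqs - f0_hz) <= window_hz) & (np.arange(freqs.size) != f0_bin)
    noise_floor = spectrum[in_window].mean()
    return f0_magnitude / noise_floor
```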
MEMR
Following the standard clinical immittance test battery protocol, ipsilateral and contralateral MEMR thresholds were obtained using a 226 Hz probe tone presented alongside pulsed tonal elicitors at 500 Hz, 1000 Hz, and 2000 Hz in each ear.[100] MEMR measurements were not made for a 4000 Hz elicitor, as increased variability has been noted in acoustic reflexes obtained at this frequency in individuals with normal hearing sensitivity.[101],[102],[103] The MEMR was considered to be present if compliance in the test ear decreased by .02 mL or greater. The lowest elicitor intensity level at which this criterion was met on two out of three stimulus presentations was accepted as the MEMR threshold for that particular elicitor. All MEMR measurements were made using a GSI Tympstar diagnostic middle-ear analyzer. Here, it is worth mentioning that several studies of synaptopathy have utilized wideband probes and stimuli to elicit the MEMR.[72],[75],[76],[77] However, a tonal probe and elicitors were utilized in the current study in order to investigate if effects of synaptopathy are evident using a standard clinical immittance testing protocol.
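As a worked illustration of the threshold rule above (compliance decrease of at least .02 mL on two of three presentations, taken at the lowest qualifying elicitor level), consider the following sketch. The data structure and example values are hypothetical.

```python
def memr_threshold(responses_by_level, criterion_ml=0.02, min_hits=2):
    """Return the lowest elicitor level (dB HL) at which the compliance decrease
    meets the 0.02 mL criterion on at least two presentations, or None if no
    level qualifies. `responses_by_level` maps level -> list of compliance
    changes (mL) across presentations (hypothetical structure)."""
    for level in sorted(responses_by_level):
        hits = sum(1 for delta in responses_by_level[level] if delta >= criterion_ml)
        if hits >= min_hits:
            return level
    return None

# Hypothetical deflections (mL) for a 1000 Hz elicitor
example = {80: [0.00, 0.01, 0.01], 85: [0.02, 0.01, 0.03], 90: [0.04, 0.05, 0.04]}
print(memr_threshold(example))  # -> 85
```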
Behavioral Tests of Auditory Processing
Behavioral tests of auditory processing were administered in the areas of monaural low redundancy, temporal processing, dichotic listening, and binaural interaction. The specific tests employed in this battery included both speech and nonspeech tasks and assessed a range of auditory processes, following the recommendations for behavioral auditory processing test battery selection proposed by AAA[104] and ASHA.[105] Many of the tests included in the test battery (e.g., Time Compressed Sentences Test, Frequency Pattern Test, Gaps In Noise test, Dichotic Digits Test, Masking Level Difference) were similar to those utilized in Gallun et al.,[106] Gallun et al.,[107] Kubli et al.,[108] and Saunders et al.[109] Tests were administered via monitored recorded voice using a Sony CD player connected to a GSI Audiostar audiometer. Test stimuli were calibrated so that the tone peaked at 0 on the VU meter prior to testing. Stimuli were presented at intensity levels specified in the administration instructions for each test. Participants were instructed verbally via standardized prewritten testing instructions prior to administration of each test and were asked to respond to stimuli either verbally or by pressing a button, as required by the test. Order of tests administered and ear tested were randomized to reduce test order and ear effects. Breaks were provided after every third test and/or at the participants’ request.
Monaural low redundancy
Tasks of monaural low redundancy assess a participant’s ability to understand and repeat spectrally or temporally degraded auditory signals.[110] The Auditory Figure Ground (AFG) test and Low Pass Filtered Speech (LPFS) test were used to assess the effects of spectral degradation on speech understanding. The Time Compressed and Reverberated Speech (TCRS) test was administered to assess the effects of temporal degradation on speech perception.
AFG
Participants were asked to repeat back 20 target words in the presence of background noise presented to each ear at the following SNRs: 0 dB, +8 dB, and +12 dB.[111] The number of words repeated back correctly was calculated for each ear (“raw scores”). For diagnostic reporting purposes, raw scores are typically converted to scaled scores. In the current study, in addition to scaled scores, raw scores were utilized in data analysis to allow for ear-specific data comparisons between participants. Stimuli were presented using the SCAN-3:A CD.
LPFS
The LPFS is comprised of 50 NU-6 words that have been passed through a low-pass filter with a cut-off value of 1500 Hz to modify frequency content such that the words consist only of spectral energy below 1500 Hz.[112] The test was administered via the Tonal and Speech Materials for Auditory Perceptual Assessment audio compact disc (developed by the Department of Veterans Affairs). Participants were asked to repeat back words that were presented monaurally to the right ear (25 words) and the left ear (25 words), and to guess if unsure. Percent correct scores were calculated for each ear.
TCRS
The TCRS, also administered via the Tonal and Speech Materials for Auditory Perceptual Assessment audio compact disc (Department of Veterans Affairs), consists of 100 NU-6 words that have been compressed in the time domain.[113] The test in this study utilized both 45% and 65% compression with 0.3 seconds of reverberation. A total of 50 words were presented to each ear and percent correct score was determined for the right and left ears, respectively.
Dichotic listening
Tasks of dichotic listening involve the presentation of two different stimuli, such as words, digits, or sound clusters, to each ear simultaneously.[114] Dichotic listening can be further separated into tasks of binaural integration (combining information coming from the right and left ears) and binaural separation (attending to stimuli presented to a specified ear while ignoring stimuli presented to the opposite ear). The Dichotic Digits Test (DDT) and Competing Words Directed Ear (CWDE) test were used to assess binaural integration, and the Competing Sentences Test (CST) was administered to assess binaural separation.
DDT
The DDT consists of 20 two-digit pairs of numbers presented to the right ear and 20 two-digit pairs of numbers presented to the left ear.[115] Stimuli were presented via the Tonal and Speech Materials for Auditory Perceptual Assessment audio compact disc (Department of Veterans Affairs). Participants were instructed to repeat both pairs of numbers heard in each ear (a total of four numbers) and to guess if unsure. Percent correct scores were determined for the right and left ears.
CWDE
The CWDE consists of 15 word pairs directed to the right ear and 15 word pairs directed to the left ear. Stimuli were presented via the SCAN-3:A CD. Participants were instructed to first repeat back the word heard in the directed ear (determined by the test administrator), then the word heard in the nondirected ear.[111] For example, a participant may hear the word “knock” in the right ear while simultaneously hearing the word “deep” in the left ear. If the right ear is the directed ear, the participant must repeat back “knock, deep” in that order. The number of correct responses was tallied to obtain a raw score for each ear for each participant. As with the AFG test, raw scores obtained on the CWDE test are typically converted to scaled scores for diagnostic purposes. In the current study, raw scores were retained along with scaled scores for ear-specific data analysis.
CST
The CST consists of a set of 30 sentences that are presented to the right ear, while a different set of 30 sentences is simultaneously presented to the left ear via the SCAN-3:A CD.[111] Participants were asked to repeat 15 sentences presented to one ear while ignoring the sentences presented to the other ear. Participants were then asked to repeat 15 sentences presented to the opposite ear and to ignore the sentences presented to the first test ear. Key words in each sentence were scored to determine a percent correct score for each ear. Raw scores obtained on the CST are typically converted to scaled scores for diagnostic purposes. In the current study, raw scores were utilized for data analysis, in addition to the scaled scores, in order to retain ear-specific information.
Temporal processing
Tasks of temporal processing assess a participant’s ability to process auditory signals in the time domain[116] and may be categorized as temporal ordering or temporal resolution tasks. Participants underwent the Frequency Pattern Test (FPT) to assess temporal ordering abilities and the Gaps in Noise (GIN) test to assess temporal resolution abilities.
FPT
The FPT consists of a total of 30 patterns of three tones composed of high (1122 Hz) and low (880 Hz) frequencies that are presented to each ear individually.[117] Participants were asked to label the pattern by identifying each tone as either high or low. Each pattern contained at least one high- and one low-frequency tone. Stimuli for the FPT were presented via the Tonal and Speech Materials for Auditory Perceptual Assessment audio compact disc (Department of Veterans Affairs). A percent correct score was calculated for each ear.
GIN
The GIN is comprised of segments of broadband noise that are presented monaurally for 6 seconds with zero to three gaps in each segment. Gaps range in duration from 2 to 20 milliseconds.[118] Stimuli for the GIN were presented via the Gaps In Noise compact disc (Auditec, Inc.). Participants were asked to press a button every time they detected a gap in the noise. Gap detection threshold was determined to be the smallest gap participants were able to detect for four out of six presentations for both the right and left ears.
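The GIN scoring rule above can be illustrated as follows; the mapping from gap duration to the number of correct detections is a hypothetical construction for illustration only.

```python
def gin_threshold(trials_by_gap_ms, min_correct=4):
    """Return the shortest gap duration (ms) detected on at least four of six
    presentations, per the rule described above; None if no gap qualifies.
    `trials_by_gap_ms` maps gap duration -> number of correct detections."""
    detected = [gap for gap, correct in trials_by_gap_ms.items() if correct >= min_correct]
    return min(detected) if detected else None

# Hypothetical right-ear data (gap duration in ms -> correct detections out of 6)
print(gin_threshold({2: 1, 3: 2, 4: 4, 5: 5, 6: 6, 8: 6}))  # -> 4 ms
```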
Binaural interaction
Binaural interaction abilities were assessed using the Masking Level Difference (MLD) test, which determines a participant’s ability to detect signals in the presence of noise in both a homophasic (SₒNₒ) and antiphasic (SπNₒ) masking paradigm.[119] In this study, the signal of interest consisted of a 500 Hz tone with a narrowband masker centered around this frequency. Participants were instructed to ignore the masker and to press a button when the tone was detected. The masking level difference was calculated by determining the difference in threshold between the homophasic and antiphasic conditions. Stimuli were presented using the Masking Level Difference-Tone audio compact disc (Auditec, Inc.).
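The MLD computation reduces to a simple difference between the two detection thresholds, illustrated below with hypothetical values.

```python
def masking_level_difference(threshold_homophasic_db, threshold_antiphasic_db):
    """Release from masking: the difference between the 500 Hz tone detection
    thresholds in the homophasic (SoNo) and antiphasic (SpiNo) conditions."""
    return threshold_homophasic_db - threshold_antiphasic_db

# Hypothetical thresholds: tone detected at -8 dB (SoNo) and -18 dB (SpiNo)
print(masking_level_difference(-8, -18))  # -> 10 dB MLD
```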
Statistical Analysis
Mixed-model analyses of variance (ANOVA) were conducted to assess main and interaction effects of group (hunter, nonhunter; between-subjects factor), test ear (right, left; within-subjects factor), test frequency (within-subjects factor) and test condition (within-subjects factor) on audiometric measurements, outcomes of behavioral tests of auditory processing, and brainstem neural representation of F0. Post-hoc pairwise comparisons were conducted using Bonferroni correction. Mixed ANOVAs were supplemented with independent samples t-tests for any comparison of means between hunter and nonhunter participants. All statistical analysis was performed using IBM SPSS Statistics for Windows, version 25 (IBM Corp., Armonk, N.Y., USA).
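As an illustration of the analysis design (one within-subjects and one between-subjects factor with Bonferroni-corrected post-hoc comparisons), the sketch below uses the open-source pingouin package on hypothetical data. The study itself used IBM SPSS, and the full models also included additional multi-level within-subjects factors (test frequency, test condition) not shown here.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Hypothetical long-format data: one row per participant x ear, with a
# between-subjects factor (group) and a within-subjects factor (ear).
df = pd.DataFrame({
    "subject": list(range(1, 21)) * 2,
    "group": (["hunter"] * 10 + ["nonhunter"] * 10) * 2,
    "ear": ["right"] * 20 + ["left"] * 20,
    "pta_db": rng.normal(8, 3, 40).round(1),  # placeholder PTA values (dB HL)
})

# Mixed ANOVA: within-subjects factor (ear) and between-subjects factor (group)
aov = pg.mixed_anova(data=df, dv="pta_db", within="ear", between="group",
                     subject="subject")

# Bonferroni-corrected post-hoc pairwise comparisons
# (named pairwise_ttests in older pingouin releases)
posthoc = pg.pairwise_tests(data=df, dv="pta_db", within="ear", between="group",
                            subject="subject", padjust="bonf")

print(aov.round(3))
print(posthoc.round(3))
```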
Results
NIHL risk classification based on NEQ
Annual noise exposure values, or LAeq scores, calculated for each participant are listed in [TABLE 1]. Johnson et al.[87] consider participants with LAeq scores ≥ 79 to be at high risk for NIHL. A total of five participants (female = 1, male = 4), all of whom described themselves as hunters, exceeded this criterion in the current study. It is important to note that annual noise exposure estimation using the approach proposed by Johnson et al.[87] reflects exposure to continuous noise and does not take into account exposure to impulse noise such as firearm discharge. However, Johnson et al.[87] include a special cautionary note for firearm users taking their 1-Minute Noise Screen, indicating that regular use of firearms, even if only once every few months, puts these users at high risk of hearing loss. Thus, based on report of firearm use alone, all participants in the hunter group (female = 1, male = 9) were classified as being at high risk for NIHL. NEQ, audiological, FFR, MEMR, and behavioral test outcomes in participants classified as hunters (n = 10) versus non-hunters (n = 10) based on a self-report of hunting in the year immediately preceding experiment participation are presented in detail in the following sections.
Additionally, though all participants in the hunter group in the current study utilized firearms, two participants (S6 and S7) from the nonhunter group indicated recreational firearm use in the past year, although not for hunting purposes. Following the recommendations for risk categorization based on firearm use outlined by Johnson et al.,[87] these two nonhunter participants would also be classified as “high risk.” Two supplemental analyses were conducted on the perceptual and physiological auditory measures to address this potential confound:
1. Eliminating S6 and S7 (hunters [n = 10] vs. non-hunters [n = 8]).
2. Including S6 and S7 in the hunter group based on their history of recreational firearm use (albeit not hunting-related) (hunters [n = 12] vs. non-hunters [n = 8]).
With minor exceptions, these supplemental analyses yielded similar results in terms of effects of group, test ear, test frequency, and test condition on LAeq scores, audiological test results, behavioral auditory processing test outcomes, and FFR F0/NF values; they are briefly summarized in Appendix B.
NEQ, Audiological, FFR, MEMR, and behavioral test outcomes in participants classified as “hunters” vs. “non-hunters”
NEQ in nonhunter and hunter participants
An independent-samples t-test was conducted to compare LAeq scores in hunter and nonhunter participants. Results indicated significantly higher (poorer) LAeq scores in hunters (M = 78.72, SD = 6.09) as compared to non-hunters (M = 69.66, SD = 3.99); t(18) = −3.93, p = 0.001. As noted above, five participants in the hunter group (female = 1, male = 4) exceeded the criterion LAeq value of 79 in the current study; no individual in the nonhunter group returned an LAeq value exceeding 79.
Audiological outcomes in nonhunter and hunter participants
Mixed-model ANOVAs were used to determine effects of test frequency, test ear, and group (hunters, non-hunters) on audiometric thresholds, MEMR thresholds, TEOAE SNR values, and DPOAE SNR values. Depending on the test in question, the dependent variable was audiometric threshold (dB HL), MEMR threshold (dB HL), TEOAE SNR (dB), or DPOAE SNR (dB). Test frequency and test ear were within-subjects variables, whereas group was the between-subjects variable. Similar mixed-model ANOVAs were used to determine effects of test ear and group (hunters, non-hunters) on pure-tone average and speech recognition thresholds. For these analyses, test ear was the within-subjects factor and group was the between-subjects factor, with the dependent variable being pure-tone average or speech recognition threshold. For all mixed-model ANOVAs, Greenhouse-Geisser corrections were applied in instances when Mauchly’s test indicated that the assumption of sphericity had been violated.
Pure-tone audiometric thresholds
All participants had pure-tone audiometric thresholds of ≤ 20 dB HL from .25 to 8 kHz in both ears (see [TABLE 2]) with air-bone gaps of ≤ 10 dB, meeting clinical definitions of normal hearing sensitivity. Mean air conduction pure-tone thresholds for nonhunter (filled circles) and hunter (open circles) participants are plotted at audiometric test frequencies between .25 and 8 kHz for the right (panel A) and left (panel B) ears in [Figure 1]. A significant main effect was noted for audiometric test frequency (F(3.46, 62.43) = 7.72, p < 0.001). Bonferroni-corrected post-hoc multiple comparisons indicated that audiometric thresholds were lower (better) at 6 kHz as compared to .25 kHz and 3 kHz, and higher (poorer) at 3 kHz as compared to 1 kHz. A significant main effect was also observed for group (F(1, 18) = 4.47, p = 0.04), such that non-hunters had lower (better) audiometric thresholds than hunters. Bonferroni-adjusted post-hoc multiple comparison testing revealed that, apart from .25 kHz in the left ear, audiometric thresholds were consistently poorer in participants classified as hunters as compared to non-hunters. There was no significant main effect for test ear (F(1, 18) = 4.03, p = 0.06), nor were there any significant interaction effects. Additionally, when pure-tone average (.5, 1, 2 kHz; PTA) was the dependent variable, no significant main effects were observed for test ear (F(1, 18) = .02, p = 0.65) or group (F(1, 18) = 2.72, p = 0.11); further, there was no interaction effect. PTA values calculated for each participant are listed in [TABLE 1].
TABLE 2 Air Conduction Audiometric Thresholds for Nonhunter and Hunter Participants.
Figure 1 Mean air conduction pure-tone thresholds for nonhunter (filled circles) and hunter (open circles) participants at audiometric test frequencies between .25 and 8 kHz for the right (panel A) and left (panel B) ears. Symbols represent the mean, whereas error bars represent the standard error across participants.
Speech recognition in quiet
SRT values obtained for each participant are listed in [TABLE 1]. For SRT, though there was no main effect of test ear (F(1, 18) = 1.6, p = 0.22), there was a main effect of group (F(1, 18) = 6.48, p = 0.02). Specifically, non-hunters (M = 4.75 dB HL, SE = 1.25) had a lower (better) SRT than hunters (M = 9.25 dB HL, SE = 1.25). There was no interaction effect of test ear and group on SRT. Both hunter and nonhunter participants scored 100% on the WRS in both ears; further inferential statistics were not performed as there was no variance in the WRS data.
Distortion Product Oto-Acoustic Emissions
DPOAE SNRs were > 6 dB at all test frequencies for 5/10 nonhunter participants, although different participants met this criterion in each ear. When examining DPOAE SNR levels by test frequency in the nonhunter group, SNR levels were > 6 dB in both ears for all participants at 2.8, 4, and 6 kHz, for 9/10 participants at 1.4 and 2 kHz, and for 5/10 participants at 1 kHz. In the hunter group, the 6 dB SNR criterion was met at all test frequencies for 2/10 participants in the right ear and 3/10 participants in the left ear. When analyzing by test frequency in the hunter group, DPOAE SNR levels were > 6 dB for all participants at 4 and 6 kHz in both ears, for 9/10 participants at 2.8 kHz in both ears, for 8/10 (right ear) and 7/10 (left ear) participants at 2 kHz, and for 2/10 (right ear) and 3/10 (left ear) participants at 1 kHz. DPOAE SNR values at each test frequency are provided for each participant in [TABLE 3]. Mean DPOAE SNR values for nonhunter (filled circles) and hunter (open circles) participants are plotted at test frequencies between 1 and 6 kHz for the right (panel A) and left (panel B) ears in [Figure 2]. A significant main effect was observed for DPOAE test frequency (F(1.63, 29.45) = 14.04, p < .001). Post-hoc multiple comparison testing with Bonferroni correction indicated that, in general, DPOAE SNRs increased as a function of test frequency. Main effects for test ear (F(1, 18) = 2.14, p = .161) and group (F(1, 18) = .52, p = .48) were nonsignificant, as were interaction effects.
Figure 2 Mean DPOAE SNR (right ear: Panel A; left ear: Panel B) and TEOAE SNR (right ear: Panel C; left ear: Panel D) values for nonhunter (filled circles) and hunter (open circles) participants as a function of test frequency in kHz. Symbols represent the mean, whereas error bars represent the standard error across participants.
Transient Evoked Oto-Acoustic Emissions
TEOAE SNR levels were > 6 dB at all frequencies in both ears for only one participant each in the nonhunter (participant # 3) and hunter (participant # 11) groups. When analyzing by test frequency, the 6 dB SNR criterion in the nonhunter group was met as follows: 1 kHz (1/10 right ear, 4/10 left ear), 1.4 kHz (6/10 both ears), 2 kHz (4/10 both ears), 2.8 kHz (5/10 right ear, 6/10 left ear), and 4 kHz (4/10 right ear, 7/10 left ear). Test frequency-wise, the 6 dB SNR criterion in the hunter group was met as follows: 1 kHz (3/10 both ears), 1.4 kHz (5/10 right ear, 3/10 left ear), 2 kHz (6/10 right ear, 4/10 left ear), 2.8 kHz (6/10 both ears), and 4 kHz (3/10 right ear, 7/10 left ear). TEOAE SNR values at each test frequency are provided for each participant in [TABLE 4]. Mean TEOAE SNR values for nonhunter (filled circles) and hunter (open circles) participants are plotted at test frequencies between 1 and 4 kHz for the right (panel C) and left (panel D) ears in [Figure 2]. Similar to DPOAEs, a significant main effect was observed for TEOAE test frequency (F(2.78, 50.06) = 3.42, p = 0.027). Bonferroni-adjusted post-hoc multiple comparison testing indicated that TEOAE SNRs were lower (poorer) at 1 kHz and 2 kHz as compared to 2.8 kHz. As with DPOAEs, main effects for test ear (F(1, 18) = 1.02, p = .32) and group (F(1, 18) = .15, p = .69), as well as interaction effects, were not significant.
FFRs in nonhunter and hunter participants
Grand average FFR time-waveforms and spectra obtained to the English back vowel /u/ presented in clean, +5, 0, and −5 dB SNR conditions in nonhunter (black) and hunter (red) participants are plotted in panels A and B of [Figure 3]. Visual inspection of the FFR time-waveforms suggests a larger amplitude for nonhunter as compared to hunter participants in each SNR condition. FFR time-waveforms ([Figure 3], panel A) show robust periodicity for both nonhunter and hunter participants in the clean condition; whereas clear periodicity is maintained in the nonhunter group at the remaining SNR conditions, some degradation is apparent in the waveform morphology of hunter participants at +5, 0, and −5 dB SNRs. For any given SNR condition, spectral peaks at F0 (122.1 Hz) and its harmonics were discernible in the FFR spectra ([Figure 3], panel B) elicited in both nonhunter and hunter participants. Average peak magnitudes were greater in the nonhunter as compared to the hunter participants at all SNR conditions. For both nonhunter and hunter groups, the average peak magnitude at F0 decreased with decreasing SNR.