Results 1 - 19 of 19
1.
Hear Res ; 424: 108594, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35964452

ABSTRACT

Middle ear muscle contractions (MEMCs) are most commonly considered a response to high-level acoustic stimuli. However, MEMCs have also been observed in the absence of sound, either as a response to somatosensory stimulation or in concert with other motor activity. The relationship between MEMCs and non-acoustic sources is unclear. This study examined associations between measures of voluntary unilateral eye closure and impedance-based measures indicative of middle ear muscle activity while controlling for demographic and clinical factors in a large group of participants (N=190) with present clinical acoustic reflexes and no evidence of auditory dysfunction. Participants were instructed to voluntarily close the eye ipsilateral to the ear canal containing a detection probe at three levels of effort. Orbicularis oculi muscle activity was measured using surface electromyography. Middle ear muscle activity was inferred from changes in total energy reflected in the ear canal using a filtered (0.2 to 8 kHz) click train. Results revealed that middle ear muscle activity was positively associated with eye muscle activity. MEMC occurrence rates for eye closure observed in this study were generally higher than previously published rates for high-level brief acoustic stimuli in the same participant pool suggesting that motor activity may be a more reliable elicitor of MEMCs than acoustic stimuli. These results suggest motor activity can serve as a confounding factor for auditory exposure studies as well as complicate the interpretation of any impulsive noise damage risk criteria that assume MEMCs serve as a consistent, uniform protective factor. The mechanism linking eye and middle ear muscle activity is not understood and is an avenue for future research.
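
A minimal sketch (not the study's analysis code) of one way such impedance-based measures can be derived from a click-train probe: band-pass the ear-canal recording to the 0.2-8 kHz range named above, compute the total reflected energy in a fixed window after each click, and express each click's energy relative to a pre-stimulus baseline. The sampling rate, window length, and 1 dB criterion are illustrative assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 44100  # Hz, assumed probe-recording sampling rate

def reflected_energy_per_click(ear_canal_rec, click_onsets, win_len):
    """Total energy of the reflected signal in a fixed window after each click."""
    sos = butter(4, [200, 8000], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, ear_canal_rec)
    return np.array([np.sum(filtered[i:i + win_len] ** 2) for i in click_onsets])

def energy_change_db(energies, n_baseline_clicks=5):
    """Energy of each click relative to the mean of the first few (baseline) clicks, in dB."""
    baseline = np.mean(energies[:n_baseline_clicks])
    return 10.0 * np.log10(energies / baseline)

# Example with synthetic noise standing in for an ear-canal recording.
rec = np.random.randn(FS)
onsets = np.arange(0, FS - 1000, 2205)  # one click every 50 ms
changes = energy_change_db(reflected_energy_per_click(rec, onsets, win_len=441))
print("Clicks exceeding a 1 dB change:", np.where(np.abs(changes) > 1.0)[0])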


Subject(s)
Ear, Middle , Hearing Tests , Acoustic Stimulation/methods , Ear, Middle/physiology , Hearing Tests/methods , Humans , Muscle Contraction , Sound
2.
Ear Hear ; 43(4): 1262-1272, 2022.
Article in English | MEDLINE | ID: mdl-34882619

ABSTRACT

OBJECTIVES: Bilateral cochlear implant (BiCI) listeners use independent processors in each ear. This independence and lack of shared hardware prevents control of the timing of sampling and stimulation across ears, which precludes the development of bilaterally-coordinated signal processing strategies. As a result, these devices potentially reduce access to binaural cues and introduce disruptive artifacts. For example, measurements from two clinical processors demonstrate that independently-running processors introduce interaural incoherence. These issues are typically avoided in the laboratory by using research processors with bilaterally-synchronized hardware. However, these research processors do not typically run in real-time and are difficult to take out into the real-world due to their benchtop nature. Hence, the question of whether just applying hardware synchronization to reduce bilateral stimulation artifacts (and thereby potentially improve functional spatial hearing performance) has been difficult to answer. The CI personal digital assistant (ciPDA) research processor, which uses one clock to drive two processors, presented an opportunity to examine whether synchronization of hardware can have an impact on spatial hearing performance. DESIGN: Free-field sound localization and spatial release from masking (SRM) were assessed in 10 BiCI listeners using both their clinical processors and the synchronized ciPDA processor. For sound localization, localization accuracy was compared within-subject for the two processor types. For SRM, speech reception thresholds were compared for spatially separated and co-located configurations, and the amount of unmasking was compared for synchronized and unsynchronized hardware. There were no deliberate changes of the sound processing strategy on the ciPDA to restore or improve binaural cues. RESULTS: There was no significant difference in localization accuracy between unsynchronized and synchronized hardware (p = 0.62). Speech reception thresholds were higher with the ciPDA. In addition, although five of eight participants demonstrated improved SRM with synchronized hardware, there was no significant difference in the amount of unmasking due to spatial separation between synchronized and unsynchronized hardware (p = 0.21). CONCLUSIONS: Using processors with synchronized hardware did not yield an improvement in sound localization or SRM for all individuals, suggesting that mere synchronization of hardware is not sufficient for improving spatial hearing outcomes. Further work is needed to improve sound coding strategies to facilitate access to spatial hearing cues. This study provides a benchmark for spatial hearing performance with real-time, bilaterally-synchronized research processors.
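
The amount of unmasking reported above is simple arithmetic on speech reception thresholds; the sketch below shows the usual computation (SRT co-located minus SRT spatially separated, in dB). The per-listener SRT values are hypothetical, not the study's data.

def spatial_release_from_masking(srt_colocated_db: float,
                                 srt_separated_db: float) -> float:
    """Positive values indicate a benefit (unmasking) from spatial separation."""
    return srt_colocated_db - srt_separated_db

# Hypothetical per-listener SRTs (dB SNR) for the two configurations.
listeners = {
    "S1": {"colocated": -2.0, "separated": -6.5},
    "S2": {"colocated": -1.0, "separated": -1.5},
}

for subject, srts in listeners.items():
    srm = spatial_release_from_masking(srts["colocated"], srts["separated"])
    print(f"{subject}: SRM = {srm:+.1f} dB")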


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization , Speech Perception , Computers, Handheld , Hearing , Humans , Sound Localization/physiology , Speech Perception/physiology
3.
J Acoust Soc Am ; 146(5): 3993, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31795698

ABSTRACT

Middle ear muscle contractions (MEMC) can be elicited in response to high-level sounds, and have been used clinically as acoustic reflexes (ARs) during evaluations of auditory system integrity. The results of clinical AR evaluations do not necessarily generalize to different signal types or durations. The purpose of this study was to evaluate the likelihood of observing MEMC in response to brief sound stimuli (tones, recorded gunshots, noise) in adult participants (N = 190) exhibiting clinical ARs and excellent hearing sensitivity. Results revealed that the presence of clinical ARs was not a sufficient indication that listeners will also exhibit MEMC for brief sounds. Detection rates varied across stimulus types between approximately 20% and 80%. Probabilities of observing MEMC also differed by clinical AR magnitude and latency, and declined over the period of minutes during the course of the MEMC measurement series. These results provide no support for the inclusion of MEMC as a protective factor in damage-risk criteria for impulsive noises, and the limited predictability of whether a given individual will exhibit MEMC in response to a brief sound indicates a need to measure and control for MEMC in studies evaluating pharmaceutical interventions for hearing loss.


Subject(s)
Ear, Middle/physiology , Hearing Tests/methods , Reflex, Acoustic , Acoustic Stimulation/methods , Acoustic Stimulation/standards , Adolescent , Adult , Female , Hearing Tests/standards , Humans , Male , Middle Aged , Muscle Contraction , Reaction Time , Sound
4.
J Acoust Soc Am ; 145(4): 2498, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31046310

ABSTRACT

Adults with bilateral cochlear implants (BiCIs) receive benefits in localizing stationary sounds when listening with two implants compared with one; however, sound localization ability is significantly poorer when compared to normal hearing (NH) listeners. Little is known about localizing sound sources in motion, which occurs in typical everyday listening situations. The authors considered the possibility that sound motion may improve sound localization in BiCI users by providing multiple places of information. Alternatively, the ability to compare multiple spatial locations may be compromised in BiCI users due to degradation of binaural cues, and thus result in poorer performance relative to NH adults. In this study, the authors assessed listeners' abilities to distinguish between sounds that appear to be moving vs stationary, and track the angular range and direction of moving sounds. Stimuli were bandpass-filtered (150-6000 Hz) noise bursts of different durations, panned over an array of loudspeakers. Overall, the results showed that BiCI users were poorer than NH adults in (i) distinguishing between a moving vs stationary sound, (ii) correctly identifying the direction of movement, and (iii) tracking the range of movement. These findings suggest that conventional cochlear implant processors are not able to fully provide the cues necessary for perceiving auditory motion correctly.
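
A generic sketch of the stimulus technique described: a band-limited noise burst panned across adjacent loudspeakers to simulate motion. The constant-power panning law, trajectory, band edges, and array size here are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 44100  # Hz, assumed playback sampling rate

def bandpass_noise(duration_s, lo_hz=150, hi_hz=6000, fs=FS):
    """Band-limited noise burst (150-6000 Hz, as in the abstract)."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, np.random.randn(int(duration_s * fs)))

def pan_across_speakers(signal, n_speakers, start_pos, end_pos):
    """Return an (n_samples, n_speakers) array using constant-power pairwise panning."""
    n = len(signal)
    pos = np.linspace(start_pos, end_pos, n)         # fractional speaker index over time
    left = np.clip(np.floor(pos).astype(int), 0, n_speakers - 2)
    frac = pos - left                                # 0 -> fully on the 'left' speaker of the pair
    out = np.zeros((n, n_speakers))
    out[np.arange(n), left] = signal * np.cos(frac * np.pi / 2)
    out[np.arange(n), left + 1] = signal * np.sin(frac * np.pi / 2)
    return out

burst = bandpass_noise(duration_s=1.0)
multichannel = pan_across_speakers(burst, n_speakers=8, start_pos=1.0, end_pos=6.0)
print(multichannel.shape)  # (44100, 8)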


Subject(s)
Cochlear Implants/standards , Hearing Loss/physiopathology , Sound Localization , Adult , Aged , Auditory Threshold , Case-Control Studies , Cues , Female , Hearing Loss/rehabilitation , Humans , Male , Middle Aged , Motion
5.
Hear Res ; 378: 53-62, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30538053

ABSTRACT

The current study addressed the existence of an anticipatory middle-ear muscle contraction (MEMC) as a protective mechanism found in recent damage-risk criteria for impulse noise exposure. Specifically, the experiments reported here tested instances when an exposed individual was aware of and could anticipate the arrival of an acoustic impulse. In order to detect MEMCs in human subjects, a laser-Doppler vibrometer (LDV) was used to measure tympanic membrane (TM) motion in response to a probe tone. Here we directly measured the time course and relative magnitude changes of TM velocity in response to an acoustic reflex-eliciting (i.e. MEMC eliciting) impulse in 59 subjects with clinically assessable MEMCs. After verifying the presence of the MEMC, we used a classical conditioning paradigm pairing reflex-eliciting acoustic impulses (unconditioned stimulus, UCS) with various preceding stimuli (conditioned stimulus, CS). Changes in the time-course of the MEMC following conditioning were considered evidence of MEMC conditioning, and any indication of an MEMC prior to the onset of the acoustic elicitor was considered an anticipatory response. Nine subjects did not produce a MEMC measurable via LDV. For those subjects with an observable MEMC (n = 50), 48 subjects (96%) did not show evidence of an anticipatory response after conditioning, whereas only 2 subjects (4%) did. These findings reveal that MEMCs are not readily conditioned in most individuals, suggesting that anticipatory MEMCs are not prevalent within the general population. The prevalence of anticipatory MEMCs does not appear to be sufficient to justify inclusion as a protective mechanism in auditory injury risk assessments.


Subject(s)
Acoustic Stimulation , Anticipation, Psychological , Hearing Tests , Hearing , Muscle Contraction , Reflex, Acoustic , Stapedius/innervation , Tensor Tympani/innervation , Tympanic Membrane/physiology , Adult , Conditioning, Psychological , Female , Humans , Laser-Doppler Flowmetry , Male , Middle Aged , Movement , Predictive Value of Tests , Reproducibility of Results , Young Adult
6.
Hear Res ; 370: 65-73, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30326382

ABSTRACT

Sensory performance is constrained by the information in the stimulus and the precision of the involved sensory system(s). Auditory spatial acuity is robust across a broad range of sound frequencies and source locations, but declines at eccentric lateral angles. The basis of such variation is not fully understood. Low-frequency auditory spatial acuity is mediated by sensitivity to interaural time difference (ITD) cues. While low-frequency spatial acuity varies across azimuth and some physiological models predict strong medial bias in the precision of ITD sensitivity, human psychophysical ITD sensitivity appears to vary only slightly with reference ITD magnitude. Correspondingly, recent analyses suggest that spatial variation in human low-frequency acuity is well-accounted for by acoustic factors alone. Here we examine the matter of high-frequency auditory acuity, which is mediated by sensitivity to interaural level difference (ILD) cues. Using two different psychophysical tasks in human subjects, we demonstrate decreasing ILD acuity with increasing ILD magnitude. We then demonstrate that the multiplicative combination of spatially variant sensory precision and physical cue information (local slope of the ILD cue) provides improved prediction of classic high-frequency spatial acuity data. Finally, we consider correlates of magnitude dependent acuity in neurons that are sensitive to ILDs.
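
A hedged sketch of the multiplicative-combination idea: predicted acuity at each azimuth follows from the local slope of the ILD cue together with an assumed magnitude-dependent ILD resolution. Both functions below are toy placeholders, not the paper's measured cues or fitted sensitivity.

import numpy as np

def ild_cue(azimuth_deg):
    """Toy ILD-vs-azimuth function (dB); real cues come from acoustic measurements."""
    return 15.0 * np.sin(np.radians(azimuth_deg))

def ild_jnd(ild_magnitude_db):
    """Assumed magnitude-dependent ILD resolution: poorer at larger ILDs."""
    return 1.0 + 0.15 * np.abs(ild_magnitude_db)

azimuths = np.arange(0, 91, 5)
ilds = ild_cue(azimuths)
slopes = np.gradient(ilds, azimuths)            # dB per degree (local cue slope)
predicted_maa = ild_jnd(ilds) / np.abs(slopes)  # degrees needed to change the ILD by one JND

for az, maa in zip(azimuths, predicted_maa):
    print(f"azimuth {az:3d} deg -> predicted acuity ~{maa:6.1f} deg")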


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Cues , Signal Detection, Psychological , Sound Localization , Adult , Animals , Auditory Pathways/physiology , Behavior, Animal , Chinchilla , Evoked Potentials, Auditory , Female , Humans , Male , Psychoacoustics , Time Factors
7.
Aerosp Med Hum Perform ; 89(6): 547-551, 2018 Jun 01.
Article in English | MEDLINE | ID: mdl-29789088

ABSTRACT

BACKGROUND: During ground operations, rotary-wing aircraft engines and subsystems produce noise hazards that place airfield personnel at risk for hearing damage. The noise exposure levels outside the aircraft during various operating conditions, and the distances from aircraft at which they drop to safe levels, are not readily available. The current study measured noise levels at various positions around the UH-60 Black Hawk helicopter for three operating conditions typically used when the aircraft is on the ground. METHODS: Microphones were positioned systematically around the helicopter and A-weighted sound pressure levels (SPLs) were computed from the recordings. In addition, the 85-dBA SPL contour around the aircraft was mapped. The resulting A-weighted SPLs and contour mapping were used to determine the noise hazard area around the helicopter. RESULTS: Measurements reported here show noise levels of 105 dB or greater in all operating conditions. The fueling location at the left rear of the aircraft near the auxiliary power unit (APU) is the area of greatest risk for noise-induced hearing loss (NIHL). Additionally, sound field contours indicate noise hazard areas (>85 dBA SPL) can extend beyond 100 ft from the helicopter. CONCLUSIONS: This report details the areas of greatest risk for auditory injury around the UH-60 Black Hawk helicopter. Our findings suggest the area of hazardous noise levels around the aircraft can extend to neighboring aircraft, particularly on the side of the aircraft where the APU is located. Hearing protection should be worn whenever the aircraft is operating, even if working at a distance. Jones HG, Greene NT, Chen MR, Azcona CM, Archer BJ, Reeves ER. The danger zone for noise hazards around the Black Hawk helicopter. Aerosp Med Hum Perform. 2018; 89(6):547-551.
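
For context on the METHODS, the sketch below shows one standard way an A-weighted equivalent level can be computed from a calibrated pressure recording (the IEC 61672 analog A-weighting prototype mapped to discrete time with a bilinear transform). This is generic signal processing, not the authors' measurement chain, and it assumes the recording is calibrated in pascals.

import numpy as np
from scipy.signal import bilinear, lfilter

P_REF = 20e-6  # reference pressure, Pa

def a_weighting(fs):
    """Design a digital A-weighting filter (b, a) for sampling rate fs."""
    f1, f2, f3, f4 = 20.598997, 107.65265, 737.86223, 12194.217
    a1000 = 1.9997  # normalization so the response is ~0 dB at 1 kHz
    num = [(2 * np.pi * f4) ** 2 * 10 ** (a1000 / 20.0), 0, 0, 0, 0]
    den = np.polymul([1, 4 * np.pi * f4, (2 * np.pi * f4) ** 2],
                     [1, 4 * np.pi * f1, (2 * np.pi * f1) ** 2])
    den = np.polymul(np.polymul(den, [1, 2 * np.pi * f3]), [1, 2 * np.pi * f2])
    return bilinear(num, den, fs)

def laeq(pressure_pa, fs):
    """A-weighted equivalent continuous sound level in dBA."""
    b, a = a_weighting(fs)
    weighted = lfilter(b, a, pressure_pa)
    return 20 * np.log10(np.sqrt(np.mean(weighted ** 2)) / P_REF)

# Example with a synthetic 1 kHz tone at 1 Pa RMS (~94 dB SPL).
fs = 48000
t = np.arange(fs) / fs
tone = np.sqrt(2) * np.sin(2 * np.pi * 1000 * t)
print(f"LAeq ~ {laeq(tone, fs):.1f} dBA")  # should come out close to 94 dBA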


Subject(s)
Aircraft , Hearing Loss, Noise-Induced/etiology , Noise, Occupational/adverse effects , Humans , Occupational Diseases/etiology , Risk Factors
8.
J Acoust Soc Am ; 143(3): 1428, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29604701

ABSTRACT

Normal hearing listeners extract small interaural time differences (ITDs) and interaural level differences (ILDs) to locate sounds and segregate targets from noise. Bilateral cochlear implant listeners show poor sensitivity to ITDs when using clinical processors. This is because common clinical stimulation approaches use high rates [∼1000 pulses per-second (pps)] for each electrode in order to provide good speech representation, but sensitivity to ITDs is best at low rates of stimulation (∼100-300 pps). Mixing rates of stimulation across the array is a potential solution. Here, ITD sensitivity for a number of mixed-rate configurations that were designed to preserve speech envelope cues using high-rate stimulation and spatial hearing using low rate stimulation was examined. Results showed that ITD sensitivity in mixed-rate configurations when only one low rate electrode was included generally yielded ITD thresholds comparable to a configuration with low rates only. Low rate stimulation at basal or middle regions on the electrode array yielded the best sensitivity to ITDs. This work provides critical evidence that supports the use of mixed-rate strategies for improving ITD sensitivity in bilateral cochlear implant users.


Subject(s)
Auditory Perception , Cochlear Implants , Aged , Aged, 80 and over , Female , Humans , Loudness Perception , Male , Middle Aged , Psychometrics , Time Factors
9.
J Acoust Soc Am ; 140(5): EL392, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27908067

ABSTRACT

Bilateral cochlear implant (BiCI) users have shown variability in interaural time difference (ITD) sensitivity at different places along the cochlea. This paper investigates perception of multi-electrode binaural stimulation to determine if auditory object formation (AOF) and lateralization are affected by variability in ITD sensitivity when a complex sound is encoded with multi-channel processing. AOF and ITD lateralization were compared between single- and multi-electrode configurations. Most (7/8) BiCI users perceived a single auditory object with multi-electrode stimulation, and the range of lateralization was comparable to single-electrode stimulation, suggesting that variability in single-electrode ITD sensitivity does not compromise AOF with multi-electrode stimulation.


Subject(s)
Cochlear Implants , Acoustic Stimulation , Cochlear Implantation , Electric Stimulation , Sound , Sound Localization
10.
Ear Hear ; 37(5): e341-5, 2016.
Article in English | MEDLINE | ID: mdl-27054512

ABSTRACT

OBJECTIVE: This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. DESIGN: Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. RESULTS: The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. CONCLUSIONS: The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.


Subject(s)
Cochlear Implantation/methods , Cochlear Implants , Deafness/rehabilitation , Hearing Loss, Bilateral/rehabilitation , Sound Localization , Acoustic Stimulation , Adult , Aged , Cues , Female , Humans , Male , Middle Aged
11.
J Neurophysiol ; 114(5): 2991-3001, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26400253

ABSTRACT

Normal-hearing human listeners and a variety of studied animal species localize sound sources accurately in reverberant environments by responding to the directional cues carried by the first-arriving sound rather than spurious cues carried by later-arriving reflections, which are not perceived discretely. This phenomenon is known as the precedence effect (PE) in sound localization. Despite decades of study, the biological basis of the PE remains unclear. Though the PE was once widely attributed to central processes such as synaptic inhibition in the auditory midbrain, a more recent hypothesis holds that the PE may arise essentially as a by-product of normal cochlear function. Here we evaluated the PE in a unique human patient population with demonstrated sensitivity to binaural information but without functional cochleae. Users of bilateral cochlear implants (CIs) were tested in a psychophysical task that assessed the number and location(s) of auditory images perceived for simulated source-echo (lead-lag) stimuli. A parallel experiment was conducted in a group of normal-hearing (NH) listeners. Key findings were as follows: 1) Subjects in both groups exhibited lead-lag fusion. 2) Fusion was marginally weaker in CI users than in NH listeners but could be augmented by systematically attenuating the amplitude of the lag stimulus to coarsely simulate adaptation observed in acoustically stimulated auditory nerve fibers. 3) Dominance of the lead in localization varied substantially among both NH and CI subjects but was evident in both groups. Taken together, data suggest that aspects of the PE can be elicited in CI users, who lack functional cochleae, thus suggesting that neural mechanisms are sufficient to produce the PE.
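
A sketch of a generic lead-lag (source-echo) stimulus of the kind described: a leading click followed by a delayed copy whose amplitude can be attenuated. The delay, click length, and attenuation values are illustrative, not the study's parameters.

import numpy as np

def lead_lag_pair(fs, lag_ms=4.0, lag_atten_db=0.0, click_len=8, total_ms=50.0):
    """Return a mono click pair: lead at t = 0, attenuated lag after lag_ms."""
    n = int(round(total_ms * 1e-3 * fs))
    x = np.zeros(n)
    click = np.ones(click_len)
    lag_start = int(round(lag_ms * 1e-3 * fs))
    x[:click_len] += click                                                     # lead
    x[lag_start:lag_start + click_len] += click * 10 ** (-lag_atten_db / 20)   # lag
    return x

stim = lead_lag_pair(fs=48000, lag_ms=4.0, lag_atten_db=6.0)
print("samples:", stim.size, "peak amplitude:", stim.max())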


Subject(s)
Auditory Threshold/physiology , Cochlea/physiology , Hearing/physiology , Sound Localization/physiology , Acoustic Stimulation , Aged , Cochlear Implants , Female , Humans , Male , Middle Aged , Psychophysics
12.
J Neurophysiol ; 114(1): 531-9, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25972580

ABSTRACT

The century-old duplex theory of sound localization posits that low- and high-frequency sounds are localized with two different acoustical cues, interaural time and level differences (ITDs and ILDs), respectively. While behavioral studies in humans and behavioral and neurophysiological studies in a variety of animal models have largely supported the duplex theory, behavioral sensitivity to ILD is curiously invariant across the audible spectrum. Here we demonstrate that auditory midbrain neurons in the chinchilla (Chinchilla lanigera) also encode ILDs in a frequency-invariant manner, efficiently representing the full range of acoustical ILDs experienced as a joint function of sound source frequency, azimuth, and distance. We further show, using Fisher information, that nominal "low-frequency" and "high-frequency" ILD-sensitive neural populations can discriminate ILD with similar acuity, yielding neural ILD discrimination thresholds for near-midline sources comparable to behavioral discrimination thresholds estimated for chinchillas. These findings thus suggest a revision to the duplex theory and reinforce ecological and efficiency principles that hold that neural systems have evolved to encode the spectrum of biologically relevant sensory signals to which they are naturally exposed.
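
A hedged illustration of the Fisher-information logic: for a population of ILD-tuned neurons with assumed Poisson-like variability, the population Fisher information at a reference ILD bounds the discrimination threshold (roughly 1/sqrt(FI) for d' = 1). The sigmoidal tuning curves below are toy stand-ins, not recorded chinchilla midbrain data.

import numpy as np

def sigmoid_rate(ild_db, midpoint, slope, r_max=50.0, r_min=2.0):
    """Toy ILD tuning curve: firing rate (spikes/s) versus ILD."""
    return r_min + (r_max - r_min) / (1.0 + np.exp(-slope * (ild_db - midpoint)))

def population_fisher_info(ild_db, midpoints, slopes, eps=1e-3):
    """Sum of per-neuron FI = f'(ILD)^2 / f(ILD), assuming Poisson variability."""
    fi = 0.0
    for m, k in zip(midpoints, slopes):
        f = sigmoid_rate(ild_db, m, k)
        fprime = (sigmoid_rate(ild_db + eps, m, k) - sigmoid_rate(ild_db - eps, m, k)) / (2 * eps)
        fi += fprime ** 2 / f
    return fi

rng = np.random.default_rng(1)
midpoints = rng.uniform(-20, 20, size=100)   # hypothetical tuning midpoints (dB)
slopes = rng.uniform(0.2, 0.6, size=100)     # hypothetical tuning slopes (1/dB)
for ref in (0.0, 10.0, 20.0):
    thr = 1.0 / np.sqrt(population_fisher_info(ref, midpoints, slopes))
    print(f"reference ILD {ref:5.1f} dB -> predicted threshold ~{thr:.2f} dB")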


Subject(s)
Auditory Pathways/physiology , Inferior Colliculi/physiology , Neurons/physiology , Sound Localization/physiology , Acoustic Stimulation , Acoustics , Action Potentials , Animals , Chinchilla , Cues , Female , Information Theory , Male , Microelectrodes
13.
J Acoust Soc Am ; 138(6): 3826-33, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26723337

ABSTRACT

Recent psychophysical studies in bilateral cochlear implant users have shown that interaural timing difference (ITD) sensitivity with electrical stimulation varies depending on the place of stimulation along the cochlear array. While these studies have measured ITD sensitivity at single electrode places separately, it is important to understand how ITD sensitivity is affected when multiple electrodes are stimulated together because multi-electrode stimulation is required for representation of complex sounds. Multi-electrode stimulation may lead to poorer overall performance due to interference from places with poor ITD sensitivity, or from channel interaction due to electrical current spread. Alternatively, multi-electrode stimulation might result in overall good sensitivity if listeners can extract the most reliable ITD cues available. ITD just noticeable differences (JNDs) were measured for different multi-electrode configurations. Results showed that multi-electrode ITD JNDs were poorer than ITD JNDs for the best single-electrode pair. However, presenting ITD information along the whole array appeared to produce better sensitivity compared with restricting stimulation to the ends of the array, where ITD JNDs were comparable to the poorest single-electrode pair. These findings suggest that presenting ITDs in one cochlear region only may not be optimal for maximizing ITD sensitivity in multi-electrode stimulation.


Subject(s)
Cochlear Implantation/instrumentation , Cochlear Implants , Cues , Persons With Hearing Impairments/rehabilitation , Sound Localization , Acoustic Stimulation , Aged , Aged, 80 and over , Electric Stimulation , Female , Humans , Male , Middle Aged , Persons With Hearing Impairments/psychology , Prosthesis Design , Time Factors
14.
Otol Neurotol ; 35(9): 1525-32, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25158615

ABSTRACT

OBJECTIVES: To evaluate methods for measuring long-term benefits of cochlear implantation in a patient with single-sided deafness (SSD) with respect to spatial hearing and to document improved quality of life because of reduced tinnitus. PATIENT: A single adult male with profound right-sided sensorineural hearing loss and normal hearing in the left ear who underwent right-sided cochlear implantation. METHODS: The subject was evaluated at 6, 9, 12, and 18 months after implantation on speech intelligibility with specific target-masker configurations, sound localization accuracy, audiologic performance, and tinnitus handicap. Testing conditions involved the acoustic (NH) ear only, the cochlear implant (CI) ear (acoustic ear plugged), and the bilateral condition (CI+NH). Measures of spatial hearing included speech intelligibility improvement because of spatial release from masking (SRM) and sound localization. In addition, traditional measures known as "head shadow," "binaural squelch," and "binaural summation" were evaluated. RESULTS: The best indicator for improved speech intelligibility was SRM, in which both ears are activated, but the relative locations of target and masker(s) are manipulated. Measures that compare performance with a single ear to performance using bilateral auditory input indicated evidence of the ability to integrate inputs across the ears, possibly reflecting early binaural processing, with 12 months of bilateral input. Sound localization accuracy improved with addition of the implant, and a large improvement with respect to tinnitus handicap was observed. CONCLUSION: Cochlear implantation resulted in improved sound localization accuracy when compared with performance using only the NH ear, and reduced tinnitus handicap was observed with use of the implant. The use of SRM addresses some of the current limitations of traditional measures of spatial and binaural hearing, as spatial cues related to target and maskers are manipulated, rather than the ear(s) tested. Sound testing methods and calculations described here are therefore recommended for assessing performance of a larger sample size of individuals with SSD who receive a CI.
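
For context, the traditional measures named above reduce to differences between speech reception thresholds measured in specific ear and masker configurations. One common convention is sketched below with hypothetical SRTs (dB SNR); exact definitions vary across studies, which is part of the limitation the abstract notes.

def head_shadow(srt_noise_near_ear, srt_noise_far_ear):
    """Benefit of the head blocking the masker when the noise moves to the far side."""
    return srt_noise_near_ear - srt_noise_far_ear

def binaural_squelch(srt_one_ear, srt_both_ears_same_config):
    """Benefit of adding the second ear with target and masker spatially separated."""
    return srt_one_ear - srt_both_ears_same_config

def binaural_summation(srt_one_ear_colocated, srt_both_ears_colocated):
    """Benefit of adding the second ear with target and masker co-located in front."""
    return srt_one_ear_colocated - srt_both_ears_colocated

# Hypothetical values; positive results mean a benefit in dB.
print("head shadow:", head_shadow(+2.0, -4.0), "dB")
print("squelch:", binaural_squelch(-4.0, -5.5), "dB")
print("summation:", binaural_summation(-2.0, -3.0), "dB")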


Subject(s)
Audiology/methods , Hearing Loss, Sensorineural/surgery , Hearing Tests/methods , Tinnitus/surgery , Cochlear Implantation/methods , Cochlear Implants , Cues , Head Injuries, Closed/complications , Hearing Loss, Sensorineural/etiology , Humans , Male , Middle Aged , Quality of Life , Speech Intelligibility , Speech Perception , Time
15.
J Exp Biol ; 217(Pt 7): 1094-107, 2014 Apr 01.
Article in English | MEDLINE | ID: mdl-24671963

ABSTRACT

Physiological and anatomical studies have suggested that alligators have unique adaptations for spatial hearing. Sound localization cues are primarily generated by the filtering of sound waves by the head. Different vertebrate lineages have evolved external and/or internal anatomical adaptations to enhance these cues, such as pinnae and interaural canals. It has been hypothesized that in alligators, directionality may be enhanced via the acoustic coupling of middle ear cavities, resulting in a pressure difference receiver (PDR) mechanism. The experiments reported here support a role for a PDR mechanism in alligator sound localization by demonstrating that (1) acoustic space cues generated by the external morphology of the animal are not sufficient to generate location cues that match physiological sensitivity, (2) continuous pathways between the middle ears are present to provide an anatomical basis for coupling, (3) the auditory brainstem response shows some directionality, and (4) eardrum movement is directionally sensitive. Together, these data support the role of a PDR mechanism in crocodilians and further suggest this mechanism is a shared archosaur trait, most likely found also in the extinct dinosaurs.


Subject(s)
Alligators and Crocodiles/physiology , Ear, Middle/anatomy & histology , Sound Localization/physiology , Tympanic Membrane/anatomy & histology , Alligators and Crocodiles/anatomy & histology , Animals , Biophysical Phenomena , Cochlear Nerve/physiology , Evoked Potentials, Auditory, Brain Stem/physiology , Head/anatomy & histology , Sound
16.
Adv Exp Med Biol ; 787: 273-82, 2013.
Article in English | MEDLINE | ID: mdl-23716233

ABSTRACT

For over a century, the Duplex theory has posited that low- and high-frequency sounds are localized using two different acoustical cues, interaural time (ITDs) and level (ILDs) differences, respectively. Psychophysical data have generally supported the theory for pure tones. Anatomically, ITDs and ILDs are separately encoded in two parallel brainstem pathways. Acoustically, ILDs are a function of location and frequency such that lower and higher frequencies exhibit smaller and larger ILDs, respectively. It is well established that neurons throughout the auditory neuraxis encode high-frequency ILDs. Acoustically, low-frequency ILDs are negligible (∼1-2 dB); however, humans are still sensitive to them and physiological studies often report low-frequency ILD-sensitive neurons. These latter findings are at odds with the Duplex theory. We suggest that these discrepancies arise from an inadequate characterization of the acoustical environment. We hypothesize that low-frequency ILDs become large and useful when sources are located near the head. We tested this hypothesis by making measurements of the ILDs in chinchillas as a function of source distance and the sensitivity to ILDs in 103 neurons in the inferior colliculus (IC). The ILD sensitivity of IC neurons was found to be frequency independent even though far-field acoustical ILDs were frequency dependent. However, as source distance was decreased, the magnitudes of low-frequency ILDs increased. Using information theoretic methods, we demonstrate that a population of IC neurons can encode the full range of acoustic ILDs across frequency that would be experienced as a joint function of source location and distance.


Subject(s)
Auditory Perception/physiology , Cues , Inferior Colliculi/physiology , Models, Neurological , Sound Localization/physiology , Acoustic Stimulation/methods , Animals , Chinchilla , Inferior Colliculi/cytology , Neurons/physiology , Olivary Nucleus/physiology , Pitch Perception/physiology , Time Perception/physiology
17.
J Acoust Soc Am ; 130(1): EL38-43, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21786866

ABSTRACT

The development of sound-evoked responses in Chinchilla lanigera was studied from postnatal ages P0-1 (first 24 h) to adult. Cochlear microphonics (CMs) and compound action potentials (CAPs), representing ensemble sound-evoked activities of hair cells and auditory nerve fibers, respectively, were present as early as age P0-1. The data indicate that CM thresholds and sensitivities were generally adult-like (i.e., fall into adult ranges) at birth, but suprathreshold CM amplitudes remained below adult ranges through P28. CAP thresholds reached adult-like values between P7 and P14, but the suprathreshold CAP amplitude continued to increase until ∼P28. The results confirm the auditory precociousness of the chinchilla.


Subject(s)
Aging , Chinchilla/growth & development , Cochlea/growth & development , Cochlear Microphonic Potentials , Cochlear Nerve/growth & development , Evoked Potentials, Auditory , Acoustic Stimulation , Age Factors , Animals , Auditory Threshold
18.
Hear Res ; 272(1-2): 135-47, 2011 Feb.
Article in English | MEDLINE | ID: mdl-20971180

ABSTRACT

There are three main cues to sound location: the interaural differences in time (ITD) and level (ILD) as well as the monaural spectral shape cues. These cues are generated by the spatial- and frequency-dependent filtering of propagating sound waves by the head and external ears. Although the chinchilla has been used for decades to study the anatomy, physiology, and psychophysics of audition, including binaural and spatial hearing, little is actually known about the sound pressure transformations by the head and pinnae and the resulting sound localization cues available to them. Here, we measured the directional transfer functions (DTFs), the directional components of the head-related transfer functions, for 9 adult chinchillas. The resulting localization cues were computed from the DTFs. In the frontal hemisphere, spectral notch cues were present for frequencies from ∼6-18 kHz. In general, the frequency corresponding to the notch increased with increases in source elevation as well as in azimuth towards the ipsilateral ear. The ILDs demonstrated a strong correlation with source azimuth and frequency. The maximum ILDs were <10 dB for frequencies <5 kHz, and ranged from 10-30 dB for the frequencies >5 kHz. The maximum ITDs were dependent on frequency, yielding 236 µs at 4 kHz and 336 µs at 250 Hz. Removal of the pinnae eliminated the spectral notch cues, reduced the acoustic gain and the ILDs, altered the acoustic axis, and reduced the ITDs.
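
A minimal sketch of how ITD and ILD cues can be computed from a measured pair of directional impulse responses for one source direction: ILD as the broadband level difference, ITD as the lag of the interaural cross-correlation peak. Synthetic responses and the sampling rate stand in for the chinchilla DTF measurements described above.

import numpy as np
from scipy.signal import correlate, correlation_lags

FS = 97656  # Hz, assumed measurement sampling rate

def ild_db(ir_left, ir_right):
    """Broadband interaural level difference (dB) from paired impulse responses."""
    energy = lambda ir: np.sum(np.asarray(ir, dtype=float) ** 2)
    return 10 * np.log10(energy(ir_right) / energy(ir_left))

def itd_us(ir_left, ir_right, fs=FS):
    """ITD (microseconds); positive means the right-ear response is delayed."""
    xc = correlate(ir_right, ir_left, mode="full")
    lags = correlation_lags(len(ir_right), len(ir_left), mode="full")
    return 1e6 * lags[np.argmax(xc)] / fs

# Toy example: right-ear response delayed by 20 samples and attenuated by 6 dB.
rng = np.random.default_rng(2)
ir_l = rng.standard_normal(512)
ir_r = 0.5 * np.roll(ir_l, 20)
print(f"ITD ~ {itd_us(ir_l, ir_r):.0f} us, ILD ~ {ild_db(ir_l, ir_r):+.1f} dB")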


Subject(s)
Chinchilla/physiology , Cues , Ear/physiology , Head/physiology , Mechanotransduction, Cellular , Signal Detection, Psychological , Sound Localization , Acoustic Stimulation , Age Factors , Animals , Auditory Threshold , Ear/anatomy & histology , Head/anatomy & histology , Male , Pressure , Sound Spectrography , Time Factors
19.
J Assoc Res Otolaryngol ; 12(2): 127-40, 2011 Apr.
Article in English | MEDLINE | ID: mdl-20957506

ABSTRACT

Sounds are filtered in a spatial- and frequency-dependent manner by the head and pinna giving rise to the acoustical cues to sound source location. These spectral and temporal transformations are dependent on the physical dimensions of the head and pinna. Therefore, the magnitudes of binaural sound location cues-the interaural time (ITD) and level (ILD) differences-are hypothesized to systematically increase while the lower frequency limit of substantial ILD production is expected to decrease due to the increase in head and pinna size during development. The frequency ranges of the monaural spectral notch cues to source elevation are also expected to decrease. This hypothesis was tested here by measuring directional transfer functions (DTFs), the directional components of head-related transfer functions, and the linear dimensions of the head and pinnae for chinchillas from birth through adulthood. Dimensions of the head and pinna increased by factors of 1.8 and 2.42, respectively, reaching adult values by ~6 weeks. From the DTFs, the ITDs, ILDs, and spectral shape cues were computed. Maximum ITDs increased by a factor of 1.75, from ~160 µs at birth (P0-1, first postnatal day) to 280 µs in adults. ILDs depended on source location and frequency exhibiting a shift in the frequency range of substantial ILD (>10 dB) from higher to lower frequencies with increasing head and pinnae size. Similar trends were observed for the spectral notch frequencies which ranged from 14.7-33.4 kHz at P0-1 to 5.3-19.1 kHz in adults. The development of the spectral notch cues, the spatial- and frequency-dependent distributions of DTF amplitude gain, acoustic directionality, maximum gain, and the acoustic axis were systematically related to the dimensions of the head and pinnae. The dimension of the head and pinnae in the chinchilla as well as the acoustical properties associated with them are mature by ~6 weeks.
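
A hedged illustration of why the maximum ITD grows with head size: the textbook Woodworth spherical-head approximation, ITD(θ) ≈ (a/c)(θ + sin θ). The effective radii below are assumptions chosen only to show the scaling toward the reported newborn and adult maxima; the abstract's ITDs were measured acoustically, not modeled.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def woodworth_itd_us(head_radius_m, azimuth_deg):
    """Spherical-head ITD approximation in microseconds."""
    theta = np.radians(azimuth_deg)
    return 1e6 * (head_radius_m / SPEED_OF_SOUND) * (theta + np.sin(theta))

for label, radius in [("newborn (assumed effective radius ~2.1 cm)", 0.021),
                      ("adult (assumed effective radius ~3.7 cm)", 0.037)]:
    print(f"{label}: max ITD ~ {woodworth_itd_us(radius, 90):.0f} us")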


Subject(s)
Acoustics , Behavior, Animal/physiology , Chinchilla/physiology , Cues , Ear Auricle/growth & development , Head/growth & development , Sound Localization/physiology , Acoustic Stimulation , Aging/physiology , Animals , Animals, Newborn , Auditory Pathways/growth & development , Auditory Pathways/physiology , Chinchilla/growth & development , Humans , Models, Animal