Results 1 - 12 of 12
1.
J Acoust Soc Am ; 155(5): 3101-3117, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38722101

ABSTRACT

Cochlear implant (CI) users often report being dissatisfied with music listening through their hearing device. Vibrotactile stimulation could help alleviate those challenges. Previous research has shown that normal-hearing listeners gave musical stimuli higher preference ratings when concurrent vibrotactile stimulation was congruent with the corresponding auditory signal in intensity and timing than when it was incongruent. However, it is not known whether this also holds for CI users. Therefore, in this experiment, we presented 18 CI users and 24 normal-hearing listeners with five melodies and five different audio-to-tactile maps. Each map varied the congruence between the audio and tactile signals in intensity, fundamental frequency, and timing. Participants were asked to rate the maps from zero to 100 based on preference. Almost all normal-hearing listeners, as well as a subset of the CI users, preferred tactile stimulation that was congruent with the audio in intensity and timing. However, many CI users showed no difference in preference between timing-aligned and timing-unaligned stimuli. The results provide evidence that vibrotactile enhancement of music enjoyment could be a solution for some CI users; however, more research is needed to understand which CI users would benefit from it most.
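The study's audio-to-tactile maps are not published here; as a rough sketch of how an intensity- and timing-congruent map might be built, the Python below drives a fixed-frequency vibrotactile carrier with the low-pass-filtered amplitude envelope of the audio and derives a timing-incongruent control by shifting the tactile signal in time. The function names, the 250 Hz carrier, and the 20 Hz envelope cutoff are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def audio_to_tactile(audio, fs, carrier_hz=250.0, env_cutoff_hz=20.0):
    """Illustrative audio-to-tactile map: modulate a vibrotactile carrier with the
    smoothed amplitude envelope of the audio, keeping intensity and timing congruent."""
    envelope = np.abs(hilbert(audio))            # instantaneous amplitude of the audio
    b, a = butter(2, env_cutoff_hz / (fs / 2))   # low-pass filter to smooth the envelope
    envelope = filtfilt(b, a, envelope)
    t = np.arange(len(audio)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t) # carrier near peak vibrotactile sensitivity
    return envelope * carrier

def make_timing_incongruent(tactile, fs, shift_s=0.5):
    """Timing-incongruent control: circularly shift the tactile signal in time."""
    return np.roll(tactile, int(shift_s * fs))
```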


Subject(s)
Acoustic Stimulation , Auditory Perception , Cochlear Implants , Music , Humans , Female , Male , Adult , Middle Aged , Aged , Auditory Perception/physiology , Young Adult , Patient Preference , Cochlear Implantation/instrumentation , Touch Perception/physiology , Vibration , Touch
2.
Front Psychol ; 14: 1232262, 2023.
Article in English | MEDLINE | ID: mdl-38023001

ABSTRACT

Introduction: The perception of phonemes is guided by both low-level acoustic cues and high-level linguistic context. However, differentiating between these two types of processing can be challenging. In this study, we explore the utility of pupillometry as a tool to investigate both low- and high-level processing of phonological stimuli, with a particular focus on its ability to capture novelty detection and cognitive processing during speech perception. Methods: Pupillometric traces were recorded from a sample of 22 Danish-speaking adults with self-reported normal hearing while they performed two phonological-contrast perception tasks: a nonword discrimination task, which included minimal-pair combinations specific to the Danish language, and a nonword detection task involving the detection of phonologically modified words within sentences. The study explored the perception of contrasts in both unprocessed speech and speech degraded with a vocoder. Results: No difference in peak pupil dilation was observed when the contrast occurred between two isolated nonwords in the nonword discrimination task. For unprocessed speech, higher peak pupil dilations were measured when phonologically modified words were detected within a sentence than for sentences without nonwords. For vocoded speech, higher peak pupil dilation was observed for sentence stimuli, but not for the isolated nonwords, although performance decreased similarly for both tasks. Conclusion: Our findings demonstrate the complexity of pupil dynamics in the presence of acoustic and phonological manipulation. Pupil responses seemed to reflect higher-level cognitive and lexical processing related to phonological perception rather than low-level perception of acoustic cues. However, the incorporation of multiple talkers in the stimuli, coupled with the relatively low task complexity, may have affected the pupil responses.
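Peak pupil dilation is conventionally measured relative to a pre-stimulus baseline; a minimal sketch of that computation is shown below, with the baseline duration and analysis window chosen for illustration rather than taken from the study.

```python
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=1.0, window_s=(0.0, 3.0)):
    """Baseline-correct a pupillometric trace (stimulus onset at t = baseline_s)
    and return the peak dilation within the analysis window after onset."""
    onset = int(baseline_s * fs)
    baseline = np.nanmean(trace[:onset])         # mean pupil size before stimulus onset
    corrected = trace - baseline
    start = onset + int(window_s[0] * fs)
    stop = onset + int(window_s[1] * fs)
    return np.nanmax(corrected[start:stop])
```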

3.
J Am Acad Audiol ; 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37748726

ABSTRACT

BACKGROUND: Speech recognition in adult cochlear implant (CI) users is typically assessed using sentence materials with low talker variability. Little is known about the effects of talker variability on speech recognition in adult CI users, the factors underlying individual differences in speech recognition with high talker variability, or how sentence materials with high talker variability could be utilized clinically. PURPOSE: To examine the effects of talker variability on sentence recognition in adult CI users, using sentences from the Perceptually Robust English Sentence Test Open-Set (PRESTO), and to examine the relation between working memory capacity and high-variability speech recognition. RESEARCH DESIGN: Postlingually deafened adult CI users and normal-hearing (NH) listeners under CI simulation completed sentence recognition tests that contained varying levels of talker variability, including HINT (low-variability), AzBio (moderate-variability), and PRESTO sentences (high-variability). The tasks were completed in both quiet and multi-talker babble (MTB). For the adult CI users only, the relation between sentence recognition accuracy and working memory capacity was assessed. STUDY SAMPLE: Twenty postlingually deafened adult CI users and 35 NH adults under 8-channel acoustic noise-vocoder simulations of CI hearing. RESULTS: In both CI and NH groups, performance decreased as a function of increased talker variability, with the best scores obtained on HINT (low-variability), then AzBio (moderate-variability), followed by PRESTO (high-variability) in quiet. In MTB, performance was significantly lower on PRESTO sentences, compared to HINT and AzBio sentences, which were not significantly different. Working memory capacity in the CI users was related to sentence recognition accuracy across all materials and conditions. CONCLUSIONS: Findings from the current study suggest that the increased talker variability in the PRESTO sentence materials has a detrimental effect on speech recognition in both adult CI users and NH listeners under CI simulation, particularly when speech is further degraded by MTB. For adult CI users, working memory capacity contributes to speech recognition abilities. Sentence recognition testing with high-variability, multi-talker materials, as in PRESTO, provides robust assessment of speech recognition abilities for research and clinical application, generating a wide range of scores for evaluating individual differences without ceiling effects when compared to conventional low-variability sentences.
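The reported link between working memory capacity and high-variability sentence recognition is a correlational analysis; a minimal sketch of such an analysis is given below, with invented scores used purely as placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder data, not from the study: PRESTO keyword accuracy (% correct) in
# multi-talker babble and a working memory span score for a handful of CI users.
presto_scores = np.array([42.0, 55.3, 61.8, 38.5, 70.2, 49.9])
memory_span = np.array([5, 6, 7, 4, 8, 6])

r, p = pearsonr(memory_span, presto_scores)
print(f"Working memory vs. PRESTO recognition: r = {r:.2f}, p = {p:.3f}")
```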

4.
Clin Neurophysiol ; 148: 76-92, 2023 04.
Article in English | MEDLINE | ID: mdl-36822119

ABSTRACT

OBJECTIVE: Ninety percent of cochlear implant (CI) users are interested in improving their music perception. However, only a few objective behavioral and neurophysiological tests have been developed for tracing the development of music discrimination skills in CI users. In this study, we aimed to obtain an accurate individual mismatch negativity (MMN) marker that could predict behavioral auditory discrimination thresholds. METHODS: We measured the individual MMN response to four magnitudes of deviation in four different musical features (intensity, pitch, timbre, and rhythm) in a rare sample of experienced CI users and a control sample of normally hearing participants. We applied a recently developed spike density component analysis (SCA), which can suppress confounding alpha waves, and contrasted it with previously proposed methods. RESULTS: Statistically detected individual MMN predicted attentive sound discrimination ability with high accuracy: 89.2% for CI users (278/312 cases) and 90.5% for controls (384/424 cases). As expected, MMN was detected for fewer CI users when the sound deviants were of smaller magnitude. CONCLUSIONS: The findings support the use of MMN responses in individual CI users as a diagnostic tool for testing music perception. SIGNIFICANCE: For CI users, the new SCA method provided more accurate and replicable diagnostic detections than preceding state-of-the-art methods.
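The MMN itself is the deviant-minus-standard difference wave; the sketch below shows that basic computation on epoched EEG, deliberately omitting the spike density component analysis introduced in the study. The latency window and function name are illustrative.

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs, fs, window_s=(0.10, 0.25)):
    """Deviant-minus-standard difference wave (epochs: trials x samples, time-locked
    to sound onset) and its most negative amplitude in a typical MMN latency window.
    This sketch omits the SCA denoising applied in the study."""
    diff_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    start, stop = (int(t * fs) for t in window_s)
    return diff_wave, diff_wave[start:stop].min()
```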


Subject(s)
Cochlear Implantation , Cochlear Implants , Music , Humans , Auditory Perception/physiology , Hearing , Pitch Perception/physiology
5.
Trends Hear ; 27: 23312165221148035, 2023.
Article in English | MEDLINE | ID: mdl-36597692

ABSTRACT

Cochlear implants (CIs) are optimized for speech perception but poor at conveying musical sound features such as pitch, melody, and timbre. Here, we investigated the early development of discrimination of musical sound features after cochlear implantation. Nine recently implanted CI users (CIre) were tested shortly after switch-on (T1) and approximately 3 months later (T2), using a musical multifeature mismatch negativity (MMN) paradigm presenting four deviant features (intensity, pitch, timbre, and rhythm) and a three-alternative forced-choice behavioral test. For reference, groups of experienced CI users (CIex; n = 13) and normally hearing (NH) controls (n = 14) underwent the same tests once. We found significant improvement in CIre's neural discrimination of pitch and timbre, as marked by increased MMN amplitudes. This was not reflected in the behavioral results. Behaviorally, CIre scored well above chance level at both time points for all features except intensity, but significantly below NH controls for all features except rhythm. Both CI groups scored significantly below NH in behavioral pitch discrimination. No significant difference was found in MMN amplitude between CIex and NH. The results indicate that the development of musical discrimination can be detected neurophysiologically early after switch-on. However, to fully take advantage of the sparse information from the implant, a prolonged adaptation period may be required. Behavioral discrimination accuracy was notably high even shortly after implant switch-on, although well below that of NH listeners. This study provides new insight into the early development of music-discrimination abilities in CI users and may have clinical and therapeutic relevance.
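In a three-alternative forced-choice task, chance performance is one in three, so "scoring above chance" can be checked with a one-sided binomial test; the sketch below illustrates the idea with invented trial counts (scipy >= 1.7 assumed for binomtest).

```python
from scipy.stats import binomtest

n_trials = 36      # invented trial count, for illustration only
n_correct = 22
result = binomtest(n_correct, n_trials, p=1 / 3, alternative="greater")
print(f"proportion correct = {n_correct / n_trials:.2f}, "
      f"one-sided p-value vs. chance = {result.pvalue:.4f}")
```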


Subject(s)
Cochlear Implantation , Cochlear Implants , Music , Humans , Auditory Perception/physiology , Pitch Discrimination , Pitch Perception
6.
J Acoust Soc Am ; 152(6): 3396, 2022 12.
Article in English | MEDLINE | ID: mdl-36586853

ABSTRACT

Music listening experiences can be enhanced with tactile vibrations. However, it is not known which parameters of the tactile vibration must be congruent with the music to enhance it. Devices that aim to enhance music with tactile vibrations often require coding an acoustic signal into a congruent vibrotactile signal, so understanding which of these audio-tactile congruences matter is crucial. Participants were presented with a simple sine-wave melody through supra-aural headphones and a haptic actuator held between the thumb and forefinger. Incongruent versions of the stimuli were made by randomizing physical parameters of the tactile stimulus independently of the auditory stimulus. Participants were instructed to rate the stimuli against their incongruent counterparts based on preference. It was found that making the intensity of the tactile stimulus incongruent with the intensity of the auditory stimulus, as well as misaligning the two modalities in time, had the largest negative effect on ratings for the melody used. Future vibrotactile music enhancement devices can use time alignment and intensity congruence as a baseline coding strategy against which improved strategies can be tested.
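As a sketch of how such stimuli can be constructed, the code below synthesizes a simple sine-wave melody and derives an intensity-incongruent version by shuffling the per-note amplitudes; the note frequencies, amplitudes, and durations are arbitrary choices, not the melody used in the experiment.

```python
import numpy as np

fs = 44100
note_dur = 0.4                                    # seconds per note
notes = [(440.0, 0.2), (494.0, 0.8), (523.0, 0.5), (587.0, 1.0)]  # (frequency Hz, amplitude)

def render(note_list):
    """Render a sequence of (frequency, amplitude) notes as a sine-wave melody."""
    t = np.arange(int(note_dur * fs)) / fs
    return np.concatenate([amp * np.sin(2 * np.pi * f0 * t) for f0, amp in note_list])

rng = np.random.default_rng(0)
melody = render(notes)                            # intensity-congruent reference
shuffled_amps = rng.permutation([amp for _, amp in notes])
incongruent = render([(f0, a) for (f0, _), a in zip(notes, shuffled_amps)])
```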


Subject(s)
Music , Touch Perception , Humans , Touch , Auditory Perception , Vibration
7.
J Am Acad Audiol ; 31(2): 118-128, 2020 02.
Article in English | MEDLINE | ID: mdl-31287056

ABSTRACT

BACKGROUND: Research has shown that hearing aid acceptance is closely related to how well an individual tolerates background noise, regardless of improved speech understanding in background noise. The acceptable noise level (ANL) test was developed to quantify background noise acceptance. The ANL test measures a listener's willingness to listen to speech in noise rather than their ability to understand speech in noise, and it is clinically valuable as a predictor of hearing aid success. PURPOSE: Noise acceptance is thought to be mediated by central regions of the nervous system, but the underlying mechanism of noise acceptance is not well understood. Higher-order central efferent mechanisms may be weaker, and/or central afferent mechanisms more active, in listeners with large versus small ANLs. Noise acceptance, therefore, may not be limited to the auditory modality but observable across modalities. We designed a visual-ANL test, as a parallel to the auditory-ANL test, to examine the relations between auditory and visual noise acceptance. RESEARCH DESIGN: A correlational design. STUDY SAMPLE: Thirty-seven adults between the ages of 21 and 30 years with normal hearing participated in this study. DATA COLLECTION AND ANALYSIS: All participants completed the standard auditory-ANL task, the visual-ANL task developed for this study, reception thresholds for sentences using the Hearing in Noise Test, and visual sentence recognition in noise using the Text Reception Threshold test. Correlational analyses were performed to evaluate the relations between and among the ANL and perception tasks. RESULTS: Auditory and visual ANLs were correlated; those who accepted more auditory noise were also those who accepted more visual noise. Auditory and visual perceptual measures were also correlated, demonstrating that both measures reflect common processes underlying the ability to recognize speech in noise. Finally, as expected, noise acceptance levels were unrelated to perception in noise across modalities. CONCLUSIONS: The results of this study support our hypothesis that noise acceptance may not be unique to the auditory modality; specifically, the common variance shared between the two ANL tasks may reflect a general perceptual or cognitive mechanism that is not specific to the auditory or visual domains. These findings also indicate that noise acceptance and speech recognition reflect different aspects of auditory and visual perception. Future work will relate these ANL measures to central tasks of inhibition and include hearing-impaired individuals to explore the mechanisms underlying noise acceptance.
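The ANL itself is conventionally computed as the most comfortable listening level minus the highest acceptable background noise level, and the study's main analysis is a correlation between the auditory and visual versions of that measure. The sketch below mirrors that logic with invented per-listener values.

```python
import numpy as np
from scipy.stats import pearsonr

def acceptable_noise_level(mcl, bnl):
    """ANL = most comfortable listening level minus maximum acceptable background
    noise level; smaller values indicate greater noise acceptance."""
    return mcl - bnl

# Invented values for illustration (dB for the auditory task; the visual task
# would be expressed in whatever units its noise dimension uses).
auditory_anl = acceptable_noise_level(np.array([62.0, 58.0, 65.0, 60.0]),
                                      np.array([55.0, 54.0, 56.0, 51.0]))
visual_anl = np.array([4.0, 3.2, 6.5, 7.1])
r, p = pearsonr(auditory_anl, visual_anl)
print(f"Auditory vs. visual ANL: r = {r:.2f}")
```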


Subject(s)
Auditory Perception/physiology , Noise , Visual Perception/physiology , Adult , Auditory Threshold , Female , Humans , Male , Perceptual Masking , Signal-To-Noise Ratio , Speech Perception , Young Adult
8.
Ear Hear ; 38(3): 344-356, 2017.
Article in English | MEDLINE | ID: mdl-28045787

ABSTRACT

OBJECTIVES: Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings of an earlier study that investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. DESIGN: Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary Test-4th Edition and Expressive Vocabulary Test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). RESULTS: Consistent with the findings of the original study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary Test-4th Edition using language quotients to control for age effects. However, children who scored higher on the Expressive Vocabulary Test-2nd Edition recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of AA and short-term memory capacity were significantly correlated with a child's ability to perceive noise-vocoded isolated words and sentences. CONCLUSIONS: First, we successfully replicated the major findings of the original study. Because familiarity, phonological distinctiveness, and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children's ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes as well as additional contributions of executive function, specifically selective attention and short-term memory processes, in spoken word recognition. The present findings suggest that AA and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals.
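Noise vocoding is a standard manipulation: the speech is split into a few frequency bands, each band's temporal envelope is extracted, and the envelopes modulate band-limited noise carriers. A minimal four-channel sketch follows; the corner frequencies, filter orders, and envelope method are illustrative choices, not the exact parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=4, f_lo=100.0, f_hi=7000.0):
    """Minimal noise vocoder: log-spaced analysis bands, Hilbert envelopes,
    envelope-modulated noise carriers, summed and rescaled. Assumes fs > 2 * f_hi."""
    speech = np.asarray(speech, dtype=float)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))                       # band temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        out += envelope * carrier                              # envelope-modulated noise band
    return out / np.max(np.abs(out)) * np.max(np.abs(speech))  # match original peak level
```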


Subject(s)
Hearing/physiology , Speech Perception/physiology , Vocabulary , Adolescent , Analysis of Variance , Child , Child, Preschool , Female , Humans , Language Tests , Male , Recognition, Psychology
9.
J Am Acad Audiol ; 26(6): 582-94, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26134725

ABSTRACT

BACKGROUND: There is a pressing clinical need for the development of ecologically valid and robust assessment measures of speech recognition. Perceptually Robust English Sentence Test Open-set (PRESTO) is a new high-variability sentence recognition test that is sensitive to individual differences and was designed for use with several different clinical populations. PRESTO differs from other sentence recognition tests because the target sentences differ in talker, gender, and regional dialect. Increasing interest in using PRESTO as a clinical test of spoken word recognition dictates the need to establish equivalence across test lists. PURPOSE: The purpose of this study was to establish list equivalency of PRESTO for clinical use. RESEARCH DESIGN: PRESTO sentence lists were presented to three groups of normal-hearing listeners in noise (multitalker babble [MTB] at 0 dB signal-to-noise ratio) or under eight-channel cochlear implant simulation (CI-Sim). STUDY SAMPLE: Ninety-one young native speakers of English who were undergraduate students from the Indiana University community participated in this study. DATA COLLECTION AND ANALYSIS: Participants completed a sentence recognition task using different PRESTO sentence lists. They listened to sentences presented over headphones and typed in the words they heard on a computer. Keyword scoring was completed offline. Equivalency for sentence lists was determined based on the list intelligibility (mean keyword accuracy for each list compared with all other lists) and listener consistency (the relation between mean keyword accuracy on each list for each listener). RESULTS: Based on measures of list equivalency and listener consistency, ten PRESTO lists were found to be equivalent in the MTB condition, nine lists were equivalent in the CI-Sim condition, and six PRESTO lists were equivalent in both conditions. CONCLUSIONS: PRESTO is a valuable addition to the clinical toolbox for assessing sentence recognition across different populations. Because the test condition influenced the overall intelligibility of lists, researchers and clinicians should take the presentation conditions into consideration when selecting the best PRESTO lists for their research or clinical protocols.
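Keyword scoring and list intelligibility reduce to simple proportions; as an illustration (this is not the study's scoring code, and real keyword scoring rules handle morphological variants and homophones), per-sentence keyword accuracy can be computed as below.

```python
def keyword_accuracy(typed_response, keywords):
    """Proportion of a sentence's keywords that appear in the typed response
    (naive case-insensitive exact matching, for illustration only)."""
    typed_words = set(typed_response.lower().split())
    return sum(kw.lower() in typed_words for kw in keywords) / len(keywords)

print(keyword_accuracy("the boy ran to the store", ["boy", "walked", "store"]))  # 0.666...
```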


Subject(s)
Cochlear Implants , Hearing Loss/physiopathology , Speech Discrimination Tests , Speech Perception/physiology , Adolescent , Adult , Cochlear Implantation , Female , Hearing Loss/etiology , Hearing Loss/therapy , Humans , Male , Reproducibility of Results , Signal-To-Noise Ratio , Young Adult
10.
Ear Hear ; 32(4): 436-44, 2011.
Article in English | MEDLINE | ID: mdl-21178633

ABSTRACT

OBJECTIVES: The goal of this study was to compare cochlear implant behavioral measures and electrically evoked auditory brain stem responses (EABRs) obtained with a spatially focused electrode configuration. It has been shown previously that channels with high thresholds, when measured with the tripolar configuration, exhibit relatively broad psychophysical tuning curves. The elevated threshold and degraded spatial/spectral selectivity of such channels are consistent with a poor electrode-neuron interface, defined as suboptimal electrode placement or reduced nerve survival. However, the psychophysical methods required to obtain these data are time intensive and may not be practical during a clinical mapping session, especially for young children. Here, we have extended the previous investigation to determine whether a physiological approach could provide a similar assessment of channel functionality. We hypothesized that, in accordance with the perceptual measures, higher EABR thresholds would correlate with steeper EABR amplitude growth functions, reflecting a degraded electrode-neuron interface. DESIGN: Data were collected from six cochlear implant listeners implanted with the HiRes 90k cochlear implant (Advanced Bionics). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the partial tripolar configuration, for which a fraction of current (σ) from a center active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. EABRs were obtained in each subject for the two channels having the highest and lowest tripolar (σ = 1 or 0.9) behavioral threshold. Evoked potentials were measured with both the monopolar (σ = 0) and a more focused partial tripolar (σ ≥ 0.50) configuration. RESULTS: Consistent with previous studies, EABR thresholds were highly and positively correlated with behavioral thresholds obtained with both the monopolar and partial tripolar configurations. The Wave V amplitude growth functions with increasing stimulus level showed the predicted effect of shallower growth for the partial tripolar than for the monopolar configuration, but this was observed only for the low-threshold channels. In contrast, high-threshold channels showed the opposite effect; steeper growth functions were seen for the partial tripolar configuration. CONCLUSIONS: These results suggest that behavioral thresholds or EABRs measured with a restricted stimulus can be used to identify potentially impaired cochlear implant channels. Channels having high thresholds and steep growth functions would likely not activate the appropriate spatially restricted region of the cochlea, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception.
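The partial tripolar configuration described above splits the return current: a fraction σ flows back through the two flanking electrodes (half each) and the remaining 1 − σ through the distant indifferent electrode. The sketch below simply writes out that bookkeeping; it is not manufacturer fitting code.

```python
def partial_tripolar_currents(i_active, sigma):
    """Current split for a partial tripolar stimulus: fraction sigma returns through
    the two neighboring electrodes (half each), the rest through a distant
    indifferent electrode. Signs indicate direction relative to the active contact."""
    assert 0.0 <= sigma <= 1.0
    flank_each = -sigma * i_active / 2.0
    distant_return = -(1.0 - sigma) * i_active
    return i_active, flank_each, distant_return

print(partial_tripolar_currents(1.0, 0.9))   # sigma = 0.9 -> roughly (1.0, -0.45, -0.1)
```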


Subject(s)
Auditory Pathways/physiology , Cochlear Implantation/methods , Cochlear Implants , Deafness/therapy , Electrodes, Implanted , Evoked Potentials, Auditory, Brain Stem/physiology , Acoustic Stimulation , Adult , Aged , Artifacts , Auditory Pathways/cytology , Auditory Threshold/physiology , Brain Mapping , Deafness/physiopathology , Female , Humans , Male , Middle Aged , Neurons/physiology
11.
Dev Neurobiol ; 70(6): 436-55, 2010 May.
Article in English | MEDLINE | ID: mdl-20155736

ABSTRACT

This study examined the morphological development of the otolith vestibular receptors in quail. Here, we describe epithelial growth, hair cell density, stereocilia polarization, and afferent nerve innervation during development. The otolith maculae epithelial areas increased exponentially throughout embryonic development, reaching asymptotic values near posthatch day P7. Increases in hair cell density were dependent upon macular location; striolar hair cells developed first, followed by hair cells in extrastriolar regions. Stereocilia polarization was initiated early, with defining reversal zones forming at E8. Fewer than half of all immature hair cells observed had nonpolarized internal kinocilia, with the remainder exhibiting planar polarity. Immunohistochemistry and neural tracing techniques were employed to examine the shape and location of the striolar regions. Initial innervation of the maculae was by small fibers with terminal growth cones at E6, followed by collateral branches with apparent bouton terminals at E8. Calyceal terminal formation began at E10; however, no mature calyces were observed until E12, when all fibers appeared to be dimorphs. Calyx afferents innervating only Type I hair cells did not develop until E14. Finally, the topographic organization of afferent macular innervation in the adult quail utricle was quantified. Calyx and dimorph afferents were primarily confined to the striolar regions, while bouton fibers were located in the extrastriola and Type II band. Calyx fibers were the least complex, followed by dimorph units. Bouton fibers had large innervation fields, with arborous branches and many terminal boutons.
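The "exponential increase toward an asymptote" in epithelial area can be described by a saturating-exponential fit; the sketch below shows such a fit with scipy, using invented ages and areas purely to illustrate the curve form, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_growth(t, a_max, tau, t0):
    """Area approaching an asymptote a_max with time constant tau after onset t0."""
    return a_max * (1.0 - np.exp(-(t - t0) / tau))

# Invented ages (days from the start of incubation) and macular areas (mm^2).
age = np.array([6, 8, 10, 12, 14, 16, 18, 21, 25], dtype=float)
area = np.array([0.16, 0.27, 0.35, 0.41, 0.45, 0.48, 0.51, 0.53, 0.54])
params, _ = curve_fit(saturating_growth, age, area, p0=[0.6, 6.0, 4.0])
print(dict(zip(["a_max", "tau", "t0"], np.round(params, 3))))
```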


Subject(s)
Coturnix/physiology , Hair Cells, Vestibular/physiology , Otolithic Membrane/innervation , Otolithic Membrane/physiology , Afferent Pathways/growth & development , Afferent Pathways/physiology , Aging , Animals , Apoptosis , Cell Count , Cell Polarity , Cilia/physiology , Coturnix/growth & development , Epithelium/growth & development , Epithelium/physiology , Growth Cones/physiology , Presynaptic Terminals/physiology , Saccule and Utricle/growth & development , Saccule and Utricle/innervation , Saccule and Utricle/physiology , Vestibular Nerve/growth & development , Vestibular Nerve/physiology
12.
Ear Hear ; 31(2): 247-58, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20090533

ABSTRACT

OBJECTIVE: The goal of this study was to evaluate the ability of a threshold measure, made with a restricted electrode configuration, to identify channels exhibiting relatively poor spatial selectivity. With a restricted electrode configuration, channel-to-channel variability in threshold may reflect variations in the interface between the electrodes and auditory neurons (i.e., nerve survival, electrode placement, and tissue impedance). These variations in the electrode-neuron interface should also be reflected in psychophysical tuning curve (PTC) measurements. Specifically, it is hypothesized that high single-channel thresholds obtained with the spatially focused partial tripolar (pTP) electrode configuration are predictive of wide or tip-shifted PTCs. DESIGN: Data were collected from five cochlear implant listeners implanted with the HiRes90k cochlear implant (Advanced Bionics Corp., Sylmar, CA). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the pTP configuration for which a fraction of current (σ) from a center-active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. Forward-masked PTCs were obtained for channels with the highest, lowest, and median tripolar (σ = 1 or 0.9) thresholds. The probe channel and level were fixed and presented with either the monopolar (σ = 0) or a more focused pTP (σ ≥ 0.55) configuration. The masker channel and level were varied, whereas the configuration was fixed to σ = 0.5. A standard, three-interval, two-alternative forced choice procedure was used for thresholds and masked levels. RESULTS: Single-channel threshold and variability in threshold across channels systematically increased as the compensating current, σ, increased and the presumed electrical field became more focused. Across subjects, channels with the highest single-channel thresholds, when measured with a narrow, pTP stimulus, had significantly broader PTCs than the lowest threshold channels. In two subjects, the tips of the tuning curves were shifted away from the probe channel. Tuning curves were also wider for the monopolar probes than with pTP probes for both the highest and lowest threshold channels. CONCLUSIONS: These results suggest that single-channel thresholds measured with a restricted stimulus can be used to identify cochlear implant channels with poor spatial selectivity. Channels having wide or tip-shifted tuning characteristics would likely not deliver the appropriate spectral information to the intended auditory neurons, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception.
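Tuning-curve width can be summarized in several ways; one simple illustrative metric (not necessarily the analysis used in the paper) is the span of masker channels whose masker-at-threshold level stays within a fixed amount of the tip, as sketched below with invented values.

```python
import numpy as np

def ptc_width(masker_channels, masker_levels_db, rise_db=3.0):
    """Width of a forward-masked psychophysical tuning curve: the span of masker
    channels whose level at masked threshold lies within rise_db of the tip
    (the minimum). Broader spans indicate poorer spatial selectivity."""
    levels = np.asarray(masker_levels_db, dtype=float)
    channels = np.asarray(masker_channels)
    near_tip = channels[levels <= levels.min() + rise_db]
    return int(near_tip.max() - near_tip.min()), int(channels[levels.argmin()])

# Invented example: masker level (dB) needed to just mask a fixed probe on channel 8
channels = [5, 6, 7, 8, 9, 10, 11]
levels = [58, 52, 47, 45, 49, 55, 60]
width, tip = ptc_width(channels, levels)
print(f"width = {width} channels around a tip at channel {tip}")
```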


Subject(s)
Auditory Threshold , Cochlear Implantation , Cochlear Implants/adverse effects , Hearing Loss/pathology , Hearing Loss/therapy , Psychoacoustics , Acoustic Stimulation , Adult , Aged , Brain Mapping , Electric Stimulation , Electrodes, Implanted , Female , Humans , Male , Middle Aged , Music , Perceptual Masking , Speech Perception , Spiral Ganglion/pathology