Results 1 - 20 of 44
1.
Brain Commun ; 6(3): fcae175, 2024.
Article in English | MEDLINE | ID: mdl-38846536

ABSTRACT

Over the first years of life, the brain undergoes substantial organization in response to environmental stimulation. In a silent world, it may promote vision by (i) recruiting resources from the auditory cortex and (ii) making the visual cortex more efficient. It is unclear when such changes occur and how adaptive they are, questions that children with cochlear implants can help address. Here, we examined children aged 7-18 years: 50 had cochlear implants, with delayed or age-appropriate language abilities, and 25 had typical hearing and language. High-density electroencephalography and functional near-infrared spectroscopy were used to evaluate cortical responses to a low-level visual task. Evidence for a 'weaker visual cortex response' and 'less synchronized or less inhibitory activity of auditory association areas' in the implanted children with language delays suggests that cross-modal reorganization can be maladaptive and does not necessarily strengthen the dominant visual sense.

2.
Percept Mot Skills ; 131(1): 74-105, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37977135

ABSTRACT

Auditory-motor and visual-motor networks are often coupled in daily activities, such as when listening to music and dancing, but these networks are known to be highly malleable as a function of sensory input. Thus, congenital deafness may modify neural activities within the connections between the motor, auditory, and visual cortices. Here, we investigated whether the cortical responses of children with cochlear implants (CI) to a simple and repetitive motor task would differ from those of children with typical hearing (TH), and we sought to understand whether this response related to their language development. Participants were 75 school-aged children, including 50 with CI (with varying language abilities) and 25 controls with TH. We used functional near-infrared spectroscopy (fNIRS) to record cortical responses over the whole brain, as children squeezed the back triggers of a joystick that did or did not vibrate with the squeeze. Motor cortex activity was reflected by an increase in oxygenated hemoglobin concentration (HbO) and a decrease in deoxygenated hemoglobin concentration (HbR) in all children, irrespective of their hearing status. Unexpectedly, the visual cortex (supposedly an irrelevant region) was deactivated in this task, particularly for children with CI who had good language skills when compared to those with CI who had language delays. The presence or absence of vibrotactile feedback made no difference in cortical activation. These findings support the potential of fNIRS to examine cognitive functions related to language in children with CI.
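For readers unfamiliar with fNIRS quantities: the HbO and HbR changes mentioned above are typically derived from measured optical-density changes at two wavelengths via the modified Beer-Lambert law. A minimal sketch of that conversion follows; the wavelengths, extinction coefficients, source-detector distance, and differential pathlength factor are illustrative assumptions, not values from this study.

```python
import numpy as np

# Illustrative extinction coefficients [1/(mM*cm)] for [HbO, HbR] at two
# typical fNIRS wavelengths (rounded values, for demonstration only).
EXT = np.array([[0.69, 3.84],   # ~760 nm
                [1.16, 0.78]])  # ~850 nm

def hemoglobin_changes(delta_od, distance_cm=3.0, dpf=6.0):
    """Invert the modified Beer-Lambert law for one fNIRS channel.

    delta_od : optical-density changes at the two wavelengths, shape (2,)
    Returns (dHbO, dHbR), concentration changes in mM.
    """
    effective_path = distance_cm * dpf  # differential pathlength correction
    dhb = np.linalg.solve(EXT, np.asarray(delta_od) / effective_path)
    return float(dhb[0]), float(dhb[1])
```

A task-evoked response like the one described (HbO increase, HbR decrease) would appear here as a positive dHbO paired with a negative dHbR.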


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Child , Humans , Near-Infrared Spectroscopy/methods , Cochlear Implantation/methods , Deafness/surgery , Hemoglobins
3.
Brain Res Bull ; 205: 110817, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37989460

ABSTRACT

Sensory deprivation can offset the balance of audio versus visual information in multimodal processing. Such a phenomenon could persist for children born deaf, even after they receive cochlear implants (CIs), and could potentially explain why one modality is given priority over the other. Here, we recorded cortical responses to a single speaker uttering two syllables, presented in audio-only (A), visual-only (V), and audio-visual (AV) modes. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were successively recorded in seventy-five school-aged children. Twenty-five were children with normal hearing (NH) and fifty wore CIs, among whom 26 had relatively high language abilities (HL) comparable to those of NH children, while 24 others had low language abilities (LL). In EEG data, visual-evoked potentials were captured in occipital regions in response to V and AV stimuli, and they were accentuated in the HL group compared to the LL group (the NH group being intermediate). Close to the vertex, auditory-evoked potentials were captured in response to A and AV stimuli and reflected a differential treatment of the two syllables, but only in the NH group. None of the EEG metrics revealed any interaction between group and modality. In fNIRS data, each modality induced a corresponding activity in visual or auditory regions, but no group difference was observed in A, V, or AV stimulation. The present study did not reveal any sign of abnormal AV integration in children with CI. An efficient multimodal integrative network (at least for rudimentary speech materials) is clearly not a sufficient condition for good language and literacy.


Subjects
Cochlear Implants , Deafness , Speech Perception , Child , Humans , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Electroencephalography
4.
Front Neurosci ; 17: 1141886, 2023.
Article in English | MEDLINE | ID: mdl-37409105

ABSTRACT

Background: Cochlear implantation (CI) in prelingually deafened children has been shown to be an effective intervention for developing language and reading skills. However, a substantial proportion of children receiving CIs struggle with language and reading. The current study, one of the first to implement electrical source imaging in a CI population, was designed to identify the neural underpinnings in two groups of CI children with good and poor language and reading skills. Methods: High-density electroencephalography (EEG) data were obtained under a resting-state condition from 75 children: 50 with CIs having good (HL) or poor (LL) language skills and 25 normal-hearing (NH) children. We identified coherent sources using dynamic imaging of coherent sources (DICS) and estimated their effective connectivity with time-frequency causality estimation based on temporal partial directed coherence (TPDC) in the two CI groups compared with a cohort of age- and gender-matched NH children. Findings: Sources with higher coherence amplitude were observed in three frequency bands (alpha, beta, and gamma) for the CI groups when compared with the NH children. The two groups of CI children with good (HL) and poor (LL) language ability exhibited not only different cortical and subcortical source profiles but also distinct effective connectivity between them. Additionally, a support vector machine (SVM) algorithm using these sources and their connectivity patterns for each CI group across the three frequency bands was able to predict the language and reading scores with high accuracy. Interpretation: Increased coherence in the CI groups suggests that oscillatory activity in some brain areas becomes more strongly coupled than in the NH group. Moreover, the different sources, their connectivity patterns, and their association with language and reading skills in both groups suggest a compensatory adaptation that either facilitated or impeded language and reading development. The neural differences between the two groups of CI children may reflect potential biomarkers for predicting outcome success in children with CIs.

5.
Clin Neurophysiol ; 149: 133-145, 2023 05.
Article in English | MEDLINE | ID: mdl-36965466

ABSTRACT

OBJECTIVE: Although children with cochlear implants (CI) achieve remarkable success with their device, considerable variability remains in individual outcomes. Here, we explored whether auditory evoked potentials recorded during an oddball paradigm could provide useful markers of auditory processing in this pediatric population. METHODS: High-density electroencephalography (EEG) was recorded in 75 children listening to standard and odd noise stimuli: 25 had normal hearing (NH) and 50 wore a CI, divided between high language (HL) and low language (LL) abilities. Three metrics were extracted: the first negative and second positive components of the standard waveform (N1-P2 complex) close to the vertex, the mismatch negativity (MMN) around Fz, and the late positive component (P3) around Pz of the difference waveform. RESULTS: While children with CIs generally exhibited a well-formed N1-P2 complex, those with language delays typically lacked reliable MMN and P3 components. However, many children with CIs and age-appropriate skills showed MMN and P3 responses similar to those of NH children. Moreover, a larger and earlier P3 (but not MMN) was linked to better literacy skills. CONCLUSIONS: Auditory evoked responses differentiated children with CIs based on their good or poor skills with language and literacy. SIGNIFICANCE: This short paradigm could eventually serve as a clinical tool for tracking the developmental outcomes of implanted children.
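The MMN and P3 metrics above are taken from a difference waveform (deviant response minus standard response). A schematic of that computation for a single electrode; the array shapes and the 100-250 ms search window are assumptions for illustration, not the study's exact parameters.

```python
import numpy as np

def mismatch_negativity(standard_trials, deviant_trials, times,
                        window=(0.1, 0.25)):
    """Compute a difference waveform and locate the MMN peak.

    standard_trials, deviant_trials : arrays of shape (n_trials, n_samples)
    times : sample times in seconds, shape (n_samples,)
    Returns (difference_wave, mmn_amplitude, mmn_latency).
    """
    # Average across trials, then subtract: deviant minus standard.
    difference = deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)
    # Search the assumed MMN latency window for the most negative point.
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(difference[mask])  # MMN is a negative deflection
    return difference, float(difference[mask][idx]), float(times[mask][idx])
```

The P3 would be located the same way, but as a positive maximum in a later window.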


Subjects
Cochlear Implantation , Cochlear Implants , Child , Humans , Acoustic Stimulation , Auditory Evoked Potentials/physiology , Auditory Perception/physiology , Electroencephalography
6.
J Am Acad Audiol ; 33(3): 142-148, 2022 03.
Article in English | MEDLINE | ID: mdl-36216041

ABSTRACT

PURPOSE: Cochlear implant (CI) recipients often experience speech recognition difficulty in noise in small group settings with multiple talkers. In traditional remote microphone systems, one talker wears a remote microphone that wirelessly delivers speech to the CI processor. Such a system cannot transmit signals from multiple talkers in a small group. However, remote microphone systems with multiple microphones that allow adaptive beamforming may be beneficial for small group situations with multiple talkers. Specifically, a remote microphone with an adaptive multiple-microphone beamformer may be placed in the center of the small group, and the beam (i.e., polar lobe) may be automatically steered toward the direction associated with the most favorable speech-to-noise ratio. The signal from the remote microphone can then be wirelessly delivered to the CI sound processor. Alternatively, each talker in a small group may use a remote microphone that is part of a multi-talker network that wirelessly delivers the remote microphone signal to the CI sound processor. The purpose of this study was to compare the potential benefit of an adaptive multiple-microphone beamformer remote microphone system and a multi-talker network remote microphone system. METHOD: Twenty recipients, ages 12 to 84 years, with Advanced Bionics CIs completed sentence-recognition-in-noise tasks while seated at a desk surrounded by three loudspeakers at 0, 90, and 270 degrees. The target speech was presented randomly from these loudspeakers while competing noise was presented from four loudspeakers located in the corners of the room. Testing was completed in three conditions: 1) CI alone, 2) remote microphone system with an adaptive multiple-microphone beamformer, and 3) multi-talker network remote microphone system, each with five different signal levels (15 total conditions). RESULTS: Significant differences were found across all signal levels and technology conditions. Relative to the CI alone, sentence recognition improvements ranged from 14-23 percentage points with the adaptive multiple-microphone beamformer and 27-47 percentage points with the multi-talker network, with superior performance for the latter remote microphone system. CONCLUSIONS: Both remote microphone systems significantly improved speech recognition in noise of CI recipients when listening in small group settings, but the multi-talker network provided superior performance.
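The steering rule described for the adaptive beamformer (point the polar lobe toward the direction with the most favorable speech-to-noise ratio) can be sketched as a simple per-frame decision. This is a toy illustration, not the manufacturer's implementation; the function name, input shapes, and noise-floor estimate are hypothetical.

```python
import numpy as np

def steer_to_best_talker(beam_frames, noise_floor):
    """Pick the look direction with the most favorable speech-to-noise ratio.

    beam_frames : shape (n_directions, n_samples), one beamformed output per
                  candidate look direction for the current audio frame
    noise_floor : per-direction noise power estimate, shape (n_directions,)
    Returns the index of the direction to steer the polar lobe toward.
    """
    signal_power = np.mean(beam_frames ** 2, axis=1)   # power per direction
    snr_db = 10 * np.log10(signal_power / noise_floor)  # SNR per direction
    return int(np.argmax(snr_db))
```

In a real device this decision would be smoothed over time to avoid the lobe jumping between talkers on brief bursts of noise.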


Subjects
Cochlear Implantation , Cochlear Implants , Speech Perception , Adolescent , Adult , Aged , Aged, 80 and over , Child , Humans , Middle Aged , Noise , Prosthesis Design , Young Adult
7.
J Commun Disord ; 99: 106252, 2022.
Article in English | MEDLINE | ID: mdl-36007485

ABSTRACT

INTRODUCTION: Auditory challenges are both common and disruptive for autistic children, and evidence suggests that listening difficulties may be linked to academic underachievement (Ashburner, Ziviani & Rodger, 2008). Such deficits may also contribute to issues with attention, behavior, and communication (Ashburner et al., 2008; Riccio, Cohen, Garrison & Smith, 2005). The present study aims to summarize the auditory challenges of autistic children with normal pure-tone hearing thresholds and perceived listening difficulties, seen at auditory-ASD clinics in the US and Australia. METHODS: Data were compiled from a comprehensive, auditory-focused test battery in a large clinical sample of school-age autistic children with normal pure-tone hearing (N = 71, 6-14 years). Measures included a parent-reported auditory sensory processing questionnaire and tests of speech recognition in noise, binaural integration, attention, auditory memory, and listening comprehension. Individual test performance was compared to normative data from children with no listening difficulties. RESULTS: Over 40% of patients exhibited significantly reduced speech recognition in noise and abnormal dichotic integration that were not attributed to deficits in attention. The majority of patients (86%) performed abnormally on at least one auditory measure, suggesting that functional auditory issues can exist in autistic patients despite normal pure-tone sensitivity. CONCLUSION: Including functional listening measures during audiological evaluations may improve clinicians' ability to detect and manage the auditory challenges impacting this population. Learner Outcomes: 1) Readers will be able to describe the auditory difficulties experienced by some autistic patients (ASD). 2) Readers will be able to describe clinical measures potentially useful for detecting listening difficulties in high-functioning autistic children.


Subjects
Autistic Disorder , Speech Perception , Attention , Auditory Perception , Child , Hearing Tests , Humans , Noise
8.
J Am Acad Audiol ; 33(2): 66-74, 2022 02.
Article in English | MEDLINE | ID: mdl-35512843

ABSTRACT

BACKGROUND: Children with hearing loss frequently experience difficulty understanding speech in the presence of noise. Although remote microphone systems are likely to be the most effective solution to improve speech recognition in noise, this study centers on the evaluation of hearing aid noise management technologies, including directional microphones, adaptive noise reduction (ANR), and frequency-gain shaping. These technologies can improve children's speech recognition, listening comfort, and/or sound quality in noise. However, the individual contributions of these technologies, as well as the effect of hearing aid microphone mode on localization abilities in children, are unknown. PURPOSE: The objectives of this study were to (1) compare children's speech recognition and subjective perceptions across five hearing aid noise management technology conditions and (2) compare localization abilities across three hearing aid microphone modes. RESEARCH DESIGN: A single-group, repeated measures design was used to evaluate performance differences and subjective ratings. STUDY SAMPLE: Fourteen children with mild to moderately severe hearing loss. DATA COLLECTION AND ANALYSIS: Children's sentence recognition, listening comfort, sound quality, and localization were assessed in a room with an eight-loudspeaker array. RESULTS AND CONCLUSION: The use of adaptive directional microphone technology improves children's speech recognition in noise when the signal of interest arrives from the front and is spatially separated from the competing noise. In contrast, the use of adaptive directional microphone technology may result in a decrease in speech recognition in noise when the signal of interest arrives from behind. The use of a microphone mode that mimics the natural directivity of the unaided auricle provides a slight improvement in speech recognition in noise compared with omnidirectional use, with limited decrement when the signal of interest arrives from behind. The use of ANR and frequency-gain shaping provides no change in children's speech recognition in noise. The use of adaptive directional microphone technology, ANR, and frequency-gain shaping improves children's listening comfort, perceived ability to understand speech in noise, and overall listening experience. Children prefer to use each of these noise management technologies regardless of whether the signal of interest arrives from the front or from behind. The use of adaptive directional microphone technology does not result in a decrease in children's localization abilities when compared with the omnidirectional condition. The best localization performance occurred with use of the microphone mode that mimicked the directivity of the unaided auricle.


Subjects
Hearing Aids , Sensorineural Hearing Loss , Hearing Loss , Speech Perception , Child , Sensorineural Hearing Loss/rehabilitation , Humans , Noise , Technology
9.
J Am Acad Audiol ; 33(4): 196-205, 2022 04.
Article in English | MEDLINE | ID: mdl-34758503

ABSTRACT

BACKGROUND: For children with hearing loss, the primary goal of hearing aids is to provide improved access to the auditory environment within the limits of hearing aid technology and the child's auditory abilities. However, there are limited data examining aided speech recognition at very low (40 decibels A [dBA]) and low (50 dBA) presentation levels. PURPOSE: Due to the paucity of studies exploring aided speech recognition at low presentation levels for children with hearing loss, the present study aimed to (1) compare aided speech recognition at different presentation levels between groups of children with "normal" hearing and hearing loss, (2) explore the effects of aided pure tone average and aided Speech Intelligibility Index (SII) on aided speech recognition at low presentation levels for children with hearing loss ranging in degree from mild to severe, and (3) evaluate the effect of increasing low-level gain on aided speech recognition of children with hearing loss. RESEARCH DESIGN: In phase 1 of this study, a two-group, repeated-measures design was used to evaluate differences in speech recognition. In phase 2 of this study, a single-group, repeated-measures design was used to evaluate the potential benefit of additional low-level hearing aid gain for low-level aided speech recognition of children with hearing loss. STUDY SAMPLE: The first phase of the study included 27 school-age children with mild to severe sensorineural hearing loss and 12 school-age children with "normal" hearing. The second phase included eight children with mild to moderate sensorineural hearing loss. INTERVENTION: Prior to the study, children with hearing loss were fitted binaurally with digital hearing aids. 
Children in the second phase were fitted binaurally with digital study hearing aids and completed a trial period with two different gain settings: (1) gain required to match hearing aid output to prescriptive targets (i.e., primary program), and (2) a 6-dB increase in overall gain for low-level inputs relative to the primary program. In both phases of this study, real-ear verification measures were completed to ensure the hearing aid output matched prescriptive targets. DATA COLLECTION AND ANALYSIS: Phase 1 included monosyllabic word recognition and syllable-final plural recognition at three presentation levels (40, 50, and 60 dBA). Phase 2 compared speech recognition performance for the same test measures and presentation levels with two differing gain prescriptions. CONCLUSION: In phase 1 of the study, aided speech recognition was significantly poorer in children with hearing loss than in children with "normal" hearing at all presentation levels. Higher aided SII in the better ear (55 dB sound pressure level input) was associated with higher Consonant-Nucleus-Consonant word recognition at a 40 dBA presentation level. In phase 2, increasing the hearing aid gain for low-level inputs provided a significant improvement in syllable-final plural recognition at very low-level inputs and resulted in a nonsignificant trend toward better monosyllabic word recognition at very low presentation levels. Additional research is needed to document the speech recognition difficulties children with hearing aids may experience with low-level speech in the real world as well as the potential benefit or detriment of providing additional low-level hearing aid gain.
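The aided SII referenced above is defined by ANSI S3.5 as band audibility weighted by band importance. A deliberately simplified sketch of that idea, omitting the standard's masking and level-distortion terms; the band levels and importance weights are illustrative assumptions.

```python
import numpy as np

def simplified_sii(speech_levels, thresholds, importance):
    """Band-importance-weighted audibility, loosely in the spirit of the SII.

    speech_levels, thresholds : per-band levels in dB, same length
    importance : band-importance weights summing to 1
    Audibility per band is the sensation level clipped to a 30-dB range.
    """
    speech = np.asarray(speech_levels, dtype=float)
    thresh = np.asarray(thresholds, dtype=float)
    audibility = np.clip((speech - thresh) / 30.0, 0.0, 1.0)
    return float(np.sum(np.asarray(importance) * audibility))
```

With a 30-dB range per band, a band contributes fully once speech exceeds threshold by 30 dB and contributes nothing when speech is at or below threshold, which is why raising low-level gain can raise the aided SII for soft inputs.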


Subjects
Deafness , Hearing Aids , Sensorineural Hearing Loss , Hearing Loss , Speech Perception , Child , Humans , Hearing Loss/rehabilitation , Sensorineural Hearing Loss/rehabilitation , Speech Intelligibility
10.
J Am Acad Audiol ; 32(7): 433-444, 2021 07.
Article in English | MEDLINE | ID: mdl-34847584

ABSTRACT

BACKGROUND: Considerable variability exists in the speech recognition abilities achieved by children with cochlear implants (CIs) due to varying demographic and performance variables including language abilities. PURPOSE: This article examines the factors associated with speech recognition performance of school-aged children with CIs who were grouped by language ability. RESEARCH DESIGN: This is a single-center cross-sectional study with repeated measures for subjects across two language groups. STUDY SAMPLE: Participants included two groups of school-aged children, ages 7 to 17 years, who received unilateral or bilateral CIs by 4 years of age. The High Language group (N = 26) had age-appropriate spoken-language abilities, and the Low Language group (N = 24) had delays in their spoken-language abilities. DATA COLLECTION AND ANALYSIS: Group comparisons were conducted to examine the impact of demographic characteristics on word recognition in quiet and sentence recognition in quiet and noise. RESULTS: Speech recognition in quiet and noise was significantly poorer in the Low Language compared with the High Language group. Greater hours of implant use and better adherence to auditory-verbal (AV) therapy appointments were associated with higher speech recognition in quiet and noise. CONCLUSION: To ensure maximal speech recognition in children with low-language outcomes, professionals should develop strategies to ensure that families support full-time CI use and have the means to consistently attend AV appointments.


Subjects
Cochlear Implants , Speech , Adolescent , Child , Cross-Sectional Studies , Humans , Schools
11.
J Am Acad Audiol ; 32(3): 180-185, 2021 03.
Article in English | MEDLINE | ID: mdl-33873219

ABSTRACT

BACKGROUND: Cochlear implant (CI) recipients frequently experience difficulty understanding speech over the telephone and rely on hearing assistive technology (HAT) to improve performance. Bilateral inter-processor audio streaming via near-field magnetic induction, incorporated within a hearing aid or CI processor, can deliver the telephone audio signal captured at one sound processor to the sound processor at the opposite ear. To date, limited data exist examining the efficacy of this technology in CI users to improve speech understanding on the telephone. PURPOSE: The primary objective of this study was to examine telephone speech recognition outcomes in bilateral CI recipients in a bilateral inter-processor audio streaming condition (DuoPhone) compared with a monaural condition (i.e., telephone listening with one sound processor) in quiet and in background noise. Outcomes in the monaural and bilateral conditions using either a telecoil or T-Mic2 technology were also assessed. The secondary aim was to examine how deactivating microphone input in the contralateral processor in the bilateral wireless streaming conditions, and thereby modifying the signal-to-noise ratio, affected speech recognition in noise. RESEARCH DESIGN: A repeated-measures design was used to evaluate speech recognition performance in quiet and competing noise with the telephone signal transmitted acoustically or via the telecoil to the ipsilateral sound processor microphone in monaural and bilateral wireless streaming listening conditions. STUDY SAMPLE: Nine bilateral CI users with Advanced Bionics HiRes 90K and/or CII devices were included in the study. DATA COLLECTION AND ANALYSIS: The effects of phone input (monaural [DuoPhone Off] vs. bilateral [DuoPhone On]) and processor input (T-Mic2 vs. telecoil) on word recognition in quiet and noise were assessed using separate repeated-measures analyses of variance. The effect of deactivating the contralateral device microphone on speech recognition outcomes for the T-Mic2 DuoPhone conditions was assessed using paired Student's t-tests. RESULTS: Telephone speech recognition was significantly better in the bilateral inter-processor streaming conditions relative to the monaural conditions in both quiet and noise. Speech recognition outcomes were similar in quiet and noise when using the T-Mic2 and telecoil in the monaural and bilateral conditions. For the acoustic DuoPhone conditions using the T-Mic2, speech recognition in noise was significantly better when the microphone of the contralateral processor was disabled. CONCLUSION: Inter-processor audio streaming allows for bilateral listening on the telephone and produces better speech recognition in quiet and in noise compared with monaural listening conditions for adult CI recipients.


Subjects
Cochlear Implantation , Cochlear Implants , Speech Perception , Adult , Hearing , Humans , Telephone
12.
J Speech Lang Hear Res ; 63(5): 1561-1571, 2020 05 22.
Article in English | MEDLINE | ID: mdl-32379527

ABSTRACT

Purpose The purpose of this study was to investigate the relationship between the speech recognition benefit derived from the addition of a hearing aid (HA) to the nonimplanted ear (i.e., bimodal benefit) and spectral modulation detection (SMD) performance in the nonimplanted ear in a large clinical sample. An additional purpose was to investigate the influence of the low-frequency pure-tone average (PTA) of the nonimplanted ear and age at implantation on the variance in bimodal benefit. Method Participants included 311 unilateral cochlear implant (CI) users who wore an HA in the nonimplanted ear. Participants completed speech recognition testing in quiet and in noise with the CI alone and in the bimodal condition (i.e., CI and contralateral HA), as well as SMD testing in the nonimplanted ear. Results SMD performance in the nonimplanted ear was significantly correlated with bimodal benefit in quiet and in noise. However, this relationship was much weaker than in previous reports with smaller samples. SMD, low-frequency PTA of the nonimplanted ear from 125 to 750 Hz, and age at implantation together accounted for, at most, 19.1% of the variance in bimodal benefit. Conclusions Taken together, SMD, low-frequency PTA, and age at implantation account for more of the variance in bimodal benefit than any variable alone. A large portion of the variance (~80%) in bimodal benefit is not explained by these variables. Supplemental Material https://doi.org/10.23641/asha.12185493.
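The "19.1% of the variance" figure is an R² from a multiple regression of bimodal benefit on the three predictors. A sketch of how such a variance-explained estimate is computed; the data and the column choices (SMD, low-frequency PTA, age at implantation) are hypothetical stand-ins, not the study's dataset.

```python
import numpy as np

def variance_explained(predictors, outcome):
    """R^2 from an ordinary least-squares fit with an intercept.

    predictors : shape (n_subjects, n_predictors), e.g. columns for SMD,
                 low-frequency PTA, and age at implantation (hypothetical)
    outcome    : bimodal benefit scores, shape (n_subjects,)
    """
    # Design matrix with an intercept column.
    X = np.column_stack([np.ones(len(outcome)), predictors])
    coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    residuals = outcome - X @ coef
    ss_res = np.sum(residuals ** 2)                    # unexplained variance
    ss_tot = np.sum((outcome - outcome.mean()) ** 2)   # total variance
    return 1.0 - ss_res / ss_tot
```

An R² of 0.191, as reported, would mean the residual sum of squares is about 81% of the total, i.e., most of the between-subject variability in bimodal benefit is left unexplained.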


Subjects
Cochlear Implantation , Cochlear Implants , Hearing Aids , Speech Perception , Humans , Noise
13.
J Am Acad Audiol ; 31(1): 50-60, 2020 01.
Article in English | MEDLINE | ID: mdl-31429403

ABSTRACT

BACKGROUND: Children with hearing loss often experience difficulty understanding speech in noisy and reverberant classrooms. Traditional remote microphone use, in which the teacher wears a remote microphone that captures her speech and wirelessly delivers it to radio receivers coupled to a child's hearing aids, is often ineffective for small-group listening and learning activities. A potential solution is to place a remote microphone in the middle of the desk used for small-group learning situations to capture the speech of the peers around the desk and wirelessly deliver the speech to the child's hearing aids. PURPOSE: The objective of this study was to compare speech recognition of children using hearing aids across three conditions: (1) hearing aid in an omnidirectional microphone mode (HA-O), (2) hearing aid with automatic activation of a directional microphone (HA-ADM) (i.e., the hearing aid automatically switches in noisy environments from omnidirectional mode to a directional mode with a cardioid polar plot pattern), and (3) HA-ADM with simultaneous use of a remote microphone (RM) in a "Small Group" mode (HA-ADM-RM). The Small Group mode is designed to pick up multiple near-field talkers. An additional objective of this study was to compare the subjective listening preferences of children between the HA-ADM and HA-ADM-RM conditions. RESEARCH DESIGN: A single-group, repeated measures design was used to evaluate performance differences obtained in the three technology conditions. Sentence recognition in noise was assessed in a classroom setting with each technology, while sentences were presented at a fixed level from three different loudspeakers surrounding a desk (0, 90, and 270° azimuth) at which the participant was seated. This arrangement was intended to simulate a small-group classroom learning activity. STUDY SAMPLE: Fifteen children with moderate to moderately severe hearing loss. 
DATA COLLECTION AND ANALYSIS: Speech recognition was evaluated in the three hearing technology conditions, and subjective auditory preference was evaluated in the HA-ADM and HA-ADM-RM conditions. RESULTS: The use of the remote microphone system in the Small Group mode resulted in a statistically significant improvement in sentence recognition in noise of 24 and 21 percentage points compared with the HA-O and HA-ADM conditions, respectively (individual benefit ranged from -8.6 to 61.1 and 3.4 to 44 percentage points, respectively). There was not a significant difference in sentence recognition in noise between the HA-O and HA-ADM conditions when the remote microphone system was not in use. Eleven of the 14 participants who completed the subjective rating scale reported at least a slight preference for the use of the remote microphone system in the Small Group mode. CONCLUSIONS: Objective and subjective measures of sentence recognition indicated that use of remote microphone technology with the Small Group mode may improve hearing performance in small-group learning activities. Sentence recognition in noise improved by 24 percentage points compared to the HA-O condition, and children expressed a preference for the use of the remote microphone Small Group technology regarding listening comfort, sound quality, speech intelligibility, background noise reduction, and overall listening experience.


Subjects
Deafness/rehabilitation , Hearing Aids , Speech Perception , Adolescent , Auditory Threshold , Child , Equipment Design , Humans , Noise/adverse effects
14.
J Am Acad Audiol ; 29(4): 337-347, 2018 04.
Article in English | MEDLINE | ID: mdl-29664726

ABSTRACT

BACKGROUND: The electrically evoked stapedial reflex threshold (ESRT) has been shown to be a good predictor of upper stimulation level for cochlear implant recipients. Previous research has shown that the ESRT may be recorded at lower stimulation levels and with a higher incidence of success with the use of higher frequency probe tones (e.g., 678 and 1000 Hz) relative to the use of the conventional 226-Hz probe tone. Research has also shown that the acoustic reflex may be recorded at lower stimulus levels with the use of wideband reflectance when compared to the acoustic reflex threshold recorded with a conventional acoustic immittance measurement. PURPOSE: The objective of this study was to compare the ESRT recorded with acoustic immittance and wideband reflectance measurements. RESEARCH DESIGN: A repeated measures design was used to evaluate potential differences in ESRTs with stimulation at an apical, middle, and basal electrode contact with the use of two different techniques, acoustic immittance measurement and wideband reflectance. STUDY SAMPLE: Twelve users of Cochlear Nucleus cochlear implants were included in the study. DATA COLLECTION AND ANALYSIS: Participants' ESRTs were evaluated in response to stimulation at three different electrode contact sites (i.e., an apical, middle, and basal electrode contact) with the use of two different middle ear measurement techniques: acoustic immittance with a 226-Hz probe tone and wideband reflectance with a chirp stimulus. RESULTS: The mean ESRT recorded with wideband reflectance measurement was significantly lower when compared to the ESRT recorded with acoustic immittance. For one participant, the ESRT could not be recorded with acoustic immittance before reaching the participant's loudness discomfort threshold, but it was successfully recorded with the use of wideband reflectance. CONCLUSIONS: The ESRT may potentially be recorded at lower presentation levels with the use of wideband reflectance measures relative to the use of acoustic immittance with a 226-Hz probe tone. This may allow the ESRT to be obtained at levels that are more comfortable for the cochlear implant recipient, which may also allow for a higher incidence of successfully recording the ESRT.


Subjects
Acoustic Impedance Tests/methods, Acoustic Stimulation, Electric Stimulation, Reflex, Acoustic/physiology, Acoustic Stimulation/methods, Adult, Aged, Humans, Middle Aged, Stapedius/physiology, Young Adult
15.
Am J Audiol ; 26(4): 531-542, 2017 Dec 12.
Article in English | MEDLINE | ID: mdl-29121162

ABSTRACT

PURPOSE: This study implemented a fitting method, developed for use with frequency lowering hearing aids, across multiple testing sites, participants, and hearing aid conditions to evaluate speech perception with a novel type of frequency lowering. METHOD: A total of 8 participants, including children and young adults, participated in real-world hearing aid trials. A blinded crossover design, including posttrial withdrawal testing, was used to assess aided phoneme perception. The hearing aid conditions included adaptive nonlinear frequency compression (NFC), static NFC, and conventional processing. RESULTS: Enabling either adaptive NFC or static NFC improved group-level detection and recognition results for some high-frequency phonemes, when compared with conventional processing. Mean results for the distinction component of the Phoneme Perception Test (Schmitt, Winkler, Boretzki, & Holube, 2016) were similar to those obtained with conventional processing. CONCLUSIONS: Findings suggest that both types of NFC tested in this study provided a similar amount of speech perception benefit, when compared with group-level performance with conventional hearing aid technology. Individual-level results are presented with discussion around patterns of results that differ from the group average.
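The frequency-lowering idea behind the hearing aid conditions above can be pictured with a small sketch. In nonlinear frequency compression (NFC), input frequencies at or below a start (cutoff) frequency pass through unchanged, while frequencies above it are compressed toward the cutoff, pulling high-frequency speech cues into a lower, potentially audible range. The cutoff, ratio, and log-domain mapping below are illustrative assumptions, not the exact algorithm or settings evaluated in the study.

```python
def nfc_map(f_in_hz: float, cutoff_hz: float = 2000.0, ratio: float = 2.0) -> float:
    """Illustrative nonlinear frequency compression input-output curve.

    Frequencies at or below the cutoff are left unchanged; above the
    cutoff, the logarithmic distance from the cutoff is divided by the
    compression ratio, so higher inputs are mapped to lower outputs.
    """
    if f_in_hz <= cutoff_hz:
        return f_in_hz
    return cutoff_hz * (f_in_hz / cutoff_hz) ** (1.0 / ratio)


# Mid-frequency content is untouched; a high-frequency cue is lowered.
print(nfc_map(1000.0))  # below cutoff: 1000.0 (unchanged)
print(nfc_map(8000.0))  # above cutoff: 4000.0
```

With these hypothetical settings, an 8 kHz fricative cue lands at 4 kHz, inside the residual hearing range of many listeners with high-frequency loss; an adaptive scheme would additionally vary the cutoff with the input signal.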


Subjects
Auditory Perception, Hearing Aids, Hearing Loss, High-Frequency/rehabilitation, Hearing Loss, Sensorineural/rehabilitation, Prosthesis Fitting/methods, Adolescent, Adult, Child, Cross-Over Studies, Female, Humans, Male, Nonlinear Dynamics, Phonetics, Software, Young Adult
16.
Int J Audiol ; 56(12): 976-988, 2017 12.
Article in English | MEDLINE | ID: mdl-28851244

ABSTRACT

OBJECTIVE: The primary goal of this study was to evaluate a new form of non-linear frequency compression (NLFC) in children. The new NLFC processing scheme is adaptive and potentially allows for a better preservation of the spectral characteristics of the input sounds when compared to conventional NLFC processing. DESIGN: A repeated-measures design was utilised to compare the speech perception of the participants with two configurations of the new adaptive NLFC processing to their performance with the existing NLFC. The outcome measures included the University of Western Ontario Plurals test, the Consonant-Nucleus-Consonant word recognition test, and the Phonak Phoneme Perception test. STUDY SAMPLE: Study participants included 14 children, aged 6-17 years, with mild-to-severe low-frequency hearing loss and severe-to-profound high-frequency hearing loss. RESULTS: The results indicated that the use of the new adaptive NLFC processing resulted in significantly better average word recognition and plural detection relative to the conventional NLFC processing. CONCLUSION: Overall, the adaptive NLFC processing evaluated in this study has the potential to significantly improve speech perception relative to conventional NLFC processing.


Subjects
Correction of Hearing Impairment/instrumentation, Hearing Aids, Hearing Loss/rehabilitation, Persons With Hearing Impairments/rehabilitation, Signal Processing, Computer-Assisted, Speech Perception, Acoustics, Adolescent, Age Factors, Algorithms, Audiometry, Speech, Auditory Threshold, Child, Child Behavior, Equipment Design, Hearing, Hearing Loss/diagnosis, Hearing Loss/physiopathology, Hearing Loss/psychology, Humans, Nonlinear Dynamics, Persons With Hearing Impairments/psychology, Preliminary Data, Recognition, Psychology, Severity of Illness Index, Sound Spectrography, Speech Intelligibility
17.
J Am Acad Audiol ; 28(5): 415-435, 2017 May.
Article in English | MEDLINE | ID: mdl-28534732

ABSTRACT

BACKGROUND: Children with hearing loss experience significant difficulty understanding speech in noisy and reverberant situations. Adaptive noise management technologies, such as fully adaptive directional microphones and digital noise reduction, have the potential to improve communication in noise for children with hearing aids. However, there are no published studies evaluating the potential benefits children receive from the use of adaptive noise management technologies in simulated real-world environments as well as in daily situations. PURPOSE: The objective of this study was to compare speech recognition, speech intelligibility ratings (SIRs), and sound preferences of children using hearing aids equipped with and without adaptive noise management technologies. RESEARCH DESIGN: A single-group, repeated measures design was used to evaluate performance differences obtained in four simulated environments. In each simulated environment, participants were tested in a basic listening program with minimal noise management features, a manual program designed for that scene, and the hearing instruments' adaptive operating system that steered hearing instrument parameterization based on the characteristics of the environment. STUDY SAMPLE: Twelve children with mild to moderately severe sensorineural hearing loss. DATA COLLECTION AND ANALYSIS: Speech recognition and SIRs were evaluated in three hearing aid programs with and without noise management technologies across two different test sessions and various listening environments. Also, the participants' perceptual hearing performance in daily real-world listening situations with two of the hearing aid programs was evaluated during a four- to six-week field trial that took place between the two laboratory sessions. 
RESULTS: On average, the use of adaptive noise management technology improved sentence recognition in noise for speech presented in front of the participant but resulted in a decrement in performance for signals arriving from behind when the participant was facing forward. However, the improvement with adaptive noise management exceeded the decrement obtained when the signal arrived from behind. Most participants reported better subjective SIRs when using adaptive noise management technologies, particularly when the signal of interest arrived from in front of the listener. In addition, most participants reported a preference for the technology with an automatically switching, adaptive directional microphone and adaptive noise reduction in real-world listening situations when compared to conventional, omnidirectional microphone use with minimal noise reduction processing. CONCLUSIONS: Use of the adaptive noise management technologies evaluated in this study improves school-age children's speech recognition in noise for signals arriving from the front. Although a small decrement in speech recognition in noise was observed for signals arriving from behind the listener, most participants reported a preference for use of noise management technology both when the signal arrived from in front and from behind the child. The results of this study suggest that adaptive noise management technologies should be considered for use with school-age children when listening in academic and social situations.


Subjects
Hearing Loss, Sensorineural/rehabilitation, Noise/prevention & control, Acoustics/instrumentation, Adolescent, Analysis of Variance, Auditory Threshold, Biomedical Technology, Child, Equipment Design, Hearing Aids, Hearing Loss, Sensorineural/physiopathology, Hearing Loss, Sensorineural/psychology, Humans, Models, Theoretical, Patient Preference, Speech Intelligibility/physiology, Speech Perception/physiology
18.
J Am Acad Audiol ; 28(2): 127-140, 2017 Feb.
Article in English | MEDLINE | ID: mdl-28240980

ABSTRACT

BACKGROUND: A number of published studies have demonstrated the benefits of electric-acoustic stimulation (EAS) over conventional electric stimulation for adults with functional low-frequency acoustic hearing and severe-to-profound high-frequency hearing loss. These benefits potentially include better speech recognition in quiet and in noise, better localization, improvements in sound quality, better music appreciation and aptitude, and better pitch recognition. There is, however, a paucity of published reports describing the potential benefits and limitations of EAS for children with functional low-frequency acoustic hearing and severe-to-profound high-frequency hearing loss. PURPOSE: The objective of this study was to explore the potential benefits of EAS for children. RESEARCH DESIGN: A repeated measures design was used to evaluate performance differences obtained with EAS stimulation versus acoustic- and electric-only stimulation. STUDY SAMPLE: Seven users of Cochlear Nucleus Hybrid, Nucleus 24 Freedom, CI512, and CI422 implants were included in the study. DATA COLLECTION AND ANALYSIS: Sentence recognition (assayed using the pediatric version of the AzBio sentence recognition test) was evaluated in quiet and at three fixed signal-to-noise ratios (SNR) (0, +5, and +10 dB). Functional hearing performance was also evaluated with the use of questionnaires, including the comparative version of the Speech, Spatial, and Qualities, the Listening Inventory for Education Revised, and the Children's Home Inventory for Listening Difficulties. RESULTS: Speech recognition in noise was typically better with EAS compared to participants' performance with acoustic- and electric-only stimulation, particularly when evaluated at the less favorable SNR. Additionally, in real-world situations, children generally preferred to use EAS compared to electric-only stimulation. 
Also, the participants' classroom teachers observed better hearing performance in the classroom with the use of EAS. CONCLUSIONS: Use of EAS provided better speech recognition in quiet and in noise when compared to performance obtained with use of acoustic- and electric-only stimulation, and children responded favorably to the use of EAS implemented in an integrated sound processor for real-world use.


Subjects
Acoustic Stimulation/methods, Auditory Threshold/physiology, Cochlear Implants, Hearing Aids, Hearing Loss, High-Frequency/therapy, Speech Perception/physiology, Adolescent, Age Factors, Audiometry/methods, Child, Child, Preschool, Electric Stimulation/methods, Female, Follow-Up Studies, Hearing Loss, High-Frequency/diagnosis, Humans, Infant, Male, Severity of Illness Index, Treatment Outcome
19.
Ear Hear ; 38(2): 255-261, 2017.
Article in English | MEDLINE | ID: mdl-27941405

ABSTRACT

OBJECTIVE: The electrically evoked stapedial reflex threshold (eSRT) has proven to be useful in setting upper stimulation levels of cochlear implant recipients. However, the literature suggests that the reflex can be difficult to observe in a significant percentage of the population. The primary goal of this investigation was to assess the difference in eSRT levels obtained with alternative acoustic admittance probe tone frequencies. DESIGN: A repeated-measures design was used to examine the effect of 3 probe tone frequencies (226, 678, and 1000 Hz) on eSRT in 23 adults with cochlear implants. RESULTS: The mean eSRT measured using the conventional probe tone of 226 Hz was significantly higher than the mean eSRT measured with use of 678 and 1000 Hz probe tones. The mean eSRTs were 174, 167, and 165 charge units with use of 226, 678, and 1000 Hz probe tones, respectively. There was not a statistically significant difference between the average eSRTs for the 678 and 1000 Hz probe tones. Twenty of 23 participants had eSRTs at lower charge unit levels with use of either a 678 or 1000 Hz probe tone when compared with the 226 Hz probe tone. Two participants had eSRTs measured with 678 or 1000 Hz probe tones that were equal in level to the eSRT measured with a 226 Hz probe tone. Only 1 participant had an eSRT that was obtained at a lower charge unit level with a 226 Hz probe tone relative to the eSRTs obtained with the 678 and 1000 Hz probe tones. CONCLUSIONS: The results of this investigation demonstrate that the use of a standard 226 Hz probe tone is not ideal for measurement of the eSRT. The use of higher probe tone frequencies (i.e., 678 or 1000 Hz) resulted in lower eSRT levels when compared with the eSRT levels obtained with use of a 226 Hz probe tone. In addition, 4 of the 23 participants included in this study did not have a measurable eSRT with use of a 226 Hz probe tone, but all of the participants had measurable eSRTs with use of both the 678 and 1000 Hz probe tones.
Additional work is required to understand the clinical implication of these changes in the context of cochlear implant programming.
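The group means reported above (174, 167, and 165 charge units for the 226, 678, and 1000 Hz probe tones) correspond to average eSRT reductions of 7 and 9 charge units for the higher-frequency probe tones. A minimal sketch of that arithmetic, using only the published group means:

```python
# Group-mean eSRTs (charge units) by probe tone frequency (Hz),
# taken from the results reported above.
mean_esrt = {226: 174, 678: 167, 1000: 165}

# Average reduction relative to the conventional 226 Hz probe tone.
reduction = {freq: mean_esrt[226] - level
             for freq, level in mean_esrt.items() if freq != 226}
print(reduction)  # {678: 7, 1000: 9}
```

These are group-level averages; as the abstract notes, individual participants varied, with 20 of 23 showing lower eSRTs at the higher probe tone frequencies.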


Subjects
Cochlear Implantation, Cochlear Implants, Deafness/rehabilitation, Reflex, Acoustic/physiology, Action Potentials/physiology, Adult, Aged, Aged, 80 and over, Deafness/physiopathology, Electric Stimulation, Female, Humans, Male, Middle Aged, Young Adult
20.
J Am Acad Audiol ; 27(5): 388-394, 2016 05.
Article in English | MEDLINE | ID: mdl-27179258

ABSTRACT

BACKGROUND: Cochlear implant (CI) recipients often experience difficulty understanding speech in noise and speech that originates from a distance. Many CI recipients also experience difficulty understanding speech originating from a television. Use of hearing assistance technology (HAT) may improve speech recognition in noise and for signals that originate from more than a few feet from the listener; however, there are no published studies evaluating the potential benefits of a wireless HAT designed to deliver audio signals from a television directly to a CI sound processor. PURPOSE: The objective of this study was to compare speech recognition in quiet and in noise of CI recipients with the use of their CI alone and with the use of their CI and a wireless HAT (Cochlear Wireless TV Streamer). RESEARCH DESIGN: A two-way repeated measures design was used to evaluate performance differences obtained in quiet and in competing noise (65 dBA) with the CI sound processor alone and with the sound processor coupled to the Cochlear Wireless TV Streamer. STUDY SAMPLE: Sixteen users of Cochlear Nucleus 24 Freedom, CI512, and CI422 implants were included in the study. DATA COLLECTION AND ANALYSIS: Participants were evaluated in four conditions including use of the sound processor alone and use of the sound processor with the wireless streamer in quiet and in the presence of competing noise at 65 dBA. Speech recognition was evaluated in each condition with two full lists of Computer-Assisted Speech Perception Testing and Training Sentence-Level Test sentences presented from a light-emitting diode television. RESULTS: Speech recognition in noise was significantly better with use of the wireless streamer compared to participants' performance with their CI sound processor alone. There was also a nonsignificant trend toward better performance in quiet with use of the TV Streamer. 
Performance was significantly poorer when evaluated in noise compared to performance in quiet when the TV Streamer was not used. CONCLUSIONS: Use of the Cochlear Wireless TV Streamer designed to stream audio from a television directly to a CI sound processor provides better speech recognition in quiet and in noise when compared to performance obtained with use of the CI sound processor alone.


Subjects
Cochlear Implants, Speech Perception, Television, Wireless Technology, Adult, Aged, Aged, 80 and over, Humans, Middle Aged, Noise, Speech, Young Adult