Results 1 - 20 of 30,064
1.
Multisens Res ; 37(2): 125-141, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38714314

ABSTRACT

Trust is critical to human social interaction, and research has identified many cues that contribute to judgments of this social trait. Two of these cues are the pitch of the voice and the facial width-to-height ratio (fWHR). Research has also indicated that the content of a spoken sentence itself affects perceived trustworthiness, a finding that has not yet been brought into multisensory research. The current study investigates previously developed theories of trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants judged the trustworthiness of a voice speaking a neutral or romantic sentence while viewing a face; the average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness and that this effect extends to multimodal presentation. Further, the mean pitch of the voice and the fWHR of the face were useful indicators in a multimodal setting, and these cues interacted across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings hold across genders, age groups, and languages.


Subject(s)
Face , Trust , Voice , Humans , Female , Voice/physiology , Young Adult , Adult , Face/physiology , Speech Perception/physiology , Pitch Perception/physiology , Facial Recognition/physiology , Cues , Adolescent
2.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715408

ABSTRACT

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension, and it remains unclear whether the brain's neural responses during listening can index aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy (fNIRS) to 93 normal-hearing human adults (20 to 70 years old) during a sentence-listening task comprising a quiet condition and four noisy conditions at different signal-to-noise ratios (SNR = 10, 5, 0, and -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decline with age under the four noisy conditions, but not under the quiet condition. Brain activations in the SNR = 10 dB listening condition successfully predicted individual age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog of Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activation of regions involved in sensory-motor mapping of sound, especially in noisy conditions, can be a more sensitive measure for age prediction than overt behavioral measures.
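As a rough, hypothetical sketch of what region-based brain-age predictive modeling can look like in practice (the feature layout, the ridge model, and all names below are illustrative assumptions, not the authors' pipeline), one could regress chronological age on per-region activation values with cross-validation:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_regions = 93, 40                    # placeholder dimensions
X = rng.normal(size=(n_subjects, n_regions))      # per-region activations (e.g., SNR = 10 dB condition)
age = rng.uniform(20, 70, size=n_subjects)        # placeholder chronological ages

# Cross-validated prediction of age from regional activation patterns
predictions = cross_val_predict(Ridge(alpha=1.0), X, age,
                                cv=KFold(n_splits=10, shuffle=True, random_state=0))
r = np.corrcoef(predictions, age)[0, 1]
print(f"cross-validated prediction r = {r:.2f}")

The correlation between cross-validated predicted and true ages is one common index of how well regional activation patterns encode aging.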


Subject(s)
Aging , Brain , Comprehension , Noise , Spectroscopy, Near-Infrared , Speech Perception , Humans , Adult , Speech Perception/physiology , Male , Female , Spectroscopy, Near-Infrared/methods , Middle Aged , Young Adult , Aged , Comprehension/physiology , Brain/physiology , Brain/diagnostic imaging , Aging/physiology , Brain Mapping/methods , Acoustic Stimulation/methods
3.
J Acoust Soc Am ; 155(5): 2934-2947, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717201

ABSTRACT

Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility under these underexplored spatial configurations. Speech reception thresholds were measured in three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, using monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane, or front-back and up-down symmetry in the median plane) contributes positively to intelligibility. Both target direction and relative target-masker separation influence the masking release attributed to spatial separation. As the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains a remarkably effective cue and can even complement spatial separation in improving intelligibility. Further analysis indicated that current intelligibility models have difficulty accurately predicting intelligibility in the scenarios explored in this study.


Subject(s)
Cues , Perceptual Masking , Sound Localization , Speech Intelligibility , Speech Perception , Humans , Female , Male , Young Adult , Adult , Speech Perception/physiology , Acoustic Stimulation , Auditory Threshold , Speech Acoustics , Speech Reception Threshold Test , Noise
4.
J Acoust Soc Am ; 155(5): 3060-3070, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717210

ABSTRACT

Speakers tailor their speech to different types of interlocutors. For example, speech directed to voice technology has different acoustic-phonetic characteristics than speech directed to a human. The present study investigates the perceptual consequences of human- and device-directed registers in English. We compare two groups of speakers: participants whose first language is English (L1) and bilingual L1 Mandarin-L2 English talkers. Participants produced short sentences in several conditions: an initial production and a repeat production after a human or device guise indicated either understanding or misunderstanding. In experiment 1, a separate group of L1 English listeners heard these sentences and transcribed the target words. In experiment 2, the same productions were transcribed by an automatic speech recognition (ASR) system. Results show that transcription accuracy was highest for L1 talkers for both human and ASR transcribers. Furthermore, there were no overall differences in transcription accuracy between human- and device-directed speech. Finally, while human listeners showed an intelligibility benefit for coda repair productions, the ASR transcriber did not benefit from these enhancements. Findings are discussed in terms of models of register adaptation, phonetic variation, and human-computer interaction.


Subject(s)
Multilingualism , Speech Intelligibility , Speech Perception , Humans , Male , Female , Adult , Young Adult , Speech Acoustics , Phonetics , Speech Recognition Software
5.
J Acoust Soc Am ; 155(5): 2990-3004, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717206

ABSTRACT

Speakers can place prosodic prominence on any location within a sentence, generating focus prosody that signals new information to listeners. This study aimed to investigate age-related changes in the bottom-up processing of focus perception in Jianghuai Mandarin by clarifying the perceptual cues and auditory processing abilities involved in identifying focus locations. Young, middle-aged, and older speakers of Jianghuai Mandarin completed a focus identification task and an auditory perception task. The results showed that increasing age led to a decrease in listeners' accuracy in identifying focus locations, with all participants performing worst when dynamic pitch cues were inaccessible. Auditory processing abilities did not predict focus perception performance in young and middle-aged listeners but accounted significantly for the variance in older adults' performance. These findings suggest that age-related deterioration in focus perception can be largely attributed to declines in the auditory processing of perceptual cues. Poor ability to extract frequency-modulation cues may be the most important psychoacoustic factor underlying older adults' difficulties in perceiving focus prosody in Jianghuai Mandarin. The results contribute to our understanding of the bottom-up mechanisms involved in linguistic prosody processing in aging adults, particularly in tonal languages.


Subject(s)
Aging , Cues , Speech Perception , Humans , Middle Aged , Aged , Male , Female , Aging/psychology , Aging/physiology , Young Adult , Adult , Speech Perception/physiology , Age Factors , Speech Acoustics , Acoustic Stimulation , Pitch Perception , Language , Voice Quality , Psychoacoustics , Audiometry, Speech
6.
J Acoust Soc Am ; 155(5): 3090-3100, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717212

ABSTRACT

The perceived level of femininity and masculinity is a prominent property by which a speaker's voice is indexed, and a vocal expression incongruent with the speaker's gender identity can greatly contribute to gender dysphoria. Our understanding of the acoustic cues to the levels of masculinity and femininity perceived by listeners in voices is not well developed, and an increased understanding of them would benefit communication of therapy goals and evaluation in gender-affirming voice training. We developed a voice bank with 132 voices with a range of levels of femininity and masculinity expressed in the voice, as rated by 121 listeners in independent, individually randomized perceptual evaluations. Acoustic models were developed from measures identified as markers of femininity or masculinity in the literature using penalized regression and tenfold cross-validation procedures. The 223 most important acoustic cues explained 89% and 87% of the variance in the perceived level of femininity and masculinity in the evaluation set, respectively. The median fo was confirmed to provide the primary cue, but other acoustic properties must be considered in accurate models of femininity and masculinity perception. The developed models are proposed to afford communication and evaluation of gender-affirming voice training goals and improve voice synthesis efforts.
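To make the penalized-regression-with-tenfold-cross-validation idea concrete, here is a minimal sketch; the feature names, data, and choice of a LASSO penalty are assumptions for illustration, not the study's actual model or variables:

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_voices, n_measures = 132, 20
X = rng.normal(size=(n_voices, n_measures))   # placeholder acoustic measures (e.g., median fo, formants, spectral slope)
y = rng.uniform(0, 100, size=n_voices)        # placeholder mean perceived-femininity rating per voice

# LASSO with tenfold cross-validation to choose the penalty strength
model = make_pipeline(StandardScaler(), LassoCV(cv=10, random_state=1))
model.fit(X, y)

# Nonzero coefficients mark the acoustic measures the penalized model retains
coefs = model.named_steps["lassocv"].coef_
print("retained measures:", np.flatnonzero(coefs))

Variance explained on a held-out evaluation set, as reported above, would then be computed on data not used for fitting (e.g., via model.score).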


Subject(s)
Cues , Speech Acoustics , Speech Perception , Voice Quality , Humans , Female , Male , Adult , Young Adult , Masculinity , Middle Aged , Femininity , Adolescent , Gender Identity , Acoustics
7.
JASA Express Lett ; 4(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38717469

ABSTRACT

The perceptual boundary between short and long categories depends on speech rate. We investigated the influence of speech rate on perceptual boundaries for short and long vowel and consonant contrasts by Spanish-English bilingual listeners and English monolinguals. Listeners tended to adapt their perceptual boundaries to speech rates, but the strategy differed between groups, especially for consonants. Understanding the factors that influence auditory processing in this population is essential for developing appropriate assessments of auditory comprehension. These findings have implications for the clinical care of older populations whose ability to rely on spectral and/or temporal information in the auditory signal may decline.


Subject(s)
Multilingualism , Speech Perception , Humans , Speech Perception/physiology , Female , Male , Adult , Phonetics , Young Adult
8.
Trends Hear ; 28: 23312165241253653, 2024.
Article in English | MEDLINE | ID: mdl-38715401

ABSTRACT

This study aimed to preliminarily investigate the associations between performance on the integrated Digit-in-Noise Test (iDIN) and performance on measures of general cognition and working memory (WM). The study recruited 81 older adult hearing aid users between 60 and 95 years of age with bilateral moderate to severe hearing loss. The Chinese version of the Montreal Cognitive Assessment Basic (MoCA-BC) was used to screen older adults for mild cognitive impairment. Speech reception thresholds (SRTs) were measured using 2- to 5-digit sequences of the Mandarin iDIN. The differences in SRT between five-digit and two-digit sequences (SRT5-2), and between five-digit and three-digit sequences (SRT5-3), were used as indicators of memory performance. The results were compared to those from the Digit Span Test and Corsi Blocks Tapping Test, which evaluate WM and attention capacity. SRT5-2 and SRT5-3 demonstrated significant correlations with the three cognitive function tests (rs ranging from -.705 to -.528). Furthermore, SRT5-2 and SRT5-3 were significantly higher in participants who failed the MoCA-BC screening compared to those who passed. The findings show associations between performance on the iDIN and performance on memory tests. However, further validation and exploration are needed to fully establish its effectiveness and efficacy.
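For illustration only (column names and numbers below are made up, not the study's data), the SRT difference scores and their Spearman correlations with a cognitive measure can be computed along these lines:

import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "srt_2digit": [-8.1, -7.5, -6.9, -9.0, -7.2],   # SRTs in dB SNR for 2-digit sequences
    "srt_3digit": [-7.0, -6.2, -5.8, -7.9, -6.4],
    "srt_5digit": [-3.2, -4.0, -2.5, -5.1, -3.8],
    "digit_span": [12, 14, 9, 15, 10],              # working-memory score
})

# Difference scores: larger values mean a greater cost of longer digit sequences
df["SRT5_2"] = df["srt_5digit"] - df["srt_2digit"]
df["SRT5_3"] = df["srt_5digit"] - df["srt_3digit"]

rho, p = spearmanr(df["SRT5_2"], df["digit_span"])
print(f"SRT5-2 vs. digit span: rho = {rho:.2f}, p = {p:.3f}")

Negative correlations of the kind reported above would indicate that listeners with better working memory pay a smaller SRT penalty as the digit sequences lengthen.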


Subject(s)
Cognition , Cognitive Dysfunction , Hearing Aids , Memory, Short-Term , Humans , Aged , Female , Male , Middle Aged , Aged, 80 and over , Memory, Short-Term/physiology , Cognitive Dysfunction/diagnosis , Noise/adverse effects , Speech Perception/physiology , Speech Reception Threshold Test , Age Factors , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Hearing Loss/rehabilitation , Hearing Loss/diagnosis , Hearing Loss/psychology , Mental Status and Dementia Tests , Memory , Acoustic Stimulation , Predictive Value of Tests , Correction of Hearing Impairment/instrumentation , Auditory Threshold
9.
JASA Express Lett ; 4(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38717468

ABSTRACT

This study evaluated whether adaptive training with time-compressed speech produces an age-dependent improvement in speech recognition in 14 adult cochlear-implant users. The protocol consisted of a pretest, 5 h of training, and a posttest, using time-compressed speech and an adaptive procedure. There were significant improvements in time-compressed speech recognition at the posttest session following training (>5% in the average time-compressed speech recognition threshold), but no effects of age. These results are promising for the use of adaptive training in aural rehabilitation strategies for cochlear-implant users across the adult lifespan, and for the possible use of speech signals such as time-compressed speech to train temporal processing.


Subject(s)
Cochlear Implants , Speech Perception , Humans , Speech Perception/physiology , Aged , Male , Middle Aged , Female , Adult , Aged, 80 and over , Cochlear Implantation/methods , Time Factors
10.
Proc Natl Acad Sci U S A ; 121(23): e2320489121, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38805278

ABSTRACT

Neural oscillations reflect fluctuations in excitability, which bias the perception of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive and thereby become active at earlier oscillatory phases when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, in which the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.


Subject(s)
Language , Magnetoencephalography , Speech Perception , Humans , Speech Perception/physiology , Male , Female , Adult , Temporal Lobe/physiology , Young Adult , Models, Neurological
11.
PLoS One ; 19(5): e0304150, 2024.
Article in English | MEDLINE | ID: mdl-38805447

ABSTRACT

When comprehending speech, listeners can use information encoded in visual cues from a face to enhance auditory speech comprehension. For example, prior work has shown that mouth movements reflect articulatory features of speech segments and durational information, while pitch and speech amplitude are primarily cued by eyebrow and head movements. Little is known about how the visual perception of segmental and prosodic speech information is influenced by linguistic experience. Using eye-tracking, we studied how perceivers' visual scanning of different regions of a talking face predicts accuracy in a task targeting segmental versus prosodic information, and how this is influenced by language familiarity. Twenty-four native English perceivers heard two audio sentences in either English or Mandarin (an unfamiliar, non-native language), which sometimes differed in segmental or prosodic information (or both). Perceivers then saw a silent video of a talking face and judged whether the video matched the first or the second audio sentence (or whether both sentences were the same). First, increased looking to the mouth predicted correct responses only in non-native-language trials. Second, the start of a successful search for speech information in the mouth area was significantly delayed in non-native versus native trials, but only when the auditory sentences differed in prosodic information, not when they differed segmentally. Third, in correct trials, saccade amplitude was significantly greater in native-language trials than in non-native trials, indicating more narrowly focused fixations in the latter. Taken together, these results suggest that mouth-looking was generally more evident when processing a non-native than a native language across all analyses; notably, the language effect on perceivers' latency to fixate the mouth was largest in trials where only prosodic information was useful for the task.


Subject(s)
Language , Phonetics , Speech Perception , Humans , Female , Male , Adult , Speech Perception/physiology , Young Adult , Face/physiology , Visual Perception/physiology , Eye Movements/physiology , Speech/physiology , Eye-Tracking Technology
12.
PLoS Biol ; 22(5): e3002631, 2024 May.
Article in English | MEDLINE | ID: mdl-38805517

ABSTRACT

Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music from speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized, ambiguous audio signals should align with their AM parameters. Across four experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech and those with a lower peak AM frequency as music. Interestingly, this principle was used consistently by all listeners for speech judgments, but only by musically sophisticated listeners for music judgments. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgments regardless of musical sophistication. The data suggest that the auditory system can rely on an acoustic property as basic and low-level as AM to distinguish music from speech, a simple principle that invites both neurophysiological and evolutionary experiments and speculation.
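As a minimal sketch of how a signal's peak AM frequency can be estimated (the toy stimulus and parameter choices are assumptions for illustration, not the authors' analysis code), one can take the Hilbert envelope and find the dominant low-frequency component of its spectrum:

import numpy as np
from scipy.signal import hilbert

fs = 16000                                   # sample rate in Hz
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)       # toy carrier tone
stimulus = (1 + np.sin(2 * np.pi * 4 * t)) * carrier   # amplitude-modulated at 4 Hz (a speech-like AM rate)

envelope = np.abs(hilbert(stimulus))         # amplitude envelope
envelope -= envelope.mean()                  # remove the DC component before the FFT
spectrum = np.abs(np.fft.rfft(envelope))
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)

band = (freqs >= 0.5) & (freqs <= 32)        # AM range of interest
peak_am = freqs[band][np.argmax(spectrum[band])]
print(f"peak AM frequency ~ {peak_am:.1f} Hz")   # ~4 Hz for this toy stimulus

Applied to noise-synthesized stimuli of the kind described above, a higher estimated peak AM frequency would, under the reported pattern, bias judgments toward speech and a lower one toward music.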


Subject(s)
Acoustic Stimulation , Auditory Perception , Music , Speech Perception , Humans , Male , Female , Adult , Auditory Perception/physiology , Acoustic Stimulation/methods , Speech Perception/physiology , Young Adult , Speech/physiology , Adolescent
13.
Sci Rep ; 14(1): 12203, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806554

ABSTRACT

Developmental Coordination Disorder (DCD) is a common neurodevelopmental disorder featuring deficits in motor coordination and motor timing among children. Deficits in rhythmic tracking, including perceptually tracking and synchronizing action with auditory rhythms, have been studied in a wide range of motor disorders, providing a foundation for developing rehabilitation programs that incorporate auditory rhythms. We tested whether DCD also features these auditory-motor deficits among 7- to 10-year-old children. In a speech recognition task with no overt motor component, modulating the speech rhythm interfered more with the performance of children at risk for DCD than with that of typically developing (TD) children. A set of auditory-motor tapping tasks further showed that, although children at risk for DCD performed worse than TD children overall, the presence of an auditory rhythmic cue (an isochronous metronome or music) facilitated the temporal consistency of tapping. Finally, accuracy in recognizing rhythmically modulated speech and tapping consistency both correlated with performance on the standardized motor assessment. Together, the results show that auditory rhythmic regularity benefits auditory perception and auditory-motor coordination in children at risk for DCD. This provides a foundation for future clinical studies to develop evidence-based interventions involving auditory-motor rhythmic coordination for children with DCD.


Subject(s)
Auditory Perception , Motor Skills Disorders , Humans , Child , Motor Skills Disorders/physiopathology , Female , Male , Auditory Perception/physiology , Psychomotor Performance/physiology , Acoustic Stimulation , Speech Perception/physiology
14.
Cogn Sci ; 48(5): e13449, 2024 May.
Article in English | MEDLINE | ID: mdl-38773754

ABSTRACT

We recently reported strong, replicable (i.e., replicated) evidence for lexically mediated compensation for coarticulation (LCfC; Luthra et al., 2021), whereby lexical knowledge influences a prelexical process. Critically, evidence for LCfC provides robust support for interactive models of cognition that include top-down feedback and is inconsistent with autonomous models that allow only feedforward processing. McQueen, Jesse, and Mitterer (2023) offer five counter-arguments against our interpretation; we respond to each of those arguments here and conclude that top-down feedback provides the most parsimonious explanation of extant data.


Subject(s)
Speech Perception , Humans , Speech Perception/physiology , Cognition , Language
15.
Am Ann Deaf ; 168(5): 241-257, 2024.
Article in English | MEDLINE | ID: mdl-38766937

ABSTRACT

Our study investigated differences in speech performance and neurophysiological responses between school-age children with unilateral hearing loss (UHL) and typically developing (TD) peers. We recruited a total of 16 primary school-age children (UHL = 9, TD = 7), screened by doctors at Shin Kong Wu-Ho-Su Memorial Hospital. We used the Peabody Picture Vocabulary Test-Revised (PPVT-R) to test word comprehension; the PPVT-R percentile rank (PR) value was proportional to the auditory memory score (from the Children's Oral Comprehension Test) in both groups. We then assessed the latency and amplitude of the auditory P300 event-related potential (ERP) and found that P300 latency was prolonged in the UHL group compared with the TD group. Although students with UHL have typical hearing in one ear, our results suggest that long-term UHL may lead to atypical organization of brain areas responsible for auditory processing, and possibly visual perception, contributing to speech delay and learning difficulties.


Subject(s)
Event-Related Potentials, P300 , Hearing Loss, Unilateral , Humans , Child , Event-Related Potentials, P300/physiology , Male , Female , Hearing Loss, Unilateral/physiopathology , Hearing Loss, Unilateral/rehabilitation , Reaction Time/physiology , Speech Perception/physiology , Evoked Potentials, Auditory/physiology , China , Case-Control Studies , Language , Comprehension
16.
Trends Hear ; 28: 23312165241256721, 2024.
Article in English | MEDLINE | ID: mdl-38773778

ABSTRACT

This study investigated the role of hearing aid (HA) use in language outcomes among preschool children aged 3-5 years with mild bilateral hearing loss (MBHL). Data were retrieved from 52 children with MBHL and 30 children with normal hearing (NH). The associations between demographic and audiological factors and language outcomes were examined. Analyses of variance were conducted to compare the language abilities of HA users, non-HA users, and their NH peers, and regression analyses were performed to identify significant predictors of language outcomes. Aided better-ear pure-tone average (BEPTA) was significantly correlated with language comprehension scores. Among children with MBHL, those who used HAs outperformed those who did not across all linguistic domains, and the language skills of children with MBHL were comparable to those of their NH peers. The degree of improvement in audibility, in terms of aided BEPTA, was a significant predictor of language comprehension. Notably, 50% of the parents expressed reluctance about HA use for their children with MBHL. The findings highlight the positive impact of HA use on language development in this population. Professionals may therefore consider HAs a viable treatment option for children with MBHL, especially when there is a risk of language delay due to hearing loss. It was also observed that 25% of the children with MBHL had late-onset hearing loss; consequently, preschool screening or a listening-performance checklist is recommended to facilitate early detection.


Subject(s)
Child Language , Hearing Aids , Hearing Loss, Bilateral , Language Development , Humans , Male , Child, Preschool , Female , Hearing Loss, Bilateral/rehabilitation , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Bilateral/psychology , Speech Perception , Case-Control Studies , Correction of Hearing Impairment/instrumentation , Treatment Outcome , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Severity of Illness Index , Comprehension , Hearing , Audiometry, Pure-Tone , Age Factors , Auditory Threshold , Language Tests
17.
Curr Biol ; 34(9): R348-R351, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38714162

ABSTRACT

A recent study has used scalp-recorded electroencephalography to obtain evidence of semantic processing of human speech and objects by domesticated dogs. The results suggest that dogs do comprehend the meaning of familiar spoken words, in that a word can evoke the mental representation of the object to which it refers.


Subject(s)
Cognition , Semantics , Animals , Dogs/psychology , Cognition/physiology , Humans , Electroencephalography , Speech/physiology , Speech Perception/physiology , Comprehension/physiology
18.
Hear Res ; 447: 109023, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733710

ABSTRACT

Limited auditory input, whether caused by hearing loss or by electrical stimulation through a cochlear implant (CI), can be compensated for by the remaining senses. Specifically for CI users, previous studies reported not only improved visual skills, but also altered cortical processing of unisensory visual and auditory stimuli. However, in multisensory scenarios, it is still unclear how auditory deprivation (before implantation) and electrical hearing experience (after implantation) affect cortical audiovisual speech processing. Here, we present a prospective longitudinal electroencephalography (EEG) study that systematically examined the deprivation- and CI-induced alterations of cortical processing of audiovisual words by comparing event-related potentials (ERPs) in postlingually deafened CI users before and after implantation (five weeks and six months of CI use). A group of matched normal-hearing (NH) listeners served as controls. The participants performed a word-identification task with congruent and incongruent audiovisual words, focusing their attention on either the visual (lip movement) or the auditory speech signal. This allowed us to study the (top-down) attention effect on the (bottom-up) sensory cortical processing of audiovisual speech. When compared to the NH listeners, the CI candidates (before implantation) and the CI users (after implantation) exhibited enhanced lipreading abilities and an altered cortical response in the N1 latency range (90-150 ms), characterized by decreased theta oscillation power (4-8 Hz) and a smaller amplitude in the auditory cortex. After implantation, however, the auditory-cortex response gradually increased and developed stronger intra-modal connectivity. Nevertheless, task efficiency and activation in the visual cortex were significantly modulated in both groups by focusing attention on the visual as compared to the auditory speech signal, with the NH listeners additionally showing an attention-dependent decrease in beta oscillation power (13-30 Hz). In sum, these results suggest remarkable deprivation effects on audiovisual speech processing in the auditory cortex, which partially reverse after implantation. Although even experienced CI users still show distinct audiovisual speech processing compared to NH listeners, pronounced effects of (top-down) direction of attention on (bottom-up) audiovisual processing can be observed in both groups. However, NH listeners but not CI users appear to show enhanced allocation of cognitive resources in visually attended as compared to auditorily attended audiovisual speech conditions, which supports our behavioural observations of poorer lipreading abilities and reduced visual influence on audition in NH listeners as compared to CI users.
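Purely as a hypothetical illustration of how band-limited oscillation power (e.g., theta 4-8 Hz, beta 13-30 Hz) can be quantified from an EEG segment (the sampling rate, segment length, and Welch approach are assumptions, not the study's analysis pipeline):

import numpy as np
from scipy.signal import welch

fs = 500                                    # assumed EEG sampling rate in Hz
rng = np.random.default_rng(3)
eeg = rng.normal(size=5 * fs)               # placeholder single-channel EEG segment (5 s)

freqs, psd = welch(eeg, fs=fs, nperseg=fs)  # power spectral density at 1 Hz resolution

def band_power(freqs, psd, lo, hi):
    # Approximate integral of the PSD over a frequency band
    band = (freqs >= lo) & (freqs <= hi)
    return np.sum(psd[band]) * (freqs[1] - freqs[0])

theta = band_power(freqs, psd, 4, 8)
beta = band_power(freqs, psd, 13, 30)
print(f"theta power = {theta:.3e}, beta power = {beta:.3e}")

In practice such band-power estimates would be computed per condition and latency window and then compared across groups, as in the analysis summarized above.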


Subject(s)
Acoustic Stimulation , Attention , Cochlear Implantation , Cochlear Implants , Deafness , Electroencephalography , Persons With Hearing Impairments , Photic Stimulation , Speech Perception , Humans , Male , Female , Middle Aged , Cochlear Implantation/instrumentation , Adult , Prospective Studies , Longitudinal Studies , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Deafness/physiopathology , Deafness/rehabilitation , Deafness/psychology , Case-Control Studies , Aged , Visual Perception , Lipreading , Time Factors , Hearing , Evoked Potentials, Auditory , Auditory Cortex/physiopathology , Evoked Potentials
19.
Int J Pediatr Otorhinolaryngol ; 180: 111968, 2024 May.
Article in English | MEDLINE | ID: mdl-38714045

ABSTRACT

AIM & OBJECTIVES: The study aimed to compare P1 latency and P1-N1 amplitude with receptive and expressive language ages in children using cochlear implant (CI) in one ear and a hearing aid (HA) in non-implanted ear. METHODS: The study included 30 children, consisting of 18 males and 12 females, aged between 48 and 96 months. The age at which the children received CI ranged from 42 to 69 months. A within-subject research design was utilized and participants were selected through purposive sampling. Auditory late latency responses (ALLR) were assessed using the Intelligent hearing system to measure P1 latency and P1-N1 amplitude. The assessment checklist for speech-language skills (ACSLS) was employed to evaluate receptive and expressive language age. Both assessments were conducted after cochlear implantation. RESULTS: A total of 30 children participated in the study, with a mean implant age of 20.03 months (SD: 8.14 months). The mean P1 latency and P1-N1 amplitude was 129.50 ms (SD: 15.05 ms) and 6.93 µV (SD: 2.24 µV) respectively. Correlation analysis revealed no significant association between ALLR measures and receptive or expressive language ages. However, there was significant negative correlation between the P1 latency and implant age (Spearman's rho = -0.371, p = 0.043). CONCLUSIONS: The study suggests that P1 latency which is an indicative of auditory maturation, may not be a reliable marker for predicting language outcomes. It can be concluded that language development is likely to be influenced by other factors beyond auditory maturation alone.


Subject(s)
Cochlear Implants , Language Development , Humans , Male , Female , Child, Preschool , Child , Cochlear Implantation/methods , Reaction Time/physiology , Deafness/surgery , Deafness/rehabilitation , Evoked Potentials, Auditory/physiology , Age Factors , Speech Perception/physiology
20.
Otol Neurotol ; 45(5): e381-e384, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38728553

ABSTRACT

OBJECTIVE: To examine patient preference after stapedotomy versus cochlear implantation in a unique case of a patient with symmetrical profound mixed hearing loss and similar postoperative speech perception improvement. PATIENTS: An adult patient with bilateral symmetrical far-advanced otosclerosis and profound mixed hearing loss. INTERVENTION: Stapedotomy in the left ear, cochlear implantation in the right ear. MAIN OUTCOME MEASURE: Performance on behavioral audiometry and subjective report of hearing and intervention preference. RESULTS: The patient successfully underwent left stapedotomy and subsequent cochlear implantation on the right side, per patient preference. Preoperative audiometric characteristics were similar between ears (pure-tone average [PTA]: R 114 dB, L 113 dB; word recognition score [WRS]: 22%). Postprocedural audiometry demonstrated significant improvement after stapedotomy (PTA: 59 dB, WRS: 75%) and after cochlear implantation (PTA: 20 dB, WRS: 60%). The patient subjectively reported a preference for the cochlear implant ear despite having substantial gains from stapedotomy. A nuanced discussion of the potentially overlooked benefits of cochlear implants in far-advanced otosclerosis is provided. CONCLUSION: In comparison with stapedotomy and hearing aids, cochlear implantation generally permits greater access to sound among patients with far-advanced otosclerosis. Although the cochlear implant literature mainly focuses on speech perception outcomes, an underappreciated benefit of cochlear implantation is the high likelihood of achieving "normal" sound levels across the audiogram.


Subject(s)
Cochlear Implantation , Otosclerosis , Speech Perception , Stapes Surgery , Humans , Otosclerosis/surgery , Stapes Surgery/methods , Cochlear Implantation/methods , Speech Perception/physiology , Treatment Outcome , Male , Middle Aged , Hearing Loss, Mixed Conductive-Sensorineural/surgery , Audiometry, Pure-Tone , Patient Preference , Female , Adult