Results 1 - 20 of 92
1.
Otol Neurotol ; 45(5): 564-571, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38728560

ABSTRACT

OBJECTIVE: To investigate the safety and feasibility of precise delivery of a long-acting gel formulation containing 6% dexamethasone (SPT-2101) to the round window membrane for the treatment of Ménière's disease. STUDY DESIGN: Prospective, unblinded, cohort study. SETTING: Tertiary care neurotology clinic. PATIENTS: Adults aged 18 to 85 years with a diagnosis of unilateral definite Ménière's disease per Bárány Society criteria. INTERVENTIONS: A single injection of a long-acting gel formulation into the round window niche under direct visualization. MAIN OUTCOME MEASURES: Procedure success rate, adverse events, and vertigo control. Vertigo control was measured with definitive vertigo days (DVDs), defined as any day with a vertigo attack lasting 20 minutes or longer. RESULTS: Ten subjects with unilateral Ménière's disease were enrolled. Precise placement of SPT-2101 at the round window was achieved in all subjects with in-office microendoscopy. Adverse events included one tympanic membrane perforation, which healed spontaneously after the study, and two instances of otitis media, which resolved with antibiotics. The average number of DVDs was 7.6 during the baseline month, decreasing to 3.3 by month 1, 3.7 by month 2, and 1.9 by month 3. Seventy percent of subjects had zero DVDs during the third month after treatment. CONCLUSIONS: SPT-2101 delivery to the round window is safe and feasible, and controlled trials are warranted to formally assess efficacy.
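
For readers unfamiliar with the outcome measure above, the following minimal Python sketch counts definitive vertigo days (any day with an attack lasting 20 minutes or longer) per 30-day month from a hypothetical attack diary; the diary data and field layout are illustrative and not from the study.

```python
from collections import defaultdict

# Hypothetical vertigo diary: (study_day, attack_duration_minutes).
# A definitive vertigo day (DVD) is any day with >= 1 attack lasting >= 20 minutes.
diary = [(3, 45), (3, 10), (9, 25), (17, 90), (38, 30), (71, 20)]

def dvds_per_month(diary, days_per_month=30):
    """Count definitive vertigo days within each 30-day window."""
    dvd_days = {day for day, minutes in diary if minutes >= 20}
    counts = defaultdict(int)
    for day in dvd_days:
        counts[(day - 1) // days_per_month + 1] += 1  # month 1, 2, 3, ...
    return dict(counts)

print(dvds_per_month(diary))  # e.g. {1: 3, 2: 1, 3: 1}
```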


Subject(s)
Dexamethasone , Meniere Disease , Round Window, Ear , Humans , Meniere Disease/drug therapy , Dexamethasone/administration & dosage , Dexamethasone/therapeutic use , Middle Aged , Male , Female , Aged , Adult , Treatment Outcome , Prospective Studies , Aged, 80 and over , Delayed-Action Preparations , Cohort Studies , Vertigo/drug therapy , Anti-Inflammatory Agents/administration & dosage , Anti-Inflammatory Agents/therapeutic use , Gels , Young Adult
2.
Ear Hear ; 45(2): 411-424, 2024.
Article in English | MEDLINE | ID: mdl-37811966

ABSTRACT

OBJECTIVES: Children with cochlear implants (CIs) vary widely in their ability to identify emotions in speech. The causes of this variability are unknown, but this knowledge will be crucial if we are to design improvements in technological or rehabilitative interventions that are effective for individual patients. The objective of this study was to investigate how well factors such as age at implantation, duration of device experience (hearing age), nonverbal cognition, vocabulary, and socioeconomic status predict prosody-based emotion identification in children with CIs, and how the key predictors in this population compare to those in children with normal hearing who are listening to either normal emotional speech or to degraded speech. DESIGN: We measured vocal emotion identification in 47 school-age CI recipients aged 7 to 19 years in a single-interval, 5-alternative forced-choice task. None of the participants had usable residual hearing based on parent/caregiver report. Stimuli consisted of a set of semantically emotion-neutral sentences that were recorded by 4 talkers in child-directed and adult-directed prosody corresponding to five emotions: neutral, angry, happy, sad, and scared. Twenty-one children with normal hearing were also tested in the same tasks; they listened to both original speech and to versions that had been noise-vocoded to simulate CI information processing. RESULTS: Group comparison confirmed the expected deficit in CI participants' emotion identification relative to participants with normal hearing. Within the CI group, increasing hearing age (correlated with developmental age) and nonverbal cognition outcomes predicted emotion recognition scores. Stimulus-related factors such as talker and emotional category also influenced performance and were involved in interactions with hearing age and cognition. Age at implantation was not predictive of emotion identification. Unlike in the CI participants, neither cognitive status nor vocabulary predicted outcomes in participants with normal hearing, whether listening to original speech or CI-simulated speech. Age-related improvements in outcomes were similar in the two groups. Participants with normal hearing listening to original speech showed the greatest differences in their scores for different talkers and emotions. Participants with normal hearing listening to CI-simulated speech showed significant deficits compared with their performance with original speech materials, and their scores also showed the least effect of talker- and emotion-based variability. CI participants showed more variation in their scores with different talkers and emotions than participants with normal hearing listening to CI-simulated speech, but less so than participants with normal hearing listening to original speech. CONCLUSIONS: Taken together, these results confirm previous findings that pediatric CI recipients have deficits in emotion identification based on prosodic cues, but they improve with age and experience at a rate similar to that of peers with normal hearing. Unlike in participants with normal hearing, nonverbal cognition played a significant role in CI listeners' emotion identification. Specifically, nonverbal cognition predicted the extent to which individual CI users could benefit from some talkers being more expressive of emotions than others, and this effect was greater in CI users who had less experience with their device (or were younger) than in CI users who had more experience with their device (or were older). Thus, in young prelingually deaf children with CIs performing an emotional prosody identification task, cognitive resources may be harnessed to a greater degree than in older prelingually deaf children with CIs or in children with normal hearing.


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Adult , Humans , Child , Aged , Hearing , Emotions
3.
Laryngoscope ; 134(3): 1381-1387, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37665102

ABSTRACT

OBJECTIVE: Music is a highly complex acoustic stimulus in both its spectral and temporal content. Accurate representation and delivery of high-fidelity information are essential for music perception. However, it is unclear how well bone-anchored hearing implants (BAHIs) transmit music. The study objective was to establish music perception performance baselines for BAHI users and normal-hearing (NH) listeners and compare outcomes between the cohorts. METHODS: A case-control, cross-sectional study was conducted among 18 BAHI users and 11 NH controls. Music perception was assessed via performance on seven major musical element tasks: pitch discrimination, melodic contour identification, rhythmic clocking, basic tempo discrimination, timbre identification, polyphonic pitch detection, and harmonic chord discrimination. RESULTS: BAHI users performed as well on all music perception tasks with their device as in the unilateral condition with their better-hearing ear. BAHI performance was not statistically significantly different from NH listeners' performance. BAHI users performed as well as, if not better than, NH listeners when using their non-implanted contralateral ear; there was no significant difference between the two groups except on the rhythmic timing (BAHI non-implanted ear 69% [95% CI: 62%-75%], NH 56% [95% CI: 49%-63%], p = 0.02) and basic tempo tasks (BAHI non-implanted ear 80% [95% CI: 65%-95%], NH 75% [95% CI: 68%-82%], p = 0.03). CONCLUSIONS: This study represents the first comprehensive assessment of basic music perception performance in BAHI users. Our results demonstrate that BAHI users perform as well with their implanted ear as with their contralateral better-hearing ear and as NH controls in the major elements of music perception. LEVEL OF EVIDENCE: 3. Laryngoscope, 134:1381-1387, 2024.


Subject(s)
Cochlear Implantation , Cochlear Implants , Music , Humans , Auditory Perception , Cross-Sectional Studies , Hearing , Pitch Perception
4.
Otol Neurotol ; 44(10): 965-977, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37758325

ABSTRACT

OBJECTIVE: Musical rehabilitation has been used in clinical and nonclinical contexts to improve postimplantation auditory processing in implanted individuals. This systematic review aimed to evaluate the efficacy of music rehabilitation in controlled experimental and quasi-experimental studies of speech and music perception in cochlear implant (CI) users. DATABASES REVIEWED: PubMed/MEDLINE, EMBASE, Web of Science, PsycARTICLES, and PsycINFO databases through July 2022. METHODS: Controlled experimental trials and prospective studies were included if they compared pretest and posttest data and excluded hearing aid-only users. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were then used to extract data from 11 included studies with a total of 206 pediatric and adult participants. Interventions included group music therapy, melodic contour identification training, auditory-motor instruction, or structured digital music training. Studies used heterogeneous outcome measures evaluating speech and music perception. Risk of bias was assessed using the National Heart, Lung, and Blood Institute Quality Assessment Tool. RESULTS: A total of 735 studies were screened, and 11 met the inclusion criteria. Six trials reported both speech and music outcomes, whereas five reported only music perception outcomes after the intervention relative to control. For music perception outcomes, significant findings included improvements in melodic contour identification (five studies, p < 0.05), timbre recognition (three studies, p < 0.05), and song appraisal (three studies, p < 0.05) in their respective trials. For speech prosody outcomes, only vocal emotion identification demonstrated significant improvements (two studies, p < 0.05). CONCLUSION: Music rehabilitation improves performance on multiple measures of music perception, as well as perception of tone-based characteristics of speech (i.e., emotional prosody). This suggests that rehabilitation may facilitate improvements in the discrimination of spectrally complex signals.


Subject(s)
Cochlear Implantation , Cochlear Implants , Music , Speech Perception , Adult , Humans , Child , Prospective Studies , Cochlear Implantation/rehabilitation , Auditory Perception , Pitch Perception
5.
Otol Neurotol ; 44(1): e8-e12, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36509436

ABSTRACT

HYPOTHESIS: Electrical tinnitus suppression by cochlear implants requires stimulation of a subset of neural elements in the cochlea. BACKGROUND: Tinnitus is the phantom perception of sound in the ears and is a known correlate of hearing loss. Cochlear implants restore hearing and are known to lessen or extinguish tinnitus. The amount of electrical charge required and the number and location of electrodes required to extinguish tinnitus with a cochlear implant are factors that remain poorly understood. METHODS: In a subject with single-sided deafness, with tinnitus in the deaf ear, we enabled single electrodes and groups of electrodes along the cochlea and increased the current until tinnitus was diminished or extinguished. We recorded the subject's perception of these changes using loudness scaling of both the electrical stimuli and the tinnitus. RESULTS: Tinnitus could be extinguished with individual electrodes and more effectively extinguished by activating a greater number of electrodes. Tinnitus suppression and loudness growth of the electrical stimuli were imperfectly correlated. CONCLUSION: Tinnitus suppression in this cochlear implant patient was achieved by electrically stimulating multiple distinct portions of the cochlea, and the cochlear neural substrate for tinnitus suppression may be distinct from that for auditory perception.


Subject(s)
Cochlear Implantation , Cochlear Implants , Tinnitus , Humans , Tinnitus/surgery , Cochlea/surgery , Hearing
6.
Laryngoscope ; 133(4): 938-947, 2023 04.
Article in English | MEDLINE | ID: mdl-35906889

ABSTRACT

OBJECTIVE: To evaluate the impact of vocal boost manipulations on cochlear implant (CI) musical sound quality appraisals. METHODS: An anonymous, online study was distributed to 33 CI users. Participants listened to auditory tokens and assessed the musical quality of acoustic stimuli with vocal boosting and attenuation using a validated sound quality rating scale. Four versions of real-world musical stimuli were created: a version with +9 dB vocal boost, a version with -9 dB vocal attenuation, a composite stimulus created by applying a 1,000 Hz low-pass filter and adding white noise ("anchor"), and an unaltered version ("hidden reference"). Subjects listened to all four versions and provided ratings based on a 100-point scale that reflected the perceived sound quality difference of the music clip relative to the reference excerpt. RESULTS: Vocal boost increased musical sound quality ratings relative to the reference clip (11.7; 95% CI, 1.62-21.8, p = 0.016) and vocal attenuation decreased musical sound quality ratings relative to the reference clip (28.5; 95% CI, 18.64-38.44, p < 0.001). When comparing the non-musically trained and musically trained groups, there was a significant difference in musical sound quality rating scores for the vocal boost condition (21.2; 95% CI: 1.76-40.7, p = 0.028). CONCLUSIONS: CI-mediated musical sound quality appraisals are impacted by vocal boost and attenuation. Musically trained CI users reported greater musical sound quality enhancement with vocal boost than CI users with no musical training background. Implementation of front-end vocal boost manipulations in music may improve sound quality and music appreciation among CI users. LEVEL OF EVIDENCE: 2 (Individual cohort study) Laryngoscope, 133:938-947, 2023.
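
The stimulus manipulation described above can be illustrated with a short Python sketch. It assumes separate vocal and accompaniment stems are available (the study's actual production pipeline is not described) and builds the four versions: +9 dB vocal boost, -9 dB vocal attenuation, a 1,000 Hz low-pass plus white-noise anchor, and the unaltered hidden reference. All signal values and the noise level are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def db_to_gain(db):
    return 10 ** (db / 20.0)

def make_versions(vocals, accompaniment, fs, noise_db=-30.0):
    """Build the four stimulus versions described in the study.

    Assumes separate vocal/accompaniment stems; mixing details are illustrative.
    """
    reference = vocals + accompaniment                      # hidden reference
    boosted = db_to_gain(+9) * vocals + accompaniment       # +9 dB vocal boost
    attenuated = db_to_gain(-9) * vocals + accompaniment    # -9 dB vocal attenuation

    # Anchor: 1,000 Hz low-pass filtered reference plus low-level white noise.
    sos = butter(4, 1000, btype="low", fs=fs, output="sos")
    noise = db_to_gain(noise_db) * np.random.randn(len(reference))
    anchor = sosfilt(sos, reference) + noise

    return {"reference": reference, "boost": boosted,
            "attenuate": attenuated, "anchor": anchor}

# Example with synthetic one-second stems at 44.1 kHz.
fs = 44100
t = np.arange(fs) / fs
vocals = 0.1 * np.sin(2 * np.pi * 220 * t)
accompaniment = 0.1 * np.sin(2 * np.pi * 110 * t)
versions = make_versions(vocals, accompaniment, fs)
```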


Subject(s)
Cochlear Implantation , Cochlear Implants , Humans , Cohort Studies , Sound , Auditory Perception
7.
Trends Hear ; 26: 23312165221120017, 2022.
Article in English | MEDLINE | ID: mdl-35983700

ABSTRACT

Cochlear implant (CI) users commonly report degraded musical sound quality. To improve CI-mediated music perception and enjoyment, we must understand factors that affect sound quality. In the present study, we utilize frequency response manipulation (FRM), a process that adjusts the energies of frequency bands within an audio signal, to determine its impact on CI-user sound quality assessments of musical stimuli. Thirty-three adult CI users completed an online study and listened to FRM-altered clips derived from the top songs in Billboard magazine. Participants assessed sound quality using the MUltiple Stimulus with Hidden Reference and Anchor for CI users (CI-MUSHRA) rating scale. FRM affected sound quality ratings (SQR). Specifically, increasing the gain for low and mid-range frequencies led to higher quality ratings than reducing them. In contrast, manipulating the gain for high frequencies (those above 2 kHz) had no impact. Participants with musical training were more sensitive to FRM than non-musically trained participants and demonstrated a preference for gain increases over reductions. These findings suggest that, even among CI users, past musical training provides listeners with sensitivity to subtleties in musical appraisal, even though their hearing is now mediated electrically and bears little resemblance to their musical experience prior to implantation. Increased gain below 2 kHz may lead to higher sound quality than equivalent reductions, perhaps because it offers greater access to lyrics in songs or because it provides more salient beat sensations.
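
As a rough illustration of frequency response manipulation, the sketch below scales the energy of one frequency band (here, everything below 2 kHz) in the FFT domain. The band edges and gain are illustrative assumptions; the study's actual FRM filters and band definitions are not reproduced.

```python
import numpy as np

def apply_band_gain(signal, fs, gain_db, f_lo=0.0, f_hi=2000.0):
    """Scale the energy of one frequency band (f_lo..f_hi Hz) by gain_db.

    A crude FFT-domain illustration of frequency response manipulation;
    real FRM processing would use properly designed filters.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    spectrum[band] *= 10 ** (gain_db / 20.0)
    return np.fft.irfft(spectrum, n=len(signal))

# Boost low/mid frequencies (below 2 kHz) by +6 dB as one example manipulation.
fs = 16000
t = np.arange(fs) / fs
clip = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
boosted = apply_band_gain(clip, fs, gain_db=+6.0, f_lo=0.0, f_hi=2000.0)
```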


Subject(s)
Cochlear Implantation , Cochlear Implants , Music , Adult , Auditory Perception/physiology , Humans , Sound
8.
Laryngoscope Investig Otolaryngol ; 7(1): 250-258, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35155805

ABSTRACT

OBJECTIVES: To explore the effects of obligatory lexical tone learning on speech emotion recognition and the cross-cultural differences between the United States and Taiwan in speech emotion understanding in children with cochlear implants. METHODS: This cohort study enrolled 60 Mandarin-speaking, school-aged children with cochlear implants (cCI) who underwent implantation before 5 years of age and 53 normal-hearing children (cNH) in Taiwan. Emotion recognition and sensitivity to fundamental frequency (F0) changes for these school-aged cNH and cCI (6-17 years old) were examined at a tertiary referral center. RESULTS: The mean emotion recognition score of the cNH group was significantly better than that of the cCI group. Female speakers' vocal emotions were more easily recognized than male speakers' emotions. There was a significant effect of age at test on vocal emotion recognition performance. The average score of cCI with full-spectrum speech was close to the average score of cNH with eight-channel narrowband vocoder speech. The average voice emotion recognition performance across speakers for cCI could be predicted by their sensitivity to changes in F0. CONCLUSIONS: Better pitch discrimination ability comes with better voice emotion recognition for Mandarin-speaking cCI. Beyond F0 cues, cCI are likely to adapt their voice emotion recognition by relying more on secondary cues such as intensity and duration. Although cross-cultural differences exist in the acoustic features of vocal emotion, Mandarin-speaking cCI and their English-speaking cCI peers both showed a positive effect of age at test on emotion recognition, suggesting a learning effect and brain plasticity. Therefore, further device/processor development to improve the presentation of pitch information, together with more rehabilitative effort, is needed to improve the transmission and perception of vocal emotion in Mandarin. LEVEL OF EVIDENCE: 3.
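
The eight-channel narrowband vocoder used for the normal-hearing comparison can be approximated with a standard noise-excited channel vocoder, sketched below under assumed band edges (100-7000 Hz, log-spaced) and a 50 Hz envelope cutoff; these parameters are illustrative, not the study's.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, env_cut=50.0):
    """Simple noise-excited channel vocoder (illustrative CI simulation).

    Band edges are spaced logarithmically; envelopes are extracted by
    full-wave rectification and low-pass filtering, then used to
    modulate band-limited noise carriers.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(band_sos, signal)
        envelope = sosfiltfilt(env_sos, np.abs(band))       # rectify + LPF
        carrier = sosfilt(band_sos, np.random.randn(len(signal)))
        out += np.clip(envelope, 0, None) * carrier
    return out

# Example with a synthetic, speech-like amplitude-modulated tone.
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 200 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
simulated = noise_vocode(speech_like, fs)
```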

9.
Ear Hear ; 43(3): 862-873, 2022.
Article in English | MEDLINE | ID: mdl-34812791

ABSTRACT

OBJECTIVES: Variations in loudness are a fundamental component of the music listening experience. Cochlear implant (CI) processing, including amplitude compression, and a degraded auditory system may further degrade these loudness cues and decrease the enjoyment of music listening. This study aimed to identify optimal CI sound processor compression settings to improve music sound quality for CI users. DESIGN: Fourteen adult MED-EL CI recipients participated in the study (Experiment No. 1: n = 17 ears; Experiment No. 2: n = 11 ears). A software application using a modified comparison category rating (CCR) test method allowed participants to compare and rate the sound quality of various CI compression settings while listening to 25 real-world music clips. The two compression settings studied were (1) Maplaw, which informs the audibility and compression of soft-level sounds, and (2) automatic gain control (AGC), which applies compression to loud sounds. For each experiment, one compression setting (Maplaw or AGC) was held at the default, while the other was varied according to the values available in the clinical CI programming software. Experiment No. 1 compared Maplaw settings of 500, 1000 (default), and 2000. Experiment No. 2 compared AGC settings of 2.5:1, 3:1 (default), and 3.5:1. RESULTS: In Experiment No. 1, the group preferred a higher Maplaw setting of 2000 over the default Maplaw setting of 1000 (p = 0.003) for music listening. There was no significant difference in music sound quality between the Maplaw setting of 500 and the default setting (p = 0.278). In Experiment No. 2, a main effect of AGC setting was found; however, no significant difference in sound quality ratings for pairwise comparisons was found between the experimental settings and the default setting (2.5:1 versus 3:1 at p = 0.546; 3.5:1 versus 3:1 at p = 0.059). CONCLUSIONS: CI users reported improvements in music sound quality with higher-than-default Maplaw or AGC settings. Thus, participants preferred slightly higher compression for music listening, with results having clinical implications for improving music perception in CI users.
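
To make the AGC ratios concrete, the sketch below shows the static input/output behavior of a generic compressor: with a 3:1 ratio, every 3 dB of input above the knee yields only 1 dB of output. The threshold and levels are illustrative; this is not the MED-EL Maplaw or AGC implementation.

```python
import numpy as np

def compress_level_db(input_db, threshold_db=65.0, ratio=3.0):
    """Static input/output curve of a simple compressor.

    Levels above threshold_db grow at 1/ratio of the input rate; the
    threshold and input levels here are illustrative, not clinical values.
    """
    input_db = np.asarray(input_db, dtype=float)
    over = np.maximum(input_db - threshold_db, 0.0)
    return input_db - over * (1.0 - 1.0 / ratio)

levels = np.array([55.0, 65.0, 75.0, 85.0])
for ratio in (2.5, 3.0, 3.5):
    print(ratio, compress_level_db(levels, ratio=ratio))
# At 85 dB input (20 dB over threshold), a 3:1 ratio outputs ~71.7 dB.
```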


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Music , Adult , Auditory Perception , Deafness/rehabilitation , Humans , Sound
11.
Ear Hear ; 42(3): 732-743, 2021.
Article in English | MEDLINE | ID: mdl-33538429

ABSTRACT

OBJECTIVES: To determine the sources of variability for cochlear duct length (CDL) measurements for the purposes of fine-tuning cochlear implants (CIs) and to propose a set of standardized landmarks for computed tomography (CT) pitch mapping. DESIGN: This was a retrospective cohort study involving 21 CI users at a tertiary referral center. The intervention involved flat-panel CT image acquisition and secondary reconstructions of CIs in vivo. The main outcome measures were CDL measurements, CI electrode localization measurements, and frequency calculations. RESULTS: Direct CT-based measurements of CI and intracochlear landmarks are methodologically valid, with a percent error of 1.0% ± 0.9%. Round window (RW) position markers (anterior edge, center, or posterior edge) and bony canal wall localization markers (medial edge, duct center, or lateral edge) significantly impact CDL calculations [F(2, 78) = 9.9, p < 0.001 and F(2, 78) = 1806, p < 0.001, respectively]. These pitch distortions could be as large as 11 semitones. When using predefined anatomical landmarks, there was still a difference between researchers [F(2, 78) = 12.5; p < 0.001], but the average variability of electrode location was reduced to differences of 1.6 semitones (from 11 semitones). CONCLUSIONS: A lack of standardization regarding RW and bony canal wall landmarks results in substantial CDL measurement variability and distorted pitch map calculations. We propose using the posterior edge of the RW and the lateral bony wall as standardized anatomical parameters for CDL calculations in CI users to improve pitch map calculations. More accurate and precise pitch maps may improve CI-associated pitch outcomes.
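
The abstract does not state which place-frequency function underlies its pitch map calculations; a common choice is the Greenwood function, sketched below to show how a CDL discrepancy shifts an electrode's estimated characteristic frequency by a couple of semitones. The CDL values and electrode depth are illustrative, not the study's data.

```python
import math

def greenwood_hz(fraction_from_apex, A=165.4, a=2.1, k=0.88):
    """Greenwood place-frequency function for the human cochlea (assumed here).

    fraction_from_apex: electrode position as a proportion of cochlear
    duct length, 0.0 at the apex and 1.0 at the base.
    """
    return A * (10 ** (a * fraction_from_apex) - k)

def semitone_difference(f1_hz, f2_hz):
    """Distance between two frequencies in semitones."""
    return 12.0 * math.log2(f2_hz / f1_hz)

# Illustrative effect of a CDL discrepancy: the same electrode sits
# 15.0 mm from the round window on a 33 mm vs. a 35 mm cochlear duct.
for cdl_mm in (33.0, 35.0):
    x = 1.0 - 15.0 / cdl_mm          # convert to fraction from apex
    print(cdl_mm, round(greenwood_hz(x), 1))
print(round(semitone_difference(
    greenwood_hz(1.0 - 15.0 / 33.0), greenwood_hz(1.0 - 15.0 / 35.0)), 2))
```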


Subject(s)
Cochlear Implantation , Cochlear Implants , Cochlear Duct/surgery , Humans , Retrospective Studies , Tomography, X-Ray Computed
12.
Front Neurosci ; 15: 588914, 2021.
Article in English | MEDLINE | ID: mdl-33584187

ABSTRACT

Attentional limits make it difficult to comprehend concurrent speech streams. However, multiple musical streams are processed comparatively easily. Coherence may be a key difference between music and stimuli such as speech, whose comprehension does not rely on the integration of multiple streams. The musical organization between melodies in a composition may provide a cognitive scaffold to overcome attentional limitations when perceiving multiple lines of music concurrently. We investigated how listeners attend to multi-voiced music, examining biological indices associated with processing structured versus unstructured music. We predicted that musical structure provides coherence across distinct musical lines, allowing listeners to attend to simultaneous melodies, and that a lack of organization causes simultaneous melodies to be heard as separate streams. Musician participants attended to melodies in a Coherent music condition featuring flute duets and a Jumbled condition where those duets were manipulated to eliminate coherence between the parts. Auditory-evoked cortical potentials were recorded in response to a tone probe. Analysis focused on the N100 response, which is primarily generated within the auditory cortex and is larger for attended versus ignored stimuli. Results suggest that participants did not attend to one line over the other when listening to Coherent music, instead perceptually integrating the streams. Yet, for the Jumbled music, effects indicate that participants attended to one line while ignoring the other, abandoning their integration. Our findings lend support to the theory that musical organization aids attention when perceiving multi-voiced music.
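
A minimal sketch of the kind of analysis described: epoch the EEG around tone-probe onsets, average, baseline-correct, and take the mean amplitude in an assumed 80-120 ms N100 window. The window, epoch length, and single-channel handling are assumptions, not the study's exact pipeline.

```python
import numpy as np

def n100_amplitude(eeg, probe_onsets, fs, window=(0.08, 0.12), baseline=(-0.1, 0.0)):
    """Average epochs around probe onsets and return the mean amplitude in
    an assumed N100 window (80-120 ms post-onset), baseline-corrected.

    eeg: 1-D array for a single channel; probe_onsets: sample indices.
    """
    pre = int(0.1 * fs)
    post = int(0.3 * fs)
    epochs = np.array([eeg[o - pre:o + post] for o in probe_onsets
                       if o - pre >= 0 and o + post <= len(eeg)])
    erp = epochs.mean(axis=0)
    t = np.arange(-pre, post) / fs
    erp -= erp[(t >= baseline[0]) & (t < baseline[1])].mean()   # baseline correction
    return erp[(t >= window[0]) & (t <= window[1])].mean()
```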

13.
Neurosurg Focus Video ; 5(2): V6, 2021 Oct.
Article in English | MEDLINE | ID: mdl-36285245

ABSTRACT

Intravestibular schwannomas are rare tumors of the intralabyrinthine region and involve different management considerations than the more common vestibular schwannomas. In this report, the authors review a case of a 52-year-old woman who presented with hearing loss and vestibular symptoms and was found to have a left intravestibular schwannoma. Given her debilitating vestibular symptoms, she underwent microsurgical resection. In this video, the authors review the relevant anatomy, surgical technique, and management considerations in these patients. The video can be found here: https://stream.cadmore.media/r10.3171/2021.7.FOCVID2187.

14.
Laryngoscope ; 131(5): E1695-E1698, 2021 05.
Article in English | MEDLINE | ID: mdl-33252138

ABSTRACT

This case report presents the successful use of multiple treatments of electroconvulsive therapy (ECT) in a patient with a cochlear implant (CI). A 60-year-old man with a left-sided CI and bipolar disorder presented with severe depression. A total of nine separate sessions of unilateral ECT were administered contralateral to the existing CI. We collected subjective, clinical, and audiological assessments of the patient and the CI before, during, and after ECT. The patient tolerated ECT well, and there were no complications. Unilateral ECT was performed contralateral to the CI without any harm to the patient or the implant. Laryngoscope, 131:E1695-E1698, 2021.


Subject(s)
Bipolar Disorder/therapy , Cochlear Implants , Electroconvulsive Therapy/methods , Cochlear Implantation/instrumentation , Electroconvulsive Therapy/adverse effects , Humans , Male , Meniere Disease/surgery , Middle Aged , Treatment Outcome
15.
Otol Neurotol ; 41(4): e422-e431, 2020 04.
Article in English | MEDLINE | ID: mdl-32176126

ABSTRACT

OBJECTIVE: Cochlear implant (CI) users struggle with tasks of pitch-based prosody perception. Pitch pattern recognition is vital for both music comprehension and understanding the prosody of speech, which signals emotion and intent. Research in normal-hearing individuals shows that auditory-motor training, in which participants produce the auditory pattern they are learning, is more effective than passive auditory training. We investigated whether auditory-motor training of CI users improves complex sound perception, such as vocal emotion recognition and pitch pattern recognition, compared with purely auditory training. STUDY DESIGN: Prospective cohort study. SETTING: Tertiary academic center. PATIENTS: Fifteen postlingually deafened adults with CIs. INTERVENTION(S): Participants were divided into three one-month training groups: auditory-motor (intervention), auditory-only (active control), and no training (control). Auditory-motor training was conducted with the "Contours" software program and auditory-only training was completed with the "AngelSound" software program. MAIN OUTCOME MEASURE: Pre- and posttest examinations included tests of speech perception (consonant-nucleus-consonant, hearing-in-noise test sentence recognition), speech prosody perception, pitch discrimination, and melodic contour identification. RESULTS: Participants in the auditory-motor training group performed better than those in the auditory-only and no-training groups (p < 0.05) on the melodic contour identification task. No significant training effect was noted on tasks of speech perception, speech prosody perception, or pitch discrimination. CONCLUSIONS: These data suggest that short-term auditory-motor music training of CI users improves pitch pattern recognition. This study offers approaches for enriching the world of complex sound in the CI user.


Subject(s)
Cochlear Implantation , Cochlear Implants , Music , Speech Perception , Adult , Auditory Perception , Humans , Pitch Perception , Prospective Studies
16.
Ear Hear ; 41(5): 1372-1382, 2020.
Article in English | MEDLINE | ID: mdl-32149924

ABSTRACT

OBJECTIVES: Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectory known in this population prompted us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN: Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users aged 7 to 19 years, with no cognitive or visual impairments, who used oral communication with English as their primary language participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires of CI and hearing history. It was predicted that the reduced prosodic variations found in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history may serve as predictors of performance on vocal emotion recognition. RESULTS: Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed, in that higher scores were found for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions; for the female talker in the CDS condition, participants had high sensitivity (d' scores) to happy sentences and low sensitivity to neutral sentences, while for the ADS condition, low sensitivity was found for scared sentences. CONCLUSIONS: In general, participants had higher vocal emotion recognition in the CDS condition, which had more variability in pitch and intensity and thus more exaggerated prosody, than in the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, particularly with adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, from both an auditory communication standpoint and a socio-developmental perspective.
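
The d' sensitivity scores mentioned above are standard signal-detection measures; the sketch below computes d' for one emotion category from a confusion matrix (hit rate versus false-alarm rate) with a conventional correction for extreme rates. The confusion matrix shown is made up for illustration.

```python
import numpy as np
from scipy.stats import norm

def d_prime_for_category(confusion, category, n_trials_per_row):
    """d' for one emotion from a confusion matrix (rows = presented,
    columns = responded). Hit rate: responses to that emotion when it was
    presented; false-alarm rate: responses to it when another emotion was
    presented. A 1/(2N) correction avoids infinite z-scores.
    """
    confusion = np.asarray(confusion, dtype=float)
    hits = confusion[category, category]
    fas = confusion[:, category].sum() - hits
    n_fa = n_trials_per_row * (confusion.shape[0] - 1)

    def corrected_rate(count, n):
        return min(max(count / n, 1.0 / (2 * n)), 1.0 - 1.0 / (2 * n))

    return norm.ppf(corrected_rate(hits, n_trials_per_row)) - norm.ppf(corrected_rate(fas, n_fa))

# Illustrative 5-emotion confusion matrix with 12 trials per emotion.
cm = [[9, 1, 1, 0, 1],
      [1, 8, 1, 1, 1],
      [2, 1, 7, 1, 1],
      [0, 1, 1, 9, 1],
      [1, 1, 1, 1, 8]]
print(round(d_prime_for_category(cm, category=0, n_trials_per_row=12), 2))
```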


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Adolescent , Adult , Child , Emotions , Female , Humans , Male , Speech , Young Adult
17.
Neuroimage ; 209: 116496, 2020 04 01.
Article in English | MEDLINE | ID: mdl-31899286

ABSTRACT

Improvisation is sometimes described as instant composition and offers a glimpse into real-time musical creativity. Over the last decade, researchers have built up our understanding of the core neural activity patterns associated with musical improvisation by investigating cohorts of professional musicians. However, since creative behavior calls on the unique individuality of an artist, averaging data across musicians may dilute important aspects of the creative process. By performing case study investigations of world-class artists, we may gain insight into their unique creative abilities and achieve a deeper understanding of the biological basis of musical creativity. In this experiment, functional magnetic resonance imaging and functional connectivity analyses were used to study the neural correlates of improvisation in the famed Classical music performer and improviser Gabriela Montero (GM). GM completed two control tasks of varying musical complexity; in the Scale condition she repeatedly played a chromatic scale, and in the Memory condition she performed a given composition from memory. In the experimental Improvisation condition, she performed improvisations. Thus, we were able to compare the neural activity that underlies a generative musical task like improvisation to 'rote' musical tasks of playing pre-learned and pre-memorized music. In GM, improvisation was largely associated with activation of auditory, frontal/cognitive, motor, parietal, occipital, and limbic areas, suggesting that improvisation is a multimodal activity for her. Functional connectivity analysis suggests that the visual network, default mode network, and subcortical networks are involved in improvisation as well. While these findings should not be generalized to other samples or populations, the results here offer insight into the brain activity that underlies GM's unique abilities to perform Classical-style musical improvisations.


Subject(s)
Cerebral Cortex/physiology , Connectome , Creativity , Limbic System/physiology , Music , Nerve Net/physiology , Psychomotor Performance/physiology , Cerebral Cortex/diagnostic imaging , Female , Humans , Limbic System/diagnostic imaging , Magnetic Resonance Imaging , Middle Aged , Nerve Net/diagnostic imaging
18.
Ann N Y Acad Sci ; 1453(1): 22-28, 2019 10.
Article in English | MEDLINE | ID: mdl-31168793

ABSTRACT

Cochlear implants (CIs) are biomedical devices that provide sound to people with severe-to-profound hearing loss by direct electrical stimulation of auditory neurons in the cochlea. Despite the remarkable achievements with respect to speech perception in quiet environments, music perception with CIs remains generally poor due to the degradation of auditory input. Prior studies have shown that both pitch perception and timbre discrimination are poor in CI users, whereas performance on rhythmic tasks is nearly equivalent to that of normal-hearing participants. There are several caveats, however, to this generalization regarding rhythm processing for CI users. The purpose of this article is to summarize the literature on rhythmic perception for CI users while highlighting important limitations within these studies. We will also identify areas for future research and development of CI-mediated music processing. It is likely that rhythm processing will continue to advance as our understanding of electrical current delivery to the auditory nerve improves.


Subject(s)
Auditory Perception/physiology , Cochlear Implants , Music , Periodicity , Humans , Pitch Perception/physiology , Speech Perception/physiology
19.
J Assoc Res Otolaryngol ; 20(3): 247-262, 2019 06.
Article in English | MEDLINE | ID: mdl-30815761

ABSTRACT

Cochlear implant (CI) biomechanical constraints result in impoverished spectral cues and poor frequency resolution, making it difficult for users to perceive pitch and timbre. There is emerging evidence that music training may improve CI-mediated music perception; however, many of the existing studies involve time-intensive and less readily accessible in-person music training paradigms, without rigorous experimental controls. Online resources for auditory rehabilitation remain a largely untapped resource for CI users. Furthermore, establishing immediate value from an acute music training program may encourage CI users to adhere to post-implantation rehabilitation exercises. In this study, we evaluated the impact of an acute online music training program on pitch discrimination and timbre identification. Via a randomized controlled crossover study design, 20 CI users and 21 normal-hearing (NH) adults were assigned to one of two arms. Arm-A underwent 1 month of online self-paced music training (intervention) followed by 1 month of audiobook listening (control). Arm-B underwent 1 month of audiobook listening followed by 1 month of music training. Pitch and timbre sensitivity scores were taken across three visits: (1) baseline, (2) after 1 month of intervention, and (3) after 1 month of control. We found that pitch discrimination performance improved among both CI users and NH listeners with either online music training or audiobook listening. Music training, however, provided slightly greater benefit for instrument identification than audiobook listening. For both tasks, this improvement appears to be related to both fast stimulus learning and procedural learning. In conclusion, auditory training (with either acute participation in an online music training program or audiobook listening) may improve performance on untrained tasks of pitch discrimination and timbre identification. These findings demonstrate a potential role for music training in perceptual auditory appraisal of complex stimuli. Furthermore, this study highlights the need for more tightly controlled training studies in order to accurately evaluate the impact of rehabilitation training protocols on auditory processing.


Subject(s)
Cochlear Implants , Music Therapy , Music , Pitch Discrimination , Adult , Aged , Aged, 80 and over , Cross-Over Studies , Humans , Middle Aged , Young Adult
20.
Ann Otol Rhinol Laryngol ; 128(6): 508-515, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30744390

ABSTRACT

OBJECTIVE: To develop and validate an automated smartphone app that determines bone-conduction pure-tone thresholds. METHODS: A novel app, called EarBone, was developed as an automated test to determine best-cochlea pure-tone bone-conduction thresholds using a smartphone driving a professional-grade bone oscillator. Adult, English-speaking patients who were undergoing audiometric assessment by audiologists at an academic health system as part of their prescribed care were invited to use the EarBone app. Bone-conduction thresholds determined by the app were compared with those determined by the audiologist (the gold standard). RESULTS: Forty subjects with varied hearing thresholds were tested. Sixty-one percent of app-determined thresholds were within 5 dB of audiologist-determined thresholds, and 79% were within 10 dB. Nearly all subjects required assistance with placing the bone oscillator on their mastoid. CONCLUSION: Best-cochlea bone-conduction thresholds determined by the EarBone automated smartphone audiometry app approximate those determined by an audiologist. This serves as a proof of concept for automated smartphone-based bone-conduction threshold testing. Further improvements, such as the addition of contralateral ear masking, are needed to make the app clinically useful.
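
The agreement metric reported above (percentage of app thresholds within 5 dB or 10 dB of the audiologist's) reduces to a simple paired comparison, sketched below with made-up threshold pairs.

```python
def within_tolerance(app_db, audiologist_db, tolerance_db):
    """Percentage of paired thresholds agreeing within tolerance_db."""
    pairs = list(zip(app_db, audiologist_db))
    hits = sum(abs(a - b) <= tolerance_db for a, b in pairs)
    return 100.0 * hits / len(pairs)

# Hypothetical paired bone-conduction thresholds (dB HL).
app = [20, 35, 45, 10, 55, 25, 40, 15]
audiologist = [20, 30, 55, 10, 50, 35, 40, 20]
print(within_tolerance(app, audiologist, 5))   # % within 5 dB
print(within_tolerance(app, audiologist, 10))  # % within 10 dB
```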


Subject(s)
Audiometry/instrumentation , Audiometry/methods , Auditory Threshold , Bone Conduction , Hearing Loss, Conductive/diagnosis , Hearing Loss, Sensorineural/diagnosis , Smartphone , Software Validation , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Proof of Concept Study , Young Adult