Results 1 - 4 of 4
1.
bioRxiv; 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38187580

ABSTRACT

Sound is jointly processed along acoustic and emotional dimensions. These dimensions can become distorted and entangled in persons with sensory disorders, producing a spectrum of loudness hypersensitivity, phantom percepts, and - in some cases - debilitating sound aversion. Here, we looked for objective signatures of disordered hearing (DH) in the human face. Pupil dilations and micro facial movement amplitudes scaled with sound valence in neurotypical listeners but not DH participants with chronic tinnitus (phantom ringing) and sound sensitivity. In DH participants, emotionally evocative sounds elicited abnormally large pupil dilations but blunted and invariant facial reactions that jointly provided an accurate prediction of individual tinnitus and hyperacusis questionnaire handicap scores. By contrast, EEG measures of central auditory gain identified steeper neural response growth functions but no association with symptom severity. These findings highlight dysregulated affective sound processing in persons with bothersome tinnitus and sound sensitivity disorders and introduce approaches for their objective measurement.
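The joint prediction of questionnaire handicap scores from pupil and facial measures described above amounts to a multivariate regression problem. The following is a minimal sketch of that kind of analysis, not the authors' pipeline: the feature set, the ridge regression model, and the cross-validation scheme are illustrative assumptions, and the data are random placeholders.

# Hypothetical sketch: predicting tinnitus/hyperacusis handicap scores from
# pupil and facial-movement features. Feature layout, model, and CV scheme are
# assumptions for illustration, not the paper's method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects = 40

# Placeholder features: e.g., mean pupil dilation and facial-movement amplitude
# for pleasant, neutral, and unpleasant sounds (2 measures x 3 valence levels).
X = rng.normal(size=(n_subjects, 6))
# Placeholder handicap scores (e.g., tinnitus or hyperacusis questionnaire totals).
y = rng.normal(loc=40, scale=15, size=n_subjects)

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
y_hat = cross_val_predict(model, X, y, cv=5)
print(f"cross-validated prediction-observation r = {np.corrcoef(y, y_hat)[0, 1]:.2f}")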

2.
Front Neurosci; 17: 1000079, 2023.
Article in English | MEDLINE | ID: mdl-36777633

ABSTRACT

The binaural system utilizes interaural timing cues to improve the detection of auditory signals presented in noise. In humans, the binaural mechanisms underlying this phenomenon cannot be directly measured and hence remain contentious. As an alternative, we trained modified autoencoder networks to mimic human-like behavior in a binaural detection task. The autoencoder architecture emphasizes interpretability and, hence, we "opened it up" to see if it could infer latent mechanisms underlying binaural detection. We found that the optimal networks automatically developed artificial neurons with sensitivity to timing cues and with dynamics consistent with a cross-correlation mechanism. These computations were similar to neural dynamics reported in animal models. That these computations emerged to account for human hearing attests to their generality as a solution for binaural signal detection. This study examines the utility of explanatory-driven neural network models and how they may be used to infer mechanisms of audition.
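The latent units described above are reported to behave like an interaural cross-correlation mechanism. As a point of reference (not the paper's autoencoder), a minimal numpy sketch of that classical cross-correlation computation, with illustrative stimulus parameters, is:

# Sketch of interaural cross-correlation, the mechanism the paper's latent units
# are reported to resemble. Signal parameters are illustrative, not the study's
# stimuli or network.
import numpy as np

fs = 16000                     # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)  # 100 ms of signal
itd = 0.0005                   # 0.5 ms interaural time difference

# A 500 Hz tone in noise, delayed at the right ear by the ITD.
left = np.sin(2 * np.pi * 500 * t) + 0.5 * np.random.randn(t.size)
right = np.sin(2 * np.pi * 500 * (t - itd)) + 0.5 * np.random.randn(t.size)

# Correlate the two ears over a +/-1 ms range of internal delays.
max_lag = int(0.001 * fs)
lags = np.arange(-max_lag, max_lag + 1)
ccf = [np.dot(left[max_lag:-max_lag], np.roll(right, -lag)[max_lag:-max_lag])
       for lag in lags]

best_delay = lags[int(np.argmax(ccf))] / fs
print(f"estimated ITD: {best_delay * 1e3:.2f} ms")  # should be near 0.5 ms

In the paper's framing, the point of interest is that units with this kind of delay-tuned, cross-correlation-like behavior emerged from training on the detection task rather than being built in by hand.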

3.
Prog Brain Res; 260: 283-300, 2021.
Article in English | MEDLINE | ID: mdl-33637224

ABSTRACT

The identification of phenotypes within populations with troublesome tinnitus is an important step towards individualizing tinnitus treatments to achieve optimal outcomes. However, previous application of clustering algorithms has called into question the existence of distinct tinnitus-related phenotypes. In this study, we attempted to characterize patients' symptom-based phenotypes as subpopulations in a Gaussian mixture model (GMM), and subsequently performed a comparison with tinnitus reporting. We were able to effectively evaluate the statistical models using cross-validation to establish the number of phenotypes in the cohort, or a lack thereof. We examined a cohort of adult cochlear implant (CI) users, a patient group for which a relation between psychological symptoms (anxiety, depression, or insomnia) and troublesome tinnitus has previously been shown. Accordingly, individual item scores on the Hospital Anxiety and Depression Scale (HADS; 14 items) and the Insomnia Severity Index (ISI; 7 items) were selected as features for training the GMM. The resulting model indicated four symptom-based subpopulations, some primarily linked to one major symptom (e.g., anxiety), and others linked to varying severity across all three symptoms. The presence of tinnitus was self-reported and tinnitus-related handicap was characterized using the Tinnitus Handicap Inventory. Specific symptom profiles were found to be significantly associated with CI users' tinnitus characteristics. GMMs are a promising machine learning tool for identifying psychological symptom-based phenotypes, which may be relevant to determining appropriate tinnitus treatment.
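A minimal sketch of the general approach described above, fitting Gaussian mixture models to per-item questionnaire scores and choosing the number of subpopulations by held-out likelihood; the data are random placeholders, and the covariance structure and candidate range are assumptions rather than the study's settings.

# Sketch of symptom-based phenotyping with a GMM: 21 features per participant
# (14 HADS items + 7 ISI items), number of components chosen by cross-validated
# log-likelihood. Data and hyperparameters are placeholders, not the study's.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(120, 21)).astype(float)  # placeholder item scores

def cv_log_likelihood(X, n_components, n_splits=5):
    """Mean held-out log-likelihood per sample for a given number of components."""
    scores = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=0).split(X):
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",
                              random_state=0).fit(X[train_idx])
        scores.append(gmm.score(X[test_idx]))  # avg log-likelihood on held-out rows
    return float(np.mean(scores))

cv_scores = {k: cv_log_likelihood(X, k) for k in range(1, 7)}
best_k = max(cv_scores, key=cv_scores.get)
print("held-out log-likelihoods:", cv_scores)
print("selected number of phenotypes:", best_k)

# Assign each participant to its most probable subpopulation.
labels = GaussianMixture(best_k, covariance_type="diag", random_state=0).fit_predict(X)

Evaluating candidate models on held-out participants is one way to test whether distinct phenotypes exist at all: if a single-component model achieves the best held-out likelihood, the data provide no support for separate subpopulations.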


Subject(s)
Cochlear Implants, Tinnitus, Adult, Anxiety/complications, Humans, Machine Learning, Sleep Initiation and Maintenance Disorders, Surveys and Questionnaires, Tinnitus/complications
4.
Acta Acust United Acust; 104(5): 922-925, 2018.
Article in English | MEDLINE | ID: mdl-30369861

ABSTRACT

When presented with two vowels simultaneously, humans are often able to identify the constituent vowels. Computational models exist that simulate this ability; however, they predict listener confusions poorly, particularly when the two vowels have the same fundamental frequency. Presented here is a model that is uniquely able to predict the combined representation of concurrent vowels. The model predicts listeners' systematic perceptual decisions with a high degree of accuracy.
