Results 1 - 20 of 23,232
1.
Sci Rep ; 14(1): 13089, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38849415

ABSTRACT

Speech-in-noise (SIN) perception is a primary complaint of individuals with audiometric hearing loss. SIN performance varies drastically, even among individuals with normal hearing. The present genome-wide association study (GWAS) investigated the genetic basis of SIN deficits in individuals with self-reported normal hearing in quiet situations. GWAS was performed on 279,911 individuals from the UK Biobank cohort, with 58,847 reporting SIN deficits despite reporting normal hearing in quiet. GWAS identified 996 single nucleotide polymorphisms (SNPs) achieving genome-wide significance (p < 5 × 10⁻⁸) across four genomic loci, and 720 SNPs across 21 loci achieving suggestive significance (p < 10⁻⁶). GWAS signals were enriched in brain tissues, such as the anterior cingulate cortex, dorsolateral prefrontal cortex, entorhinal cortex, frontal cortex, hippocampus, and inferior temporal cortex. Cochlear cell types revealed no significant association with SIN deficits. SIN deficits were associated with various health traits, including neuropsychiatric, sensory, cognitive, metabolic, cardiovascular, and inflammatory conditions. A replication analysis was conducted on 242 healthy young adults. Self-reported speech perception, hearing thresholds (0.25-16 kHz), and distortion product otoacoustic emissions (1-16 kHz) were used for the replication analysis. 73 SNPs were replicated with the self-reported speech perception measure; 211 SNPs were replicated with at least one audiological measure and 66 with at least two. 12 SNPs near or within MAPT, GRM3, and HLA-DQA1 were replicated across all audiological measures. The present study highlights a polygenic architecture underlying SIN deficits in individuals with self-reported normal hearing.
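A minimal sketch (not from the study) of how GWAS summary statistics are typically thresholded at the genome-wide and suggestive significance levels quoted in the abstract; the column names and SNP rows are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical GWAS summary statistics, one row per SNP.
# Column names (snp, chrom, pos, p_value) are illustrative assumptions.
gwas = pd.DataFrame({
    "snp": ["rs1", "rs2", "rs3", "rs4"],
    "chrom": [17, 7, 6, 2],
    "pos": [45_894_000, 86_273_000, 32_610_000, 12_500_000],
    "p_value": [3e-9, 8e-7, 4e-8, 0.02],
})

GENOME_WIDE = 5e-8   # conventional genome-wide significance threshold
SUGGESTIVE = 1e-6    # suggestive threshold used in the abstract

significant = gwas[gwas["p_value"] < GENOME_WIDE]
suggestive = gwas[(gwas["p_value"] >= GENOME_WIDE) & (gwas["p_value"] < SUGGESTIVE)]

print(f"{len(significant)} genome-wide significant SNPs")
print(f"{len(suggestive)} additional suggestive SNPs")
```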


Subject(s)
Genome-Wide Association Study , Multifactorial Inheritance , Noise , Polymorphism, Single Nucleotide , Speech Perception , Humans , Male , Female , Speech Perception/genetics , Adult , Middle Aged , Self Report , Aged , Hearing/genetics , Young Adult
2.
J Am Coll Cardiol ; 83(23): 2308-2323, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38839205

ABSTRACT

Various forms of pollution carry a substantial burden, increasing the risk of causing and exacerbating noncommunicable diseases, especially cardiovascular disease. The first part of this 2-part series on pollution and cardiovascular disease provided an overview of the impact of global warming and air pollution. This second paper provides an overview of the impact of water, soil, noise, and light pollution on the cardiovascular system. This review discusses the biological mechanisms underlying these effects and potential environmental biometrics of exposure. What is clear from both papers is that significant efforts and redoubled urgency are needed to reduce the sources of pollution in our environment, to incorporate environmental risk factors into medical education, to provide resources for research, and, ultimately, to protect those who are particularly vulnerable and susceptible.


Subject(s)
Cardiovascular Diseases , Environmental Pollution , Humans , Cardiovascular Diseases/prevention & control , Environmental Pollution/adverse effects , Noise/adverse effects , Soil , Environmental Exposure/adverse effects , Water Pollution
3.
Codas ; 36(3): e20230091, 2024.
Article in Portuguese, English | MEDLINE | ID: mdl-38836822

ABSTRACT

PURPOSE: To propose an instrument for assessing speech recognition in the presence of competing noise, to define its application strategy for use in clinical practice, and to obtain evidence of criterion validity and present reference values. METHODS: The study was conducted in three stages: organization of the material comprising the Word-with-Noise Test (Stage 1); definition of the instrument's application strategy (Stage 2); and investigation of criterion validity and definition of reference values for the test (Stage 3) through the evaluation of 50 normal-hearing adult subjects and 12 subjects with hearing loss. RESULTS: The Word-with-Noise Test consists of lists of monosyllabic and disyllabic words and speech-spectrum noise (Stage 1). The application strategy for the test was defined as determination of the Speech Recognition Threshold with the noise level fixed at 55 dB HL (Stage 2). Regarding criterion validity, the instrument demonstrated adequate ability to distinguish between normal-hearing subjects and subjects with hearing loss (Stage 3). Reference values for the test were established as cut-off points expressed in terms of signal-to-noise ratio: 1.47 dB for the monosyllabic stimulus and -2.02 dB for the disyllabic stimulus. CONCLUSION: The Word-with-Noise Test proved to be quick to administer and interpret, making it a useful tool in audiological clinical practice. Furthermore, it showed satisfactory evidence of criterion validity, with established reference values.




Subject(s)
Noise , Humans , Reference Values , Adult , Female , Male , Young Adult , Reproducibility of Results , Middle Aged , Speech Perception/physiology , Signal-To-Noise Ratio , Auditory Threshold/physiology , Case-Control Studies , Hearing Loss/diagnosis , Hearing Loss/physiopathology , Speech Reception Threshold Test/methods , Speech Reception Threshold Test/standards , Aged , Adolescent
4.
J Acoust Soc Am ; 155(6): 3589-3599, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38829154

ABSTRACT

Frequency importance functions (FIFs) for simulated bimodal hearing were derived from sentence perception scores measured in quiet and in noise. Acoustic hearing was simulated using low-pass filtering. Electric hearing was simulated using a six-channel vocoder with three input frequency ranges, producing overlap, meet, and gap maps relative to the acoustic cutoff frequency. Spectral holes in the speech spectra were created within the electric stimulation by setting the amplitude(s) of channels to zero. FIFs differed significantly between frequency maps. In quiet, the three FIFs were similar, with weights gradually increasing toward channels 5 and 6 relative to the first three channels; however, the most and least heavily weighted channels varied slightly across maps. In noise, the patterns of the three FIFs resembled those in quiet, with weights increasing more steeply toward channels 5 and 6 relative to the first four channels. Thus, channels 5 and 6 contributed most to speech perception and channels 1 and 2 contributed least, regardless of frequency map. Results suggest that the contribution of cochlear implant frequency bands to bimodal speech perception depends on the degree of frequency overlap between acoustic and electric stimulation and on whether noise is present.
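For readers unfamiliar with channel vocoding, the sketch below shows one generic way a noise vocoder can be built (band-pass analysis, envelope extraction, envelope-modulated noise carriers). It is illustrative only, not the study's implementation; the band edges, filter orders, and envelope cutoff are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocoder(speech, fs, n_channels=6, f_lo=200.0, f_hi=7000.0):
    """Generic noise vocoder: log-spaced analysis bands between f_lo and f_hi;
    each band's envelope modulates band-limited noise, and the bands are summed."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, speech)                  # analysis band
        env = np.abs(hilbert(band))                    # Hilbert envelope
        b_env, a_env = butter(2, 50.0 / (fs / 2))      # smooth envelope (< 50 Hz)
        env = filtfilt(b_env, a_env, env)
        carrier = filtfilt(b, a, noise)                # noise carrier in the same band
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-9)

# Toy usage with a synthetic amplitude-modulated tone standing in for speech.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocoder(speech, fs)
```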


Subject(s)
Acoustic Stimulation , Cochlear Implants , Electric Stimulation , Noise , Speech Perception , Humans , Noise/adverse effects , Cochlear Implantation/instrumentation , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Perceptual Masking , Adult
5.
Trends Hear ; 28: 23312165241260029, 2024.
Article in English | MEDLINE | ID: mdl-38831646

ABSTRACT

The extent to which active noise cancelation (ANC), when combined with hearing assistance, can improve speech intelligibility in noise is not well understood. One possible source of benefit is ANC's ability to reduce the sound level of the direct (i.e., vent-transmitted) path. This reduction lowers the "floor" imposed by the direct path, thereby allowing any increases to the signal-to-noise ratio (SNR) created in the amplified path to be "realized" at the eardrum. Here we used a modeling approach to estimate this benefit. We compared pairs of simulated hearing aids that differ only in terms of their ability to provide ANC and computed intelligibility metrics on their outputs. The difference in metric scores between simulated devices is termed the "ANC Benefit." These simulations show that ANC Benefit increases as (1) the environmental sound level increases, (2) the ability of the hearing aid to improve SNR increases, (3) the strength of the ANC increases, and (4) the hearing loss severity decreases. The predicted size of the ANC Benefit can be substantial. For a moderate hearing loss, the model predicts improvement in intelligibility metrics of >30% when environments are moderately loud (>70 dB SPL) and devices are moderately capable of increasing SNR (by >4 dB). It appears that ANC can be a critical ingredient in hearing devices that attempt to improve SNR in loud environments. ANC will become increasingly important as advanced SNR-improving algorithms (e.g., artificial intelligence speech enhancement) are included in hearing devices.
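To make the "direct-path floor" argument concrete, here is a deliberately simplified two-path illustration, not the paper's model: the amplified path delivers the environment with an improved SNR while the vent-transmitted direct path delivers it unprocessed, and ANC is represented as extra attenuation of that direct path. All levels and gains below are invented for illustration.

```python
import numpy as np

def db_to_power(level_db):
    return 10.0 ** (level_db / 10.0)

def eardrum_snr(env_level_db, aid_snr_gain_db, direct_path_attenuation_db):
    """Toy two-path model: speech and noise assumed equal in level (0 dB SNR)
    outside the ear; the aid improves SNR in the amplified path, while the
    direct path is attenuated only by the vent (and ANC, if active)."""
    speech_aid = db_to_power(env_level_db)
    noise_aid = db_to_power(env_level_db - aid_snr_gain_db)
    speech_direct = db_to_power(env_level_db - direct_path_attenuation_db)
    noise_direct = db_to_power(env_level_db - direct_path_attenuation_db)
    snr = (speech_aid + speech_direct) / (noise_aid + noise_direct)
    return 10.0 * np.log10(snr)

# Illustrative numbers: a 75 dB SPL environment, a device that improves SNR by 4 dB,
# and a direct path attenuated by 5 dB (vent only) vs. 20 dB (vent plus ANC).
without_anc = eardrum_snr(75, aid_snr_gain_db=4, direct_path_attenuation_db=5)
with_anc = eardrum_snr(75, aid_snr_gain_db=4, direct_path_attenuation_db=20)
print(f"eardrum SNR without ANC: {without_anc:.1f} dB, with ANC: {with_anc:.1f} dB")
```

With the stronger direct-path attenuation, the SNR realized at the eardrum moves closer to the 4 dB improvement created in the amplified path, which is the mechanism the abstract describes.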


Subject(s)
Hearing Aids , Noise , Perceptual Masking , Signal-To-Noise Ratio , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Computer Simulation , Acoustic Stimulation , Correction of Hearing Impairment/instrumentation , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Hearing Loss/diagnosis , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Equipment Design , Signal Processing, Computer-Assisted
6.
Neurotox Res ; 42(3): 29, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38856796

ABSTRACT

Ethanol (EtOH) intake and noise exposure are of particular concern during human adolescence because of their potential to harm the developing brain, yet the underlying mechanisms remain to be elucidated. Moreover, implementing non-pharmacological strategies, such as enriched environments (EE), is pertinent to the field of neuroprotection. This study explored possible triggering mechanisms underlying hippocampus-dependent behaviors in adolescent animals of both sexes following ethanol intake, noise exposure, or their combination, as well as the impact of EE. Adolescent Wistar rats of both sexes were subjected to an intermittent voluntary EtOH intake paradigm for one week. A subgroup of animals was exposed to white noise for two hours after the last EtOH intake session. Some animals from both groups were housed in EE cages. Hippocampus-dependent behavior was assessed and hippocampal oxidative state was evaluated. The results show that EtOH intake followed by noise exposure can induce hippocampus-dependent behavioral alterations in animals of both sexes, some of which are sex-specific, and that hippocampal oxidative imbalance appears to be one of the potential underlying mechanisms. Moreover, most behavioral and oxidative alterations were prevented by EE. These findings suggest that two frequently encountered environmental agents can affect behavior and oxidative pathways in both sexes in an animal model, and that EE was a partially effective neuroprotective strategy. Implementing such a non-pharmacological approach might therefore also provide neuroprotective benefits against other challenges, and its potential for translational human benefit is worth considering.


Subject(s)
Ethanol , Hippocampus , Noise , Rats, Wistar , Animals , Hippocampus/drug effects , Male , Female , Ethanol/administration & dosage , Ethanol/toxicity , Noise/adverse effects , Rats , Alcohol Drinking , Sex Characteristics , Oxidative Stress/drug effects , Oxidative Stress/physiology
7.
Sci Rep ; 14(1): 13241, 2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38853168

ABSTRACT

Cochlear implants (CIs) are not as effective in noisy environments as they are in quiet settings. Current single-microphone noise reduction algorithms in hearing aids and CIs only remove predictable, stationary noise and are ineffective against realistic, non-stationary noise such as multi-talker interference. Recent developments in deep neural network (DNN) algorithms have achieved noteworthy performance in speech enhancement and separation, especially in removing speech noise. However, more work is needed to investigate the potential of DNN algorithms for removing speech noise when tested with listeners fitted with CIs. Here, we implemented two DNN algorithms well suited to applications in speech audio processing: (1) a recurrent neural network (RNN) and (2) SepFormer. The algorithms were trained on a customized dataset (~30 h) and then tested with thirteen CI listeners. Both the RNN and SepFormer algorithms significantly improved CI listeners' speech intelligibility in noise without compromising the perceived overall quality of speech. These algorithms not only increased intelligibility in stationary non-speech noise but also produced substantial improvement in non-stationary noise, where conventional signal processing strategies provide little benefit. These results show the promise of DNN algorithms as a solution to listening challenges in multi-talker noise interference.
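A minimal, generic sketch of the recurrent mask-estimation idea behind RNN-based speech enhancement, not the authors' implementation: an LSTM predicts a time-frequency mask that is applied to the noisy magnitude spectrogram. The layer sizes, number of frequency bins, and random stand-in data are assumptions.

```python
import torch
import torch.nn as nn

class MaskRNN(nn.Module):
    """LSTM that maps noisy magnitude-spectrogram frames to a [0, 1] mask."""
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_freq, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_freq)

    def forward(self, noisy_mag):             # (batch, frames, n_freq)
        h, _ = self.lstm(noisy_mag)
        return torch.sigmoid(self.out(h))     # mask in [0, 1]

# Toy usage on random tensors standing in for noisy and clean magnitude spectrograms.
model = MaskRNN()
noisy_mag = torch.rand(1, 100, 257)           # 100 frames, 257 frequency bins
clean_mag = torch.rand(1, 100, 257)           # stand-in for the clean target
mask = model(noisy_mag)
enhanced_mag = mask * noisy_mag               # masked (enhanced) magnitudes
loss = nn.functional.mse_loss(enhanced_mag, clean_mag)
loss.backward()
```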


Subject(s)
Algorithms , Cochlear Implants , Deep Learning , Noise , Speech Intelligibility , Humans , Female , Middle Aged , Male , Speech Perception/physiology , Aged , Adult , Neural Networks, Computer
8.
J Acoust Soc Am ; 155(5): 2934-2947, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717201

ABSTRACT

Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility in these underexplored spatial configurations. Speech reception thresholds were measured in three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, using monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane or front-back, up-down symmetry in the median plane) contributes positively to intelligibility. Both target direction and relative target-masker separation influence the masking release attributed to spatial separation. As the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains a remarkably effective cue and can even facilitate spatial separation in improving intelligibility. Further analysis indicated that current intelligibility models encounter difficulties in accurately predicting intelligibility in the scenarios explored in this study.
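Speech reception thresholds of the kind measured here are usually estimated with an adaptive track. The sketch below is a generic 1-down/1-up staircase against a simulated listener; it is illustrative only, and the step size, trial count, and psychometric function are assumptions, not the study's procedure.

```python
import numpy as np

def simulated_listener(snr_db, srt_true=-6.0, slope_db=2.0):
    """Probability of a correct response at a given SNR (logistic psychometric function)."""
    return 1.0 / (1.0 + np.exp(-(snr_db - srt_true) / slope_db))

def adaptive_srt(n_trials=30, start_snr=0.0, step_db=2.0, seed=0):
    """Simple 1-down/1-up adaptive track converging near the 50% correct point."""
    rng = np.random.default_rng(seed)
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = rng.random() < simulated_listener(snr)
        track.append(snr)
        snr += -step_db if correct else step_db   # decrease SNR after a correct response
    return float(np.mean(track[-10:]))            # crude estimate: mean of last 10 trial SNRs

print(f"Estimated SRT: {adaptive_srt():.1f} dB SNR")
```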


Subject(s)
Cues , Perceptual Masking , Sound Localization , Speech Intelligibility , Speech Perception , Humans , Female , Male , Young Adult , Adult , Speech Perception/physiology , Acoustic Stimulation , Auditory Threshold , Speech Acoustics , Speech Reception Threshold Test , Noise
9.
J Acoust Soc Am ; 155(5): 3254-3266, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38742964

ABSTRACT

Testudines are a highly threatened group facing an array of stressors, including alteration of their sensory environment. Underwater noise pollution has the potential to induce hearing loss and disrupt detection of biologically important acoustic cues and signals. To examine the conditions that induce temporary threshold shifts (TTS) in hearing in the freshwater Eastern painted turtle (Chrysemys picta picta), three individuals were exposed to band-limited continuous white noise (50-1000 Hz) of varying durations and amplitudes (sound exposure levels ranged from 151 to 171 dB re 1 µPa²·s). Control and post-exposure auditory thresholds were measured and compared at 400 and 600 Hz using auditory evoked potential methods. TTS occurred in all individuals at both test frequencies, with shifts of 6.1-41.4 dB. While the numbers of TTS occurrences were equal between frequencies, greater shifts were observed at 600 Hz, a frequency of higher auditory sensitivity, compared to 400 Hz. The onset of TTS occurred at 154 dB re 1 µPa²·s for 600 Hz, compared to 158 dB re 1 µPa²·s at 400 Hz. The 400-Hz onset and patterns of TTS growth and recovery were similar to those observed in previously studied Trachemys scripta elegans, suggesting TTS may be comparable across Emydidae species.
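For readers unfamiliar with the exposure metric, a short worked example of how sound exposure level (SEL) relates to sound pressure level and exposure duration for a continuous exposure; the numbers are illustrative and are not the study's stimuli.

```python
import math

def sel_from_spl(spl_db, duration_s):
    """Sound exposure level (dB re 1 uPa^2 s) for a continuous exposure:
    SEL = SPL + 10 * log10(duration / 1 s)."""
    return spl_db + 10 * math.log10(duration_s / 1.0)

# Illustrative values: a 158 dB re 1 uPa (rms) noise band played for 20 s.
print(f"SEL = {sel_from_spl(158, 20):.1f} dB re 1 uPa^2 s")   # ~171 dB
```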


Subject(s)
Acoustic Stimulation , Auditory Threshold , Turtles , Animals , Turtles/physiology , Time Factors , Noise/adverse effects , Evoked Potentials, Auditory/physiology , Hearing Loss, Noise-Induced/physiopathology , Hearing Loss, Noise-Induced/etiology , Male , Female , Hearing/physiology
10.
Sci Rep ; 14(1): 10518, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38714827

ABSTRACT

Previous work assessing the effect of additive noise on the postural control system has found a positive effect of additive white noise on postural dynamics. This study comprises two experiments, run sequentially, designed to better understand how the structure of the additive noise signal affects postural dynamics and how the intensity of the auditory noise stimulation elicits this phenomenon. Across the two experiments, we introduced three auditory noise stimuli of varying structure (white, pink, and brown noise). Experiment 1 presented the stimuli at 35 dB, while Experiment 2 presented them at 75 dB. Our findings demonstrate a decrease in the variability of the postural control system regardless of the structure of the noise signal presented, but only for the high-intensity auditory stimulation.
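A minimal sketch of one common way to synthesize the three noise structures mentioned (spectral shaping of white noise by 1/f^alpha): alpha = 0 gives white, 1 gives pink, and 2 gives brown(ish) noise. The sampling rate, duration, and normalization are arbitrary, and this is not the study's stimulus-generation code.

```python
import numpy as np

def colored_noise(n_samples, alpha, fs=44100, seed=0):
    """Generate noise with an approximate 1/f^alpha power spectrum."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
    freqs[0] = freqs[1]                          # avoid division by zero at DC
    spectrum *= 1.0 / np.sqrt(freqs ** alpha)    # shape power as 1/f^alpha
    noise = np.fft.irfft(spectrum, n=n_samples)
    return noise / np.max(np.abs(noise))         # normalize to +/- 1

fs = 44100
white = colored_noise(fs, alpha=0.0)
pink = colored_noise(fs, alpha=1.0)
brown = colored_noise(fs, alpha=2.0)
```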


Subject(s)
Acoustic Stimulation , Noise , Humans , Female , Male , Adult , Young Adult , Postural Balance/physiology , Color , Posture/physiology , Standing Position
11.
Trends Hear ; 28: 23312165241239541, 2024.
Article in English | MEDLINE | ID: mdl-38738337

ABSTRACT

Cochlear synaptopathy, a form of cochlear deafferentation, has been demonstrated in a number of animal species, including non-human primates. Both age and noise exposure contribute to synaptopathy in animal models, indicating that it may be a common type of auditory dysfunction in humans. Temporal bone and auditory physiological data suggest that age and occupational/military noise exposure also lead to synaptopathy in humans. The predicted perceptual consequences of synaptopathy include tinnitus, hyperacusis, and difficulty with speech-in-noise perception. However, confirming the perceptual impacts of this form of cochlear deafferentation presents a particular challenge because synaptopathy can only be confirmed through post-mortem temporal bone analysis and auditory perception is difficult to evaluate in animals. Animal data suggest that deafferentation leads to increased central gain, signs of tinnitus and abnormal loudness perception, and deficits in temporal processing and signal-in-noise detection. If equivalent changes occur in humans following deafferentation, this would be expected to increase the likelihood of developing tinnitus, hyperacusis, and difficulty with speech-in-noise perception. Physiological data from humans is consistent with the hypothesis that deafferentation is associated with increased central gain and a greater likelihood of tinnitus perception, while human data on the relationship between deafferentation and hyperacusis is extremely limited. Many human studies have investigated the relationship between physiological correlates of deafferentation and difficulty with speech-in-noise perception, with mixed findings. A non-linear relationship between deafferentation and speech perception may have contributed to the mixed results. When differences in sample characteristics and study measurements are considered, the findings may be more consistent.


Subject(s)
Cochlea , Speech Perception , Tinnitus , Humans , Cochlea/physiopathology , Tinnitus/physiopathology , Tinnitus/diagnosis , Animals , Speech Perception/physiology , Hyperacusis/physiopathology , Noise/adverse effects , Auditory Perception/physiology , Synapses/physiology , Hearing Loss, Noise-Induced/physiopathology , Hearing Loss, Noise-Induced/diagnosis , Loudness Perception
12.
Philos Trans R Soc Lond B Biol Sci ; 379(1905): 20230185, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38768208

ABSTRACT

Acoustic communication plays an important role in coordinating group dynamics and collective movements across a range of taxa. However, anthropogenic disturbance can inhibit the production or reception of acoustic signals. Here, we investigate the effects of noise and light pollution on the calling and collective behaviour of wild jackdaws (Corvus monedula), a highly social corvid species that uses vocalizations to coordinate collective movements at winter roosting sites. Using audio and video monitoring of roosts in areas with differing degrees of urbanization, we evaluate the influence of anthropogenic disturbance on vocalizations and collective movements. We found that when levels of background noise were higher, jackdaws took longer to settle following arrival at the roost in the evening and also called more during the night, suggesting that human disturbance may cause sleep disruption. High levels of overnight calling were, in turn, linked to disruption of vocal consensus decision-making and less cohesive group departures in the morning. These results raise the possibility that, by affecting cognitive and perceptual processes, human activities may interfere with animals' ability to coordinate collective behaviour. Understanding links between anthropogenic disturbance, communication, cognition and collective behaviour must be an important research priority in our increasingly urbanized world. This article is part of the theme issue 'The power of sound: unravelling how acoustic communication shapes group dynamics'.


Subject(s)
Crows , Noise , Social Behavior , Vocalization, Animal , Animals , Crows/physiology , Anthropogenic Effects , Human Activities
13.
Int J Qual Health Care ; 36(2)2024 May 20.
Article in English | MEDLINE | ID: mdl-38727537

ABSTRACT

Sleep disruptions in the hospital setting can have adverse effects on patient safety and well-being, leading to complications like delirium and prolonged recovery. This study aimed to comprehensively assess the factors influencing sleep disturbances in hospital wards, comparing the sleep quality of patients staying in single rooms with that of patients in shared rooms. A mixed-methods approach was used to examine patient-reported sleep quality and sleep disruption factors, in conjunction with objective noise measurements, across seven inpatient wards at an acute tertiary public hospital in Sydney, Australia. The most disruptive factor to sleep in the hospital was noise, ranked as 'very disruptive' by 20% of patients, followed by acute health conditions (11%) and nursing interventions (10%). Patients in shared rooms experienced the most disturbed sleep, with 51% reporting 'poor' or 'very poor' sleep quality. In contrast, only 17% of the patients in single rooms reported the same. Notably, sound levels in shared rooms surpassed 100 dB, highlighting the potential for significant sleep disturbances in shared patient accommodation settings. The results of this study provide a comprehensive overview of the sleep-related challenges faced by patients in hospital, particularly those staying in shared rooms. The insights from this study offer guidance for targeted healthcare improvements to minimize disruptions and enhance the quality of sleep for hospitalized patients.


Subject(s)
Noise , Sleep Wake Disorders , Humans , Male , Female , Sleep Wake Disorders/epidemiology , Noise/adverse effects , Middle Aged , Aged , Sleep Quality , Inpatients , Adult , Patients' Rooms , Hospitalization , Australia , Tertiary Care Centers
14.
J Neural Eng ; 21(3)2024 May 22.
Article in English | MEDLINE | ID: mdl-38729132

ABSTRACT

Objective. This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population. Approach. Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials whose other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI, listening to competing talkers amidst background noise. Main results. Using 1 s classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3%, and 82.9% and area under the curve (AUC) of 77.2%, 80.6%, and 92.1% for the three tasks, respectively, with the inter-trial strategy. With the intra-trial strategy, they achieved ACC of 87.9%, 80.1%, and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. The DCNN models performed well on short 1 s EEG samples, making them suitable for real-world applications. Conclusion. The DCNN models successfully addressed all three tasks with short 1 s EEG windows from participants with HI, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks. Significance. These findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and for advancing hearing technology, while also motivating further exploration of alternative DL architectures and their potential constraints.
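A minimal sketch of the data-splitting distinction described above (not the authors' code): in an inter-trial split, whole trials are held out so that no window from a test trial appears in training, whereas in an intra-trial split windows are drawn at random and trials leak across the split, which can inflate scores. Window counts, feature dimensions, and labels are arbitrary.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

rng = np.random.default_rng(0)
n_trials, windows_per_trial, n_features = 20, 30, 64
X = rng.standard_normal((n_trials * windows_per_trial, n_features))
y = rng.integers(0, 2, size=len(X))                  # e.g. attended left vs. right
trial_id = np.repeat(np.arange(n_trials), windows_per_trial)

# Inter-trial: whole trials are held out of training.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=trial_id))
assert set(trial_id[train_idx]).isdisjoint(trial_id[test_idx])

# Intra-trial: windows are split at random, so the same trials appear in both sets.
train_idx2, test_idx2 = train_test_split(np.arange(len(X)), test_size=0.2,
                                          random_state=0)
overlap = set(trial_id[train_idx2]) & set(trial_id[test_idx2])
print(f"trials shared between train and test (intra-trial split): {len(overlap)}")
```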


Subject(s)
Attention , Auditory Perception , Deep Learning , Electroencephalography , Hearing Loss , Humans , Attention/physiology , Female , Electroencephalography/methods , Male , Middle Aged , Hearing Loss/physiopathology , Hearing Loss/rehabilitation , Hearing Loss/diagnosis , Aged , Auditory Perception/physiology , Noise , Adult , Hearing Aids , Speech Perception/physiology , Neural Networks, Computer
15.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715408

ABSTRACT

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension, and it remains unclear whether the brain's neural responses could serve as an index of aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy (fNIRS) to 93 normal-hearing human adults (20 to 70 years old) during a sentence-listening task comprising a quiet condition and four noisy conditions at different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decline with age under the four noisy conditions, but not under the quiet condition. Brain activations in the SNR = 10 dB listening condition successfully predicted individuals' ages. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog of Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activation of regions involved in the sensory-motor mapping of sound, especially under noisy conditions, can be a more sensitive measure for age prediction than external behavioral measures.
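A minimal sketch of the generic "brain-age" idea: regress chronological age on per-region activation features with cross-validation. This is illustrative only; the ridge regression model, feature counts, and synthetic data are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_regions = 93, 40
age = rng.uniform(20, 70, size=n_subjects)
# Synthetic activation matrix in which a few regions scale weakly with age.
activation = rng.standard_normal((n_subjects, n_regions))
activation[:, :5] += 0.03 * age[:, None]

model = Ridge(alpha=1.0)
predicted_age = cross_val_predict(model, activation, age, cv=10)
mae = np.mean(np.abs(predicted_age - age))
r = np.corrcoef(predicted_age, age)[0, 1]
print(f"MAE = {mae:.1f} years, r = {r:.2f}")
```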


Subject(s)
Aging , Brain , Comprehension , Noise , Spectroscopy, Near-Infrared , Speech Perception , Humans , Adult , Speech Perception/physiology , Male , Female , Spectroscopy, Near-Infrared/methods , Middle Aged , Young Adult , Aged , Comprehension/physiology , Brain/physiology , Brain/diagnostic imaging , Aging/physiology , Brain Mapping/methods , Acoustic Stimulation/methods
16.
Adv Neonatal Care ; 24(3): 291-300, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38815280

ABSTRACT

BACKGROUND: Neonates experience varying intensities of pain after surgery. While white noise has been used for postoperative pain relief in infants, its effects on neonates after surgery need further exploration. PURPOSE: This study aimed to evaluate the effects of white noise on pain scores and salivary cortisol levels in surgical neonates. METHODS: In this randomized controlled trial, 64 neonates scheduled for surgery were recruited and assigned by block randomization into 2 groups. The intervention group listened to white noise at 50 dB, while the control group listened to white noise at 0 dB, for 30 minutes 6 times for 48 hours postoperatively. Pain scores, measured by the COMFORTneo Scale, and salivary cortisol levels were compared. RESULTS: Although pain scores decreased after surgery in all subjects, no statistically significant difference was observed between the 2 groups (P = .937). There was a significant difference between pre- and postintervention pain scores in the intervention group only (P = .006). Salivary cortisol levels decreased after intervention in the intervention group, but there was no significant difference between pre- and postintervention levels in the 2 groups (P = .716). IMPLICATIONS FOR PRACTICE: Given the reduction in pain scores and salivary cortisol concentrations after white noise intervention, white noise shows potential as an adjunctive soothing measure for neonates after surgery. IMPLICATIONS FOR RESEARCH: Future studies are needed to confirm the efficacy and utility of white noise intervention in clinical settings.


Subject(s)
Hydrocortisone , Noise , Pain Measurement , Pain, Postoperative , Saliva , Humans , Hydrocortisone/analysis , Hydrocortisone/metabolism , Infant, Newborn , Saliva/chemistry , Pain, Postoperative/metabolism , Female , Male , Pain Measurement/methods , Noise/adverse effects
17.
Cochrane Database Syst Rev ; 5: CD010333, 2024 05 30.
Article in English | MEDLINE | ID: mdl-38813836

ABSTRACT

BACKGROUND: Infants in the neonatal intensive care unit (NICU) are subjected to different types of stress, including sounds of high intensity. The sound levels in NICUs often exceed the maximum acceptable level recommended by the American Academy of Pediatrics, which is 45 decibels (dB). Hearing impairment is diagnosed in 2% to 10% of preterm infants compared to only 0.1% of the general paediatric population. Bringing sound levels under 45 dB can be achieved by lowering the sound levels in an entire unit; by treating the infant in a section of a NICU, in a 'private' room, or in incubators in which the sound levels are controlled; or by reducing sound levels at the individual level using earmuffs or earplugs. By lowering sound levels, the resulting stress can be diminished, thereby promoting growth and reducing adverse neonatal outcomes. This review is an update of one originally published in 2015 and first updated in 2020. OBJECTIVES: To determine the benefits and harms of sound reduction on the growth and long-term neurodevelopmental outcomes of neonates. SEARCH METHODS: We used standard, extensive Cochrane search methods. On 21 and 22 August 2023, a Cochrane Information Specialist searched CENTRAL, PubMed, Embase, two other databases, two trials registers, and grey literature via Google Scholar and conference abstracts from Pediatric Academic Societies. SELECTION CRITERIA: We included randomised controlled trials (RCTs) or quasi-RCTs in preterm infants (less than 32 weeks' postmenstrual age (PMA) or less than 1500 g birth weight) cared for in the resuscitation area, during transport, or once admitted to a NICU or stepdown unit. We specified three types of intervention: 1) intervention at the unit level (i.e. the entire neonatal department), 2) at the section or room level, or 3) at the individual level (e.g. hearing protection). DATA COLLECTION AND ANALYSIS: We used the standardised review methods of Cochrane Neonatal to assess the risk of bias in the studies. We used the risk ratio (RR) and risk difference (RD), with their 95% confidence intervals (CIs), for dichotomous data, and the mean difference (MD) for continuous data. Our primary outcome was major neurodevelopmental disability. We used GRADE to assess the certainty of the evidence. MAIN RESULTS: We included one RCT, which enrolled 34 newborn infants randomised to the use of silicone earplugs versus no earplugs for hearing protection. It was a single-centre study conducted at the University of Texas Medical School in Houston, Texas, USA. Earplugs were positioned at the time of randomisation and worn continuously until the infants were 35 weeks' postmenstrual age (PMA) or discharged (whichever came first). Newborns in the control group received standard care. The evidence is very uncertain about the effects of silicone earplugs on the following outcomes.
• Cerebral palsy (RR 3.00, 95% CI 0.15 to 61.74) and Mental Developmental Index (MDI) (Bayley II) at 18 to 22 months' corrected age (MD 14.00, 95% CI 3.13 to 24.87); no other indicators of major neurodevelopmental disability were reported.
• Normal auditory functioning at discharge (RR 1.65, 95% CI 0.93 to 2.94).
• All-cause mortality during hospital stay (RR 2.07, 95% CI 0.64 to 6.70; RD 0.20, 95% CI -0.09 to 0.50).
• Weight (kg) at 18 to 22 months' corrected age (MD 0.31, 95% CI -1.53 to 2.16).
• Height (cm) at 18 to 22 months' corrected age (MD 2.70, 95% CI -3.13 to 8.53).
• Days of assisted ventilation (MD -1.44, 95% CI -23.29 to 20.41).
• Days of initial hospitalisation (MD 1.36, 95% CI -31.03 to 33.75).
For all outcomes, we judged the certainty of evidence as very low. We identified one ongoing RCT that will compare the effects of reduced noise levels and cycled light on visual and neural development in preterm infants. AUTHORS' CONCLUSIONS: No studies evaluated interventions to reduce sound levels below 45 dB across the whole neonatal unit or in a room within it. We found only one study that evaluated the benefits of sound reduction in the neonatal intensive care unit for hearing protection in preterm infants. The study compared the use of silicone earplugs versus no earplugs in newborns of very low birth weight (less than 1500 g). Considering the very small sample size, imprecise results, and high risk of attrition bias, the evidence based on this research is very uncertain and no conclusions can be drawn. As there is a lack of evidence to inform healthcare or policy decisions, large, well designed, well conducted, and fully reported RCTs that analyse different aspects of noise reduction in NICUs are needed. They should report both short- and long-term outcomes.
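For orientation, a minimal worked example of how a risk ratio and its 95% confidence interval are computed from a 2x2 table using the standard log-normal approximation; the event counts below are invented for illustration and are not the trial's data.

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs. group B with a 95% CI (log-normal approximation)."""
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    rr = risk_a / risk_b
    se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Invented counts: 4/17 events in the intervention arm vs. 2/17 in the control arm.
rr, lo, hi = risk_ratio_ci(4, 17, 2, 17)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```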


Subject(s)
Infant, Premature , Infant, Very Low Birth Weight , Intensive Care Units, Neonatal , Noise , Randomized Controlled Trials as Topic , Humans , Infant, Newborn , Infant, Premature/growth & development , Noise/adverse effects , Infant, Very Low Birth Weight/growth & development , Sound , Ear Protective Devices , Bias , Hearing Loss, Noise-Induced/prevention & control
18.
Multisens Res ; 37(3): 243-259, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38777333

ABSTRACT

Auditory speech can be difficult to understand, but seeing the articulatory movements of a speaker can drastically improve spoken-word recognition and, in the longer term, helps listeners adapt to acoustically distorted speech. Given that individuals with developmental dyslexia (DD) have sometimes been reported to rely less on lip-read speech than typical readers, we examined lip-read-driven adaptation to distorted speech in a group of adults with DD (N = 29) and a comparison group of typical readers (N = 29). Participants were presented with acoustically distorted Dutch words (six-channel noise-vocoded speech, NVS) in audiovisual training blocks (where the speaker could be seen) interspersed with audio-only test blocks. Results showed that words were more accurately recognized if the speaker could be seen (a lip-read advantage), and that performance steadily improved across subsequent auditory-only test blocks (adaptation). There were no group differences, suggesting that perceptual adaptation to disrupted spoken words is comparable for dyslexic and typical readers. These data open up a research avenue to investigate the degree to which lip-read-driven speech adaptation generalizes across different types of auditory degradation, and across dyslexic readers with decoding versus comprehension difficulties.


Subject(s)
Dyslexia , Lipreading , Reading , Speech Perception , Humans , Speech Perception/physiology , Male , Female , Dyslexia/physiopathology , Adult , Young Adult , Adaptation, Physiological/physiology , Noise , Acoustic Stimulation
19.
J Speech Lang Hear Res ; 67(6): 1964-1975, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38690971

ABSTRACT

PURPOSE: There is increasing interest in the measurement of cognitive effort during listening tasks, for both research and clinical purposes. Quantification of task-evoked pupil responses (TEPRs) is a psychophysiological method that can be used to study cognitive effort. However, light level during cognitively demanding listening tasks may affect TEPRs, complicating interpretation of listening-related changes. The objective of this study was to examine the effects of light level on TEPRs during effortful listening across a range of signal-to-noise ratios (SNRs). METHOD: Thirty-six adults without hearing loss were asked to repeat target sentences presented in background babble noise while their pupil diameter was recorded. Light level and SNRs were manipulated in a 4 × 4 repeated-measures design. Repeated-measures analyses of variance were used to measure the effects. RESULTS: Peak and mean dilation were typically larger in more adverse SNR conditions (except for SNR -6 dB) and smaller in higher light levels. Differences in mean and peak dilation between SNR conditions were larger in dim light than in brighter light. CONCLUSIONS: Brighter light conditions make TEPRs less sensitive to variations in listening effort across levels of SNR. Therefore, light level must be considered and reported in detail to ensure sensitivity of TEPRs and for comparisons of findings across different studies. It is recommended that TEPR testing be conducted in relatively low light conditions, considering both background illumination and screen luminance. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.25676538.
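A minimal sketch of how task-evoked pupil responses are commonly summarized (baseline-corrected mean and peak dilation per trial); the sampling rate, baseline window, and synthetic trace are assumptions, not this study's analysis pipeline.

```python
import numpy as np

def teprs(pupil_trace, fs, baseline_s=1.0):
    """Return baseline-corrected mean and peak dilation for one trial.
    The baseline is the average pupil diameter over the first `baseline_s`
    seconds; dilation is measured relative to that baseline."""
    n_base = int(baseline_s * fs)
    baseline = pupil_trace[:n_base].mean()
    response = pupil_trace[n_base:] - baseline
    return response.mean(), response.max()

# Synthetic trial: 1 s baseline followed by a slow task-evoked dilation plus noise.
fs = 60                                     # 60 Hz eye-tracker sampling rate
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
trace = 3.0 + 0.2 * np.exp(-((t - 2.5) ** 2) / 0.8) + 0.01 * rng.standard_normal(len(t))
mean_dil, peak_dil = teprs(trace, fs)
print(f"mean dilation = {mean_dil:.3f} mm, peak dilation = {peak_dil:.3f} mm")
```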


Subject(s)
Light , Noise , Pupil , Signal-To-Noise Ratio , Speech Perception , Humans , Male , Female , Pupil/physiology , Adult , Young Adult , Speech Perception/physiology
20.
Sci Total Environ ; 935: 173055, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-38723952

ABSTRACT

Anthropogenic noise is a global pollutant but its potential impacts on early life-stages in fishes are largely unknown. Here, using controlled laboratory experiments, we tested for impacts of continuous or intermittent exposure to low-frequency broadband noise on early life-stages of the common goby (Pomatoschistus microps), a marine fish with exclusive paternal care. Neither continuous nor intermittent noise exposure had an effect on filial cannibalism, showing that males were capable and willing to care for their broods. However, broods reared in continuous noise covered a smaller area and contained fewer eggs than control broods. Moreover, although developmental rate was the same in all treatments, larvae reared by males in continuous noise had, on average, a smaller yolk sac at hatching than those reared in the intermittent noise and control treatments, while larvae body length did not differ. Thus, it appears that the increased consumption of the yolk sac reserve was not utilised for increased growth. This suggests that exposure to noise in early life-stages affects fitness-related traits of surviving offspring, given the crucial importance of the yolk sac reserve during the early life of pelagic larvae. More broadly, our findings highlight the wide-ranging impacts of anthropogenic noise on aquatic wildlife living in an increasingly noisy world.


Subject(s)
Noise , Animals , Noise/adverse effects , Male , Larva/growth & development , Paternal Behavior , Perciformes