Results 1 - 20 of 1,209
1.
JASA Express Lett ; 4(5), 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717467

ABSTRACT

A long-standing quest in audition concerns understanding relations between behavioral measures and neural representations of changes in sound intensity. Here, we examined relations between aspects of intensity perception and central neural responses within the inferior colliculus of unanesthetized rabbits (by averaging the population's spike count/level functions). We found parallels between the population's neural output and: (1) how loudness grows with intensity; (2) how loudness grows with duration; (3) how discrimination of intensity improves with increasing sound level; (4) findings that intensity discrimination does not depend on duration; and (5) findings that duration discrimination is a constant fraction of base duration.
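As a rough illustration of the population-averaging step described in this abstract, the following sketch (Python, synthetic numbers only, with an assumed Stevens-type loudness exponent of about 0.3) averages spike-count/level functions across a model neuron population and compares the result with a loudness-growth curve:

```python
import numpy as np

# Synthetic example only: rate-level functions for a small "population" of
# inferior colliculus neurons (spike counts as a function of level in dB SPL).
rng = np.random.default_rng(0)
levels_db = np.arange(0, 90, 10)
thresholds = rng.uniform(0, 40, 30)        # one threshold per neuron
slopes = rng.uniform(0.2, 1.0, 30)         # spikes per dB above threshold
population = [
    np.clip((levels_db - thr) * slp, 0, None) + rng.poisson(1.0, levels_db.size)
    for thr, slp in zip(thresholds, slopes)  # Poisson term mimics spontaneous counts
]

# Population output: mean spike count at each level, averaged across neurons.
pop_count = np.mean(population, axis=0)

# Loudness growth for comparison: a Stevens-type power law with an assumed
# exponent of ~0.3 (illustrative; not a fit to the paper's data).
loudness = (10 ** (levels_db / 10)) ** 0.3

print(np.corrcoef(pop_count, loudness)[0, 1])
```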


Subject(s)
Inferior Colliculi , Loudness Perception , Animals , Rabbits , Loudness Perception/physiology , Inferior Colliculi/physiology , Acoustic Stimulation/methods , Discrimination, Psychological/physiology , Auditory Perception/physiology , Neurons/physiology
2.
J Speech Lang Hear Res ; 67(6): 1731-1751, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38754028

ABSTRACT

PURPOSE: The present study examined whether participants respond to unperturbed parameters while experiencing specific perturbations in auditory feedback. For instance, we aimed to determine whether speakers adjust voice loudness when only pitch is artificially altered in auditory feedback. This phenomenon is referred to as the "accompanying effect" in the present study. METHOD: Thirty native Mandarin speakers were asked to sustain the vowel /ɛ/ for 3 s while their auditory feedback underwent a single shift in one of three distinct ways: pitch shift (±100 cents; coded as PT), loudness shift (±6 dB; coded as LD), or first formant (F1) shift (±100 Hz; coded as FM). Participants were instructed to ignore the perturbations in their auditory feedback. Response types were categorized based on pitch, loudness, and F1 for each individual trial (e.g., Popp_Lopp_Fopp denotes opposing responses in all three domains). RESULTS: The accompanying effect appeared 93% of the time. Bayesian Poisson regression models indicated that opposing responses in all three domains (Popp_Lopp_Fopp) were the most prevalent response type across the conditions (PT, LD, and FM). The more frequently used response types exhibited opposing responses and significantly larger response curves than the less frequently used response types. Following responses became more prevalent only when the perturbed stimuli were perceived as voices from someone else (external references), particularly in the FM condition. In terms of isotropy, loudness and F1, rather than loudness and pitch, tended to change in the same direction. CONCLUSION: The presence of the accompanying effect suggests that the motor systems responsible for regulating pitch, loudness, and formants are not entirely independent but rather interconnected to some degree.
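A minimal sketch of how such trial-wise response types could be labelled (hypothetical function and variable names; the authors' exact criteria, e.g. magnitude thresholds and timing windows, are not shown):

```python
def classify_trial(shift_dir, d_pitch, d_loud, d_f1):
    """Label one trial as opposing/following in each domain.

    shift_dir : +1 or -1, direction of the single perturbed parameter.
    d_pitch, d_loud, d_f1 : produced changes relative to baseline.
    A domain is 'opp' when its change runs against the perturbation
    direction and 'fol' when it runs with it. Sketch only.
    """
    def tag(letter, delta):
        return letter + ("opp" if shift_dir * delta < 0 else "fol")

    return "_".join([tag("P", d_pitch), tag("L", d_loud), tag("F", d_f1)])

# A +shift trial answered by lowering pitch, loudness and F1:
print(classify_trial(+1, -12.0, -0.8, -15.0))   # -> "Popp_Lopp_Fopp"
```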


Subject(s)
Bayes Theorem , Pitch Perception , Humans , Male , Female , Young Adult , Pitch Perception/physiology , Adult , Speech Perception/physiology , Loudness Perception/physiology , Feedback, Sensory/physiology , Voice/physiology , Acoustic Stimulation/methods , Speech Acoustics
3.
J Exp Psychol Hum Percept Perform ; 50(6): 554-569, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38546625

ABSTRACT

Crossmodal correspondences refer to systematic associations between stimulus attributes encountered in different sensory modalities. These correspondences can be probed in the speeded classification task where they tend to produce congruency effects. This study aimed to replicate and extend previous work conducted by Marks (1987, Experiment 3, Journal of Experimental Psychology: Human Perception and Performance, Vol. 13, No. 3, 384-394) which demonstrated a crossmodal correspondence between auditory and visual intensity attributes. Experiment 1 successfully replicates Marks' original finding that performance in a brightness classification task is affected by whether the loudness of a concurrently presented auditory distractor matches the brightness of the visual target. Furthermore, in line with the original study, we found that this effect was absent in a lightness classification task. In Experiment 2, we demonstrate that loudness-brightness correspondence is robust even when the exact stimulus input changes. This finding suggests that there is a context-dependent mapping between loudness and brightness levels, rather than an absolute mapping between any particular intensity levels. Finally, exploratory analysis using the diffusion model for conflict tasks indicated that evidence from the task-irrelevant modality generates a burst of weak, short-lived automatic activation that can bias decision-making in difficult tasks, but not in easy tasks. Our results provide further evidence for the existence of a flexible crossmodal correspondence between brightness and loudness, which might be helpful in determining one's distance to a stimulus source during the early stages of multisensory integration. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Auditory Perception , Visual Perception , Humans , Adult , Young Adult , Male , Female , Visual Perception/physiology , Auditory Perception/physiology , Loudness Perception/physiology , Psychomotor Performance/physiology
4.
Otol Neurotol ; 45(5): e385-e392, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38518764

ABSTRACT

HYPOTHESIS: Behaviorally based programming with loudness balancing (LB) would result in better speech understanding, spectral-temporal resolution, and music perception scores, and there would be a relationship between these scores. BACKGROUND: Loudness imbalances at upper stimulation levels may cause sounds to be perceived as irregular, gravelly, or overly echoed and may negatively affect the listening performance of the cochlear implant (CI) user. LB should be performed after fitting to overcome these problems. METHODS: The study included 26 unilateral Med-EL CI users. Two different CI programs were recorded for each participant: one based on the objective electrically evoked stapedial reflex threshold (P1) and one based on behavioral measures with LB (P2). The Turkish Matrix Sentence Test (TMS) was applied to evaluate speech perception; the Random Gap Detection Test (RGDT) and Spectral-Temporally Modulated Ripple Test (SMRT) were applied to evaluate spectral-temporal resolution skills; the Mini Profile of Music Perception Skills (mini-PROMS) and Melodic Contour Identification (MCI) tests were applied to evaluate music perception, and the results were compared. RESULTS: Significantly better scores were obtained with P2 in TMS tests performed in noise and in quiet. SMRT scores were significantly correlated with TMS scores in quiet and in noise and with mini-PROMS sound perception results. Although better scores were obtained with P2 in the mini-PROMS total score and MCI, a significant difference was found only for MCI. CONCLUSION: The data from the current study showed that equalization of loudness across CI electrodes leads to better perceptual acuity. It also revealed the relationship between speech perception, spectral-temporal resolution, and music perception.


Subject(s)
Cochlear Implantation , Cochlear Implants , Music , Speech Perception , Humans , Male , Female , Middle Aged , Adult , Speech Perception/physiology , Cochlear Implantation/methods , Speech Intelligibility/physiology , Aged , Auditory Perception/physiology , Loudness Perception/physiology , Young Adult
5.
Trends Hear ; 27: 23312165231207229, 2023.
Article in English | MEDLINE | ID: mdl-37936420

ABSTRACT

Long stimuli have lower detection thresholds or are perceived louder than short stimuli with the same intensity, an effect known as temporal loudness integration (TLI). In electric hearing, TLI for pulse trains with a fixed rate but varying number of pulses, i.e. stimulus duration, has mainly been investigated at clinically used stimulation rates. To study the effect of an overall effective stimulation rate at 100% channel crosstalk, we investigated TLI with (a) a clinically used single-channel stimulation rate of 1,500 pps and (b) a high stimulation rate of 18,000 pps, both for an apical and a basal electrode. Thresholds (THR), a line of equal loudness (BAL), and maximum acceptable levels (MALs) were measured in 10 MED-EL cochlear implant users. Stimulus durations varied from a single pulse to 300 ms long pulse trains. At 18,000 pps, the dynamic range (DR) increased by 7.36±3.16 dB for the 300 ms pulse train. Amplitudes at THR, BAL, and MAL decreased monotonically with increasing stimulus duration. The decline was fitted with high accuracy with a power law function (R2=0.94±0.06). Threshold slopes were -1.05±0.36 and -1.66±0.30 dB per doubling of duration for the low and high rate, respectively, and were shallower than for acoustic hearing. The electrode location did not affect the amplitudes or slopes of the TLI curves. THR, BAL, and MAL were always lower for the higher rate and the DR was larger at the higher rate at all measured durations.
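Because a power-law decline in linear units is a straight line of dB amplitude against log2(duration), the reported slopes in dB per doubling of duration can be obtained with a simple linear fit; a sketch with made-up numbers:

```python
import numpy as np

# Illustrative numbers only (not the study's data): threshold amplitudes in dB
# for pulse trains of increasing duration.
durations_ms = np.array([1, 2, 5, 20, 75, 300])
thr_db = np.array([0.0, -1.7, -3.8, -7.1, -10.3, -13.6])

# A power-law decline is a straight line of dB amplitude vs. log2(duration),
# so the fitted slope is read directly as "dB per doubling of duration".
slope, intercept = np.polyfit(np.log2(durations_ms), thr_db, 1)
print(f"{slope:.2f} dB per doubling of duration")
```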


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Humans , Loudness Perception/physiology , Hearing , Electric Stimulation , Acoustic Stimulation
6.
J Psychiatr Res ; 159: 145-152, 2023 03.
Article in English | MEDLINE | ID: mdl-36724673

ABSTRACT

Previous research has suggested that fear of flying, which is defined as a situational, specific phobia, could overlap with depressive and anxiety disorders. Whether the neuronal dysfunctions observed in depressive and anxiety disorders, including altered serotonergic activity in the brain and altered neural oscillations, also appear in fear of flying is unclear. Here, thirty-six participants with self-reported fear of flying (FF) and forty-one unaffected participants (NFF) were recruited. The participants completed the Beck Depression Inventory (BDI-II), the State-Trait Anxiety Inventory (STAI) and the Fear of Flying Scale (FFS). EEG recordings were made during resting state and during presentation of auditory stimuli with varying loudness levels for analysis of the loudness dependence of auditory evoked potentials (LDAEP), which is suggested to be inversely related to central serotonergic activity. Participants with fear of flying did not differ from the control group with regard to BDI-II and STAI data. The LDAEP was higher over the F4 electrode in the FF group compared to controls, and exploratory analyses suggest that this group difference was driven by female participants. Moreover, the FF group showed relatively higher right frontal alpha activity compared to the control group, whereas no difference in frequency power (alpha, beta and theta) was observed. Thus, this study provides a first hint of reduced serotonergic activity and relatively higher right frontal activity in individuals with fear of flying. Based on these preliminary findings, future research should aim to examine the boundaries with anxiety and depressive disorders and to clarify the distinct neural mechanisms.
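For readers unfamiliar with the measure, LDAEP is typically quantified as the slope of the N1/P2 amplitude/intensity function; a minimal sketch with synthetic amplitudes (illustrative values, not the study's data):

```python
import numpy as np

# Synthetic N1/P2 peak-to-peak amplitudes (µV) at one electrode (e.g., F4)
# for tones of increasing intensity.
intensities_db = np.array([60, 70, 80, 90, 100])
n1p2_uv = np.array([3.1, 4.0, 5.2, 6.1, 7.4])

# LDAEP is the slope of the amplitude/intensity function (here in µV per dB);
# a steeper slope is commonly interpreted as lower serotonergic activity.
ldaep, _ = np.polyfit(intensities_db, n1p2_uv, 1)
print(f"LDAEP = {ldaep:.3f} µV/dB")
```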


Subject(s)
Loudness Perception , Phobic Disorders , Humans , Female , Loudness Perception/physiology , Evoked Potentials, Auditory/physiology , Brain , Electroencephalography
7.
J Assoc Res Otolaryngol ; 23(5): 665-680, 2022 10.
Article in English | MEDLINE | ID: mdl-35918501

ABSTRACT

The stimulation rate in cochlear implant (CI) sound coding, or the "carrier" rate in pulses per second (pps), is known to influence pitch perception, as well as loudness perception and sound quality. Our main objective was to investigate the effects of reduced carrier rate on the loudness and pitch of coded speech samples. We describe two experiments with 16 Nucleus® CI users, where we controlled modulation characteristics and carrier rate using Spectral and Temporal Enhanced Processing (STEP), a novel experimental multichannel sound coder. We used a fixed set of threshold and comfortable stimulation levels for each subject, obtained from clinical MAPs. In the first experiment, we determined equivalence for voice pitch ranking and voice gender categorization between the Advanced Combination Encoder (ACE), a widely used clinical strategy in Nucleus® recipients, and STEP for fundamental frequencies (F0) of 120-250 Hz. In the second experiment, loudness was determined as a function of the input amplitude of speech samples for carrier rates of 1000, 500, and 250 pps per channel. Then, using equally loud sound coder programs, we evaluated the effect of carrier rate on voice pitch perception. Although nearly all subjects could categorize voice gender significantly above chance, pitch ranking varied across subjects. Overall, carrier rate did not substantially affect voice pitch ranking or voice gender categorization, provided that the carrier rate was at least twice the fundamental frequency or that stimulation pulses for the lowest (250 pps) carrier were aligned to F0 peaks. These results indicate that carrier rates as low as 250 pps per channel are sufficient to support functional voice pitch perception for those CI users sensitive to temporal pitch cues, at least when temporal modulations and pulse timings in the coder output are well controlled by novel strategies such as STEP.


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Humans , Cochlear Implantation/methods , Pitch Perception/physiology , Loudness Perception/physiology , Cues , Speech Perception/physiology , Acoustic Stimulation/methods
8.
Audiol Neurootol ; 27(6): 469-477, 2022.
Article in English | MEDLINE | ID: mdl-36007501

ABSTRACT

INTRODUCTION: The common mechanism of tinnitus, hyperacusis, and loudness perception is hypothesized to be related to central gain. Although central gain increases with attempts to compensate for hearing loss, reduced input can also be observed in those with clinically normal hearing. This study aimed to evaluate the loudness growth function of tinnitus patients with and without hyperacusis using behavioural and electrophysiological methods. METHODS: The study comprised three groups totalling 60 subjects with clinically normal hearing: a control group (10 men and 10 women; mean age 39.8, SD 11.8 years), a tinnitus group (10 men and 10 women; mean age 40.9, SD 12.2 years), and a hyperacusis group who also had tinnitus (7 men and 13 women; mean age 38.7, SD 14.6 years). Loudness discomfort levels (LDLs), categorical loudness scaling (CLS), and cortical auditory evoked potentials were used for the evaluation of loudness growth. N1-P2 component amplitudes and latencies were measured. RESULTS: LDL results at 500, 1,000, 2,000, 4,000, and 8,000 Hz showed a significant difference between the hyperacusis group and the other two groups (p < 0.001). In the loudness scaling test performed with 500 Hz and 2,000 Hz narrow-band noise (NBN) stimuli, a significant difference was observed between the hyperacusis group and the other two groups in the "medium," "loud," and "very loud" categories (p < 0.001). In the cortical examination performed with 500 Hz and 2,000 Hz NBN stimuli at 40, 60, and 80 dB nHL, no significant difference was observed between the groups in N1 or P2 latency or in N1-P2 peak-to-peak amplitude. CONCLUSION: Although the hyperacusis group differed significantly from the other groups on the behavioural tests, the same cannot be said for the electrophysiological tests. In our attempt to differentiate tinnitus and hyperacusis with electrophysiological measures of the loudness growth function, the N1 and P2 responses did not prove to be suitable measures. However, it appears to be beneficial to use CLS in addition to LDLs in behavioural testing.


Subject(s)
Hyperacusis , Tinnitus , Male , Humans , Female , Adult , Hearing , Loudness Perception/physiology , Hearing Tests
9.
J Int Med Res ; 50(7): 3000605221109789, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35808808

ABSTRACT

OBJECTIVE: Although serotonergic dysfunction is significantly associated with major depressive disorder (MDD) and schizophrenia (SCZ), comparison of serotonergic dysfunction between the two diseases has received little attention. Serotonin hypotheses have suggested diminished and elevated serotonin activity in MDD and SCZ, respectively. However, the foundations underlying these hypotheses are unclear regarding changes in serotonin neurotransmission in the aging brain. The loudness dependence of auditory evoked potentials (LDAEP) reflects serotonin neurotransmission. The present study compared the LDAEP between patients with SCZ or MDD and healthy controls (HCs). We further examined whether age was correlated with the LDAEP and clinical symptoms. METHODS: This prospective clinical study included 105 patients with SCZ (n = 54) or MDD (n = 51). Additionally, 35 HCs were recruited for this study. The LDAEP was measured at the midline channels of a 62-channel electroencephalography montage. RESULTS: Patients with SCZ or MDD showed a significantly smaller mean LDAEP than HCs. The LDAEP was positively correlated with age in patients with SCZ or MDD. CONCLUSIONS: Changes in central serotonergic activity could be indicated by evaluating the LDAEP in patients with SCZ or MDD. Age-related reductions in serotonergic activity may be screened using the LDAEP in patients with SCZ or MDD.


Subject(s)
Depressive Disorder, Major , Schizophrenia , Depression , Electroencephalography , Evoked Potentials, Auditory/physiology , Humans , Loudness Perception/physiology , Prospective Studies , Serotonin
10.
Med Sci Monit ; 28: e936373, 2022 Apr 09.
Article in English | MEDLINE | ID: mdl-35396343

ABSTRACT

Loudness recruitment is a common symptom of hearing loss induced by cochlear lesions, which is defined as an abnormally fast growth of loudness perception of sound intensity. This is different from hyperacusis, which is defined as "abnormal intolerance to regular noises" or "extreme amplification of sounds that are comfortable to the average individual". Although both are characterized by abnormally high sound amplification, the mechanisms of occurrence are distinct. Damage to the outer hair cells alters the nonlinear characteristics of the basilar membrane, resulting in aberrant auditory nerve responses that may be connected to loudness recruitment. In contrast, hyperacusis is an aberrant condition characterized by maladaptation of the central auditory system. Peripheral injury can produce fluctuations in loudness recruitment, but this is not always the source of hyperacusis. Hyperacusis can also be accompanied by aversion to sound and fear of sound stimuli, in which the limbic system may play a critical role. This brief review aims to present the current status of the neurobiological mechanisms that distinguish between loudness recruitment and hyperacusis.


Subject(s)
Hearing Loss , Hyperacusis , Acoustic Stimulation , Cochlear Nerve , Humans , Loudness Perception/physiology
11.
Article in English | MEDLINE | ID: mdl-35328839

ABSTRACT

Novel electric air transportation is emerging as an industry that could help to improve the lives of people living in both metropolitan and rural areas through integration into infrastructure and services. However, as this new resource of accessibility gains momentum, the need to investigate any potential adverse health impacts on the public becomes paramount. This paper details research investigating the effectiveness of available noise metrics and sound quality metrics (SQMs) for assessing perception of drone noise. A subjective experiment was undertaken to gather data on human response to a comprehensive set of drone sounds and to investigate the relationship between perceived annoyance, perceived loudness and perceived pitch and key psychoacoustic factors. Based on statistical analyses, subjective models were obtained for perceived annoyance, loudness and pitch of drone noise. These models provide an understanding of the key psychoacoustic features to consider in decision making in order to mitigate the impact of drone noise. For the drone sounds tested in this paper, the main contributors to perceived annoyance are perceived noise level (PNL) and sharpness; to perceived loudness, PNL and fluctuation strength; and to perceived pitch, sharpness, roughness and Aures tonality. Responses to the drone sounds tested were found to be highly sensitive to the distance between drone and receiver, measured in terms of height above ground level (HAGL). All these findings could inform the optimisation of drone operating conditions in order to mitigate community noise.
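One common way to build such subjective models (not necessarily the authors' exact pipeline) is an ordinary least-squares regression of mean ratings on candidate psychoacoustic metrics; a sketch with hypothetical values:

```python
import numpy as np

# Hypothetical data: one row per drone sound; predictors are perceived noise
# level (PNL, in PNdB) and sharpness (in acum); responses are mean ratings.
pnl = np.array([72.0, 78.5, 81.2, 85.0, 90.3, 94.1])
sharpness = np.array([1.2, 1.5, 1.4, 1.9, 2.1, 2.4])
annoyance = np.array([2.1, 3.0, 3.4, 4.6, 5.5, 6.3])

# Linear model: annoyance ~ b0 + b1*PNL + b2*sharpness
X = np.column_stack([np.ones_like(pnl), pnl, sharpness])
coefs, *_ = np.linalg.lstsq(X, annoyance, rcond=None)
print(dict(zip(["intercept", "PNL", "sharpness"], np.round(coefs, 3))))
```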


Subject(s)
Benchmarking , Unmanned Aerial Devices , Humans , Loudness Perception/physiology , Noise , Psychoacoustics
12.
Eur Psychiatry ; 65(1): e11, 2022 01 31.
Article in English | MEDLINE | ID: mdl-35094726

ABSTRACT

BACKGROUND: The experience of time, or the temporal order of external and internal events, is essential for humans. In psychiatric disorders such as depression and schizophrenia, impairment of time processing has been discussed for a long time. AIMS: In this explorative pilot study, therefore, the subjective time feeling as well as objective time perception were determined in patients with depression and schizophrenia, along with possible neurobiological correlates. METHODS: Depressed (n = 34; 32.4 ± 9.8 years; 21 men) and schizophrenic patients (n = 31; 35.1 ± 10.7 years; 22 men) and healthy subjects (n = 33; 32.8 ± 14.3 years; 16 men) were tested using time feeling questionnaires, time perception tasks and critical flicker-fusion frequency (CFF) and loudness dependence of auditory evoked potentials (LDAEP) to determine serotonergic neurotransmission. RESULTS: There were significant differences between the three groups regarding time feeling and also in time perception tasks (estimation of given time duration) and CFF (the "DOWN" condition). Regarding the LDAEP, patients with schizophrenia showed a significant negative correlation to time experience in terms of a pathologically increased serotonergic neurotransmission with disturbed time feeling. CONCLUSIONS: Impairment of time experience seems to play an important role in depression and schizophrenia, both subjectively and objectively, and novel neurobiological correlates have been uncovered. It is suggested, therefore, that alteration of experience of time should be increasingly included in the current psychopathological findings.


Subject(s)
Schizophrenia , Evoked Potentials, Auditory/physiology , Humans , Loudness Perception/physiology , Male , Mood Disorders , Pilot Projects
13.
Neurosci Lett ; 764: 136242, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34509567

ABSTRACT

Central fatigue in Parkinson's disease (PD) is a common and disabling symptom that further worsens the patients' quality of life. A deficit in the serotonergic system may be implicated in the occurrence of fatigue in patients with PD as well as in those with other chronic conditions characterized by fatigue. The loudness dependence of auditory evoked potentials (LDAEP) is a neurophysiological tool that has proved to be effective in measuring the serotonergic central function in vivo. The aim of the present study was to assess central serotonergic activity in PD patients and to explore its possible association with the presence of fatigue. LDAEP was recorded in 38 PD patients (26 without fatigue - PDnF and 12 with fatigue - PDF) and 34 healthy controls. A significant difference between parkinsonian patients and controls emerged, with patients displaying stronger LDAEP values (which reflect a lower serotonergic central tone) than controls. By contrast, no differences in LDAEP emerged between PDF and PDnF. Our electrophysiological data confirmed the presence of a deficit in serotonergic central transmission in PD. An association between this deficit and fatigue was not demonstrated. It is likely that an altered dopamine/serotonin balance, rather than a serotonin deficit alone, is involved in the genesis of central fatigue. This complex and multifaceted symptom is related above all to a dysfunction in the striato-thalamo-cortical loop that connects the neostriatum to the frontal lobe and is strongly affected by motivation.


Subject(s)
Evoked Potentials, Auditory/physiology , Fatigue/metabolism , Motivation/physiology , Parkinson Disease/complications , Serotonin/metabolism , Aged , Case-Control Studies , Electroencephalography , Fatigue/etiology , Fatigue/physiopathology , Female , Frontal Lobe/metabolism , Frontal Lobe/physiopathology , Humans , Loudness Perception/physiology , Male , Middle Aged , Neostriatum/metabolism , Neostriatum/physiopathology , Parkinson Disease/metabolism , Parkinson Disease/physiopathology , Quality of Life , Synaptic Transmission
14.
PLoS Comput Biol ; 17(8): e1009251, 2021 08.
Article in English | MEDLINE | ID: mdl-34339409

ABSTRACT

In the auditory system, tonotopy is postulated to be the substrate for a place code, where sound frequency is encoded by the location of the neurons that fire during the stimulus. Though conceptually simple, the computations that allow for the representation of intensity and complex sounds are poorly understood. Here, a mathematical framework is developed in order to define clearly the conditions that support a place code. To accommodate both frequency and intensity information, the neural network is described as a space with elements that represent individual neurons and clusters of neurons. A mapping is then constructed from acoustic space to neural space so that frequency and intensity are encoded, respectively, by the location and size of the clusters. Algebraic operations (addition and multiplication) are derived to elucidate the rules for representing, assembling, and modulating multi-frequency sound in networks. The resulting outcomes of these operations are consistent with network simulations as well as with electrophysiological and psychophysical data. The analyses show how both frequency and intensity can be encoded with a pure place code, without the need for rate or temporal coding schemes. The algebraic operations are used to describe loudness summation and suggest a mechanism for the critical band. The mathematical approach complements experimental and computational approaches and provides a foundation for interpreting data and constructing models.
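A toy reading of this construction (an illustration of the idea, not the paper's formalism): frequency sets where a cluster of active neurons sits along the tonotopic axis, intensity sets how large the cluster is, and assembling a multi-frequency sound corresponds to taking the union of clusters:

```python
import numpy as np

N = 1000                       # neurons indexed 0..N-1 along the tonotopic axis
F_LO, F_HI = 100.0, 10_000.0   # assumed frequency range (Hz), illustrative

def cluster(freq_hz, level_db):
    """Return the set of neuron indices activated by one tone."""
    center = int(N * np.log(freq_hz / F_LO) / np.log(F_HI / F_LO))
    half_width = int(2 + level_db)          # cluster size grows with intensity
    return set(range(max(0, center - half_width),
                     min(N, center + half_width + 1)))

# "Assembly": a two-tone complex is represented by the union of its clusters.
complex_sound = cluster(500, 40) | cluster(2000, 40)
print(len(cluster(500, 40)), len(cluster(500, 60)), len(complex_sound))
```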


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Models, Neurological , Acoustic Stimulation , Animals , Auditory Pathways/physiology , Computational Biology , Computer Simulation , Evoked Potentials, Auditory/physiology , Humans , Loudness Perception/physiology , Nerve Net/physiology , Neural Networks, Computer , Pitch Perception/physiology , Synaptic Transmission/physiology
15.
Hum Brain Mapp ; 42(6): 1742-1757, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33544429

ABSTRACT

Psychoacoustic research suggests that judgments of perceived loudness change differ significantly between sounds with continuous increases and decreases of acoustic intensity, often referred to as "up-ramps" and "down-ramps." The magnitude and direction of this difference, in turn, appears to depend on focused attention and the specific task performed by the listeners. This has led to the suspicion that cognitive processes play an important role in the development of the observed context effects. The present study addressed this issue by exploring neural correlates of context-dependent loudness judgments. Normal hearing listeners continuously judged the loudness of complex-tone sequences which slowly changed in level over time while auditory fMRI was performed. Regression models that included information either about presented sound levels or about individual loudness judgments were used to predict activation throughout the brain. Our psychoacoustical data confirmed robust effects of the direction of intensity change on loudness judgments. Specifically, stimuli were judged softer when following a down-ramp, and louder in the context of an up-ramp. Levels and loudness estimates significantly predicted activation in several brain areas, including auditory cortex. However, only activation in nonauditory regions was more accurately predicted by context-dependent loudness estimates as compared with sound levels, particularly in the orbitofrontal cortex and medial temporal areas. These findings support the idea that cognitive aspects contribute to the generation of context effects with respect to continuous loudness judgments.
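In outline (a sketch, not the authors' fMRI pipeline; hemodynamic-response convolution and nuisance regressors are omitted), the model comparison amounts to asking whether a regressor built from presented levels or one built from loudness judgments better explains a voxel's time course:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200                                        # time points in one run
level = rng.normal(size=T)                     # presented sound level (standardised)
judgment = level + 0.5 * rng.normal(size=T)    # loudness judgment: level plus context-driven deviation
bold = 0.8 * judgment + rng.normal(size=T)     # synthetic voxel built to track the judgments

def r2(x, y):
    """Share of variance in y explained by a straight-line fit on x."""
    pred = np.polyval(np.polyfit(x, y, 1), x)
    return 1 - np.var(y - pred) / np.var(y)

# For this synthetic voxel the judgment-based regressor should win.
print("level model R2:   ", round(r2(level, bold), 3))
print("judgment model R2:", round(r2(judgment, bold), 3))
```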


Subject(s)
Loudness Perception/physiology , Prefrontal Cortex/physiology , Psychoacoustics , Temporal Lobe/physiology , Adolescent , Adult , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Prefrontal Cortex/diagnostic imaging , Temporal Lobe/diagnostic imaging , Young Adult
16.
Neuroimage ; 213: 116733, 2020 06.
Article in English | MEDLINE | ID: mdl-32169543

ABSTRACT

Loudness dependence of auditory evoked potentials (LDAEP) has long been considered to reflect central basal serotonin transmission. However, the relationship between LDAEP and individual serotonin receptors and transporters has not been fully explored in humans and may involve other neurotransmitter systems. To examine LDAEP's relationship with the serotonin system, we performed PET using serotonin-1A (5-HT1A) imaging via [11C]CUMI-101 and serotonin transporter (5-HTT) imaging via [11C]DASB on a mixed sample of healthy controls (n = 4: 4 females, 0 males), patients with unipolar (MDD, n = 11: 4 females, 7 males) and bipolar depression (BD, n = 8: 4 females, 4 males). On these same participants, we also performed electroencephalography (EEG) within a week of PET scanning, using 1000 Hz tones of varying intensity to evoke LDAEP. We then evaluated the relationship between LDAEP and 5-HT1A or 5-HTT binding in both the raphe (5-HT1A)/midbrain (5-HTT) areas and in the temporal cortex. We found that LDAEP was significantly correlated with 5-HT1A positively and with 5-HTT negatively in the temporal cortex (p < 0.05), but not correlated with either in midbrain or raphe. In males only, exploratory analysis showed multiple regions in which LDAEP significantly correlated with 5-HT1A throughout the brain; we did not find this with 5-HTT. This multimodal study partially validates preclinical models of a serotonergic influence on LDAEP. Replication in larger samples is necessary to further clarify our understanding of the role of serotonin in perception of auditory tones.


Subject(s)
Brain/physiology , Evoked Potentials, Auditory/physiology , Loudness Perception/physiology , Serotonin Plasma Membrane Transport Proteins/metabolism , Serotonin/metabolism , Adolescent , Adult , Aged , Bipolar Disorder , Electroencephalography , Female , Humans , Male , Middle Aged , Positron-Emission Tomography , Young Adult
17.
Sci Rep ; 10(1): 1496, 2020 01 30.
Article in English | MEDLINE | ID: mdl-32001755

ABSTRACT

Whenever we move, speak, or play musical instruments, our actions generate auditory sensory input. The sensory consequences of our actions are thought to be predicted via sensorimotor integration, which involves anatomical and functional links between auditory and motor brain regions. The physiological connections are relatively well established, but less is known about how sensorimotor integration affects auditory perception. The sensory attenuation hypothesis suggests that the perceived loudness of self-generated sounds is attenuated to help distinguish self-generated sounds from ambient sounds. Sensory attenuation would work for louder ambient sounds, but could lead to less accurate perception if the ambient sounds were quieter. We hypothesize that a key function of sensorimotor integration is the facilitated processing of self-generated sounds, leading to more accurate perception under most conditions. The sensory attenuation hypothesis predicts better performance for higher but not lower intensity comparisons, whereas sensory facilitation predicts improved perception regardless of comparison sound intensity. A series of experiments tested these hypotheses, with results supporting the enhancement hypothesis. Overall, people were more accurate at comparing the loudness of two sounds when making one of the sounds themselves. We propose that the brain selectively modulates the perception of self-generated sounds to enhance representations of action consequences.


Subject(s)
Auditory Perception/physiology , Sensorimotor Cortex/physiology , Acoustic Stimulation , Auditory Cortex/physiology , Feedback, Sensory/physiology , Female , Humans , Loudness Perception/physiology , Male , Models, Neurological , Models, Psychological , Sensory Gating/physiology , Young Adult
18.
PLoS One ; 14(11): e0223075, 2019.
Article in English | MEDLINE | ID: mdl-31689327

ABSTRACT

Previous research has consistently shown that for sounds varying in intensity over time, the beginning of the sound is of higher importance for the perception of loudness than later parts (primacy effect). However, in all previous studies, the target sounds were presented in quiet, and at a fixed average sound level. In the present study, temporal loudness weights for a time-varying narrowband noise were investigated in the presence of a continuous bandpass-filtered background noise and the average sound levels of the target stimuli were varied across a range of 60 dB. Pronounced primacy effects were observed in all conditions and there were no significant differences between the temporal weights observed in the conditions in quiet and in background noise. Within the conditions in background noise, there was a significant effect of the sound level on the pattern of weights, which was mainly caused by a slight trend for increased weights at the end of the sounds ("recency effect") in the condition with lower average level. No such effect was observed for the in-quiet conditions. Taken together, the observed primacy effect is largely independent of masking as well as of sound level. Compatible with this conclusion, the observed primacy effects in quiet and in background noise can be well described by an exponential decay function using parameters based on previous studies. Simulations using a model for the partial loudness of time-varying sounds in background noise showed that the model does not predict the observed temporal loudness weights.
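The exponential decay description of the temporal weights can be written down in a few lines; a sketch in which the decay constant and floor are assumed values rather than the parameters used in the paper:

```python
import numpy as np

# Temporal weights for a sound made of n consecutive level segments,
# modelled as an exponential decay plus a constant floor.
def temporal_weights(n_segments=10, tau=1.5, floor=0.3):
    t = np.arange(n_segments)
    w = np.exp(-t / tau) + floor
    return w / w.sum()          # normalise so the weights sum to 1

w = temporal_weights()
print(np.round(w, 3))           # large weight on the first segments = primacy effect
```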


Subject(s)
Loudness Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Auditory Threshold/physiology , Female , Humans , Male , Models, Psychological , Noise , Perceptual Masking/physiology , Psychoacoustics , Sound , Time Factors , Young Adult
19.
J Acoust Soc Am ; 145(6): 3586, 2019 06.
Article in English | MEDLINE | ID: mdl-31255128

ABSTRACT

Contributions of individual frequency bands to judgments of total loudness can be assessed by varying the level of each band independently from one presentation to the next and determining the relation between the change in level of each band and the loudness judgment. In a previous study, measures of perceptual weight obtained in this way for noise stimuli consisting of 15 bands showed greater weight associated with the highest and lowest bands than loudness models would predict. This was true even for noise with the long-term average speech spectrum, where the highest band contained little energy. One explanation is that listeners were basing decisions on some attribute other than loudness. The current study replicated earlier results for noise stimuli and included conditions using 15 tones located at the center frequencies of the noise bands. Although the two types of stimuli sound very different, the patterns of perceptual weight were nearly identical, suggesting that both sets of results are based on loudness judgments and that the edge bands play an important role in those judgments. The importance of the highest band was confirmed in a loudness-matching task involving all combinations of noise and tonal stimuli.
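Perceptual weights of this kind are typically estimated by regressing trial-by-trial judgments on the per-band level perturbations; a sketch for a two-interval "which was louder?" task with a simulated listener (all names and numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_bands = 2000, 15

# Per-band level perturbations (dB) for the two intervals of each trial.
dl1 = rng.normal(0, 2, (n_trials, n_bands))
dl2 = rng.normal(0, 2, (n_trials, n_bands))

# Simulated listener: internal loudness difference puts extra weight on the
# lowest and highest bands (the "edge band" pattern), plus decision noise.
true_w = np.ones(n_bands); true_w[0] = true_w[-1] = 2.0
resp = ((dl1 - dl2) @ true_w + rng.normal(0, 3, n_trials)) > 0   # "interval 1 louder"

# Estimated weights: regression of the decision on per-band level differences
# (a linear probability fit for brevity; logistic regression is more typical).
X = np.column_stack([np.ones(n_trials), dl1 - dl2])
coefs, *_ = np.linalg.lstsq(X, resp.astype(float), rcond=None)
weights = coefs[1:] / np.abs(coefs[1:]).sum()
print(np.round(weights, 3))     # edge bands should come out with larger weights
```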


Subject(s)
Auditory Perception/physiology , Auditory Threshold/physiology , Loudness Perception/physiology , Perceptual Masking , Acoustic Stimulation/methods , Adult , Humans , Male , Noise , Sound
20.
Psicológica (Valencia. Internet) ; 40(2): 85-104, Jul 2019. illus., graphs
Article in English | IBECS | ID: ibc-191658

ABSTRACT

Although the perceptual association between verticality and pitch has been widely studied, the link between loudness and verticality is not yet fully understood. While loud and quiet sounds are assumed to be equally associated crossmodally with spatial elevation, there are perceptual differences between the two types of sounds that may suggest the contrary. For example, loud sounds tend to generate greater activity, both behaviourally and neurally, than quiet sounds. Here we investigated whether this difference percolates into the crossmodal correspondence between loudness and verticality. In an initial phase, participants learned one-to-one arbitrary associations between two tones differing in loudness (82 dB vs. 56 dB) and two coloured rectangles (blue vs. yellow). During the experimental phase, they were presented with the two coloured stimuli (each one located above or below a central "departure" point) together with one of the two tones. Participants had to indicate which of the two coloured rectangles corresponded to the previously associated tone by moving a mouse cursor from the departure point towards the target. The results revealed that participants were significantly faster responding to the loud tone when the visual target was located above (congruent condition) than when the target was below the departure point (incongruent condition). For quiet tones, no differences were found between the congruent (quiet-down) and the incongruent (quiet-up) conditions. Overall, this pattern of results suggests that possible differences in the neural activity generated by loud and quiet sounds influence the extent to which loudness and spatial elevation share representational content.




Subject(s)
Male , Adult , Humans , Female , Young Adult , Loudness Perception/physiology , Auditory Threshold/physiology , Photic Stimulation , Acoustic Stimulation