1.
Behav Res Methods ; 56(4): 3814-3830, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38684625

ABSTRACT

The ability to detect the absolute location of sensory stimuli can be quantified with either error-based metrics derived from single-trial localization errors or regression-based metrics derived from a linear regression of localization responses on the true stimulus locations. Here we tested the agreement between these two approaches in estimating accuracy and precision in a large sample of 188 subjects who localized auditory stimuli from different azimuthal locations. A subsample of 57 subjects was subsequently exposed to audiovisual stimuli with a consistent spatial disparity before performing the sound localization test again, allowing us to additionally test which of the different metrics best assessed correlations between the amount of crossmodal spatial recalibration and baseline localization performance. First, our findings support a distinction between accuracy and precision. Localization accuracy was mainly reflected in the overall spatial bias and was moderately correlated with precision metrics. However, in our data, the variability of single-trial localization errors (variable error in error-based metrics) and the amount by which the eccentricity of target locations was overestimated (slope in regression-based metrics) were highly correlated, suggesting that intercorrelations between individual metrics need to be carefully considered in spatial perception studies. Second, exposure to spatially discrepant audiovisual stimuli resulted in a shift in bias toward the side of the visual stimuli (ventriloquism aftereffect) but did not affect localization precision. The size of the aftereffect shift in bias was at least partly explainable by unspecific test repetition effects, highlighting the need to account for inter-individual baseline differences in studies of spatial learning.


Subject(s)
Space Perception , Humans , Space Perception/physiology , Female , Male , Adult , Sound Localization , Photic Stimulation , Visual Perception/physiology , Young Adult , Acoustic Stimulation/methods , Auditory Perception/physiology
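
As a rough illustration of the two families of metrics compared in the abstract above, the following Python sketch computes error-based measures (constant and variable error) and regression-based measures (slope and intercept of responses regressed on true azimuths) from simulated localization data. The simulated responses and all parameter values are hypothetical and not taken from the study; the snippet only mirrors the general logic of the two approaches.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-subject data: true azimuths (deg) and localization responses.
true_az = np.tile(np.array([-30.0, -15.0, 0.0, 15.0, 30.0]), 20)
responses = 1.2 * true_az + 2.0 + rng.normal(0.0, 5.0, size=true_az.size)

errors = responses - true_az

# Error-based metrics: constant error (overall spatial bias) and variable error
# (trial-to-trial variability of the single-trial localization errors).
constant_error = errors.mean()
variable_error = errors.std(ddof=1)

# Regression-based metrics: a slope > 1 indicates overestimated eccentricity,
# the intercept reflects a global bias.
slope, intercept = np.polyfit(true_az, responses, deg=1)

print(f"constant error = {constant_error:.2f} deg, variable error = {variable_error:.2f} deg")
print(f"slope = {slope:.2f}, intercept = {intercept:.2f} deg")
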
2.
Psych J ; 13(3): 376-386, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38655599

ABSTRACT

The accurate estimation of time-to-collision (TTC) is essential for the survival of organisms. Previous studies have revealed that the emotional properties of approaching stimuli can influence the estimation of TTC, indicating that approaching threatening stimuli are perceived to collide with the observers earlier than they actually do, and earlier than non-threatening stimuli. However, threatening stimuli are not only more negative in valence but also higher in arousal than non-threatening stimuli. To date, the effect of arousal on TTC estimation has remained unclear. In addition, inconsistent findings may result from the different experimental settings employed in previous studies. To investigate whether the underestimation of TTC is attributable to threat or to high arousal, three experiments with the same settings were conducted. In Experiment 1, the underestimation of the TTC of threatening stimuli relative to non-threatening stimuli was replicated when arousal was not controlled. In Experiments 2 and 3, the underestimation effect of threatening stimuli disappeared when they were compared to positive stimuli of similar arousal. These findings suggest that threat alone is not sufficient to explain the underestimation effect, and that arousal also plays a significant role in the TTC estimation of approaching stimuli. Further studies are required to validate the effect of arousal on TTC estimation, as no difference was observed in Experiment 3 between the estimated TTC of high- and low-arousal stimuli.


Subject(s)
Arousal , Time Perception , Humans , Arousal/physiology , Female , Male , Adult , Young Adult , Time Perception/physiology , Emotions/physiology
3.
J Exp Child Psychol ; 241: 105864, 2024 May.
Article in English | MEDLINE | ID: mdl-38335709

ABSTRACT

Acquiring sequential information is of utmost importance, for example, for language acquisition in children. Yet, the long-term storage of statistical learning in children is poorly understood. To address this issue, 27 7-year-olds and 28 young adults completed four sessions of visual sequence learning (Year 1). From this sample, 16 7-year-olds and 20 young adults participated in another four equivalent sessions after a 12-month delay (Year 2). The first three sessions of each year used Stimulus Set 1, and the last session used Stimulus Set 2 to investigate transfer effects. Each session consisted of alternating learning and test phases in a modified artificial grammar learning task. In Year 1, 7-year-olds and adults learned the regularities and showed transfer to Stimulus Set 2. Both groups retained their final performance level over the 1-year period. In Year 2, children and adults continued to improve with Stimulus Set 1 but did not show additional transfer gains. Adults overall outperformed children, but transfer effects were indistinguishable between the two groups. The current results suggest that repeated sequence learning forms long-term memory traces that can be used to generalize sequence rules to new visual input. However, the current study did not provide evidence for a childhood advantage in learning and remembering sequence rules.


Subject(s)
Language Development , Linguistics , Child , Young Adult , Humans , Spatial Learning , Mental Recall
4.
Philos Trans R Soc Lond B Biol Sci ; 378(1886): 20220340, 2023 09 25.
Article in English | MEDLINE | ID: mdl-37545299

ABSTRACT

Auditory and visual information involve different coordinate systems, with auditory spatial cues anchored to the head and visual spatial cues anchored to the eyes. Information about eye movements is therefore critical for reconciling visual and auditory spatial signals. The recent discovery of eye movement-related eardrum oscillations (EMREOs) suggests that this process could begin as early as the auditory periphery. How this reconciliation might happen remains poorly understood. Because humans and monkeys both have mobile eyes and therefore both must perform this shift of reference frames, comparison of the EMREO across species can provide insights into shared and therefore important parameters of the signal. Here we show that rhesus monkeys, like humans, have a consistent, significant EMREO signal that carries parametric information about eye displacement as well as onset times of eye movements. The dependence of the EMREO on the horizontal displacement of the eye is its most consistent feature, and is shared across behavioural tasks, subjects and species. Differences chiefly involve the waveform frequency (higher in monkeys than in humans) and patterns of individual variation (more prominent in monkeys than in humans), and the waveform of the EMREO when factors due to horizontal and vertical eye displacements were controlled for. This article is part of the theme issue 'Decision and control processes in multisensory perception'.


Subject(s)
Eye Movements , Tympanic Membrane , Humans , Cues , Movement
5.
Trends Cogn Sci ; 27(10): 961-973, 2023 10.
Article in English | MEDLINE | ID: mdl-37208286

ABSTRACT

Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.


Subject(s)
Auditory Perception , Spatial Processing , Humans , Adult , Cues , Acoustic Stimulation , Visual Perception
6.
bioRxiv ; 2023 May 22.
Article in English | MEDLINE | ID: mdl-36945629

ABSTRACT

Auditory and visual information involve different coordinate systems, with auditory spatial cues anchored to the head and visual spatial cues anchored to the eyes. Information about eye movements is therefore critical for reconciling visual and auditory spatial signals. The recent discovery of eye movement-related eardrum oscillations (EMREOs) suggests that this process could begin as early as the auditory periphery. How this reconciliation might happen remains poorly understood. Because humans and monkeys both have mobile eyes and therefore both must perform this shift of reference frames, comparison of the EMREO across species can provide insights into shared and therefore important parameters of the signal. Here we show that rhesus monkeys, like humans, have a consistent, significant EMREO signal that carries parametric information about eye displacement as well as onset times of eye movements. The dependence of the EMREO on the horizontal displacement of the eye is its most consistent feature, and is shared across behavioral tasks, subjects, and species. Differences chiefly involve the waveform frequency (higher in monkeys than in humans) and patterns of individual variation (more prominent in monkeys than humans), and the waveform of the EMREO when factors due to horizontal and vertical eye displacements were controlled for.

7.
iScience ; 25(6): 104439, 2022 Jun 17.
Article in English | MEDLINE | ID: mdl-35874923

ABSTRACT

To clarify the role of sensory experience during early development for adult multisensory learning capabilities, we probed audiovisual spatial processing in human individuals who had been born blind because of dense congenital cataracts (CCs) and who subsequently had received cataract removal surgery, some not before adolescence or adulthood. Their ability to integrate audiovisual input and to recalibrate multisensory spatial representations was compared to that of normally sighted control participants and individuals with a history of developmental (later-onset) cataracts. Results in CC individuals revealed both normal multisensory integration in audiovisual trials (ventriloquism effect) and normal recalibration of unimodal auditory localization following exposure to spatially discrepant audiovisual stimuli (ventriloquism aftereffect), as observed in the control groups. In addition, only the CC group recalibrated unimodal visual localization after audiovisual exposure. Thus, in parallel to typical multisensory integration and learning, atypical crossmodal mechanisms coexisted in CC individuals, suggesting that multisensory recalibration capabilities are defined during a sensitive period in development.

8.
Multisens Res ; : 1-19, 2021 May 31.
Article in English | MEDLINE | ID: mdl-34062510

ABSTRACT

Reliability-based cue combination is a hallmark of multisensory integration, while the role of cue reliability for crossmodal recalibration is less well understood. The present study investigated whether visual cue reliability affects audiovisual recalibration in adults and children. Participants had to localize sounds, which were presented either alone or in combination with a spatially discrepant high- or low-reliability visual stimulus. In a previous study, we had shown that the ventriloquist effect (indicating multisensory integration) was overall larger in the child groups and that the shift in sound localization toward the spatially discrepant visual stimulus decreased with visual cue reliability in all groups. The present study replicated the onset of the immediate ventriloquist aftereffect (a shift in unimodal sound localization following a single exposure to a spatially discrepant audiovisual stimulus) at the age of 6-7 years. In adults, the immediate ventriloquist aftereffect depended on visual cue reliability, whereas the cumulative ventriloquist aftereffect (reflecting the audiovisual spatial discrepancies over the complete experiment) did not. In 6-7-year-olds, the immediate ventriloquist aftereffect was independent of visual cue reliability. The present results are compatible with the idea that immediate and cumulative crossmodal recalibration are dissociable processes and that the immediate ventriloquist aftereffect is more closely related to genuine multisensory integration.
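
For readers unfamiliar with the reliability-based cue combination referred to in the first sentence above, the following sketch shows the textbook maximum-likelihood weighting of an auditory and a visual location estimate by their reliabilities (inverse variances). The function and all numeric values are hypothetical illustrations of the principle, not the model or parameters used in the study.

# Reliability-weighted (maximum-likelihood) fusion of two spatial cues.
def fuse(mu_a, sigma_a, mu_v, sigma_v):
    r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2  # reliabilities = inverse variances
    w_v = r_v / (r_a + r_v)                        # weight given to the visual cue
    mu_av = w_v * mu_v + (1.0 - w_v) * mu_a        # fused location estimate
    sigma_av = (1.0 / (r_a + r_v)) ** 0.5          # fused estimate is more precise
    return mu_av, sigma_av

# A high-reliability visual cue pulls the estimate strongly toward vision ...
print(fuse(mu_a=0.0, sigma_a=8.0, mu_v=10.0, sigma_v=2.0))
# ... whereas a low-reliability (e.g., blurred) visual cue pulls it less.
print(fuse(mu_a=0.0, sigma_a=8.0, mu_v=10.0, sigma_v=8.0))
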

9.
Atten Percept Psychophys ; 82(7): 3490-3506, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32627131

ABSTRACT

According to the Bayesian framework of multisensory integration, audiovisual stimuli associated with a stronger prior belief that they share a common cause (i.e., causal prior) are predicted to result in a greater degree of perceptual binding and therefore greater audiovisual integration. In the present psychophysical study, we systematically manipulated the causal prior while keeping sensory evidence constant. We paired auditory and visual stimuli during an association phase to be spatiotemporally either congruent or incongruent, with the goal of driving the causal prior in opposite directions for different audiovisual pairs. Following this association phase, every pairwise combination of the auditory and visual stimuli was tested in a typical ventriloquism-effect (VE) paradigm. The size of the VE (i.e., the shift of auditory localization towards the spatially discrepant visual stimulus) indicated the degree of multisensory integration. Results showed that exposure to an audiovisual pairing as spatiotemporally congruent compared to incongruent resulted in a larger subsequent VE (Experiment 1). This effect was further confirmed in a second VE paradigm, where the congruent and the incongruent visual stimuli flanked the auditory stimulus, and a VE in the direction of the congruent visual stimulus was shown (Experiment 2). Since the unisensory reliabilities for the auditory or visual components did not change after the association phase, the observed effects are likely due to changes in multisensory binding by association learning. As suggested by Bayesian theories of multisensory processing, our findings support the existence of crossmodal causal priors that are flexibly shaped by experience in a changing world.


Subject(s)
Auditory Perception , Visual Perception , Acoustic Stimulation , Bayes Theorem , Humans , Photic Stimulation
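
The causal prior manipulated in the study above corresponds to the p(common cause) term of Bayesian causal-inference models of multisensory perception. The sketch below is a minimal, generic implementation of such a model with model averaging; the parameter values are hypothetical and the code is not the authors' analysis, but it illustrates why a stronger causal prior predicts a larger shift of auditory localization toward the visual stimulus.

import numpy as np

def ventriloquism_estimate(x_a, x_v, sigma_a, sigma_v,
                           sigma_p=20.0, mu_p=0.0, p_common=0.5):
    # Model-averaged auditory location estimate under Bayesian causal inference.
    # x_a, x_v are noisy auditory/visual measurements (deg); p_common is the causal prior.
    # All parameter values used here are hypothetical illustration values.
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the measurement pair under a common cause (C = 1) ...
    d1 = va * vv + va * vp + vv * vp
    like1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + (x_a - mu_p)**2 * vv
                           + (x_v - mu_p)**2 * va) / d1) / (2 * np.pi * np.sqrt(d1))
    # ... and under two independent causes (C = 2).
    d2 = (va + vp) * (vv + vp)
    like2 = np.exp(-0.5 * ((x_a - mu_p)**2 / (va + vp)
                           + (x_v - mu_p)**2 / (vv + vp))) / (2 * np.pi * np.sqrt(d2))

    post_common = like1 * p_common / (like1 * p_common + like2 * (1 - p_common))

    # Optimal auditory estimates conditional on each causal structure.
    s_common = (x_a / va + x_v / vv + mu_p / vp) / (1 / va + 1 / vv + 1 / vp)
    s_separate = (x_a / va + mu_p / vp) / (1 / va + 1 / vp)

    # Model averaging: a stronger causal prior yields a larger shift toward vision.
    return post_common * s_common + (1 - post_common) * s_separate

# A larger causal prior (e.g., after congruent association) predicts a larger VE.
print(ventriloquism_estimate(x_a=0.0, x_v=10.0, sigma_a=6.0, sigma_v=2.0, p_common=0.8))
print(ventriloquism_estimate(x_a=0.0, x_v=10.0, sigma_a=6.0, sigma_v=2.0, p_common=0.2))
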
10.
Eur J Neurosci ; 52(7): 3763-3775, 2020 10.
Article in English | MEDLINE | ID: mdl-32403183

ABSTRACT

Visual input constantly recalibrates auditory spatial representations. Exposure to isochronous audiovisual stimuli with a fixed spatial disparity typically results in a subsequent auditory localization bias (ventriloquism aftereffect, VAE), whereas exposure to spatially congruent audiovisual stimuli improves subsequent auditory localization (multisensory enhancement, ME). Here, we tested whether cross-modal recalibration is affected by the stimulation rate and/or the distribution of audiovisual spatial disparities during training. Auditory localization was tested before and after participants were exposed either to audiovisual stimuli with a constant spatial disparity of 13.5° (VAE) or to spatially congruent audiovisual stimulation (ME). In a between-subjects design, audiovisual stimuli were presented either at a low frequency of 2 Hz, as used in previous studies of VAE and ME, or intermittently at a high frequency of 10 Hz, which mimics long-term potentiation (LTP) protocols and which was found superior in eliciting unisensory perceptual learning. Compared to low-frequency stimulation, VAE was reduced after high-frequency stimulation, whereas ME occurred regardless of the stimulation protocol. In two additional groups, we manipulated the spatial distribution of audiovisual stimuli in the low-frequency condition. Stimuli were presented with varying audiovisual disparities centered around 13.5° (VAE) or 0° (ME). Both VAE and ME were as strong as after exposure to a fixed spatial disparity of 13.5° or 0°, respectively. Taken together, our results suggest (a) that VAE and ME represent partly dissociable forms of learning and (b) that auditory representations adjust to the overall stimulus statistics rather than to a specific audiovisual spatial relationship.


Subject(s)
Auditory Perception , Sound Localization , Acoustic Stimulation , Humans , Learning , Photic Stimulation , Visual Perception
11.
Front Hum Neurosci ; 14: 72, 2020.
Article in English | MEDLINE | ID: mdl-32256326

ABSTRACT

Working memory (WM) refers to the temporary retention and manipulation of information, and its capacity is highly susceptible to training. Yet, the neural mechanisms that allow for increased performance under demanding conditions are not fully understood. We expected that post-training efficiency in WM performance would modulate neural processing during high-load tasks. We tested this hypothesis using electroencephalography (EEG; N = 39) by comparing source-space spectral power of healthy adults performing low- and high-load auditory WM tasks. Prior to the assessment, participants either underwent modality-specific auditory WM training, underwent modality-irrelevant tactile WM training, or were not trained (active control). After modality-specific training, participants showed higher behavioral performance compared to the control group. EEG data analysis revealed general effects of WM load, across all training groups, in the theta, alpha, and beta frequency bands. With increased load, theta-band power increased over frontal areas and decreased over parietal areas. Centro-parietal alpha-band power and central beta-band power decreased with load. Interestingly, in the high-load condition, a tendency toward reduced beta-band power in the right medial temporal lobe was observed in the modality-specific WM training group compared to the modality-irrelevant and active control groups. Our finding that WM processing during the high-load condition changed after modality-specific WM training, showing reduced beta-band activity in voice-selective regions, possibly indicates a more efficient maintenance of task-relevant stimuli. The general load effects suggest that WM performance under high-load demands involves complementary mechanisms, combining a strengthening of task-relevant and a suppression of task-irrelevant processing.

12.
Curr Biol ; 30(9): 1726-1732.e7, 2020 05 04.
Article in English | MEDLINE | ID: mdl-32197090

ABSTRACT

It has been hypothesized that crossmodal recalibration plays a crucial role for the development of multisensory integration capabilities [1]. To test the developmental trajectory of multisensory integration and crossmodal recalibration, we used a combined ventriloquist/ventriloquist aftereffect paradigm [2] in children aged 5-9 years. The ventriloquist effect (indicating multisensory integration), that is, the shift of auditory localization toward simultaneously presented but spatially discrepant visual stimuli, was larger in children than in adults, which was attributed to a lower auditory localization precision in the children. In fact, the size of the ventriloquist effect depended on the visual stimulus reliability in both children and adults. In all groups, the ventriloquist effect was best explained by a causal inference model. In contrast to their multisensory integration capabilities, 5-year-old children did not recalibrate. The immediate ventriloquist aftereffect (indicating recalibration after a single exposure to a spatially discrepant audio-visual stimulus) emerged in 6- to 7-year-old children, whereas the cumulative ventriloquist aftereffect (reflecting recalibration to the audio-visual spatial discrepancies over the complete experiment) was not observed before the age of 8 years. First, in contrast to common beliefs, the present results provide evidence that multisensory integration precedes rather than follows crossmodal recalibration during development. Second, we report developmental evidence for a dissociation of the processes involved in multisensory integration and immediate as well as cumulative recalibration. We speculate that multisensory integration is a prerequisite for crossmodal recalibration, because the multisensory percept, rather than unimodal cues, might comprise a crucial signal for the calibration of the sensory systems.


Subject(s)
Auditory Perception/physiology , Sound Localization/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Child , Child, Preschool , Female , Humans , Male , Photic Stimulation , Psychomotor Performance/physiology , Young Adult
13.
Front Robot AI ; 7: 85, 2020.
Article in English | MEDLINE | ID: mdl-33501252

ABSTRACT

Extracting information from noisy signals is of fundamental importance for both biological and artificial perceptual systems. To provide tractable solutions to this challenge, the fields of human perception and machine signal processing (SP) have developed powerful computational models, including Bayesian probabilistic models. However, little true integration between these fields exists in their applications of the probabilistic models for solving analogous problems, such as noise reduction, signal enhancement, and source separation. In this mini review, we briefly introduce and compare selective applications of probabilistic models in machine SP and human psychophysics. We focus on audio and audio-visual processing, using examples of speech enhancement, automatic speech recognition, audio-visual cue integration, source separation, and causal inference to illustrate the basic principles of the probabilistic approach. Our goal is to identify commonalities between probabilistic models addressing brain processes and those aiming at building intelligent machines. These commonalities could constitute the closest points for interdisciplinary convergence.

14.
Front Integr Neurosci ; 13: 51, 2019.
Article in English | MEDLINE | ID: mdl-31572136

ABSTRACT

Ventriloquism, the illusion that a voice appears to come from the moving mouth of a puppet rather than from the actual speaker, is one of the classic examples of multisensory processing. In the laboratory, this illusion can be reliably induced by presenting simple meaningless audiovisual stimuli with a spatial discrepancy between the auditory and visual components. Typically, the perceived location of the sound source is biased toward the location of the visual stimulus (the ventriloquism effect). The strength of the visual bias reflects the relative reliability of the visual and auditory inputs as well as prior expectations that the two stimuli originated from the same source. In addition to the ventriloquist illusion, exposure to spatially discrepant audiovisual stimuli results in a subsequent recalibration of unisensory auditory localization (the ventriloquism aftereffect). In the past years, the ventriloquism effect and aftereffect have seen a resurgence as an experimental tool to elucidate basic mechanisms of multisensory integration and learning. For example, recent studies have: (a) revealed top-down influences from the reward and motor systems on cross-modal binding; (b) dissociated recalibration processes operating at different time scales; and (c) identified brain networks involved in the neuronal computations underlying multisensory integration and learning. This mini review article provides a brief overview of established experimental paradigms to measure the ventriloquism effect and aftereffect before summarizing these pathbreaking new advancements. Finally, it is pointed out how the ventriloquism effect and aftereffect could be utilized to address some of the current open questions in the field of multisensory research.

15.
J Exp Psychol Hum Percept Perform ; 45(4): 435-440, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30920284

ABSTRACT

Sensory representations are constantly realigned. For instance, in the ventriloquism aftereffect, short exposure to audiovisual stimuli with a consistent spatial disparity results in an adjustment of auditory spatial representations. Here we tested whether repeated audiovisual training over several sessions enhances recalibration in the ventriloquism aftereffect. One group of participants (n = 16) received incremental training in which the presented degree of audiovisual spatial disparity increased over the course of 3 days, whereas a second group (n = 16) was constantly exposed to the largest disparity during all three sessions (constant training). Within each session, a significant ventriloquism aftereffect was observed in both groups. However, the size of the final ventriloquism aftereffect was larger in the constant group, due to an increase over days that was not evident in the incremental group. These findings replicated results obtained in two preliminary studies that either used a smaller sample size or included constant training only. Taken together, our findings provide strong evidence that recalibration effects are retained and consolidated between sessions, contrary to the intuitive assumption that natural audiovisual stimulation outside the laboratory would immediately overwrite recalibration. Repeated training seems to be particularly effective for consistent changes in cross-modal stimulation.


Subject(s)
Auditory Perception/physiology , Learning/physiology , Visual Perception/physiology , Adult , Calibration , Female , Humans , Male , Middle Aged , Sound Localization/physiology , Young Adult
16.
Sci Rep ; 9(1): 1666, 2019 02 07.
Article in English | MEDLINE | ID: mdl-30733577

ABSTRACT

The brain has evolved to extract behaviourally meaningful information from the environment. For example, it has been shown that visual perceptual learning (VPL) can occur for task-irrelevant stimulus features when those features are consistently paired with internal or external reinforcement signals. It is, however, unclear whether or not task-irrelevant VPL is influenced by stimulus features that are unrelated to reinforcement in a given sensory context. To address this question, we exposed participants to task-irrelevant and subliminal coherent motion stimuli in the background while they performed a central character identification task. A specific motion direction was consistently paired with the task-targets, while two other directions occurred only with distractors and, thus, were unrelated to reinforcement. We found that the magnitude of VPL of the target-paired direction was significantly greater when the distractor-paired directions were close to the target-paired direction, compared to when they were farther. Thus, even very weak signals that are both subliminal and unrelated to reinforcement are processed and exert an influence on VPL. This finding suggests that the outcome of VPL depends on the sensory context in which learning takes place and calls for a refinement of VPL theories to incorporate exposure-based influences on learning.


Subject(s)
Attention/physiology , Brain/physiology , Learning/physiology , Neuronal Plasticity/physiology , Recognition, Psychology , Sensation/physiology , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
17.
Psychol Res ; 83(7): 1400-1415, 2019 Oct.
Article in English | MEDLINE | ID: mdl-29285647

ABSTRACT

Exposure to audiovisual stimuli with a consistent spatial misalignment seems to result in a recalibration of unisensory auditory spatial representations. Previous studies have suggested that this so-called ventriloquism aftereffect is confined to the trained region of space, but have yielded inconsistent results as to whether or not recalibration generalizes to untrained sound frequencies. Here, we reassessed the spatial and frequency specificity of the ventriloquism aftereffect by testing whether auditory spatial perception can be independently recalibrated for two different sound frequencies and/or at two different spatial locations. Recalibration was confined to locations within the trained hemifield, suggesting that spatial representations were independently adjusted for the two hemifields. The frequency specificity of the ventriloquism aftereffect depended on the presence or absence of conflicting audiovisual adaptation stimuli within the same hemifield. Moreover, adaptation of two different sound frequencies in opposite directions (leftward vs. rightward) resulted in a selective suppression of leftward recalibration, even when the adapting stimuli were presented in different hemifields. Thus, instead of representing a fixed stimulus-driven process, cross-modal recalibration seems to critically depend on the sensory context and takes into account inconsistencies in the cross-modal input.


Subject(s)
Auditory Perception , Sound Localization , Space Perception , Acoustic Stimulation , Adaptation, Physiological , Adult , Female , Humans , Male , Young Adult
18.
Front Integr Neurosci ; 13: 74, 2019.
Article in English | MEDLINE | ID: mdl-32009913

ABSTRACT

In an ever-changing environment, crossmodal recalibration is crucial to maintain precise and coherent spatial estimates across different sensory modalities. Accordingly, it has been found that perceived auditory space is recalibrated toward vision after consistent exposure to spatially misaligned audio-visual stimuli. While this so-called ventriloquism aftereffect (VAE) yields internal consistency between vision and audition, it does not necessarily lead to consistency between the perceptual representation of space and the actual environment. For this purpose, feedback about the true state of the external world might be necessary. Here, we tested whether the size of the VAE is modulated by external feedback and reward. During adaptation, audio-visual stimuli with a fixed spatial discrepancy were presented. Participants had to localize the sound and received feedback about the magnitude of their localization error. In half of the sessions the feedback was based on the position of the visual stimulus, and in the other half it was based on the position of the auditory stimulus. An additional monetary reward was given if the localization error fell below a certain threshold that was based on participants' performance in the pretest. As expected, when error feedback was based on the position of the visual stimulus, auditory localization during adaptation trials shifted toward the position of the visual stimulus. Conversely, feedback based on the position of the auditory stimuli reduced the visual influence on auditory localization (i.e., the ventriloquism effect) and improved sound localization accuracy. After adaptation with error feedback based on the visual stimulus position, a typical auditory VAE (but no visual aftereffect) was observed in subsequent unimodal localization tests. By contrast, when feedback was based on the position of the auditory stimuli during adaptation, no auditory VAE was observed in subsequent unimodal auditory trials. Importantly, in this situation no visual aftereffect was found either. As feedback did not change the physical attributes of the audio-visual stimulation during adaptation, the present findings suggest that crossmodal recalibration is subject to top-down influences. Such top-down influences might help prevent miscalibration of audition toward conflicting visual stimulation in situations in which external feedback indicates that visual information is inaccurate.

19.
Cognition ; 182: 349-359, 2019 01.
Article in English | MEDLINE | ID: mdl-30389144

ABSTRACT

The processing and perception of stimuli is altered when these stimuli are not passively presented but rather are actively triggered, or "self-initiated", by the participants. For unimodal stimuli, perceptual changes in stimulus timing and intensity have been demonstrated. Initial results have suggested that self-initiation may affect multisensory processing as well. The present study examined the effects of self-initiation on audiovisual integration in the ventriloquism effect (VE), that is, the mislocalization of auditory stimuli toward a spatially displaced visual stimulus. The effects of self-initiation on the VE were investigated with audiovisual stimuli that featured varying degrees of spatial and temporal separation. Stimuli were either triggered by the participants' button press or not, and stimulus onsets were either predictable or not. Arguing from the perspective of Bayesian causal inference models, we hypothesized that self-initiation would increase the prior probability of two stimuli being integrated. Contrary to this intuitive assumption, a smaller VE was observed when the stimuli were self-initiated by the participants than when they were externally generated. Since no effects of self-initiation on unimodal processing were observed, these effects must specifically pertain to multisensory processes. Finally, the data were fit with a causal inference model in which self-initiation was associated with a reduction of the prior probability of integrating the audiovisual stimuli. In conclusion, the presence of a self-initiated motor signal influences audiovisual integration, such that auditory localization is less biased by visual stimuli, an effect that likely depends on top-down signals.


Subject(s)
Motor Activity/physiology , Sound Localization/physiology , Space Perception/physiology , Visual Perception/physiology , Adult , Humans , Psychomotor Performance/physiology , Time Factors , Young Adult
20.
Acta Psychol (Amst) ; 190: 135-141, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30114672

ABSTRACT

Tactile perception results from the interplay of peripheral and central mechanisms for the detection and sensation of objects and the discrimination and evaluation of their size, shape, and surface characteristics. Using different tasks, we investigated this interaction between more bottom-up, stimulus-driven and more top-down, attention-related and cognitive processes in tactile perception. Moreover, we were interested in the effects of age and tactile experience on this interaction. A total of 299 right-handed women participated in our study and were divided into five age groups: 18-25 years (N = 77), 30-45 years (N = 76), 50-65 years (N = 62), 66-75 years (N = 63), and older than 75 years (N = 21). They filled in a questionnaire on tactile experiences and rated their skin as either very dry, dry, normal, or oily. Furthermore, they performed three tactile tests with the left and right index fingers. Sensitivity to touch stimuli was assessed with von Frey filaments. A sandpaper test was used to examine texture discrimination performance. Spatial discrimination was investigated with a tactile Landolt ring test. Multivariate ANOVA confirmed a linear decline in tactile perceptual skills with age (F(3, 279) = 76.740; p < .001; partial eta squared = 0.452), starting in early adulthood. The largest age effects were found for the Landolt ring test and the smallest for the sandpaper test, indicating different aging slopes. Tactile experience had a positive effect on tactile performance (F(3, 279) = 4.450; p = .005; partial eta squared = 0.046), and univariate ANOVA confirmed this effect for the sandpaper and the Landolt ring tests, but not for the von Frey test. Using structural equation modelling, we confirmed two dimensions of tactile performance: one related to more peripheral or early sensory cortical (bottom-up) processes (i.e., sensitivity) and one more associated with cognitive or evaluative (top-down) processes (i.e., perception). Interestingly, the top-down processes were more strongly influenced by age than the bottom-up ones, suggesting that age-related deficits in tactile performance are mainly caused by a decline of central perceptive-evaluative capacities rather than by reduced sensitivity.


Subject(s)
Aging/physiology , Life Change Events , Longevity/physiology , Touch Perception/physiology , Touch/physiology , Adolescent , Adult , Aged , Aging/psychology , Attention/physiology , Discrimination, Psychological/physiology , Female , Fingers/physiology , Humans , Male , Middle Aged , Physical Stimulation/methods , Young Adult
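
The partial eta squared values reported above can be recovered from the F statistics and their degrees of freedom via the standard relation partial eta squared = (F * df1) / (F * df1 + df2). The short check below is not part of the original analysis; it merely reproduces the reported values of 0.452 and 0.046.

# Sanity check: partial eta squared from an F statistic and its degrees of freedom.
def partial_eta_squared(f, df1, df2):
    return (f * df1) / (f * df1 + df2)

print(round(partial_eta_squared(76.740, 3, 279), 3))  # age effect -> 0.452
print(round(partial_eta_squared(4.450, 3, 279), 3))   # tactile experience effect -> 0.046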