Results 1 - 17 of 17
1.
Sci Rep ; 14(1): 567, 2024 01 04.
Article in English | MEDLINE | ID: mdl-38177170

ABSTRACT

Responses to multisensory signals are often faster than responses to their unisensory components. This speed-up is typically attributed to target redundancy, in that a correct response can be triggered by one signal or the other. In addition, the semantic congruency of signals can modulate multisensory responses; however, the contribution of semantic content is difficult to isolate, as its manipulation commonly changes signal redundancy as well. To disentangle the effects of redundancy and semantic congruency, we manipulated semantic content but kept redundancy constant. We presented semantically congruent/incongruent animal pictures and sounds and asked participants to respond with the same response to two target animals (cats and dogs). We find that the speed-up of multisensory responses is larger for congruent (e.g., barking dogs) than for incongruent combinations (e.g., barking cats). We then used a computational modelling approach to analyse audio-visual processing interferences that may underlie the effect. Our data are best described by a model that explains the semantic congruency modulation with a parameter that was previously linked to trial sequence effects, which in our experiment arise from the repetition/switching of both sensory modality and animal category. Yet, a systematic analysis of such trial sequence effects shows that the reported congruency effect is an independent phenomenon. Consequently, we discuss potential contributors to the semantic modulation of multisensory responses.
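The redundancy speed-up described above can be illustrated with a simple race mechanism. Below is a minimal sketch, not the authors' fitted model: unisensory response times are drawn from hypothetical ex-Gaussian-like distributions, and on redundant trials the faster of the two channels triggers the response. All parameters are illustrative.

```python
# Minimal race-mechanism illustration of the redundant signals speed-up.
# Distributions and parameters are hypothetical, chosen only for plausibility.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical unisensory RT distributions (ms): normal plus exponential tail.
rt_audio  = rng.normal(300, 40, n) + rng.exponential(60, n)
rt_visual = rng.normal(320, 40, n) + rng.exponential(60, n)

# Race mechanism: on each redundant trial, the faster channel wins.
rt_redundant = np.minimum(rt_audio, rt_visual)

print(f"mean auditory RT:  {rt_audio.mean():.0f} ms")
print(f"mean visual RT:    {rt_visual.mean():.0f} ms")
print(f"mean redundant RT: {rt_redundant.mean():.0f} ms")  # faster than both
```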


Subject(s)
Semantics , Visual Perception , Humans , Animals , Dogs , Cats , Acoustic Stimulation , Photic Stimulation , Visual Perception/physiology , Computer Simulation
2.
Cognition ; 227: 105204, 2022 10.
Article in English | MEDLINE | ID: mdl-35753178

ABSTRACT

Looming motion is an ecologically salient signal that often signifies danger. In both audition and vision, humans show behavioral biases in response to perceiving looming motion, which is suggested to indicate an adaptation for survival. However, it is an open question whether such biases also occur in the combined processing of multisensory signals. Towards this aim, Cappe, Thut, Romei, and Murray (2009) found that responses to audiovisual signals were faster for congruent looming motion compared to receding motion or incongruent combinations. They considered this as evidence for the selective integration of multisensory looming signals. To test this proposal, here, we successfully replicate the behavioral results by Cappe et al. (2009). We then show that the redundant signals effect (RSE; a speedup of multisensory compared to unisensory responses) is not distinct for congruent looming motion. Instead, as predicted by a simple probability summation rule, the RSE is primarily modulated by the looming bias in audition, which suggests that multisensory processing inherits a unisensory effect. Finally, we compare a large set of so-called race models that implement probability summation but allow for interference between auditory and visual processing. The best-fitting model, selected by the Akaike Information Criterion (AIC), almost perfectly explained the RSE across conditions with interference parameters that were either constant or varied only with auditory motion. In the absence of effects jointly caused by auditory and visual motion, we conclude that selective integration is not required to explain the behavioral benefits that occur with audiovisual looming motion.
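The abstract mentions model selection by the Akaike Information Criterion. As a minimal sketch of how such a comparison works, the snippet below computes AIC scores for candidate models; the log-likelihoods, parameter counts, and model names are placeholders, not the fitted values from the paper.

```python
# AIC-based model selection, illustrated with hypothetical candidates.
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike Information Criterion: lower values indicate a better
    trade-off between goodness of fit and model complexity."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical race-model variants with different interference parameters.
candidates = {
    "constant interference":       aic(log_likelihood=-1520.3, n_params=2),
    "varies with auditory motion": aic(log_likelihood=-1518.9, n_params=3),
    "varies with both modalities": aic(log_likelihood=-1518.5, n_params=5),
}
best = min(candidates, key=candidates.get)
print({name: round(score, 1) for name, score in candidates.items()})
print("selected model:", best)
```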


Subject(s)
Auditory Perception , Visual Perception , Acoustic Stimulation/methods , Auditory Perception/physiology , Humans , Photic Stimulation/methods , Reaction Time/physiology , Visual Perception/physiology
3.
Sci Rep ; 9(1): 2921, 2019 02 27.
Article in English | MEDLINE | ID: mdl-30814642

ABSTRACT

Multisensory signals allow faster responses than their unisensory components. While this redundant signals effect (RSE) has been studied widely with diverse signals, no modelling approach has explored the RSE systematically across studies. For a comparative analysis, here, we propose three steps: The first quantifies the RSE compared to a simple, parameter-free race model. The second quantifies processing interactions beyond the race mechanism: history effects and so-called violations of Miller's bound. The third models the RSE at the level of response time distributions using a context-variant race model with two free parameters that account for the interactions. Mimicking the diversity of studies, we tested different audio-visual signals that target the interactions using a 2 × 2 design. We show that the simple race model provides overall a strong prediction of the RSE. Regarding interactions, we found that history effects do not depend on low-level feature repetition. Furthermore, violations of Miller's bound seem linked to transient signal onsets. Critically, the latter dissociates from the RSE, demonstrating that multisensory interactions and multisensory benefits are not equivalent. Overall, we argue that our approach, as a blueprint, provides both a general framework and the precision needed to understand the RSE when studied across diverse signals and participant groups.
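Violations of Miller's bound, mentioned above, are commonly tested by comparing the empirical cumulative distribution function (CDF) of redundant-signal response times against the sum of the two unisensory CDFs. Below is a minimal sketch of that test, assuming hypothetical RT samples; it is not the paper's analysis pipeline.

```python
# Testing Miller's bound: F_av(t) <= F_a(t) + F_v(t).
# RT samples and distribution parameters are hypothetical.
import numpy as np

def ecdf(sample: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Empirical cumulative distribution function evaluated at times t."""
    return np.searchsorted(np.sort(sample), t, side="right") / sample.size

rng = np.random.default_rng(1)
rt_a  = rng.gamma(20, 15, 500)   # auditory alone (ms)
rt_v  = rng.gamma(22, 15, 500)   # visual alone (ms)
rt_av = rng.gamma(17, 15, 500)   # redundant audio-visual (ms)

t = np.linspace(100, 800, 200)
bound = np.clip(ecdf(rt_a, t) + ecdf(rt_v, t), 0, 1)  # Miller's bound
violation = ecdf(rt_av, t) - bound  # positive values indicate a violation

print(f"max violation of Miller's bound: {violation.max():.3f}")
```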

4.
J Neurosci ; 33(17): 7463-74, 2013 Apr 24.
Article in English | MEDLINE | ID: mdl-23616552

ABSTRACT

The combined use of multisensory signals is often beneficial. Based on neuronal recordings in the superior colliculus of cats, three basic rules were formulated to describe the effectiveness of multisensory signals: the enhancement of neuronal responses to multisensory compared with unisensory signals is largest when signals occur at the same location ("spatial rule"), when signals are presented at the same time ("temporal rule"), and when signals are rather weak ("principle of inverse effectiveness"). These rules are also considered with respect to multisensory benefits as observed with behavioral measures, but do they capture these benefits best? To uncover the principles that rule benefits in multisensory behavior, we here investigated the classical redundant signal effect (RSE; i.e., the speedup of response times in multisensory compared with unisensory conditions) in humans. Based on theoretical considerations using probability summation, we derived two alternative principles to explain the effect. First, the "principle of congruent effectiveness" states that the benefit in multisensory behavior (here the speedup of response times) is largest when behavioral performance in corresponding unisensory conditions is similar. Second, the "variability rule" states that the benefit is largest when performance in corresponding unisensory conditions is unreliable. We then tested these predictions in two experiments, in which we manipulated the relative onset and the physical strength of distinct audiovisual signals. Our results, which are based on a systematic analysis of response time distributions, show that the RSE follows these principles very well, thereby providing compelling evidence in favor of probability summation as the underlying combination rule.
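The "principle of congruent effectiveness" follows directly from probability summation: the expected gain from taking the minimum of two response time distributions is largest when the distributions overlap. A minimal simulation, with illustrative parameters only, shows the redundancy gain shrinking as the unisensory means diverge:

```python
# Probability summation predicts the largest redundancy gain when the
# unisensory RT distributions are similar. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

for offset in (0, 50, 100, 150):  # ms difference between the modalities
    rt_a = rng.normal(300, 40, n)
    rt_v = rng.normal(300 + offset, 40, n)
    # Gain = faster unisensory mean minus mean of the trial-wise minimum.
    gain = min(rt_a.mean(), rt_v.mean()) - np.minimum(rt_a, rt_v).mean()
    print(f"offset {offset:3d} ms -> redundancy gain {gain:5.1f} ms")
```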


Subject(s)
Acoustic Stimulation/methods , Auditory Perception/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Reaction Time/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Young Adult
5.
Curr Biol ; 22(15): 1391-6, 2012 Aug 07.
Article in English | MEDLINE | ID: mdl-22771043

ABSTRACT

Perceptual decisions involve the accumulation of sensory evidence over time, a process that is corrupted by noise [1]. Here, we extend the decision-making framework to crossmodal research [2, 3] and the parallel processing of two distinct signals presented to different sensory modalities like vision and audition. Contrary to the widespread view that multisensory signals are integrated prior to a single decision [4-10], we show that evidence is accumulated for each signal separately and that consequent decisions are flexibly coupled by logical operations. We find that the strong correlation of response latencies from trial to trial is critical to explain the short latencies of multisensory decisions. Most critically, we show that increased noise in multisensory decisions is needed to explain the mean and the variability of response latencies. Precise knowledge of these key factors is fundamental for the study and understanding of parallel decision processes with multisensory signals.
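Two ingredients named in the abstract, separate decisions coupled by a logical operation and a strong trial-to-trial correlation of response latencies, can be sketched as follows. This is an illustration of the general idea, not the authors' fitted model; the distributions, parameters, and correlation values are hypothetical.

```python
# Two decision latencies with trial-to-trial correlation, coupled by a
# logical OR (respond as soon as either decision finishes).
# All values are hypothetical and for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

for rho in (0.0, 0.5, 0.9):
    # Correlated standard-normal latency components via a shared factor.
    shared = rng.normal(size=n)
    z_a = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=n)
    z_v = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=n)
    rt_a = 300 + 50 * z_a  # auditory decision latency (ms)
    rt_v = 310 + 50 * z_v  # visual decision latency (ms)
    # OR coupling: the multisensory response follows the first decision.
    rt_or = np.minimum(rt_a, rt_v)
    print(f"rho = {rho:.1f} -> mean OR-coupled RT: {rt_or.mean():.0f} ms")
```

As the loop shows, the latency correlation directly shapes the mean multisensory response time, which is why precise knowledge of this correlation matters for modelling parallel decisions.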


Subject(s)
Auditory Perception/physiology , Decision Making/physiology , Motion Perception/physiology , Noise , Female , Humans , Male , Young Adult
6.
Front Psychol ; 3: 119, 2012.
Article in English | MEDLINE | ID: mdl-22557985

ABSTRACT

To investigate the integration of features, we have developed a paradigm in which an element is rendered invisible by visual masking. Still, the features of the element are visible as part of other display elements presented at different locations and times (sequential metacontrast). In this sense, we can "transport" features non-retinotopically across space and time. The features of the invisible element integrate with features of other elements if and only if the elements belong to the same spatio-temporal group. The mechanisms of this kind of feature integration seem to be quite different from classical mechanisms proposed for feature binding. We propose that feature processing, binding, and integration occur concurrently during processes that group elements into wholes.

7.
J Vis ; 10(12): 8, 2010 Oct 01.
Article in English | MEDLINE | ID: mdl-21047740

ABSTRACT

Features of moving objects are non-retinotopically integrated along their motion trajectories, as demonstrated by a variety of recent studies. The mechanisms of non-retinotopic feature integration are largely unknown. Here, we investigated the role of attention in non-retinotopic feature integration by using the sequential metacontrast paradigm. A central line was offset either to the left or to the right. A sequence of flanking lines followed, eliciting the percept of two diverging motion streams. Although the central line was invisible, its offset was perceived within the streams. Observers attended to one stream. If an offset was introduced to one of the flanking lines in the attended stream, this offset integrated with the central line offset. No integration occurred when the offset was in the non-attended stream. Here, we manipulated the allocation of attention by using an auditory cueing paradigm. First, we show that mandatory non-retinotopic integration occurred even when the cue came long after the motion sequence. Second, we used more than two streams, of which two could merge. Offsets in different streams were integrated when the streams merged. However, offsets of one stream were not integrated when this stream had to be ignored. We propose a hierarchical two-stage model, in which motion grouping determines mandatory feature integration while attention selects motion streams for optional feature integration.


Subject(s)
Attention/physiology , Contrast Sensitivity/physiology , Cues , Motion Perception/physiology , Perceptual Masking/physiology , Visual Pathways/physiology , Acoustic Stimulation/methods , Humans , Photic Stimulation/methods , Retina/physiology , Visual Fields/physiology
8.
Psychol Sci ; 21(8): 1058-63, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20585052

ABSTRACT

Perceptual learning is the ability to improve perception through practice. Perceptual learning is usually specific for the task and features learned. For example, improvements in performance for a certain stimulus do not transfer if the stimulus is rotated by 90 degrees or is presented at a different location. These findings are usually taken as evidence that orientation-specific, retinotopic encoding processes are changed during training. In this study, we used a novel masking paradigm in which the offset in an invisible, oblique vernier stimulus was perceived in an aligned vertical or horizontal flanking stimulus presented at a different location. Our results show that learning is specific for the perceived orientation of the vernier offset but not for its actual orientation and location. Specific encoding processes cannot be invoked to explain this improvement. We propose that perceptual learning involves changes in nonretinotopic, attentional readout processes.


Subject(s)
Learning , Visual Perception , Adult , Attention/physiology , Discrimination, Psychological/physiology , Female , Humans , Learning/physiology , Male , Models, Psychological , Motion Perception , Orientation/physiology , Photic Stimulation/methods , Visual Perception/physiology , Young Adult
9.
J Exp Psychol Hum Percept Perform ; 35(6): 1670-86, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19968428

ABSTRACT

The perception of a visual target can be strongly influenced by flanking stimuli. In static displays, performance on the target improves when the distance to the flanking elements increases, presumably because feature pooling and integration vanish with distance. Here, we studied feature integration with dynamic stimuli. We show that features of single elements presented within a continuous motion stream are integrated largely independently of spatial distance (and orientation). Hence, space-based models of feature integration cannot be extended to dynamic stimuli. We suggest that feature integration is guided by perceptual grouping operations that maintain the identity of perceptual objects over space and time.


Subject(s)
Orientation/physiology , Space Perception/physiology , Time Perception/physiology , Field Dependence-Independence , Humans , Motion Perception/physiology , Photic Stimulation
10.
Neuroimage ; 48(2): 405-14, 2009 Nov 01.
Article in English | MEDLINE | ID: mdl-19540924

ABSTRACT

When presented with dynamic scenes, the brain integrates visual elements across space and time. Such non-retinotopic processing has been intensively studied from a psychophysical point of view, but little is known about the underlying neural processes. Here we used high-density EEG to reveal neural correlates of non-retinotopic feature integration. In an offset-discrimination task we presented sequences of lines for which feature integration depended on a small, endogenous shift of attention. Attention effects were observed in the stimulus-locked evoked potentials but non-retinotopic feature integration was reflected in voltage topographies time-locked to the behavioral response, lasting for about 400 ms. Statistical parametric mapping of estimated current densities revealed that this integration reduced electrical activity in an extensive network of brain areas, with the effects progressing from high-level visual, via frontal, to central ones. The results suggest that endogenously timed neural processes, rather than bottom-up ones, underlie non-retinotopic feature integration.


Subject(s)
Attention/physiology , Brain/physiology , Visual Perception/physiology , Adult , Analysis of Variance , Brain Mapping , Electroencephalography , Evoked Potentials , Female , Humans , Male , Neuropsychological Tests , Photic Stimulation , Psychophysics , Reaction Time , Task Performance and Analysis , Time Factors , Video Recording , Young Adult
11.
J Vis ; 9(13): 5.1-11, 2009 Dec 05.
Article in English | MEDLINE | ID: mdl-20055538

ABSTRACT

In human vision, the optics of the eye map neighboring points of the environment onto neighboring photoreceptors in the retina. This retinotopic encoding principle is preserved in the early visual areas. Under normal viewing conditions, due to the motion of objects and to eye movements, the retinotopic representation of the environment undergoes fast and drastic shifts. Yet, perceptually our environment appears stable, suggesting the existence of non-retinotopic representations in addition to the well-known retinotopic ones. Here, we present a simple psychophysical test to determine whether a given visual process is accomplished in retinotopic or non-retinotopic coordinates. As examples, we show that visual search and motion perception can occur within a non-retinotopic frame of reference. These findings suggest that more mechanisms than previously thought operate non-retinotopically. Whether this is true for a given visual process can easily be found out with our "litmus test."


Subject(s)
Motion Perception/physiology , Retina/physiology , Visual Cortex/physiology , Humans , Photic Stimulation , Reference Values
12.
J Vis ; 8(1): 5.1-8, 2008 Jan 11.
Article in English | MEDLINE | ID: mdl-18318608

ABSTRACT

In perceptual learning, performance often improves within a short time if only one stimulus variant is presented, such as a line bisection stimulus with one outer-line distance. However, performance stagnates if two bisection stimuli with two outer-line distances are presented randomly interleaved. Recently, S. G. Kuai, J. Y. Zhang, S. A. Klein, D. M. Levi, and C. Yu (2005) proposed that learning under such roving conditions is impossible in general. Contrary to this proposition, we show here that perceptual learning with bisection stimuli under roving is possible with extensive training of 18,000 trials. Despite this extensive training, the improvement of performance is still largely specific. Furthermore, this improvement of performance cannot be explained by an accommodation to stimulus uncertainty caused by roving.


Subject(s)
Distance Perception/physiology , Learning/physiology , Motion Perception/physiology , Adult , Humans , Photic Stimulation , Task Performance and Analysis
13.
J Vis ; 8(7): 16.1-15, 2008 Jun 30.
Article in English | MEDLINE | ID: mdl-19146249

ABSTRACT

The motion correspondence problem, one of the classical examples of perceptual organization, addresses the question of how elements are grouped across space and time. Here, we investigate motion correspondences using a new feature attribution technique. We present, for example, a grating of four lines followed by a spatially shifted grating of three lines. Observers perceive a contracting grating. To study individual line-to-line correspondences, (1) we add, as a "perceptual marker," a small Vernier offset to one line of the first grating and (2) determine to which line of the second grating this offset is attributed. This procedure allows us to infer motion correspondences because this kind of feature attribution follows perceptual grouping in dynamic displays (H. Ogmen, T. U. Otto, & M. H. Herzog, 2006). Our results show that feature attribution between the outer lines of the gratings is more consistent than between the inner lines. We interpret our results according to the principle of the "primacy of bounding contours," which states that the bounding contours of an object provide a framework for element correspondences that is more important than the internal structure of that object.


Subject(s)
Motion Perception/physiology , Space Perception/physiology , Time Perception/physiology , Humans , Photic Stimulation
14.
Adv Cogn Psychol ; 3(1-2): 107-9, 2008 Jul 15.
Article in English | MEDLINE | ID: mdl-20517502

ABSTRACT

The visibility of a target can be strongly suppressed by metacontrast masking. Still, some features of the target can be perceived within the mask. Usually, these rare cases of feature mis-localizations are assumed to reflect errors of the visual system. To the contrary, I will show that feature "mis-localizations" in metacontrast masking follow rules of motion grouping and, hence, should be viewed as part of a systematic feature attribution process.

15.
J Vis ; 6(10): 1079-86, 2006 Sep 22.
Article in English | MEDLINE | ID: mdl-17132079

ABSTRACT

How features are attributed to objects is one of the most puzzling issues in the neurosciences. A deeply entrenched view is that features are perceived at the locations where they are presented. Here, we show that features in motion displays can be systematically attributed from one location to another although the elements possessing the features are invisible. Furthermore, features can be integrated across locations. Feature mislocalizations are usually treated as errors and limits of the visual system. On the contrary, we show that the nonretinotopic feature attributions reported herein follow rules of grouping precisely, suggesting that they reflect a fundamental computational strategy and not errors of visual processing.


Subject(s)
Attention , Motion Perception , Optical Illusions , Visual Perception , Contrast Sensitivity , Humans , Photic Stimulation/methods
16.
Vision Res ; 46(19): 3234-42, 2006 Oct.
Article in English | MEDLINE | ID: mdl-16750550

ABSTRACT

The human visual system computes features of moving objects with high precision despite the fact that these features can change or blend into each other in the retinotopic image. Very little is known about how the human brain accomplishes this complex feat. Using a Ternus-Pikler display, introduced by Gestalt psychologists about a century ago, we show that human observers can perceive features of moving objects at locations where these features are not present. More importantly, our results indicate that these non-retinotopic feature attributions are not errors caused by the limitations of the perceptual system but follow rules of perceptual grouping. From a computational perspective, our data imply sophisticated real-time transformations of retinotopic relations in the visual cortex. Our results suggest that the human motion and form systems interact with each other to remap the retinotopic projection of physical space in order to maintain the identity of moving objects in perceptual space.


Subject(s)
Optical Illusions , Visual Perception/physiology , Eye Movements/physiology , Form Perception/physiology , Humans , Motion Perception/physiology , Psychophysics , Retina/physiology
17.
Vision Res ; 46(19): 3223-33, 2006 Oct.
Article in English | MEDLINE | ID: mdl-16690098

ABSTRACT

In perceptual learning, stimuli are usually assumed to be presented at a constant retinal location during training. However, due to tremor, drift, and microsaccades of the eyes, the same stimulus covers different retinal positions on sequential trials. Because of these variations, the mathematical decision problem changes from linear to non-linear. This non-linearity implies three predictions. First, varying the spatial position of a stimulus within a moderate range does not deteriorate perceptual learning. Second, improvement for one stimulus variant can yield negative transfer to other variants. Third, interleaved training with two stimulus variants yields no or strongly diminished learning. Using a bisection task, we found psychophysical evidence for the first and third predictions. However, contrary to the second prediction, no negative transfer was found.


Subject(s)
Attention , Learning , Models, Psychological , Uncertainty , Visual Perception/physiology , Eye Movements/physiology , Humans , Psychophysics