Results 1 - 10 of 10
1.
Article in English | MEDLINE | ID: mdl-38273181

ABSTRACT

Where we move our eyes during visual search is controlled by the relative saliency and relevance of stimuli in the visual field. However, the visual field is not homogeneous, as both sensory representations and attention change with eccentricity. Here we present an experiment investigating how eccentricity differences between competing stimuli affect saliency- and relevance-driven selection. Participants made a single eye movement to a predefined orientation singleton target that was simultaneously presented with an orientation singleton distractor in a background of multiple other, homogeneously oriented items. The target was either more or less salient than the distractor. Moreover, each of the two singletons could be presented at one of three different retinal eccentricities, such that both were presented at the same eccentricity, one eccentricity value apart, or two eccentricity values apart. The results showed that selection was initially determined by saliency, followed after about 300 ms by relevance. In addition, observers preferred to select the closer over the more distant singleton, and this central selection bias increased with increasing eccentricity difference. Importantly, it largely emerged within the same time window as the saliency effect, thereby resulting in a net reduction of the influence of saliency on the selection outcome. In contrast, the relevance effect remained unaffected by eccentricity. Together, these findings demonstrate that eccentricity is a major determinant of selection behavior, even to the extent that it modifies the relative contribution of saliency in determining where people move their eyes.

2.
Atten Percept Psychophys ; 86(2): 422-438, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37258897

ABSTRACT

Visual attention may be captured by an irrelevant yet salient distractor, thereby slowing search for a relevant target. This phenomenon has been widely studied using the additional singleton paradigm in which search items are typically all presented at one and the same eccentricity. Yet, differences in eccentricity may well bias the competition between target and distractor. Here we investigate how attentional capture is affected by the relative eccentricities of a target and a distractor. Participants searched for a shape-defined target in a grid of homogeneous nontargets of the same color. On 75% of trials, one of the nontarget items was replaced by a salient color-defined distractor. Crucially, target and distractor eccentricities were independently manipulated across three levels of eccentricity (i.e., near, middle, and far). Replicating previous work, we show that the presence of a distractor slows down search. Interestingly, capture as measured by manual reaction times was not affected by target and distractor eccentricity, whereas capture as measured by the eyes was: items close to fixation were more likely to be selected than items presented further away. Furthermore, the effects of target and distractor eccentricity were largely additive, suggesting that the competition between saliency- and relevance-driven selection was modulated by an independent eccentricity-based spatial component. Implications of the dissociation between manual and oculomotor responses are also discussed.


Subject(s)
Eye Movements; Pattern Recognition, Visual; Humans; Pattern Recognition, Visual/physiology; Reaction Time/physiology; Visual Perception/physiology
3.
Article in English | MEDLINE | ID: mdl-37740153

ABSTRACT

It has previously been shown that grouping by proximity is well described by a linear function relating the perceived orientation of a dot lattice to the ratio of the distances between the dots in the different orientations. Similarly, luminance influences how observers perceptually group stimuli. Using the dot lattice paradigm, it has been shown that proximity and luminance similarity interact additively, meaning that their effects can be summed to predict an observer's percept. In this study, we revisit the additive interplay between proximity and luminance similarity and ask whether this pattern might result from inappropriately averaging across different types of observers, or from an imbalance between the strength of proximity grouping and that of luminance-similarity grouping. To address these questions, we first ran a replication of the original study reporting the additive interplay between proximity and luminance similarity. Our results showed a convincing replication at both the aggregate and the individual level. However, at the individual level, all observers showed grouping by proximity, whereas some observers did not show grouping by luminance similarity. In response, we ran a second experiment with enlarged luminance differences to reinforce the strength of grouping by luminance similarity and balance the strength of the two grouping cues. Interestingly, in this second experiment, additivity was not observed; instead, a significant interaction was obtained. This disparity suggests that the additivity or interaction between two grouping cues in a visual stimulus is not a general rule of perceptual grouping but a consequence of relative grouping strength.
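As a rough illustration of what the additive versus interactive accounts contrasted above amount to statistically, the sketch below fits an additive and an interactive logistic model to synthetic two-alternative grouping reports. All variable names, parameter values, and the use of logistic regression are illustrative assumptions, not the analysis used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

# Synthetic trial-level data (illustrative only): each trial pits two lattice
# organisations against each other, with a proximity ratio and a luminance-
# similarity advantage favouring organisation "a" to varying degrees.
prox = rng.uniform(1.0, 1.5, n)   # inter-dot distance ratio
lum = rng.uniform(0.0, 1.0, n)    # luminance-similarity advantage
p_a = 1 / (1 + np.exp(-(3.0 * (prox - 1.25) + 1.0 * (lum - 0.5))))  # additive generator
df = pd.DataFrame({"chose_a": rng.binomial(1, p_a), "prox": prox, "lum": lum})

# Additive model: the two cue effects simply sum on the log-odds scale.
additive = smf.logit("chose_a ~ prox + lum", data=df).fit(disp=False)

# Interactive model: the weight of one cue depends on the strength of the other.
interactive = smf.logit("chose_a ~ prox * lum", data=df).fit(disp=False)

# Comparable fits indicate the interaction term adds nothing, i.e. additivity.
print(additive.aic, interactive.aic)
```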

4.
Vision Res ; 205: 108177, 2023 04.
Article in English | MEDLINE | ID: mdl-36669432

ABSTRACT

An important function of peripheral vision is to provide the target of the next eye movement. Here we investigate the extent to which the eyes are biased to select a target closer to fixation over one further away. Participants were presented with displays containing two identical singleton targets and were asked to move their eyes to either one of them. The targets could be presented at three different eccentricities relative to central fixation. In one condition both singletons were presented at the same eccentricity, providing an estimate of the speed of selection at each of the eccentricities. The saccadic latency distributions from this same-eccentricity condition were then used to predict the selection bias when the two targets were presented at different eccentricities. The results show that when targets are presented at different eccentricities, participants are biased to select the item closest to fixation. This eccentricity-based bias was considerably stronger than predicted on the basis of the saccadic latency distributions from the same-eccentricity condition. This rules out speed of processing alone as an explanation for the bias. Instead, the results are consistent with attentional competition being weighted in favour of items close to fixation.
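One way to make the prediction step described above concrete is to treat selection as an independent race between the latency distributions measured at the two eccentricities, with the shorter latency winning. The sketch below illustrates that idea with made-up Gaussian latencies; the distributions, their parameters, and the independent-race assumption are illustrative and not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical saccadic latency samples (ms) from the same-eccentricity
# condition, one distribution per eccentricity; the values are made up.
latencies = {
    "near":   rng.normal(180, 30, 10_000),
    "middle": rng.normal(200, 30, 10_000),
    "far":    rng.normal(220, 30, 10_000),
}

def predicted_closer_bias(ecc_close, ecc_far, n=10_000):
    """Predicted probability of selecting the closer target, assuming the two
    locations race independently and the shorter latency wins."""
    close = rng.choice(latencies[ecc_close], n)
    far = rng.choice(latencies[ecc_far], n)
    return np.mean(close < far)

# Under these toy parameters the race alone predicts a moderate closer-target
# bias; the reported bias was considerably stronger than such a prediction.
print(predicted_closer_bias("near", "far"))
```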


Subject(s)
Attentional Bias; Eye Movements; Humans; Fixation, Ocular; Saccades; Visual Perception
5.
Psychon Bull Rev ; 29(4): 1327-1337, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35378672

ABSTRACT

Human vision involves selectively directing the eyes to potential objects of interest. According to most prominent theories, selection is the quantal outcome of an ongoing competition between saliency-driven signals on the one hand, and relevance-driven signals on the other, with both types of signals continuously and concurrently projecting onto a common priority map. Here, we challenge this view. We asked participants to make a speeded eye movement towards a target orientation, which was presented together with a non-target of opposing tilt. In addition to the difference in relevance, the target and non-target also differed in saliency, with the target being either more or less salient than the non-target. We demonstrate that saliency- and relevance-driven eye movements have highly idiosyncratic temporal profiles, with saliency-driven eye movements occurring rapidly after display onset while relevance-driven eye movements occur only later. Remarkably, these types of eye movements can be fully separated in time: We find that around 250 ms after display onset, eye movements are no longer driven by saliency differences between potential targets, but also not yet driven by relevance information, resulting in a period of non-selectivity, which we refer to as the attentional limbo. Binomial modeling further confirmed that visual selection is not necessarily the outcome of a direct battle between saliency- and relevance-driven signals. Instead, selection reflects the dynamic changes in the underlying saliency- and relevance-driven processes themselves, and the time at which an action is initiated then determines which of the two will emerge as the driving force of behavior.


Subject(s)
Attention; Saccades; Eye Movements; Humans; Photic Stimulation/methods; Visual Perception
6.
J Vis ; 21(3): 2, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33651878

ABSTRACT

Both saliency and goal information are important factors in driving visual selection. Saliency-driven selection occurs primarily in early responses, whereas goal-driven selection happens predominantly in later responses. Here, we investigated how eccentricity affects the time courses of saliency-driven and goal-driven visual selection. In three experiments, we asked people to make a speeded eye movement toward a predefined target singleton, which was simultaneously presented with a non-target singleton in a background of multiple other, homogeneously oriented items. The target singleton could be either more or less salient than the non-target singleton. Both singletons were presented at one of three eccentricities (i.e., near, middle, or far). The results showed that, even though eccentricity had little effect on overall selection performance, the underlying time courses of saliency-driven and goal-driven selection changed such that saliency effects became protracted and goal-driven effects became delayed in the far eccentricity conditions. The protracted saliency effect was shown to be modulated by expectations induced by the preceding trial. The results demonstrate the importance of incorporating both time and eccentricity as factors in models of visual selection.


Subject(s)
Eye Movements/physiology; Reaction Time/physiology; Visual Perception/physiology; Adult; Attention/physiology; Female; Goals; Humans; Male; Orientation, Spatial; Young Adult
7.
J Vis ; 19(13): 9, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31715632

ABSTRACT

In the flash-grab effect, when a disk is flashed on a moving background at the moment it reverses direction, the perceived location of the disk is strongly displaced in the direction of the motion that follows the reversal. Here, we ask whether increased expectation of the reversal reduces its effect on the motion-induced shift, as suggested by predictive coding models with first order predictions. Across four experiments we find that when the reversal is expected, the illusion gets stronger, not weaker. We rule out accumulating motion adaptation as a contributing factor. The pattern of results cannot be accounted for by first-order predictions of location. Instead, it appears that second-order predictions of event timing play a role. Specifically, we conclude that temporal expectation causes a transient increase in temporal attention, boosting the strength of the motion signal and thereby increasing the strength of the illusion.


Subject(s)
Motion Perception/physiology; Pattern Recognition, Visual/physiology; Photic Stimulation; Adult; Female; Humans; Illusions/physiology; Male; Young Adult
8.
J Vis ; 19(1): 3, 2019 01 02.
Article in English | MEDLINE | ID: mdl-30630191

ABSTRACT

Neural processing of sensory input in the brain takes time, and for that reason our awareness of visual events lags behind their actual occurrence. One way the brain might compensate to minimize the impact of the resulting delays is through extrapolation. Extrapolation mechanisms have been argued to underlie perceptual illusions in which moving and static stimuli are mislocalised relative to one another (such as the flash-lag and related effects). However, where in the visual hierarchy such extrapolation processes take place remains unknown. Here, we address this question by identifying monocular and binocular contributions to the flash-grab illusion. In this illusion, a brief target is flashed on a moving background that reverses direction. As a result, the perceived position of the target is shifted in the direction of the reversal. We show that the illusion is attenuated, but not eliminated, when the motion reversal and the target are presented dichoptically to separate eyes. This reveals that extrapolation mechanisms at both monocular and binocular processing stages contribute to the illusion. We interpret the results in a hierarchical predictive coding framework, and argue that prediction errors in this framework manifest directly as perceptual illusions.


Subject(s)
Motion Perception/physiology; Optical Illusions/physiology; Vision, Binocular/physiology; Vision, Monocular/physiology; Visual Pathways/physiology; Adult; Analysis of Variance; Humans; Photic Stimulation/methods
9.
J Neurosci ; 38(38): 8243-8250, 2018 09 19.
Article in English | MEDLINE | ID: mdl-30104339

ABSTRACT

Transmission delays in the nervous system pose challenges for the accurate localization of moving objects as the brain must rely on outdated information to determine their position in space. Acting effectively in the present requires that the brain compensates not only for the time lost in the transmission and processing of sensory information, but also for the expected time that will be spent preparing and executing motor programs. Failure to account for these delays will result in the mislocalization and mistargeting of moving objects. In the visuomotor system, where sensory and motor processes are tightly coupled, this predicts that the perceived position of an object should be related to the latency of saccadic eye movements aimed at it. Here we use the flash-grab effect, a mislocalization of briefly flashed stimuli in the direction of a reversing moving background, to induce shifts of perceived visual position in human observers (male and female). We find a linear relationship between saccade latency and perceived position shift, challenging the classic dissociation between "vision for action" and "vision for perception" for tasks of this kind and showing that oculomotor position representations are either shared with or tightly coupled to perceptual position representations. Altogether, we show that the visual system uses both the spatial and temporal characteristics of an upcoming saccade to localize visual objects for both action and perception.

SIGNIFICANCE STATEMENT

Accurately localizing moving objects is a computational challenge for the brain due to the inevitable delays that result from neural transmission. To solve this, the brain might implement motion extrapolation, predicting where an object ought to be at the present moment. Here, we use the flash-grab effect to induce perceptual position shifts and show that the latency of imminent saccades predicts the perceived position of the objects they target. This counterintuitive finding is important because it not only shows that motion extrapolation mechanisms indeed work to reduce the behavioral impact of neural transmission delays in the human brain, but also that these mechanisms are closely matched in the perceptual and oculomotor systems.


Subject(s)
Brain/physiology; Eye Movements/physiology; Motion Perception/physiology; Visual Perception/physiology; Female; Humans; Male; Motion; Photic Stimulation; Young Adult
10.
Behav Res Methods ; 50(1): 94-106, 2018 02.
Article in English | MEDLINE | ID: mdl-29330763

ABSTRACT

Measurement of pupil size (pupillometry) has recently gained renewed interest from psychologists, but there is little agreement on how pupil-size data is best analyzed. Here we focus on one aspect of pupillometric analyses: baseline correction, i.e., analyzing changes in pupil size relative to a baseline period. Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time. However, we show that baseline correction can also distort data if unrealistically small pupil sizes are recorded during the baseline period, which can easily occur due to eye blinks, data loss, or other distortions. Divisive baseline correction (corrected pupil size = pupil size/baseline) is affected more strongly by such distortions than subtractive baseline correction (corrected pupil size = pupil size - baseline). We discuss the role of baseline correction as a part of preprocessing of pupillometric data, and make five recommendations: (1) before baseline correction, perform data preprocessing to mark missing and invalid data, but assume that some distortions will remain in the data; (2) use subtractive baseline correction; (3) visually compare your corrected and uncorrected data; (4) be wary of pupil-size effects that emerge faster than the latency of the pupillary response allows (within ±220 ms after the manipulation that induces the effect); and (5) remove trials on which baseline pupil size is unrealistically small (indicative of blinks and other distortions).
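The two correction schemes contrasted in this abstract each reduce to a single line; the sketch below is a minimal illustration of subtractive versus divisive baseline correction with a baseline-validity check. The function signature, the use of the median, and the rejection threshold are illustrative assumptions, not specifications from the article.

```python
import numpy as np

def baseline_correct(pupil, baseline_samples, method="subtractive", min_baseline=1500.0):
    """Baseline-correct one trial's pupil-size trace.

    pupil            : 1-D array of pupil-size samples for the trial
    baseline_samples : number of samples at the start of the trace that form
                       the baseline period
    method           : "subtractive" (recommended) or "divisive"
    min_baseline     : reject trials whose baseline is unrealistically small
                       (placeholder value; depends on eye tracker and units)
    """
    baseline = np.nanmedian(pupil[:baseline_samples])
    if np.isnan(baseline) or baseline < min_baseline:
        return None  # drop trials with missing or implausibly small baselines
    if method == "subtractive":
        return pupil - baseline   # corrected pupil size = pupil size - baseline
    if method == "divisive":
        return pupil / baseline   # corrected pupil size = pupil size / baseline
    raise ValueError(f"unknown method: {method}")
```

Returning None for trials with distorted baselines makes it straightforward to count and exclude them before averaging, in line with the recommendation to remove such trials rather than correct them.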


Subject(s)
Eye Movement Measurements/standards; Pupil/physiology; Adult; Eye Movement Measurements/instrumentation; Female; Humans; Individuality