2.
Cognition ; 242: 105624, 2024 01.
Article in English | MEDLINE | ID: mdl-37944314

ABSTRACT

Research on gaze control has long shown that increased visual-cognitive processing demands in scene viewing are associated with longer fixation durations. More recently, though, longer durations have also been linked to mind wandering, a perceptually decoupled state of attention marked by decreased visual-cognitive processing. Toward better understanding the relationship between fixation durations and visual-cognitive processing, we ran simulations using an established random-walk model for saccade timing and programming and assessed which model parameters best predicted modulations in fixation durations associated with mind wandering compared to attentive viewing. Mind wandering-related fixation durations were best described as an increase in the variability of the fixation-generating process, leading to more variable, and sometimes very long, durations. In contrast, past research showed that increased processing demands increased the mean duration of the fixation-generating process. The findings thus illustrate that mind wandering and processing demands modulate fixation durations through different mechanisms in scene viewing. This suggests that processing demands cannot be inferred from changes in fixation durations without understanding the underlying mechanism by which these changes were generated.
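To make the modeling logic concrete, the following is a minimal simulation sketch in Python (not the authors' implementation; the threshold, step size, and parameter values are illustrative). It shows that increasing the step-to-step variability of a threshold-crossing random walk lengthens the tail of the resulting duration distribution while leaving its mean largely unchanged, whereas lowering the mean drift, as with higher processing demands, shifts the entire distribution towards longer durations.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_fixation_durations(mu, sigma, threshold=50.0, dt=10.0, n=2000):
    """Random-walk timer: accumulate Normal(mu, sigma) steps until the threshold
    is crossed; the crossing time is taken as the fixation duration (in ms).
    All numbers are illustrative, not fitted model parameters."""
    durations = np.empty(n)
    for i in range(n):
        position, steps = 0.0, 0
        while position < threshold:
            position += rng.normal(mu, sigma)
            steps += 1
        durations[i] = steps * dt
    return durations

attentive   = simulate_fixation_durations(mu=2.0, sigma=1.0)  # baseline viewing
high_demand = simulate_fixation_durations(mu=1.5, sigma=1.0)  # slower mean rate
mind_wander = simulate_fixation_durations(mu=2.0, sigma=4.0)  # more variable rate

for name, d in [("attentive", attentive), ("high demand", high_demand),
                ("mind wandering", mind_wander)]:
    print(f"{name:15s} mean={d.mean():6.1f} ms  sd={d.std():6.1f} ms  "
          f"p95={np.percentile(d, 95):6.1f} ms")
```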


Subject(s)
Fixation, Ocular; Saccades; Humans; Visual Perception; Attention; Computer Simulation
3.
Exp Brain Res ; 241(9): 2345-2360, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37610677

ABSTRACT

Pseudoneglect, that is, the tendency to pay more attention to the left side of space, is typically assessed with paper-and-pencil tasks, particularly line bisection. In the present study, we used an everyday task with more complex stimuli. Subjects' task was to look for pre-specified objects in images of real-world scenes. In half of the scenes, the search object was located on the left side of the image (L-target); in the other half of the scenes, the target was on the right side (R-target). To control for left-right differences in the composition of the scenes, half of the scenes were mirrored horizontally. Eye-movement recordings were used to track the course of pseudoneglect on a millisecond timescale. Subjects' initial eye movements were biased to the left of the scene, but less so for R-targets than for L-targets, indicating that pseudoneglect was modulated by task demands and scene guidance. We further analyzed how horizontal gaze positions changed over time. When the data for L- and R-targets were pooled, the leftward bias lasted, on average, until the end of the first second of the search process. Even for right-side targets, the gaze data showed an early leftward bias, which was compensated for by adjustments in the direction and amplitude of later saccades. Importantly, we found that pseudoneglect affected search efficiency by leading to less efficient scan paths and consequently longer search times for R-targets compared with L-targets. It may therefore be prudent to take spatial asymmetries into account when studying visual search in scenes.


Subject(s)
Eye Movements , Saccades , Humans
4.
J Exp Psychol Gen ; 152(7): 1907-1936, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37126050

ABSTRACT

Scene meaning is processed rapidly, with "gist" extracted even when presentation duration spans only a few dozen milliseconds. This has led some to suggest a primacy of bottom-up information. However, gist research has typically relied on showing successions of unrelated scene images, contrary to our everyday experience in which the world unfolds around us in a predictable manner. Thus, we investigated whether top-down information, in the form of observers' predictions of an upcoming scene, facilitates gist processing. Within each trial, participants (N = 370) experienced a series of images, organized to represent an approach to a destination (e.g., walking down a sidewalk), followed by a target scene either congruous or incongruous with the expected destination (e.g., a store interior or a bedroom). A series of behavioral experiments revealed that appropriate expectations facilitated gist processing; inappropriate expectations interfered with gist processing; sequentially arranged scene images benefitted gist processing when semantically related to the target scene; expectation-based facilitation was most apparent when presentation duration was most curtailed; and findings were not simply the result of response bias. We then investigated the effects of predictability on the neural correlates of scene processing using event-related potentials (ERPs) (N = 24). Congruency-related differences were found in a putative scene-selective ERP component related to integrating visual properties (P2), and in later components related to contextual integration, including semantic and syntactic coherence (N400 and P600, respectively). Together, the results suggest that in real-world situations, top-down predictions of an upcoming scene influence even the earliest stages of its processing, affecting both the integration of visual properties and meaning. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Evoked Potentials; Motivation; Humans; Male; Female; Evoked Potentials/physiology; Electroencephalography; Reaction Time/physiology; Photic Stimulation
5.
Atten Percept Psychophys ; 85(6): 1868-1887, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36725782

ABSTRACT

The presence of a weapon in a scene has been found to attract observers' attention and to impair their memory of the person holding the weapon. Here, we examined the role of attention in this weapon focus effect (WFE) under different viewing conditions. German participants viewed stimuli in which a man committed a robbery while holding a gun or a cell phone. The stimuli were based on material used in a recent U.S. study reporting large memory effects. Recording eye movements allowed us to test whether observers' attention in the gun condition shifted away from the perpetrator towards the gun, compared with the phone condition. When using videos (Experiment 1), weapon presence did not appear to modulate the viewing time for the perpetrator, whereas the evidence concerning the critical object remained inconclusive. When using slide shows (Experiment 2), the gun attracted more gaze than the phone, replicating previous research. However, the attentional shift towards the weapon did not come at a cost of viewing time on the perpetrator. In both experiments, observers focused their attention predominantly on the depicted people and much less on the gun or phone. The presence of a weapon did not cause participants to recall fewer details about the perpetrator's appearance in either experiment. This null effect was replicated in an online study using the original videos and testing more participants. The results seem at odds with the attention-shift explanation of the WFE. Moreover, the results indicate that the WFE is not a universal phenomenon.


Subject(s)
Attention; Cell Phone; Male; Humans; Eye Movements; Mental Recall
6.
Behav Res Methods ; 55(1): 364-416, 2023 01.
Article in English | MEDLINE | ID: mdl-35384605

ABSTRACT

In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").


Subject(s)
Eye Movements; Eye-Tracking Technology; Humans; Empirical Research
7.
Vision Res ; 201: 108105, 2022 12.
Article in English | MEDLINE | ID: mdl-36081228

ABSTRACT

Human vision requires us to analyze the visual periphery to decide where to fixate next. In the present study, we investigated this process in people with age-related macular degeneration (AMD). In particular, we examined viewing biases and the extent to which visual salience guides fixation selection during free-viewing of naturalistic scenes. We used an approach combining generalized linear mixed modeling (GLMM) with a-priori scene parcellation. This method allows one to investigate group differences in terms of scene coverage and observers' well-known tendency to look at the center of scene images. Moreover, it allows for testing whether image salience influences fixation probability above and beyond what can be accounted for by the central bias. Compared with age-matched normally sighted control subjects (and young subjects), AMD patients' viewing behavior was less exploratory, with a stronger central fixation bias. All three subject groups showed a salience effect on fixation selection-higher-salience scene patches were more likely to be fixated. Importantly, the salience effect for the AMD group was of similar size as the salience effect for the control group, suggesting that guidance by visual salience was still intact. The variances for by-subject random effects in the GLMM indicated substantial individual differences. A separate model exclusively considered the AMD data and included fixation stability as a covariate, with the results suggesting that reduced fixation stability was associated with a reduced impact of visual salience on fixation selection.
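As an illustration of the data preparation implied by this approach, the sketch below parcellates a single scene into a grid of patches and derives, for each patch, a binary fixated/not-fixated outcome, its mean salience, and its distance from the image center (the central-bias predictor). This is a simplified stand-in rather than the authors' code; the grid size, predictor names, and use of mean patch salience are assumptions. Rows of this kind, pooled over subjects and scenes, would then enter a logistic GLMM with by-subject and by-item random effects (a model-fitting sketch accompanies the GridFix entry further down).

```python
import numpy as np
import pandas as pd

def parcellate_scene(saliency_map, fixations, grid=(8, 6)):
    """Return one row per grid cell for a single scene and subject.

    saliency_map : 2-D array (height x width) from any salience model
    fixations    : iterable of (x, y) fixation coordinates in pixels
    grid         : number of cells (columns, rows); illustrative choice
    """
    h, w = saliency_map.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    rows = []
    for gy in range(grid[1]):
        for gx in range(grid[0]):
            x0, x1 = int(gx * w / grid[0]), int((gx + 1) * w / grid[0])
            y0, y1 = int(gy * h / grid[1]), int((gy + 1) * h / grid[1])
            hit = any(x0 <= fx < x1 and y0 <= fy < y1 for fx, fy in fixations)
            px, py = (x0 + x1) / 2.0, (y0 + y1) / 2.0
            rows.append({
                "cell": gy * grid[0] + gx,
                "fixated": int(hit),                                  # outcome
                "salience": float(saliency_map[y0:y1, x0:x1].mean()), # predictor
                "dist_center": float(np.hypot(px - cx, py - cy)),     # central bias
            })
    return pd.DataFrame(rows)
```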


Subject(s)
Fixation, Ocular; Macular Degeneration; Humans; Visual Perception; Attention; Bias
8.
J Vis ; 22(1): 10, 2022 01 04.
Article in English | MEDLINE | ID: mdl-35044436

ABSTRACT

How important foveal, parafoveal, and peripheral vision are depends on the task. For object search and letter search in static images of real-world scenes, peripheral vision is crucial for efficient search guidance, whereas foveal vision is relatively unimportant. Extending this research, we used gaze-contingent Blindspots and Spotlights to investigate visual search in complex dynamic and static naturalistic scenes. In Experiment 1, we used dynamic scenes only, whereas in Experiments 2 and 3, we directly compared dynamic and static scenes. Each scene contained a static, contextually irrelevant target (i.e., a gray annulus). Scene motion was not predictive of target location. For dynamic scenes, the search-time results from all three experiments converge on the novel finding that neither foveal nor central vision was necessary to attain normal search proficiency. Since motion is known to attract attention and gaze, we explored whether guidance to the target was equally efficient in dynamic as compared to static scenes. We found that the very first saccade was guided by motion in the scene. This was not the case for subsequent saccades made during the scanning epoch, representing the actual search process. Thus, effects of task-irrelevant motion were fast-acting and short-lived. Furthermore, when motion was potentially present (Spotlights) or absent (Blindspots) in foveal or central vision only, we observed differences in verification times for dynamic and static scenes (Experiment 2). When using scenes with greater visual complexity and more motion (Experiment 3), however, the differences between dynamic and static scenes were much reduced.
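The Blindspot and Spotlight manipulations withhold or reveal scene information within a gaze-centered window that is updated from the eye tracker on every display frame. Below is a bare-bones sketch of the per-frame image operation only, with illustrative names; the actual experiments used dedicated gaze-contingent display software and converted window sizes from degrees of visual angle to pixels.

```python
import numpy as np

def apply_gaze_window(frame, gaze_xy, radius_px, mode="blindspot", fill=128):
    """Mask (Blindspot) or reveal only (Spotlight) a circular gaze-centered region.

    frame     : H x W x 3 uint8 video frame or scene image
    gaze_xy   : (x, y) current gaze sample in pixels
    radius_px : window radius in pixels (converted from degrees beforehand)
    """
    h, w = frame.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
    out = frame.copy()
    if mode == "blindspot":      # remove central information
        out[inside] = fill
    else:                        # "spotlight": remove everything but the window
        out[~inside] = fill
    return out
```

In a real experiment this function would be called on every refresh with the latest eye-tracker sample, so the window moves with gaze.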


Subject(s)
Fovea Centralis; Visual Perception; Attention; Humans; Saccades; Vision, Ocular
9.
J Vis ; 21(4): 2, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33792616

ABSTRACT

We address two questions concerning eye guidance during visual search in naturalistic scenes. First, search has been described as a task in which visual salience is unimportant. Here, we revisit this question by using a letter-in-scene search task that minimizes any confounding effects that may arise from scene guidance. Second, we investigate how important the different regions of the visual field are for different subprocesses of search (target localization, verification). In Experiment 1, we manipulated both the salience (low vs. high) and the size (small vs. large) of the target letter (a "T"), and we implemented a foveal scotoma (radius: 1°) in half of the trials. In Experiment 2, observers searched for high- and low-salience targets either with full vision or with a central or peripheral scotoma (radius: 2.5°). In both experiments, we found main effects of salience, with better performance for high-salience targets. In Experiment 1, search was faster for large than for small targets, and high salience helped more for small targets. When searching with a foveal scotoma, performance was relatively unimpaired regardless of the target's salience and size. In Experiment 2, both visual-field manipulations led to search time costs, but the peripheral scotoma was much more detrimental than the central scotoma. Peripheral vision proved to be important for target localization, and central vision for target verification. Salience affected eye-movement guidance to the target in both central and peripheral vision. Collectively, the results lend support to search models that incorporate salience for predicting eye-movement behavior.


Subject(s)
Scotoma; Visual Fields; Eye Movements; Fovea Centralis; Humans; Visual Perception
10.
Sci Rep ; 10(1): 22057, 2020 12 16.
Article in English | MEDLINE | ID: mdl-33328485

ABSTRACT

Whether fixation selection in real-world scenes is guided by image salience or by objects has been a matter of scientific debate. To contrast the two views, we compared effects of location-based and object-based visual salience in young and older (65+ years) adults. Generalized linear mixed models were used to assess the unique contribution of salience to fixation selection in scenes. When analysing fixation guidance without recourse to objects, visual salience predicted whether image patches were fixated or not. This effect was reduced for the elderly, replicating an earlier finding. When using objects as the unit of analysis, we found that highly salient objects were more frequently selected for fixation than objects with low visual salience. Interestingly, this effect was larger for older adults. We also analysed where viewers fixate within objects, once they are selected. A preferred viewing location close to the centre of the object was found for both age groups. The results support the view that objects are important units of saccadic selection. Reconciling the salience view with the object view, we suggest that visual salience contributes to prioritization among objects. Moreover, the data point towards an increasing relevance of object-bound information with increasing age.
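To illustrate the object-based side of such an analysis, the sketch below computes, for each object, its mean salience, whether it was fixated, and where fixations landed relative to the object center (the basis for a preferred-viewing-location analysis). The bounding boxes, salience map, and all names are assumed inputs; the published analyses used annotated object outlines and generalized linear mixed models rather than this simplified summary.

```python
import numpy as np

def object_level_measures(saliency_map, objects, fixations):
    """objects   : dict name -> (x0, y0, x1, y1) bounding box in pixels
    fixations : iterable of (x, y) fixation coordinates
    Returns per-object mean salience, a fixated flag, and landing positions
    expressed relative to the object centre ((0, 0) = centre)."""
    results = {}
    for name, (x0, y0, x1, y1) in objects.items():
        sal = float(saliency_map[y0:y1, x0:x1].mean())
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        landings = [((fx - cx) / (x1 - x0), (fy - cy) / (y1 - y0))
                    for fx, fy in fixations
                    if x0 <= fx < x1 and y0 <= fy < y1]
        results[name] = {"salience": sal,
                         "fixated": len(landings) > 0,
                         "landing_offsets": landings}
    return results
```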


Subject(s)
Fixation, Ocular/physiology; Saccades/physiology; Visual Perception/physiology; Adolescent; Adult; Aged; Aged, 80 and over; Female; Humans; Male; Young Adult
11.
Vision Res ; 177: 41-55, 2020 12.
Article in English | MEDLINE | ID: mdl-32957035

ABSTRACT

The importance of high-acuity foveal vision to visual search can be assessed by denying foveal vision using the gaze-contingent Moving Mask technique. Foveal vision was necessary to attain normal performance when searching for a target letter in alphanumeric displays (Perception & Psychophysics, 62 (2000) 576-585). In contrast, foveal vision was not necessary to correctly locate and identify medium-sized target objects in natural scenes (Journal of Experimental Psychology: Human Perception and Performance, 40 (2014) 342-360). To explore these task differences, we used grayscale pictures of real-world scenes that included a target letter (Experiment 1: T, Experiment 2: T or L). To reduce between-scene variability with regard to target salience, we developed the Target Embedding Algorithm (T.E.A.) to place the letter in a location for which there was a median change in local contrast when inserting the letter into the scene. The presence or absence of foveal vision was crossed with four target sizes. In both experiments, search performance decreased for smaller targets and was impaired when searching the scene without foveal vision. For correct trials, the process of target localization remained completely unimpaired by the foveal scotoma, but it took longer to accept the target. We reasoned that the size of the target may affect the importance of foveal vision to the task, but the present data remain ambiguous. In summary, the data highlight the importance of extrafoveal vision for target localization, and the importance of foveal vision for target verification during letter-in-scene search.
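A rough sketch of the selection logic behind such a target-embedding procedure is given below. It is not the published T.E.A.: the RMS-contrast measure, window size, and candidate handling are simplifying assumptions. The idea is to insert the letter at each candidate location, measure the resulting change in local contrast, and keep the location associated with the median change.

```python
import numpy as np

def rms_contrast(patch):
    """Root-mean-square contrast of a grayscale patch (values in 0..1)."""
    return float(patch.std())

def median_contrast_change_location(scene, letter, candidates, win=32):
    """Pick the candidate top-left (x, y) whose letter insertion yields the
    median change in local RMS contrast.
    scene: 2-D float array; letter: small 2-D array; candidates: list of (x, y)."""
    deltas = []
    half = win // 2
    lh, lw = letter.shape
    for (x, y) in candidates:
        y0, y1 = max(y - half, 0), min(y + lh + half, scene.shape[0])
        x0, x1 = max(x - half, 0), min(x + lw + half, scene.shape[1])
        before = rms_contrast(scene[y0:y1, x0:x1])
        modified = scene.copy()                 # wasteful, but keeps the sketch simple
        modified[y:y + lh, x:x + lw] = letter
        after = rms_contrast(modified[y0:y1, x0:x1])
        deltas.append(abs(after - before))
    order = np.argsort(deltas)
    return candidates[order[len(order) // 2]]   # median-change location
```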


Subject(s)
Fovea Centralis; Humans; Scotoma
12.
J Vis ; 20(4): 15, 2020 04 09.
Article in English | MEDLINE | ID: mdl-32330229

ABSTRACT

Fixation durations provide insights into processing demands. We investigated factors controlling fixation durations during scene viewing in two experiments. In Experiment 1, we tested the degree to which fixation durations adapt to global scene processing difficulty by manipulating the contrast (from original contrast to isoluminant) and saturation (original vs. grayscale) of the entire scene. We observed longer fixation durations for lower levels of contrast, and longer fixation durations for grayscale than for color scenes. Thus, fixation durations were globally slowed as the visual information became increasingly degraded and scene processing became more difficult. In Experiment 2, we investigated two possible sources for this slow-down. We used "checkerboard" stimuli in which unmodified patches alternated with patches from which luminance information had been removed (isoluminant patches). Fixation durations showed an inverted immediacy effect (longer, rather than shorter, fixation durations on unmodified patches) along with a parafoveal-on-foveal effect (shorter fixation durations when an unmodified patch was fixated next). This effect was stronger when the currently fixated patch was isoluminant as opposed to unmodified. Our results suggest that peripheral scene information substantially affects fixation durations and are consistent with the notion of competition among the current and potential future fixation locations.
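The two global manipulations can be approximated with simple image operations, sketched below. This is only a crude illustration: the luminance weights are the standard Rec. 601 approximation, there is no display calibration, and the study's actual isoluminant stimuli were produced with calibrated procedures.

```python
import numpy as np

LUM_WEIGHTS = np.array([0.299, 0.587, 0.114])      # Rec. 601 approximation

def scale_luminance_contrast(img, factor):
    """img: H x W x 3 floats in 0..1.  factor=1 keeps the original;
    factor=0 gives every pixel the same (mean) luminance while keeping its
    chromatic offsets, i.e. a crudely isoluminant version of the scene."""
    y = img @ LUM_WEIGHTS                          # per-pixel luminance
    chroma = img - y[..., None]                    # chromatic residual
    y_scaled = y.mean() + factor * (y - y.mean())
    return np.clip(y_scaled[..., None] + chroma, 0.0, 1.0)

def to_grayscale(img):
    """Remove saturation: replace each pixel by its luminance."""
    y = img @ LUM_WEIGHTS
    return np.repeat(y[..., None], 3, axis=2)
```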


Subject(s)
Fixation, Ocular/physiology; Visual Perception/physiology; Adult; Female; Humans; Male; Time Factors; Young Adult
13.
J Cogn Neurosci ; 32(4): 571-589, 2020 04.
Article in English | MEDLINE | ID: mdl-31765602

ABSTRACT

In vision science, a particularly controversial topic is whether, and how quickly, semantic information about objects is available outside foveal vision. Here, we aimed to contribute to this debate by coregistering eye movements and EEG while participants viewed photographs of indoor scenes that contained a semantically consistent or inconsistent target object. Linear deconvolution modeling was used to analyze the ERPs evoked by scene onset as well as the fixation-related potentials (FRPs) elicited by the fixation on the target object (t) and by the preceding fixation (t - 1). Object-scene consistency did not influence the probability of immediate target fixation or the ERP evoked by scene onset, which suggests that object-scene semantics was not accessed immediately. However, during the subsequent scene exploration, inconsistent objects were prioritized over consistent objects in extrafoveal vision (i.e., looked at earlier) and were more effortful to process in foveal vision (i.e., looked at longer). In FRPs, we demonstrate a fixation-related N300/N400 effect, whereby inconsistent objects elicit a larger frontocentral negativity than consistent objects. In line with the behavioral findings, this effect was already seen in FRPs aligned to the pretarget fixation t - 1 and persisted throughout fixation t, indicating that the extraction of object semantics can already begin in extrafoveal vision. Taken together, the results emphasize the usefulness of combined EEG/eye-movement recordings for understanding the mechanisms of object-scene integration during natural viewing.
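Linear deconvolution separates temporally overlapping brain responses (here, scene-onset ERPs and fixation-related potentials) by regressing the continuous EEG on a time-expanded design matrix containing one set of lagged predictors per event type. The sketch below shows the core least-squares step for a single channel with dummy-coded events; it is a didactic simplification, not the analysis pipeline used in the study, which would typically rely on a dedicated deconvolution toolbox and include additional covariates.

```python
import numpy as np

def deconvolve(eeg, events, window=(-50, 150)):
    """Overlap-corrected responses ('regression ERPs') for one EEG channel.

    eeg    : 1-D array, continuous signal in samples
    events : dict event_name -> 1-D array of onset sample indices
    window : response window in samples relative to each event onset
    """
    n = len(eeg)
    lags = np.arange(window[0], window[1])
    names, blocks = [], []
    for name, onsets in events.items():
        block = np.zeros((n, len(lags)))
        for j, lag in enumerate(lags):
            idx = np.asarray(onsets) + lag
            idx = idx[(idx >= 0) & (idx < n)]
            block[idx, j] = 1.0                       # time-expanded dummy coding
        names.append(name)
        blocks.append(block)
    X = np.hstack(blocks)                             # shared design matrix
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)    # joint fit disentangles overlap
    estimates, start = {}, 0
    for name, block in zip(names, blocks):
        estimates[name] = beta[start:start + block.shape[1]]
        start += block.shape[1]
    return estimates
```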


Subject(s)
Brain/physiology; Evoked Potentials; Fixation, Ocular; Pattern Recognition, Visual/physiology; Semantics; Adolescent; Adult; Electroencephalography; Eye Movement Measurements; Female; Humans; Male; Young Adult
14.
PLoS One ; 14(5): e0217051, 2019.
Article in English | MEDLINE | ID: mdl-31120948

ABSTRACT

There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition.


Subject(s)
Fovea Centralis/physiology; Pattern Recognition, Visual; Saccades; Semantics; Adolescent; Adult; Female; Fixation, Ocular; Humans; Male; Random Allocation; Reaction Time; Vision, Ocular; Visual Perception; Young Adult
15.
J Eye Mov Res ; 12(7), 2019 Nov 25.
Article in English | MEDLINE | ID: mdl-33828769

ABSTRACT

Keynote at the 20th European Conference on Eye Movement Research (ECEM) in Alicante, 22 August 2019. Video stream: https://vimeo.com/361729502.

16.
Front Hum Neurosci ; 11: 491, 2017.
Article in English | MEDLINE | ID: mdl-29163092

ABSTRACT

Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead ("central bias"). This problem is further exacerbated in the context of model comparisons, because some, but not all, models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox "GridFix" available.
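A minimal sketch of the resulting model structure (not the authors' analysis script): a logistic mixed model with a salience predictor and a central-bias predictor as fixed effects and by-subject and by-scene variance components. The column names and input file are assumptions; GridFix prepares patch-level data of this kind, and the same model can equally be fitted with lme4 in R.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# One row per scene patch, subject, and scene, e.g. from a GridFix-style
# parcellation, with columns: fixated (0/1), salience, dist_center, subject, scene
patches = pd.read_csv("patch_data.csv")

# z-score the continuous predictors so coefficients are comparable
for col in ["salience", "dist_center"]:
    patches[col] = (patches[col] - patches[col].mean()) / patches[col].std()

model = BinomialBayesMixedGLM.from_formula(
    "fixated ~ salience + dist_center",            # fixed effects
    {"subject": "0 + C(subject)",                  # by-subject random intercepts
     "scene": "0 + C(scene)"},                     # by-scene random intercepts
    patches,
)
result = model.fit_vb()                            # variational Bayes fit
print(result.summary())
```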

17.
Vision Res ; 134: 43-59, 2017 05.
Article in English | MEDLINE | ID: mdl-28159609

ABSTRACT

The goal of this article is to investigate the unexplored mechanisms underlying the development of saccadic control in infancy by determining the generalizability and potential limitations of extending the CRISP theoretical framework and computational model of fixation durations (FDs) in adult scene-viewing to infants. The CRISP model was used to investigate the underlying mechanisms modulating FDs in 6-month-olds by applying the model to empirical eye-movement data gathered from groups of infants and adults during free-viewing of naturalistic and semi-naturalistic videos. Participants also performed a gap-overlap task to measure their disengagement abilities. Results confirmed the CRISP model's applicability to infant data. Specifically, model simulations support the view that infant saccade programming is completed in two stages: an initial labile stage, followed by a non-labile stage. Moreover, results from the empirical data and simulation studies highlighted the influence of the material viewed on the FD distributions in infants and adults, as well as the impact that the developmental state of the oculomotor system can have on saccade programming and execution at 6 months. The present work suggests that infant FDs reflect on-line perceptual and cognitive activity in a similar way to adults, but that the individual developmental state of the oculomotor system affects this relationship at 6 months. Furthermore, computational modeling filled gaps left by psychophysical studies and allowed the effects of these two factors on FDs to be simulated in infant data, providing greater insights into the development of oculomotor and attentional control than can be gained from behavioral results alone.


Subject(s)
Fixation, Ocular/physiology; Models, Theoretical; Pattern Recognition, Visual/physiology; Saccades/physiology; Adult; Female; Humans; Infant; Male; Time Factors
19.
Psychon Bull Rev ; 24(2): 370-392, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27480268

ABSTRACT

Scene perception requires the orchestration of image- and task-related processes with oculomotor constraints. The present study was designed to investigate how these factors influence how long the eyes remain fixated on a given location. Linear mixed models (LMMs) were used to test whether local image statistics (including luminance, luminance contrast, edge density, visual clutter, and the number of homogeneous segments), calculated for 1° circular regions around fixation locations, modulate fixation durations, and how these effects depend on task-related control. Fixation durations and locations were recorded from 72 participants, each viewing 135 scenes under three different viewing instructions (memorization, preference judgment, and search). Along with the image-related predictors, the LMMs simultaneously considered a number of oculomotor and spatiotemporal covariates, including the amplitudes of the previous and next saccades, and viewing time. As a key finding, the local image features around the current fixation predicted this fixation's duration. For instance, greater luminance was associated with shorter fixation durations. Such immediacy effects were found for all three viewing tasks. Moreover, in the memorization and preference tasks, some evidence for successor effects emerged, such that some image characteristics of the upcoming location influenced how long the eyes stayed at the current location. In contrast, in the search task, scene processing was not distributed across fixation durations within the visual span. The LMM-based framework of analysis, applied to the control of fixation durations in scenes, suggests important constraints for models of scene perception and search, and for visual attention in general.
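A minimal sketch of the corresponding model structure in Python follows (the input file, predictor names, and log transform are assumptions; the published LMMs include further covariates and crossed random effects, which are more naturally specified in lme4).

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per fixation: its duration plus image statistics computed in a
# 1-degree region around the fixated location (column names are illustrative)
fix = pd.read_csv("fixation_data.csv")

model = smf.mixedlm(
    "log_duration ~ luminance + contrast + edge_density + clutter"
    " + prev_saccade_amp + next_saccade_amp + viewing_time",
    data=fix,
    groups=fix["subject"],          # by-subject random intercepts
)
result = model.fit()
print(result.summary())
```

Note that MixedLM, as used here, takes a single grouping factor; fully crossed by-subject and by-scene random effects require variance-component formulas or a switch to lme4.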


Subject(s)
Field Dependence-Independence; Fixation, Ocular; Pattern Recognition, Visual; Adult; Attention; Female; Humans; Judgment; Linear Models; Male; Mental Recall; Orientation; Saccades; Time Factors; Young Adult
20.
PLoS One ; 11(9): e0162449, 2016.
Article in English | MEDLINE | ID: mdl-27658191

ABSTRACT

Saccades to single targets in peripheral vision are typically characterized by an undershoot bias. Putting this bias to the test, Kapoula [1] used a paradigm in which observers were presented with two different sets of target eccentricities that partially overlapped each other. Her data were suggestive of a saccadic range effect (SRE): there was a tendency for saccades to overshoot close targets and undershoot far targets in a block, suggesting a response bias towards the center of the eccentricities in a given block. Our Experiment 1 was a close replication of the original study by Kapoula [1]. In addition, we tested whether the SRE is sensitive to top-down requirements associated with the task, and we also varied the target presentation duration. In Experiments 1 and 2, we expected to replicate the SRE for a visual discrimination task. The simple visual saccade-targeting task in Experiment 3, entailing minimal top-down influence, was expected to elicit a weaker SRE. Voluntary saccades to remembered target locations in Experiment 3 were expected to elicit the strongest SRE. Contrary to these predictions, we did not observe an SRE in any of the tasks. Our findings complement the results reported by Gillen et al. [2], who failed to find the effect in a saccade-targeting task with a very brief target presentation. Together, these results suggest that unlike arm movements, saccadic eye movements are not biased towards making saccades of a constant, optimal amplitude for the task.
