1.
Behav Res Ther ; 173: 104461, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38134499

ABSTRACT

There is some evidence for heterogeneity in attentional processes among individuals with social anxiety. However, there is limited work considering how attentional processes may differ as a mechanism in a naturalistic, task-based context (e.g., public speaking). In this secondary analysis, we tested attentional heterogeneity among individuals diagnosed with social anxiety disorder (N = 21) in the context of a virtual reality exposure treatment study. Participants completed a public speaking challenge in an immersive 360°-video virtual reality environment with eye tracking at pre-treatment, post-treatment, and 1-week follow-up. Using a Hidden Markov Model (HMM) approach with clustering, we tested whether there were distinct profiles of attention at pre-treatment and whether these changed following the intervention. As a secondary aim, we tested whether the distinct pre-treatment attentional profiles predicted differential treatment outcomes. We found two distinct attentional profiles at pre-treatment, which we characterized as audience-focused and audience-avoidant. By the 1-week follow-up, however, the two profiles were no longer meaningfully different. We found a meaningful difference between HMM groups in fear of public speaking at post-treatment (b = -8.54, 95% Highest Density Interval (HDI) [-16.00, -0.90], Bayes Factor (BF) = 8.31) but not at 1-week follow-up (b = -5.83, 95% HDI [-13.25, 1.81], BF = 2.28). These findings support heterogeneity in attentional processes among socially anxious individuals, but indicate that this heterogeneity may change following treatment. Moreover, our results offer preliminary mechanistic evidence that patterns of avoidance may be specifically related to poorer treatment outcomes for virtual reality exposure therapy.
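For readers unfamiliar with the analytic approach, the following is a minimal sketch of HMM-based attentional profiling in Python, assuming the hmmlearn and scikit-learn libraries; the simulated data, state count, and clustering features are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of HMM-based gaze profiling (illustrative, not the study code).
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical data: 21 participants, each a sequence of 2D fixation coordinates.
gaze = [rng.random((200, 2)) for _ in range(21)]

def fit_participant_hmm(seq, n_states=2):
    """Fit a Gaussian HMM to one participant's fixation sequence."""
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(seq)
    return model

models = [fit_participant_hmm(seq) for seq in gaze]

# Summarize each participant by HMM parameters (state means + transitions),
# then cluster participants into attentional profiles. A real analysis would
# first align state labels across participants.
features = np.array(
    [np.concatenate([m.means_.ravel(), m.transmat_.ravel()]) for m in models]
)
profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(profiles)  # e.g., audience-focused vs. audience-avoidant groups
```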


Subject(s)
Phobia, Social , Phobic Disorders , Humans , Phobic Disorders/therapy , Phobia, Social/therapy , Bayes Theorem , Anxiety , Attention
2.
Cogn Behav Ther ; 51(5): 371-387, 2022 09.
Article in English | MEDLINE | ID: mdl-35383544

ABSTRACT

Biased attention to social threats has been implicated in social anxiety disorder. Modifying visual attention during exposure therapy offers a direct test of this mechanism. We developed and tested a brief virtual reality exposure therapy (VRET) protocol using 360°-video and eye tracking. Participants (N = 21) were randomized to either standard VRET or VRET + attention guidance training (AGT). Multilevel Bayesian models were used to test (1) whether there was an effect of condition over time and (2) whether post-treatment changes in gaze patterns mediated the effect of condition at follow-up. There was a large overall effect of the intervention on symptoms of social anxiety, as well as an effect of the AGT augmentation on changes in visual attention to audience members. There was weak evidence against an effect of condition on fear of public speaking and weak evidence supporting a mediation effect; however, these estimates were strongly influenced by the model priors. Taken together, our findings suggest that attention can be modified within and during VRET and that modification of visual gaze avoidance may be causally linked to reductions in social anxiety. Replication with a larger sample size is needed.
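As a hedged illustration of the kind of multilevel Bayesian model described here, the following PyMC sketch fits varying intercepts with a condition-by-time effect; the variable names, priors, and simulated data are assumptions, not the study's actual analysis.

```python
# Illustrative multilevel Bayesian model (assumed structure, simulated data).
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_subj, n_times = 21, 3
subj = np.repeat(np.arange(n_subj), n_times)
time = np.tile(np.arange(n_times), n_subj)
cond = np.repeat(rng.integers(0, 2, n_subj), n_times)  # VRET vs. VRET + AGT
y = rng.normal(50 - 5 * time - 3 * cond * time, 5)     # fear-of-speaking score

with pm.Model():
    a_subj = pm.Normal("a_subj", 0.0, 5.0, shape=n_subj)  # varying intercepts
    b0 = pm.Normal("b0", 50.0, 10.0)
    b_time = pm.Normal("b_time", 0.0, 5.0)
    b_cond_time = pm.Normal("b_cond_time", 0.0, 5.0)      # condition-by-time
    sigma = pm.HalfNormal("sigma", 10.0)
    mu = b0 + a_subj[subj] + b_time * time + b_cond_time * cond * time
    pm.Normal("y", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```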


Subject(s)
Phobia, Social , Virtual Reality Exposure Therapy , Bayes Theorem , Humans , Phobia, Social/therapy , Pilot Projects , Virtual Reality Exposure Therapy/methods
3.
PLoS Comput Biol ; 18(2): e1009575, 2022 02.
Article in English | MEDLINE | ID: mdl-35192614

ABSTRACT

We examine the structure of the visual motion projected on the retina during natural locomotion in real-world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR), a walker's visually specified heading would vary dramatically throughout the gait cycle. The act of fixation on stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher-order structure that is informative about the stabilized trajectory of the eye through space. We measured this structure by applying the curl and divergence operations to the retinal flow velocity vector fields and found features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specify the body's trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker's instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine body position relative to gaze direction, these time-varying retinotopic cues for the body's momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of its role in the control of action during natural behavior.
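The curl and divergence measures are simple to compute with finite differences; the sketch below uses a synthetic flow field (rotation plus expansion) purely for illustration.

```python
# Curl and divergence of a 2D flow field via finite differences (numpy).
import numpy as np

# Synthetic retinal flow on a 64x64 grid: u = horizontal, v = vertical velocity.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
u, v = x - y, x + y  # expansion plus rotation, for illustration

du_dy, du_dx = np.gradient(u, y[:, 0], x[0, :])  # axis 0 is y, axis 1 is x
dv_dy, dv_dx = np.gradient(v, y[:, 0], x[0, :])

curl = dv_dx - du_dy  # foveal sign/magnitude -> trajectory relative to gaze
div = du_dx + dv_dy   # location of maximum -> retinotopic velocity/momentum cue

fovea = (32, 32)
print("foveal curl:", curl[fovea])
print("peak divergence at:", np.unravel_index(np.argmax(div), div.shape))
```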


Subject(s)
Optic Flow , Eye Movements , Locomotion , Reflex, Vestibulo-Ocular , Retina
4.
Atten Percept Psychophys ; 84(2): 396-407, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35064557

ABSTRACT

It has recently become feasible to study selective visual attention to social cues in increasingly ecologically valid ways. In this secondary analysis, we examined gaze behavior in response to the actions of others in a social context. Participants (N = 84) were asked to give a 5-minute speech to a five-member audience that had been filmed in 360° video, displayed in a virtual reality headset containing a built-in eye tracker. Audience members were coached to make movements that would indicate interest or lack of interest (e.g., nodding vs. looking away). The goal of this paper was to analyze whether these actions influenced the speaker's gaze. Participants reliably directed gaze towards audience members' actions in general, and specifically towards actions involving a phone (compared with other actions such as looking away or leaning back). However, gaze did not differ between actions reflecting interest (like nodding) and actions reflecting lack of interest (like looking away). Participants were also more likely to look away in response to audience members' actions, but no specific action elicited looking away more or less than others. Taken together, these findings suggest that the actions of audience members are broadly influential in motivating gaze behaviors in a realistic, contextually embedded (public speaking) setting. Further research is needed to examine how these findings can be elucidated in more controlled laboratory environments as well as in the real world.


Subject(s)
Speech , Virtual Reality , Cues , Fixation, Ocular , Humans , Motion Pictures , Social Environment , Speech/physiology
5.
Proc AAAI Conf Artif Intell ; 34(4): 6811-6820, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32901213

ABSTRACT

Large-scale public datasets have been shown to benefit research in multiple areas of modern artificial intelligence. For decision-making research that requires human data, high-quality datasets serve as important benchmarks that facilitate the development of new methods by providing a common reproducible standard. Many human decision-making tasks require visual attention to achieve high levels of performance; measuring eye movements can therefore provide a rich source of information about the strategies that humans use to solve decision-making tasks. Here, we provide a large-scale, high-quality dataset of human actions with simultaneously recorded eye movements while humans play Atari video games. The dataset consists of 117 hours of gameplay data from a diverse set of 20 games, with 8 million action demonstrations and 328 million gaze samples. We introduce a novel form of gameplay, in which the human plays in a semi-frame-by-frame manner. This leads to near-optimal game decisions and game scores that are comparable to or better than known human records. We demonstrate the usefulness of the dataset through two simple applications: predicting human gaze and imitating human demonstrated actions. The quality of the data leads to promising results in both tasks. Moreover, using a learned human gaze model to inform imitation learning leads to a 115% increase in game performance. We interpret these results as highlighting the importance of incorporating human visual attention in models of decision making and as demonstrating the value of the current dataset to the research community. We hope that the scale and quality of this dataset will provide more opportunities to researchers in the areas of visual attention, imitation learning, and reinforcement learning.
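One plausible way a learned gaze model can inform imitation learning is to foveate each frame with a predicted-gaze saliency mask before behavioral cloning; the sketch below illustrates that idea and is not the paper's actual architecture (function names and sizes are assumptions).

```python
# Sketch: foveating a frame with a predicted-gaze mask (illustrative only).
import numpy as np

def gaze_mask(h, w, gaze_xy, sigma=10.0):
    """Gaussian saliency mask centered on the predicted gaze point."""
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    return np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma**2))

frame = np.random.rand(84, 84)      # grayscale Atari frame, resized
predicted_gaze = (40, 30)           # output of a hypothetical gaze network
foveated = frame * gaze_mask(84, 84, predicted_gaze)
# `foveated` (or the stack [frame, foveated]) would then be fed to a policy
# network trained to imitate the demonstrated human actions.
```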

6.
Behav Res Ther ; 134: 103706, 2020 11.
Article in English | MEDLINE | ID: mdl-32920165

ABSTRACT

Social anxiety (SA) is thought to be maintained in part by avoidance of social threat, which exacerbates fear of negative evaluation. Yet, relatively little research has evaluated the connection between social anxiety and attentional processes in realistic contexts. The current pilot study examined patterns of attention (eye movements) in a commonly feared social context - public speaking. Participants (N = 84) with a range of social anxiety symptoms gave an impromptu five-minute speech in an immersive 360°-video environment, while wearing a virtual reality headset equipped with eye-tracking hardware. We found evidence for the expected interaction between fear of public speaking and social threat (uninterested vs. interested audience members). Consistent with prediction, participants with greater fear of public speaking looked fewer times at uninterested members of the audience (high social threat) compared to interested members of the audience (low social threat) (b = 0.418, p = 0.046, 95% CI [0.008, 0.829]). Analyses of attentional indices over the course of the speech revealed that the interaction between fear of public speaking and gaze on audience members was significant only in the first three minutes. Our results provide support for theoretical models implicating avoidance of social threat as a maintaining factor in social anxiety. Future research is needed to test whether guided attentional training targeting in vivo attentional avoidance may improve clinical outcomes for those presenting with social anxiety.
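A minimal sketch of the kind of interaction model described here, using a statsmodels mixed-effects regression on simulated data; the study's actual analysis may have differed (e.g., a generalized or Bayesian model), and all names here are illustrative.

```python
# Illustrative mixed-effects test of the fear x threat interaction on gaze.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 84
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), 2),
    "threat": np.tile([0, 1], n),               # interested vs. uninterested
    "fear": np.repeat(rng.normal(0, 1, n), 2),  # standardized fear score
})
df["looks"] = rng.poisson(np.exp(1.5 - 0.2 * df.threat * df.fear))

m = smf.mixedlm("looks ~ fear * threat", df, groups=df["participant"]).fit()
print(m.summary())  # the fear:threat coefficient is the interaction of interest
```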


Subject(s)
Anxiety/psychology , Avoidance Learning , Phobia, Social/psychology , Speech , Adolescent , Adult , Anxiety/physiopathology , Eye Movement Measurements , Female , Fixation, Ocular , Humans , Male , Middle Aged , Phobia, Social/physiopathology , Pilot Projects , Virtual Reality , Young Adult
7.
PLoS Comput Biol ; 14(10): e1006518, 2018 10.
Article in English | MEDLINE | ID: mdl-30359364

ABSTRACT

Although a standard reinforcement learning model can capture many aspects of reward-seeking behaviors, it may not be practical for modeling human natural behaviors because of the richness of dynamic environments and limitations in cognitive resources. We propose a modular reinforcement learning model that addresses these factors. Based on this model, a modular inverse reinforcement learning algorithm is developed to estimate both the rewards and discount factors from human behavioral data, which allows predictions of human navigation behaviors in virtual reality with high accuracy across different subjects and with different tasks. Complex human navigation trajectories in novel environments can be reproduced by an artificial agent that is based on the modular model. This model provides a strategy for estimating the subjective value of actions and how they influence sensory-motor decisions in natural behavior.
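The forward half of such a modular scheme is easy to sketch: each module contributes action values weighted by its reward, and the agent acts on their sum. The module names and numbers below are illustrative assumptions, not the paper's implementation (which also estimates per-module rewards and discounts by inverse RL).

```python
# Modular action selection: act on the reward-weighted sum of module Q-values.
import numpy as np

n_actions = 4  # e.g., candidate heading directions
modules = {
    # module name: (reward weight, Q-values over the 4 actions)
    "follow_path":    (1.0, np.array([0.2, 0.8, 0.1, 0.0])),
    "avoid_obstacle": (2.0, np.array([0.5, -1.0, 0.4, 0.3])),
    "intercept":      (0.5, np.array([0.1, 0.3, 0.9, 0.2])),
}

q_total = sum(w * q for w, q in modules.values())
action = int(np.argmax(q_total))
print("chosen action:", action)
```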


Subject(s)
Decision Making/physiology , Psychomotor Performance/physiology , Reinforcement, Psychology , Algorithms , Computational Biology , Humans , Models, Biological , Reward
8.
Interface Focus ; 8(4): 20180009, 2018 Aug 06.
Article in English | MEDLINE | ID: mdl-29951189

ABSTRACT

The development of better eye- and body-tracking systems and more flexible virtual environments has allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information for achieving those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world, and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.

9.
J Vis ; 18(4): 10, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29710300

ABSTRACT

The essentially active nature of vision has long been acknowledged but has been difficult to investigate because of limitations in the available instrumentation, both for measuring eye and body movements and for presenting realistic stimuli in the context of active behavior. These limitations have been substantially reduced in recent years, opening up a wider range of contexts where experimental control is possible. Given this, it is important to examine just what the benefits of exploring natural vision are, alongside its attendant disadvantages. Work over the last two decades provides insights into these benefits. Natural behavior turns out to be a rich domain for investigation: it is remarkably stable, it opens up new questions, and the behavioral context helps specify the momentary visual computations and their temporal evolution.


Subject(s)
Awards and Prizes , Fixation, Ocular/physiology , Ophthalmology/history , Vision, Ocular/physiology , Visual Perception/physiology , History, 21st Century , Humans
10.
J Vis ; 18(4): 12, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29710302

ABSTRACT

Little is known about distance discrimination in real scenes, especially at long distances. This is not surprising given the logistical difficulties of making such measurements. To circumvent these difficulties, we collected 81 stereo images of outdoor scenes, together with precisely registered range images that provided the ground-truth distance at each pixel location. We then presented the stereo images in the correct viewing geometry and measured the ability of human subjects to discriminate the distance between locations in the scene, as a function of absolute distance (3 m to 30 m) and the angular spacing between the locations being compared (2°, 5°, and 10°). Measurements were made for binocular and monocular viewing. Thresholds for binocular viewing were quite small at all distances (Weber fractions less than 1% at 2° spacing and less than 4% at 10° spacing). Thresholds for monocular viewing were higher than those for binocular viewing out to distances of 15-20 m, beyond which they were the same. Using standard cue-combination analysis, we also estimated what the thresholds would be based on binocular-stereo cues alone. With two exceptions, we show that the entire pattern of results is consistent with what one would expect from classical studies of binocular disparity thresholds and separation/size discrimination thresholds measured with simple laboratory stimuli. The first exception is some deviation from the expected pattern at close distances (especially for monocular viewing). The second exception is that thresholds in natural scenes are lower, presumably because of the rich figural cues contained in natural images.
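The standard cue-combination analysis referenced here is usually the reliability-weighted rule under independent Gaussian noise; a sketch of the usual identities (notation assumed, not taken from the paper):

```latex
\hat{d} = w_s \hat{d}_s + w_m \hat{d}_m, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_s^2 + 1/\sigma_m^2}, \qquad
\sigma_{\mathrm{bin}}^2 = \left(\frac{1}{\sigma_s^2} + \frac{1}{\sigma_m^2}\right)^{-1}
```

Solving the last identity for the stereo noise \sigma_s, given the measured binocular and monocular thresholds, yields the kind of stereo-alone threshold estimate described above.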


Subject(s)
Cues , Distance Perception/physiology , Vision, Binocular/physiology , Vision, Monocular/physiology , Visual Perception/physiology , Adult , Depth Perception/physiology , Humans , Male , Young Adult
11.
Curr Biol ; 28(8): 1224-1233.e5, 2018 04 23.
Article in English | MEDLINE | ID: mdl-29657116

ABSTRACT

Human locomotion through natural environments requires precise coordination between the biomechanics of the bipedal gait cycle and the eye movements that gather the information needed to guide foot placement. However, little is known about how the visual and locomotor systems work together to support movement through the world. We developed a system to simultaneously record gaze and full-body kinematics during locomotion over different outdoor terrains. We found that not only do walkers tune their gaze behavior to the specific information needed to traverse paths of varying complexity but that they do so while maintaining a constant temporal look-ahead window across all terrains. This strategy allows walkers to use gaze to tailor their energetically optimal preferred gait cycle to the upcoming path in order to balance between the drive to move efficiently and the need to place the feet in stable locations. Eye movements and locomotion are intimately linked in a way that reflects the integration of energetic costs, environmental uncertainty, and momentary informational demands of the locomotor task. Thus the relationship between gaze and gait reveals the structure of the sensorimotor decisions that support successful performance in the face of the varying demands of the natural world.


Subject(s)
Eye Movements/physiology , Psychomotor Performance/physiology , Walking/physiology , Adult , Biomechanical Phenomena/physiology , Eye/metabolism , Female , Fixation, Ocular/physiology , Gait/physiology , Humans , Locomotion/physiology , Male , Ocular Physiological Phenomena , Visual Perception/physiology , Young Adult
12.
Sci Rep ; 8(1): 4324, 2018 03 12.
Article in English | MEDLINE | ID: mdl-29531297

ABSTRACT

Search is a central visual function. Most of what is known about search derives from experiments where subjects view 2D displays on computer monitors. In the natural world, however, search involves movement of the body in large-scale spatial contexts, and it is unclear how this might affect search strategies. In this experiment, we explore the nature of memory representations developed when searching in an immersive virtual environment. By manipulating target location, we demonstrate that search depends on episodic spatial memory as well as learnt spatial priors. Subjects rapidly learned the large-scale structure of the space, with shorter paths and less head rotation to find targets. These results suggest that spatial memory of the global structure allows a search strategy that involves efficient attention allocation based on the relevance of scene regions. Thus spatial memory may allow less energetically costly search strategies.


Subject(s)
Memory, Episodic , Space Perception , Virtual Reality , Visual Perception , Attention , Head Movements , Humans , Learning
13.
Annu Rev Vis Sci ; 3: 389-413, 2017 09 15.
Article in English | MEDLINE | ID: mdl-28715958

ABSTRACT

Investigation of natural behavior has contributed a number of insights to our understanding of visual guidance of actions by highlighting the importance of behavioral goals and focusing attention on how vision and action play out in time. In this context, humans make continuous sequences of sensory-motor decisions to satisfy current behavioral goals, and the role of vision is to provide the relevant information for making good decisions in order to achieve those goals. This conceptualization of visually guided actions as a sequence of sensory-motor decisions has been formalized within the framework of statistical decision theory, which structures the problem and provides the context for much recent progress in vision and action. Components of a good decision include the task, which defines the behavioral goals, the rewards and costs associated with those goals, uncertainty about the state of the world, and prior knowledge.
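This conceptualization has a compact formal statement; one standard rendering (the notation is an assumption, consistent with statistical decision theory):

```latex
a^{*} = \arg\max_{a} \sum_{w} p(w \mid \text{sensory data}) \,
        \bigl[ R(a, w) - C(a) \bigr]
```

Here the chosen action a^{*} maximizes expected reward R under posterior uncertainty about the world state w, net of the action's cost C.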


Subject(s)
Motor Activity/physiology , Visual Perception/physiology , Decision Making/physiology , Eye Movements/physiology , Feedback, Sensory/physiology , Goals , Memory/physiology , Motivation/physiology , Movement/physiology , Reward , Sensorimotor Cortex/physiology
14.
J Vis ; 17(1): 28, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28114501

ABSTRACT

While it is universally acknowledged that both bottom-up and top-down factors contribute to the allocation of gaze, we currently have limited understanding of how top-down factors determine gaze choices in the context of ongoing natural behavior. One purely top-down model, by Sprague, Ballard, and Robinson (2007), suggests that natural behaviors can be understood in terms of simple component behaviors, or modules, that are executed according to their reward value, with gaze targets chosen in order to reduce uncertainty about the particular world state needed to execute those behaviors. We explore the plausibility of the central claims of this approach in the context of a task where subjects walk through a virtual environment performing interceptions, avoidance, and path following. Many aspects of both walking direction choices and gaze allocation are consistent with this approach. Subjects use gaze to reduce uncertainty for task-relevant information that informs action choices. Notably, the addition of motion to peripheral objects did not affect fixations when the objects were irrelevant to the task, suggesting that stimulus saliency was not a major factor in gaze allocation. The modular approach of independent component behaviors is consistent with the main aspects of performance, but there were a number of deviations suggesting that modules interact. Thus, the model forms a useful, but incomplete, starting point for understanding top-down factors in active behavior.
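The gaze-arbitration component of the model can be sketched in a few lines: a module's state uncertainty grows while it is unattended, and gaze is allocated where uncertainty is most costly. The values below are illustrative assumptions, not the original implementation.

```python
# Sketch of uncertainty-driven gaze arbitration across behavioral modules.
import numpy as np

names = ["path_following", "obstacle_avoidance", "interception"]
reward_weight = np.array([1.0, 2.0, 0.5])
variance = np.ones(3)  # state-estimate variance per module

for t in range(5):
    expected_loss = reward_weight * variance  # cost of acting under uncertainty
    fixated = int(np.argmax(expected_loss))   # look where uncertainty hurts most
    variance += 0.5                           # unattended uncertainty grows...
    variance[fixated] = 0.1                   # ...fixation resets that module
    print(t, names[fixated], np.round(variance, 2))
```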


Subject(s)
Fixation, Ocular/physiology , Reward , Uncertainty , Visual Perception/physiology , Walking , Eye Movements/physiology , Female , Humans , Male , Models, Theoretical , Psychomotor Performance , Young Adult
15.
J Vis ; 16(8): 9, 2016 06 01.
Article in English | MEDLINE | ID: mdl-27299769

ABSTRACT

The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search in both 2D and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D, but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience, and incidental fixations on context objects did not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting the allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.


Subject(s)
Attention/physiology , Environment , Eye Movements/physiology , Memory/physiology , Visual Perception/physiology , Adult , Female , Humans , Learning , Male
16.
Exp Brain Res ; 217(1): 125-36, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22183755

ABSTRACT

In the natural world, the brain must handle inherent delays in visual processing. This is a problem particularly during dynamic tasks. A possible solution to visuo-motor delays is prediction of a future state of the environment based on the current state and properties of the environment learned from experience. Prediction is well known to occur in both saccades and pursuit movements and is likely to depend on some kind of internal visual model as the basis for this prediction. However, most evidence comes from controlled laboratory studies using simple paradigms. In this study, we examine eye movements made in the context of demanding natural behavior, while playing squash. We show that prediction is a pervasive component of gaze behavior in this context. We show in addition that these predictive movements are extraordinarily precise and operate continuously in time across multiple trajectories and multiple movements. This suggests that prediction is based on complex dynamic visual models of the way that balls move, accumulated over extensive experience. Since eye, head, arm, and body movements all co-occur, it seems likely that a common internal model of predicted visual state is shared by different effectors to allow flexible coordination patterns. It is generally agreed that internal models are responsible for predicting future sensory state for control of body movements. The present work suggests that model-based prediction is likely to be a pervasive component in natural gaze control as well.


Subject(s)
Movement/physiology , Pursuit, Smooth/physiology , Saccades/physiology , Vision, Ocular/physiology , Adult , Humans
17.
J Vis ; 11(10)2011 Sep 27.
Article in English | MEDLINE | ID: mdl-21954297

ABSTRACT

Ganglion cells in the peripheral retina have lower density and larger receptive fields than in the fovea. Consequently, the visual signals relayed from the periphery have substantially lower resolution than those relayed by the fovea. The information contained in peripheral ganglion cell responses can be quantified by how well they predict the foveal ganglion cell responses to the same stimulus. We constructed a model of human ganglion cell outputs by combining existing measurements of the optical transfer function with the receptive field properties and sampling densities of midget (P) ganglion cells. We then simulated a spatial population of P-cell responses to image patches sampled from a large collection of luminance-calibrated natural images. Finally, we characterized the population response to each image patch, at each eccentricity, with two parameters of the spatial power spectrum of the responses: the average response contrast (standard deviation of the response patch) and the falloff in power with spatial frequency. The primary finding is that the optimal estimate of response contrast in the fovea is dependent on both the response contrast and the steepness of the falloff observed in the periphery. Humans could exploit this information when decoding peripheral signals to estimate contrasts, estimate blur levels, or select the most informative locations for saccadic eye movements.
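The two spectral summary parameters are straightforward to compute; the sketch below estimates them on a synthetic patch (the actual P-cell simulation pipeline is more involved, and the frequency range used for the fit is an assumption).

```python
# Response contrast and spectral falloff of a response patch (illustrative).
import numpy as np

rng = np.random.default_rng(4)
patch = rng.normal(size=(64, 64))  # stand-in for a simulated response patch

contrast = patch.std()  # "average response contrast" summary statistic

# Radially averaged power spectrum, then a log-log line fit for the falloff.
power = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
yy, xx = np.indices(power.shape)
r = np.hypot(xx - 32, yy - 32).astype(int)
radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())

freqs = np.arange(1, 31)  # skip DC; keep low-to-mid spatial frequencies
slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
print(f"response contrast: {contrast:.3f}, spectral falloff slope: {slope:.2f}")
```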


Subject(s)
Contrast Sensitivity/physiology , Fovea Centralis/physiology , Retinal Ganglion Cells/physiology , Space Perception/physiology , Visual Cortex/physiology , Humans , Photic Stimulation/methods
18.
J Vis ; 11(5): 5, 2011 May 27.
Article in English | MEDLINE | ID: mdl-21622729

ABSTRACT

Models of gaze allocation in complex scenes are derived mainly from studies of static picture viewing. The dominant framework to emerge has been image salience, where properties of the stimulus play a crucial role in guiding the eyes. However, salience-based schemes are poor at accounting for many aspects of picture viewing and can fail dramatically in the context of natural task performance. These failures have led to the development of new models of gaze allocation in scene viewing that address a number of these issues. However, models based on the picture-viewing paradigm are unlikely to generalize to a broader range of experimental contexts, because the stimulus context is limited, and the dynamic, task-driven nature of vision is not represented. We argue that there is a need to move away from this class of model and find the principles that govern gaze allocation in a broader range of settings. We outline the major limitations of salience-based selection schemes and highlight what we have learned from studies of gaze allocation in natural vision. Clear principles of selection are found across many instances of natural vision and these are not the principles that might be expected from picture-viewing studies. We discuss the emerging theoretical framework for gaze allocation on the basis of reward maximization and uncertainty reduction.


Subject(s)
Attention/physiology , Eye Movements/physiology , Fixation, Ocular/physiology , Photic Stimulation/methods , Behavior/physiology , Humans , Learning , Models, Psychological , Reward , Saccades/physiology , Time Factors , Vision, Ocular/physiology
19.
Wiley Interdiscip Rev Cogn Sci ; 2(2): 158-166, 2011 Mar.
Article in English | MEDLINE | ID: mdl-26302007

ABSTRACT

Historically, the study of visual perception has followed a reductionist strategy, with the goal of understanding complex visually guided behavior by separate analysis of its elemental components. Recent developments in monitoring behavior, such as measurement of eye movements in unconstrained observers, have allowed investigation of the use of vision in the natural world. This has led to a variety of insights that would be difficult to achieve in more constrained experimental contexts. In general, it shifts the focus of vision away from the properties of the stimulus toward a consideration of the behavioral goals of the observer. It appears that behavioral goals are a critical factor in controlling the acquisition of visual information from the world. This insight has been accompanied by a growing understanding of the importance of reward in modulating the underlying neural mechanisms and by theoretical developments using reinforcement learning models of complex behavior. These developments provide us with the tools to understand how tasks are represented in the brain and how they control the acquisition of information through the use of gaze. WIREs Cogn Sci 2011, 2:158-166. DOI: 10.1002/wcs.113

20.
Vis cogn ; 17(6-7): 1185-1204, 2009 Aug 01.
Article in English | MEDLINE | ID: mdl-20411027

ABSTRACT

Gaze changes and the resultant fixations that orchestrate the sequential acquisition of information from the visual environment are the central feature of primate vision. How are we to understand their function? For the most part, theories of fixation targets have been image-based: the hypothesis being that the eye is drawn to places in the scene that contain discontinuities in image features such as motion, colour, and texture. But are these features the cause of the fixations, or merely the result of fixations that have been planned to serve some visual function? This paper examines the issue and reviews evidence from various image-based and task-based sources. Our conclusion is that the evidence is overwhelmingly in favour of fixation control being essentially task-based.
