1.
J Vis; 20(8): 26, 2020 Aug 03.
Article in English | MEDLINE | ID: mdl-32845961

ABSTRACT

Research on eye movements has primarily been performed in two distinct ways: (1) under highly controlled conditions using simple stimuli such as dots on a uniform background, or (2) under free-viewing conditions with complex images, real-world movies, or even with observers moving around in the world. Although both approaches offer important insights, how well eye movement behaviors generalize between these conditions is unclear. Here, we compared eye movement responses to video clips showing moving objects within their natural context with responses to simple Gaussian blobs on a blank screen. Importantly, in both conditions the targets moved along the same trajectories at the same speed. We measured standard oculometric measures for both stimulus complexities, as well as the effect of the relative angle between saccades and pursuit, and compared them across conditions. In general, eye movement responses were qualitatively similar, especially with respect to pursuit gain. For both types of stimuli, the accuracy of saccades and subsequent pursuit was highest when both eye movements were collinear. We also found interesting differences; for example, latencies of initial saccades to moving Gaussian blob targets were significantly shorter than those of saccades to moving objects in video scenes, whereas pursuit accuracy was significantly higher in video scenes. These findings suggest a lower processing demand during saccade preparation for simple targets and an advantage for tracking behavior in natural scenes due to the higher predictability provided by contextual information.
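The following is a minimal sketch, not taken from the paper, of how two of the standard oculometric measures mentioned above (saccade latency and pursuit gain) might be computed from a gaze recording. The sampling rate, array layout, and the assumption that events were already detected by some classifier are illustrative.

```python
import numpy as np

def saccade_latency(saccade_onsets_s, target_motion_onset_s):
    """Latency (s) of the first saccade launched after target motion onset.
    Assumes saccade onsets were detected beforehand by an event classifier."""
    later = saccade_onsets_s[saccade_onsets_s > target_motion_onset_s]
    return later[0] - target_motion_onset_s if later.size else np.nan

def pursuit_gain(eye_pos_deg, target_pos_deg, fs_hz=1000.0):
    """Ratio of mean eye speed to mean target speed over a pursuit interval.
    eye_pos_deg, target_pos_deg: (N, 2) arrays of gaze/target position in degrees."""
    eye_speed = np.linalg.norm(np.diff(eye_pos_deg, axis=0), axis=1) * fs_hz
    target_speed = np.linalg.norm(np.diff(target_pos_deg, axis=0), axis=1) * fs_hz
    return eye_speed.mean() / target_speed.mean()
```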


Subject(s)
Motion Perception/physiology; Motion Pictures; Oculomotor Nerve/physiology; Pursuit, Smooth/physiology; Saccades/physiology; Adult; Female; Humans; Male; Normal Distribution; Video Recording; Young Adult
2.
Neuroimage; 216: 116491, 2020 Aug 01.
Article in English | MEDLINE | ID: mdl-31923604

ABSTRACT

Most fMRI studies investigating smooth pursuit (SP) related brain activity have used simple synthetic stimuli such as a sinusoidally moving dot. However, real-life situations are much more complex, and SP does not occur in isolation but within sequences of saccades and fixations. This raises the question of whether the same brain networks for SP that have been identified under laboratory conditions are activated when following moving objects in a movie. Here, we used the publicly available studyforrest data set, which provides eye movement recordings along with 3 T fMRI recordings from 15 subjects watching the Hollywood movie "Forrest Gump". All three major eye movement events, namely fixations, saccades, and smooth pursuit, were detected with a state-of-the-art algorithm. In our analysis, smooth pursuit was the eye movement of interest, while saccades acted as the steady state of viewing behaviour owing to their lower variability. For the fMRI analysis, we initially used an event-related design modelling saccades and SP as regressors. Because of the interdependency between SP and content motion, we then added a new low-level content motion regressor to separate brain activations from these two sources. We identified higher BOLD responses during SP than during saccades bilaterally in MT+/V5, in the middle cingulate extending to the precuneus, and in the right temporoparietal junction. When the motion regressor was added, SP showed higher BOLD responses relative to saccades bilaterally in the cortex lining the superior temporal sulcus, the precuneus, and the supplementary eye field, presumably due to a confounding effect of background motion. Only parts of V2 showed higher activation during saccades than during SP. Taken together, our approach should be regarded as a proof of principle for deciphering brain activity related to SP, one of the most prominent eye movements besides saccades, in complex dynamic naturalistic situations.
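A hypothetical sketch of the kind of design-matrix construction described above: boxcar regressors for SP and saccade episodes plus a continuous low-level motion regressor, each convolved with a canonical HRF. The double-gamma parameters, TR, event timings, and the random placeholder for frame-wise motion energy are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr_s, length_s=32.0):
    """Canonical double-gamma HRF sampled at the TR (illustrative parameters)."""
    t = np.arange(0, length_s, tr_s)
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
    return hrf / hrf.sum()

def event_regressor(events, n_scans, tr_s):
    """Boxcar built from (onset_s, duration_s) pairs, convolved with the HRF."""
    box = np.zeros(n_scans)
    for onset, dur in events:
        box[int(onset / tr_s):int((onset + dur) / tr_s) + 1] = 1.0
    return np.convolve(box, double_gamma_hrf(tr_s))[:n_scans]

# Hypothetical example: SP and saccade episodes plus a motion-energy regressor
n_scans, tr = 200, 2.0
motion_energy = np.random.rand(n_scans)   # placeholder for frame-wise motion energy
X = np.column_stack([
    event_regressor([(10.0, 1.5), (55.0, 0.8)], n_scans, tr),    # smooth pursuit
    event_regressor([(12.0, 0.05), (40.0, 0.04)], n_scans, tr),  # saccades
    np.convolve(motion_energy, double_gamma_hrf(tr))[:n_scans],  # low-level motion
    np.ones(n_scans),                                            # intercept
])
```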


Subject(s)
Brain/diagnostic imaging; Brain/physiology; Magnetic Resonance Imaging/methods; Motion Perception/physiology; Motion Pictures; Pursuit, Smooth/physiology; Algorithms; Female; Humans; Male; Photic Stimulation/methods; Saccades/physiology
3.
J Eye Mov Res; 13(4), 2020 Jul 27.
Article in English | MEDLINE | ID: mdl-33828806

ABSTRACT

In this short article we present our manual annotation of the eye movement events in a subset of the large-scale eye tracking data set Hollywood2. Our labels include fixations, saccades, and smooth pursuits, as well as a noise event type (the latter representing blinks, loss of tracking, or physically implausible signals). To achieve more consistent annotations, the gaze samples were first labelled by a novice rater based on rudimentary algorithmic suggestions and subsequently corrected by an expert rater. Overall, we annotated eye movement events in the recordings corresponding to 50 randomly selected test set clips and 6 training set clips from Hollywood2, which were viewed by 16 observers and amount to a total of approximately 130 minutes of gaze data. In these labels, 62.4% of the samples were attributed to fixations, 9.1% to saccades, and, notably, 24.2% to pursuit (the remainder was marked as noise). After evaluating 15 published eye movement classification algorithms on our newly annotated data set, we found that the most recent algorithms perform very well on average, and even reach human-level labelling quality for fixations and saccades, but all leave considerably more room for improvement when it comes to smooth pursuit classification. The data set is made available at https://gin.g-node.org/ioannis.agtzidis/hollywood2_em.
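A small, hypothetical sketch of the sample-level bookkeeping described above: given a manual and an algorithmic labelling per gaze sample, compute the share of each event type and a per-class F1 score. The label coding and array names are assumptions for illustration.

```python
import numpy as np

EVENTS = {0: "fixation", 1: "saccade", 2: "pursuit", 3: "noise"}  # assumed coding

def label_proportions(labels):
    """Share of gaze samples attributed to each event type."""
    return {name: float(np.mean(labels == code)) for code, name in EVENTS.items()}

def per_class_f1(manual, algorithmic):
    """Sample-level F1 per event class between manual and algorithmic labels."""
    scores = {}
    for code, name in EVENTS.items():
        tp = np.sum((algorithmic == code) & (manual == code))
        fp = np.sum((algorithmic == code) & (manual != code))
        fn = np.sum((algorithmic != code) & (manual == code))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[name] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores
```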

4.
J Vis; 19(14): 10, 2019 Dec 02.
Article in English | MEDLINE | ID: mdl-31830239

ABSTRACT

Eye movements are fundamental to our visual experience of the real world, and tracking objects with smooth pursuit eye movements plays an important role because of the dynamic nature of our environment. Static images, however, do not induce this class of eye movements, and commonly used synthetic moving stimuli lack ecological validity because of their low scene complexity compared to the real world. Traditionally, ground truth data for pursuit analyses with naturalistic stimuli are obtained via laborious hand-labelling, so previous studies typically remained small in scale. Here we present the first large-scale quantitative characterization of human smooth pursuit. To achieve this, we first provide a methodological framework for such analyses by collecting a large set of manual annotations for eye movements in dynamic scenes and by examining the bias and variance of human annotators. To enable further research on even larger future data sets, we also describe, improve, and thoroughly analyze a novel algorithm to automatically classify eye movements. Our approach incorporates unsupervised learning techniques and thus demonstrates improved performance with the addition of unlabelled data. The code and data related to our manual and automated eye movement annotation are publicly available at https://web.gin.g-node.org/ioannis.agtzidis/gazecom_annotations/.
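The abstract notes that the classifier improves when unlabelled data are added. A generic self-training (pseudo-labelling) loop in that spirit is sketched below; the base classifier, confidence threshold, and feature matrices are illustrative assumptions and not the authors' actual algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_training(X_labelled, y_labelled, X_unlabelled, threshold=0.95, rounds=5):
    """Iteratively pseudo-label confident unlabelled samples and retrain (generic sketch)."""
    X, y, pool = X_labelled.copy(), y_labelled.copy(), X_unlabelled.copy()
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold      # keep only confident predictions
        if not confident.any():
            break
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]                          # shrink the unlabelled pool
    return clf
```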


Subject(s)
Fixation, Ocular/physiology; Motion Perception/physiology; Pursuit, Smooth/physiology; Visual Perception/physiology; Algorithms; Functional Laterality/physiology; Humans; Psychomotor Performance/physiology
5.
Eur Arch Psychiatry Clin Neurosci; 269(4): 407-418, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29305645

ABSTRACT

BACKGROUND: Eye tracking dysfunction (ETD) observed with standard pursuit stimuli is a well-established biomarker for schizophrenia. How ETD manifests during free visual exploration of real-life movies is unclear. METHODS: Eye movements were recorded (EyeLink®1000) while 26 schizophrenia patients and 25 healthy age-matched controls freely explored nine uncut movies and nine pictures of real-life situations for 20 s each. Subsequently, participants were shown still shots of these scenes and had to decide whether they had explored them as movies or as pictures. Participants were additionally assessed on standard eye-tracking tasks. RESULTS: Patients made smaller saccades (movies: p = 0.003; pictures: p = 0.002) and showed a stronger central bias (movies and pictures: p < 0.001) than controls. In movies, patients' exploration behavior was less driven by image-defined, bottom-up stimulus saliency than that of controls (p < 0.05). The proportion of pursuit tracking in movies differed between groups depending on the individual movie (group*movie: p = 0.011; movie: p < 0.001). Eye velocity on standard pursuit stimuli was reduced in patients (p = 0.029) but did not correlate with pursuit behavior on movies. Additionally, patients were less accurate in identifying whether still shots had been explored as movies or as pictures (p = 0.046). CONCLUSION: Our results suggest a restricted, centrally focused visual exploration behavior in patients, not only for pictures but also for movies of real-life scenes. While ETD observed in the laboratory cannot be directly transferred to natural viewing conditions, these alterations support a model of impaired motion information processing in patients, resulting in a reduced ability to perceive moving objects and in less saliency-driven exploration behavior, which presumably contributes to altered perception of the natural environment.
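A minimal illustration (with assumed screen geometry and pre-detected events, not the study's analysis code) of two of the group-level measures reported above: mean saccade amplitude and central bias, the latter taken here as the mean distance of fixation positions from the screen centre.

```python
import numpy as np

def mean_saccade_amplitude(saccade_start_deg, saccade_end_deg):
    """Mean Euclidean amplitude (degrees) over detected saccades;
    inputs are (N, 2) arrays of saccade start and end positions."""
    return np.linalg.norm(saccade_end_deg - saccade_start_deg, axis=1).mean()

def central_bias(fixation_pos_px, screen_size_px=(1280, 1024)):
    """Mean distance of fixation positions from the screen centre, in pixels
    (assumed resolution); smaller values indicate a stronger central bias."""
    centre = np.asarray(screen_size_px) / 2.0
    return np.linalg.norm(fixation_pos_px - centre, axis=1).mean()
```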


Subject(s)
Exploratory Behavior/physiology; Motion Perception/physiology; Pattern Recognition, Visual/physiology; Pursuit, Smooth/physiology; Schizophrenia/physiopathology; Adult; Eye Movement Measurements; Female; Humans; Male; Middle Aged; Motion Pictures
6.
Behav Res Methods; 51(2): 556-572, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30411227

ABSTRACT

Deep learning approaches have achieved breakthrough performance in various domains. However, the segmentation of raw eye-movement data into discrete events is still done predominantly either by hand or by algorithms that use hand-picked parameters and thresholds. We propose and make publicly available a small 1D-CNN in conjunction with a bidirectional long short-term memory network that classifies gaze samples as fixations, saccades, smooth pursuit, or noise, simultaneously assigning labels in windows of up to 1 s. In addition to unprocessed gaze coordinates, our approach uses different combinations of the speed of gaze, its direction, and acceleration, all computed at different temporal scales, as input features. Its performance was evaluated on a large-scale hand-labeled ground truth data set (GazeCom) and against 12 reference algorithms. Furthermore, we introduced a novel pipeline and metric for event detection in eye-tracking recordings, which enforce stricter criteria on the algorithmically produced events in order to consider them as potentially correct detections. Results show that our deep approach outperforms all others, including the state-of-the-art multi-observer smooth pursuit detector. We additionally test our best model on an independent set of recordings, where our approach stays highly competitive compared to literature methods.
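A hypothetical sketch of the input features named above: speed, direction, and acceleration of gaze computed at several temporal scales from raw x/y coordinates. The window sizes and sampling rate are illustrative assumptions, not the published network's exact configuration.

```python
import numpy as np

def gaze_features(xy_deg, fs_hz=250.0, scales_samples=(1, 2, 4, 8, 16)):
    """Per-sample speed (deg/s), direction (rad), and acceleration (deg/s^2)
    computed over displacement windows of several lengths.
    xy_deg: (N, 2) array of gaze coordinates in degrees."""
    n = len(xy_deg)
    feats = []
    for k in scales_samples:
        disp = np.zeros_like(xy_deg)
        disp[k:] = xy_deg[k:] - xy_deg[:-k]               # displacement over k samples
        speed = np.linalg.norm(disp, axis=1) * fs_hz / k
        direction = np.arctan2(disp[:, 1], disp[:, 0])
        accel = np.zeros(n)
        accel[k:] = (speed[k:] - speed[:-k]) * fs_hz / k  # speed change over k samples
        feats.extend([speed, direction, accel])
    return np.column_stack(feats)                          # shape: (N, 3 * n_scales)
```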


Subject(s)
Eye Movements; Pattern Recognition, Automated/methods; Algorithms; Humans; Pursuit, Smooth; Saccades