Results 1 - 14 of 14
1.
PLoS One ; 19(1): e0291823, 2024.
Article in English | MEDLINE | ID: mdl-38166054

ABSTRACT

In this study, we provide a detailed analysis of entropy measures calculated for fixation eye movement trajectories from three different datasets. We employed six key metrics (Fuzzy, Increment, Sample, Gridded Distribution, Phase, and Spectral Entropies). We calculated these six metrics on three sets of fixations: (1) fixations from the GazeCom dataset, (2) fixations from what we refer to as the "Lund" dataset, and (3) fixations from our own research laboratory ("OK Lab" dataset). For each entropy measure, for each dataset, we closely examined the 36 fixations with the highest entropy and the 36 fixations with the lowest entropy. From this, it was clear that the nature of the information provided by our entropy metrics depended on which dataset was evaluated. These entropy metrics found various types of misclassified fixations in the GazeCom dataset. Two entropy metrics also detected fixations with substantial linear drift. For the Lund dataset, the only finding was that low spectral entropy was associated with what we call "bumpy" fixations, i.e., fixations with low-frequency oscillations. For the OK Lab dataset, three entropy metrics found fixations with high-frequency noise which probably represents ocular microtremor. In this dataset, one entropy metric found fixations with linear drift. The between-dataset results are discussed in terms of the number of fixations in each dataset, the different eye movement stimuli employed, and the method of eye movement classification.
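As an illustration of one of the six measures named above, Sample Entropy can be sketched in a few lines of Python; the template length m and tolerance r below are conventional illustrative choices, not the parameters used in the study:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample Entropy of a 1-D signal: -ln(A/B), where B counts pairs of
    length-m templates within tolerance and A counts length-(m+1) matches."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    tol = r * sd  # tolerance scaled by the signal's standard deviation

    def count_matches(mm):
        count = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                # Chebyshev distance between the two templates
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= tol:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # undefined when no matches are found
    return -math.log(a / b)
```

Regular, repetitive trajectories yield values near zero while irregular ones yield larger values, which is the intuition behind using entropy to flag unusual fixations.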


Subject(s)
Eye Movements , Fixation, Ocular , Entropy
2.
Sci Data ; 10(1): 177, 2023 03 30.
Article in English | MEDLINE | ID: mdl-36997558

ABSTRACT

We present GazeBaseVR, a large-scale, longitudinal, binocular eye-tracking (ET) dataset collected at 250 Hz with an ET-enabled virtual-reality (VR) headset. GazeBaseVR comprises 5,020 binocular recordings from a diverse population of 407 college-aged participants. Participants were recorded up to six times each over a 26-month period, each time performing a series of five different ET tasks: (1) a vergence task, (2) a horizontal smooth pursuit task, (3) a video-viewing task, (4) a self-paced reading task, and (5) a random oblique saccade task. Many of these participants have also been recorded for two previously published datasets with different ET devices, and 11 participants were recorded before and after COVID-19 infection and recovery. GazeBaseVR is suitable for a wide range of research on ET data in VR devices, especially eye movement biometrics due to its large population and longitudinal nature. In addition to ET data, additional participant details are provided to enable further research on topics such as fairness.


Subject(s)
Eye Movements , Eye-Tracking Technology , Virtual Reality , Humans , Young Adult , Saccades
3.
Behav Res Methods ; 55(1): 417-427, 2023 01.
Article in English | MEDLINE | ID: mdl-35411475

ABSTRACT

Manual classification of eye-movements is used in research and as a basis for comparison with automatic algorithms in the development phase. However, human classification will not be useful if it is unreliable and unrepeatable. Therefore, it is important to know what factors might influence and enhance the accuracy and reliability of human classification of eye-movements. In this report we compare three datasets of human manual classification, two from earlier datasets and one, our own dataset, which we present here for the first time. For inter-rater reliability, we assess both the event-level F1-score and sample-level Cohen's κ, across groups of raters. The report points to several possible influences on human classification reliability: eye-tracker quality, use of head restraint, characteristics of the recorded subjects, the availability of detailed scoring rules, and the characteristics and training of the raters.
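The sample-level Cohen's κ used above compares observed rater agreement with the agreement expected by chance; a minimal sketch (the label strings are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    if p_e == 1.0:
        return 1.0  # both raters used a single identical label throughout
    return (p_o - p_e) / (1 - p_e)
```

Applied per sample (e.g., "fixation" vs. "saccade" labels at each timestamp), κ is 1 for perfect agreement and 0 for chance-level agreement.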


Subject(s)
Algorithms , Eye Movements , Humans , Reproducibility of Results , Observer Variation
4.
Front Neurol ; 13: 963968, 2022.
Article in English | MEDLINE | ID: mdl-36034311

ABSTRACT

Background: Nystagmus identification and interpretation is challenging for non-experts who lack specific training in neuro-ophthalmology or neuro-otology. This challenge is magnified when the task is performed via telemedicine. Deep learning models have not been heavily studied in video-based eye movement detection. Methods: We developed, trained, and validated a deep-learning system (aEYE) to classify video recordings as normal or bearing at least two consecutive beats of nystagmus. The videos were retrospectively collected from a subset of the monocular (right eye) video-oculography (VOG) recordings used in the Acute Video-oculography for Vertigo in Emergency Rooms for Rapid Triage (AVERT) clinical trial (#NCT02483429). Our model was derived from a preliminary dataset representing about 10% of the total AVERT videos (n = 435). The videos were trimmed into 10-sec clips sampled at 60 Hz with a resolution of 240 × 320 pixels. We then created 8 variations of the videos by altering the sampling rates (i.e., 30 Hz and 15 Hz) and image resolution (i.e., 60 × 80 pixels and 15 × 20 pixels). The dataset was labeled as "nystagmus" or "no nystagmus" by one expert provider. We then used a filtered image-based motion classification approach to develop aEYE. The model's performance at detecting nystagmus was calculated by using the area under the receiver-operating characteristic curve (AUROC), sensitivity, specificity, and accuracy. Results: An ensemble between the ResNet-soft voting and the VGG-hard voting models had the best-performing metrics. The AUROC, sensitivity, specificity, and accuracy were 0.86, 88.4, 74.2, and 82.7%, respectively. Our validated folds had an average AUROC, sensitivity, specificity, and accuracy of 0.86, 80.3, 80.9, and 80.4%, respectively. Models created from the compressed videos decreased in accuracy as image sampling rate decreased from 60 Hz to 15 Hz. There was only minimal change in the accuracy of nystagmus detection when decreasing image resolution and keeping sampling rate constant. Conclusion: Deep learning is useful in detecting nystagmus in 60 Hz video recordings as well as videos with lower image resolutions and sampling rates, making it a potentially useful tool to aid future automated eye-movement-enabled neurologic diagnosis.
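The sensitivity, specificity, and accuracy figures above follow directly from a 2 × 2 confusion matrix; a minimal sketch, assuming binary labels with 1 = nystagmus:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels
    (1 = nystagmus present, 0 = no nystagmus)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```

The AUROC, by contrast, integrates sensitivity against the false-positive rate over all classification thresholds rather than a single operating point.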

5.
J Eye Mov Res ; 14(3)2021.
Article in English | MEDLINE | ID: mdl-34745443

ABSTRACT

This paper is a follow-on to our earlier paper (7), which focused on the multimodality of angular offsets. This paper applies the same analysis to the measurement of spatial precision. Following the literature, we refer to these measurements as estimates of device precision, but, in fact, subject characteristics clearly affect the measurements. One typical measure of the spatial precision of an eye-tracking device is the standard deviation (SD) of the position signals (horizontal and vertical) during a fixation. The SD is a highly interpretable measure of spread if the underlying error distribution is unimodal and normal. However, in the context of an underlying multimodal distribution, the SD is less interpretable. We present evidence that the majority of such distributions are multimodal (68-70% strongly multimodal); only 21-23% of position distributions were unimodal. We present an alternative method for measuring precision that is appropriate for both unimodal and multimodal distributions. This alternative method produces precision estimates that are substantially smaller than classic measures. We present illustrations of both unimodality and multimodality with either drift or a microsaccade present during fixation. At present, these observations apply only to the EyeLink 1000 and the subjects evaluated herein.
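The classic SD-based precision measure described above can be sketched as follows; combining the horizontal and vertical channels by the root sum of squares is one common convention and is an assumption here:

```python
import math

def sd_precision(x_deg, y_deg):
    """Spatial precision as the SD of horizontal and vertical gaze position
    (in degrees) during one fixation, combined across the two channels."""
    def sd(sig):
        mu = sum(sig) / len(sig)
        return math.sqrt(sum((v - mu) ** 2 for v in sig) / len(sig))
    sd_x, sd_y = sd(x_deg), sd(y_deg)
    return math.sqrt(sd_x ** 2 + sd_y ** 2)
```

As the abstract argues, this single spread number is hard to interpret when the underlying position distribution is multimodal.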

6.
Sensors (Basel) ; 21(14)2021 Jul 13.
Article in English | MEDLINE | ID: mdl-34300511

ABSTRACT

This paper summarizes the OpenEDS 2020 Challenge dataset, the proposed baselines, and results obtained by the top three winners of each competition: (1) Gaze prediction Challenge, with the goal of predicting the gaze vector 1 to 5 frames into the future based on a sequence of previous eye images, and (2) Sparse Temporal Semantic Segmentation Challenge, with the goal of using temporal information to propagate semantic eye labels to contiguous eye image frames. Both competitions were based on the OpenEDS2020 dataset, a novel dataset of eye-image sequences captured at a frame rate of 100 Hz under controlled illumination, using a virtual-reality head-mounted display with two synchronized eye-facing cameras. The dataset, which we make publicly available for the research community, consists of 87 subjects performing several gaze-elicited tasks, and is divided into 2 subsets, one for each competition task. The proposed baselines, based on deep learning approaches, obtained an average angular error of 5.37 degrees for gaze prediction, and a mean intersection over union score (mIoU) of 84.1% for semantic segmentation. The winning solutions were able to outperform the baselines, obtaining up to 3.17 degrees for the former task and 95.2% mIoU for the latter.
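The average angular error used to score the gaze-prediction challenge is the angle between predicted and ground-truth gaze direction vectors; a minimal sketch (the 3-D vector representation is an assumption):

```python
import math

def angular_error_deg(v_pred, v_true):
    """Angle in degrees between two 3-D gaze direction vectors."""
    dot = sum(a * b for a, b in zip(v_pred, v_true))
    norm = math.sqrt(sum(a * a for a in v_pred)) * math.sqrt(sum(b * b for b in v_true))
    # clamp against floating-point overshoot before acos
    cos_angle = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_angle))
```

Averaging this quantity over all predicted frames gives the kind of mean angular error (e.g., 5.37 or 3.17 degrees) quoted above.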


Subject(s)
Smart Glasses , Virtual Reality , Eye-Tracking Technology , Humans , Photography , Semantics
7.
J Eye Mov Res ; 14(3)2021 Jun 03.
Article in English | MEDLINE | ID: mdl-34122749

ABSTRACT

Typically, the position error of an eye-tracking device is measured as the distance of the eye position from the target position in two-dimensional space (angular offset). Accuracy is the mean angular offset. The mean is a highly interpretable measure of central tendency if the underlying error distribution is unimodal and normal. However, in the context of an underlying multimodal distribution, the mean is less interpretable. We present evidence that the majority of such distributions are multimodal. Only 14.7% of fixation angular offset distributions were unimodal, and of these, only 11.5% were normally distributed. (Of the entire dataset, 1.7% were unimodal and normal.) This multimodality persists even when there is only a single, continuous tracking fixation segment per trial. We present several approaches to measuring accuracy in the face of multimodality. We also address the role of fixation drift in partially explaining multimodality.
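Accuracy as defined above, i.e., the mean two-dimensional angular offset between eye and target positions, can be sketched as:

```python
import math

def mean_angular_offset(eye_xy_deg, target_xy_deg):
    """Mean angular offset (accuracy): the average 2-D Euclidean distance,
    in degrees, between eye samples and the corresponding target position."""
    offsets = [
        math.hypot(ex - tx, ey - ty)
        for (ex, ey), (tx, ty) in zip(eye_xy_deg, target_xy_deg)
    ]
    return sum(offsets) / len(offsets)
```

The abstract's caution applies here: with a multimodal offset distribution, this mean can fall between modes and describe no actual sample.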

8.
J Eye Mov Res ; 14(3)2021.
Article in English | MEDLINE | ID: mdl-38957345

ABSTRACT

The Fourier theorem states that any time-series can be decomposed into a set of sinusoidal frequencies, each with its own phase and amplitude. The literature suggests that some frequencies are important for reproducing key qualities of eye movements ("signal") and some frequencies are not ("noise"). To investigate what is signal and what is noise, we analyzed our dataset in three ways: (1) visual inspection of plots of saccade, microsaccade, and smooth pursuit exemplars; (2) analysis of the percentage of variance accounted for (PVAF) in 1,033 unfiltered saccade trajectories by each frequency band; (3) analysis of the main sequence relationship between saccade peak velocity and amplitude, based on a power law fit. Visual inspection suggested that frequencies up to 75 Hz are required to represent microsaccades. Our PVAF analysis indicated that signals in the 0-25 Hz band account for nearly 100% of the variance in saccade trajectories. Power law coefficients (a, b) return to unfiltered levels for signals low-pass filtered at 75 Hz or higher. We conclude that to maintain eye-movement signal and reduce noise, a cutoff frequency of 75 Hz is appropriate. We explain why, given this finding, a minimum sampling rate of 750 Hz is suggested.
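The PVAF analysis described above can be sketched with a discrete Fourier transform and Parseval's relation; the naive DFT below, and the band edges in the usage, are illustrative, not the paper's implementation:

```python
import cmath

def pvaf_band(signal, fs, f_lo, f_hi):
    """Percentage of variance in `signal` (sampled at `fs` Hz) accounted
    for by frequencies in [f_lo, f_hi] Hz, via a naive DFT and Parseval."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [v - mean for v in signal]  # drop DC so power tracks variance
    total = 0.0
    in_band = 0.0
    for k in range(1, n // 2 + 1):  # positive frequencies only
        coeff = sum(centered[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        power = abs(coeff) ** 2
        total += power
        if f_lo <= k * fs / n <= f_hi:
            in_band += power
    return 100.0 * in_band / total
```

For a pure 10 Hz sine sampled at 100 Hz, a band that contains 10 Hz accounts for essentially all the variance, and a band that excludes it accounts for essentially none.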

9.
J Eye Mov Res ; 14(3)2021.
Article in English | MEDLINE | ID: mdl-38957346

ABSTRACT

In a prior report (Raju et al., 2023) we concluded that, if the goal is to preserve events such as saccades, microsaccades, and smooth pursuit in eye-tracking recordings, data with sine wave frequencies below 75 Hz are the signal and data above 75 Hz are noise. Here, we compare five filters in their ability to preserve signal and remove noise. We compared the proprietary STD and EXTRA heuristic filters provided by our EyeLink 1000 (SR-Research, Ottawa, Canada), a Savitzky-Golay (SG) filter, an infinite impulse response (IIR) filter (low-pass Butterworth), and a finite impulse response (FIR) filter. For each of the non-heuristic filters, we systematically searched for optimal parameters. Both the IIR and the FIR filters were zero-phase filters. All filters were evaluated on 216 fixation segments (256 samples each) from nine subjects. Mean frequency response profiles and amplitude spectra for all five filters are provided. We also examined the effect of our filters on a noisy recording. Our FIR filter had the sharpest roll-off of any filter; therefore, it maintained the signal and removed noise more effectively than any other filter. On this basis, we recommend the use of our FIR filter. We also report on the effect of these filters on temporal autocorrelation.
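The zero-phase property mentioned above is typically obtained by running a causal filter forward and then backward over the signal, as in filtfilt-style filtering; a minimal sketch with a simple moving-average kernel standing in for the optimized FIR design (kernel width is illustrative):

```python
def moving_average(signal, width):
    """Causal moving-average filter; edges use a shrinking window."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - width + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def zero_phase_filter(signal, width=3):
    """Forward-backward filtering: the backward pass cancels the phase
    delay introduced by the forward pass, leaving zero net phase shift."""
    forward = moving_average(signal, width)
    backward = moving_average(forward[::-1], width)
    return backward[::-1]
```

Filtering an impulse this way yields a response that is symmetric about the impulse, illustrating that events are smoothed but not shifted in time.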

10.
Sensors (Basel) ; 20(16)2020 Aug 14.
Article in English | MEDLINE | ID: mdl-32823860

ABSTRACT

It is generally accepted that relatively more permanent (i.e., more temporally persistent) traits are more valuable for biometric performance than less permanent traits. Although this finding is intuitive, there is no current work identifying exactly where in the biometric analysis temporal persistence makes a difference. In this paper, we answer this question. In a recent report, we introduced the intraclass correlation coefficient (ICC) as an index of temporal persistence for such features. Here, we present a novel approach using synthetic features to study which aspects of a biometric identification study are influenced by the temporal persistence of features. What we show is that using more temporally persistent features produces effects on the similarity score distributions that explain why this quality is so key to biometric performance. The results identified with the synthetic data are largely reinforced by an analysis of two datasets, one based on eye-movements and one based on gait. There was one difference between the synthetic and real data, related to the intercorrelation of features in real data. Removing these intercorrelations for real datasets with a decorrelation step produced results which were very similar to that obtained with synthetic features.


Subject(s)
Biometric Identification , Eye Movements , Gait Analysis , Biometry , Eye-Tracking Technology , Humans
11.
Behav Res Methods ; 50(4): 1374-1397, 2018 08.
Article in English | MEDLINE | ID: mdl-29766396

ABSTRACT

Nyström and Holmqvist have published a method for the classification of eye movements during reading (ONH) (Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade was determined, (5) employing a new algorithm for detecting PSOs, and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted by three expert raters independently, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not. The MNH fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more current, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.


Subject(s)
Algorithms , Reading , Saccades/physiology , Female , Humans , Machine Learning , Male , Young Adult
12.
PLoS One ; 12(6): e0178501, 2017.
Article in English | MEDLINE | ID: mdl-28575030

ABSTRACT

We introduce the intraclass correlation coefficient (ICC) to the biometric community as an index of the temporal persistence, or stability, of a single biometric feature. It requires, as input, a feature on an interval or ratio scale that is reasonably normally distributed, and it can only be calculated if each subject is tested on 2 or more occasions. For a biometric system with multiple features available for selection, the ICC can be used to measure the relative stability of each feature. We show, for 14 distinct data sets (1 synthetic, 8 eye-movement-related, 2 gait-related, 2 face-recognition-related, and 1 brain-structure-related), that selecting the most stable features, based on the ICC, generally resulted in the best biometric performance. Analyses based on using only the most stable features produced superior Rank-1-Identification Rate (Rank-1-IR) performance in 12 of 14 databases (p = 0.0065, one-tailed), when compared to other sets of features, including the set of all features. For Equal Error Rate (EER), using a subset of only high-ICC features also produced superior performance in 12 of 14 databases (p = 0.0065, one-tailed). In general, then, for our databases, prescreening potential biometric features and choosing only highly reliable features yields better performance than choosing lower-ICC features or all features combined. We also determined that, as the ICC of a group of features increases, the median of the genuine similarity score distribution increases and the spread of this distribution decreases. There were no statistically significant analogous relationships for the impostor distributions. We believe that the ICC will find many uses in biometric research. In the case of eye-movement-driven biometrics, the use of reliable features, as measured by the ICC, allowed us to achieve authentication performance of EER = 2.01%, which was not possible before.
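The ICC can be computed from a one-way ANOVA decomposition of a subjects-by-sessions table; the ICC(1) formulation below is one standard variant and is an assumption here, since the abstract does not restate the exact formula used:

```python
def icc1(data):
    """One-way random-effects ICC(1) for `data`: a list of subjects,
    each a list of k repeated measurements of a single feature."""
    n = len(data)     # number of subjects
    k = len(data[0])  # sessions per subject
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(data, subj_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

A perfectly persistent feature (identical values across sessions) yields an ICC of 1; a feature whose within-subject scatter swamps the between-subject differences yields a value near or below zero.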


Subject(s)
Biometry , Brain , Database Management Systems , Face , Gait , Humans
13.
Behav Res Methods ; 45(1): 203-15, 2013 Mar.
Article in English | MEDLINE | ID: mdl-22806708

ABSTRACT

Ternary eye movement classification, which separates fixations, saccades, and smooth pursuit in raw eye positional data, is extremely challenging. This article develops new, and modifies existing, eye-tracking algorithms for the purpose of conducting meaningful ternary classification. To this end, a set of qualitative and quantitative behavior scores is introduced to facilitate the assessment of classification performance and to provide a means for automated threshold selection. Experimental evaluation of the proposed methods is conducted using eye movement records obtained from 11 subjects at 1000 Hz in response to a step-ramp stimulus eliciting fixations, saccades, and smooth pursuits. Results indicate that a simple hybrid method that incorporates velocity and dispersion thresholding produces robust classification performance. It is concluded that behavior scores can aid automated threshold selection for algorithms capable of successful classification.
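The velocity-plus-dispersion hybrid evaluated above can be sketched as a two-stage rule: a velocity threshold first flags saccade samples, and a dispersion test then splits the remaining slow samples into fixation versus smooth pursuit. The thresholds and window size below are illustrative, not the automatically selected values from the article:

```python
def classify_ternary(positions_deg, fs, vel_thresh=30.0, disp_thresh=0.5):
    """Label each sample 'saccade', 'fixation', or 'pursuit'.
    positions_deg: 1-D gaze positions in degrees; fs: sampling rate in Hz."""
    n = len(positions_deg)
    # sample-to-sample velocity in deg/s
    vel = [abs(positions_deg[i] - positions_deg[i - 1]) * fs for i in range(1, n)]
    vel = [vel[0]] + vel  # pad so velocities align with samples
    labels = []
    for i, v in enumerate(vel):
        if v >= vel_thresh:
            labels.append("saccade")
            continue
        # dispersion over a short window of slow samples around i
        lo, hi = max(0, i - 5), min(n, i + 6)
        window = [positions_deg[j] for j in range(lo, hi) if vel[j] < vel_thresh]
        disp = max(window) - min(window)
        labels.append("fixation" if disp <= disp_thresh else "pursuit")
    return labels
```

A stationary signal stays below both thresholds ("fixation"), a slow steady ramp stays below the velocity threshold but exceeds the dispersion threshold ("pursuite" intent: "pursuit"), and a large step exceeds the velocity threshold ("saccade").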


Subject(s)
Algorithms , Eye Movement Measurements , Models, Biological , Pattern Recognition, Automated/methods , Pursuit, Smooth , Saccades , Adolescent , Adult , Fixation, Ocular , Humans , Reaction Time , Reference Values , Young Adult
14.
J Stud Alcohol Drugs ; 70(5): 652-9, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19737488

ABSTRACT

OBJECTIVE: Heavy episodic drinking in college is an issue of major concern in our society. In the college setting, where alcohol misuse is prevalent, alcohol-related perceptions and automatic attentional biases may be important determinants in students' decisions to engage in risky drinking behaviors. The current study examined college students' attention to alcohol-related beverages in real time using ocular-imaging techniques. The authors hypothesized that alcohol-consumption characteristics such as quantity-frequency of alcohol consumption would predict ocular-imaging indices of attentional bias to alcohol-related images. METHOD: Twenty-six college students successfully completed questionnaires assessing basic demographics and alcohol-consumption characteristics, followed by an eye-tracking task in which they viewed pictorial stimuli consisting of photographs of alcohol-related scenes, household objects, or a combination of these items. RESULTS: Quantity-frequency index (QFI) of alcohol consumption was positively related to the percentage of initial ocular fixations on the alcohol-related items (r = .62, p = .001), whereas QFI negatively predicted the percentage of initial ocular fixations on the control images (r = -.60, p = .002). In addition, QFI positively predicted participants' dwell time on alcohol-related images (r = .57, p = .005), and negatively predicted dwell time on control images (r = -.41, p = .05). Age at first drink and days since last alcohol consumption were not related to eye-tracking metrics. CONCLUSIONS: Ocular-imaging methods are a valuable tool for use in the study of attentional bias to alcohol-related images in college drinkers. Further research is needed to determine the potential application of these methods to the prevention and treatment of alcohol misuse on college campuses.
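The r values reported above are Pearson product-moment correlations between a drinking measure (e.g., QFI) and an eye-tracking index (e.g., dwell time); a minimal sketch of the computation:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near +1 or -1 indicate a strong positive or negative linear relationship, matching the direction of the effects reported for alcohol-related versus control images.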


Subject(s)
Alcohol Drinking/physiopathology , Attention/physiology , Fixation, Ocular/physiology , Photic Stimulation/methods , Students , Universities , Adolescent , Adult , Alcohol Drinking/psychology , Eye Movements/physiology , Female , Humans , Male , Students/psychology , Young Adult