Results 1 - 20 of 1,036
1.
Sensors (Basel) ; 24(10)2024 May 08.
Article in English | MEDLINE | ID: mdl-38793839

ABSTRACT

Understanding human actions often requires in-depth detection and interpretation of bio-signals. Early eye disengagement from the target (EEDT) represents a significant eye behavior that involves the proactive disengagement of gaze from the target to gather information on the anticipated pathway, thereby enabling rapid reactions to the environment. It remains unknown how task difficulty and task repetition affect EEDT. We aim to provide direct evidence of how these factors influence EEDT. We developed a visual tracking task in which participants viewed arrow movement videos while their eye movements were tracked. Task complexity was increased by increasing the number of movement steps. Every movement pattern was performed twice to assess the effect of repetition on eye movement. Participants were required to recall the movement patterns for recall accuracy evaluation and to complete a cognitive load assessment. EEDT was quantified by fixation duration and frequency within the areas ahead of the arrow. When task difficulty increased, recall accuracy scores decreased, cognitive load increased, and EEDT decreased significantly. EEDT was higher in the second trial, but the difference was significant only in tasks with lower complexity. EEDT was positively correlated with recall accuracy and negatively correlated with cognitive load. EEDT was thus reduced by task complexity and increased by task repetition. EEDT may be a promising sensory measure for assessing task performance and cognitive load and can be used for the future development of eye-tracking-based sensors.


Subject(s)
Eye Movements , Eye-Tracking Technology , Humans , Male , Eye Movements/physiology , Female , Adult , Young Adult , Task Performance and Analysis , Cognition/physiology , Fixation, Ocular/physiology
2.
Sci Rep ; 14(1): 11661, 2024 05 22.
Article in English | MEDLINE | ID: mdl-38778122

ABSTRACT

Gaze estimation has long been recognised as having potential as the basis for human-computer interaction (HCI) systems, but usability and robustness of performance remain challenging. This work focuses on systems in which there is a live video stream showing enough of the subject's face to track eye movements, and some means to infer gaze location from detected eye features. Currently, systems generally require some form of calibration or set-up procedure at the start of each user session. Here we explore some simple strategies for enabling gaze-based HCI to operate immediately and robustly without any explicit set-up tasks. We explore different choices of coordinate origin for combining extracted features from multiple subjects, and the replacement of subject-specific calibration by system initiation based on prior models. Results show that referencing all extracted features to local coordinate origins determined by subject start position enables robust immediate operation. Combining this approach with an adaptive gaze estimation model using an interactive user interface enables continuous operation with 75th-percentile gaze errors of 0.7° and maximum gaze errors of 1.7° during prospective testing. These constitute state-of-the-art results and have the potential to enable a new generation of reliable gaze-based HCI systems.


Subject(s)
Eye Movements , Fixation, Ocular , User-Computer Interface , Humans , Fixation, Ocular/physiology , Eye Movements/physiology , Male , Eye-Tracking Technology , Female , Adult
3.
Nat Commun ; 15(1): 3692, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693186

ABSTRACT

Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with brain regions critical for oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon we termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural differences in auditory cognition.


Subject(s)
Attention , Eye Movements , Magnetoencephalography , Speech Perception , Speech , Humans , Attention/physiology , Eye Movements/physiology , Male , Female , Adult , Young Adult , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation , Brain/physiology , Eye-Tracking Technology
4.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732794

ABSTRACT

High-quality eye-tracking data are crucial in behavioral sciences and medicine. Even with a solid understanding of the literature, selecting the most suitable algorithm for a specific research project poses a challenge. Empowering applied researchers to choose the best-fitting detector for their research needs is the primary contribution of this paper. We developed a framework to systematically assess and compare the effectiveness of 13 state-of-the-art algorithms through a unified application interface. Hence, we more than double the number of algorithms that are currently usable within a single software package and allow researchers to identify the best-suited algorithm for a given scientific setup. Our framework validation on retrospective data underscores its suitability for algorithm selection. Through a detailed and reproducible step-by-step workflow, we hope to contribute towards significantly improved data quality in scientific experiments.


Subject(s)
Algorithms , Eye-Tracking Technology , Humans , Software , Data Accuracy , Eye Movements/physiology , Reproducibility of Results
5.
Sci Rep ; 14(1): 12000, 2024 05 25.
Article in English | MEDLINE | ID: mdl-38796509

ABSTRACT

In a retrospective study, 54 patients with treatment-resistant major depressive disorder (TRD) completed a free-viewing task in which they freely explored pairs of faces, each pairing an emotional face (happy or sad) with a neutral face. Attentional bias to emotional faces was calculated for early and sustained attention. We observed a significant negative correlation between depression severity, as measured by the 10-item Montgomery-Åsberg Depression Rating Scale (MADRS), and sustained attention to happy faces. In addition, we observed a positive correlation between depression severity and sustained attention to sad faces. No significant correlation between depression severity and early attention was found for either happy or sad faces. Although conclusions from the current study are limited by the lack of a comparison control group, the eye-tracking free-viewing task appears to be a relevant, accessible and easy-to-use tool for measuring depression severity through emotional attentional biases in TRD.


Subject(s)
Attentional Bias , Depressive Disorder, Major , Emotions , Facial Expression , Humans , Male , Female , Adult , Attentional Bias/physiology , Middle Aged , Emotions/physiology , Depressive Disorder, Major/psychology , Depressive Disorder, Major/physiopathology , Retrospective Studies , Eye-Tracking Technology , Depressive Disorder, Treatment-Resistant/psychology , Severity of Illness Index , Attention/physiology
6.
PLoS One ; 19(5): e0304150, 2024.
Article in English | MEDLINE | ID: mdl-38805447

ABSTRACT

When comprehending speech, listeners can use information encoded in visual cues from a face to enhance auditory speech comprehension. For example, prior work has shown that mouth movements reflect articulatory features of speech segments and durational information, while pitch and speech amplitude are primarily cued by eyebrow and head movements. Little is known about how the visual perception of segmental and prosodic speech information is influenced by linguistic experience. Using eye-tracking, we studied how perceivers' visual scanning of different regions on a talking face predicts accuracy in a task targeting segmental versus prosodic information, and asked how this was influenced by language familiarity. Twenty-four native English perceivers heard two audio sentences in either English or Mandarin (an unfamiliar, non-native language), which sometimes differed in segmental or prosodic information (or both). Perceivers then saw a silent video of a talking face and judged whether that video matched either the first or second audio sentence (or whether both sentences were the same). First, increased looking to the mouth predicted correct responses only for non-native language trials. Second, the start of a successful search for speech information in the mouth area was significantly delayed in non-native versus native trials, but only when there were prosodic differences alone in the auditory sentences, and not when there were segmental differences. Third, in correct trials, the saccade amplitude in native language trials was significantly greater than in non-native trials, indicating more intensely focused fixations in the latter. Taken together, these results suggest that mouth-looking was generally more evident when processing a non-native versus native language in all analyses, but, notably, when measuring perceivers' latency to fixate the mouth, this language effect was largest in trials where only prosodic information was useful for the task.


Subject(s)
Language , Phonetics , Speech Perception , Humans , Female , Male , Adult , Speech Perception/physiology , Young Adult , Face/physiology , Visual Perception/physiology , Eye Movements/physiology , Speech/physiology , Eye-Tracking Technology
7.
Ann Behav Med ; 58(6): 445-456, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38718146

ABSTRACT

BACKGROUND: Little is known about the influence of e-cigarette marketing features on the antecedents of e-cigarette use. PURPOSE: Using an eye-tracking experiment, we examined visual attention to common features in e-cigarette ads and its associations with positive e-cigarette perceptions among young adults. METHODS: Young adults (ages 18-29) who smoke cigarettes (n = 40) or do not use tobacco (n = 71) viewed 30 e-cigarette ads on a computer screen. Eye-tracking technology measured dwell time (fixation duration) and entry time (time to first fixation) for 14 pre-defined ad features. Participants then completed a survey about perceptions of e-cigarettes shown in the ads. We used regression models to examine the associations between ad features and standardized attention metrics, among all participants and by tobacco-use status, and between person-aggregated standardized attention for each ad feature and positive e-cigarette perceptions. RESULTS: Dwell time was the longest for smoker-targeted claims, positive experience claims, and price promotions. Entry time was the shortest for multiple flavor descriptions, nicotine warnings, and people. Those who do not use tobacco had a longer dwell time for minor sales restrictions and longer entry time for purchasing information than those who smoke. Longer dwell time for multiple flavor descriptions was associated with e-cigarette appeal. A shorter entry time for fruit flavor description was associated with positive e-cigarette-use expectancies. CONCLUSIONS: Young adults allocated attention differently to various e-cigarette ad features, and such viewing patterns were largely similar across tobacco-use statuses. Multiple or fruit flavors may be the features that contribute to the positive influence of e-cigarette marketing among young adults.


E-cigarette marketing exposure is associated with e-cigarette use among young adults. However, little is known about the influence of e-cigarette marketing features among this population. This study used eye-tracking technology to objectively measure dwell time and entry time for 14 pre-defined e-cigarette ad features. Young adults (ages 18-29) who smoke cigarettes (n = 40) or do not use tobacco (n = 71) viewed 30 e-cigarette ads on a computer screen and completed an online survey about positive e-cigarette perceptions. The study found that dwell time was the longest for smoker-targeted claims, positive experience claims, and price promotions. Entry time was the shortest for multiple flavor descriptions, nicotine warnings, and people. Those who do not use tobacco had a longer dwell time for minor sales restrictions and longer entry time for purchasing information than those who smoke. Longer dwell time for multiple flavor descriptions was associated with e-cigarette appeal. A shorter entry time for fruit flavor description was associated with positive e-cigarette-use expectancies. The results suggest that young adults allocated attention differently to various e-cigarette ad features, and such viewing patterns were largely similar across tobacco-use statuses. Multiple or fruit flavors may be the features that contribute to the positive influence of e-cigarette marketing among young adults.


Subject(s)
Advertising , Attention , Electronic Nicotine Delivery Systems , Eye-Tracking Technology , Humans , Young Adult , Male , Female , Adult , Adolescent , Vaping/psychology
8.
JAMA Netw Open ; 7(5): e2411190, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38743420

ABSTRACT

Importance: Finding effective and scalable solutions to address diagnostic delays and disparities in autism is a public health imperative. Approaches that integrate eye-tracking biomarkers into tiered community-based models of autism evaluation hold promise for addressing this problem. Objective: To determine whether a battery of eye-tracking biomarkers can reliably differentiate young children with and without autism in a community-referred sample collected during clinical evaluation in the primary care setting and to evaluate whether combining eye-tracking biomarkers with primary care practitioner (PCP) diagnosis and diagnostic certainty is associated with diagnostic outcome. Design, Setting, and Participants: Early Autism Evaluation (EAE) Hub system PCPs referred a consecutive sample of children to this prospective diagnostic study for blinded eye-tracking index test and follow-up expert evaluation from June 7, 2019, to September 23, 2022. Participants included 146 children (aged 14-48 months) consecutively referred by 7 EAE Hubs. Of 154 children enrolled, 146 provided usable data for at least 1 eye-tracking measure. Main Outcomes and Measures: The primary outcomes were sensitivity and specificity of a composite eye-tracking (ie, index) test, which was a consolidated measure based on significant eye-tracking indices, compared with reference standard expert clinical autism diagnosis. Secondary outcome measures were sensitivity and specificity of an integrated approach using an index test and PCP diagnosis and certainty. Results: Among 146 children (mean [SD] age, 2.6 [0.6] years; 104 [71%] male; 21 [14%] Hispanic or Latine and 96 [66%] non-Latine White; 102 [70%] with a reference standard autism diagnosis), 113 (77%) had concordant autism outcomes between the index (composite biomarker) and reference outcomes, with 77.5% sensitivity (95% CI, 68.4%-84.5%) and 77.3% specificity (95% CI, 63.0%-87.2%). When index diagnosis was based on the combination of a composite biomarker, PCP diagnosis, and diagnostic certainty, outcomes were concordant with reference standard for 114 of 127 cases (90%) with a sensitivity of 90.7% (95% CI, 83.3%-95.0%) and a specificity of 86.7% (95% CI, 70.3%-94.7%). Conclusions and Relevance: In this prospective diagnostic study, a composite eye-tracking biomarker was associated with a best-estimate clinical diagnosis of autism, and an integrated diagnostic model including PCP diagnosis and diagnostic certainty demonstrated improved sensitivity and specificity. These findings suggest that equipping PCPs with a multimethod diagnostic approach has the potential to substantially improve access to timely, accurate diagnosis in local communities.
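The sensitivity and specificity figures above follow directly from the reported counts: 102 children had a reference-standard autism diagnosis, 44 did not, and 113 index outcomes were concordant. A minimal sketch of that arithmetic (the confusion-matrix counts below are reconstructed from the reported percentages, not taken from the paper's tables):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Reconstructed counts: 79 of 102 positives and 34 of 44 negatives
# correctly classified (79 + 34 = 113 concordant cases).
sens, spec = sensitivity_specificity(tp=79, fn=23, tn=34, fp=10)
print(round(sens * 100, 1), round(spec * 100, 1))  # 77.5 77.3
```

The same function applied to the integrated model's counts reproduces the 90.7%/86.7% figures.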


Subject(s)
Autistic Disorder , Biomarkers , Eye-Tracking Technology , Primary Health Care , Humans , Male , Female , Child, Preschool , Primary Health Care/methods , Prospective Studies , Infant , Biomarkers/blood , Biomarkers/analysis , Autistic Disorder/diagnosis , Sensitivity and Specificity
9.
PLoS One ; 19(5): e0302644, 2024.
Article in English | MEDLINE | ID: mdl-38701068

ABSTRACT

Narcissism is part of the Dark Triad, which also comprises Machiavellianism and psychopathy. Two main types of narcissism exist: grandiose and vulnerable. As a Dark Triad trait, narcissism is typically associated with negative outcomes. However, recent research suggests that at least the grandiose type may be linked (directly or indirectly) to positive outcomes, including lower levels of psychopathology, higher school grades in adolescents, deeper and more strategic learning in university students, and higher cognitive performance in experimental settings. The current pre-registered, quasi-experimental study used eye-tracking to assess whether grandiose narcissism indirectly predicts cognitive performance through a wider distribution of attention on the Raven's Progressive Matrices task. Fifty-four adults completed measures of the Dark Triad, self-esteem and psychopathology. Eight months to one year later, participants completed the Raven's task while their eye movements were monitored under high-stress conditions. When controlling for previous levels of psychopathology, grandiose narcissism predicted higher Raven's scores indirectly, through increased variability in the number of fixations across trials. These findings suggest that grandiose narcissism predicts higher cognitive performance, at least in experimental settings, and call for further research to understand the implications of this seemingly dark trait for performance across various settings.


Subject(s)
Attention , Cognition , Narcissism , Humans , Male , Female , Cognition/physiology , Adult , Attention/physiology , Young Adult , Eye-Tracking Technology , Stress, Psychological , Adolescent , Self Concept
10.
PLoS One ; 19(5): e0303755, 2024.
Article in English | MEDLINE | ID: mdl-38758747

ABSTRACT

Recent eye tracking studies have linked gaze reinstatement-when eye movements from encoding are reinstated during retrieval-with memory performance. In this study, we investigated whether gaze reinstatement is influenced by the affective salience of information stored in memory, using an adaptation of the emotion-induced memory trade-off paradigm. Participants learned word-scene pairs, where scenes were composed of negative or neutral objects located on the left or right side of neutral backgrounds. This allowed us to measure gaze reinstatement during scene memory tests based on whether people looked at the side of the screen where the object had been located. Across two experiments, we behaviorally replicated the emotion-induced memory trade-off effect, in that negative object memory was better than neutral object memory at the expense of background memory. Furthermore, we found evidence that gaze reinstatement was related to recognition memory for the object and background scene components. This effect was generally comparable for negative and neutral memories, although the effects of valence varied somewhat between the two experiments. Together, these findings suggest that gaze reinstatement occurs independently of the processes contributing to the emotion-induced memory trade-off effect.


Subject(s)
Emotions , Eye Movements , Eye-Tracking Technology , Memory , Humans , Emotions/physiology , Female , Male , Young Adult , Adult , Memory/physiology , Eye Movements/physiology , Fixation, Ocular/physiology , Adolescent , Recognition, Psychology/physiology , Photic Stimulation
11.
Sensors (Basel) ; 24(9)2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38732772

ABSTRACT

In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes. This complexity is increased by the constant movement and dynamic nature of both the observer and their environment in urban spaces. This paper presents a novel approach that integrates the capabilities of two foundation models, YOLOv8 and Mask2Former, as a pipeline to automatically annotate fixation points without requiring additional training or fine-tuning. Our pipeline leverages YOLO's extensive training on the MS COCO dataset for object detection and Mask2Former's training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, ensuring reliable annotations, even in complex scenes with multiple objects side by side or at different depths. Validation through two experiments showcases its efficiency, achieving 89.05% accuracy in a controlled data collection and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime per frame of 1.61 ± 0.35 s, our approach stands as a robust solution for automatic fixation annotation.
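The paper's pipeline ends with a simple lookup: once the segmentation model has produced a per-frame label map, each fixation point is annotated with the semantic class at its pixel location. A minimal sketch of that final step (the class names and the toy label map below are hypothetical, not the Cityscapes palette or the paper's code):

```python
def annotate_fixation(label_map, class_names, x, y):
    """Map a fixation point in pixel coordinates to the semantic class
    at that location; returns None if the point falls outside the frame."""
    if 0 <= y < len(label_map) and 0 <= x < len(label_map[0]):
        return class_names[label_map[y][x]]
    return None

# Toy 4x4 label map standing in for one frame of segmentation output.
classes = ["road", "building", "person"]
frame = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 1],
         [0, 0, 0, 0]]
print(annotate_fixation(frame, classes, x=1, y=2))  # person
```

In practice the label map would come from Mask2Former (with object-detection boxes from YOLOv8 resolving overlapping instances), but the annotation step itself reduces to this indexing.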


Subject(s)
Eye-Tracking Technology , Fixation, Ocular , Humans , Fixation, Ocular/physiology , Video Recording/methods , Algorithms , Eye Movements/physiology
12.
Psychol Sport Exerc ; 73: 102654, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38740079

ABSTRACT

INTRODUCTION: In the Olympic climbing discipline of bouldering, climbers can preview boulders before actually climbing them. Whilst such pre-climbing route previewing is considered central to subsequent climbing performance, research on cognitive-behavioural processes during the preparatory phase in the modality of bouldering is lacking. The present study aimed to extend existing findings on neural efficiency processes associated with advanced skill level during motor activity preparation by examining cognitive-behavioural processes during the previewing of boulders. METHODS: Intermediate (n = 20), advanced (n = 20), and elite (n = 20) climbers were asked to first preview, and then attempt, two boulders of different difficulty levels (boulder 1: advanced difficulty; boulder 2: elite difficulty). During previewing, climbers' gaze behaviour was recorded using a portable eye-tracker. RESULTS: Linear regression revealed, for both boulders, a significant relation between participants' skill levels and both preview duration and the number of scans during previewing. Elite climbers more commonly used a superficial scan path than advanced and intermediate climbers. On the more difficult boulder, both elite and advanced climbers showed longer preview durations, performed more scans, and less often applied a superficial scan path than on the easier boulder. CONCLUSION: Findings revealed that cognitive-behavioural processes during route previewing are associated with climbing expertise and boulder difficulty. Superior domain-specific cognitive proficiency seems to account for the expertise-processing paradigm in boulder previewing, contributing to faster and more conscious acquisition of perceptual cues, more efficient visual search strategies, and better identification of representative patterns among experts.


Subject(s)
Cognition , Mountaineering , Humans , Male , Cognition/physiology , Adult , Young Adult , Mountaineering/physiology , Mountaineering/psychology , Female , Athletic Performance/physiology , Athletic Performance/psychology , Motor Skills/physiology , Psychomotor Performance/physiology , Eye-Tracking Technology
13.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38602738

ABSTRACT

Cerebral small vessel disease is one of the most prevalent causes of vascular cognitive impairment. We aimed to find objective, process-based indicators related to memory function to assist in the detection of memory impairment in patients with cerebral small vessel disease. Thirty-nine cerebral small vessel disease patients and 22 healthy controls were invited to complete neurological examinations, neuropsychological assessments, and eye tracking tasks. Eye tracking indicators were recorded and analyzed in combination with imaging features. The cerebral small vessel disease patients scored lower on the traditional memory task and performed worse on the eye tracking memory task compared to the healthy controls. Across all difficulty levels, the patients exhibited longer visit durations and higher visit counts within areas of interest and targets, and a lower percentage of total visit duration spent on target images relative to total visit duration on areas of interest during the decoding stage. Our results demonstrate that cerebral small vessel disease patients performed worse on the memory scale and the eye tracking memory task, potentially due to heightened attentional allocation to nontarget images during the retrieval stage. The eye tracking memory task could provide process-based indicators as a beneficial complement to memory assessment and offer new insights into the mechanism of memory impairment in cerebral small vessel disease patients.


Subject(s)
Cerebral Small Vessel Diseases , Cognitive Dysfunction , Humans , Eye-Tracking Technology , Memory Disorders/diagnostic imaging , Memory Disorders/etiology , Cerebral Small Vessel Diseases/complications , Cerebral Small Vessel Diseases/diagnostic imaging , Cognition
14.
Comput Biol Med ; 174: 108364, 2024 May.
Article in English | MEDLINE | ID: mdl-38599067

ABSTRACT

Eye movement analysis is critical to studying human brain phenomena such as perception, cognition, and behavior. However, under uncontrolled real-world settings, the recorded gaze coordinates (commonly used to track eye movements) are typically noisy and make it difficult to track changes in the state of each phenomenon precisely, primarily because the expected change is usually a slower transient process. This paper proposes an approach, Improved Naive Segmented Linear Regression (INSLR), which approximates the gaze coordinates with a piecewise linear function (PLF) referred to as a hypothesis. INSLR improves on the existing NSLR approach by employing a hypothesis-clustering algorithm, which redefines the final hypothesis estimation in two steps: (1) at each time-stamp, measure the likelihood of each hypothesis in the candidate list by its least-squares fit score and its distance from the k-means of the hypotheses in the list; (2) filter hypotheses based on a pre-defined threshold. We demonstrate the significance of the INSLR method in addressing the challenges of uncontrolled real-world settings, such as gaze denoising and minimizing gaze prediction errors from cost-effective devices like webcams. Experimental results show that INSLR consistently outperforms the baseline NSLR in denoising noisy signals from three eye movement datasets and minimizes the error in gaze prediction from a low-precision device for 71.1% of samples. Furthermore, this improvement in denoising quality is further validated by the improved accuracy of the oculomotor event classifier NSLR-HMM and by enhanced sensitivity in detecting variations in attention induced by distractors during online lectures.
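The building block of any segmented linear regression over gaze data is the per-segment least-squares fit and its residual score, which is one of the two quantities INSLR uses to rank candidate hypotheses. A minimal sketch of that step (this is generic ordinary least squares, not the INSLR implementation; the k-means distance term is omitted):

```python
def fit_segment(ts, ys):
    """Ordinary least-squares line fit over one segment of gaze samples.
    Returns (slope, intercept, sum of squared residuals); the residual
    sum is the fit score a candidate hypothesis would be ranked by."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    denom = sum((t - mt) ** 2 for t in ts)
    slope = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / denom
    intercept = my - slope * mt
    sse = sum((y - (slope * t + intercept)) ** 2 for t, y in zip(ts, ys))
    return slope, intercept, sse

# A noiseless ramp fits exactly: slope 2, intercept 1, zero residual.
s, b, e = fit_segment([0, 1, 2, 3], [1, 3, 5, 7])
print(round(s, 6), round(b, 6), round(e, 6))  # 2.0 1.0 0.0
```

A full piecewise fit would apply this over candidate breakpoint placements and keep the segmentation minimizing total residual subject to a complexity penalty.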


Subject(s)
Eye Movements , Humans , Eye Movements/physiology , Linear Models , Algorithms , Eye-Tracking Technology
15.
Epilepsy Behav ; 155: 109749, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38636142

ABSTRACT

OBJECTIVE: Epilepsy patients often report memory deficits despite normal objective testing, suggesting that available measures are insensitive or that non-mnemonic factors are involved. The Visual Paired Comparison Task (VPCT) assesses novelty preference, the tendency to fixate on novel images rather than previously viewed items, requiring recognition memory for the "old" images. As novelty preference is a sensitive measure of hippocampal-dependent memory function, we predicted impaired VPCT performance in epilepsy patients compared to healthy controls. METHODS: We assessed 26 healthy adult controls and 31 epilepsy patients (16 focal-onset, 13 generalized-onset, 2 unknown-onset) with the VPCT using delays of 2 or 30 s between encoding and recognition. Fifteen healthy controls and 17 epilepsy patients (10 focal-onset, 5 generalized-onset, 2 unknown-onset) completed the task at 2-, 5-, and 30-minute delays. Subjects also performed standard memory measures, including the Medical College of Georgia (MCG) Paragraph Test, California Verbal Learning Test-Second Edition (CVLT-II), and Brief Visual Memory Test-Revised (BVMT-R). RESULTS: The epilepsy group was high functioning, with greater estimated IQ (p = 0.041), greater years of education (p = 0.034), and higher BVMT-R scores (p = 0.024) compared to controls. Both the control group and epilepsy cohort, as well as focal- and generalized-onset subgroups, had intact novelty preference at the 2- and 30-second delays (p-values ≤ 0.001) and declined at 30 min (p-values > 0.05). Only the epilepsy patients had early declines at 2- and 5-minute delays (controls with intact novelty preference at p = 0.003 and p ≤ 0.001, respectively; epilepsy groups' p-values > 0.05). CONCLUSIONS: Memory for the "old" items decayed more rapidly in overall, focal-onset, and generalized-onset epilepsy groups. The VPCT detected deficits while standard memory measures were largely intact, suggesting that the VPCT may be a more sensitive measure of temporal lobe memory function than standard neuropsychological batteries.


Subject(s)
Epilepsy , Memory Disorders , Neuropsychological Tests , Recognition, Psychology , Humans , Male , Female , Adult , Epilepsy/psychology , Epilepsy/diagnosis , Epilepsy/physiopathology , Epilepsy/complications , Recognition, Psychology/physiology , Memory Disorders/diagnosis , Memory Disorders/etiology , Middle Aged , Young Adult , Eye-Tracking Technology , Photic Stimulation/methods
16.
J Affect Disord ; 358: 326-334, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38615846

ABSTRACT

BACKGROUND: Early identification of autism spectrum disorder (ASD) improves long-term outcomes, yet significant diagnostic delays persist. METHODS: A retrospective cohort of 449 children (ASD: 246, typically developing [TD]: 203) was used for model development. Eye-movement data were collected from the participants watching videos that featured eye-tracking paradigms for assessing social and non-social cognition. Five machine learning algorithms, namely random forest, support vector machine, logistic regression, artificial neural network, and extreme gradient boosting, were trained to classify children with ASD and TD. The best-performing algorithm was selected to build the final model which was further evaluated in a prospective cohort of 80 children. The Shapley values interpreted important eye-tracking features. RESULTS: Random forest outperformed other algorithms during model development and achieved an area under the curve of 0.849 (< 3 years: 0.832, ≥ 3 years: 0.868) on the external validation set. Of the ten most important eye-tracking features, three measured social cognition, and the rest were related to non-social cognition. A deterioration in model performance was observed using only the social or non-social cognition-related eye-tracking features. LIMITATIONS: The sample size of this study, although larger than that of existing studies of ASD based on eye-tracking data, was still relatively small compared to the number of features. CONCLUSIONS: Machine learning models based on eye-tracking data have the potential to be cost- and time-efficient digital tools for the early identification of ASD. Eye-tracking phenotypes related to social and non-social cognition play an important role in distinguishing children with ASD from TD children.
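The headline metric above, area under the curve (AUC), has a simple rank-based interpretation: the probability that a randomly chosen child with ASD receives a higher classifier score than a randomly chosen TD child. A minimal sketch of that computation (the scores below are hypothetical; the study's model is a random forest, not shown here):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC as the probability that a randomly chosen positive
    outscores a randomly chosen negative (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for ASD (positive) vs. TD (negative).
print(round(roc_auc([0.9, 0.8, 0.55], [0.6, 0.3]), 3))  # 0.833
```

This pairwise form is equivalent to integrating the ROC curve and is how an AUC of 0.849 on the external validation set would be interpreted: an 84.9% chance that the model ranks a child with ASD above a TD child.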


Subject(s)
Autism Spectrum Disorder , Eye-Tracking Technology , Machine Learning , Humans , Autism Spectrum Disorder/diagnosis , Autism Spectrum Disorder/physiopathology , Male , Female , Child, Preschool , Child , Retrospective Studies , Early Diagnosis , Eye Movements/physiology , Social Cognition , Algorithms , Prospective Studies , Support Vector Machine
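The classification setup described in entry 16 can be sketched as follows. This is a minimal illustration with invented synthetic data, not the study's actual features or code: a random forest is trained on hypothetical eye-tracking features, evaluated by AUC on a held-out split, and feature importances serve as a rough stand-in for the Shapley-value ranking the authors report.

```python
# Hedged sketch of the entry-16 pipeline: random forest on synthetic
# "eye-tracking" features (invented), AUC on a held-out set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 449  # development-cohort size reported in the abstract
# Ten invented features, e.g. fixation ratios on social vs. non-social regions.
X = rng.normal(size=(n, 10))
# Synthetic labels carrying signal in two of the ten features.
y = (X[:, 0] + X[:, 3] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
# Impurity-based importances give a rough analogue of a Shapley ranking.
importances = clf.feature_importances_
```

In practice the Shapley interpretation would use a dedicated explainer (e.g. the `shap` package) rather than impurity-based importances, which are known to be biased toward high-cardinality features.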
17.
Sensors (Basel) ; 24(8)2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38676162

ABSTRACT

Pupil size is a significant biosignal for human behavior monitoring and can reveal much underlying information. This study explored the effects of task load, task familiarity, and gaze position on pupil response during learning of a visual tracking task. We hypothesized that pupil size would increase with task load up to a certain level before decreasing, decrease with task familiarity, and increase more when focusing on areas preceding the target than on other areas. Fifteen participants were recruited for an arrow tracking learning task with incremental task load. Pupil size data were collected using a Tobii Pro Nano eye tracker. A 2 × 3 × 5 three-way factorial repeated measures ANOVA was conducted using R (version 4.2.1) to evaluate the main and interactive effects of key variables on adjusted pupil size. The association between individuals' cognitive load, assessed by NASA-TLX, and pupil size was further analyzed using a linear mixed-effect model. We found that task repetition reduced pupil size; however, this effect diminished as task load increased. The main effect of task load approached statistical significance, but different trends were observed in trial 1 and trial 2. No significant difference in pupil size was detected among the three gaze positions. The relationship between pupil size and cognitive load overall followed an inverted U curve. Our study showed how pupil size changes as a function of task load, task familiarity, and gaze scanning. This finding provides sensory evidence that could improve educational outcomes.


Subject(s)
Eye-Tracking Technology , Pupil , Humans , Pupil/physiology , Male , Female , Adult , Young Adult , Fixation, Ocular/physiology , Eye Movements/physiology
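The mixed-effects analysis described in entry 17 can be sketched in a few lines. This is a hypothetical illustration with simulated data (the study used R, not Python): adjusted pupil size is regressed on NASA-TLX load with a quadratic term to capture the reported inverted-U relationship, and a random intercept per participant.

```python
# Hedged sketch of the entry-17 analysis: linear mixed-effects model of
# pupil size vs. NASA-TLX load with random intercepts per participant.
# All data below are simulated stand-ins, not the study's measurements.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subj in range(15):  # 15 participants, as in the abstract
    offset = rng.normal(scale=0.1)  # subject-specific baseline pupil size
    for tlx in np.linspace(20, 80, 10):  # hypothetical NASA-TLX scores
        # Inverted-U: positive linear term, negative quadratic term.
        pupil = 3.0 + 0.04 * tlx - 0.0004 * tlx**2 + offset \
                + rng.normal(scale=0.05)
        rows.append({"subject": subj, "tlx": tlx, "pupil": pupil})
df = pd.DataFrame(rows)

fit = smf.mixedlm("pupil ~ tlx + I(tlx ** 2)", df, groups=df["subject"]).fit()
quad = fit.params["I(tlx ** 2)"]  # negative coefficient -> inverted U
```

A negative quadratic coefficient is the model-level signature of the inverted-U curve the abstract describes; the random intercept absorbs stable between-participant differences in baseline pupil size.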
18.
J Exp Child Psychol ; 243: 105928, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38643735

ABSTRACT

Previous studies have shown that adults exhibit the strongest attentional bias toward neutral infant faces when viewing faces with different expressions, with the effect varying across attentional processing stages as a function of stimulus presentation time. However, it is not clear how the temporal characteristics associated with this strongest effect change over time. Thus, we combined a free-viewing task with eye-tracking technology to measure adults' attentional bias toward infant and adult faces of the same identity displaying happy, neutral, and sad expressions. Analysis of the total time course indicated that the strongest effect occurred during the strategic processing stage. However, analysis of the split time course revealed that sad infant faces first elicited adults' attentional bias at 0 to 500 ms, whereas the strongest effect of attentional bias toward neutral infant faces was observed at 1000 to 3000 ms, peaking at 1500 to 2000 ms. Women and men did not differ in their responses to the different expressions. In summary, this study provides further evidence that adults' attentional bias toward infant faces across stages of attention processing is modulated by expression. Specifically, during automatic processing adults' attentional bias was directed toward sad infant faces, followed by a shift to neutral infant faces during strategic processing, which ultimately produced the strongest effect. These findings highlight that this strongest effect is dynamic and associated with a specific time window in the strategic process.


Subject(s)
Attentional Bias , Facial Expression , Facial Recognition , Humans , Female , Male , Attentional Bias/physiology , Young Adult , Adult , Facial Recognition/physiology , Infant , Eye-Tracking Technology , Attention , Time Factors
19.
Appl Ergon ; 118: 104286, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38583317

ABSTRACT

The human-nature connection is one of the main aspects determining supportive and comfortable office environments. In this context, the application of eye-tracking-equipped Virtual Reality (VR) devices to evaluate the effect of indoor greenery elements on individuals' efficiency and engagement has been limited. A new approach to investigating visual attention, distraction, cognitive load, and performance in this field was carried out via a pilot study comparing three virtual office layouts (Indoor Green, Outdoor Green and Non-Biophilic). Sixty-three participants completed cognitive tasks and surveys while their gaze behaviour was measured. Sense-of-presence, immersivity and cybersickness results supported the ecological validity of VR. Visual attention was positively influenced by the proximity of users to the greenery element, while visual distraction from tasks was negatively influenced by the size of the greenery. Greenery elements were also associated with lower cognitive load and more efficient information searching, resulting in improved performance.


Subject(s)
Attention , Cognition , Eye-Tracking Technology , Virtual Reality , Humans , Pilot Projects , Male , Female , Adult , Young Adult , Workplace/psychology , Task Performance and Analysis , User-Computer Interface
20.
J Vis Exp ; (206)2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38682904

ABSTRACT

The study of behavioral responses to visual stimuli is a key component of understanding visual system function. One notable response is the optokinetic reflex (OKR), a highly conserved innate behavior necessary for image stabilization on the retina. The OKR provides a robust readout of image tracking ability and has been extensively studied to understand visual system circuitry and function in animals from different genetic backgrounds. The OKR consists of two phases: a slow tracking phase as the eye follows a stimulus to the edge of the visual plane and a compensatory fast phase saccade that resets the position of the eye in the orbit. Previous methods of tracking gain quantification, although reliable, are labor intensive and can be subjective or arbitrarily derived. To obtain more rapid and reproducible quantification of eye tracking ability, we have developed a novel semi-automated analysis program, PyOKR, that allows for quantification of two-dimensional eye tracking motion in response to any directional stimulus, in addition to being adaptable to any type of video-oculography equipment. This method provides automated filtering, selection of slow tracking phases, modeling of vertical and horizontal eye vectors, quantification of eye movement gains relative to stimulus speed, and organization of resultant data into a usable spreadsheet for statistical and graphical comparisons. This quantitative and streamlined analysis pipeline, readily accessible via PyPI import, provides a fast and direct measurement of OKR responses, thereby facilitating the study of visual behavioral responses.


Subject(s)
Eye-Tracking Technology , Animals , Nystagmus, Optokinetic/physiology , Eye Movements/physiology
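The core computation that entry 20's pipeline automates can be illustrated compactly. This is a simplified sketch, not PyOKR's actual API: fast-phase saccades are removed with a velocity threshold, and tracking gain is taken as mean slow-phase eye velocity relative to stimulus speed. The sawtooth trace below is synthetic.

```python
# Hedged illustration of OKR tracking-gain quantification (not PyOKR's API):
# threshold out fast-phase resets, then compare slow-phase eye velocity
# to stimulus velocity.
import numpy as np

def tracking_gain(eye_pos, stim_speed, dt, saccade_thresh=50.0):
    """Mean slow-phase eye velocity divided by stimulus speed (deg/s)."""
    vel = np.diff(eye_pos) / dt                  # instantaneous eye velocity
    slow = vel[np.abs(vel) < saccade_thresh]     # drop fast-phase saccades
    return slow.mean() / stim_speed

# Synthetic trace: the eye follows a 10 deg/s stimulus at 80% gain,
# with a resetting fast phase once per second (sawtooth position signal).
dt, stim_speed = 0.01, 10.0
t = np.arange(0.0, 5.0, dt)
eye = 0.8 * stim_speed * (t % 1.0)               # slow phase + fast reset
gain = tracking_gain(eye, stim_speed, dt)        # expected near 0.8
```

Real eye traces would additionally need the filtering and vertical/horizontal vector modeling the abstract mentions; the velocity threshold here is the simplest possible slow-phase selector.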