Results 1 - 20 of 3,804
1.
Laryngoscope ; 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38989899

ABSTRACT

OBJECTIVES: Training in temporal bone drilling requires more than mastering technical skills with the drill. Skills such as visual imagery, bimanual dexterity, and stress management need to be mastered along with precise knowledge of anatomy. In otorhinolaryngology, these psychomotor skills underlie performance in drilling of the temporal bone for access to the inner ear in cochlear implant surgery. However, little is known about how psychomotor skills and workload management affect practitioners' continuous and overall performance. METHODS: To understand how a practitioner's workload and performance unfold over time, we examine task-evoked pupillary responses (TEPR) of 22 medical students who performed a transmastoid posterior tympanotomy (TMPT) and removal of the bony overhang of the round window niche in a 3D-printed model of the temporal bone. We investigate how students' TEPR metrics (Average Pupil Size [APS], Index of Pupil Activity [IPA], and Low/High Index of Pupillary Activity [LHIPA]) and time spent in drilling phases correspond to performance in key drilling phases. RESULTS: All TEPR measures revealed significant differences between key drilling phases that corresponded to the anticipated workload. Enlarging the facial recess lasted significantly longer than the other phases. The IPA captured a significant increase in workload during thinning of the posterior canal wall, while the APS revealed increased workload during drilling of the bony overhang. CONCLUSION: Our findings contribute to contemporary competency-based medical residency programs, in which objective and continuous monitoring of participants' progress allows expertise acquisition to be tracked. Laryngoscope, 2024.
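The APS metric referenced above is, in essence, a per-phase mean of the pupil-diameter trace. Below is a minimal, hypothetical sketch of that computation from a generic eye-tracker export; the column names ("pupil_diameter", "phase") are assumptions rather than the authors' pipeline, and the wavelet-based IPA/LHIPA indices are not reproduced here.

```python
import pandas as pd

def average_pupil_size(samples: pd.DataFrame) -> pd.Series:
    """Mean pupil diameter per labelled drilling phase (a simple APS proxy)."""
    valid = samples.dropna(subset=["pupil_diameter"])   # drop blinks / lost samples
    return valid.groupby("phase")["pupil_diameter"].mean()

# Toy data: two samples per phase, pupil diameter in millimetres.
samples = pd.DataFrame({
    "timestamp_s": [0.00, 0.02, 0.04, 0.06],
    "pupil_diameter": [3.1, 3.2, 3.6, 3.7],
    "phase": ["facial_recess", "facial_recess", "bony_overhang", "bony_overhang"],
})
print(average_pupil_size(samples))
```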

2.
Cognition ; 250: 105868, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38959638

ABSTRACT

It has long been hypothesized that the linguistic structure of events, including event participants and their relative prominence, draws on the non-linguistic nature of events and the roles that these events license. However, the precise relation between the prominence of event participants in language and cognition has not been tested experimentally in a systematic way. Here we address this gap. In four experiments, we investigate the relative prominence of (animate) Agents, Patients, Goals and Instruments in the linguistic encoding of complex events and the prominence of these event roles in cognition as measured by visual search and change blindness tasks. The relative prominence of these event roles was largely similar, though not identical, across linguistic and non-linguistic measures. Across linguistic and non-linguistic tasks, Patients were more salient than Goals, which were more salient than Instruments. (Animate) Agents were more salient than Patients in linguistic descriptions and visual search; however, this asymmetrical pattern did not emerge in change detection. Overall, our results reveal homologies between the linguistic and non-linguistic prominence of individual event participants, thereby lending support to the claim that the linguistic structure of events builds on underlying conceptual event representations. We discuss implications of these findings for linguistic theory and theories of event cognition.

3.
Br J Anaesth ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38960830

ABSTRACT

The most effective way of delivering regional anaesthesia training and the best means of demonstrating competency have not been established. Clinical competency, based on the Dreyfus and Dreyfus lexicon, appears unachievable using current training approaches. Lessons should be taken from the worlds of music, chess, and sports. Modern skills training programmes should be built on an explicit and detailed understanding with measurement of a variety of factors such as perception, attention, psychomotor and visuospatial function, and kinesthetics, coupled with quantitative, accurate, and reliable measurement of performance.

4.
JMIR Serious Games ; 12: e54220, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38952012

ABSTRACT

Background: Incentive salience processes are important for the development and maintenance of addiction. Eye characteristics such as gaze fixation time, pupil diameter, and spontaneous eyeblink rate (EBR) are theorized to reflect incentive salience and may serve as useful biomarkers. However, conventional cue exposure paradigms have limitations that may impede accurate assessment of these markers. Objective: This study sought to evaluate the validity of these eye-tracking metrics as indicators of incentive salience within a virtual reality (VR) environment replicating real-world situations of nicotine and tobacco product (NTP) use. Methods: NTP users from the community were recruited and grouped by NTP use patterns: nondaily (n=33) and daily (n=75) use. Participants underwent the NTP cue VR paradigm and completed measures of nicotine craving, NTP use history, and VR-related assessments. Eye-gaze fixation time (attentional bias) and pupillometry in response to NTP versus control cues and EBR during the active and neutral VR scenes were recorded and analyzed using ANOVA and analysis of covariance models. Results: Greater subjective craving, as measured by the Tobacco Craving Questionnaire-Short Form, following active versus neutral scenes was observed (F(1,106)=47.95; P<.001). Greater mean eye-gaze fixation time (F(1,106)=48.34; P<.001) and pupil diameter (F(1,102)=5.99; P=.02) in response to NTP versus control cues were also detected. Evidence of NTP use group effects was observed in fixation time and pupillometry analyses, as well as correlations between these metrics, NTP use history, and nicotine craving. No significant associations were observed with EBR. Conclusions: This study provides additional evidence for attentional bias, as measured via eye-gaze fixation time, and pupillometry as useful biomarkers of incentive salience, and partially supports theories suggesting that incentive salience diminishes as nicotine dependence severity increases.
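The attentional-bias measure referenced above is typically a contrast between fixation time on drug cues and on matched control cues. The following sketch is purely illustrative of that kind of contrast; the column names and trial layout are assumptions, not the study's actual export format or analysis.

```python
import pandas as pd

# Toy fixation-time data: two NTP-cue and two control-cue trials per participant.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "cue_type": ["ntp", "control"] * 4,
    "fixation_ms": [820, 540, 910, 600, 700, 650, 760, 640],
})

# Per-participant mean fixation time by cue type, and the NTP-minus-control bias.
means = trials.groupby(["participant", "cue_type"])["fixation_ms"].mean().unstack()
means["bias_ms"] = means["ntp"] - means["control"]
print(means)
```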

5.
Article in English | MEDLINE | ID: mdl-38977612

ABSTRACT

Extensive research conducted in controlled laboratory settings has prompted the question of how results generalize to real-world situations that are influenced by the subjects' own actions. Virtual reality lends itself ideally to investigating complex situations but requires accurate classification of eye movements, especially when combined with time-sensitive data such as EEG. We recorded eye-tracking data in virtual reality and classified it into gazes and saccades using a velocity-based classification algorithm, cutting the continuous data into smaller segments to deal with varying noise levels, as introduced in the REMoDNav algorithm. Furthermore, we corrected for participants' translational movement in virtual reality. Various measures, including visual inspection, event durations, and the velocity and dispersion distributions before and after gaze onset, indicate that we can accurately classify the continuous, free-exploration data. Combining the classified eye-tracking data with the EEG data, we generated fixation-onset event-related potentials (ERPs) and event-related spectral perturbations (ERSPs), providing further evidence for the quality of the eye-movement classification and the timing of event onsets. Finally, investigating the correlation between single trials and the average ERP and ERSP revealed that fixation-onset ERSPs are less time-sensitive, require fewer repetitions of the same behavior, and are potentially better suited to studying EEG signatures in naturalistic settings. We modified, designed, and tested an algorithm that allows the combination of EEG and eye-tracking data recorded in virtual reality.
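The velocity-based classification mentioned above follows the general logic of a velocity-threshold (I-VT-style) labeller: samples whose angular velocity exceeds a threshold are treated as saccadic, the rest as gaze. The sketch below is a simplification under assumed sampling rate, threshold, and array layout; it is not the adapted REMoDNav pipeline used in the study.

```python
import numpy as np

def classify_ivt(gaze_deg: np.ndarray, fs: float, saccade_thresh: float = 30.0):
    """Label each sample 'saccade' if angular velocity (deg/s) exceeds the threshold."""
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * fs
    velocity = np.concatenate([[0.0], velocity])          # pad back to original length
    labels = np.where(velocity > saccade_thresh, "saccade", "gaze")
    return labels, velocity

# Toy gaze trace in degrees of visual angle, sampled at 90 Hz.
gaze = np.array([[0.0, 0.0], [0.1, 0.0], [2.5, 0.3], [2.6, 0.3]])
labels, vel = classify_ivt(gaze, fs=90.0)
print(labels)   # ['gaze' 'gaze' 'saccade' 'gaze']
```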

6.
Brain Behav Immun Health ; 39: 100806, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38974339

ABSTRACT

Introduction: The study aimed to investigate whether an exercise-induced pro-inflammatory response alters the perception and visual exploration of emotional body language in social interactions. Methods: In a within-subject design, 19 healthy male adults aged between 19 and 33 years performed a downhill run for 45 min at 70% of their VO2max on a treadmill to induce maximal myokine blood elevations, leading to a pro-inflammatory status. Two control conditions were selected: a control run with no decline and a rest condition without physical exercise. Blood samples were taken before (T0), directly after (T1), 3 h after (T3), and 24 h after (T24) each exercise for analysis of the inflammatory response. Three hours after exercise, participants observed point-light displays (PLDs) of human interactions portraying four emotions (happiness, affection, sadness, and anger). Participants categorized the emotional content, assessed the emotional intensity of the stimuli, and indicated their confidence in their ratings. Eye movements during the entire paradigm and self-reported current mood were also recorded. Results: The downhill exercise condition resulted in significant elevations of the measured inflammatory markers (IL-6, CRP, MCP-1) and of a marker of muscle damage (myoglobin) compared to the control running condition, indicating a pro-inflammatory state after the downhill run. Emotion recognition rates decreased significantly after the downhill run, whereas no such effect was observed after the control run. Participants' sensitivity to emotion-specific cues also declined. However, the downhill run had no effect on the perceived emotional intensity or on the subjective confidence in the given ratings. Visual scanning behavior was affected after the downhill run, with participants fixating more on sad stimuli, in contrast to the control conditions, in which participants exhibited more fixations while observing happy stimuli. Conclusion: Our study demonstrates that inflammation, induced through a downhill running model, impairs perception and emotion recognition abilities. Specifically, inflammation leads to decreased recognition rates of the emotional content of social interactions, attributable to diminished discrimination capabilities across all emotional categories. Additionally, we observed alterations in visual exploration behavior. This confirms that inflammation significantly affects an individual's responsiveness to social and affective stimuli.

8.
J Autism Dev Disord ; 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38951312

ABSTRACT

PURPOSE: Predictive coding theories posit that autism is characterized by an over-adjustment to prediction errors, resulting in frequent updates of prior beliefs. Atypical weighting of prediction errors is generally considered to negatively impact the construction of stable models of the world, but may also yield beneficial effects. In a novel associative learning paradigm, we investigated whether unexpected events trigger faster learning updates in favour of subtle but fully predictive cues in autistic children compared to their non-autistic counterparts. We also explored the relationship between children's language proficiency and their predictive performances. METHODS: Anticipatory fixations and explicit predictions were recorded during three associative learning tasks with deterministic or probabilistic contingencies. One of the probabilistic tasks was designed so that a fully predictive but subtle cue was overshadowed by a less predictive salient one. RESULTS: Both autistic and non-autistic children based their learning on the salient cue, and, contrary to our predictions, showed no signs of updating in favour of the subtle cue. While both groups demonstrated associative learning, autistic children made less accurate explicit predictions than their non-autistic peers in all tasks. Explicit prediction performances were positively correlated with language proficiency in non-autistic children, but no such correlation was observed in autistic children. CONCLUSION: These results suggest no over-adjustment to prediction errors in autistic children and highlight the need to control for general performance in cue-outcome associative learning in predictive processing studies. Further research is needed to explore the nature of the relationship between predictive processing and language development in autism.

9.
Front Psychol ; 15: 1384486, 2024.
Article in English | MEDLINE | ID: mdl-38957884

ABSTRACT

Introduction: Testing of visuocognitive development in preterm infants shows strong interactions between perinatal characteristics and cognition, learning, and the overall neurodevelopmental trajectory. Assessment of anticipatory gaze to object-location bindings via eye-tracking can predict the neurodevelopment of preterm infants at the age of 3 years; little is known, however, about early cognitive function and its assessment methods during the first year of life. Methods: The current study presents data from a novel assessment tool, a Delayed Match Retrieval (DMR) paradigm administered via eye-tracking, used to measure visual working memory (VWM) and attention skills. The eye-tracking task was designed to measure infants' ability to actively localize objects and to make online predictions of object-location bindings. Sixty-three infants participated in the study: 39 preterm infants and 24 healthy full-term infants, assessed at a corrected age of 8-9 months for the preterm infants and a similar chronological age for the full-term infants. Infants were also administered the Bayley Scales of Infant and Toddler Development. Results: Analysis of the Bayley scores showed no significant difference between the two groups, whereas the eye-tracking data showed a significant group effect on all measurements. Moreover, preterm infants' VWM performance was significantly lower than that of full-term infants. Birth weight affected gaze time on all Areas of Interest (AOIs), overall VWM performance, and the scores on the Cognitive Bayley subscale. Furthermore, preterm infants with fetal growth restriction (FGR) showed significant effects in the eye-tracking measurements but not in their Bayley scores, supporting the high discriminatory value of the eye-gaze data. Conclusion: Assessing visual working memory and attention via eye-tracking is a non-intrusive, painless, short-duration procedure (approximately 4 min) and was found to be a valuable tool for identifying the effects of prematurity and FGR on the development of cognition during the first year of life; the Bayley Scales alone may not pick up these deficits. Identifying tools for early neurodevelopmental and cognitive assessment is important in order to enable earlier support and intervention in the vulnerable group of premature infants, given the associations between foundational executive function skills and later cognitive and academic ability.

10.
J Neural Eng ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38986464

ABSTRACT

Eye-tracking research has proven valuable in understanding numerous cognitive functions. Recently, Frey et al. provided an exciting deep learning method for learning eye movements from functional magnetic resonance imaging (fMRI) data. It employed multi-step co-registration of the fMRI into a group template to obtain the eyeball signal, and thus required additional templates and was time-consuming. To resolve this issue, in this paper we propose a framework named MRGazer for predicting eye-gaze points from fMRI in individual space. MRGazer consists of an eyeball extraction module and a residual network-based eye-gaze prediction module. Compared to the previous method, the proposed framework skips the fMRI co-registration step, simplifies the processing protocol, and achieves end-to-end eye-gaze regression. The proposed method achieved better performance in eye fixation regression (Euclidean error, EE=2.04°) than the co-registration-based method (EE=2.89°), and delivered results within a shorter time (~0.02 s/volume) than the prior method (~0.3 s/volume). The code is available at https://github.com/ustc-bmec/MRGazer.
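The Euclidean error (EE) quoted above is the mean Euclidean distance, in degrees of visual angle, between predicted and ground-truth gaze points. The snippet below is only an illustrative computation of that metric under an assumed array layout; it is not the MRGazer code, which is available at the linked repository.

```python
import numpy as np

def euclidean_error(pred_deg: np.ndarray, true_deg: np.ndarray) -> float:
    """Mean per-volume Euclidean distance between predicted and true gaze (degrees)."""
    return float(np.mean(np.linalg.norm(pred_deg - true_deg, axis=1)))

pred = np.array([[1.0, 2.0], [0.5, -1.0]])   # predicted (x, y) gaze in degrees
true = np.array([[1.5, 2.5], [0.0, -1.0]])   # ground-truth gaze
print(euclidean_error(pred, true))           # ≈ 0.60
```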

11.
Sensors (Basel) ; 24(12)2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38931537

ABSTRACT

When performing near-vision tasks in front of a digital screen, the posture or position of the head is often inadequate, especially in young people; correct head posture is essential to avoid visual, muscular, or joint problems. Most current systems for monitoring head inclination require an external component attached to the subject's head. The aim of this study is to validate a procedure that, through a detection algorithm and eye tracking, can monitor the position of the head in real time when subjects are in front of a digital device. The system only needs a digital device with a CCD receiver and downloadable software through which the inclination of the head can be detected, indicating whether a bad posture is adopted due to a visual problem or simply to inadequate visual-postural habits, and alerting the user to the postural anomaly so that it can be corrected. The system was evaluated in subjects with disparate interpupillary distances and at different working distances in front of the digital device, and at each distance different tilt angles were evaluated. The system performed favorably in different lighting environments, correctly detecting the subjects' pupils. The results showed particularly good absolute and relative reliability values for most variables when measuring head tilt, albeit with lower accuracy than most existing systems. Overall the results were positive, and the system is inexpensive and easily affordable for all users. It is the first application capable of measuring the subject's head tilt at their working or reading distance in real time by tracking their eyes.
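The core geometric idea behind estimating head tilt from eye tracking is the angle of the line joining the two detected pupil centres in the camera image. The sketch below illustrates only that final step under assumed pixel coordinates; pupil detection itself (e.g. with a face/eye detector) and the study's actual algorithm are outside this snippet.

```python
import math

def head_tilt_deg(left_pupil: tuple, right_pupil: tuple) -> float:
    """Angle (degrees) of the interpupillary line relative to the image horizontal."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical pupil centres in image pixels (x, y).
print(head_tilt_deg((220, 310), (380, 295)))   # ≈ -5.4 degrees of tilt
```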


Subject(s)
Algorithms , Head , Posture , Humans , Posture/physiology , Head/physiology , Artificial Intelligence , Software , Male , Female , Adult
12.
Phys Eng Sci Med ; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38922382

ABSTRACT

Particle (proton, carbon ion, or others) radiotherapy for ocular tumors is highly dependent on precise dose distribution, and any misalignment can result in severe complications. The proposed eye positioning and tracking system (EPTS) was designed to non-invasively position eyeballs and is reproducible enough to ensure accurate dose distribution by guiding gaze direction and tracking eye motion. Eye positioning was performed by guiding the gaze direction with separately controlled light sources. Eye tracking was performed by a robotic arm with cameras and a mirror. The cameras attached to its end received images through mirror reflection. To maintain a light weight, certain materials, such as carbon fiber, were utilized where possible. The robotic arm was controlled by a robot operating system. The robotic arm, turntables, and light source were actively and remotely controlled in real time. The videos captured by the cameras could be annotated, saved, and loaded into software. The available range of gaze guidance is 360° (azimuth). Weighing a total of 18.55 kg, the EPTS could be installed or uninstalled in 10 s. The structure, motion, and electromagnetic compatibility were verified via experiments. The EPTS shows some potential due to its non-invasive wide-range flexible eye positioning and tracking, light weight, non-collision with other equipment, and compatibility with CT imaging and dose delivery. The EPTS can also be remotely controlled in real time and offers sufficient reproducibility. This system is expected to have a positive impact on ocular particle radiotherapy.

13.
Med Eng Phys ; 129: 104180, 2024 07.
Article in English | MEDLINE | ID: mdl-38906567

ABSTRACT

Objective: Vestibular/ocular deficits occur with mild traumatic brain injury (mTBI). The vestibular/ocular motor screening (VOMS) tool is used to assess individuals post-mTBI, but it primarily relies upon subjective self-reported symptoms. Instrumenting the VOMS (iVOMS) with technology may allow for more objective assessment post-mTBI that reflects actual task performance. This study aimed to validate the iVOMS analytically and clinically in mTBI and controls. Methods: Seventy-nine people with sub-acute mTBI (<12 weeks post-injury) and forty-four healthy control participants performed the VOMS whilst wearing a mobile eye-tracker during a one-off visit. People with mTBI were included if they were within 12 weeks of a physician diagnosis. Participants were excluded if they had any musculoskeletal, neurological or sensory deficits which could explain dysfunction. A series of custom-made eye-tracking algorithms was used to assess the recorded eye movements. Results: The iVOMS was analytically valid compared to the reference (ICC(2,1) 0.85-0.99) in mTBI and controls. The iVOMS outcomes were clinically valid, as there were significant differences between groups for the convergence, vertical saccade, smooth pursuit, vestibular ocular reflex and visual motion sensitivity outcomes. However, there was no significant relationship between iVOMS outcomes and self-reported symptoms. Conclusion: The iVOMS is analytically and clinically valid in mTBI and controls, but further work is required to examine the sensitivity of iVOMS outcomes across the mTBI spectrum. The findings also highlight that symptom resolution and physiological resolution post-mTBI may not coincide, and this relationship needs further examination.
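The ICC(2,1) reported above is the standard two-way random-effects, single-measurement, absolute-agreement intraclass correlation (Shrout and Fleiss). Below is a minimal numpy sketch of that statistic on hypothetical data; it is not the paper's own statistical pipeline.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) for a subjects x raters matrix (e.g. iVOMS metric vs reference)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)        # between-subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)        # between-raters MS
    sse = np.sum((ratings - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                              # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired scores: iVOMS-derived metric vs reference measure.
scores = np.array([[9.0, 10.0], [5.0, 6.0], [7.0, 7.5], [3.0, 4.0]])
print(round(icc_2_1(scores), 2))
```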


Subject(s)
Brain Concussion , Eye Movements , Humans , Male , Female , Adult , Case-Control Studies , Brain Concussion/physiopathology , Brain Concussion/diagnosis , Middle Aged , Vestibule, Labyrinth/physiopathology , Young Adult , Eye-Tracking Technology
14.
Front Psychol ; 15: 1425219, 2024.
Article in English | MEDLINE | ID: mdl-38887629
15.
Traffic Inj Prev ; : 1-8, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38860883

ABSTRACT

OBJECTIVE: Vehicle automation technologies have the potential to address the mobility needs of older adults. However, age-related cognitive declines may pose new challenges for older drivers when they are required to take back or "takeover" control of their automated vehicle. This study aims to explore the impact of age on takeover performance under partially automated driving conditions and the interaction effect between age and voluntary non-driving-related tasks (NDRTs) on takeover performance. METHOD: A total of 42 older drivers (M = 65.5 years, SD = 4.4) and 40 younger drivers (M = 37.2 years, SD = 4.5) participated in this mixed-design driving simulation experiment (between subjects: age [older drivers vs. younger drivers] and NDRT engagement [road monitoring vs. voluntary NDRTs]; within subjects: hazardous event occurrence time [7.5th min vs. 38.5th min]). RESULTS: Older drivers exhibited poorer visual exploration performance (i.e., longer fixation point duration and smaller saccade amplitude), lower use of advanced driving assistance systems (ADAS; e.g., lower percentage of time adaptive cruise control activated [ACCA]) and poorer takeover performance (e.g., longer takeover time, larger maximum resulting acceleration, and larger standard deviation of lane position) compared to younger drivers. Furthermore, older drivers were less likely to experience driving drowsiness (e.g., lower percentage of time the eyes were fully closed and lower Karolinska Sleepiness Scale levels); however, this advantage did not compensate for the differences in takeover performance relative to younger drivers. Older drivers had lower NDRT engagement (i.e., lower percentage of fixation time on NDRTs), and NDRTs did not significantly affect their drowsiness but did impair takeover performance (e.g., higher collision rate, longer takeover time, and larger maximum resulting acceleration). CONCLUSIONS: These findings indicate the necessity of addressing the impaired takeover performance associated with cognitive decline in older drivers and of discouraging them from engaging in inappropriate NDRTs, thereby reducing their crash risk during automated driving.
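Two of the takeover metrics mentioned above, takeover time and the standard deviation of lane position (SDLP), are straightforward to derive from a simulator log. The sketch below is illustrative only; signal names, units, and the simple "first input after the request" rule are assumptions, not the study's scoring procedure.

```python
import numpy as np

def takeover_time(t: np.ndarray, driver_input: np.ndarray, t_request: float) -> float:
    """Seconds from the takeover request to the first driver input after it."""
    after = t[(t >= t_request) & (driver_input > 0)]
    return float(after[0] - t_request)

def sdlp(lane_position_m: np.ndarray) -> float:
    """Standard deviation of lateral lane position (metres)."""
    return float(np.std(lane_position_m, ddof=1))

t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])       # simulator timestamps (s)
steer = np.array([0, 0, 0, 1, 1])             # 1 = steering/brake input detected
print(takeover_time(t, steer, t_request=0.1))  # 0.2 s
print(sdlp(np.array([0.10, 0.05, -0.02, 0.12])))
```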

16.
Res Dev Disabil ; 151: 104767, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38861794

ABSTRACT

Visual search problems are often reported in children with Cerebral Visual Impairment (CVI). To tackle the clinical challenge of objectively differentiating CVI from other neurodevelopmental disorders, we developed a novel test battery. Visual search tasks were coupled with verbal and gaze-based measurements. Two search tasks were performed by children with CVI (n: 22; mean age (SD): 9.63 (.46) years), ADHD (n: 32; mean age (SD): 10.51 (.25) years), dyslexia (n: 28; mean age (SD): 10.29 (.20) years), and neurotypical development (n: 44; mean age (SD): 9.30 (.30) years). Children with CVI had more impaired search performance compared to all other groups, especially in crowded and unstructured displays, even when they had normal visual acuity. In-depth gaze-based analyses revealed that this group searched over larger areas and needed more time to recognize a target, particularly after their initial fixation on the target. Our gaze-based approach to visual search offers new insights into the distinct search patterns and behaviours of children with CVI. Their tendency to overlook targets while fixating on them points towards higher-order visual function (HOVF) deficits. The novel method is feasible, valid, and promising for clinical differential-diagnostic evaluation between CVI, ADHD and dyslexia, and for informing individualized training.
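Gaze-based search measures of the kind described above can be derived from fixation-level data, for example the time to the first fixation on the target and the area covered by the scan path. The sketch below is a hypothetical illustration of two such measures under an assumed data layout; it is not the study's analysis code.

```python
import numpy as np
from scipy.spatial import ConvexHull

fix_t = np.array([0.0, 0.3, 0.6, 0.9, 1.2])                   # fixation onsets (s)
fix_xy = np.array([[100, 120], [400, 300], [250, 500],
                   [600, 420], [610, 430]])                    # fixation positions (px)
on_target = np.array([False, False, False, True, True])        # fixation lands on target AOI

time_to_target = fix_t[on_target][0]        # latency of the first fixation on the target
search_area = ConvexHull(fix_xy).volume     # for 2-D points, .volume is the hull area
print(time_to_target, search_area)
```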

18.
J Eye Mov Res ; 17(3)2024.
Article in English | MEDLINE | ID: mdl-38826772

ABSTRACT

Prior research has shown that sighting eye dominance is a dynamic behavior and dependent on horizontal viewing angle. Virtual reality (VR) offers high flexibility and control for studying eye movement and human behavior, yet eye dominance has not been given significant attention within this domain. In this work, we replicate Khan and Crawford's (2001) original study in VR to confirm their findings within this specific context. Additionally, this study extends its scope to study alignment with objects presented at greater depth in the visual field. Our results align with previous results, remaining consistent when targets are presented at greater distances in the virtual scene. Using greater target distances presents opportunities to investigate alignment with objects at varying depths, providing greater flexibility for the design of methods that infer eye dominance from interaction in VR.

19.
Epilepsy Behav ; 157: 109887, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38905916

ABSTRACT

AIM: To explore multiple features of attention impairment in patients with temporal lobe epilepsy (TLE). METHODS: A total of 93 patients diagnosed with TLE at Xiangya Hospital between May 2022 and December 2022 and 85 healthy controls were included in this study. Participants were asked to complete neuropsychological scales and the attention network test (ANT) with simultaneous recording of eye-tracking and electroencephalography. RESULTS: All evaluation methods showed impaired attention functions in TLE patients. ANT results showed impaired orienting (p < 0.001) and executive control (p = 0.041) networks. Longer mean first saccade time (p = 0.046) and higher total saccade counts (p = 0.035) were found in the eye-tracking results, indicating abnormal alerting and orienting networks. The alerting, orienting, and executive control networks were all abnormal, manifesting as decreased amplitudes (N1 and P3, p < 0.001) and extended latency (P3, p = 0.002). The power of the theta, alpha, and beta bands was sensitive to changes in the alerting and executive control networks over time, but only beta power was sensitive to changes in the orienting network. CONCLUSION: Our findings are helpful for early identification of patients with TLE and comorbid attention impairment, and have strong clinical significance for guiding long-term monitoring and intervention.
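The ANT network effects referenced above are conventionally computed as differences between condition mean reaction times. The sketch below follows that common convention (no-cue vs double-cue for alerting, center-cue vs spatial-cue for orienting, incongruent vs congruent for executive control); the condition names and values are illustrative and not taken from the paper itself.

```python
def ant_network_scores(mean_rt: dict) -> dict:
    """Alerting, orienting, and executive-control effects from condition mean RTs (ms)."""
    return {
        "alerting": mean_rt["no_cue"] - mean_rt["double_cue"],
        "orienting": mean_rt["center_cue"] - mean_rt["spatial_cue"],
        "executive": mean_rt["incongruent"] - mean_rt["congruent"],
    }

# Hypothetical condition means for one participant.
rts = {"no_cue": 620, "double_cue": 585, "center_cue": 600,
       "spatial_cue": 560, "incongruent": 650, "congruent": 570}
print(ant_network_scores(rts))
```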

20.
J Vasc Access ; : 11297298241258628, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38856000

ABSTRACT

BACKGROUND: There is limited knowledge about the gaze patterns of intensive care unit (ICU) trainee doctors during the insertion of a central venous catheter (CVC). The primary objective of this study was to examine the visual patterns exhibited by ICU trainee doctors during CVC insertion. Additionally, the study investigated whether differences in gaze patterns could be identified between more and less experienced trainee doctors. METHODS: In a real-life, prospective observational study conducted at the interdisciplinary ICU at the University Hospital Zurich, Switzerland, ICU trainee doctors underwent eye-tracking during CVC insertion in a real ICU patient. Using mixed-effects model analyses, the primary outcomes were dwell time, first fixation duration, revisits, fixation count, and average fixation time on different areas of interest (AOIs). Secondary outcomes were the above eye-tracking measures stratified by participants' level of experience. RESULTS: Eighteen participants were included, of whom 10 were inexperienced and eight more experienced. Dwell time was highest for the CVC preparation table (p = 0.02), the jugular vein on the ultrasound image (p < 0.001), and the cervical puncture location (p < 0.001). Concerning experience, dwell time and revisits on the jugular vein on the ultrasound image (p = 0.02 and p = 0.04, respectively) and the cervical puncture location (p = 0.004 and p = 0.01, respectively) were decreased in more experienced ICU trainees. CONCLUSIONS: Different AOIs carry distinct significance for ICU trainee doctors during CVC insertion. Experienced participants exhibited different gaze behavior, requiring less attention for preparation and handling tasks, emphasizing the importance of hand-eye coordination.
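Dwell time and revisits, two of the primary outcomes above, can be aggregated directly from a fixation-level export before any mixed-effects modelling. The sketch below is a minimal illustration under assumed column and AOI names; it does not reproduce the study's statistical analysis.

```python
import pandas as pd

# Hypothetical fixation-level export: which AOI each fixation landed on and for how long.
fixations = pd.DataFrame({
    "aoi": ["prep_table", "ultrasound_vein", "puncture_site", "ultrasound_vein"],
    "duration_ms": [420, 310, 280, 150],
})

dwell_time = fixations.groupby("aoi")["duration_ms"].sum()     # total dwell per AOI
revisits = (fixations.groupby("aoi").size() - 1).clip(lower=0)  # returns after the first visit
print(dwell_time)
print(revisits)
```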
