Results 1 - 20 of 57
2.
Article in English | MEDLINE | ID: mdl-38555550

ABSTRACT

Self-monitoring is essential for effectively regulating learning, but it is difficult in visual diagnostic tasks such as radiograph interpretation. Eye-tracking technology can visualize viewing behavior in gaze displays, thereby providing information about visual search and decision-making. We hypothesized that individually adaptive gaze-display feedback improves posttest performance and self-monitoring in medical students learning to detect nodules in radiographs. In 78 medical students, we investigated the effects of (1) search displays, which show which part of the image the participant searched, and (2) decision displays, which show which parts of the image received prolonged attention. After a pretest and instruction, participants practiced identifying nodules in 16 cases under search-display, decision-display, or no-feedback conditions (n = 26 per condition). A 10-case posttest, without feedback, was administered to assess learning outcomes. After each case, participants provided self-monitoring and confidence judgments. Afterward, participants reported on self-efficacy, perceived competence, feedback use, and perceived usefulness of the feedback. Bayesian analyses showed no benefits of gaze displays for posttest performance, monitoring accuracy (the absolute difference between participants' estimated and actual test performance), completeness of viewing behavior, self-efficacy, or perceived competence. Participants receiving search displays reported greater feedback use than participants receiving decision displays, and found the feedback more useful when the displayed gaze data were precise and accurate. As completeness of search was not related to posttest performance, search displays may not have been sufficiently informative to improve self-monitoring. Information from decision displays was rarely used to inform self-monitoring. Further research should address whether and when gaze displays can support learning.

3.
Behav Res Methods; 56(4): 3280-3299, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38424292

ABSTRACT

Blinks, the closing and opening of the eyelids, are used in a wide array of fields in which human function and behavior are studied. In data from video-based eye trackers, blink rate and duration are often estimated from the pupil-size signal. However, blinks and their parameters can only be estimated indirectly from this signal, since it does not explicitly contain information about eyelid position. We ask whether blinks detected from an eye-openness signal that estimates the distance between the eyelids (EO blinks) are comparable to blinks detected with a traditional algorithm using the pupil-size signal (PS blinks), and how robust blink detection is when data quality is low. In terms of rate, there was an almost perfect overlap between EO and PS blinks (F1 score: 0.98) when the head was in the center of the eye tracker's tracking range, where data quality was high, and a high overlap (F1 score: 0.94) when the head was at the edge of the tracking range, where data quality was worse. When blink rate differed between EO and PS blinks, the difference was mainly due to data loss in the pupil-size signal. Blink durations were about 60 ms longer for EO blinks than for PS blinks. Moreover, the dynamics of EO blinks were similar to results from the previous literature. We conclude that the eye-openness signal, together with our proposed blink detection algorithm, provides an advantageous way to detect and describe blinks in greater detail.
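
To make the event-level comparison concrete, here is a minimal Python sketch of how agreement between two blink detections (e.g., EO and PS blinks) could be scored with an F1 measure based on temporal overlap; the matching rule and all names are illustrative assumptions, not the algorithm used in the study.

```python
# Sketch: event-level agreement between two blink detections,
# e.g., blinks from an eye-openness signal (EO) vs. a pupil-size signal (PS).
# The overlap-based matching rule is an assumption for illustration.

def f1_overlap(blinks_a, blinks_b):
    """blinks_a, blinks_b: lists of (onset_ms, offset_ms) tuples."""
    def overlaps(x, y):
        return x[0] < y[1] and y[0] < x[1]

    matched_b = set()
    tp = 0
    for a in blinks_a:
        for j, b in enumerate(blinks_b):
            if j not in matched_b and overlaps(a, b):
                matched_b.add(j)
                tp += 1
                break
    fp = len(blinks_a) - tp          # detected in A only
    fn = len(blinks_b) - tp          # detected in B only
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

eo_blinks = [(100, 260), (900, 1080), (2000, 2180)]
ps_blinks = [(110, 230), (905, 1040)]
print(f1_overlap(eo_blinks, ps_blinks))  # 0.8
```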


Subjects
Algorithms; Blinking; Eye-Tracking Technology; Humans; Blinking/physiology; Pupil/physiology; Eyelids/physiology; Male; Adult; Female; Eye Movements/physiology
4.
Behav Res Methods; 2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38200239

ABSTRACT

We built a novel setup to record large gaze shifts (up to 140°). The setup consists of a wearable eye tracker and a high-speed camera with fiducial-marker technology to track the head. We tested the setup by replicating findings from the classic eye-head gaze shift literature, and conclude that this new, inexpensive setup is good enough to investigate the dynamics of large eye-head gaze shifts. It could be used for future research on large eye-head gaze shifts, but also for research on gaze during, for example, human interaction. We further discuss reference frames and terminology in head-free eye tracking. Despite the transition from head-fixed eye tracking to head-free gaze tracking, researchers still use head-fixed eye-movement terminology when discussing world-fixed gaze phenomena. We propose more specific terminology for world-fixed phenomena, including gaze fixation, gaze pursuit, and gaze saccade.
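
The bookkeeping behind head-free gaze tracking can be illustrated with a simple 1-D sketch in which a world-fixed gaze angle is the sum of head orientation (e.g., from a fiducial-marker head tracker) and the eye-in-head angle from a wearable eye tracker; the additive model and the example numbers are assumptions for illustration, not the authors' processing pipeline.

```python
import numpy as np

# 1-D sketch: gaze-in-world = head-in-world + eye-in-head (illustrative only)
head_yaw_deg = np.array([0.0, 10.0, 40.0, 80.0, 95.0])     # head orientation in world
eye_azimuth_deg = np.array([0.0, 30.0, 45.0, 40.0, 45.0])  # eye orientation in head

gaze_in_world = head_yaw_deg + eye_azimuth_deg  # large gaze shift, up to ~140 deg

# Amplitude of the full gaze shift and the relative head contribution
gaze_amplitude = gaze_in_world[-1] - gaze_in_world[0]
head_contribution = (head_yaw_deg[-1] - head_yaw_deg[0]) / gaze_amplitude
print(gaze_amplitude, head_contribution)  # 140.0, ~0.68
```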

5.
Behav Res Methods; 56(3): 1476-1484, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37326770

ABSTRACT

According to the proposed minimum reporting guideline for eye-tracking studies by Holmqvist et al. (2022), the accuracy (in degrees) of eye-tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye-tracking recordings. To make this quick and easy, we produced a simple validation procedure using a printable poster and accompanying Python software. We tested the poster and procedure with 61 participants using one wearable eye tracker; in addition, the software was tested with six different wearable eye trackers. We found that the validation procedure can be administered within a minute per participant and provides measures of accuracy and precision. Calculating these data-quality measures can be done offline on an ordinary computer and requires no advanced computing skills.
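
For illustration, the two data-quality measures named above could be computed roughly as follows; this generic Python sketch is not the accompanying validation software, and the function names are assumptions.

```python
import numpy as np

# Accuracy: mean angular offset between gaze and a validation target.
# Precision: RMS of sample-to-sample angular differences.

def accuracy_deg(gaze_deg, target_deg):
    """gaze_deg: (N, 2) horizontal/vertical gaze angles; target_deg: (2,)."""
    offsets = np.linalg.norm(gaze_deg - target_deg, axis=1)
    return offsets.mean()

def precision_rms_deg(gaze_deg):
    diffs = np.diff(gaze_deg, axis=0)
    return np.sqrt(np.mean(np.sum(diffs ** 2, axis=1)))

gaze = np.array([[0.4, 0.1], [0.5, 0.2], [0.45, 0.15], [0.55, 0.1]])
target = np.array([0.0, 0.0])
print(accuracy_deg(gaze, target), precision_rms_deg(gaze))
```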


Subjects
Data Accuracy; Eye-Tracking Technology; Humans; Software
6.
Behav Res Methods; 56(3): 1900-1915, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37101100

ABSTRACT

Computer-vision-based gaze estimation refers to techniques that estimate gaze direction directly from video recordings of the eyes or face, without the need for an eye tracker. Although many such methods exist, their validation is often found only in the technical literature (e.g., computer science conference papers). We aimed to (1) identify which computer-vision-based gaze estimation methods are usable by the average researcher in fields such as psychology or education, and (2) evaluate these methods. We searched for methods that do not require calibration and have clear documentation. Two toolkits, OpenFace and OpenGaze, fulfilled these criteria. First, we present an experiment in which adult participants fixated nine stimulus points on a computer screen. We filmed their faces with a camera and processed the recorded videos with OpenFace and OpenGaze. We conclude that OpenGaze is accurate and precise enough to be used in screen-based experiments with stimuli separated by at least 11 degrees of gaze angle. OpenFace was not sufficiently accurate for such situations but can potentially be used in sparser environments. We then examined whether OpenFace could be used with horizontally separated stimuli in a sparse environment with infant participants. We compared dwell measures based on OpenFace estimates with the same measures based on manual coding. We conclude that OpenFace gaze estimates may potentially be used for measures such as relative total dwell time to sparse, horizontally separated areas of interest, but should not be used to draw conclusions about measures such as dwell duration.
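
As an illustration of the coarse dwell measure mentioned above, the following Python sketch computes the proportion of valid gaze samples falling in a left versus a right area of interest; the AOI boundaries and variable names are assumptions, not the study's analysis code.

```python
import numpy as np

# Relative total dwell time to two horizontally separated AOIs (illustrative).
def relative_dwell(gaze_x_deg, left_aoi=(-30, -10), right_aoi=(10, 30)):
    g = np.asarray(gaze_x_deg, dtype=float)
    g = g[~np.isnan(g)]                      # drop lost samples
    in_left = (g >= left_aoi[0]) & (g <= left_aoi[1])
    in_right = (g >= right_aoi[0]) & (g <= right_aoi[1])
    return in_left.mean(), in_right.mean()

gaze_x = [-22.0, -18.5, np.nan, 15.0, 21.0, 19.5, 2.0]
print(relative_dwell(gaze_x))  # (~0.33, 0.5)
```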


Subjects
Eye Movements; Vision, Ocular; Adult; Humans; Eye; Calibration; Video Recording
8.
Behav Res Methods; 56(4): 3226-3241, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38114880

ABSTRACT

We present a deep-learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) trained solely on synthetic data. Using only synthetic data has the benefit of completely sidestepping the time-consuming manual annotation required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with synthetic CRs placed on different backgrounds and embedded in varying levels of noise. Second, we tested the method on two datasets consisting of high-quality videos captured from real eyes. Our method outperformed state-of-the-art algorithmic methods on real eye images, with a 3-41.5% reduction in terms of spatial precision across datasets, and performed on par with the state of the art on synthetic images in terms of spatial accuracy. We conclude that our method provides precise CR center localization and offers a solution to the data availability problem, one of the important roadblocks in the development of deep-learning models for gaze estimation. Owing to its superior CR center localization and ease of application, our method has the potential to improve the accuracy and precision of CR-based eye trackers.
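
The appeal of synthetic training data is that the ground-truth CR center is known by construction. The sketch below renders a CR-like bright spot at a known sub-pixel center on a noisy background; the Gaussian spot model and all parameter values are illustrative assumptions, not the paper's rendering pipeline.

```python
import numpy as np

# Illustrative synthetic training example: a CR-like spot with a known center,
# so the ground-truth label comes for free (assumed parameters, not the paper's).
def synthetic_cr(size=64, center=(31.3, 32.7), sigma=2.0,
                 background=50.0, amplitude=180.0, noise_sd=10.0, seed=0):
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size]
    spot = amplitude * np.exp(-((x - center[0]) ** 2 + (y - center[1]) ** 2)
                              / (2 * sigma ** 2))
    image = background + spot + rng.normal(0.0, noise_sd, (size, size))
    return np.clip(image, 0, 255), np.array(center)  # image, ground-truth label

image, label = synthetic_cr()
print(image.shape, label)  # (64, 64) [31.3 32.7]
```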


Subjects
Cornea; Deep Learning; Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Cornea/diagnostic imaging; Cornea/physiology; Algorithms
9.
Behav Res Methods; 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37507649

ABSTRACT

A guideline is proposed that comprises the minimum items to be reported in research studies involving an eye tracker and human or non-human primate participant(s). This guideline was developed over a 3-year period using a consensus-based process via an open invitation to the international eye tracking community. This guideline will be reviewed at maximum intervals of 4 years.

10.
Behav Res Methods; 55(1): 364-416, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35384605

ABSTRACT

In this paper, we review how the various aspects of any study using an eye tracker (the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation with five existing reporting guidelines and with a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").


Subjects
Eye Movements; Eye-Tracking Technology; Humans; Empirical Research
11.
Behav Res Methods; 55(2): 657-669, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35419703

ABSTRACT

Estimating gaze direction with a digital video-based pupil and corneal reflection (P-CR) eye tracker is challenging, partly because a video camera is limited in spatial and temporal resolution, and because the captured eye images contain noise. Through computer simulation, we evaluated the localization accuracy of pupil and CR centers in the eye image for small eye rotations (≪ 1 deg). The results highlight how inaccuracies in center localization are related to (1) how many pixels the pupil and CR span in the eye camera image, (2) the method used to compute the centers of the pupil and CRs, and (3) the level of image noise. Our results offer a possible explanation for why the amplitude of small saccades may not be accurately estimated by many currently used video-based eye trackers. We conclude that eye movements with arbitrarily small amplitudes can be accurately estimated using the P-CR eye-tracking principle, provided that the level of image noise is low and the pupil and CR span enough pixels in the eye camera image, or if CR localization is based on the intensity values in the eye image rather than on a binary representation.
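
The difference between a binary and an intensity-based center estimate can be sketched as follows; the threshold and the test image are illustrative assumptions, not the simulation used in the study.

```python
import numpy as np

# Two CR-center estimates: centroid of a thresholded (binary) blob vs.
# an intensity-weighted centroid of the same pixels (illustrative only).

def binary_centroid(image, threshold=120):
    ys, xs = np.nonzero(image > threshold)
    return xs.mean(), ys.mean()

def intensity_weighted_centroid(image, threshold=120):
    mask = image > threshold
    ys, xs = np.nonzero(mask)
    w = image[mask].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Small synthetic spot at a sub-pixel position (15.4, 16.2)
y, x = np.mgrid[0:32, 0:32]
img = 60 + 170 * np.exp(-((x - 15.4) ** 2 + (y - 16.2) ** 2) / (2 * 2.0 ** 2))
print(binary_centroid(img), intensity_weighted_centroid(img))
# For sub-pixel displacements, the binary estimate changes in discrete jumps
# as pixels enter or leave the blob; the weighted estimate changes smoothly.
```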


Subjects
Eye Movements; Saccades; Humans; Computer Simulation; Pupil
12.
Behav Res Methods; 55(4): 1513-1536, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35680764

ABSTRACT

Pupil-corneal reflection (P-CR) eye tracking has gained a prominent role in studying dog visual cognition, despite methodological challenges that often lead to lower-quality data than when recording from humans. In the current study, we investigated whether and how the morphology of dogs might interfere with tracking by P-CR systems, and to what extent such interference, possibly in combination with dog-unique eye-movement characteristics, may undermine data quality and affect eye-movement classification when processed through algorithms. To this end, we conducted an eye-tracking experiment with dogs and humans, investigated incidences of tracking interference, compared how dogs and humans blinked, and examined how the differential quality of dog and human data affected the detection and classification of eye-movement events. Our results show that the morphology of dogs' faces and eyes can interfere with the tracking methods of these systems, and that dogs blink less often but their blinks are longer. Importantly, the lower quality of the dog data led to larger differences in how two different event-detection algorithms classified fixations, indicating that key dependent variables are more susceptible to the choice of algorithm in dog than in human data. Further, two measures of the Nyström and Holmqvist (Behavior Research Methods, 42(4), 188-204, 2010) algorithm showed that dog fixations are less stable and that dog data contain more trials with extreme levels of noise. Our findings call for analyses better adjusted to the characteristics of dog eye-tracking data, and our recommendations will help future dog eye-tracking studies acquire quality data that enable robust comparisons of visual cognition between dogs and humans.


Subjects
Data Accuracy; Eye-Tracking Technology; Humans; Dogs; Animals; Eye Movements; Blinking; Cognition
13.
Behav Res Methods; 55(8): 4128-4142, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36326998

ABSTRACT

How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All of the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). Accuracy became worse as the movement became more vigorous. During skipping and jumping, the largest error was 5.8°, but most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.
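
For illustration, an accuracy value in degrees for a participant fixating a known physical target can be obtained as the angle between the reported gaze direction and the eye-to-target direction, as in the following sketch; the coordinate conventions and example numbers are assumptions.

```python
import numpy as np

# Angular error between a reported gaze direction and the direction from the
# eye to a fixed physical target (illustrative coordinate convention).
def angular_error_deg(gaze_dir, eye_pos, target_pos):
    to_target = np.asarray(target_pos, float) - np.asarray(eye_pos, float)
    g = np.asarray(gaze_dir, float)
    cos_angle = np.dot(g, to_target) / (np.linalg.norm(g) * np.linalg.norm(to_target))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# One sample while jumping: gaze slightly off a target 2 m straight ahead
print(angular_error_deg(gaze_dir=[0.05, 0.02, 1.0],
                        eye_pos=[0.0, 1.7, 0.0],
                        target_pos=[0.0, 1.7, 2.0]))  # ~3.1 degrees
```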


Subjects
Eye Movements; Wearable Electronic Devices; Humans; Movement; Eye Movement Measurements; Head Movements
14.
Atten Percept Psychophys; 84(8): 2623-2640, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35996058

ABSTRACT

Eye contact is essential for human interactions. We investigated whether humans are able to avoid eye contact while navigating crowds. At a science festival, we fitted 62 participants with a wearable eye tracker and instructed them to walk a route. Half of the participants were further instructed to avoid eye contact. We report that humans can flexibly allocate their gaze while navigating crowds and avoid eye contact primarily by orienting their head and eyes towards the floor. We discuss implications for crowd navigation and gaze behavior. In addition, we address a number of issues encountered in such field studies with regard to data quality, control of the environment, and participant adherence to instructions. We stress that methodological innovation and scientific progress are strongly interrelated.


Subjects
Eye-Tracking Technology; Wearable Electronic Devices; Humans; Crowding; Walking; Eye; Fixation, Ocular
15.
PeerJ; 10: e12948, 2022.
Article in English | MEDLINE | ID: mdl-35186506

ABSTRACT

Eyeblink conditioning is the most popular paradigm for studying classical conditioning in humans. However, because the eyelids are under voluntary control, it is ultimately impossible to ascertain whether a blink response is 'conditioned' or a well-timed 'voluntary' blink. In contrast, the pupillary response is an autonomic response that is not under voluntary control. By conditioning the pupillary response, one might avoid potential volition-related confounds. Several attempts have been made to condition the pupillary constriction and dilation responses, with the earliest published attempts dating back to the beginning of the 20th century. While a few early studies reported successful conditioning of pupillary constriction, later studies have failed to replicate this. The apparatus for recording pupil size, the type of stimuli used, and the interval between the stimuli have varied across previous attempts, which may explain the inconsistent results. Moreover, measuring pupil size used to be cumbersome, whereas today an eye tracker can measure pupil size continuously and non-invasively. Here we used an eye tracker to test whether it is possible to condition the autonomic pupillary constriction response by pairing a tone (CS) and a light (US) with a 1-s CS-US interval. Unlike in previous studies, our subjects went through multiple training sessions to ensure that any potential lack of conditioning would not be due to too little training. A total of 10 participants completed 2-12 conditioning sessions, each lasting approximately 20 minutes. One training session consisted of 75 paired tone + light trials and 25 randomly interspersed CS-alone trials. The eye tracker (Tobii Pro Nano) continuously measured participants' pupil size. To test statistically whether conditioning of the pupillary response occurred, we compared pupil size after the tone in the first and last sessions. The results showed a complete lack of evidence of conditioning. Although pupil size varied slightly between participants, it did not change as a result of training, irrespective of the number of training sessions. The data replicate previous findings that pupillary constriction does not show conditioning. We conclude that it is not possible to condition pupillary constriction, at least not by pairing a tone and a light. One hypothesis is that when pupillary conditioning has been observed in previous studies, it was mediated by conditioning of an emotional response.
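
The trial structure described above (75 paired and 25 randomly interspersed CS-alone trials per session, with a 1-s CS-US interval) could be generated as in the following sketch; the shuffling scheme and variable names are illustrative assumptions, not the experiment code.

```python
import random

# Illustrative generation of one training session's trial schedule.
def make_session(n_paired=75, n_cs_alone=25, cs_us_interval_s=1.0, seed=1):
    trials = (["paired"] * n_paired) + (["cs_alone"] * n_cs_alone)
    random.Random(seed).shuffle(trials)   # randomly intersperse CS-alone trials
    schedule = []
    for kind in trials:
        trial = {"type": kind, "tone_onset_s": 0.0}
        if kind == "paired":
            trial["light_onset_s"] = cs_us_interval_s  # US follows CS by 1 s
        schedule.append(trial)
    return schedule

session = make_session()
print(len(session), sum(t["type"] == "paired" for t in session))  # 100 75
```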


Subjects
Conditioning, Classical; Pupil; Humans; Constriction; Pupil/physiology; Conditioning, Classical/physiology
16.
Behav Res Methods; 54(6): 2765-2776, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35023066

ABSTRACT

Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to eye-tracking data, researchers can choose among a broad range of classification methods to extract various behaviors (e.g., saccades, blinks, fixations) from the gaze signal. There is extensive literature on the different classification algorithms. Surprisingly, not much is known about the effect of the fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: what is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data of high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5°), the choice of classification algorithm does not matter much, as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0° and a rule that selects fixations with durations longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of the parameters they used.
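
The two selection rules can be sketched as a post-processing step on already-classified fixation candidates, as below; the data structures are illustrative assumptions, while the 1.0° and 60-ms thresholds are the values recommended above.

```python
# Selection rules applied to classified fixation candidates: merge consecutive
# fixations separated by a saccade smaller than a minimal amplitude, then
# discard fixations shorter than a minimal duration (illustrative sketch).

def apply_selection_rules(fixations, min_saccade_amp_deg=1.0, min_fix_dur_ms=60):
    """fixations: list of dicts with onset_ms, offset_ms, x_deg, y_deg."""
    merged = [dict(fixations[0])]
    for fix in fixations[1:]:
        prev = merged[-1]
        amplitude = ((fix["x_deg"] - prev["x_deg"]) ** 2 +
                     (fix["y_deg"] - prev["y_deg"]) ** 2) ** 0.5
        if amplitude < min_saccade_amp_deg:
            prev["offset_ms"] = fix["offset_ms"]      # merge across tiny saccade
        else:
            merged.append(dict(fix))
    return [f for f in merged
            if f["offset_ms"] - f["onset_ms"] >= min_fix_dur_ms]

candidates = [
    {"onset_ms": 0,   "offset_ms": 120, "x_deg": 0.0, "y_deg": 0.0},
    {"onset_ms": 130, "offset_ms": 180, "x_deg": 0.4, "y_deg": 0.1},  # merged
    {"onset_ms": 200, "offset_ms": 240, "x_deg": 5.0, "y_deg": 0.0},  # merged with next
    {"onset_ms": 250, "offset_ms": 400, "x_deg": 5.1, "y_deg": 0.1},
]
print(len(apply_selection_rules(candidates)))  # 2
```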

17.
Infancy; 27(1): 25-45, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34687142

ABSTRACT

The Tobii Pro TX300 is a popular eye tracker in developmental eye-tracking research, yet it is no longer manufactured. If a TX300 breaks down, it may have to be replaced. The data quality of the replacement eye tracker may differ from that of the TX300, which may affect the experimental outcome measures. This is problematic for longitudinal and multi-site studies, and for researchers replacing eye trackers between studies. We therefore ask how the TX300 and its successor, the Tobii Pro Spectrum, compare in terms of eye-tracking data quality. Data quality, operationalized through precision, accuracy, and data loss, was compared between the eye trackers for three age groups (around 5 months, 10 months, and 3 years). Precision was better for all gaze position signals obtained with the Spectrum than with the TX300. Accuracy of the Spectrum was higher for the 5-month-old and 10-month-old children; for the 3-year-old children, accuracy was similar across both eye trackers. Gaze position signals from the Spectrum exhibited lower proportions of data loss, and the data-loss periods tended to be shorter. In conclusion, the Spectrum produces gaze position signals with higher data quality, especially for the younger infants. Implications for data analysis are discussed.
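
For illustration, the data-loss measures mentioned above (the proportion of lost samples and the durations of loss periods) could be computed as in this sketch; the NaN coding of lost samples and the 300-Hz sampling rate are assumptions, not the study's analysis code.

```python
import numpy as np

# Proportion of lost (NaN) samples and durations of contiguous loss periods.
def data_loss(gaze_x, sampling_rate_hz=300):
    lost = np.isnan(np.asarray(gaze_x, dtype=float))
    proportion = lost.mean()
    edges = np.diff(np.concatenate(([0], lost.astype(int), [0])))
    starts, ends = np.nonzero(edges == 1)[0], np.nonzero(edges == -1)[0]
    durations_ms = (ends - starts) * 1000.0 / sampling_rate_hz
    return proportion, durations_ms

x = [1.2, np.nan, np.nan, 1.3, 1.3, np.nan, 1.4, 1.4]
print(data_loss(x))  # 0.375, [6.67 3.33] ms (approx.)
```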


Subjects
Data Accuracy; Eye-Tracking Technology; Child; Child, Preschool; Data Collection; Eye Movements; Humans; Infant
18.
Iperception; 12(6): 20416695211055766, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34900212

ABSTRACT

The concept of optic flow, a global pattern of visual motion that is both caused by and signals self-motion, is canonically ascribed to James Gibson's 1950 book "The Perception of the Visual World." There have, however, been several other developments of this concept, chiefly by Gwilym Grindley and Edward Calvert. Based on rarely referenced scientific literature and archival research, this article describes the development of the concept of optic flow by the aforementioned authors and several others. The article furthermore presents the available evidence for interactions between these authors, focusing on whether parts of Gibson's proposal were derived from the work of Grindley or Calvert. While Grindley's work may have made Gibson aware of the geometrical facts of optic flow, Gibson's work is not derivative of Grindley's. It is furthermore shown that Gibson only learned of Calvert's work in 1956, almost a decade after Gibson first published his proposal. In conclusion, the development of the concept of optic flow presents an intriguing example of convergent thought in the progress of science.

19.
Behav Res Methods; 53(5): 1986-2006, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33709298

ABSTRACT

The pupil size artefact (PSA) is the gaze deviation reported by an eye tracker during pupil-size changes when the eye does not rotate. In the present study, we ask three questions: (1) how stable is the PSA over time, (2) does the PSA depend on properties of the eye-tracker setup, and (3) does the PSA depend on the participant's viewing direction? We found that the PSA is very stable over time, for periods as long as one year, but may differ between participants. When comparing the magnitude of the PSA between eye trackers, we found it to be related to the direction of the eye-tracker camera axis, suggesting that the angle between the participant's viewing direction and the camera axis affects the PSA. We then investigated the PSA as a function of viewing direction. The PSA was non-zero for a viewing direction of 0° and depended on the viewing direction. These findings corroborate the suggestion by Choe et al. (Vision Research, 118, 48-59, 2016) that the PSA can be described by an idiosyncratic and a viewing-direction-dependent component. Based on a simulation, we cannot claim that the viewing-direction-dependent component of the PSA is caused by the optics of the cornea.
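
One simple way to quantify the PSA during steady fixation is to regress the reported gaze position on the reported pupil size and take the slope (deg/mm) as the artefact magnitude, as sketched below; this linear model and the example data are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

# Slope of reported gaze position vs. pupil size during steady fixation:
# the apparent gaze shift per unit pupil-size change (illustrative).
def psa_slope_deg_per_mm(pupil_mm, gaze_deg):
    slope, intercept = np.polyfit(pupil_mm, gaze_deg, deg=1)
    return slope

pupil = np.array([3.0, 3.5, 4.0, 4.5, 5.0])      # pupil diameter (mm)
gaze = np.array([0.02, 0.21, 0.39, 0.62, 0.81])  # reported horizontal gaze (deg)
print(psa_slope_deg_per_mm(pupil, gaze))  # ~0.4 deg per mm
```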


Subjects
Artifacts; Pupil; Eye Movement Measurements; Eye Movements; Humans
20.
Iperception; 12(1): 2041669520983338, 2021.
Article in English | MEDLINE | ID: mdl-33628410

ABSTRACT

A number of virtual reality head-mounted displays (HMDs) with integrated eye trackers have recently become commercially available. If their eye tracking latency is low and reliable enough for gaze-contingent rendering, this may open up many interesting opportunities for researchers. We measured eye tracking latencies for the Fove-0, the Varjo VR-1, and the High Tech Computer Corporation (HTC) Vive Pro Eye using simultaneous electrooculography measurements. We determined the time from the occurrence of an eye position change to its availability as a data sample from the eye tracker (delay) and the time from an eye position change to the earliest possible change of the display content (latency). For each test and each device, participants performed 60 saccades between two targets 20° of visual angle apart. The targets were continuously visible in the HMD, and the saccades were instructed by an auditory cue. Data collection and eye tracking calibration were done using the recommended scripts for each device in Unity3D. The Vive Pro Eye was recorded twice, once using the SteamVR SDK and once using the Tobii XR SDK. Our results show clear differences between the HMDs. Delays ranged from 15 ms to 52 ms, and the latencies ranged from 45 ms to 81 ms. The Fove-0 appears to be the fastest device and best suited for gaze-contingent rendering.
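
The delay measure described above can be illustrated by detecting the same saccade onset in a reference signal (here simulated EOG) and in the eye-tracker signal and taking the time difference; the velocity-threshold onset detection and all parameter values are assumptions, not the study's pipeline.

```python
import numpy as np

# Delay: time from saccade onset in a reference signal (EOG) to the first
# eye-tracker sample reflecting the eye position change (illustrative).
def onset_time(t_s, signal, velocity_threshold):
    velocity = np.abs(np.gradient(signal, t_s))
    return t_s[np.argmax(velocity > velocity_threshold)]

# Simulated 20-deg saccade sampled at 1 kHz; the tracker reports it 30 ms late
t = np.arange(0.0, 0.4, 0.001)
eog = 20.0 / (1.0 + np.exp(-(t - 0.200) / 0.005))        # saccade at ~200 ms
tracker = 20.0 / (1.0 + np.exp(-(t - 0.230) / 0.005))    # same saccade, delayed

delay_ms = (onset_time(t, tracker, 100.0) - onset_time(t, eog, 100.0)) * 1000
print(round(delay_ms))  # ~30
```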
