Results 1 - 8 of 8
1.
Proc Natl Acad Sci U S A ; 121(12): e2302239121, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38470927

ABSTRACT

Humans coordinate their eye, head, and body movements to gather information from a dynamic environment while maximizing reward and minimizing biomechanical and energetic costs. However, such natural behavior is not possible in traditional experiments employing head/body restraints and artificial, static stimuli. Therefore, it is unclear to what extent mechanisms of fixation selection discovered in lab studies, such as inhibition-of-return (IOR), influence everyday behavior. To address this gap, participants performed nine real-world tasks, including driving, visually searching for an item, and building a Lego set, while wearing a mobile eye tracker (169 recordings; 26.6 h). Surprisingly, in all tasks, participants most often returned to what they just viewed and saccade latencies were shorter preceding return than forward saccades, i.e., consistent with facilitation, rather than inhibition, of return. We hypothesize that conservation of eye and head motor effort ("laziness") contributes. Correspondingly, we observed center biases in fixation position and duration relative to the head's orientation. A model that generates scanpaths by randomly sampling these distributions reproduced all return phenomena we observed, including distinct 3-fixation sequences for forward versus return saccades. After controlling for orbital eccentricity, one task (building a Lego set) showed evidence for IOR. This, along with small discrepancies between model and data, indicates that the brain balances minimization of motor costs with maximization of rewards (e.g., accomplished by IOR and other mechanisms) and that the optimal balance varies according to task demands. Supporting this account, the orbital range of motion used in each task traded off lawfully with fixation duration.
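A rough illustration of the paper's sampling idea: generate scanpaths by drawing fixation positions from a head-centered, center-biased distribution, then count how often gaze returns to the just-viewed location. The Gaussian width and the 2-degree return criterion below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_scanpath(n_fixations=10_000, sigma_deg=8.0):
    # Center bias: fixations cluster around the head's orientation (0, 0).
    return rng.normal(0.0, sigma_deg, size=(n_fixations, 2))

def return_rate(scanpath, radius_deg=2.0):
    # A "return" saccade brings fixation n+2 back near fixation n.
    d_return = np.linalg.norm(scanpath[2:] - scanpath[:-2], axis=1)
    return np.mean(d_return < radius_deg)

path = simulate_scanpath()
print(f"return rate under pure center-biased sampling: {return_rate(path):.3f}")
```

Even this memoryless sampler produces frequent returns, which is the sense in which center bias alone can mimic facilitation-of-return.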


Subjects
Brain, Saccades, Humans, Psychological Inhibition, Ocular Fixation
2.
Article in English | MEDLINE | ID: mdl-37027619

ABSTRACT

In this study, we establish a much-needed baseline for evaluating eye tracking interactions using an eye-tracking-enabled Meta Quest 2 VR headset with 30 participants. Each participant went through 1,098 targets under multiple conditions representative of AR/VR targeting and selection tasks, including both traditional standards and conditions more aligned with today's AR/VR interactions. We used circular, white, world-locked targets and an eye tracking system with sub-1-degree mean accuracy error running at approximately 90 Hz. In a targeting and button-press selection task, we deliberately compared completely unadjusted, cursor-less eye tracking against controller and head tracking, both of which had cursors. Across all inputs, we presented targets in a configuration similar to the ISO 9241-9 reciprocal selection task and in a second format with targets distributed more evenly near the center. Targets were laid out either flat on a plane or tangent to a sphere and rotated toward the user. Even though we intended this as a baseline study, unmodified eye tracking, without any form of cursor or feedback, outperformed head tracking by 27.9% and performed comparably to the controller (a 5.63% decrease) in throughput. Eye tracking received better subjective ratings than head tracking for Ease of Use, Adoption, and Fatigue (improvements of 66.4%, 89.8%, and 116.1%, respectively) and similar ratings relative to the controller (reductions of 4.2%, 8.9%, and 5.2%, respectively). Eye tracking had a higher miss percentage (17.3%) than the controller (4.7%) or the head (7.2%). Collectively, the results of this baseline study strongly indicate that eye tracking, with even minor, sensible interaction design modifications, has tremendous potential to reshape interactions in next-generation AR/VR head-mounted displays.
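The throughput figures compared above are the ISO 9241-9 / Fitts-style measure: effective index of difficulty divided by movement time. A minimal sketch of that computation follows; the sample distances, times, and endpoint errors are made up purely to show the arithmetic.

```python
import math
import statistics

def throughput(distances_deg, movement_times_s, endpoint_errors_deg):
    d_e = statistics.mean(distances_deg)                  # effective distance
    w_e = 4.133 * statistics.stdev(endpoint_errors_deg)   # effective width
    id_e = math.log2(d_e / w_e + 1.0)                     # effective index of difficulty (bits)
    return id_e / statistics.mean(movement_times_s)       # bits per second

print(throughput([10, 12, 11], [0.42, 0.47, 0.45], [0.4, 0.6, 0.5]))
```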

3.
J Vis ; 22(8): 1, 2022 07 11.
Article in English | MEDLINE | ID: mdl-35816048

ABSTRACT

Psychophysical, motor control, and modeling studies have revealed that sensorimotor reference frame transformations (RFTs) add variability to transformed signals. For perceptual decision-making, this phenomenon could decrease the fidelity of a decision signal's representation or, alternatively, improve its processing through stochastic facilitation. We investigated these two hypotheses under various sensorimotor RFT constraints. Participants performed a time-limited, forced-choice motion discrimination task under eight combinations of head roll and/or stimulus rotation while responding either with a saccade or a button press. This paradigm, together with the use of a decision model, allowed us to parameterize and correlate perceptual decision behavior with eye-, head-, and shoulder-centered sensory and motor reference frames. Misalignments between sensory and motor reference frames produced systematic changes in reaction time and response accuracy. For some conditions, these changes were consistent with a degradation of motion evidence commensurate with a decrease in stimulus strength in our model framework. Differences in participant performance were explained by a continuum of eye-head-shoulder representations of accumulated motion evidence, with an eye-centered bias during saccades and a shoulder-centered bias during button presses. In addition, we observed evidence for stochastic facilitation during head-rolled conditions (i.e., head roll resulted in faster, more accurate decisions in oblique motion for a given stimulus-response misalignment). We show that perceptual decision-making and stochastic RFTs are inseparable within the present context: simply rolling one's head alters perceptual decision-making in a way that is predicted by stochastic RFTs.
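A minimal sketch of what a stochastic RFT means here: a retinal motion direction is rotated into shoulder coordinates using a noisy head-roll estimate, so larger transformations inject more variability into the decision evidence. The noise magnitudes are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def transform_motion(direction_deg, head_roll_deg, noise_per_deg=0.05, n=10_000):
    # Signal-dependent noise: the larger the head roll, the noisier the
    # transformed motion direction that feeds the decision process.
    roll_noise_sd = noise_per_deg * abs(head_roll_deg)
    noisy_roll = head_roll_deg + rng.normal(0.0, roll_noise_sd, n)
    return direction_deg + noisy_roll  # shoulder-centered direction samples

upright = transform_motion(45.0, head_roll_deg=0.0)
rolled = transform_motion(45.0, head_roll_deg=30.0)
print(f"direction SD upright: {np.std(upright):.2f} deg, rolled: {np.std(rolled):.2f} deg")
```

Depending on the decision architecture, that added variability can either blur the evidence or, for weak oblique signals, push it across threshold more often, which is the stochastic facilitation reading.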


Subjects
Decision Making, Saccades, Decision Making/physiology, Humans, Photic Stimulation, Reaction Time, Rotation
4.
J Vis ; 22(1): 2, 2022 01 04.
Article in English | MEDLINE | ID: mdl-34982104

ABSTRACT

Numerous studies have demonstrated that visuospatial attention is a requirement for successful working memory encoding. It is unknown, however, whether this established relationship manifests in consistent gaze dynamics as people orient their visuospatial attention toward an encoding target when searching for information in naturalistic environments. To test this, participants' eye movements were recorded while they searched for and encoded objects in a virtual apartment (Experiment 1). We decomposed gaze into 61 features that capture gaze dynamics and trained a sliding-window logistic regression model, which has potential for use in real-time systems, to predict when participants found target objects for working memory encoding. A model trained on group data successfully predicted when people oriented to a target for encoding, both for the trained task (Experiment 1) and for a novel task (Experiment 2) in which a new set of participants found objects and encoded an associated nonword in a cluttered virtual kitchen. Six of these features were predictive of target orienting for encoding, even during the novel task, including decreased distances between subsequent fixation/saccade events, increased fixation probabilities, and slower saccade decelerations before encoding. This suggests that as people orient toward a target to encode new information at the end of search, they decrease task-irrelevant, exploratory sampling behaviors. This behavior was common across the two studies. Together, this research demonstrates how gaze dynamics can be used to capture target orienting for working memory encoding and has implications for real-world use in technology and special populations.
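A minimal sketch of the classifier pipeline described above: windows of gaze-dynamics features are labeled by whether they precede target orienting for encoding, then fed to a logistic regression. The synthetic features and window counts are stand-ins, not the paper's 61-feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_windows, n_features = 500, 6   # e.g., fixation probability, saccade deceleration, ...

X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, 2, size=n_windows)   # 1 = window precedes encoding
X[y == 1, 0] += 1.0                      # inject one predictive feature for the demo

model = LogisticRegression().fit(X, y)
p_encoding = model.predict_proba(X[:5])[:, 1]   # would run on streaming windows online
print(np.round(p_encoding, 2))
```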


Subjects
Short-Term Memory, Virtual Reality, Attention, Eye Movements, Ocular Fixation, Humans, Saccades
5.
J Vis ; 19(12): 21, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31647515

ABSTRACT

Depth perception requires the use of an internal model of the eye-head geometry to infer distance from binocular retinal images and extraretinal 3D eye-head information, particularly ocular vergence. Similarly, for the perception of motion in depth, gaze angle is required to correctly interpret the spatial direction of motion from retinal images; however, it is unknown whether the brain can make adequate use of extraretinal version and vergence information to correctly transform binocular retinal motion into 3D spatial coordinates. Here we tested this by asking participants to reconstruct the spatial trajectory of an isolated disparity stimulus moving in depth either peri-foveally or peripherally while their gaze was oriented at different vergence and version angles. We found large systematic errors in the perceived motion trajectory that reflected an intermediate reference frame between a purely retinal interpretation of binocular retinal motion (not accounting for veridical vergence and version) and the spatially correct motion. We quantify these errors with a 3D reference frame model accounting for target, eye, and head position upon motion percept encoding. This model captured the behavior well, revealing that participants tended to underestimate their version by up to 17%, overestimate their vergence by up to 22%, and underestimate the overall change in retinal disparity by up to 64%, and that the use of extraretinal information depended on retinal eccentricity. Since such large perceptual errors are not observed in everyday viewing, we suggest that both monocular retinal cues and binocular extraretinal signals are required for accurate real-world perception of motion in depth.
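A geometry sketch of why version and vergence matter here: fixation distance follows from vergence, direction from version, so misweighting either signal (below, by the abstract's ~17% version and ~22% vergence figures) shifts the recovered 3D position. The 6.4 cm interocular distance and symmetric-vergence setup are assumptions for illustration.

```python
import math

IOD_M = 0.064  # assumed interocular distance (m)

def target_from_gaze(version_rad, vergence_rad):
    # Fixation distance from vergence, lateral/forward position from version.
    distance = IOD_M / (2.0 * math.tan(vergence_rad / 2.0))
    return distance * math.sin(version_rad), distance * math.cos(version_rad)

true_xy = target_from_gaze(math.radians(10), math.radians(4))
biased_xy = target_from_gaze(math.radians(10) * (1 - 0.17),   # version underestimated
                             math.radians(4) * (1 + 0.22))    # vergence overestimated
print(f"true (x, z): {true_xy}")
print(f"misperceived (x, z): {biased_xy}")
```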


Subjects
Depth Perception, Eye Movements, Motion Perception, Retina/physiology, Visual Disparity, Cues (Psychology), Equipment Design, Female, Fovea Centralis/physiology, Humans, Three-Dimensional Imaging, Male, Reproducibility of Results, Binocular Vision, Young Adult
6.
J Vis ; 19(11): 10, 2019 09 03.
Article in English | MEDLINE | ID: mdl-31533148

ABSTRACT

Natural orienting of gaze often results in a retinal image that is rotated relative to space due to ocular torsion. However, we perceive neither this rotation nor a moving world despite visual rotational motion on the retina. This perceptual stability is often attributed to the phenomenon known as predictive remapping, but the current remapping literature ignores this torsional component. In addition, studies often simply measure remapping across either space or features (e.g., orientation) but in natural circumstances, both components are bound together for stable perception. One natural circumstance in which the perceptual system must account for the current and future eye orientation to correctly interpret the orientation of external stimuli occurs during movements to or from oblique eye orientations (i.e., eye orientations with both a horizontal and vertical angular component relative to the primary position). Here we took advantage of oblique eye orientation-induced ocular torsion to examine perisaccadic orientation perception. First, we found that orientation perception was largely predicted by the rotated retinal image. Second, we observed a presaccadic remapping of orientation perception consistent with maintaining a stable (but spatially inaccurate) retinocentric perception throughout the saccade. These findings strongly suggest that our seamless perceptual stability relies on retinocentric signals that are predictively remapped in all three ocular dimensions with each saccade.
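To make the "false torsion" at oblique orientations concrete: under Listing's law, an eye orientation with both horizontal and vertical components carries a torsional component when expressed in Fick coordinates, which rotates the retinal image. The sketch below uses the standard approximation tan(T/2) = tan(H/2)·tan(V/2); the sign depends on the chosen coordinate convention, and this is a simplification rather than the paper's analysis.

```python
import math

def false_torsion_deg(horizontal_deg, vertical_deg):
    # Listing's-law torsion of an oblique eye orientation in Fick coordinates.
    h = math.radians(horizontal_deg) / 2.0
    v = math.radians(vertical_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(h) * math.tan(v)))

# A 20-degree oblique orientation already tilts the retinal image noticeably.
print(f"{false_torsion_deg(20, 20):.2f} deg of torsion at (20, 20) deg")
```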


Subjects
Spatial Orientation/physiology, Saccades/physiology, Visual Perception/physiology, Adult, Female, Ocular Fixation/physiology, Humans, Male, Retina/physiology, Rotation, Ocular Vision/physiology, Young Adult
7.
J Neurophysiol ; 113(5): 1377-99, 2015 Mar 01.
Article in English | MEDLINE | ID: mdl-25475344

ABSTRACT

Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
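A toy version of the architecture sketched above: a feedforward network whose hidden units mix 2D retinal motion with 3D eye/head orientation and velocity signals, so that extraretinal inputs gain-modulate the visual response. Layer sizes and the random weights are placeholders; the paper's network was trained, not random, and output spatially correct 3D pursuit commands.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(retinal_motion_2d, eye_head_state_12d, w1, w2):
    x = np.concatenate([retinal_motion_2d, eye_head_state_12d])
    hidden = np.tanh(w1 @ x)   # hidden units combine visual and extraretinal input
    return w2 @ hidden         # 3-D pursuit velocity command

# 2 retinal inputs + 12 eye/head inputs (3-D orientation and velocity for each).
w1 = rng.normal(scale=0.3, size=(50, 14))
w2 = rng.normal(scale=0.3, size=(3, 50))
command = forward(rng.normal(size=2), rng.normal(size=12), w1, w2)
print("3-D pursuit command:", np.round(command, 3))
```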


Subjects
Neurological Models, Psychomotor Performance, Smooth Pursuit, Visual Perception, Humans, Retina/physiology
8.
J Neurophysiol ; 110(3): 732-47, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23678014

ABSTRACT

To compute spatially correct smooth pursuit eye movements, the brain uses both retinal motion and extraretinal signals about the eyes and head in space (Blohm and Lefèvre 2010). However, when smooth eye movements rely solely on memorized target velocity, such as during anticipatory pursuit, it is unknown whether this velocity memory also accounts for extraretinal information, such as head roll and ocular torsion. To answer this question, we used a novel behavioral updating paradigm in which participants pursued a repetitive, spatially constant fixation-gap-ramp stimulus in series of five trials. During the first four trials, participants' heads were rolled toward one shoulder, inducing ocular counterroll (OCR). With each repetition, participants increased their anticipatory pursuit gain, indicating a robust encoding of velocity memory. On the fifth trial, they rolled their heads to the opposite shoulder before pursuit, also inducing changes in ocular torsion. Consequently, for spatially accurate anticipatory pursuit, the velocity memory had to be updated across changes in head roll and ocular torsion. We tested how the velocity memory accounted for head roll and OCR by observing how changes to these signals affected the anticipatory trajectories of the memory decoding (fifth) trials. We found that anticipatory pursuit was updated for changes in head roll; however, we observed no evidence of compensation for OCR, indicating the absence of ocular torsion signals within the velocity memory. This indicates that the directional component of the memory must be coded retinally and updated to account for changes in head roll, but not OCR.
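A minimal sketch of the updating result, under assumed numbers: the retinally coded velocity memory is rotated by the change in head roll between encoding and decoding trials, but not by the ocular counterroll the roll induces, leaving a small residual directional error. The 60-degree roll change and the ~0.1 OCR gain are illustrative assumptions.

```python
import numpy as np

def rotate(v, angle_deg):
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]]) @ v

v_memory = np.array([10.0, 0.0])   # deg/s, retinal coordinates at encoding
roll_change = -60.0                # head rolled to the opposite shoulder
ocr_change = -0.1 * roll_change    # counterroll opposes the roll (assumed gain ~0.1)

observed = rotate(v_memory, roll_change)                    # compensates roll only
fully_updated = rotate(v_memory, roll_change + ocr_change)  # would also compensate OCR
print("observed:", np.round(observed, 2), "fully updated:", np.round(fully_updated, 2))
```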


Subjects
Psychological Anticipation/physiology, Memory/physiology, Smooth Pursuit/physiology, Retina/physiology, Adult, Female, Humans, Male, Young Adult