1.
PLoS One; 19(3): e0289855, 2024.
Article in English | MEDLINE | ID: mdl-38457388

ABSTRACT

When humans navigate through complex environments, they coordinate gaze and steering to sample the visual information needed to guide movement. Gaze and steering behavior have been extensively studied in the context of automobile driving along a winding road, leading to accounts of movement along well-defined paths over flat, obstacle-free surfaces. However, humans are also capable of visually guiding self-motion in environments that are cluttered with obstacles and lack an explicit path. An extreme example of such behavior occurs during first-person view drone racing, in which pilots maneuver at high speeds through a dense forest. In this study, we explored the gaze and steering behavior of skilled drone pilots. Subjects guided a simulated quadcopter along a racecourse embedded within a custom-designed forest-like virtual environment. The environment was viewed through a head-mounted display equipped with an eye tracker to record gaze behavior. In two experiments, subjects performed the task in multiple conditions that varied in terms of the presence of obstacles (trees), waypoints (hoops to fly through), and a path to follow. Subjects often looked in the general direction of things that they wanted to steer toward, but gaze fell on nearby objects and surfaces more often than on the actual path or hoops. Nevertheless, subjects were able to perform the task successfully, steering at high speeds while remaining on the path, passing through hoops, and avoiding collisions. In conditions that contained hoops, subjects adapted how they approached the most immediate hoop in anticipation of the position of the subsequent hoop. Taken together, these findings challenge existing models of steering that assume that steering is tightly coupled to where actors look. We consider the study's broader implications as well as limitations, including the focus on a small sample of highly skilled subjects and inherent noise in measurement of gaze direction.
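
The "existing models" referenced above assume that steering is driven by where the actor currently looks. As a concrete illustration of that assumption (our sketch, not a model from the paper; the gain k and the angle convention are assumed), a tightly gaze-coupled steering rule might continuously null the angle between heading and gaze:

```python
import math

def gaze_coupled_steering_step(heading, gaze_direction, dt, k=2.0):
    """One update of a minimal "steer where you look" rule (illustrative).

    heading, gaze_direction: world-frame angles in radians.
    k: turn-rate gain (assumed value); dt: timestep in seconds.
    Heading rotates toward gaze at a rate proportional to the signed
    angular error -- the tight gaze-steering coupling that the reported
    drone-racing results call into question.
    """
    # Signed angular difference, wrapped to [-pi, pi]
    error = math.atan2(math.sin(gaze_direction - heading),
                       math.cos(gaze_direction - heading))
    return heading + k * error * dt
```

Under such a rule, gaze landing on nearby trees rather than on the path or hoops would pull steering off course, which is at odds with the successful performance the subjects exhibited.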


Subject(s)
Automobile Driving; Movement; Humans; Motion; Psychomotor Performance; Fixation, Ocular
2.
Front Comput Neurosci; 16: 844289, 2022.
Article in English | MEDLINE | ID: mdl-35431848

ABSTRACT

This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that the model with dynamic tuning yielded more accurate, shorter-latency heading estimates than the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
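
One common way to realize efficient encoding of a time-varying stimulus distribution is to place tuning curves at quantiles of recently observed values, so that neural resolution concentrates where stimuli are currently most frequent. The sketch below illustrates that general idea only; the quantile spacing, Gaussian tuning curves, window size, and function names are our assumptions, not the paper's implementation.

```python
import numpy as np

def retune_population(recent_speeds, n_units=16, window=500):
    """Re-space speed-tuned units to track the recent stimulus distribution.

    Preferred speeds are placed at equally spaced quantiles of the last
    `window` optic-flow speed samples, so unit density is highest where
    stimuli have recently been most common (an efficient-coding heuristic).
    """
    recent = np.asarray(recent_speeds)[-window:]
    quantiles = (np.arange(n_units) + 0.5) / n_units
    preferred = np.quantile(recent, quantiles)
    # Tuning widths follow the local spacing so the curves tile the range
    widths = np.maximum(np.gradient(preferred), 1e-6)
    return preferred, widths

def population_response(speed, preferred, widths, gain=1.0):
    """Gaussian responses of the retuned speed-sensitive units."""
    return gain * np.exp(-0.5 * ((speed - preferred) / widths) ** 2)
```

Calling retune_population on each simulation step keeps the population sensitive as the speed distribution drifts, which is the property the dynamic-tuning model exploits.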

3.
J Vis; 20(3): 8, 2020 Mar 17.
Article in English | MEDLINE | ID: mdl-32232376

ABSTRACT

Affordance-based control and current-future control offer competing theoretical accounts of the visual control of locomotion. The aim of this study was to test predictions derived from these accounts about the necessity of self-motion (Experiment 1) and target-ground contact (Experiment 2) in perceiving whether a moving target can be intercepted before it reaches an escape zone. We designed a novel interception task wherein the ability to perceive target catchability before initiating movement was advantageous. Subjects pursued a target moving through a field in a virtual environment and attempted to intercept the target before it escaped into a forest. Targets were catchable on some trials but not others. If subjects perceived that they could not reach the target, they were instructed to immediately give up by pressing a button. After each trial, subjects received a point reward that incentivized them to pursue only those targets that were catchable. On the majority of trials, subjects either pursued and successfully intercepted the target or chose not to pursue at all, demonstrating that humans are sensitive to catchability while stationary. Performance also degraded when the target was floating rather than in contact with the ground. Both findings are incompatible with the current-future account and support the affordance-based account of choosing whether to pursue moving targets.
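
The affordance-based account holds that catchability is perceived by relating environmental demands to one's own action capabilities. A minimal scalar sketch of that comparison follows (our simplification, not the authors' analysis; the real task involves 2D pursuit geometry, and the variable names are assumed):

```python
def is_catchable(separation, dist_to_escape, target_speed, max_speed):
    """Affordance-style catchability check (illustrative simplification).

    separation: current pursuer-target distance (m).
    dist_to_escape: distance the target must cover to reach the forest (m).
    target_speed, max_speed: target speed and maximum pursuer speed (m/s).
    The target is catchable if the pursuer can close the gap before the
    target reaches the escape zone, assuming straight-line motion.
    """
    if max_speed <= target_speed:
        return False  # the gap can never be closed
    time_to_escape = dist_to_escape / target_speed
    time_to_catch = separation / (max_speed - target_speed)
    return time_to_catch <= time_to_escape
```

A stationary observer could in principle evaluate such a comparison before moving at all, consistent with subjects' demonstrated ability to judge catchability without first pursuing.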


Subject(s)
Decision Making/physiology; Motion Perception/physiology; Psychomotor Performance/physiology; Adolescent; Female; Humans; Male; Young Adult