1.
Front Behav Neurosci ; 18: 1399716, 2024.
Article in English | MEDLINE | ID: mdl-38835838

ABSTRACT

Introduction: To move successfully from place to place, our brain often combines sensory inputs from various sources by dynamically weighting spatial cues according to their reliability and relevance for a given task. Two of the most important cues in navigation are the spatial arrangement of landmarks in the environment and the continuous path integration of travelled distances and changes in direction. Several studies have shown that Bayesian integration of cues provides a good explanation for navigation in environments dominated by small numbers of easily identifiable landmarks. However, it remains largely unclear how cues are combined in more complex environments. Methods: To investigate how humans process and combine landmarks and path integration in complex environments, we conducted a series of triangle completion experiments in virtual reality, in which we varied the number of landmarks from an open steppe to a dense forest, thus going beyond the spatially simple environments that have been studied in the past. We analysed spatial behaviour at both the population and individual level with linear regression models and developed a computational model, based on maximum likelihood estimation (MLE), to infer the underlying combination of cues. Results: Overall homing performance was optimal in an environment containing three landmarks arranged around the goal location. With more than three landmarks, individual differences between participants in the use of cues were striking. For some, the addition of landmarks did not worsen their performance, whereas for others it seemed to impair their use of landmark information. Discussion: Navigation success in complex environments appears to depend on the ability to identify the correct clearing around the goal location, suggesting that some participants may not be able to see the forest for the trees.
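
As an illustration of the MLE-style cue combination modelled above, here is a minimal Python sketch of inverse-variance weighting, assuming independent Gaussian cues; the position estimates and variances are hypothetical stand-ins, not values from the study:

```python
# Minimal sketch of maximum-likelihood cue combination for homing,
# assuming independent Gaussian errors per cue. Variances are hypothetical.
import numpy as np

def mle_combine(estimates, variances):
    """Combine independent position estimates by inverse-variance weighting."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    weights /= weights.sum()
    combined = weights @ estimates                    # MLE-optimal estimate
    combined_var = 1.0 / (1.0 / np.asarray(variances, dtype=float)).sum()
    return combined, combined_var

# Hypothetical 2D home-position estimates (metres) from the two cues:
landmark_xy = np.array([1.0, 0.2])    # from the landmark configuration
path_int_xy = np.array([1.4, -0.1])   # from path integration
home, var = mle_combine([landmark_xy, path_int_xy], [0.25, 1.0])
print(home, var)   # pulled towards the more reliable landmark cue
```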

2.
PLoS One ; 18(11): e0293536, 2023.
Article in English | MEDLINE | ID: mdl-37943845

ABSTRACT

Spatial navigation research in humans increasingly relies on experiments using virtual reality (VR) tools, which allow for the creation of highly flexible and immersive study environments that can react to participant interaction in real time. Despite the popularity of VR, tools simplifying the creation and data management of such experiments are rare and often restricted to a specific scope, limiting usability and comparability. To overcome these limitations, we introduce the Virtual Navigation Toolbox (VNT), a collection of interchangeable and independent tools for the development of spatial navigation VR experiments using the popular Unity game engine. The VNT's features are packaged in loosely coupled and reusable modules, facilitating convenient implementation of diverse experimental designs. Here, we show how the VNT fulfils the feature requirements of different VR environments and experiments, and guide the reader through the implementation and execution of a showcase study using the toolbox. The showcase study reveals that homing performance in a classic triangle completion task is invariant to the translation velocity of the participant's avatar, but highly sensitive to the number of landmarks. The VNT is freely available under a Creative Commons license, and we invite researchers to contribute by extending and improving its tools via the provided repository.


Subject(s)
Spatial Navigation , Virtual Reality , Humans
3.
Proc Biol Sci ; 288(1943): 20203051, 2021 01 27.
Article in English | MEDLINE | ID: mdl-33468001

ABSTRACT

To minimize the risk of colliding with the ground or other obstacles, flying animals need to control both their ground speed and ground height. This task is particularly challenging in wind, where head winds require an animal to increase its airspeed to maintain a constant ground speed and tail winds may generate negative airspeeds, rendering flight more difficult to control. In this study, we investigate how head and tail winds affect flight control in the honeybee Apis mellifera, which is known to rely on the pattern of visual motion generated across the eye (known as optic flow) to maintain constant ground speeds and heights. We find that, when provided with both longitudinal and transverse optic flow cues (in or perpendicular to the direction of flight, respectively), honeybees maintain a constant ground speed but fly lower in head winds and higher in tail winds, a response that is also observed when longitudinal optic flow cues are minimized. When the transverse component of optic flow is minimized, or when all optic flow cues are minimized, the effect of wind on ground height is abolished. We propose that the regular sideways oscillations that the bees make as they fly may be used to extract information about the distance to the ground, independently of the longitudinal optic flow that they use for ground speed control. This computationally simple strategy could have potential uses in the development of lightweight and robust systems for guiding autonomous flying vehicles in natural environments.
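
The proposed distance-from-oscillation strategy reduces to one line of geometry: for flat ground and small angles, the transverse angular image velocity equals sideways speed divided by height. A minimal sketch with made-up numbers, not measurements from the study:

```python
# Sketch: lateral oscillations could yield ground distance from transverse
# optic flow, since the angular velocity of the ground image = v_side / height.
# Values below are illustrative only.

def height_from_transverse_flow(v_side, omega_trans):
    """Estimate ground height (m) from sideways speed (m/s) and transverse
    optic flow rate (rad/s), assuming flat ground and small angles."""
    return v_side / omega_trans

# A bee sliding sideways at 0.3 m/s while the ground image sweeps across
# the ventral eye at 0.6 rad/s would infer a height of:
print(height_from_transverse_flow(0.3, 0.6))   # -> 0.5 m
```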


Subject(s)
Optic Flow , Animals , Bees , Cues , Environment , Flight, Animal , Wind
4.
Front Behav Neurosci ; 11: 132, 2017.
Article in English | MEDLINE | ID: mdl-28769773

ABSTRACT

Memories of places often include landmark cues, i.e., information provided by the spatial arrangement of distinct objects with respect to the target location. To study how humans combine landmark information for navigation, we conducted two experiments: participants were either provided with auditory landmarks while walking in a large sports hall, or with visual landmarks while walking on a virtual-reality treadmill setup. We found that participants cannot reliably locate their home position when only one or two uniform landmarks provide cues with respect to the target, due to ambiguities in the spatial arrangement. With three visual landmarks that look alike, the task is solved without ambiguity, while auditory landmarks need to play three unique sounds for a similar performance. This reduction in ambiguity through integration of landmark information from one, two, and three landmarks is well modeled using a probabilistic approach based on maximum likelihood estimation (MLE). Unlike any deterministic model of human navigation (based, e.g., on distance or angle information), this probabilistic model predicted both the precision and the accuracy of human homing performance. To further examine how landmark cues are integrated, we introduced systematic conflicts in the visual landmark configuration between training of the home position and tests of homing performance. Participants integrated the spatial information from each landmark near-optimally to reduce spatial variability. When the conflict becomes large, this integration breaks down and precision is sacrificed for accuracy: participants return closer to the home position because they start ignoring the deviant third landmark. Relying on two instead of three landmarks, however, produces responses that are scattered over a larger area, leading to higher variability. To model the breakdown of integration with increasing conflict, the probabilistic model based on a simple Gaussian distribution used for Experiment 1 needed a slight extension in the form of a mixture of Gaussians. All parameters of the mixture model were fixed based on the homing performance in the baseline condition, which contained a single landmark. This way, we found that the mixture model could predict the integration performance and its breakdown with no additional free parameters. Overall, these data suggest that humans use similar optimal probabilistic strategies in visual and auditory navigation, integrating landmark information to improve homing precision while balancing homing precision with homing accuracy.
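
A minimal sketch of how a mixture-of-Gaussians (causal-inference style) model can yield integration under small conflicts and breakdown under large ones; the 1D setup, the flat outlier likelihood, and all numbers are simplifying assumptions, not the paper's fitted model:

```python
# Sketch of the mixture-of-Gaussians idea: small landmark conflicts are
# integrated, large conflicts cause the deviant cue to be ignored.
# All parameters are illustrative, not fitted values from the study.
import numpy as np
from scipy.stats import norm

def mixture_estimate(x_consistent, x_deviant, sigma, p_same=0.7):
    """Posterior-mean estimate over 'integrate' vs 'ignore deviant' (1D)."""
    integrated = (x_consistent + x_deviant) / 2.0          # equal reliabilities
    # Likelihood that both cues share a common source:
    like_same = norm.pdf(x_deviant - x_consistent, scale=np.sqrt(2) * sigma)
    like_diff = 1.0   # flat likelihood for independent sources (sketch only)
    w = (p_same * like_same) / (p_same * like_same + (1 - p_same) * like_diff)
    return w * integrated + (1 - w) * x_consistent

for conflict in [0.1, 0.5, 2.0, 5.0]:
    # Estimate follows the average for small conflicts, snaps back to the
    # consistent cues as the conflict grows:
    print(conflict, round(mixture_estimate(0.0, conflict, sigma=0.5), 3))
```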

5.
Curr Biol ; 26(4): 470-82, 2016 Feb 22.
Article in English | MEDLINE | ID: mdl-26877083

ABSTRACT

Nesting insects perform learning flights to establish a visual representation of the nest environment that allows them to subsequently return to the nest. It has remained unclear what insects learn during these flights and when they learn it, what determines the flights' overall structure, and, in particular, how what is learned is used to guide an insect's return. We analyzed learning flights in ground-nesting wasps (Sphecidae: Cerceris australis) using synchronized high-speed cameras to determine 3D head position and orientation. Wasps move along arcs centered on the nest entrance, whereby rapid changes in gaze ensure that the nest is seen at lateral positions in the left or the right visual field. Between saccades, the wasps translate along arc segments around the nest while keeping gaze fixed. We reconstructed panoramic views along the paths of learning and homing wasps to test specific predictions about what wasps learn during their learning flights and how they use this information to guide their return. Our evidence suggests that wasps monitor changing views during learning flights and use the differences they experience relative to previously encountered views to decide when to begin a new arc. Upon encountering learned views, homing wasps move left or right, depending on the nest direction associated with that view, and in addition appear to be guided by features on the ground close to the nest. We test our predictions on how wasps use views for homing by simulating homing flights of a virtual wasp guided by views rendered in a 3D model of a natural wasp environment.
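
View-based decisions of this kind are often modelled with a rotational image difference function over panoramic snapshots; a minimal sketch under that assumption, using synthetic 1D brightness panoramas rather than the study's rendered views:

```python
# Sketch of panoramic view matching: the rotational image difference
# function (RIDF) compares a current panorama to a stored one at every
# azimuthal shift. Panoramas here are synthetic 1-D brightness arrays.
import numpy as np

def ridf(current, memory):
    """RMS difference for every azimuthal rotation of `current`."""
    return np.array([
        np.sqrt(np.mean((np.roll(current, s) - memory) ** 2))
        for s in range(len(current))
    ])

rng = np.random.default_rng(0)
memory = rng.random(360)                      # stored view, 1 deg resolution
current = np.roll(memory, 40) + 0.05 * rng.random(360)   # rotated, noisy view
print(np.argmin(ridf(current, memory)))       # ~320: rotation aligning the views
```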


Subject(s)
Homing Behavior , Learning , Visual Perception , Wasps/physiology , Animals , Orientation
6.
Neurosci Lett ; 617: 72-5, 2016 Mar 23.
Article in English | MEDLINE | ID: mdl-26854843

ABSTRACT

For navigation through our environment, we can rely on information from various modalities, such as vision and audition. This information enables us, for example, to estimate our position relative to the starting point, or to integrate velocity and acceleration signals from the vestibular organ and proprioception to estimate the displacement due to self-motion. To better understand the mechanisms that underlie human navigation, we analysed the performance of participants in an angle-walking task in the absence of visual and auditory signals. To this end, we guided them along paths of different lengths and asked them to turn by an angle of ±90°. We found significant biases in turn angles, i.e., systematic deviations from the correct angle, and these were characteristic of individual participants. Varying path length, however, had little effect on turn accuracy and precision. To check whether this idiosyncrasy was persistent over time and present in another type of walking task, we performed a second experiment several weeks later. Here, the same participants were guided to walk angles of varying amplitude. We then asked them to judge whether they had walked an angle larger or smaller than 90° in a two-alternative forced-choice paradigm. The personal bias was highly correlated between the two experiments, even though they were conducted weeks apart. The presence of a persistent bias in walked angles in the absence of external directional cues indicates a possible error component in navigation that is surprisingly stable over time and idiosyncratic.
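
The two-alternative forced-choice judgements can be summarized by fitting a psychometric function whose midpoint (the point of subjective equality) measures the personal bias; a sketch with simulated responses, not the study's data:

```python
# Sketch of estimating an individual's turn-angle bias from 2AFC judgements
# ("was the walked angle larger than 90 deg?") with a logistic psychometric
# function. All data below are simulated.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(angle, pse, slope):
    """P(respond 'larger') as a logistic in walked angle; PSE = bias point."""
    return 1.0 / (1.0 + np.exp(-(angle - pse) / slope))

angles = np.array([70, 80, 85, 90, 95, 100, 110], dtype=float)
true_bias = 95.0                                # participant feels 95 deg as 90
p_larger = psychometric(angles, true_bias, 5.0)
rng = np.random.default_rng(1)
responses = rng.binomial(20, p_larger) / 20.0   # 20 simulated trials per angle

(pse, slope), _ = curve_fit(psychometric, angles, responses, p0=[90.0, 5.0])
print(pse)   # recovered point of subjective equality, i.e. the walking bias
```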


Subject(s)
Spatial Navigation , Adult , Choice Behavior , Discrimination, Psychological , Female , Humans , Male , Middle Aged , Time Factors , Young Adult
7.
PLoS One ; 10(9): e0135020, 2015.
Article in English | MEDLINE | ID: mdl-26352836

ABSTRACT

Changes in flight direction in flying insects are largely due to roll, yaw and pitch rotations of their body. Head orientation is stabilized most of the time by counter-rotation. Here, we use high-speed video to analyse head and body movements of the bumblebee Bombus terrestris while approaching and departing from a food source located between three landmarks in an indoor flight arena. The flight paths consist of almost straight flight segments that are interspersed with rapid turns. These short and fast yaw turns ("saccades") are usually accompanied by even faster head yaw turns that change gaze direction. Since a large part of image rotation is thereby reduced to brief instants of time, this behavioural pattern facilitates depth perception from visual motion parallax during the intersaccadic intervals. A detailed analysis of the fine structure of the bees' head-turning movements shows that the time course of single head saccades is highly stereotypical. We find a consistent relationship between the duration, peak velocity and amplitude of saccadic head movements, which in its main characteristics resembles the so-called "saccadic main sequence" in humans. The fact that bumblebee head saccades are highly stereotyped, as they are in humans, may hint at a common principle in which fast and precise motor control is used to reliably minimize the time during which the retinal image moves.
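
A "main sequence" relationship of this kind is commonly quantified by fitting peak velocity as a saturating function of amplitude; a sketch on simulated values (the functional form and all parameters are assumptions, not the paper's fit):

```python
# Sketch of quantifying a saccadic main sequence: peak velocity grows and
# saturates with saccade amplitude. Data below are simulated, not bee data.
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(amplitude, v_max, a_63):
    """Peak velocity (deg/s) saturating with amplitude (deg)."""
    return v_max * (1.0 - np.exp(-amplitude / a_63))

amp = np.linspace(5, 60, 12)
rng = np.random.default_rng(2)
v_peak = main_sequence(amp, 2000.0, 25.0) * (1 + 0.05 * rng.standard_normal(12))

(v_max, a_63), _ = curve_fit(main_sequence, amp, v_peak, p0=[1500.0, 20.0])
print(v_max, a_63)   # recovered saturation velocity and amplitude constant
```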


Subject(s)
Bees/anatomy & histology , Bees/physiology , Animals , Behavior, Animal , Flight, Animal , Head Movements , Orientation , Stereotyped Behavior , Video Recording
8.
Front Behav Neurosci ; 8: 335, 2014.
Article in English | MEDLINE | ID: mdl-25309374

ABSTRACT

Bees use visual memories to find the spatial location of previously learnt food sites. Characteristic learning flights help them acquire these memories at newly discovered foraging locations, where landmarks, salient objects in the vicinity of the goal location, can play an important role in guiding the animal's homing behavior. Although behavioral experiments have shown that bees can use a variety of visual cues to distinguish objects as landmarks, the question of how landmark features are encoded by the visual system is still open. Recently, it was shown that motion cues are sufficient to allow bees to localize their goal using landmarks that can hardly be discriminated from the background texture. Here, we tested the hypothesis that motion-sensitive neurons in the bee's visual pathway provide information about such landmarks during a learning flight and might thus play a role in goal localization. We tracked learning flights of free-flying bumblebees (Bombus terrestris) in an arena with distinct visual landmarks, reconstructed the visual input during these flights, and replayed ego-perspective movies to tethered bumblebees while recording the activity of direction-selective wide-field neurons in their optic lobe. By comparing neuronal responses during a typical learning flight with responses to targeted modifications of landmark properties in this movie, we demonstrate that these objects are indeed represented in the bee's visual motion pathway. We find that object-induced responses vary little with object texture, which is in agreement with behavioral evidence. These neurons thus convey information about landmark properties that are useful for view-based homing.

9.
Proc Natl Acad Sci U S A ; 110(46): 18686-91, 2013 Nov 12.
Article in English | MEDLINE | ID: mdl-24167269

ABSTRACT

Landing is a challenging aspect of flight because, to land safely, speed must be decreased to a value close to zero at touchdown. The mechanisms by which animals achieve this remain unclear. When landing on horizontal surfaces, honey bees control their speed by holding constant the rate of front-to-back image motion (optic flow) generated by the surface as they reduce altitude. As inclination increases, however, this simple pattern of optic flow becomes increasingly complex. How do honey bees control speed when landing on surfaces that have different orientations? To answer this, we analyze the trajectories of honey bees landing on a vertical surface that produces various patterns of motion. We find that landing honey bees control their speed by holding the rate of expansion of the image constant. We then test and confirm this hypothesis rigorously by analyzing landings when the apparent rate of expansion generated by the surface is manipulated artificially. This strategy ensures that speed is reduced, gradually and automatically, as the surface is approached. We then develop a mathematical model of this strategy and show that it can effectively be used to guide smooth landings on surfaces of any orientation, including horizontal surfaces. This biological strategy for guiding landings requires knowledge of neither the distance to the surface nor the speed at which it is approached. The simplicity and generality of this landing strategy suggest that it is likely to be exploited by other flying animals, and make it ideal for implementation in the guidance systems of flying robots.
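
The constant-expansion-rate strategy has a compact mathematical consequence: if the relative rate of expansion r = v/d is held fixed, then d' = -r·d, so distance and approach speed both decay exponentially and speed reaches zero smoothly at touchdown. A minimal sketch with illustrative parameters:

```python
# Sketch of the constant-expansion-rate landing law: holding the relative
# rate of image expansion r = v/d constant gives d'(t) = -r*d(t), so both
# distance and speed decay exponentially toward touchdown.
# Initial distance and expansion rate below are illustrative.
import numpy as np

def simulate_landing(d0=2.0, r=1.5, dt=0.01, t_end=3.0):
    """Return time, distance and speed traces for d' = -r*d."""
    t = np.arange(0.0, t_end, dt)
    d = d0 * np.exp(-r * t)     # closed-form solution of d' = -r*d
    v = r * d                   # speed shrinks in proportion to distance
    return t, d, v

t, d, v = simulate_landing()
print(d[-1], v[-1])   # both near zero: a smooth, automatic deceleration
```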


Subject(s)
Bees/physiology , Flight, Animal/physiology , Models, Biological , Motion Perception/physiology , Psychomotor Performance/physiology , Animals , Linear Models , Photic Stimulation
10.
Front Neural Circuits ; 6: 108, 2012.
Article in English | MEDLINE | ID: mdl-23269913

ABSTRACT

Insects such as flies or bees, with their miniature brains, are able to control highly aerobatic flight manoeuvres and to solve spatial vision tasks, such as avoiding collisions with obstacles, landing on objects, or even localizing a previously learnt, inconspicuous goal on the basis of environmental cues. In solving such spatial tasks, these insects still outperform man-made autonomous flying systems. To accomplish this extraordinary performance, flies and bees have been shown to actively shape, through characteristic behavioral actions, the dynamics of the image flow on their eyes ("optic flow"). The neural processing of information about the spatial layout of the environment is greatly facilitated by segregating the rotational from the translational optic flow component through a saccadic flight and gaze strategy. This active vision strategy thus enables the nervous system to solve apparently complex spatial vision tasks in a particularly efficient and parsimonious way. The key idea of this review is that biological agents, such as flies or bees, acquire at least part of their strength as autonomous systems through active interactions with their environment, and not by simply processing passively gained information about the world. These agent-environment interactions lead to adaptive behavior in surroundings of a wide range of complexity. Animals with even tiny brains, such as insects, are capable of performing extraordinarily well in their behavioral contexts by making optimal use of the closed action-perception loop. Model simulations and robotic implementations show that the smart biological mechanisms of motion computation and visually guided flight control might be helpful in finding technical solutions, for example, when designing micro air vehicles carrying a miniaturized, low-weight on-board processor.

11.
J Exp Biol ; 215(Pt 14): 2501-14, 2012 Jul 15.
Article in English | MEDLINE | ID: mdl-22723490

ABSTRACT

Blowfly flight consists of two main components, saccadic turns and intervals of mostly straight gaze direction, although, as a consequence of inertia, flight trajectories usually change direction smoothly. We investigated how flight behavior changes depending on the surroundings, and how saccadic turns and intersaccadic translational movements might be controlled, in arenas of different width with and without obstacles. Blowflies do not fly in straight trajectories, even when traversing straight flight arenas; rather, they fly in meandering trajectories. Flight speed and the amplitude of meanders increase with arena width. Although saccade duration is largely constant, peak angular velocity and the succession of turns in either direction are variable and depend on the visual surroundings. Saccade rate and amplitude also vary with arena layout and are correlated with the 'time-to-contact' to the arena wall. We provide evidence that both saccade and velocity control rely to a large extent on the intersaccadic optic flow generated in eye regions looking well in front of the fly, rather than in the lateral visual field, where the optic flow, at least during forward flight, tends to be strongest.
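
The 'time-to-contact' quantity used in the correlation analysis is simply the distance to the wall along the approach direction divided by the speed toward it; a sketch with hypothetical planar geometry:

```python
# Sketch of time-to-contact (tau) to an arena wall: remaining distance
# along the approach axis divided by the approach speed.
# Positions, velocities and wall location are hypothetical examples.
import numpy as np

def time_to_contact(position, velocity, wall_x):
    """Tau (s) to a wall at x = wall_x for planar position/velocity (m, m/s)."""
    vx = velocity[0]
    if vx <= 0:
        return np.inf                       # not approaching the wall
    return (wall_x - position[0]) / vx

print(time_to_contact(np.array([0.1, 0.0]),
                      np.array([0.5, 0.2]),
                      wall_x=0.4))          # -> 0.6 s until contact
```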


Subject(s)
Diptera/physiology , Environment , Flight, Animal/physiology , Optic Flow/physiology , Animals , Saccades/physiology , Time Factors
12.
Article in English | MEDLINE | ID: mdl-22279431

ABSTRACT

Honeybees use visual cues to relocate profitable food sources and their hive. What bees see while navigating depends on the appearance of the cues and on the bee's current position, orientation, and movement relative to them. Here we analyze the detailed flight behavior during the localization of a goal surrounded by cylinders that are characterized either by high contrast in luminance and texture or mostly by motion contrast relative to the background. By relating flight behavior to the nature of the information available from these landmarks, we aim to identify behavioral strategies that facilitate the processing of visual information during goal localization. We decompose flight behavior into prototypical movements using clustering algorithms in order to reduce behavioral complexity. The resulting prototypical movements reflect the honeybee's saccadic flight pattern, which largely separates rotational from translational movements. During phases of translational movement between fast saccadic rotations, the bees can gain information about the 3D layout of their environment from the translational optic flow. The prototypical movements reveal the prominent role of sideways and up- or downward movements, which can help bees gather information about objects, particularly in the frontal visual field. We find that the occurrence of specific prototypes depends on the bees' distance from the landmarks and the feeder, and that changing the texture of the landmarks evokes different prototypical movements. The adaptive use of different behavioral prototypes shapes the visual input and can facilitate information processing in the bees' visual system during local navigation.
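
Decomposing trajectories into prototypical movements can be sketched as k-means clustering of short flight segments described by their velocity components; the feature set and stand-in data below are assumptions for illustration, not the study's actual pipeline:

```python
# Sketch of decomposing flight into prototypical movements: cluster short
# segments described by (forward, sideways, vertical, yaw) velocities.
# The feature matrix is random stand-in data, not recorded trajectories.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Hypothetical segments x features: [v_forward, v_side, v_vertical, yaw_rate]
segments = rng.standard_normal((500, 4))

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(segments)
prototypes = kmeans.cluster_centers_     # one prototypical movement per cluster
print(prototypes.round(2))
```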

13.
Front Behav Neurosci ; 5: 20, 2011.
Article in English | MEDLINE | ID: mdl-21541258

ABSTRACT

Honeybees visually pinpoint the location of a food source using landmarks. Studies on the role of visual memories have suggested that bees approach the goal by finding a close match between their current view and a memorized view of the goal location. The most relevant landmark features for this matching process seem to be their retinal positions, their size as defined by their edges, and their color. Recently, we showed that honeybees can use landmarks that are statically camouflaged, suggesting that motion cues are relevant as well. It is currently unclear how bees weight these different landmark features when accomplishing navigational tasks, and whether this depends on their saliency. Since natural objects are often distinguished by their texture, we investigate the behavioral relevance and the interplay of the spatial configuration and the texture of landmarks. We show that landmark texture is a feature that bees memorize, and that the opportunity to identify landmarks by their texture improves the bees' navigational performance. Landmark texture is weighted more strongly than landmark configuration when it provides the bees with positional information and when the texture is salient. In the vicinity of a landmark, honeybees change their flight behavior according to its texture.

14.
J Exp Biol ; 213(Pt 17): 2913-23, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20709919

ABSTRACT

Visual landmarks guide humans and animals, including insects, to a goal location. Insects, with their miniature brains, have evolved a simple strategy to find their nests or profitable food sources: they approach a goal by finding a close match between the current view and a memorised retinotopic representation of the landmark constellation around the goal. Recent implementations of such a matching scheme use raw panoramic images ('image matching') and show that it is well suited to work on robots, even in natural environments. However, this matching scheme works only if relevant landmarks can be detected by their contrast and texture. We therefore tested how honeybees perform in localising a goal if the landmarks can hardly be distinguished from the background by such cues. We recorded the honeybees' flight behaviour with high-speed cameras and compared their search behaviour with computer simulations. We show that honeybees are able to use landmarks that have the same contrast and texture as the background, and we suggest that the bees use relative motion cues between the landmark and the background. These cues are generated on the eyes when the bee moves in a characteristic way in the vicinity of the landmarks. This extraordinary navigation performance can be explained by a matching scheme that includes snapshots based on optic flow amplitudes ('optic flow matching'). This new matching scheme provides a robust strategy for navigation, as it depends primarily on the depth structure of the environment.
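
The proposed 'optic flow matching' can be sketched as snapshot comparison on flow amplitudes instead of raw brightness; the per-viewing-direction flow arrays below are synthetic placeholders, not reconstructed bee input:

```python
# Sketch of 'optic flow matching': compare a stored snapshot of optic-flow
# amplitudes with the currently experienced flow field, rather than
# comparing raw image brightness. Arrays are synthetic placeholders.
import numpy as np

def flow_mismatch(current_flow, snapshot_flow):
    """RMS difference between flow-amplitude snapshots (lower = closer match)."""
    return np.sqrt(np.mean((current_flow - snapshot_flow) ** 2))

rng = np.random.default_rng(4)
snapshot = rng.random(360)                      # flow amplitudes at the goal
near_goal = snapshot + 0.02 * rng.random(360)   # similar depth structure
far_away = rng.random(360)                      # unrelated depth structure
print(flow_mismatch(near_goal, snapshot), flow_mismatch(far_away, snapshot))
```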


Subject(s)
Appetitive Behavior/physiology , Bees/physiology , Goals , Optic Flow/physiology , Animals , Flight, Animal/physiology
15.
Proc Biol Sci ; 277(1689): 1899-906, 2010 Jun 22.
Article in English | MEDLINE | ID: mdl-20147329

ABSTRACT

Honeybees turn their thorax and thus their flight motor to change direction or to fly sideways. If the bee's head were fixed to its thorax, such movements would have great impact on vision. Head movements independent of thorax orientation can stabilize gaze and thus play an important and active role in shaping the structure of the visual input the animal receives. Here, we investigate how gaze and flight control interact in a homing task. We use high-speed video equipment to record the head and body movements of honeybees approaching and departing from a food source that was located between three landmarks in an indoor flight arena. During these flights, the bees' trajectories consist of straight flight segments combined with rapid turns. These short and fast yaw turns ('saccades') are in most cases accompanied by even faster head yaw turns that start about 8 ms earlier than the body saccades. Between saccades, gaze stabilization leads to a behavioural elimination of rotational components from the optical flow pattern, which facilitates depth perception from motion parallax.


Subject(s)
Bees/physiology , Behavior, Animal/physiology , Flight, Animal/physiology , Motor Activity/physiology , Ocular Physiological Phenomena , Animals , Head , Video Recording
16.
Proc Biol Sci ; 277(1685): 1209-17, 2010 Apr 22.
Article in English | MEDLINE | ID: mdl-20007175

ABSTRACT

As animals travel through the environment, powerful reflexes help stabilize their gaze by actively maintaining head and eyes in a level orientation. Gaze stabilization reduces motion blur and prevents image rotations. It also assists in depth perception based on translational optic flow. Here we describe side-to-side flight manoeuvres in honeybees and investigate how the bees' gaze is stabilized against rotations during these movements. We used high-speed video equipment to record flight paths and head movements in honeybees visiting a feeder. We show that during their approach, bees generate lateral movements with a median amplitude of about 20 mm. These movements occur with a frequency of up to 7 Hz and are generated by periodic roll movements of the thorax with amplitudes of up to ±60°. During such thorax roll oscillations, the head is held close to horizontal, thereby minimizing rotational optic flow. By having bees fly through an oscillating, patterned drum, we show that head stabilization is based mainly on visual motion cues. Bees exposed to a continuously rotating drum, however, hold their head fixed at an oblique angle. This result shows that although gaze stabilization is driven by visual motion cues, it is limited by other mechanisms, such as the dorsal light response or gravity reception.


Subject(s)
Bees/physiology , Flight, Animal/physiology , Visual Perception/physiology , Animals , Behavior, Animal , Movement
18.
Biol Lett ; 4(1): 16-8, 2008 Feb 23.
Article in English | MEDLINE | ID: mdl-18029300

ABSTRACT

Lateralization is a well-described phenomenon in humans and other vertebrates and there are interesting parallels across a variety of different vertebrate species. However, there are only a few studies of lateralization in invertebrates. In a recent report, we showed lateralization of olfactory learning in the honeybee (Apis mellifera). Here, we investigate lateralization of another sensory modality, vision. By training honeybees on a modified version of a visual proboscis extension reflex task, we find that bees learn a colour stimulus better with their right eye.


Subject(s)
Bees/physiology , Functional Laterality/physiology , Learning/physiology , Vision, Ocular/physiology , Animals , Behavior, Animal/physiology , Color , Reward , Visual Perception/physiology
19.
Article in English | MEDLINE | ID: mdl-17333206

ABSTRACT

The pursuit system controlling chasing behaviour in male blowflies has to cope with extremely fast and dynamically changing visual input. An identified male-specific visual neuron called Male Lobula Giant 1 (MLG1) is presumably one major element of this pursuit system. Previous behavioural and modelling analyses have indicated that angular target size, retinal target position and target velocity are relevant input variables of the pursuit system. To investigate whether MLG1 specifically represents any of these visual parameters, we obtained in vivo intracellular recordings while replaying optical stimuli that simulate the visual signals received by a male fly during chasing manoeuvres. On the basis of these naturalistic stimuli, we find that MLG1 shows distinct direction sensitivity and is depolarised if the target motion contains an upward component. The responses of MLG1 are jointly determined by the retinal position, the speed and direction, and the duration of target motion. Coherence analysis reveals that although retinal target size and position are in some way inherent in the responses of MLG1, we find no confirmation of the hypothesis that MLG1 encodes any of these parameters exclusively.
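
Coherence analysis of this kind estimates, per frequency, how linearly a stimulus parameter is reflected in the response; a sketch using scipy.signal.coherence on synthetic signals rather than the recordings:

```python
# Sketch of coherence analysis: how strongly is a stimulus parameter
# (e.g. retinal target size) linearly represented in a response signal?
# Both signals below are synthetic stand-ins, not recorded data.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                                # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)
stimulus = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
response = 0.8 * stimulus + rng.standard_normal(t.size)   # noisy encoding

f, coh = coherence(stimulus, response, fs=fs, nperseg=1024)
print(f[np.argmax(coh)], coh.max())        # coherence peaks near 5 Hz
```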


Subject(s)
Diptera/physiology , Sexual Behavior, Animal/physiology , Animals , Electrophysiology , Male , Neurons/physiology , Neurons, Afferent/physiology , Photic Stimulation , Time Factors
20.
J Exp Biol ; 208(Pt 8): 1563-72, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15802679

ABSTRACT

During courtship, male blowflies perform aerobatic pursuits that rank among the fastest visual behaviours observable in nature. The viewing strategies during pursuit behaviour of blowflies appear to be very similar to eye movements during pursuit in primates: a combination of smooth pursuit and catch-up saccades. Whereas in primates these two components of pursuit eye movements are thought to be controlled by distinct oculomotor subsystems, we present evidence that in blowflies both types of pursuit responses can be produced by a single control system. In numerical simulations of chasing behaviour, the proposed control system generates qualitatively the same behaviour as real blowflies. As a consequence of time constants in the control system, mimicking neuronal processing times, muscular dynamics and inertia, saccade-like changes in gaze direction are generated if the target is displaced rapidly on the pursuing fly's retina. In the behavioural context of visual pursuit, saccade-like changes of the fly's gaze direction can thus be parsimoniously explained as an emergent property of a smooth pursuit system, without assuming a priori different mechanisms underlying smooth and saccadic tracking behaviour.
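
The central claim, that a single smooth-pursuit controller with realistic time constants yields saccade-like turns, can be sketched as a delayed first-order feedback loop; the gain, delay and time constant below are illustrative assumptions, not fitted blowfly values:

```python
# Sketch of the single-control-system idea: a smooth-pursuit loop with a
# short processing delay and a first-order motor lag produces saccade-like
# catch-up turns when the target jumps rapidly on the retina.
# All parameters are illustrative, not fitted to blowfly data.
import numpy as np

dt, T = 0.001, 1.0                         # time step and duration (s)
delay_steps = int(0.02 / dt)               # 20 ms neuronal processing delay
tau = 0.03                                 # motor/inertia time constant (s)
gain = 15.0                                # commanded turn rate per rad of error

n = int(T / dt)
target = np.zeros(n)
target[n // 2:] = 1.0                      # target jumps by 1 rad mid-trial
gaze = np.zeros(n)
omega = 0.0                                # current angular velocity (rad/s)
for i in range(1, n):
    error = target[max(i - delay_steps, 0)] - gaze[i - 1]  # delayed retinal error
    omega += dt * (gain * error - omega) / tau             # first-order motor lag
    gaze[i] = gaze[i - 1] + dt * omega

print(round(gaze[n // 2], 3), round(gaze[-1], 3))   # rapid, saccade-like catch-up
```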


Subject(s)
Diptera/physiology , Models, Neurological , Pursuit, Smooth/physiology , Saccades/physiology , Visual Perception/physiology , Animals , Biomechanical Phenomena , Computer Simulation , Male