1.
Cereb Cortex ; 25(10): 3932-52, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25491118

ABSTRACT

A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into the eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with the delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activity) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of the eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level but even at the single-cell level. We propose that an imperfect visual-motor transformation occurs during the brief memory interval between perception and action, and that further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in subcortical areas.


Subject(s)
Frontal Lobe/physiology , Head/physiology , Neurons/physiology , Psychomotor Performance/physiology , Saccades , Visual Perception/physiology , Action Potentials , Animals , Female , Macaca mulatta
2.
J Neurophysiol ; 107(2): 573-90, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21994269

ABSTRACT

The object of this study was to model the relationship between neck electromyography (EMG) and three-dimensional (3-D) head kinematics during gaze behavior. In two monkeys, we recorded 3-D gaze, head orientation, and bilateral EMG activity in the sternocleidomastoid, splenius capitis, complexus, biventer cervicis, rectus capitis posterior major, and occipital capitis inferior muscles. Head-unrestrained animals fixated and made gaze saccades between targets within a 60° × 60° grid. We performed a stepwise regression in which polynomial model terms were retained/rejected based on their tendency to increase/decrease a cross-validation-based measure of model generalizability. This revealed several results that could not have been predicted from knowledge of musculoskeletal anatomy. During head holding, EMG activity in most muscles was related to horizontal head orientation, whereas fewer muscles correlated with vertical head orientation and none with small random variations in head torsion. A fourth-order polynomial model, with horizontal head orientation as the only independent variable, generalized nearly as well as higher order models. For head movements, we added time-varying linear and nonlinear perturbations in velocity and acceleration to the previously derived static (head holding) models. The static models still explained most of the EMG variance, but the additional motion terms, which included horizontal, vertical, and torsional contributions, significantly improved the results. Several coordinate systems were used for both static and dynamic analyses, with Fick coordinates showing a marginal (nonsignificant) advantage. Thus, during gaze fixations, recruitment within the neck muscles from which we recorded contributed primarily to position-dependent horizontal orientation terms in our data set, with more complex multidimensional contributions emerging during the head movements that accompany gaze shifts. These are crucial components of the late neuromuscular transformations in a complete model of the 3-D head-neck system and should help constrain the study of premotor signals for head control during gaze behaviors.
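The stepwise, cross-validation-driven term selection described in this abstract can be sketched in miniature. This is an illustrative 1-D toy, not the authors' code: the synthetic "EMG" data, the candidate degrees, and the fold count are all assumptions.

```python
import random

def fit_poly(xs, ys, degrees):
    """Least-squares polynomial fit via normal equations (fine for small k)."""
    k = len(degrees)
    X = [[x ** d for d in degrees] for x in xs]
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * y for row, y in zip(X, ys)) for i in range(k)]
    M = [A[i] + [b[i]] for i in range(k)]
    for c in range(k):  # Gaussian elimination with partial pivoting
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for j in range(c, k + 1):
                M[r][j] -= f * M[c][j]
    coef = [0.0] * k
    for i in range(k - 1, -1, -1):
        coef[i] = (M[i][k] - sum(M[i][j] * coef[j] for j in range(i + 1, k))) / M[i][i]
    return coef

def cv_error(xs, ys, degrees, folds=5):
    """Cross-validated mean squared error: the generalizability criterion."""
    err, n = 0.0, len(xs)
    for f in range(folds):
        train = [i for i in range(n) if i % folds != f]
        coef = fit_poly([xs[i] for i in train], [ys[i] for i in train], degrees)
        for i in range(n):
            if i % folds == f:  # held-out trials for this fold
                pred = sum(c * xs[i] ** d for c, d in zip(coef, degrees))
                err += (ys[i] - pred) ** 2
    return err / n

random.seed(1)
xs = [random.uniform(-30.0, 30.0) for _ in range(80)]  # head orientation (deg)
ys = [5.0 + 0.4 * x + 0.002 * x ** 3 + random.gauss(0.0, 2.0) for x in xs]

degrees = [0]  # start from a constant model
best = cv_error(xs, ys, degrees)
for d in range(1, 5):  # candidate terms up to fourth order
    trial = cv_error(xs, ys, degrees + [d])
    if trial < best:  # retain a term only if it improves generalizability
        degrees, best = degrees + [d], trial
print(degrees)
```

With this synthetic linear-plus-cubic data, the procedure keeps the terms that actually generalize and rejects the rest, mirroring the retain/reject rule in the abstract.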


Subject(s)
Fixation, Ocular/physiology , Head Movements/physiology , Muscle Contraction/physiology , Neck Muscles/innervation , Psychomotor Performance/physiology , Acceleration , Animals , Biomechanical Phenomena , Biophysics , Electric Stimulation , Electromyography/methods , Female , Macaca mulatta , Models, Biological , Orientation , Regression Analysis , Time Factors
3.
J Neurosci ; 31(50): 18313-26, 2011 Dec 14.
Article in English | MEDLINE | ID: mdl-22171035

ABSTRACT

A sensorimotor neuron's receptive field and its frame of reference are easily conflated within the natural variability of spatial behavior. Here, we capitalized on such natural variations in 3-D eye and head positions during head-unrestrained gaze shifts to visual targets in two monkeys to determine whether intermediate/deep layer superior colliculus (SC) receptive fields code visual targets or gaze kinematics, within four different frames of reference. Visuomotor receptive fields were either characterized during gaze shifts to visual targets from a central fixation position (32 units) or were partially characterized from each of three initial fixation points (31 units). Natural variations of initial 3-D gaze and head orientation (including torsion) provided spatial separation between four different coordinate frame models (space, head, eye, fixed-vector relative to fixation), whereas natural saccade errors provided spatial separation between target and gaze positions. Using a new statistical method based on predictive sum-of-squares, we found that in our population of 63 neurons (1) receptive field fits to target positions were significantly better than fits to actual gaze shift locations and (2) eye-centered models gave significantly better fits than the head or space frame. An intermediate frames analysis confirmed that individual neuron fits were distributed in target-in-eye coordinates. Gaze position "gain" effects with the spatial tuning required for a 3-D reference frame transformation were significant in 23% (7/31) of neurons tested. We conclude that the SC primarily represents gaze targets relative to the eye but also carries early signatures of the 3-D sensorimotor transformation.


Subject(s)
Neurons/physiology , Saccades/physiology , Superior Colliculi/physiology , Visual Fields/physiology , Visual Perception/physiology , Animals , Female , Macaca mulatta , Orientation/physiology , Space Perception/physiology
4.
J Neurophysiol ; 103(1): 117-39, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19846615

ABSTRACT

Remapping of gaze-centered target-position signals across saccades has been observed in the superior colliculus and several cortical areas. It is generally assumed that this remapping is driven by saccade-related signals. What is not known is how the different potential forms of this signal (i.e., visual, visuomotor, or motor) might influence this remapping. We trained a three-layer recurrent neural network to update target position (represented as a "hill" of activity in a gaze-centered topographic map) across saccades, using discrete time steps and the backpropagation-through-time algorithm. Updating was driven by an efference copy of one of three saccade-related signals: a transient visual response to the saccade target in two-dimensional (2-D) topographic coordinates (Vtop), a temporally extended motor burst in 2-D topographic coordinates (Mtop), or a 3-D eye velocity signal in brain stem coordinates (EV). The Vtop model produced presaccadic remapping in the output layer, with a "jumping hill" of activity and intrasaccadic suppression. The Mtop model also produced presaccadic remapping with a dispersed moving hill of activity that closely reproduced the quantitative results of Sommer and Wurtz. The EV model produced a coherent moving hill of activity but failed to produce presaccadic remapping. When eye velocity and a topographic (Vtop or Mtop) updater signal were used together, the remapping relied primarily on the topographic signal. An analysis of the hidden layer activity revealed that the transient remapping was highly dispersed across hidden-layer units in both Vtop and Mtop models but tightly clustered in the EV model. These results show that the nature of the updater signal influences both the mechanism and final dynamics of remapping. Taken together with the currently known physiology, our simulations suggest that different brain areas might rely on different signals and mechanisms for updating that should be further distinguishable through currently available single- and multiunit recording paradigms.
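The contrast between a "jumping hill" and a "moving hill" of activity can be illustrated on a toy 1-D topographic map. Map size, hill width, saccade metrics, and step count below are arbitrary assumptions, not the network's actual parameters.

```python
import math

N = 61  # 1-D topographic map; unit i codes position i - 30

def hill(center, width=3.0):
    """Gaussian 'hill' of activity centered on a map position."""
    return [math.exp(-((i - 30 - center) ** 2) / (2 * width ** 2)) for i in range(N)]

def peak(act):
    """Map position of the most active unit."""
    return max(range(N), key=lambda i: act[i]) - 30

target, saccade = 10, 14  # remembered target at +10; saccade of +14

# "Jumping hill" (Vtop-style): activity reappears at the updated site in one step
jumped = hill(target - saccade)

# "Moving hill" (EV-style): a velocity drive shifts the hill incrementally
centers = []
c = float(target)
for step in range(7):  # seven discrete time steps spanning the saccade
    c -= saccade / 7.0
    centers.append(peak(hill(c)))
moved = hill(c)

print(peak(jumped), centers)
```

Both styles end with the hill at the correctly updated position (target minus saccade); they differ only in the intrasaccadic trajectory, which is the distinction the three models in the abstract make observable.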


Subject(s)
Brain Stem/physiology , Neural Networks, Computer , Psychomotor Performance/physiology , Saccades/physiology , Algorithms , Eye , Motor Activity/physiology , Time Factors , Visual Perception/physiology
5.
J Neurosci Methods ; 180(1): 171-84, 2009 May 30.
Article in English | MEDLINE | ID: mdl-19427544

ABSTRACT

Natural movements towards a target show metric variations between trials. When movements combine contributions from multiple body-parts, such as head-unrestrained gaze shifts involving both eye and head rotation, the individual body-part movements may vary even more than the overall movement. The goal of this investigation was to develop a general method for both mapping sensory or motor response fields of neurons and determining their intrinsic reference frames, where these movement variations are actually utilized rather than avoided. We used head-unrestrained gaze shifts, three-dimensional (3D) geometry, and naturalistic distributions of eye and head orientation to explore the theoretical relationship between the intrinsic reference frame of a sensorimotor neuron's response field and the coherence of the activity when this response field is fitted non-parametrically using different kernel bandwidths in different reference frames. We measure how well the regression surface predicts unfitted data using the PREdictive Sum-of-Squares (PRESS) statistic. The reference frame with the smallest PRESS statistic was categorized as the intrinsic reference frame if the PRESS statistic was significantly larger in other reference frames. We show that the method works best when targets are at regularly spaced positions within the response field's active region, and that the method identifies the best kernel bandwidth for response field estimation. We describe how gain-field effects may be dealt with, and how to test neurons within a population that fall on a continuum between specific reference frames. This method may be applied to any spatially coherent single-unit activity related to sensation and/or movement during naturally varying behaviors.
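A minimal sketch of the PRESS-based reference-frame comparison, assuming a 1-D simulated neuron, Nadaraya-Watson kernel regression, and an arbitrary bandwidth (none of which are taken from the paper): the frame in which the response field is spatially coherent yields the smaller predictive sum-of-squares.

```python
import math
import random

def nw_predict(x0, xs, ys, bw=3.0):
    """Nadaraya-Watson kernel regression estimate at x0 (Gaussian kernel)."""
    ws = [math.exp(-((x - x0) ** 2) / (2 * bw ** 2)) for x in xs]
    s = sum(ws)
    return sum(w * y for w, y in zip(ws, ys)) / s if s > 1e-12 else 0.0

def press(xs, ys):
    """PREdictive Sum-of-Squares: predict each trial from a fit to all others."""
    total = 0.0
    for i in range(len(xs)):
        pred = nw_predict(xs[i], xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        total += (ys[i] - pred) ** 2
    return total

random.seed(7)
trials = []
for _ in range(80):
    eye = random.uniform(-15.0, 15.0)  # naturally varying eye-in-head position
    tgt = random.uniform(-25.0, 25.0)  # target position in head coordinates
    # Simulated neuron truly tuned to target-in-eye coordinates (+ small noise)
    rate = math.exp(-((tgt - eye - 5.0) ** 2) / 50.0) + random.gauss(0.0, 0.02)
    trials.append((tgt - eye, tgt, rate))

press_eye = press([t[0] for t in trials], [t[2] for t in trials])
press_head = press([t[1] for t in trials], [t[2] for t in trials])
print(press_eye, press_head)  # smallest PRESS -> candidate intrinsic frame
```

Because the simulated rates are a coherent function of target-in-eye position, leave-one-out predictions fail in the head frame, where the same coordinate maps to many different rates.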


Subject(s)
Action Potentials/physiology , Brain Mapping/methods , Brain/physiology , Electrophysiology/methods , Neurons/physiology , Neurophysiology/methods , Computer Simulation , Eye Movements/physiology , Fixation, Ocular/physiology , Humans , Orientation/physiology , Psychomotor Performance/physiology , Regression Analysis , Signal Processing, Computer-Assisted , Space Perception/physiology , Visual Perception/physiology
6.
Cereb Cortex ; 19(6): 1372-93, 2009 Jun.
Article in English | MEDLINE | ID: mdl-18842662

ABSTRACT

To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.
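The gain-field-like modulation mentioned above can be illustrated with a toy unit whose retinal tuning is multiplicatively scaled by eye position; all tuning parameters here are invented for illustration, not taken from the network.

```python
import math

def gain_field_response(retinal_x, eye_x, pref=0.0, width=5.0,
                        slope=0.04, base=0.6):
    """Retinal Gaussian tuning multiplicatively modulated by eye position:
    the hallmark gain field -- tuning shape is fixed, amplitude scales."""
    tuning = math.exp(-((retinal_x - pref) ** 2) / (2 * width ** 2))
    gain = base + slope * eye_x  # linear eye-position gain, clipped at zero
    return tuning * max(gain, 0.0)

# Same retinal stimulus at three eye positions: only the amplitude changes
resp = {e: gain_field_response(0.0, e) for e in (-10.0, 0.0, 10.0)}

# The preferred retinal location is invariant across eye positions
xs = [i * 0.5 - 15.0 for i in range(61)]
def peak_at(eye_x):
    return max(xs, key=lambda r: gain_field_response(r, eye_x))

print(resp, peak_at(-10.0), peak_at(10.0))
```

This captures why a receptive-field mapping and a gain-field analysis probe different properties of the same unit: the former sees an unchanged preferred location, the latter sees the eye-position-dependent amplitude.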


Subject(s)
Decision Making/physiology , Models, Neurological , Motor Skills/physiology , Movement/physiology , Nerve Net/physiology , Task Performance and Analysis , Visual Perception/physiology , Arm/physiology , Computer Simulation , Humans
7.
J Comput Neurosci ; 24(2): 157-78, 2008 Apr.
Article in English | MEDLINE | ID: mdl-17636448

ABSTRACT

The goal of this study was to explore how a neural network could solve the updating task associated with the double-saccade paradigm, where two targets are flashed in succession and the subject must make saccades to the remembered locations of both targets. Because of the eye rotation of the saccade to the first target, the remembered retinal position of the second target must be updated if an accurate saccade to that target is to be made. We trained a three-layer, feed-forward neural network to solve this updating task using back-propagation. The network's inputs were the initial retinal position of the second target represented by a hill of activation in a 2D topographic array of units, as well as the initial eye orientation and the motor error of the saccade to the first target, each represented as 3D vectors in brainstem coordinates. The output of the network was the updated retinal position of the second target, also represented in a 2D topographic array of units. The network was trained to perform this updating using the full 3D geometry of eye rotations, and was able to produce the updated second-target position to within 1° RMS accuracy for a set of test points that included saccades of up to 70°. Emergent properties in the network's hidden layer included sigmoidal receptive fields whose orientations formed distinct clusters, and predictive remapping similar to that seen in brain areas associated with saccade generation. Networks with larger numbers of hidden-layer units developed two distinct types of units with different transformation properties: units that preferentially performed the linear remapping of vector subtraction, and units that performed the nonlinear elements of remapping that arise from initial eye orientation.
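The 3D geometry that makes this updating nonlinear can be sketched with quaternions: the correctly updated retinal direction of the second target follows from composing the eye rotation with the first saccade. The target direction, initial torsion, and saccade size below are arbitrary assumptions for illustration.

```python
import math

def quat(axis, angle_deg):
    """Unit quaternion for a rotation of angle_deg about a unit axis."""
    a = math.radians(angle_deg) / 2.0
    s = math.sin(a)
    return (math.cos(a), axis[0] * s, axis[1] * s, axis[2] * s)

def qmul(p, q):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def rotate(q, v):
    """Rotate 3-vector v by quaternion q."""
    p = qmul(qmul(q, (0.0,) + tuple(v)), qconj(q))
    return p[1:]

t2 = (0.6, 0.8, 0.0)                     # second target, head-fixed unit vector
eye0 = quat((1.0, 0.0, 0.0), 15.0)       # initial eye orientation (some torsion)
sac1 = quat((0.0, 0.0, 1.0), 30.0)       # first saccade as a 30-deg rotation

retinal_before = rotate(qconj(eye0), t2)  # target direction in eye coordinates
eye1 = qmul(sac1, eye0)                   # eye orientation after first saccade
retinal_after = rotate(qconj(eye1), t2)   # correctly updated retinal direction
print(retinal_before, retinal_after)
```

Simple vector subtraction of the first saccade from the initial retinal position approximates this only for small rotations; the eye-orientation-dependent residual is the nonlinear component the second class of hidden units learned to handle.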


Subject(s)
Brain Mapping , Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Saccades , Humans , Orientation , Photic Stimulation/methods , Visual Fields/physiology , Visual Pathways/anatomy & histology , Visual Pathways/physiology
8.
Exp Brain Res ; 180(4): 609-28, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17588185

ABSTRACT

Our perception of the visual world as stable and unified suggests the existence of transsaccadic integration, which retains and integrates visual information from one eye fixation to the next across saccadic eye movements. However, the capacity of transsaccadic integration is still a subject of controversy. We tested our subjects' memory capacity for two basic visual features, i.e. luminance (Experiment 1) and orientation (Experiment 2), both within a single fixation (i.e. visual working memory) and between separate fixations (i.e. transsaccadic memory). Experiment 2 was repeated, but attention allocation was manipulated using attentional cues at either the target or distracter (Experiment 3). Subjects were able to retain 3-4 objects in transsaccadic memory for luminance and orientation; errors generally increased as saccade size increased; and subjects were more accurate when attention was allocated to the same location as the impending target. These results were modelled by inputting a noisy extra-retinal signal into an eye-centered feature map. Our results suggest that transsaccadic memory has a similar capacity for storing simple visual features as basic visual memory, but this capacity is dependent both on the metrics of the saccade and the allocation of attention.
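The noisy extra-retinal signal model can be caricatured in one dimension: a feature is stored in eye-centered coordinates and updated with a corrupted copy of the saccade, so recall error grows with saccade amplitude. The noise scale (proportional to amplitude) and trial count are assumptions, not the paper's fitted values.

```python
import random
import statistics

random.seed(3)

def mean_recall_error(saccade_deg, noise_per_deg=0.05, trials=500):
    """Store a feature eye-centered, update it with a noisy extra-retinal
    copy of the saccade, and return the mean absolute recall error."""
    errors = []
    for _ in range(trials):
        target = 5.0  # retinal position of the feature before the saccade
        est_saccade = saccade_deg + random.gauss(0.0, noise_per_deg * saccade_deg)
        recalled = target - est_saccade   # updated eye-centered estimate
        truth = target - saccade_deg      # true post-saccadic retinal position
        errors.append(abs(recalled - truth))
    return statistics.mean(errors)

small, large = mean_recall_error(5.0), mean_recall_error(30.0)
print(small, large)  # recall error should grow with saccade amplitude
```

This reproduces the qualitative finding that errors increase with saccade size, since the extra-retinal noise scales with the movement being compensated.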


Subject(s)
Attention/physiology , Fixation, Ocular/physiology , Memory, Short-Term/physiology , Saccades/physiology , Visual Perception/physiology , Adult , Cues , Female , Humans , Lighting , Male , Neuropsychological Tests , Orientation/physiology , Photic Stimulation
9.
J Comput Neurosci ; 22(2): 191-209, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17120151

ABSTRACT

The goal of this study was to understand how neural networks solve the 3-D aspects of updating in the double-saccade task, where subjects make sequential saccades to the remembered locations of two targets. We trained a 3-layer, feed-forward neural network, using back-propagation, to calculate the 3-D motor error of the second saccade. Network inputs were a 2-D topographic map of the direction of the second target in retinal coordinates, and 3-D vector representations of initial eye orientation and motor error of the first saccade in head-fixed coordinates. The network learned to account for all 3-D aspects of updating. Hidden-layer units (HLUs) showed retinal-coordinate visual receptive fields that were remapped across the first saccade. Two classes of HLUs emerged from the training, one class primarily implementing the linear aspects of updating using vector subtraction, the second class implementing the eye-orientation-dependent, non-linear aspects of updating. These mechanisms interacted at the unit level through gain-field-like input summations, and through the parallel "tweaking" of optimally-tuned HLU contributions to the output that shifted the overall population output vector to the correct second-saccade motor error. These observations may provide clues for the biological implementation of updating.


Subject(s)
Brain Mapping , Models, Neurological , Neural Networks, Computer , Saccades/physiology , Visual Pathways/physiology , Animals , Computer Simulation , Orientation/physiology , Photic Stimulation/methods , Reaction Time/physiology , Visual Perception/physiology
10.
J Neurophysiol ; 93(2): 1104-10, 2005 Feb.
Article in English | MEDLINE | ID: mdl-15385588

ABSTRACT

We tested between three levels of visuospatial adaptation (global map, parallel feature modules, and parallel sensorimotor transformations) by training subjects to reach and grasp virtual objects viewed through a left-right reversing prism, with either visual location or orientation feedback. Even though spatial information about the global left-right reversal was present in every training session, subjects trained with location feedback reached to the correct location but with the wrong (reversed) grasp orientation. Subjects trained with orientation feedback showed the opposite pattern. These errors were task-specific and not feature-specific; subjects trained to correctly grasp visually reversed-oriented bars failed to show knowledge of the reversal when asked to point to the end locations of these bars. These results show that adaptation to visuospatial distortion--even global reversals--is implemented through learning rules that operate on parallel sensorimotor transformations (e.g., reach vs. grasp).


Subject(s)
Adaptation, Physiological/physiology , Orientation/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Humans