Results 1 - 15 of 15
1.
Vision Res ; 87: 46-52, 2013 Jul 19.
Article in English | MEDLINE | ID: mdl-23770521

ABSTRACT

Previous results suggest that the brain predominantly relies on a constantly updated gaze-centered target representation to guide reach movements when no other visual information is available. In the present study, we investigated whether the addition of reliable visual landmarks influences the use of spatial reference frames for immediate and delayed reaching. Subjects reached immediately or after a delay of 8 or 12 s to remembered target locations, either with or without landmarks. After target presentation and before reaching they shifted gaze to one of five different fixation points and held their gaze at this location until the end of the reach. With landmarks present, gaze-dependent reaching errors were smaller and reaches were more precise than when reaching without landmarks. Delay influenced neither reaching errors nor variability. These findings suggest that when landmarks are available, the brain still uses gaze-dependent representations but combines them with gaze-independent allocentric information to guide immediate or delayed reach movements to visual targets.
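One standard account of how such gaze-dependent and allocentric landmark information could be combined (the abstract does not specify the mechanism) is reliability-weighted averaging of independent estimates, which also predicts the reduced variability reported here. A minimal sketch, with made-up noise values that are not data from the study:

```python
import numpy as np

# Hypothetical illustration: maximum-likelihood (reliability-weighted)
# combination of a gaze-centered (egocentric) target estimate and a
# landmark-based (allocentric) estimate. The sigma values are invented
# for illustration, not taken from the study.

def combine(mu_ego, sigma_ego, mu_allo, sigma_allo):
    """Optimally combine two independent Gaussian location estimates."""
    w_ego = sigma_allo**2 / (sigma_ego**2 + sigma_allo**2)
    mu = w_ego * mu_ego + (1 - w_ego) * mu_allo
    sigma = np.sqrt((sigma_ego**2 * sigma_allo**2) /
                    (sigma_ego**2 + sigma_allo**2))
    return mu, sigma

mu, sigma = combine(mu_ego=1.5, sigma_ego=2.0, mu_allo=0.0, sigma_allo=1.0)
# The combined estimate is pulled toward the more reliable landmark cue,
# and its variability is lower than that of either cue alone.
assert sigma < 1.0 and abs(mu) < 1.5
```

Under this scheme the combined estimate always has lower variance than the better single cue, consistent with the smaller, more precise reaches observed with landmarks present.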


Subject(s)
Attention/physiology , Movement/physiology , Space Perception/physiology , Adult , Analysis of Variance , Cues , Female , Humans , Male , Photic Stimulation/methods , Young Adult
2.
Neuroscience ; 244: 99-112, 2013 Aug 06.
Article in English | MEDLINE | ID: mdl-23590906

ABSTRACT

Deep brain stimulation of the subthalamic nucleus (STN DBS) provides a unique window into human brain function since it can reversibly alter the functioning of specific brain circuits. Basal ganglia-cortical circuits are thought to be excessively noisy in patients with Parkinson's disease (PD), based in part on the lack of specificity of proprioceptive signals in basal ganglia-thalamic-cortical circuits in monkey models of the disease. PD patients are known to have deficits in proprioception, but the effects are often subtle, with paradigms typically restricted to one or two joint movements in a plane. Moreover, the effects of STN DBS on proprioception are virtually unexplored. We tested the following hypotheses: first, that PD patients will show substantial deficits in unconstrained, multi-joint proprioception, and, second, that STN DBS will improve multi-joint proprioception. Twelve PD patients with bilaterally implanted electrodes in the subthalamic nucleus and 12 age-matched healthy subjects were asked to position the left hand at a location that was proprioceptively defined in 3D space with the right hand. In a second condition, subjects were provided visual feedback during the task so that they were not forced to rely on proprioception. Overall, with STN DBS switched off, PD patients showed significantly larger proprioceptive localization errors, and greater variability in endpoint localizations than the control subjects. Visual feedback partially normalized PD performance, and demonstrated that the errors in proprioceptive localization were not simply due to a difficulty in executing the movements or in remembering target locations. Switching STN DBS on significantly reduced localization errors from those of control subjects when patients moved without visual feedback relative to when they moved with visual feedback (when proprioception was not required). However, this reduction in localization errors without vision came at the cost of increased localization variability.


Subject(s)
Deep Brain Stimulation , Parkinson Disease/therapy , Somatosensory Disorders/therapy , Aged , Feedback, Sensory/physiology , Female , Humans , Male , Parkinson Disease/complications , Psychomotor Performance/physiology , Somatosensory Disorders/complications , Subthalamic Nucleus/physiology
3.
Vision Res ; 50(24): 2633-41, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20934444

ABSTRACT

Perception self-evidently affects action, but under which conditions does action in turn influence perception? To answer this question we ask observers to view an ambiguous stimulus that is alternatingly perceived as rotating clockwise or counterclockwise. When observers report the perceived direction by rotating a manipulandum, opposing directions between report and percept ('incongruent') destabilize the percept, whereas equal directions ('congruent') stabilize it. In contrast, when observers report their percept by key presses while performing a predefined movement, we find no effect of congruency. Consequently, our findings suggest that only percept-dependent action directly influences perceptual experience.


Subject(s)
Motion Perception/physiology , Psychomotor Performance/physiology , Humans , Photic Stimulation/methods , Rotation
4.
J Neurophysiol ; 96(3): 1464-77, 2006 Sep.
Article in English | MEDLINE | ID: mdl-16707717

ABSTRACT

The saccade generator updates memorized target representations for saccades during eye and head movements. Here, we tested if proprioceptive feedback from the arm can also update handheld object locations for saccades, and what intrinsic coordinate system(s) is used in this transformation. We measured radial saccades beginning from a central light-emitting diode to 16 target locations arranged peripherally in eight directions and two eccentricities on a horizontal plane in front of subjects. Target locations were either indicated 1) by a visual flash, 2) by the subject actively moving the handheld central target to a peripheral location, 3) by the experimenter passively moving the subject's hand, or 4) through a combination of the above proprioceptive and visual stimuli. Saccade direction was relatively accurate, but subjects showed task-dependent systematic overshoots and variable errors in radial amplitude. Visually guided saccades showed the smallest overshoot, followed by saccades guided by both vision and proprioception, whereas proprioceptively guided saccades showed the largest overshoot. In most tasks, the overall distribution of saccade endpoints was shifted and expanded in a gaze- or head-centered cardinal coordinate system. However, the active proprioception task produced a tilted pattern of errors, apparently weighted toward a limb-centered coordinate system. This suggests the saccade generator receives an efference copy of the arm movement command but fails to compensate for the arm's inertia-related directional anisotropy. Thus the saccade system is able to transform hand-centered somatosensory signals into oculomotor coordinates and combine somatosensory signals with visual inputs, but it seems to have a poorly calibrated internal model of limb properties.


Subject(s)
Memory/physiology , Proprioception/physiology , Psychomotor Performance/physiology , Saccades/physiology , Adult , Eye Movements , Female , Fixation, Ocular , Head Movements , Humans , Male , Visual Perception
5.
Exp Brain Res ; 152(1): 70-8, 2003 Sep.
Article in English | MEDLINE | ID: mdl-12827330

ABSTRACT

Eye-hand coordination is geometrically complex. To compute the location of a visual target relative to the hand, the brain must consider every anatomical link in the chain from retinas to fingertips. Here we focus on the first three links, studying how the brain handles information about the angles of the two eyes and the head. It is known that people, even in darkness, reach more accurately when the eye looks toward the target, rather than right or left of it. We show that reaching is also impaired when the binocular fixation point is displaced from the target in depth: reaching becomes not just sloppy, but systematically inaccurate. Surprisingly, though, in normal Gaze-On-Target reaching we found no strong correlations between errors in aiming the eyes and hand onto the target site. We also asked people to reach when the head was not facing the target. When the eyes were on-target, people reached accurately, but when gaze was off-target, performance degraded. Taking all these findings together, we suggest that the brain's computational networks have learned the complex geometry of reaching for well-practiced tasks, but that the networks are poorly calibrated for less common tasks such as Gaze-Off-Target reaching.


Subject(s)
Computational Biology/methods , Head Movements/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Vision, Binocular/physiology , Adult , Eye Movements/physiology , Female , Humans , Male
6.
Strabismus ; 11(1): 33-47, 2003 Mar.
Article in English | MEDLINE | ID: mdl-12789582

ABSTRACT

Eye-hand coordination is complicated by the fact that the eyes are constantly in motion relative to the head. This poses problems in interpreting the spatial information gathered from the retinas and using this to guide hand motion. In particular, eye-centered visual information must somehow be spatially updated across eye movements to be useful for future actions, and these representations must then be transformed into commands appropriate for arm motion. In this review, we present evidence that early visuomotor representations for arm movement are remapped relative to the gaze direction during each saccade. We find that this mechanism holds for targets in both far and near visual space. We then show how the brain incorporates the three-dimensional, rotary geometry of the eyes when interpreting retinal images and transforming these into commands for arm movement. Next, we explore the possibility that hand-eye alignment is optimized for the eye with the best field of view. Finally, we describe how head orientation influences the linkage between oculocentric visual frames and bodycentric motor frames. These findings are framed in terms of our 'conversion-on-demand' model, in which only those representations selected for action are put through the complex visuomotor transformations required for interaction with objects in personal space, thus providing a virtual on-line map of visuomotor space.


Subject(s)
Eye Movements/physiology , Hand/physiology , Psychomotor Performance/physiology , Vision, Ocular/physiology , Biomechanical Phenomena , Dominance, Ocular/physiology , Eye/innervation , Hand/innervation , Head Movements/physiology , Humans , Models, Neurological
9.
J Neurophysiol ; 87(4): 1677-85, 2002 Apr.
Article in English | MEDLINE | ID: mdl-11929889

ABSTRACT

Eye-hand coordination requires the brain to integrate visual information with the continuous changes in eye, head, and arm positions. This is a geometrically complex process because the eyes, head, and shoulder have different centers of rotation. As a result, head rotation causes the eye to translate with respect to the shoulder. The present study examines the consequences of this geometry for planning accurate arm movements in a pointing task with the head at different orientations. When asked to point at an object, subjects oriented their arm to position the fingertip on the line running from the target to the viewing eye. But this eye-target line shifts when the eyes translate with each new head orientation, thereby requiring a new arm pointing direction. We confirmed that subjects do realign their fingertip with the eye-target line during closed-loop pointing across various horizontal head orientations when gaze is on target. More importantly, subjects also showed this head-position-dependent pattern of pointing responses for the same paradigm performed in complete darkness. However, when gaze was not on target, compensation for these translations in the rotational centers partially broke down. As a result, subjects tended to overshoot the target direction relative to current gaze, perhaps explaining previously reported errors in aiming the arm to retinally peripheral targets. These results suggest that knowledge of head position signals and the resulting relative displacements in the centers of rotation of the eye and shoulder are incorporated using open-loop mechanisms for eye-hand coordination, but these translations are best calibrated for foveated, gaze-on-target movements.
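The geometric point made above can be sketched numerically: because the eye sits in front of the head's rotation axis, a pure head rotation translates the eye and shifts the eye-target line, so the aligned fingertip position must move too. All dimensions below are illustrative assumptions, not measurements from the study:

```python
import numpy as np

# Hypothetical geometry sketch: eye offset from the head's vertical
# rotation axis, fingertip placed on the line from the viewing eye to
# the target. Dimensions are invented for illustration.

EYE_OFFSET = 0.10   # m, eye's distance in front of the head rotation axis
ARM_REACH = 0.40    # m, fingertip distance from the eye along the sight line
TARGET = np.array([0.0, 1.0])  # m, target straight ahead (x = right, y = forward)

def fingertip(head_yaw_deg):
    """Fingertip position on the eye-target line for a given head yaw."""
    th = np.radians(head_yaw_deg)
    eye = EYE_OFFSET * np.array([np.sin(th), np.cos(th)])   # head yaw translates the eye
    sight = (TARGET - eye) / np.linalg.norm(TARGET - eye)   # unit eye-target line
    return eye + ARM_REACH * sight

straight = fingertip(0.0)
turned = fingertip(30.0)
shift_cm = 100 * np.linalg.norm(turned - straight)
# With these numbers, a 30 deg head turn displaces the required fingertip
# position by a few centimeters, which open-loop pointing must compensate for.
```

The size of the shift scales with the eye's offset from the rotation axis, which is why compensation for head orientation matters even when the target itself has not moved.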


Subject(s)
Arm/physiology , Brain/physiology , Head/physiology , Ocular Physiological Phenomena , Posture/physiology , Psychomotor Performance/physiology , Shoulder/physiology , Adult , Fixation, Ocular , Forecasting , Humans , Middle Aged
10.
Prog Brain Res ; 140: 329-40, 2002.
Article in English | MEDLINE | ID: mdl-12508600

ABSTRACT

In recent years the scientific community has come to appreciate that the early cortical representations for visually guided arm movements are probably coded in a visual frame, i.e. relative to retinal landmarks. While this scheme accounts for many behavioral and neurophysiological observations, it also poses certain problems for manual control. For example, how are these oculocentric representations updated across eye movements, and how are they then transformed into useful commands for accurate movements of the arm relative to the body? Also, since we have two eyes, which is used as the reference point in eye-hand alignment tasks like pointing? We show that patterns of errors in human pointing suggest that early oculocentric representations for arm movement are remapped relative to the gaze direction during each saccade. To then transform these oculocentric representations into useful commands for accurate movements of the arm relative to the body, the brain correctly incorporates the three-dimensional, rotary geometry of the eyes when interpreting retinal images. We also explore the possibility that the eye-hand coordination system uses a strategy like ocular dominance, but switches alignment between the left and right eye in order to maximize eye-hand coordination in the best field of view. Finally, we describe the influence of eye position on eye-hand alignment, and then consider how head orientation influences the linkage between oculocentric visual frames and bodycentric motor frames. These findings are framed in terms of our 'conversion-on-demand' model, which suggests a virtual representation of egocentric space, i.e. one in which only those representations selected for action are put through the complex visuomotor transformations required for interaction with actual objects in personal space.


Subject(s)
Hand/physiology , Ocular Physiological Phenomena , Psychomotor Performance/physiology , Arm/innervation , Arm/physiology , Eye/innervation , Hand/innervation , Head Movements/physiology , Humans
11.
J Neurophysiol ; 84(5): 2302-16, 2000 Nov.
Article in English | MEDLINE | ID: mdl-11067974

ABSTRACT

This study addressed the question of how the three-dimensional (3-D) control strategy for the upper arm depends on what the forearm is doing. Subjects were instructed to point a laser-attached in line with the upper arm-toward various visual targets, such that two-dimensional (2-D) pointing directions of the upper arm were held constant across different tasks. For each such task, subjects maintained one of several static upper arm-forearm configurations, i.e., each with a set elbow angle and forearm orientation. Upper arm, forearm, and eye orientations were measured with the use of 3-D search coils. The results confirmed that Donders' law (a behavioral restriction of 3-D orientation vectors to a 2-D "surface") does not hold across all pointing tasks, i.e., for a given pointing target, upper arm torsion varied widely. However, for any one static elbow configuration, torsional variance was considerably reduced and was independent of previous arm position, resulting in a thin, Donders-like surface of orientation vectors. More importantly, the shape of this surface (which describes upper arm torsion as a function of its 2-D pointing direction) depended on both elbow angle and forearm orientation. For pointing with the arm fully extended or with the elbow flexed in the horizontal plane, a Listing's-law-like strategy was observed, minimizing shoulder rotations to and from center at the cost of position-dependent tilts in the forearm. In contrast, when the arm was bent in the vertical plane, the surface of best fit showed a Fick-like twist that increased continuously as a function of static elbow flexion, thereby reducing position-dependent tilts of the forearm with respect to gravity. In each case, the torsional variance from these surfaces remained constant, suggesting that Donders' law was obeyed equally well for each task condition. Further experiments established that these kinematic rules were independent of gaze direction and eye orientation, suggesting that Donders' law of the arm does not coordinate with Listing's law for the eye. These results revive the idea that Donders' law is an important governing principle for the control of arm movements but also suggest that its various forms may only be limited manifestations of a more general set of context-dependent kinematic rules. We propose that these rules are implemented by neural velocity commands arising as a function of initial arm orientation and desired pointing direction, calculated such that the torsional orientation of the upper arm is implicitly coordinated with desired forearm posture.


Subject(s)
Forearm/physiology , Movement/physiology , Posture/physiology , Psychomotor Performance/physiology , Biomechanical Phenomena , Elbow Joint/physiology , Humans , Orientation/physiology , Photic Stimulation , Shoulder Joint/physiology , Torsion Abnormality
12.
Exp Brain Res ; 132(2): 179-94, 2000 May.
Article in English | MEDLINE | ID: mdl-10853943

ABSTRACT

The aim of this study was to: (1) quantify errors in open-loop pointing toward a spatially central (but retinally peripheral) visual target with gaze maintained in various eccentric horizontal, vertical, and oblique directions; and (2) determine the computational source of these errors. Eye and arm orientations were measured with the use of search coils while six head-fixed subjects looked and pointed toward remembered targets in complete darkness. On average, subjects made small exaggerations in both the vertical and horizontal components of retinal displacement (tending to overshoot the target relative to current gaze), but individual subjects showed considerable variations in this pattern. Moreover, pointing errors for oblique retinal targets were only partially predictable from errors for the cardinal directions, suggesting that most of these errors did not arise within independent vertical and horizontal coordinate channels. The remaining variance was related to nonhomogeneous, direction-dependent distortions in reading out the magnitudes and directions of retinal displacement. The largest and most consistent nonhomogeneities occurred as discontinuities between adjacent points across the vertical meridian of retinotopic space, perhaps related to the break between the representations of space in the left and right cortices. These findings are consistent with the hypothesis that at least some of these visuomotor distortions are due to miscalibrations in quasi-independent visuomotor readout mechanisms for "patches" of retinotopic space, with major discontinuities existing between patches at certain anatomic and/or physiological borders.


Subject(s)
Movement/physiology , Perceptual Distortion/physiology , Psychomotor Performance/physiology , Retina/physiology , Space Perception/physiology , Adult , Calibration , Female , Fingers/physiology , Fixation, Ocular/physiology , Humans , Linear Models , Male , Visual Fields/physiology
13.
J Neurosci ; 20(7): 2719-30, 2000 Apr 01.
Article in English | MEDLINE | ID: mdl-10729353

ABSTRACT

In the 19th century, Donders observed that only one three-dimensional eye orientation is used for each gaze direction. Listing's law further specifies that the full set of eye orientation vectors forms a plane, whereas the equivalent Donders' law for the head, the Fick strategy, specifies a twisted two-dimensional range. Surprisingly, despite considerable research and speculation, the biological reasons for choosing one such range over another remain obscure. In the current study, human subjects performed head-free gaze shifts between visual targets while wearing pinhole goggles. During fixations, the head orientation range still obeyed Donders' law, but in most subjects, it immediately changed from the twisted Fick-like range to a flattened Listing-like range. Further controls showed that this was not attributable to loss of binocular vision or increased range of head motion, nor was it attributable to blocked peripheral vision; when subjects pointed a helmet-mounted laser toward targets (a task with goggle-like motor demands but normal vision), the head followed Listing's law even more closely. Donders' law of the head only broke down (in favor of a "minimum-rotation strategy") when head motion was dissociated from gaze. These behaviors could not be modeled using current "Donders' operators" but were readily simulated nonholonomically, i.e., by modulating head velocity commands as a function of position and task. We conclude that the gaze control system uses such velocity rules to shape Donders' law on a moment-to-moment basis, not primarily to satisfy perceptual or anatomic demands, but rather for motor optimization; the Fick strategy optimizes the role of the head as a platform for eye movement, whereas Listing's law optimizes rapid control of the eye (or head) as a gaze pointer.
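The two Donders surfaces contrasted above have a compact algebraic form in rotation-vector (half-angle tangent) coordinates: a Fick gimbal R = Ry(yaw) @ Rx(pitch) carries an obligatory torsional component rz = -rx*ry (a twisted saddle), whereas Listing's law sets rz = 0 (a flat plane). A minimal sketch under assumed axis conventions (x = pitch axis, y = yaw axis, z = torsion):

```python
import numpy as np

# Sketch: the Fick strategy's twisted Donders surface, computed from
# quaternions. Axis conventions are illustrative assumptions.

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def fick_rotation_vector(yaw_deg, pitch_deg):
    """Rotation vector of Ry(yaw) @ Rx(pitch), as tan(angle/2) * axis."""
    a, b = np.radians(yaw_deg) / 2, np.radians(pitch_deg) / 2
    q_yaw = np.array([np.cos(a), 0.0, np.sin(a), 0.0])
    q_pitch = np.array([np.cos(b), np.sin(b), 0.0, 0.0])
    q = quat_mul(q_yaw, q_pitch)
    return q[1:] / q[0]  # (rx, ry, rz)

rx, ry, rz = fick_rotation_vector(30.0, 20.0)
# Fick torsion is fully determined by the horizontal/vertical components,
# tracing the twisted saddle rz = -rx*ry; Listing's law would force rz = 0.
assert abs(rz - (-rx * ry)) < 1e-12 and abs(rz) > 0
```

Both ranges are two-dimensional (hence Donders' law holds in both), which is why the finding above is about which 2-D surface the gaze control system selects, not whether one exists.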


Subject(s)
Eye Movements/physiology , Head , Models, Neurological , Visual Perception/physiology , Adult , Eye Protective Devices , Female , Humans , Male , Posture , Restraint, Physical , Task Performance and Analysis
14.
J Neurosci ; 20(6): 2360-8, 2000 Mar 15.
Article in English | MEDLINE | ID: mdl-10704510

ABSTRACT

Most models of spatial vision and visuomotor control reconstruct visual space by adding a vector representing the site of retinal stimulation to another vector representing gaze angle. However, this scheme fails to account for the curvatures in retinal projection produced by rotatory displacements in eye orientation. In particular, our simulations demonstrate that even simple vertical eye rotation changes the curvature of horizontal retinal projections with respect to eye-fixed retinal landmarks. We confirmed the existence of such curvatures by measuring target direction in eye coordinates, in which the retinotopic representation of horizontally displaced targets curved obliquely as a function of vertical eye orientation. We then asked subjects to point (open loop) toward briefly flashed targets at various points along these lines of curvature. The vector-addition model predicted errors in pointing trajectory as a function of eye orientation. In contrast, with only minor exceptions, actual subjects showed no such errors, indicating a complete neural compensation for the eye position-dependent geometry of retinal curvatures. Rather than bolstering the traditional model with additional corrective mechanisms for these nonlinear effects, we suggest that the complete geometry of retinal projection can be decoded through a single multiplicative comparison with three-dimensional eye orientation. Moreover, because the visuomotor transformation for pointing involves specific parietal and frontal cortical processes, our experiment implicates specific regions of cortex in such nonlinear transformations.
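The additive-versus-multiplicative contrast above can be sketched with a toy example (illustrative numbers, not the study's stimuli): with the eye pitched down, adding retinal angles to gaze angles misestimates a horizontally displaced target, while rotating the retinal vector by the eye orientation recovers it exactly.

```python
import numpy as np

# Sketch of the vector-addition model's failure under 3-D eye rotation.
# Conventions (x = right, y = up, z = forward) and all angles are
# illustrative assumptions.

def rot_x(deg):
    t = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t), np.cos(t)]])

def angles(v):
    """(azimuth, elevation) in degrees of a unit direction vector."""
    return (np.degrees(np.arctan2(v[0], v[2])),
            np.degrees(np.arcsin(v[1] / np.linalg.norm(v))))

eye = rot_x(30.0)  # eye pitched 30 deg down (gaze azimuth 0, elevation -30)
target = np.array([np.sin(np.radians(20)), 0.0, np.cos(np.radians(20))])  # 20 deg right
retinal = eye.T @ target             # target direction in eye-fixed coordinates

ret_az, ret_el = angles(retinal)
additive = (0.0 + ret_az, -30.0 + ret_el)   # retinal angles + gaze angles
true_az, true_el = angles(target)           # (20, 0)
multiplicative = angles(eye @ retinal)      # rotate retinal vector by eye orientation

# The additive reconstruction errs by a few degrees in azimuth;
# the multiplicative one recovers the target direction exactly.
assert abs(multiplicative[0] - true_az) < 1e-9
assert abs(additive[0] - true_az) > 1.0
```

The residual in the additive model is exactly the eye-position-dependent curvature the abstract describes; no extra corrective term is needed once the comparison is multiplicative.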


Subject(s)
Computer Simulation , Eye Movements/physiology , Models, Neurological , Psychomotor Performance/physiology , Space Perception/physiology , Adult , Arm/physiology , Fixation, Ocular/physiology , Humans , Middle Aged , Retina/physiology , Visual Cortex/physiology
15.
J Neurosci ; 18(4): 1583-94, 1998 Feb 15.
Article in English | MEDLINE | ID: mdl-9454863

ABSTRACT

Establishing a coherent internal reference frame for visuospatial representation and maintaining the integrity of this frame during eye movements are thought to be crucial for both perception and motor control. A stable headcentric representation could be constructed by internally comparing retinal signals with eye position. Alternatively, visual memory traces could be actively remapped within an oculocentric frame to compensate for each eye movement. We tested these models by measuring errors in manual pointing (in complete darkness) toward briefly flashed central targets during three oculomotor paradigms; subjects pointed accurately when gaze was maintained on the target location (control paradigm). However, when steadily fixating peripheral locations (static paradigm), subjects exaggerated the retinal eccentricity of the central target by 13.4 +/- 5.1%. In the key "dynamic" paradigm, subjects briefly foveated the central target and then saccaded peripherally before pointing toward the remembered location of the target. Our headcentric model predicted accurate pointing (as seen in the control paradigm) independent of the saccade, whereas our oculocentric model predicted misestimation (as seen in the static paradigm) of an internally shifted retinotopic trace. In fact, pointing errors were significantly larger than control errors but statistically indistinguishable (p >= 0.25) from the static paradigm errors. Scatter plots of pointing errors (dynamic vs static paradigm) for various final fixation directions showed an overall slope of 0.97, contradicting the headcentric prediction (0.0) and supporting the oculocentric prediction (1.0). Varying both fixation and pointing-target direction confirmed that these errors were a function of retinotopically shifted memory traces rather than eye position per se. To reconcile these results with previous pointing experiments, we propose a "conversion-on-demand" model of visuomotor control in which multiple visual targets are stored and rotated (noncommutatively) within the oculocentric frame, whereas only select targets are transformed further into head- or bodycentric frames for motor execution.
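The slope logic above can be sketched in a few lines: the 13.4% eccentricity exaggeration is taken from the abstract, while the fixation directions are made-up values. A headcentric trace predicts zero dynamic-paradigm error (slope 0); a retinotopically remapped trace inherits the static-paradigm error (slope 1).

```python
import numpy as np

# Hypothetical simulation of the predicted scatter plots described above
# (static-paradigm error on x, dynamic-paradigm error on y). The gain
# comes from the abstract; the fixation set is invented for illustration.

GAIN = 1.134                                       # retinal eccentricity overshoot
fixations = np.array([-20.0, -10.0, 10.0, 20.0])   # deg, final gaze directions

# The target is central, so its retinal eccentricity equals -fixation.
static_err = (GAIN - 1) * (-fixations)   # error when fixating eccentrically

headcentric_dyn = np.zeros_like(fixations)   # trace in head coords: saccade-independent
oculocentric_dyn = static_err                # remapped retinotopic trace: same error

def slope(x, y):
    """Least-squares slope of y against x."""
    return np.polyfit(x, y, 1)[0]

# Headcentric model predicts slope 0; oculocentric predicts slope 1.
# The observed slope of 0.97 sits essentially on the oculocentric prediction.
assert abs(slope(static_err, headcentric_dyn) - 0.0) < 1e-12
assert abs(slope(static_err, oculocentric_dyn) - 1.0) < 1e-12
```

Because the two models make predictions at opposite ends of the slope axis, even noisy pointing data can discriminate them, which is what makes the reported 0.97 decisive.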


Subject(s)
Arm/physiology , Brain/physiology , Memory/physiology , Psychomotor Performance/physiology , Space Perception/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Models, Neurological , Movement/physiology , Ocular Physiological Phenomena , Retina/physiology , Saccades/physiology