Results 1 - 20 of 89
1.
eNeuro ; 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39054056

ABSTRACT

Single-unit (SU) activity - action potentials isolated from one neuron - has traditionally been employed to relate neuronal activity to behavior. However, recent investigations have shown that multi-unit (MU) activity - ensemble neural activity recorded within the vicinity of one microelectrode - may also contain accurate estimations of task-related neural population dynamics. Here, using an established model-fitting approach, we compared the spatial codes of SU response fields with corresponding MU response fields recorded from the frontal eye fields (FEF) in head-unrestrained monkeys (Macaca mulatta) during a memory-guided saccade task. Overall, both SU and MU populations showed a simple visuomotor transformation: the visual response coded target-in-eye coordinates, transitioning progressively during the delay toward a future gaze-in-eye code in the saccade motor response. However, the SU population showed additional secondary codes, including a predictive gaze code in the visual response and retention of a target code in the motor response. Further, when SUs were separated into regular / fast spiking neurons, these cell types showed different spatial code progressions during the late delay period, only converging toward gaze coding during the final saccade motor response. Finally, reconstructing MU populations (by summing SU data within the same sites) failed to replicate either the SU or MU pattern. 
These results confirm the theoretical and practical potential of MU activity recordings as a biomarker for fundamental sensorimotor transformations (e.g., target-to-gaze coding in the oculomotor system), while also highlighting the importance of SU activity for coding more subtle (e.g., predictive/memory) aspects of sensorimotor behavior.

SIGNIFICANCE STATEMENT: Multi-unit recordings (undifferentiated signals from several neurons) are relatively easy to record and provide a simplified estimate of neural dynamics, but it is not clear which single-unit signals are retained, amplified, or lost. Here, we compared single- and multi-unit activity from a well-defined structure (the frontal eye fields) and behavior (memory-delay saccade task), tracking their spatial codes through time. The progressive transformation from target-to-gaze coding observed in single-unit activity was retained in multi-unit activity, but other cognitive signals (gaze prediction within the initial visual response, target memory within the final motor response, and cell-specific delay signals) were lost. This suggests that multi-unit activity provides an excellent biomarker for healthy sensorimotor transformations, at the cost of missing more subtle cognitive signals.
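The model-fitting logic behind these spatial-code comparisons can be illustrated with a toy sketch: candidate codes are placed along a target-to-gaze continuum, and the weight that minimizes residuals between predicted and observed response positions is selected. The function and data below are hypothetical illustrations, not the authors' actual fitting procedure.

```python
# Toy sketch of continuum model fitting (hypothetical data): candidate
# spatial codes lie on a target-to-gaze continuum, and the best-fit
# point minimizes squared residuals between predicted and observed
# response positions.

def continuum_fit(targets, gazes, observed, steps=11):
    """Fit a weight w in [0, 1]: 0 = pure target code, 1 = pure gaze code."""
    best_w, best_sse = None, float("inf")
    for i in range(steps):
        w = i / (steps - 1)
        sse = sum(((1 - w) * t + w * g - o) ** 2
                  for t, g, o in zip(targets, gazes, observed))
        if sse < best_sse:
            best_w, best_sse = w, sse
    return best_w, best_sse

# Simulated trials: gaze endpoints deviate from targets, and this
# neuron's preferred positions track gaze exactly (a pure gaze coder).
targets = [10.0, -5.0, 0.0, 15.0]
gazes = [12.0, -3.5, 1.0, 17.0]
observed = list(gazes)

w, sse = continuum_fit(targets, gazes, observed)
print(w)  # -> 1.0
```

A pure target coder would fit best at w = 0, a pure gaze coder at w = 1, and intermediate codes (as in the delay period) in between.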

2.
J Vis ; 24(7): 17, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39073800

ABSTRACT

Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated gaze near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay, the landmark reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite to the remembered target) on 50% of the egocentric trials. Participants were more accurate, more precise, and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, where performance was worse overall in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals. Such instructions might aid rehabilitation when the egocentric system is compromised by disease or injury.


Subject(s)
Psychomotor Performance , Space Perception , Visual Fields , Humans , Male , Female , Young Adult , Adult , Psychomotor Performance/physiology , Space Perception/physiology , Visual Fields/physiology , Photic Stimulation/methods , Reaction Time/physiology
3.
J Neurophysiol ; 132(1): 147-161, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38836297

ABSTRACT

People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45°, and -45°, with gaze fixated 10° to the left/right. The hand and template either remained illuminated (closed-loop) or visual feedback was removed (open-loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the previous saccade metrics also impacted placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory-delay placement.

NEW & NOTEWORTHY: Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template with different levels of visual feedback. Like reaches, placements overshot the goal location relative to gaze and were influenced by previous saccade metrics. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback; after a memory delay, this gaze influence on orientation vanished.


Subject(s)
Feedback, Sensory , Fixation, Ocular , Hand , Memory , Psychomotor Performance , Humans , Male , Female , Hand/physiology , Adult , Psychomotor Performance/physiology , Biomechanical Phenomena/physiology , Feedback, Sensory/physiology , Memory/physiology , Fixation, Ocular/physiology , Young Adult , Visual Perception/physiology , Saccades/physiology
4.
Neuropsychologia ; 194: 108773, 2024 02 15.
Article in English | MEDLINE | ID: mdl-38142960

ABSTRACT

Sensorimotor integration involves feedforward and reentrant processing of sensory input. Grasp-related motor activity precedes and is thought to influence visual object processing. Yet, while the importance of reentrant feedback is well established in perception, the top-down modulations for action and the neural circuits involved in this process have received less attention. Do action-specific intentions influence the processing of visual information in the human cortex? Using a cue-separation fMRI paradigm, we found that action-specific instruction processing (manual alignment vs. grasp) became apparent only after the visual presentation of oriented stimuli, and occurred as early as in the primary visual cortex and extended to the dorsal visual stream, motor and premotor areas. Further, dorsal stream area aIPS, known to be involved in object manipulation, and the primary visual cortex showed task-related functional connectivity with frontal, parietal and temporal areas, consistent with the idea that reentrant feedback from dorsal and ventral visual stream areas modifies visual inputs to prepare for action. Importantly, both the task-dependent modulations and connections were linked specifically to the object presentation phase of the task, suggesting a role in processing the action goal. Our results show that intended manual actions have an early, pervasive, and differential influence on the cortical processing of vision.


Subject(s)
Magnetic Resonance Imaging , Visual Perception , Humans , Magnetic Resonance Imaging/methods , Brain Mapping
5.
J Neurosci ; 43(45): 7511-7522, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37940592

ABSTRACT

Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, demanding the integration of (bottom-up) sensory information and (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques such as neuroimaging, virtual reality, and motion tracking allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and our knowledge of how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits: specifically, steering in the presence of cortical blindness, the impact of stroke on visual-proprioceptive integration, and the impact of visual search and working memory deficits. This translational approach - extending knowledge from lab to rehab - provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.


Subject(s)
Stroke , Virtual Reality , Humans , Goals , Memory, Short-Term , Cognition
6.
Commun Biol ; 6(1): 938, 2023 09 13.
Article in English | MEDLINE | ID: mdl-37704829

ABSTRACT

Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test landmark influence on the visual response to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields are characterized by recording neural responses to various target-landmark combinations and then testing them against several candidate spatial models. Overall, frontal/supplementary eye field response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.


Subject(s)
Frontal Lobe , Saccades , Animals , Macaca mulatta , Cognition , Fixation, Ocular
7.
J Neurophysiol ; 128(6): 1518-1533, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36321728

ABSTRACT

To generate a hand-specific reach plan, the brain must integrate hand-specific signals with the desired movement strategy. Although various neurophysiology/imaging studies have investigated hand-target interactions in simple reach-to-target tasks, the whole-brain timing and distribution of this process remain unclear, especially for more complex, instruction-dependent motor strategies. Previously, we showed that a pro/anti pointing instruction influences magnetoencephalographic (MEG) signals in frontal cortex that then propagate recurrently through parietal cortex (Blohm G, Alikhanian H, Gaetz W, Goltz HC, DeSouza JF, Cheyne DO, Crawford JD. NeuroImage 197: 306-319, 2019). Here, we contrasted left versus right hand pointing in the same task to investigate 1) which cortical regions of interest show hand specificity and 2) which of those areas interact with the instructed motor plan. Eight bilateral areas, the parietooccipital junction (POJ), superior parietooccipital cortex (SPOC), supramarginal gyrus (SMG), medial/anterior interparietal sulcus (mIPS/aIPS), primary somatosensory/motor cortex (S1/M1), and dorsal premotor cortex (PMd), showed hand-specific changes in beta band power, with four of these (M1, S1, SMG, aIPS) showing robust activation before movement onset. M1, SMG, SPOC, and aIPS showed significant interactions between contralateral hand specificity and the instructed motor plan but not with bottom-up target signals. Separate hand/motor signals emerged relatively early and lasted through execution, whereas hand-motor interactions only occurred close to movement onset. Taken together with our previous results, these findings show that instruction-dependent motor plans emerge in frontal cortex and interact recurrently with hand-specific parietofrontal signals before movement onset to produce hand-specific motor behaviors.

NEW & NOTEWORTHY: The brain must generate different motor signals depending on which hand is used. The distribution and timing of the integration of hand use with the instructed motor plan are not understood at the whole-brain level. Using MEG, we show that different action-planning subnetworks code for hand usage and for integrating hand use into a hand-specific motor plan. The timing indicates that frontal cortex first creates a general motor plan and then integrates hand specificity to produce a hand-specific motor plan.


Subject(s)
Motor Cortex , Psychomotor Performance , Psychomotor Performance/physiology , Movement/physiology , Hand/physiology , Motor Cortex/physiology , Parietal Lobe/physiology , Brain Mapping
8.
Cereb Cortex Commun ; 3(3): tgac026, 2022.
Article in English | MEDLINE | ID: mdl-35909704

ABSTRACT

Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position were input to the MLP, and a decoder transformed the MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings and actual monkey data where the landmark shift had a partial influence (R² = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric-egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
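The "partial influence" of the landmark shift can be quantified with a one-parameter fit: saccade endpoint = target + w × landmark shift. The sketch below estimates w by closed-form least squares; all names and numbers are hypothetical illustrations (the published model used a trained CNN-MLP network, not this shortcut).

```python
# Minimal sketch (hypothetical data) of estimating the allocentric
# weight w in: saccade_endpoint = target + w * landmark_shift.
# Closed-form least squares for a single slope through the origin.

def allocentric_weight(targets, shifts, endpoints):
    num = sum(s * (e - t) for t, s, e in zip(targets, shifts, endpoints))
    den = sum(s * s for s in shifts)
    return num / den

# Simulated trials: the landmark shift has a partial (w = 0.3) influence.
targets = [0.0, 5.0, -5.0, 10.0]
shifts = [8.0, -8.0, 8.0, -8.0]
endpoints = [t + 0.3 * s for t, s in zip(targets, shifts)]

print(allocentric_weight(targets, shifts, endpoints))  # -> approx. 0.3
```

w = 0 would correspond to a purely egocentric response and w = 1 to full allocentric capture; intermediate values reflect weighted integration of the two cues.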

9.
Eur J Neurosci ; 56(6): 4803-4818, 2022 09.
Article in English | MEDLINE | ID: mdl-35841138

ABSTRACT

The visual cortex has been studied extensively to investigate its role in object recognition, but less so to determine how action planning influences the representation of object features. We used functional MRI and pattern-classification methods to determine whether, during action planning, object features (orientation and location) could be decoded in an action-dependent way. Sixteen human participants used their dominant right hand to perform movements (Align or Open reach) towards one of two real, oriented 3-D objects that were simultaneously presented and placed on either side of a fixation cross. While both movements required aiming towards the target location, Align but not Open reach movements required participants to precisely adjust hand orientation. Therefore, we hypothesized that if the representation of object features is modulated by the upcoming action, pre-movement activity patterns would allow more accurate dissociation between object features in the Align than in the Open reach task. We found such dissociation in the anterior and posterior parietal cortex, as well as in the dorsal premotor cortex, suggesting that visuomotor processing is modulated by the upcoming task. The early visual cortex showed significant decoding accuracy for the dissociation between object features in the Align but not the Open reach task; however, there was no significant difference between the decoding accuracies in the two tasks. These results demonstrate that movement-specific preparatory signals modulate object representation in the frontal and parietal cortex, and to a lesser extent in the early visual cortex, likely through feedback functional connections.
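The pattern-classification step can be illustrated with a minimal nearest-centroid decoder over voxel patterns. Real MVPA pipelines typically use linear classifiers with leave-one-run-out cross-validation; the labels and patterns below are hypothetical, not the study's data.

```python
# Toy sketch of MVPA decoding (all data hypothetical): a
# nearest-centroid classifier trained on voxel patterns from training
# runs and applied to a held-out pattern.

def centroid(patterns):
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def decode(train, test_pattern):
    """train: {label: [pattern, ...]}; returns the nearest-centroid label."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cents = {label: centroid(pats) for label, pats in train.items()}
    return min(cents, key=lambda label: dist2(cents[label], test_pattern))

# Two object orientations, three voxels, two training runs each.
train = {
    "45deg":  [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0]],
    "135deg": [[0.1, 0.9, 0.8], [0.2, 1.0, 0.7]],
}
print(decode(train, [0.8, 0.25, 0.1]))  # -> 45deg
```

Above-chance classification of the held-out patterns is what "significant decoding accuracy" refers to in the abstract.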


Subject(s)
Brain Mapping , Visual Cortex , Brain Mapping/methods , Humans , Magnetic Resonance Imaging/methods , Occipital Lobe , Parietal Lobe , Psychomotor Performance
10.
Sci Rep ; 11(1): 8611, 2021 04 21.
Article in English | MEDLINE | ID: mdl-33883578

ABSTRACT

Previous neuroimaging studies have shown that inferior parietal and ventral occipital cortex are involved in the transsaccadic processing of visual object orientation. Here, we investigated whether the same areas are also involved in transsaccadic processing of a different feature, namely, spatial frequency. We employed a functional magnetic resonance imaging paradigm where participants briefly viewed a grating stimulus with a specific spatial frequency that later reappeared with the same or different frequency, after a saccade or continuous fixation. First, using a whole-brain Saccade > Fixation contrast, we localized two frontal (left precentral sulcus and right medial superior frontal gyrus), four parietal (bilateral superior parietal lobule and precuneus), and four occipital (bilateral cuneus and lingual gyri) regions. Whereas the frontoparietal sites showed task specificity, the occipital sites were also modulated in a saccade control task. Only occipital cortex showed transsaccadic feature modulations, with significant repetition enhancement in right cuneus. These observations (parietal task specificity, occipital enhancement, right lateralization) are consistent with previous transsaccadic studies. However, the specific regions differed (ventrolateral for orientation, dorsomedial for spatial frequency). Overall, this study supports a general role for occipital and parietal cortex in transsaccadic vision, with a specific role for cuneus in spatial frequency processing.


Subject(s)
Occipital Lobe/physiology , Saccades/physiology , Adult , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male , Parietal Lobe/physiology , Young Adult
11.
J Neurosci ; 40(23): 4525-4535, 2020 06 03.
Article in English | MEDLINE | ID: mdl-32354854

ABSTRACT

Coordinated reach-to-grasp movements are often accompanied by rapid eye movements (saccades) that displace the desired object image relative to the retina. Parietal cortex compensates for this by updating reach goals relative to current gaze direction, but its role in the integration of oculomotor and visual orientation signals for updating grasp plans is unknown. Based on a recent perceptual experiment, we hypothesized that inferior parietal cortex (specifically supramarginal gyrus [SMG]) integrates saccade and visual signals to update grasp plans in additional intraparietal/superior parietal regions. To test this hypothesis in humans (7 females, 6 males), we used a functional magnetic resonance paradigm, where saccades sometimes interrupted grasp preparation toward a briefly presented object that later reappeared (with the same/different orientation) just before movement. Right SMG and several parietal grasp regions, namely, left anterior intraparietal sulcus and bilateral superior parietal lobule, met our criteria for transsaccadic orientation integration: they showed task-dependent saccade modulations and, during grasp execution, they were specifically sensitive to changes in object orientation that followed saccades. Finally, SMG showed enhanced functional connectivity with both prefrontal saccade regions (consistent with oculomotor input) and anterior intraparietal sulcus/superior parietal lobule (consistent with sensorimotor output). These results support the general role of parietal cortex for the integration of visuospatial perturbations, and provide specific cortical modules for the integration of oculomotor and visual signals for grasp updating.

SIGNIFICANCE STATEMENT: How does the brain simultaneously compensate for both external and internally driven changes in visual input? For example, how do we grasp an unstable object while eye movements are simultaneously changing its retinal location?
Here, we used fMRI to identify a group of inferior parietal (supramarginal gyrus) and superior parietal (intraparietal and superior parietal) regions that show saccade-specific modulations during unexpected changes in object/grasp orientation, and functional connectivity with frontal cortex saccade centers. This provides a network, complementary to the reach goal updater, that integrates visuospatial updating into grasp plans, and may help to explain some of the more complex symptoms associated with parietal damage, such as constructional ataxia.


Subject(s)
Hand Strength/physiology , Orientation, Spatial/physiology , Parietal Lobe/diagnostic imaging , Parietal Lobe/physiology , Psychomotor Performance/physiology , Saccades/physiology , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male , Photic Stimulation/methods , Young Adult
12.
Cereb Cortex Commun ; 1(1): tgaa042, 2020.
Article in English | MEDLINE | ID: mdl-34296111

ABSTRACT

Previous studies in the macaque monkey have provided clear causal evidence for an involvement of the medial superior temporal area (MST) in the perception of self-motion. These studies also revealed an overrepresentation of contraversive heading. Human imaging studies have identified a functional equivalent (hMST) of macaque area MST, yet causal evidence for a role of hMST in heading perception has been lacking. We employed neuronavigated transcranial magnetic stimulation (TMS) to test for such a causal relationship. We expected TMS over hMST to increase perceptual variance (i.e., impair precision) while leaving mean heading perception (accuracy) unaffected. We presented 8 human participants with an optic-flow stimulus simulating forward self-motion across a ground plane in one of 3 directions, and participants indicated perceived heading. In 57% of the trials, TMS pulses were applied, temporally centered on self-motion onset. The TMS stimulation site was either right-hemisphere hMST, identified by a functional magnetic resonance imaging (fMRI) localizer, or a control area just outside the fMRI localizer activation. As predicted, TMS over hMST, but not over the control area, increased the response variance of perceived heading compared with no-TMS trials. As hypothesized, this effect was strongest for contraversive self-motion. These data provide the first causal evidence for a critical role of hMST in visually guided navigation.

13.
Ann N Y Acad Sci ; 1464(1): 142-155, 2020 03.
Article in English | MEDLINE | ID: mdl-31621922

ABSTRACT

The use of allocentric cues for movement guidance is complex because it involves the integration of visual targets and independent landmarks and the conversion of this information into egocentric commands for action. Here, we focus on the mechanisms for encoding reach targets relative to visual landmarks in humans. First, we consider the behavioral results suggesting that both of these cues influence target memory, but are then transformed - at the first opportunity - into egocentric commands for action. We then consider the cortical mechanisms for these behaviors. We discuss different allocentric versus egocentric mechanisms for coding of target directional selectivity in memory (inferior temporal gyrus versus superior occipital gyrus) and distinguish these mechanisms from parieto-frontal activation for planning egocentric direction of actual reach movements. Then, we consider where and how the former allocentric representations of remembered reach targets are converted into the latter egocentric plans. In particular, our recent neuroimaging study suggests that four areas in the parietal and frontal cortex (right precuneus, bilateral dorsal premotor cortex, and right presupplementary area) participate in this allo-to-ego conversion. Finally, we provide a functional overview describing how and why egocentric and landmark-centered representations are segregated early in the visual system, but then reintegrated in the parieto-frontal cortex for action.


Subject(s)
Memory/physiology , Parietal Lobe/physiology , Psychomotor Performance/physiology , Temporal Lobe/physiology , Humans , Mental Recall/physiology , Orientation/physiology , Reaction Time/physiology , Space Perception/physiology , Visual Perception/physiology
14.
Neurobiol Dis ; 125: 45-54, 2019 05.
Article in English | MEDLINE | ID: mdl-30677494

ABSTRACT

Dystonia is the third most common movement disorder, affecting three million people worldwide, and cervical dystonia is its most common form. Despite this prevalence, the pathophysiology of cervical dystonia remains unclear. The traditional view is that the basal ganglia are involved, whereas contemporary theories suggest roles for the cerebellum and proprioception. It was recently proposed that cervical dystonia results from malfunction of the head neural integrator - the neuronal network that normally converts head velocity into head position. Importantly, the neural integrator model accommodates both the traditional proposal emphasizing the basal ganglia and the contemporary view implicating the cerebellum and proprioception. The hypothesis is that head neural integrator malfunction results from impairment in cerebellar, basal ganglia, or proprioceptive feedback converging onto the integrator. This convergence explains how an abnormality originating anywhere in the network can lead to identical motor deficits - drifts followed by rapid corrective movements, a signature of neural integrator dysfunction. We tested this hypothesis by simultaneously recording globus pallidus single-unit activity, synchronized neural activity (local field potentials), and electromyography (EMG) from the neck muscles during standard-of-care deep brain stimulation surgery in 12 cervical dystonia patients (24 hemispheres). Physiological data were collected during spontaneous activity or during a voluntary shoulder shrug activating the contralateral trapezius muscle.
The activity of pallidal neurons during the shoulder shrug decayed exponentially, with time constants comparable to those measured from the pretectal neural integrator and from the trapezius EMG. These results show that evidence of abnormal neural integration is also seen in the globus pallidus, and that the latter is connected with the neural integrator. Pretectal single-neuron responses consistently preceded muscle activity, whereas globus pallidus internus responses always lagged behind it; the globus pallidus externus had equal proportions of lagging and leading neurons. These results suggest that the globus pallidus receives feedback from the muscles, an efference copy from the integrator, or feedback from another source. There was bihemispheric asymmetry in pallidal single-unit activity and local field potentials, and this asymmetry correlated with the degree of lateral head turning in the cervical dystonia patients. These results suggest that bihemispheric asymmetry in the feedback leads to asymmetric dysfunction of the neural integrator, causing head turning.
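The velocity-to-position neural integrator idea can be sketched as a leaky integrator, dx/dt = -x/tau + v(t): with a long time constant the integrator holds head position after a velocity command, while an impaired (short-tau) integrator lets position drift back exponentially, the drift-and-correction signature described above. All parameters below are hypothetical illustrations.

```python
# Minimal leaky-integrator sketch (hypothetical parameters):
# dx/dt = -x / tau + v(t). A healthy integrator (large tau) holds
# position after a velocity pulse; an impaired one (small tau) drifts
# back exponentially toward zero.

def integrate(tau, v_pulse=100.0, pulse_dur=0.1, t_end=1.0, dt=0.001):
    """Euler-integrate a leaky integrator; returns position at t_end (deg)."""
    x = 0.0
    t = 0.0
    while t < t_end:
        v = v_pulse if t < pulse_dur else 0.0  # brief velocity command
        x += dt * (-x / tau + v)
        t += dt
    return x

healthy = integrate(tau=20.0)   # pulse integrates to ~10 deg, largely held
impaired = integrate(tau=0.15)  # position decays back toward 0 after pulse
print(round(healthy, 2), round(impaired, 3))
```

The exponential decay of the impaired trace, with rate set by tau, is the same functional form as the decaying pallidal responses reported in the abstract.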


Subject(s)
Feedback, Sensory/physiology , Globus Pallidus/physiopathology , Models, Neurological , Torticollis/physiopathology , Adult , Aged , Female , Humans , Male , Middle Aged , Neural Pathways , Young Adult
15.
Eur J Neurosci ; 47(8): 901-917, 2018 04.
Article in English | MEDLINE | ID: mdl-29512943

ABSTRACT

Targets for goal-directed action can be encoded in allocentric coordinates (relative to another visual landmark), but it is not known how these are converted into egocentric commands for action. Here, we investigated this using a slow event-related fMRI paradigm, based on our previous behavioural finding that the allocentric-to-egocentric (Allo-Ego) conversion for reach is performed at the first possible opportunity. Participants were asked to remember (and eventually reach towards) the location of a briefly presented target relative to another visual landmark. After a first memory delay, participants were forewarned by a verbal instruction if the landmark would reappear at the same location (potentially allowing them to plan a reach following the auditory cue before the second delay), or at a different location where they had to wait for the final landmark to be presented before response, and then reach towards the remembered target location. As predicted, participants showed landmark-centred directional selectivity in occipital-temporal cortex during the first memory delay, and only developed egocentric directional selectivity in occipital-parietal cortex during the second delay for the 'Same cue' task, and during response for the 'Different cue' task. We then compared cortical activation between these two tasks at the times when the Allo-Ego conversion occurred, and found common activation in right precuneus, right presupplementary area and bilateral dorsal premotor cortex. These results confirm that the brain converts allocentric codes to egocentric plans at the first possible opportunity, and identify the four most likely candidate sites specific to the Allo-Ego transformation for reaches.


Subject(s)
Cerebral Cortex/physiology , Mental Recall/physiology , Psychomotor Performance/physiology , Space Perception/physiology , Adult , Auditory Perception/physiology , Female , Functional Neuroimaging , Humans , Magnetic Resonance Imaging , Male , Visual Perception/physiology , Young Adult
16.
J Vis ; 17(5): 20, 2017 05 01.
Article in English | MEDLINE | ID: mdl-28558393

ABSTRACT

The relative contributions of egocentric versus allocentric cues to goal-directed behavior have been examined for reaches, but not saccades. Here, we used a cue-conflict task to assess the effect of allocentric landmarks on gaze behavior. Two head-unrestrained macaques maintained central fixation while a target flashed in one of eight radial directions, set against a continuously present visual landmark (two horizontal/vertical lines spanning the visual field, intersecting at one of four oblique locations 11° from the target). After a 100-ms delay followed by a 100-ms mask, the landmark was displaced by 8° in one of eight radial directions. After a second delay (300-700 ms), the fixation point was extinguished, cueing a saccade toward the remembered target. When the landmark was stable, saccades showed a significant but small (mean 15%) pull toward the landmark intersection, and endpoint variability was significantly reduced. When the landmark was displaced, gaze endpoints shifted significantly, not toward the landmark itself, but partially (mean 25%) toward a virtual target displaced like the landmark. The landmark had a larger influence when it was closer to initial fixation, and when it shifted away from the target, especially along the saccade direction. These findings suggest that internal representations of gaze targets are weighted between egocentric and allocentric cues, and that this weighting is further modulated by specific spatial parameters.
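The weighting result above can be expressed as a simple convex combination: the saccade endpoint lies between the egocentric memory of the target and a "virtual" target displaced with the landmark, with an allocentric weight around 0.25. A minimal sketch of that account, assuming 2-D coordinates; the function name and numbers are illustrative, not the authors' model:

```python
# Illustrative sketch (not the authors' fitted model): a weighted-average
# account of the cue-conflict result. With allocentric weight w ~= 0.25,
# the endpoint shifts partway toward a virtual target displaced with the
# landmark.

def weighted_endpoint(ego_target, landmark_shift, w=0.25):
    """Endpoint = (1 - w) * egocentric target + w * virtual target,
    where the virtual target is the egocentric target plus the
    landmark displacement; this simplifies to ego + w * shift."""
    ex, ey = ego_target
    sx, sy = landmark_shift
    return (ex + w * sx, ey + w * sy)

# Landmark displaced 8 degrees rightward: the endpoint shifts ~25% of
# that displacement, landing at 12 degrees instead of the remembered 10.
x, y = weighted_endpoint((10.0, 0.0), (8.0, 0.0), w=0.25)
assert abs(x - 12.0) < 1e-9 and abs(y - 0.0) < 1e-9
```

The abstract's modulation effects (landmark proximity, shift direction) would correspond to making the weight `w` depend on those spatial parameters rather than holding it fixed.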


Subject(s)
Behavior, Animal/physiology; Cues; Fixation, Ocular/physiology; Visual Perception/physiology; Animals; Female; Macaca mulatta; Saccades/physiology
17.
Cereb Cortex ; 27(11): 5242-5260, 2017 11 01.
Article in English | MEDLINE | ID: mdl-27744289

ABSTRACT

The cortical mechanisms for reach have been studied extensively, but directionally selective mechanisms for visuospatial target memory, movement planning, and movement execution have not been clearly differentiated in the human. We used an event-related fMRI design with a visuospatial memory delay, followed by a pro-/anti-reach instruction, a planning delay, and finally a "go" instruction for movement. This sequence yielded temporally separable preparatory responses that expanded from modest parieto-frontal activation for visual target memory to broad occipital-parietal-frontal activation during planning and execution. Using the pro/anti instruction to differentiate visual and motor directional selectivity during planning, we found that one occipital area showed contralateral "visual" selectivity, whereas a broad constellation of left hemisphere occipital, parietal, and frontal areas showed contralateral "movement" selectivity. Temporal analysis of these areas through the entire memory-planning sequence revealed early visual selectivity in most areas, followed by movement selectivity in most areas, with all areas showing a stereotypical visuo-movement transition. Cross-correlation of these spatial parameters through time revealed separate spatiotemporally correlated modules for visual input, motor output, and visuo-movement transformations that spanned occipital, parietal, and frontal cortex. These results demonstrate a highly distributed occipital-parietal-frontal reach network involved in the transformation of retrospective sensory information into prospective movement plans.


Subject(s)
Frontal Lobe/physiology; Hand/physiology; Motor Activity/physiology; Movement/physiology; Occipital Lobe/physiology; Parietal Lobe/physiology; Adult; Brain Mapping; Female; Frontal Lobe/diagnostic imaging; Humans; Magnetic Resonance Imaging; Male; Occipital Lobe/diagnostic imaging; Parietal Lobe/diagnostic imaging; Time Factors; Young Adult
18.
Mem Cognit ; 45(3): 413-427, 2017 04.
Article in English | MEDLINE | ID: mdl-27822732

ABSTRACT

Information maintained in visual working memory (VWM) can be strategically weighted according to its task-relevance. This is typically studied by presenting cues during the maintenance interval, but under natural conditions, the importance of certain aspects of our visual environment is mostly determined by intended actions. We investigated whether representations in VWM are also weighted with respect to their potential action relevance. In a combined memory and movement task, participants memorized a number of items and performed a pointing movement during the maintenance interval. The test item in the memory task was subsequently presented either at the movement goal or at another location. We found that performance was better for test items presented at a location that corresponded to the movement goal than for test items presented at action-irrelevant locations. This effect was sensitive to the number of maintained items, suggesting that preferential maintenance of action relevant information becomes particularly important when the demand on VWM is high. We argue that weighting according to action relevance is mediated by the deployment of spatial attention to action goals, with representations spatially corresponding to the action goal benefitting from this attentional engagement. Performance was also better at locations next to the action goal than at locations farther away, indicating an attentional gradient spreading out from the action goal. We conclude that our actions continue to influence visual processing at the mnemonic level, ensuring preferential maintenance of information that is relevant for current behavioral goals.
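The attentional-gradient interpretation above (best maintenance at the action goal, decaying with distance) can be caricatured as a weighting function over item locations. A purely hypothetical sketch, not a model from the paper; the Gaussian form and all parameters are assumptions for illustration:

```python
# Hypothetical illustration (not from the paper): an attentional gradient
# in visual working memory, where maintenance weight peaks at the
# movement goal and falls off with distance from it.
import math

def maintenance_weight(item_pos, goal_pos, sigma=2.0):
    """Gaussian falloff of maintenance weight with distance from the
    action goal; sigma sets the width of the attentional gradient."""
    d = math.dist(item_pos, goal_pos)
    return math.exp(-(d ** 2) / (2 * sigma ** 2))

goal = (0.0, 0.0)
at_goal = maintenance_weight((0.0, 0.0), goal)   # item at the goal
near = maintenance_weight((1.0, 0.0), goal)      # item next to the goal
far = maintenance_weight((4.0, 0.0), goal)       # item farther away

# Reproduces the qualitative pattern: goal > near > far.
assert at_goal > near > far
```

Under this reading, the reported set-size sensitivity would correspond to the gradient mattering more as total VWM load increases, since fewer resources remain for low-weight items.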


Subject(s)
Attention/physiology; Goals; Memory, Short-Term/physiology; Motor Activity/physiology; Space Perception/physiology; Visual Perception/physiology; Adult; Female; Humans; Male; Young Adult
20.
J Neurophysiol ; 117(2): 624-636, 2017 02 01.
Article in English | MEDLINE | ID: mdl-27832593

ABSTRACT

Skillful interaction with the world requires that the brain uses a multitude of sensorimotor programs and subroutines, such as for reaching, grasping, and the coordination of the two body halves. However, it is unclear how these programs operate together. Networks for reaching, grasping, and bimanual coordination might converge in common brain areas. For example, Brodmann area 7 (BA7) is known to activate in disparate tasks involving the three types of movements separately. Here, we asked whether BA7 plays a key role in integrating coordinated reach-to-grasp movements for both arms together. To test this, we applied transcranial magnetic stimulation (TMS) to disrupt BA7 activity in the left and right hemispheres, while human participants performed a bimanual size-perturbation grasping task using the index and middle fingers of both hands to grasp a rectangular object whose orientation (and thus grasp-relevant width dimension) might or might not change. We found that TMS of the right BA7 during object perturbation disrupted the bimanual grasp and transport/coordination components, and TMS over the left BA7 disrupted unimanual grasps. These results show that right BA7 is causally involved in the integration of reach-to-grasp movements of the two arms. NEW & NOTEWORTHY: Our manuscript describes a role of human Brodmann area 7 (BA7) in the integration of multiple visuomotor programs for reaching, grasping, and bimanual coordination. Our results are the first to suggest that right BA7 is critically involved in the coordination of reach-to-grasp movements of the two arms. The results complement previous reports of right-hemisphere lateralization for bimanual grasps.


Subject(s)
Brain Mapping; Hand Strength/physiology; Parietal Lobe/physiology; Psychomotor Performance/physiology; Range of Motion, Articular/physiology; Adult; Analysis of Variance; Female; Functional Laterality; Humans; Kinetics; Male; Movement; Transcranial Magnetic Stimulation; Young Adult